A Machine Learning Perspective On Predictive Coding With PAQ
Abstract
PAQ8 is an open source lossless data compression algorithm that currently achieves the best
compression rates on many benchmarks. This report presents a detailed description of PAQ8
from a statistical machine learning perspective. It shows that it is possible to understand some
of the modules of PAQ8 and use this understanding to improve the method. However, intuitive
statistical explanations of the behavior of other modules remain elusive. We hope the description
in this report will be a starting point for discussions that will increase our understanding, lead
to improvements to PAQ8, and facilitate a transfer of knowledge from PAQ8 to other machine
learning methods, such as recurrent neural networks and stochastic memoizers. Finally, the report
presents a broad range of new applications of PAQ to machine learning tasks including language
modeling and adaptive text prediction, adaptive game playing, classification, and compression
using features from the field of deep learning.
1 Introduction
Detecting temporal patterns and predicting into the future is a fundamental problem in machine
learning. It has gained great interest recently in the areas of nonparametric Bayesian statistics
(Wood et al., 2009) and deep learning (Sutskever et al., 2011), with applications to several domains
including language modeling and unsupervised learning of audio and video sequences. Some re-
searchers have argued that sequence prediction is key to understanding human intelligence (Hawkins
and Blakeslee, 2005).
The close connections between sequence prediction and data compression are perhaps under-
appreciated within the machine learning community. The goal of this report is to describe a
state-of-the-art compression method called PAQ8 (Mahoney, 2005) from the perspective of ma-
chine learning. We show both how PAQ8 makes use of several simple, well known machine learning
models and algorithms, and how it can be improved by exchanging these components for more
sophisticated models and algorithms.
PAQ is a family of open-source compression algorithms closely related to the better known
Prediction by Partial Matching (PPM) algorithm (Cleary and Witten, 1984). PPM-based data
compression methods dominated many of the compression benchmarks (in terms of compression
ratio) in the 1990s, but have since been eclipsed by PAQ-based methods. Compression algorithms
typically need to make a trade-off between compression ratio, speed, and memory usage. PAQ8
is a version of PAQ which achieves record breaking compression ratios at the expense of increased
Table 1: Comparison of cross entropy rates of several compression algorithms on the Calgary corpus
files. The cross entropy rate metric is defined in Section 2.3.
time and memory usage. For example, all of the winning submissions in the Hutter Prize (Hutter,
accessed April 15, 2011), a contest to losslessly compress the first 100 MB (10^8 bytes) of Wikipedia,
have been specialized versions of PAQ8. Dozens of variations on the basic PAQ8 method can be
found on the web: http://cs.fit.edu/~mmahoney/compression/paq.html. As stated on the Hutter
Prize website, “This compression contest is motivated by the fact that being able to compress well
is closely related to acting intelligently, thus reducing the slippery concept of intelligence to hard
file size numbers.”
The stochastic sequence memoizer (Gasthaus et al., 2010) is a language modeling technique
recently developed in the field of Bayesian nonparametrics. Table 1 shows a comparison of sev-
eral compression algorithms on the Calgary corpus (Bell et al., 1990), a widely-used compression
benchmark. A summary of the Calgary corpus files appears in Table 2. PPM-test is our own PPM
implementation used for testing different compression techniques. PPM*C is a PPM implementa-
tion that was state of the art in 1995 (Cleary et al., 1995). 1PF and UKN are implementations of
the stochastic sequence memoizer (Gasthaus et al., 2010). cPPMII-64 (Shkarin, 2002) is currently
among the best PPM implementations. paq8l outperforms all of these compression algorithms by
what is considered to be a very large margin in this benchmark.
Despite the huge success of PAQ8, it is rarely mentioned or compared against in machine
learning papers. There are reasons for this. A core difficulty is the lack of scientific publications on
the inner-workings of PAQ8. To the best of our knowledge, there exist only incomplete high-level
descriptions of PAQ1 through 6 (Mahoney, 2005) and PAQ8 (Mahoney, accessed April 15, 2011).
The C++ source code, although available, is heavily optimized and close to unreadable, so the
underlying algorithms are difficult to extract. Many of the architectural details of PAQ8
in this report were understood by examining the source code and are presented here for the first
time.
Table 2: File size and description of Calgary corpus files.
bib 111,261 ASCII text in UNIX “refer” format - 725 bibliographic references.
book1 768,771 unformatted ASCII text - “Far from the Madding Crowd”
book2 610,856 ASCII text in UNIX “troff” format - “Principles of Computer Speech”
geo 102,400 32 bit numbers in IBM floating point format - seismic data.
news 377,109 ASCII text - USENET batch file on a variety of topics.
obj1 21,504 VAX executable program - compilation of PROGP.
obj2 246,814 Macintosh executable program - “Knowledge Support System”.
paper1 53,161 “troff” format - Arithmetic Coding for Data Compression.
paper2 82,199 “troff” format - Computer (in)security.
pic 513,216 1728 x 2376 bitmap image (MSB first).
progc 39,611 Source code in C - UNIX compress v4.0.
progl 71,646 Source code in Lisp - system software.
progp 49,379 Source code in Pascal - program to evaluate PPM compression.
trans 93,695 ASCII and control characters - transcript of a terminal session.
1.1 Contributions
We provide a detailed explanation of how PAQ8 works. We believe this contribution will be of great
value to the machine learning community. An understanding of PAQ8 could lead to the design of
better algorithms. As stated in (Mahoney, accessed April 15, 2011), PAQ was inspired by research
in neural networks: “Schmidhuber and Heil (1996) developed an experimental neural network data
compressor. It used a 3 layer network trained by back propagation to predict characters from an
80 character alphabet in text. It used separate training and prediction phases. Compressing 10
KB of text required several days of computation on an HP 700 workstation.” In 2000, Mahoney
made several improvements that made neural network compression practical. His new algorithm
ran 10^5 times faster. PAQ8 uses techniques (e.g. dynamic ensembles) that could lead to advances
in machine learning.
As a second contribution, we demonstrate that an understanding of PAQ8 enables us to deploy
machine learning techniques to achieve better compression rates. Specifically we show that a second
order adaptation scheme, the extended Kalman filter (EKF), results in improvements over PAQ8’s
first order adaptation scheme.
A third contribution is to present several novel applications of PAQ8. First, we demonstrate
how PAQ8 can be applied to adaptive text prediction and game playing. Both of these tasks
have been tackled before using other compression algorithms. Second, we show for the first time
that PAQ8 can be adapted for classification. Previous works have explored using compression
algorithms, such as RAR and ZIP, for classification (Marton et al., 2005). We show that our
proposed classifier, PAQclass, can outperform these techniques on a text classification task. We
also show that PAQclass achieves near state-of-the-art results on a shape recognition task. Finally,
we develop a lossy image compression algorithm by combining PAQ8 with recently developed
unsupervised feature learning techniques.
1.2 Organization of this Report
In Section 2 we provide general background information about the problem of lossless data com-
pression, including a description of arithmetic coding and PPM. In Section 3 we present a detailed
explanation of how PAQ8 works. This section also includes a description of our improvement to the
compression rate of PAQ8 using EKF. We present several novel applications of PAQ8 in Section 4.
Section 5 contains our conclusions and possible future work. Appendix A contains information on
how to access demonstration programs we created using PAQ8.
Figure 1: An example of arithmetic coding. The alphabet consists of three characters: A, B, and
C. The string being encoded is “ABA”.
For the first character, suppose the predictor assigns a uniform distribution, so 'A' corresponds to
the region 0 to 1/3. For the second character, suppose the predictor assigns a probability of 0.5 to A,
0.25 to B, and 0.25 to C. Since the first character in the sequence
was A, the arithmetic encoder expands this region (0 to 1/3) and assigns the second predicted
probability distribution according to this expanded region. This is visualized in the middle layer of
Figure 1. For the final character in the sequence, assume that the predictor assigns a probability of
0.5 to A, 0.4 to B, and 0.1 to C. This is visualized in the bottom of Figure 1. Now all the arithmetic
coder needs to do is store a single number between the values of 1/6 and 5/24. This number can
be efficiently encoded using a binary search. The binary search ranges would be: “0 to 1”, “0 to
0.5”, “0 to 0.25”, and finally “0.125 to 0.25”. This represents the number 0.1875 (which falls in the
desired range). If we use “0” to encode the decision to use the lower half and “1” to encode the
decision to use the upper half, this sequence can be represented in binary as “001”.
Now consider the task of decoding the file. As input, the arithmetic decoder has the number
0.1875, and a sequence of predicted probability distributions. For the first character, the predic-
tor gives a uniform probability distribution. The number 0.1875 falls into the ‘A’ sector, so the
arithmetic decoder tells the predictor that the first character was ‘A’. Similarly, for the next two
characters the arithmetic decoder knows that the characters must be ‘B’ and ‘A’. At this point, the
arithmetic decoder needs some way to know that it has reached the end of the sequence. There are
typically two techniques that are used to communicate the length of the sequence to the decoder.
The first is to encode a special “end of sequence” character, so that when the decoder reaches
this character it knows it reached the end of the string. The second technique is to just store an
additional integer along with the compressed file which represents the length of the sequence (this
is usually more efficient in practice).
Although arithmetic coding can achieve optimal compression in theory, in practice there are
two factors which prevent this. The first is the fact that files can only be stored to disk using a
sequence of bytes, so this requires some overhead in comparison to storing the optimal number of
bits. The second is the fact that precision limitations of floating point numbers prevent optimal
encodings. In practice both of these factors result in relatively small overhead, so arithmetic coding
still produces near-optimal encodings.
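The encoding procedure above can be made concrete with a short sketch. The toy predictor below simply
reproduces the three distributions from the worked example; in a real compressor the predictor would be
adaptive, and production arithmetic coders use integer arithmetic rather than floating point.

def interval_for(sequence, predict):
    # Narrow [0, 1) using the predicted distribution issued before each symbol.
    low, high = 0.0, 1.0
    for i, symbol in enumerate(sequence):
        probs = predict(sequence[:i])
        span = high - low
        cum = 0.0
        for sym in sorted(probs):              # fixed symbol order
            if sym == symbol:
                low, high = low + span * cum, low + span * (cum + probs[sym])
                break
            cum += probs[sym]
    return low, high

def encode(sequence, predict):
    # Binary-search for a point inside the final interval: "0" = lower half, "1" = upper half.
    low, high = interval_for(sequence, predict)
    bits, lo, hi = "", 0.0, 1.0
    while True:
        mid = (lo + hi) / 2
        if low <= mid < high:
            return bits, mid
        if mid < low:
            bits, lo = bits + "1", mid
        else:
            bits, hi = bits + "0", mid

def toy_predict(history):
    # Fixed predictions matching the worked example above.
    if len(history) == 0:
        return {"A": 1 / 3, "B": 1 / 3, "C": 1 / 3}
    if len(history) == 1:
        return {"A": 0.5, "B": 0.25, "C": 0.25}
    return {"A": 0.5, "B": 0.4, "C": 0.1}

print(encode("ABA", toy_predict))              # ('001', 0.1875), which lies in [1/6, 5/24)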
2.2 PPM
PPM (Cleary and Witten, 1984) is a lossless compression algorithm which consistently performs
Table 3: PPM model after processing the string “abracadabra” (up to the second order model).
This table is a recreation of a table from (Cleary et al., 1995).
ra → c    (c = 1, p = 1/2)
ra → Esc  (c = 1, p = 1/2)
well on text compression benchmarks. It creates predicted probability distributions based on the
history of characters in a sequence using a technique called context matching.
Consider the alphabet of lower case English characters and the input sequence “abracadabra”.
For each character in this string, PPM needs to create a probability distribution representing how
likely the character is to occur. For the first character in the sequence, there is no prior information
about what character is likely to occur, so assigning a uniform distribution is the optimal strategy.
For the second character in the sequence, ‘a’ can be assigned a slightly higher probability because
it has been observed once in the input history. Consider the task of predicting the character after
the entire sequence. One way to go about this prediction is to find the longest match in the input
history which matches the most recent input. In this case, the longest match is “abra” which
occurs in the first and eighth positions. Based on the longest match, a good prediction for the next
character in the sequence is simply the character immediately after the match in the input history.
After the string “abra” was the character ‘c’ in the fifth position. Therefore ‘c’ is a good prediction
for the next character. Longer context matches can result in better predictions than shorter ones.
This is because longer matches are less likely to occur by chance or due to noise in the data.
PPM essentially creates probability distributions according to the method described above.
Instead of generating the probability distribution entirely based on the longest context match, it
blends the predictions of multiple context lengths and assigns a higher weight to longer matches.
There are various techniques on how to go about blending different context lengths. The strategy
used for combining different context lengths is partially responsible for the performance differences
between various PPM implementations.
One example of a technique used to generate the predicted probabilities is shown in Table 3
(Cleary et al., 1995). The table shows the state of the model after the string “abracadabra” has
been processed. 'k' is the order of the context match, 'c' is the occurrence count for the context,
and ‘p’ is the computed probability. ‘Esc’ refers to the event of an unexpected character and causes
the algorithm to use a lower order model (weighted by the probability of the escape event). Note
that the lowest order model (-1) has no escape event since it matches any possible character in the
alphabet A.
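The following sketch implements this escape-based blending for a small maximum order, using escape
method C and ignoring the exclusion heuristics that real PPM implementations add; it is meant only to
illustrate how shorter contexts receive the probability mass left over by longer ones.

from collections import defaultdict

ALPHABET = "abcdefghijklmnopqrstuvwxyz"
MAX_ORDER = 2

counts = defaultdict(lambda: defaultdict(int))     # context -> next character -> count

def update(history, char):
    # Record `char` as a continuation of every suffix of `history` up to MAX_ORDER.
    for k in range(MAX_ORDER + 1):
        if len(history) >= k:
            counts[history[len(history) - k:]][char] += 1

def predict(history):
    probs = {c: 0.0 for c in ALPHABET}
    weight = 1.0                                   # probability mass passed down by escapes
    for k in range(min(MAX_ORDER, len(history)), -1, -1):
        ctx = history[len(history) - k:]
        seen = counts[ctx]
        total, distinct = sum(seen.values()), len(seen)
        if total == 0:
            continue
        escape = distinct / (total + distinct)     # escape method C estimate
        for c, n in seen.items():
            probs[c] += weight * n / (total + distinct)
        weight *= escape
    for c in ALPHABET:                             # order -1 model: uniform fallback
        probs[c] += weight / len(ALPHABET)
    return probs

history = ""
for ch in "abracadabra":
    update(history, ch)
    history += ch
p = predict(history)
print(max(p, key=p.get))                           # 'c': the context "ra" was previously followed by 'c'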
PPM is a nonparametric model that adaptively changes based on the data it is compressing.
It is not surprising that similar methods have been discovered in the field of Bayesian nonpara-
metrics. The stochastic memoizer (Wood et al., 2009) is a nonparametric model based on an
unbounded-depth hierarchical Pitman-Yor process. The stochastic memoizer shares several simi-
larities with PPM implementations. The compression performance of the stochastic memoizer is
currently comparable with some of the best PPM implementations.
Figure 2: The fourth and fifth iteration of the Hilbert curve construction. Image courtesy of
Zbigniew Fiedorowicz.
One simple mapping is a raster scan (i.e. scanning rows from top to bottom and pixels in each row
from left to right). Another mapping, known as the Hilbert curve (Hilbert, 1891), better preserves
spatial locality (shown in Figure 2).
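As an illustration of such a mapping, the following routine (a standard construction, not code taken
from PAQ8) converts an index along the Hilbert curve into pixel coordinates for a 2^k × 2^k image, so
that consecutive positions in the resulting one-dimensional sequence remain close in the image.

def hilbert_d2xy(n, d):
    # Map index d (0 <= d < n*n) on the Hilbert curve to pixel (x, y); n is a power of 2.
    x = y = 0
    s, t = 1, d
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                      # rotate the quadrant when needed
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

# Scan an 8x8 image in Hilbert order instead of raster order.
order = [hilbert_d2xy(8, d) for d in range(64)]
print(order[:6])   # [(0, 0), (0, 1), (1, 1), (1, 0), (2, 0), (3, 0)] -- neighbours stay close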
algorithms without modifying their source code. Fortunately, PAQ8 is open source and these
modifications can be easily implemented (so dc can be used). For the purposes of classification,
we investigated defining our own distance metrics. Using cross entropy, a more computationally
efficient distance metric can be defined which requires only one pass through the data: $E(x|y)$,
the cross entropy of x after the compressor has been trained on y. We also investigated
a symmetric version of this distance metric, in which $d_{e2}(x, y)$ is always equal to $d_{e2}(y, x)$:
$$d_{e2}(x, y) = \frac{E(x|y) + E(y|x)}{2}$$
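A sketch of this symmetric distance, written against a hypothetical cross_entropy(x, given=y) hook that
would train the compressor on y and report the coding cost of x; in practice this quantity would be
obtained by instrumenting a compressor such as paq8l.

def cross_entropy(x: bytes, given: bytes) -> float:
    # Hypothetical hook: train the compressor on `given`, then return the cross
    # entropy (coding cost per character) it assigns to `x`.
    raise NotImplementedError

def d_e2(x: bytes, y: bytes) -> float:
    # Symmetric distance: the average of E(x|y) and E(y|x).
    return 0.5 * (cross_entropy(x, given=y) + cross_entropy(y, given=x))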
Finally, Cilibrasi and Vitanyi (2005) propose using the normalized compression distance (NCD) as a
distance metric.
3 PAQ8
3.1 Architecture
PAQ8 uses a weighted combination of predictions from a large number of models. Most of the
models are based on context matching. Unlike PPM, some of the models allow noncontiguous
context matches, which improves robustness to noise in comparison to PPM.
This also enables PAQ8 to capture longer-term dependencies. Some of the models are specialized
for particular types of data such as images or spreadsheets. Most PPM implementations make
predictions on the byte-level (given a sequence of bytes, they predict the next byte). However, all
of the models used by PAQ8 make predictions on the bit-level.
Some architectural details of PAQ8 depend on the version used. Even for a particular version
of PAQ8, the algorithm changes based on the type of data detected. For example, fewer predic-
tion models are used when image data is detected. We will provide a high-level overview of the
architecture used by paq8l in the general case of when the file type is not recognized. paq8l is a
stable version of PAQ8 released by Matt Mahoney in March 2007. The PAQ8 versions submitted
to the Hutter prize include additional language modeling components not present in paq8l such as
dictionary preprocessing and word-level modeling.
An overview of the paq8l architecture is shown in Figure 3. A total of 552 prediction models are used. The
model mixer combines the output of the 552 predictors into a single prediction. This prediction
is then passed through an adaptive probability map (APM) before it is used by the arithmetic
coder. In practice, APMs typically reduce prediction error by about 1%. APMs are also known as
secondary symbol estimation (Mahoney, accessed April 15, 2011). APMs were originally developed
Figure 3: PAQ8 architecture.
by Serge Osnach for PAQ2. An APM is a two dimensional table which takes the model mixer
prediction and a low order context as inputs and outputs a new prediction on a nonlinear scale
(with finer resolution near 0 and 1). The table entries are adjusted according to prediction error
after each bit is coded.
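The following is a small APM in the spirit of the description above: a table indexed by a low-order
context and by the incoming prediction quantized on a logit ("stretched") scale, with interpolation
between the two nearest bins and an update of those bins toward the coded bit. The bin count, logit
range, and learning rate are illustrative choices rather than the exact paq8l constants.

import math

class APM:
    def __init__(self, num_contexts, num_bins=33, rate=0.02):
        self.num_bins, self.rate = num_bins, rate
        # Initialise every row to the identity map so the APM is harmless at first.
        self.table = [[self._bin_to_p(b) for b in range(num_bins)]
                      for _ in range(num_contexts)]
        self.last = None                            # (context, bin, fraction) of the last call

    def _bin_to_p(self, b):
        logit = -8.0 + 16.0 * b / (self.num_bins - 1)
        return 1.0 / (1.0 + math.exp(-logit))

    def refine(self, p, context):
        p = min(max(p, 1e-6), 1 - 1e-6)
        pos = (math.log(p / (1 - p)) + 8.0) / 16.0 * (self.num_bins - 1)
        b = int(min(max(pos, 0), self.num_bins - 2))
        frac = min(max(pos - b, 0.0), 1.0)
        self.last = (context, b, frac)
        row = self.table[context]
        return (1 - frac) * row[b] + frac * row[b + 1]   # interpolate two neighbouring bins

    def update(self, bit):
        # Adjust the two touched entries toward the bit that was actually coded.
        context, b, frac = self.last
        row = self.table[context]
        row[b] += self.rate * (1 - frac) * (bit - row[b])
        row[b + 1] += self.rate * frac * (bit - row[b + 1])

apm = APM(num_contexts=256)
p = apm.refine(0.9, context=42)    # refined prediction handed to the arithmetic coder
apm.update(1)                      # update after the actual bit is revealed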
The second major difference between the model mixer and a standard neural network is the
fact that the hidden nodes are partitioned into seven sets. For every bit of the data file, one node
is selected from each set. The set sizes are shown in the rectangles of Figure 4. We refer to the
leftmost rectangle as set 1 and the rightmost rectangle as set 7. Only the edges connected to these
seven selected nodes are updated for each bit of the data. That means of the 552×3,080 = 1,700,160
weights in the first layer, only 552×7 = 3,864 of the weights are updated for each bit. This makes
training the neural network several orders of magnitude faster.
1 We refer to "non-stationary data" as data in which the statistics change over time. For example, we
would consider a novel to be non-stationary and a text document consisting of a repeating string (e.g.
"abababab...") to be stationary.
Algorithm 1 paq8l node selection mechanism.
set1Index ← 8 + history(1)
set2Index ← history(0)
set3Index ← lowOrderMatches + 8 × ((lastFourBytes/32) mod 8)
if history(1) = history(2) then
  set3Index ← set3Index + 64
end if
set4Index ← history(2)
set5Index ← history(3)
set6Index ← round(log2(longestMatch) × 16)
if bitPosition = 0 then
  set7Index ← history(3)/128 + bitMask(history(1), 240) + 4 × (history(2)/64) + 2 × (lastFourBytes / 2^31)
else
  set7Index ← history(0) × 2^(8−bitPosition)
  if bitPosition = 1 then
    set7Index ← set7Index + history(3)/2
  end if
  set7Index ← min(bitPosition, 5) × 256 + history(1)/32 + 8 × (history(2)/32) + bitMask(set7Index, 192)
end if
Each set uses a different selection mechanism to choose a node. Sets number 1, 2, 4, and 5
choose the node index based on a single byte in the input history. For example, if the byte for set
1 has a value of 4, the fifth node of set 1 would be selected. Set 1 uses the second most recent byte
from the input history, set 2 uses the most recent byte, set 4 uses the third most recent byte, and
set 5 uses the fourth most recent byte. Set 6 chooses the node based on the length of the longest
context matched with the most recent input. Sets 3 and 7 use a combination of several bytes of the
input history in order to choose a node index. The selection mechanism used by paq8l is shown in
Algorithm 1. history(i) returns the i’th most recent byte, lowOrderMatches is the number of low-
order contexts which have been observed at least once before (between 0 and 7), lastFourBytes
is the four most recent bytes, longestMatch is the length of the longest context match (between 0
and 65534), bitMask(x, y) does a bitwise AND operation between x and y, and bitPosition is the
bit index of the current byte (between 0 and 7).
Figure 5: Mixtures of experts architecture. This figure is a recreation of a figure in (Jacobs et al.,
1991). All of the experts are feedforward networks and have the same input. The gating network
acts as a switch to select a single expert. The output of the selected expert becomes the output of
the system. Only the weights of the selected expert are trained.
be trained on each subset. Using a gating mechanism has the additional computational benefit
that only one expert needs to be trained at a time, instead of training all experts simultaneously.
Increasing the number of expert networks does not increase the time complexity of the algorithm.
One difference between the mixtures of experts model and the PAQ8 model mixture is the gating
mechanism. Jacobs et al use a feedforward network to learn the gating mechanism, while PAQ8 uses
a deterministic algorithm (shown in Algorithm 1) which does not perform adaptive learning. The
gating algorithm used by PAQ8 contains problem-specific knowledge which is specified a priori. One
interesting area for future work would be to investigate the effect of adaptive learning in the PAQ8
gating mechanism. Adaptive learning could potentially lead to a better distribution of the data
to each expert. Ideally, the data should be uniformly partitioned across all the experts. However,
using a deterministic gating mechanism runs the risk of a particular expert being selected too often.
The gating mechanism in PAQ is governed by the values of the input data. The idea of gating
units in a network according to the value of the input has also been used in other recurrent neural
network architectures. For example, in long short-term memory (LSTM), it is used to keep
hidden units switched on and hence avoid the problem of vanishing gradients in back-propagation,
see e.g. Graves et al. (2009). However, the deterministic gating mechanism of PAQ is not intended
to improve prediction performance or avoid vanishing gradients. Rather, its objective is to
vastly reduce computation. We recommend that researchers working with RNNs, which can take days
to train, investigate ways of incorporating these ideas to speed up training. Adaptive (online)
training, rather than training once and then fixing the weights, is another important aspect to keep in mind.
There is a direct mapping between the mixtures of experts architecture and the PAQ8 model
mixer architecture. Each “set” in the hidden layer of Figure 4 corresponds to a separate mixture of
experts model. The seven mixtures of experts are then combined using an additional feedforward
layer. The number of experts in each mixtures of experts model corresponds to the number of nodes
in each set (e.g. there are 264 experts in set 1 and 1536 experts in set 7). As with the mixtures
of experts architecture, only the weights of the expert chosen by the gating mechanism are trained
for each bit of the data. Another difference between the standard mixtures of experts model and
PAQ8 is the fact that mixtures of experts models typically are optimized to converge towards a
stationary objective function while PAQ8 is designed to adaptively train on both stationary and
non-stationary data.
The output of each layer of the model mixer is a logistic regression prediction, $\mathrm{sigm}(w^T x_t)$,
where $w \in \mathbb{R}^{n_p}$ is the vector of weights, $x_t \in [0, 1]^{n_p}$ is the vector of predictors at time t, $y_t \in \{0, 1\}$
is the next bit in the data being compressed, and $\mathrm{sigm}(\eta) = 1/(1 + e^{-\eta})$ is the sigmoid or logistic
function. $n_p$ is the number of predictors and is equal to 552 for the first layer of the neural network
and 7 for the second layer of the network. Let $\pi_t = \mathrm{sigm}(w^T x_t)$. The negative log-likelihood of the
t-th bit is given by
$$NLL(w) = -\log\left[\pi_t^{I(y_t=1)} (1 - \pi_t)^{I(y_t=0)}\right] = -\left[y_t \log \pi_t + (1 - y_t) \log(1 - \pi_t)\right]$$
where I(·) denotes the indicator function. The last expression is the cross-entropy error (also known
as coding error) function term at time t. The logistic regression weights are updated online with
first order (gradient) updates, $w_{t+1} = w_t + \eta\,(y_t - \pi_t)\,x_t$, where $\eta$ is the learning rate.
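A compact sketch of a gated first-layer mixer with this sparse first order update follows. The sizes,
learning rate, and initialisation are placeholders; paq8l additionally uses fixed-point arithmetic and
other implementation details omitted here.

import numpy as np

def sigm(eta):
    return 1.0 / (1.0 + np.exp(-eta))

class GatedMixer:
    def __init__(self, n_predictors=552, n_nodes=3080, lr=0.01):
        self.W = np.zeros((n_nodes, n_predictors))   # first-layer weights
        self.lr = lr

    def mix(self, x, selected):
        # x: predictor outputs in [0, 1]; selected: one node index chosen per set.
        self.x, self.selected = x, selected
        self.p = sigm(self.W[selected] @ x)          # one output per selected node
        return self.p                                # these feed the second layer

    def update(self, bit):
        # First order step on the coding error, applied only to the rows of W
        # belonging to the selected nodes (the sparse update described above).
        err = bit - self.p
        self.W[self.selected] += self.lr * np.outer(err, self.x)

mixer = GatedMixer()
x = np.random.rand(552)                              # stand-in for the 552 model predictions
selected = [12, 300, 450, 900, 1400, 2000, 3000]     # one node chosen from each of the seven sets
p = mixer.mix(x, selected)
mixer.update(1)                                      # after the actual bit is revealed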
As a second order alternative, we use an extended Kalman filter (EKF) with process noise covariance Q
and measurement noise variance r = 5. The following are the EKF update equations for each bit of data:
$$w_{t+1|t} = w_t$$
$$P_{t+1|t} = P_t + Q$$
$$K_{t+1} = \frac{P_{t+1|t}\, G'_{t+1}}{r + G_{t+1} P_{t+1|t} G'_{t+1}}$$
$$w_{t+1} = w_{t+1|t} + K_{t+1}\,(y_t - \pi_t)$$
$$P_{t+1} = P_{t+1|t} - K_{t+1} G_{t+1} P_{t+1|t},$$
where $G \in \mathbb{R}^{1 \times 7}$ is the Jacobian matrix $G = [\partial y/\partial w_1 \cdots \partial y/\partial w_7]$ with $\partial y/\partial w_i = y(1 - y) x_i$. We
compared the performance of EKF with other variants of paq8l. The results are shown in Table 4.
The first three columns are paq8l with different settings of the level parameter. level is the only
paq8l parameter that can be changed via command-line (without modifying the source code). It
makes a tradeoff between speed, memory usage, and compression performance. It can be set to
an integer value between zero and eight. Lower values of level are faster and use less memory but
achieve worse compression performance. level = 8 is the slowest setting and uses the most memory
(up to 1643 MiB) but achieves the best compression performance. level = 5 has a 233 MiB memory
limit. paq8-8-tuned is a customized version of paq8l (with level = 8) in which we changed the
value of the weight initialization for the second layer of the neural network. We found changing the
initialization value from 32,767 to 128 improved compression performance. Finally, paq8-8-ekf
refers to our modified version of paq8l with EKF used to update the weights in the second layer
of the neural network. We find that using EKF slightly outperforms the first order updates. The
improvement is about the same order of magnitude as the improvement between level = 5 and
level = 8. However, changing level has a significant cost in memory usage, while using EKF has no
significant computational cost. The initialization values for paq8-8-tuned and paq8-8-ekf were
determined using manual parameter tuning on the first Calgary corpus file (‘bib’). The performance
difference between paq8l-8 and paq8-8-tuned is similar to the difference between paq8-8-tuned
and paq8-8-ekf.
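For completeness, here is a sketch of the second-layer EKF update implementing the equations above;
r = 5 follows the text, while Q, the initial covariance, and the input values are illustrative.

import numpy as np

def sigm(eta):
    return 1.0 / (1.0 + np.exp(-eta))

def ekf_step(w, P, x, bit, Q, r=5.0):
    # One EKF update. w: (7,) weights, P: (7,7) covariance, x: (7,) inputs in [0, 1].
    y = sigm(w @ x)                        # current prediction pi_t
    G = (y * (1.0 - y) * x)[None, :]       # 1x7 Jacobian dy/dw
    P_pred = P + Q                         # time update (random-walk weight model)
    S = r + float(G @ P_pred @ G.T)        # innovation variance (scalar)
    K = (P_pred @ G.T) / S                 # 7x1 Kalman gain
    w_new = w + K[:, 0] * (bit - y)        # measurement update
    P_new = P_pred - K @ G @ P_pred
    return w_new, P_new

w, P = np.zeros(7), np.eye(7)
Q = 1e-4 * np.eye(7)                       # illustrative process noise
x = np.random.rand(7)                      # outputs of the seven selected nodes
w, P = ekf_step(w, P, x, bit=1, Q=Q)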
4 Applications
4.1 Adaptive Text Prediction and Game Playing
The fact that PAQ8 achieves state of the art compression results on text documents indicates that
it can be used as a powerful model for natural language. PAQ8 can be used to find the string x
that maximizes p(x|y) for some training string y. It can also be used to estimate the probability
p(z|y) of a particular string z given some training string y. Both of these tasks are useful for several
natural language applications. For example, many speech recognition systems are composed of an
acoustic modeling component and a language modeling component. PAQ8 could be used to directly
replace the language modeling component of any existing speech recognition system to achieve more
accurate word predictions.
Text prediction can be used to minimize the number of keystrokes required to type a particular
string (Garay-Vitoria and Abascal, 2006). These predictions can be used to improve the commu-
nication rate for people with disabilities and for people using slow input devices (such as mobile
phones). We modified the source code of paq8l to create a program which predicts the next n
Table 4: PAQ8 compression rates on the Calgary corpus
characters while the user is typing a string. A new prediction is created after each input character
is typed. It uses fork() after each input character to create a process which generates the most
likely next n characters. fork() is a system call on Unix-like operating systems which creates an
exact copy of an existing process. The program can also be given a set of files to train on.
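A sketch of the prediction loop follows, with copy.deepcopy standing in for fork(): the copied model
keeps adapting while it speculatively extends the text, leaving the original model untouched for the
user's real input. The predict/update interface is hypothetical; paq8l exposes the equivalent
functionality at the bit level.

import copy

def most_likely_continuation(model, history, n):
    rollout = copy.deepcopy(model)        # plays the role of fork() in the modified paq8l
    text, out = history, ""
    for _ in range(n):
        probs = rollout.predict(text)     # distribution over the next character
        ch = max(probs, key=probs.get)    # greedy: take the single most likely character
        rollout.update(text, ch)          # keep learning online during the rollout
        text += ch
        out += ch
    return out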
Some preliminary observational studies on our text prediction system are shown in Figure 6.
Note that PAQ8 continuously does online learning, even while making a prediction (as seen by
the completion of “Byron Knoll” in the top example). The character predictions do capture some
syntactic structures (as seen by completion of LaTeX syntax) and even some semantic information
as implied by the training text.
Sutskever et al. (2011) use Recurrent Neural Networks (RNNs) to perform text
prediction. They also compare RNNs to the sequence memoizer and PAQ8 in terms of compression
rate. They conclude that RNNs achieve better compression than the sequence memoizer but worse
than PAQ8. They perform several text prediction tasks using RNNs with different training sets
(similar to the examples in Figure 6). One difference between their method and ours is the fact
that PAQ8 continuously does online learning on the test data. This feature could be beneficial for
text prediction applications because it allows the system to adapt to new users and data that does
not appear in the training set.
We found that the PAQ8 text prediction program could be modified into a rock-paper-scissors
AI that usually beats human players. Given a sequence of the opponent’s rock-paper-scissors moves
(such as “rpprssrps”) it predicts the most likely next move for the opponent. In the next round
the AI would then play the move that beats that prediction. The reason that this strategy usually
beats human players is that humans typically use predictable patterns after a large number of rock-
paper-scissors rounds. The PAQ8 text prediction program and rock-paper-scissors AI are available
to be downloaded (see Appendix A).
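The rock-paper-scissors strategy reduces to the same prediction primitive: predict the opponent's most
likely next move from their move history and answer with the move that beats it. The model interface
below is the same hypothetical one as in the previous sketch.

BEATS = {"r": "p", "p": "s", "s": "r"}    # the move that beats each move

def next_move(model, opponent_history):
    probs = model.predict(opponent_history)              # distribution over {"r", "p", "s"}
    predicted = max("rps", key=lambda m: probs.get(m, 0.0))
    return BEATS[predicted]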
M—ay the contemplation of so many wonders extinguish the spirit of vengeance in him!
My— companions and I had decided to escape as soon as the vessel came close enough for us to be heard
My n—erves calmed a little, but with my brain so aroused, I did a swift review of my whole existence
My name i—n my ears and some enormous baleen whales
My name is B—ay of Bengal, the seas of the East Indies, the seas of China
My name is Byr—on and as if it was an insane idea. But where the lounge. I stared at the ship bearing
My name is Byron K—eeling Island disappeared below the horizon,
My name is Byron Kn—ow how the skiff escaped from the Maelstrom’s fearsome eddies,
My name is Byron Knoll.— It was an insane idea. Fortunately I controlled myself and stretched
My name is Byron Knoll. My name is B—yron Knoll. My name is Byron Knoll. My name is Byron Knoll.
F—or example, consider the form of the exponential family
Fi—gure~\ref{fig:betaPriorPost}(c) shows what happens as the number of heads in the past data.
Figure o—f the data, as follows: \bea\gauss(\mu|\gamma, \lambda(2 \alpha-1))
Figure ou—r conclusions are a convex combination of the prior mean and the constraints
Figure out Bayesian theory we must. Jo—rdan conjugate prior
Figure out Bayesian theory we must. Jos—h Tenenbaum point of the posterior mean is and mode of the
Figure out Bayesian theory we must. Josh agrees. Long live P(\vtheta—|\data)
Figure 6: Two examples of PAQ8 interactive text prediction sessions. The user typed the text
in boldface and PAQ8 generated the prediction after the “—” symbol. We shortened some of the
predictions for presentation purposes. In the top example, PAQ8 was trained on “Twenty Thousand
Leagues Under the Seas” (Jules Verne, 1869). In the bottom example, PAQ8 was trained on the
LaTeX source of a machine learning book by Kevin P. Murphy.
4.2 Classification
In many classification settings of practical interest, the data appears in sequences (e.g. text). Text
categorization has particular relevance for classification tasks in the web domain (such as spam
filtering). Even when the data does not appear to be obviously sequential in nature (e.g. images),
one can sometimes find ingenious ways of mapping the data to sequences.
Compression-based classification was discovered independently by several researchers (Marton
et al., 2005). One of the main benefits of compression-based methods is that they are very easy to
apply as they usually require no data preprocessing or parameter tuning. There are several standard
procedures for performing compression-based classification. These procedures all take advantage
of the fact that when compressing the concatenation of two pieces of data, compression programs
tend to achieve better compression rates when the data share common patterns. If a data point
in the test set compresses well with a particular class in the training set, it likely belongs to that
class. Any of the distance metrics defined in Section 2.5 can be directly used to do classification (for
example, using the k-nearest neighbor algorithm). We developed a classification algorithm using
PAQ8 and show that it can be used to achieve competitive classification rates in two disparate
domains: text categorization and shape recognition.
4.2.1 Techniques
Marton et al. (2005) describe three common compression-based classification proce-
dures: standard minimum description length (SMDL), approximate minimum description length
(AMDL), and best-compression neighbor (BCN). Suppose each data point in the training and test
sets is stored in a separate file. Each file in the training set belongs to one of the classes C1, ..., CN.
Let the file Ai be the concatenation of all training files in class Ci . SMDL runs a compression
algorithm on each Ai to obtain a model (or dictionary) Mi . Each test file T is compressed using
each Mi . T is assigned to the class Ci whose model Mi results in the best compression of T . While
Table 5: The number of times each bit of data gets compressed using different compression-based
classification methods. NX is the number of training files, NY is the number of test files, and NZ
is the number of classes.
Method   Training data   Test data
SMDL     1               NZ
AMDL     NY + 1          NZ
BCN      NY + 1          NX
4.2.2 PAQclass
AMDL and BCN both work with off-the-shelf compression programs. However, implementing
SMDL usually requires access to a compression program’s source code. Since PAQ8 is open source,
we modified the source code of paq8l (a version of PAQ8) to implement SMDL. We call this
classifier PAQclass. To the best of our knowledge, PAQ has never been modified to implement
SMDL before. We changed the source code to call fork() when it finishes processing data in the
training set (for a particular class). One forked process is created for every file in the test set.
This essentially copies the state of the compressor after training and allows each test file to be
compressed independently. Note that this procedure is slightly different from SMDL because the
model Mi continues to be adaptively modified while it is processing test file T . However, it still
has the same time complexity as SMDL. paq8l has one parameter to set the compression level. We
used the default parameter setting of 5 during classification experiments.
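In outline, the resulting classifier does the following. The cross_entropy hook is the same hypothetical
compressor instrumentation as in Section 2.5; in PAQclass this is realised inside paq8l itself, with
fork() providing the model snapshot.

def cross_entropy(x: bytes, given: bytes) -> float:
    # Hypothetical hook: train the compressor on `given`, then return the coding
    # cost it assigns to `x`.
    raise NotImplementedError

def classify(test_file: bytes, class_training_data: dict) -> str:
    # class_training_data maps each class name to the concatenation A_i of its training
    # files; the test file is assigned to the class whose model codes it most cheaply.
    best_class, best_cost = None, float("inf")
    for name, train_bytes in class_training_data.items():
        cost = cross_entropy(test_file, given=train_bytes)
        if cost < best_cost:
            best_class, best_cost = name, cost
    return best_class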
Compression performance for AMDL and BCN is measured using file size. The use of file size is
fundamentally limited in two ways. The first is that it is only accurate to within a byte (due to the
way files are stored on disk). The second is that it is reliant on the non-optimal arithmetic coding
process to encode files to disk. Cross entropy is a better measurement of compression performance
because it is subject to neither of these limitations. Since we had access to the paq8l source code,
we used cross entropy as a measure of compression performance instead of file size.
Table 6: Number of documents in each category of the 20news dataset.
Class Count
alt.atheism 799
comp.graphics 973
comp.os.ms-windows.misc 985
comp.sys.ibm.pc.hardware 982
comp.sys.mac.hardware 961
comp.windows.x 980
misc.forsale 972
rec.autos 990
rec.motorcycles 994
rec.sport.baseball 994
rec.sport.hockey 999
sci.crypt 991
sci.electronics 981
sci.med 990
sci.space 987
soc.religion.christian 997
talk.politics.guns 910
talk.politics.mideast 940
talk.politics.misc 775
talk.religion.misc 628
Total 18,828
Table 7: Classification results on the 20news dataset. Each row shows one run of a randomized
80-20 train-test split.
Correctly classified   Accuracy (%)
3470                   92.1402
3482                   92.4588
3466                   92.0340
3492                   92.7244
3480                   92.4057
Average                92.3526
Table 8: Comparative results on the 20news dataset. Our results are in boldface.
Figure 7: Five example images from each class of the chicken dataset. The images have not been
rotated.
Class Count
back 76
breast 96
drumstick 96
thigh and back 61
wing 117
Total 446
There are several options for creating one-dimensional lossy representations of images. For ex-
ample, Watanabe et al. (2002) demonstrate a method of converting images to text.
They show that their system is effective for image classification tasks. Wei et al. (2008)
describe a method of converting shape contours into time series data. They use this representation
to achieve successful classification results on the chicken dataset. Based on these results, we decided
to combine this representation with PAQ8 classification.
Figure 8 demonstrates how we convert images in the chicken dataset into one-dimensional time
series data. The first step is to calculate the centroid of the shape. We project a ray from the
centroid point and measure the distance to the edge of the shape. If the ray intersects the shape
edge at multiple points, we take the furthest intersection (as seen at point 5 in Figure 8). We
rotate the ray around the entire shape and take measurements at uniform intervals. The number of
measurements taken along the shape contour is a tunable parameter. Once the Euclidean distance
is measured for a particular angle of the ray, it is converted into a single byte by rounding the result
of the following formula: 100 × distance/width, where distance is the Euclidean distance measurement
and width is the width of the image. PAQclass is then run on this binary data.
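A sketch of this conversion follows; here the ray intersection is approximated by binning contour
points by their angle around the centroid and keeping the farthest point in each bin, which matches
the "furthest intersection" rule described above.

import math

def shape_to_series(contour, width, num_measurements=40):
    # contour: list of (x, y) boundary points of the shape.
    cx = sum(x for x, _ in contour) / len(contour)
    cy = sum(y for _, y in contour) / len(contour)
    farthest = [0.0] * num_measurements
    for x, y in contour:
        angle = math.atan2(y - cy, x - cx) % (2 * math.pi)
        k = int(angle / (2 * math.pi) * num_measurements) % num_measurements
        dist = math.hypot(x - cx, y - cy)
        farthest[k] = max(farthest[k], dist)          # keep the farthest intersection
    # Quantize each distance to one byte: round(100 * distance / width).
    return bytes(min(255, round(100 * d / width)) for d in farthest)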
Figure 8: An example of converting a shape into one-dimensional time series data. The original
image is shown on top (part of the “wing” class of the chicken dataset) and the time series data
shown on the bottom. Points along the contour have been labeled and the corresponding points on
the time series are shown.
Table 10: Leave-one-out classification results on the chicken dataset with different settings of the
“number of measurements” parameter. There are a total of 446 images. The row with the best
classification results is in boldface.
Measurements   Correctly classified   Accuracy (%)
1              162                    36.3229
5              271                    60.7623
10             328                    73.5426
30             365                    81.8386
35             380                    85.2018
38             365                    81.8386
39             363                    81.3901
40             389                    87.2197
41             367                    82.2870
42             367                    82.2870
45             359                    80.4933
50             352                    78.9238
100            358                    80.2691
200            348                    78.0269
300            339                    76.0090
The classification results for different settings of the “number of measurements” parameter
are shown in Table 10. We used leave-one-out cross-validation since this seems to be the most
common evaluation protocol used on this dataset. The “number of measurements” parameter
Table 11: Confusion matrix for chicken dataset with the “number of measurements” parameter set
to 40. C1=back, C2=breast, C3=drumstick, C4=thigh and back, C5=wing.
                 Predicted
          C1   C2   C3   C4   C5
Actual C1 55   10    2    2    7
       C2  0   93    0    3    0
       C3  0    5   84    0    7
       C4  0    8    0   48    5
       C5  3    1    3    1  109
Table 12: Comparative results on the chicken dataset. Our results are in boldface.
had a surprisingly large effect on classification accuracy. Adjusting the parameter by a single
measurement from the best value (40) resulted in ≈ 5 to 6% loss in accuracy. Another unfortunate
property of adjusting this parameter is that the classification accuracy is not a convex function (as
seen at the parameter value 35). This means finding the optimal value of the parameter would
require an exhaustive search. Due to time constraints, we did not perform an exhaustive search
(only the experiments in Table 10 were performed). Table 11 shows a confusion matrix at the best
parameter setting.
Our result of 87.2197% correct classifications is among the best results published for this dataset.
Table 12 shows some comparative results.
The classification procedure we used is not rotationally invariant. Since the chicken pieces in the
dataset can be in arbitrary orientations, this could lead to a decrease in classification accuracy. Wei
et al. (2008) use the same one-dimensional image representation as we use, but they use
a rotationally invariant classification procedure. They use a 1-nearest-neighbor classifier combined
with a Euclidean distance metric. When comparing the distance between two images, they try
all possible rotations and use the angle which results in the lowest Euclidean distance between the
time series representations. This same procedure of trying all possible orientations could be used to
make the PAQclass classification procedure rotationally invariant (although it would increase the
algorithm’s running time). Alternatively, we could use a single rotationally invariant representation
such as always setting the smallest sampled edge distance to be at angle = 0. The effect of rotational
invariance on classification accuracy would make interesting future work.
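The rotation-invariant comparison described above amounts to minimizing the Euclidean distance over
circular shifts of one of the two series; a direct sketch:

import math

def rotation_invariant_distance(a, b):
    # a, b: equal-length distance series (e.g. the output of shape_to_series).
    n = len(a)
    best = float("inf")
    for shift in range(n):                     # try every possible rotation of one shape
        d = math.sqrt(sum((a[i] - b[(i + shift) % n]) ** 2 for i in range(n)))
        best = min(best, d)
    return best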
Figure 9: 256 6×6 image filters trained using k-means on the CIFAR-10 dataset.
PAQ8 can also be used for performing lossy compression. Any lossy representation can poten-
tially be passed through PAQ8 to achieve additional compression. For example, paq8l can losslessly
compress JPEG images by about 20% to 30%. paq8l contains a model specifically designed for
JPEG images. It essentially undoes the lossless compression steps performed by JPEG (keeping
the lossy representation) and then performs lossless compression more efficiently. To create a lossy
Figure 10: top-left image: original (700×525 pixels), top-right image: our compression method
(4083 bytes), bottom left: JPEG (16783 bytes), bottom-right: JPEG2000 (4097 bytes)
image compression algorithm, we first created a set of filters based on a method described by Coates
et al. (2011). We used the k-means algorithm to learn a set of 256 6×6 filters on
the CIFAR-10 image dataset (Krizhevsky, 2009). The filters were trained using 400,000 randomly
selected image patches. The filters are shown in Figure 9.
In order to create a lossy image representation, we calculated the closest filter match to each
image patch in the original image. These filter selections were encoded by performing a raster scan
through the image and using one byte per patch to store the filter index. These filter selections
were then losslessly compressed using paq8l. Some example images compressed using this method
are shown in Figures 10, 11, 12, and 13. At the maximum JPEG compression rate, the JPEG
images were still larger than the images created using our method. Even at a larger file size the
JPEG images appeared to be of lower visual quality compared to the images compressed using our
method. We also compared against the more advanced lossy compression algorithm JPEG2000.
JPEG2000 has been designed to exploit limitations of human visual perception: the eye is less
sensitive to color variation at high spatial frequencies and it has different degrees of sensitivity
to brightness variation depending on spatial frequency (Mahoney, accessed April 15, 2011). Our
method was not designed to exploit these limitations (we leave this as future work). It simply uses
the filters learned from data. Based on the set of test images, JPEG2000 appears to outperform
our method in terms of visual quality (at the same compression rate).
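A sketch of this pipeline, using scikit-learn's k-means and grayscale patches for simplicity (the 6×6
patch size and 256-filter dictionary follow the text; the remaining settings are illustrative). The
returned index stream is what would then be compressed losslessly, e.g. with paq8l.

import numpy as np
from sklearn.cluster import KMeans

def learn_filters(patches, k=256):
    # patches: (N, 36) array of flattened 6x6 grayscale patches sampled from training images.
    return KMeans(n_clusters=k, n_init=3, random_state=0).fit(patches).cluster_centers_

def encode_image(image, filters, patch=6):
    # image: 2D array with sides divisible by `patch`; returns one byte (filter index) per patch.
    indices = []
    for r in range(0, image.shape[0], patch):            # raster scan over patches
        for c in range(0, image.shape[1], patch):
            block = image[r:r + patch, c:c + patch].reshape(-1)
            d = ((filters - block) ** 2).sum(axis=1)     # squared distance to each filter
            indices.append(int(d.argmin()))
    return bytes(indices)                                # compress this stream losslessly

def decode_image(indices, filters, shape, patch=6):
    # Reconstruct the lossy image by pasting the selected filter into each patch position.
    image = np.zeros(shape)
    it = iter(indices)
    for r in range(0, shape[0], patch):
        for c in range(0, shape[1], patch):
            image[r:r + patch, c:c + patch] = filters[next(it)].reshape(patch, patch)
    return image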
Figure 11: top-left image: original (525×700 pixels), top-right image: our compression method
(1493 bytes), bottom left: JPEG (5995 bytes), bottom-right: JPEG2000 (1585 bytes)
5 Conclusion
We hope this technical exposition of PAQ8 will make the method more accessible and stir up
new research in the area of temporal pattern learning and prediction. Casting the weight updates
in a statistical setting already enabled us to make modest improvements to the technique. We
tried several other techniques from the fields of stochastic approximation and nonlinear filtering,
including the unscented Kalman filter, but did not observe significant improvements over the EKF
implementation. One promising technique from the field of nonlinear filtering we have not yet
implemented is Rao-Blackwellized particle filtering for online logistic regression (Andrieu et al.,
2001). We leave this for future work.
The way in which PAQ8 adaptively combines predictions from multiple models using context
matching is different from what is typically done with mixtures of experts and ensemble methods
such as boosting and random forests. A statistical perspective on this, which allows for a general-
ization of the technique, should be the focus of future efforts. Bridging the gap between the online
learning framework (Cesa-Bianchi and Lugosi, 2006) and PAQ8 is a potentially fruitful research
direction. Recent developments in RNNs seem to be synergistic with PAQ8, but this still requires
methodical exploration. Of particular relevance is the adoption of PAQ8’s deterministic gating
Figure 12: top-left image: original (700×525 pixels), top-right image: our compression method
(3239 bytes), bottom left: JPEG (16077 bytes), bottom-right: JPEG2000 (2948 bytes)
architecture so as to reduce the enormous computational cost of training RNNs. This should be
done in conjunction with a move toward adaptive prediction.
On the application front, we found it remarkable that a single algorithm could be used to tackle
such a broad range of tasks. In fact, there are many other tasks that could have been tackled,
including clustering, compression-based distance metrics, anomaly detection, speech recognition,
and interactive interfaces. It is equally remarkable how the method achieves comparable results to
state-of-the-art in text classification and image compression.
There are challenges in deploying PAQ beyond this point. The first challenge is that the models
are non-parametric and hence require enormous storage capacity. A better memory architecture,
with some forgetting, is needed. The second challenge is the fact that PAQ applies only to univariate
sequences. A computationally efficient extension to multiple sequences does not seem trivial. In
this sense, RNNs have an advantage over PAQ, PPM and stochastic memoizers.
Acknowledgements
We would like to thank Matt Mahoney, who enthusiastically helped us understand important details
of PAQ and provided us with many insights into predictive data compression. We would also like
to thank Ben Marlin and Ilya Sutskever for discussions that helped improve this manuscript.
Figure 13: top-left image: original (700×525 pixels), top-right image: our compression method
(3970 bytes), bottom left: JPEG (6335 bytes), bottom-right: JPEG2000 (3863 bytes)
A PAQ8 Demonstrations
Two of the applications are available at: http://cs.ubc.ca/~knoll/PAQ8-demos.zip. The first
demonstration is text prediction and the second demonstration is a rock-paper-scissors AI. In-
structions are provided on how to compile and run the programs in Linux.
References
G. Andreu, A. Crespo, and J. M. Valiente. Selecting the toroidal self-organizing feature maps
(TSOFM) best organized to object recognition. In International Conference on Neural Networks,
volume 2, pages 1341–1346, 1997.
C. Andrieu, N. de Freitas, and A. Doucet. Rao-Blackwellised particle filtering via data augmenta-
tion. Advances in Neural Information Processing Systems (NIPS), 2001.
M. Bicego and A. Trudda. 2D shape classification using multifractional Brownian motion. In Struc-
tural, Syntactic, and Statistical Pattern Recognition, volume 5342 of Lecture Notes in Computer
Science, pages 906–916. Springer, 2008.
Vision and Pattern Recognition, volume 5681 of Lecture Notes in Computer Science, pages 466–
479. Springer, 2009.
A. Carli, M. Bicego, S. Baldo, and V. Murino. Non-linear generative embeddings for kernels on
latent variable models. In IEEE 12th International Conference on Computer Vision Workshops
(ICCV Workshops), pages 154–161, 2009.
N. Cesa-Bianchi and G. Lugosi. Prediction, Learning, and Games. Cambridge University Press,
2006.
J. Cleary and I. Witten. Data compression using adaptive coding and partial string matching.
IEEE Transactions on Communications, 32(4):396–402, 1984.
J. Cleary, W. Teahan, and I. Witten. Unbounded length contexts for PPM. Data Compression
Conference, 1995.
N. Garay-Vitoria and J. Abascal. Text prediction systems: a survey. Universal Access in the
Information Society, 4:188–203, 2006.
J. Gasthaus, F. Wood, and Y. W. Teh. Lossless compression based on the sequence memoizer.
Data Compression Conference, pages 337–345, 2010.
D. Hilbert. Über die stetige Abbildung einer Linie auf ein Flächenstück. Mathematische Annalen,
38:459–460, 1891.
M. Hutter. The human knowledge compression prize. http://prize.hutter1.net, accessed April 15,
2011.
R. Jacobs, M. Jordan, S. Nowlan, and G. E. Hinton. Adaptive mixtures of local experts. Neural
Computation, 3:79–87, 1991.
A. Kibriya, E. Frank, B. Pfahringer, and G. Holmes. Multinomial naive Bayes for text categorization
revisited. In Advances in Artificial Intelligence, volume 3339 of Lecture Notes in Computer
Science, pages 235–252. Springer, 2005.
A. Krizhevsky. Learning multiple layers of features from tiny images. Master’s thesis, University
of Toronto, 2009.
M. Mahoney. Adaptive weighing of context models for lossless data compression. Florida Tech.
Technical Report, CS-2005-16, 2005.
D. Opitz and R. Maclin. Popular ensemble methods: an empirical study. In Journal of Artificial
Intelligence Research 11, pages 169–198, 1999.
F. Peng, D. Schuurmans, and S. Wang. Augmenting naive Bayes classifiers with statistical language
models. Information Retrieval, 7:317–345, 2004.
A. Perina, M. Cristani, U. Castellani, and V. Murino. A new generative feature set based on entropy
distance for discriminative classification. In Image Analysis and Processing ICIAP 2009, volume
5716 of Lecture Notes in Computer Science, pages 199–208. Springer, 2009.
J. D. M. Rennie. Improving multi-class text classification with naive Bayes. Master’s thesis, M.I.T.,
2001.
J. D. M. Rennie, L. Shih, J. Teevan, and D. Karger. Tackling the poor assumptions of naive Bayes
text classifiers. In International Conference on Machine Learning (ICML), pages 616–623, 2003.
J. Rissanen and G. G. Langdon. Arithmetic coding. IBM J. Res. Dev., 23:149–162, 1979.
D. Shkarin. PPM: one step to practicality. In Data Compression Conference, pages 202–211, 2002.
S. Singhal and L. Wu. Training multilayer perceptrons with the extended Kalman algorithm. In
Advances in neural information processing systems (NIPS), pages 133–140, 1989.
I. Sutskever, J. Martens, and G. E. Hinton. Generating text with recurrent neural networks. In
International Conference on Machine Learning (ICML), 2011.
T. Watanabe, K. Sugawara, and H. Sugihara. A new pattern representation scheme using data
compression. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(5):579–590,
2002.
L. Wei, E. Keogh, X. Xi, and M. Yoder. Efficiently finding unusual shapes in large image databases.
Data Mining and Knowledge Discovery, 17:343–376, 2008.
K. Q. Weinberger and L. K. Saul. Distance metric learning for large margin nearest neighbor
classification. J. Mach. Learn. Res., 10:207–244, 2009.
Z. Yong and D. A. Adjeroh. Prediction by partial approximate matching for lossless image com-
pression. IEEE Transactions on Image Processing, 17(6):924–935, 2008.
T. Zhang and F. Oles. Text categorization based on regularized linear classification methods.
Information Retrieval, 4:5–31, 2001.