ColoRIX compressed image format

[Update: See also the follow-up post about the 16-color format.]

There’s an old DOS program named ColoRIX VGA Paint. It was developed by RIX Softworks in the 1980s.

Its native graphics file format, which I’ll call “RIX” format or “ColoRIX” format, can be read by a fair number of old DOS graphics utilities, and even a few not-so-old graphics applications. Or rather, at least some RIX files can be read. RIX files can be either compressed or uncompressed, and there’s very little support for compressed images.

I might have missed something, but I couldn’t find any documentation of the compression scheme, or any open-source software that supports compressed RIX images.

Files in RIX format do exist. For example, there are some here: VGARIX.ZIP — and all of those are compressed.

The format is featured in the book Encyclopedia of Graphics File Formats by James D. Murray and William VanRyper (2nd edition, 1996). The book says:

Unfortunately, although the rest of the format […] is reasonably well-documented, the compression algorithm used in the files is not. RIX SoftWorks says that the algorithm is not published because it is “extremely complicated.”

Well, I guess we’re out of luck, then.

Just kidding. I wasted a bunch of time figuring it out.

Overview and scope of this project

The format I want to figure out is the one written by ColoRIX VGA Paint v1.36, in VGA 256-color modes, for compressed full-screen images (“Screen” → “File” → “Small” from the menu). Specifically, that’s files that start with ASCII “RIX3”, and have bytes 0xaf 0x80 at offset 8. It will probably apply to more files than that, but I haven’t yet done the necessary research. A common filename extension for such files is “.SCI”, though there are others.

The information in this post is not guaranteed to be correct. It seems to work for me, but there may well be things that I have overlooked.

These files have a 10-byte header, followed by a palette that for our purposes is always 768 bytes in size. After that, at offset 778, is the compressed pixel data.
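Here's a quick sketch in Python of how I read that layout. The interpretation of header bytes 4 through 7 as little-endian width and height is my reading of the published parts of the format; treat the field names as assumptions.

```python
import struct

def parse_rix_header(data):
    """Split a compressed 256-color RIX file into header fields, palette,
    and compressed pixel data. Width/height field layout is an assumption."""
    if data[0:4] != b"RIX3":
        raise ValueError("not a RIX3 file")
    width, height = struct.unpack_from("<HH", data, 4)
    if data[8:10] != b"\xaf\x80":
        raise ValueError("not the compressed 256-color variant covered here")
    palette = data[10:778]     # 256 RGB triplets, 768 bytes
    compressed = data[778:]    # compressed pixel data starts at offset 778
    return width, height, palette, compressed
```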

Here’s a test/example file, shown in a hex editor:

The test image above starts with 23 yellow pixels (palette entry 0x0e), with the remainder of the image being blue (palette entry 0x01). I avoid black and white, because those colors have some special properties.

As shown in the test image hex dump, most of the blue “image segment 1” part has a regular pattern. But that’s not typical. For most real images, it looks almost like random bytes. I.e., it looks compressed. So, just from looking at a few sample files, it was clear that the compressed data could be separated into two “segments”, with only the second segment really being compressed.

Each segment starts with a two-byte integer (low byte first) that tells us the size of the remainder of the segment. Oddly, for the first segment, the size is measured in two-byte units, while for the second segment, it’s measured in bytes.

In fact, some files have multiple “image” segments. But there is always just one “codebook” segment.
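Assuming that layout, a minimal sketch of splitting the compressed region into its segments might look like this (the function name and return shape are mine; note the codebook size counts two-byte units, while image sizes count bytes):

```python
import struct

def read_segments(data):
    """Split the compressed region (everything from offset 778 on) into the
    codebook segment and a list of image segments."""
    cb_units, = struct.unpack_from("<H", data, 0)
    pos = 2
    codebook = data[pos:pos + cb_units * 2]   # size measured in 2-byte units
    pos += cb_units * 2
    images = []
    while pos + 2 <= len(data):               # tolerate a trailing stray byte
        nbytes, = struct.unpack_from("<H", data, pos)
        pos += 2
        images.append(data[pos:pos + nbytes]) # size measured in bytes
        pos += nbytes
    return codebook, images
```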

The structure of this data made me think of “Squeeze” format. Squeeze is an old file compression format used on CP/M and DOS. It is one of the compression schemes supported by the ARC archiving utility by System Enhancement Associates. Squeeze uses run-length encoding (RLE), followed by Huffman coding with a codebook that is explicitly stored in the file.

Unfortunately, RIX’s format isn’t Squeeze. But it could be something a lot like it. So, I guessed that the first segment defined a Huffman codebook.

The age of RIX format makes it very unlikely that it uses both Huffman coding and LZ77. It could use one or the other, but formats that use both were practically nonexistent until 1989 or 1990.

Assuming the image segment is Huffman coded, a two-color image would have to use at least 1 bit per pixel. My sample image is 320×200 pixels, so 64000 pixels, which should require at least 8000 bytes. But the segment uses only 127 bytes. So the compression can’t be as simple as just a single blob of Huffman-coded pixel values. There must be something else involved, and my initial guess is that, like Squeeze, it also uses RLE.

The codebook segment

The segment I’m naming “codebook” does indeed encode a Huffman code tree (or “codebook”). In retrospect, the format is quite simple, but it took me some time to figure it out.

Aside from the first two bytes (the segment size), and the last four bytes (which always seem to be 0), it’s a sequence of two-byte integers. Some of these integers are between 0x1000 (4096) and 0x10ff (4351). The rest are between 0x0000 and [some unknown, but not very large, value].

It’s pretty clear that the 0x10xx items encode the “leaves” of the Huffman tree. They presumably supply the byte values we might get after we do the Huffman decompression — just subtract 0x1000.

The other items are the “branch nodes” of the tree. But a branch node should have two pointers, and we seem to only have one. In this format, the number is the number of bytes to skip (measured from the end of the branch node) to get to the “0” branch. The “1” branch always begins immediately after the branch node, so there is no explicit pointer to it.

(In fact, you don’t really need to know the value of the branch node at all — you just need to know that it is a branch node, and not a leaf. Once you recursively read the “1” subtree, you’ll know where the “0” subtree should begin.)

Here’s how to interpret the codebook segment in the example file:

000D         = 13 integers follow

 0006       ...        = Skip...
 ├0002      1...        } ... the 6 bytes of
 │├1000     11 = 0x00   } this "1" subtree...
 │└10FF     10 = 0xFF   }
 └0006      0... get to this "0" subtree.
  ├0002     01...
  │├100E    011 = 0x0E
  │└100F    010 = 0x0F
  └0002     00...
   ├1015    001 = 0x15
   └10E7    000 = 0xE7
 0000       Unused?
 0000       Unused?
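Putting that together, the codebook can be parsed with a short recursive routine. This is a sketch of my interpretation: a leaf becomes an int byte value, and a branch becomes a pair of (subtree for bit “1”, subtree for bit “0”). As noted above, the branch node's skip value isn't needed when recursing, because the “0” subtree starts right after the “1” subtree ends.

```python
import struct

def parse_codebook(seg):
    """Parse a codebook segment (size prefix already stripped) into a
    nested-tuple Huffman tree."""
    def node(pos):
        item, = struct.unpack_from("<H", seg, pos)
        pos += 2
        if 0x1000 <= item <= 0x10ff:
            return item - 0x1000, pos   # leaf: the decoded byte value
        one, pos = node(pos)            # "1" branch follows immediately
        zero, pos = node(pos)           # "0" branch follows the "1" subtree
        return (one, zero), pos
    tree, _ = node(0)
    return tree
```

Running this on the example codebook above yields `((0x00, 0xFF), ((0x0E, 0x0F), (0x15, 0xE7)))`, matching the codes in the diagram.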

The 0x0E byte value makes sense — that’s our first color, yellow. The 0x15 (21) byte value is not exactly our run length of 23, but it’s close enough that that’s probably where it came from. Somewhat surprisingly, though, our second color, blue (0x01), is not present at all. What is present is 0x0F, which is the XOR of 0x0E and 0x01.


Compression is perhaps a little easier to understand than decompression, so here’s an overview of the compression algorithm:

  • In certain cases, start by splitting the image into horizontal strips (of height 64 pixels?).
  • For each strip, apply an XOR filter, then RLE-compress the result. The algorithms are explained below.
  • Taking all intermediate-compressed strips into account, construct a suitable Huffman code to use.
  • Compress each strip independently using that Huffman code. The fill order is most-significant bit first. If the compressed data does not end on a byte boundary, pad the last byte with ‘0’ bits until it does.
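The fill order and padding rule in the last step can be illustrated with a small, generic bit-packer (this is not RIX code, just a demonstration of MSB-first packing with zero-bit padding):

```python
def pack_bits(codes):
    """Pack (code, bit_length) pairs most-significant bit first, padding the
    final byte with 0 bits, as described in the overview above."""
    acc = 0
    nbits = 0
    out = bytearray()
    for code, length in codes:
        acc = (acc << length) | code
        nbits += length
        while nbits >= 8:
            nbits -= 8
            out.append((acc >> nbits) & 0xFF)
    if nbits:                                   # pad last byte with 0 bits
        out.append((acc << (8 - nbits)) & 0xFF)
    return bytes(out)
```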

The XOR filter: Replace each byte with the exclusive-or (XOR) of its color value and the color value of the previous pixel. Filtering carries over from one row to the next. For the first pixel of the first row of a strip, assume the previous pixel had color 0.
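As a sketch, the XOR filter for one strip is just:

```python
def xor_filter(pixels):
    """XOR each pixel with the previous pixel's color value; the pixel
    before the start of the strip is treated as color 0."""
    prev = 0
    out = bytearray()
    for p in pixels:
        out.append(p ^ prev)
        prev = p
    return bytes(out)
```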

RLE: Scan the filtered image for bytes of value 0x00 and 0xff. Identify the runs of consecutive bytes of the same value (0x00 or 0xff), splitting runs longer than 256 bytes into multiple runs. Replace each run with two bytes: the byte value (0x00 or 0xff) followed by a count. A count of 0 means a total run length of 1 (or, equivalently, 0 more bytes after the first one), up to a count of 255, which means a run of 256 (or 255 more).
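A sketch of that RLE step (bytes other than 0x00 and 0xff pass through unchanged):

```python
def rle_encode(filtered):
    """Replace runs of 0x00/0xff (up to 256 long) with a (value, count) byte
    pair, where count = run length - 1; other bytes pass through as-is."""
    out = bytearray()
    i = 0
    while i < len(filtered):
        b = filtered[i]
        if b in (0x00, 0xFF):
            n = 1
            while n < 256 and i + n < len(filtered) and filtered[i + n] == b:
                n += 1
            out += bytes([b, n - 1])
            i += n
        else:
            out.append(b)
            i += 1
    return bytes(out)
```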


To decompress, do the inverse of each of those things, in the reverse order. Note that each strip can be decompressed independently to an image, or portion thereof.

Read the codebook definition segment. Identify the image strips. Huffman-decode the data in each strip. Scan the result, and for each 0x00 or 0xff byte encountered, pair it with the following byte to do RLE decompression. Finally, for every byte produced by the RLE step, filter it by XORing it with the previous decoded byte. The “previous decoded byte” is initialized to 0 at the beginning of each strip.
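Here's a sketch of decoding one strip, doing all three inverse steps in one pass. It assumes the codebook has already been parsed into a nested tuple where a branch is a pair of (subtree for bit “1”, subtree for bit “0”) and a leaf is an int byte value. This is my understanding of the scheme, not RIX's actual code:

```python
def decode_strip(tree, data, num_pixels):
    """Huffman-decode bits MSB-first, undo the RLE, and undo the XOR filter.
    Decoding stops after num_pixels output bytes, so trailing filler bits in
    the last byte are ignored."""
    out = bytearray()
    prev = 0        # previous decoded pixel, for undoing the XOR filter
    pending = None  # a 0x00/0xff symbol waiting for its RLE count byte
    node = tree
    for byte in data:
        for shift in range(7, -1, -1):
            bit = (byte >> shift) & 1
            node = node[0] if bit else node[1]
            if not isinstance(node, int):
                continue
            sym, node = node, tree
            if pending is not None:
                value, count = pending, sym + 1  # count byte follows 00/ff
                pending = None
            elif sym in (0x00, 0xFF):
                pending = sym                    # wait for the count byte
                continue
            else:
                value, count = sym, 1
            for _ in range(count):
                prev ^= value                    # undo the XOR filter
                out.append(prev)
                if len(out) == num_pixels:
                    return bytes(out)
    return bytes(out)
```

Using the example file's codebook, the bytes `79 40` decode to 23 pixels of 0x0E followed by one 0x01, which is exactly the yellow-then-blue pattern of the test image.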

The problem with filler bits

When you decompress a strip, other than the last strip, you need to know exactly how many bytes it should decompress to. Alternatively, knowing the bit-exact size of the compressed data would be sufficient. However, neither of these things is explicitly stored in the file. There is no obvious way to tell whether trailing 0-valued bits in the final byte of compressed data are filler bits, or Huffman codes.

This is definitely a real problem that can happen, for very simple multi-strip images. I don’t know how the ColoRIX software solves it. It might be the case that every strip other than the last one is always exactly 64 pixels high. If that’s true, it would solve the problem.

Short of that, a good-enough solution might be to truncate each decompressed strip to the most recent image row boundary.
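That heuristic is a one-liner (here `width` is the image width in pixels, one byte per pixel):

```python
def truncate_to_row(strip_pixels, width):
    """Discard any trailing partial row that decoding filler bits as
    Huffman codes may have produced."""
    return strip_pixels[: (len(strip_pixels) // width) * width]
```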

(This does raise the question of how to know when we’ve reached the last strip. Normally, we know this because the file ends immediately after the last strip. But files from this era sometimes have extraneous bytes appended to them, so it might be good to tolerate that somehow.)


  • I haven’t looked at how 16-color images are stored. It’s on my to-do list.
  • Though the software supports 256 colors, the brush color selection feature only lets you choose from among 16 colors at a time. At first, I assumed these colors were palette entries 0 through 15. Wrong. They are 0 through 14, but the last one, white, is not 15; it’s 255.
  • A run of pixels that alternate between two different colors can only be RLE-compressed if those two colors have palette entries that are complementary — they must add to 255 (or, equivalently, XOR to 255). The fact that, by default, black and white are complementary in this sense was probably by design.
  • In the ColoRIX version I tested, 320×200 images are always stored in a single segment, on the assumption that the 64000 pixels cannot compress to more than the maximum segment size of 65535 bytes. But that assumption is false. It is possible, though not easy, to construct a pathological image that compresses to more bytes than will fit in a segment. If you save and reload such an image, you get garbage.
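The complementary-colors observation is easy to check numerically. In this sketch, 0x0E and 0xF1 are an arbitrary complementary pair; after the XOR filter, an alternating sequence of them becomes a constant run of their XOR, which the RLE step can only compress when that constant is 0x00 or 0xFF:

```python
a, b = 0x0E, 0xF1                 # complementary: 0x0E ^ 0xF1 == 0xFF
pixels = bytes([a, b] * 4)
# XOR each byte with the previous one (previous pixel before the start is 0)
filtered = bytes(p ^ q for p, q in zip(pixels, b"\x00" + pixels))
# After the first pixel, every filtered byte is a ^ b:
assert set(filtered[1:]) == {a ^ b} == {0xFF}
```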
