• Mic_Check_One_Two@reddthat.com · 47 points · 1 year ago

    It isn’t compressible at all, really. As far as a compression algorithm is concerned, it just looks like random data.

    Imagine trying to compress a text file. Each letter normally takes 8 bits to represent: the computer reads 8 bits at a time and knows which character to display. It has to read all 8 bits for every character, even when most of them are “empty”, because there’s no way to mark where one letter stops and the next begins. It’s all just 1’s and 0’s, so it’s not like you can insert “next letter” flags in there. But we can cut that down.
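    (Just to make the fixed-width part concrete, here’s a quick Python sketch of my own showing what 8-bits-per-letter looks like for plain ASCII:)

        # Every character occupies exactly 8 bits, with no separators anywhere.
        text = "BDBAB"
        bits = "".join(format(ord(ch), "08b") for ch in text)
        print(bits)        # 0100001001000100010000100100000101000010
        print(len(bits))   # 40 bits for 5 letters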

    One of the easiest ways to do this is to count all the letters and sort them from most to least common. Then we build a tree, with a fork for each character. You start at the top of the tree and follow it down: every 0 takes you down one more fork, and a 1 means “read the letter at the fork you’re on.” So for instance, if the letters are sorted “ABCDEF…”, then “0001” would be D. Now D is represented with only 4 bits instead of 8. After reading the 1, you return to the top of the tree and start over. So “01000101101” would be “BDBAB”. Normally that sequence would take 40 bits to represent (because each character is 8 bits long), but we just did it in 11 bits total.
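    Here’s a rough Python sketch of that exact scheme, written by me, assuming the alphabet is already sorted from most to least common as “ABCDEF…”:

        # The Nth most common letter is written as (N-1) zeros followed by a 1:
        # A -> "1", B -> "01", C -> "001", D -> "0001", ...
        ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"   # assumed frequency order

        def encode(text):
            return "".join("0" * ALPHABET.index(ch) + "1" for ch in text)

        def decode(bits):
            out, zeros = [], 0
            for b in bits:
                if b == "0":
                    zeros += 1                    # follow one more fork down
                else:
                    out.append(ALPHABET[zeros])   # a 1 means "read this letter"
                    zeros = 0                     # back to the top of the tree
            return "".join(out)

        print(encode("BDBAB"))         # 01000101101 -- 11 bits instead of 40
        print(decode("01000101101"))   # BDBAB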

    But notice that this also has the potential to produce letters that are MORE than 8 bits long. If we follow that same pattern I listed above, “I” would be 9 bits, “J” would be 10, etc… We’re able to achieve compression because we use the more common (shorter) letters a lot and the less common (longer) letters rarely.
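    With the same assumed “ABCDEF…” ordering, the code-word lengths fall straight out of each letter’s position:

        ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"   # assumed frequency order
        for ch in "AIJZ":
            # the Nth letter costs N bits: (N-1) zeros plus the final 1
            print(ch, ALPHABET.index(ch) + 1, "bits")
        # A 1 bits, I 9 bits, J 10 bits, Z 26 bits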

    Encryption undoes this completely, because (as far as compression is concerned) the data is completely random. When you look at random data with no discernible pattern, counting the characters and sorting by frequency is basically a lesson in futility. All the letters get used about equally often, so even the “most frequent” characters are only ahead by a little bit of random chance. Now, even if the frequencies still happen to follow my earlier ordering, the number of Z’s is so close to the number of A’s that the file ends up even longer than before. Remember, the compression only works when the most frequent characters are actually used most frequently. Since a lot of the code words are longer than 8 bits, and those characters are being used just as much as the shorter ones, our compression method fails and actually produces a file that is larger than the original.
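    You can see the same effect with an off-the-shelf compressor instead of my toy code above (a rough sketch; os.urandom stands in for encrypted output here):

        import os, zlib

        text = b"AAAAABBBBCCCDDE" * 1000      # heavily skewed letter frequencies
        noise = os.urandom(len(text))         # random bytes, standing in for encrypted data

        print(len(text), "->", len(zlib.compress(text)))    # shrinks dramatically
        print(len(noise), "->", len(zlib.compress(noise)))  # comes out slightly LARGER than the input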

    • Eloise@lemm.ee · 12 points · 1 year ago

      I understood this despite being drunk, thank you for the excellent explanation!