Why Does My C# Gzip Produce a Larger File Than Fiddler or PHP

Why does my C# gzip produce a larger file than Fiddler or PHP?

Preface: .NET users should not use the Microsoft-provided GZipStream or DeflateStream classes under any circumstances, unless Microsoft replaces them completely with something that works. Use the DotNetZip library instead.

Update to Preface: The .NET Framework 4.5 and later have fixed the compression problem, and GZipStream and DeflateStream use zlib in those versions. I do not know if the CRC problem referenced below has been fixed.

Another update: The CRC problem is not only not fixed, but Microsoft has decided that they won't fix it!
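
For anyone who wants the drop-in alternative, here is a minimal sketch (not part of the original answer) assuming the DotNetZip package, whose Ionic.Zlib.GZipStream follows the same stream-based usage pattern as the BCL class; the helper name is just for illustration:

    using System.IO;
    using System.Text;
    using Ionic.Zlib;

    // Sketch only: gzip a string with DotNetZip instead of
    // System.IO.Compression.GZipStream.
    static byte[] GzipWithDotNetZip(string text)
    {
        byte[] input = Encoding.UTF8.GetBytes(text);
        using (var output = new MemoryStream())
        {
            using (var gzip = new GZipStream(output, CompressionMode.Compress))
            {
                gzip.Write(input, 0, input.Length);
            } // closing the stream writes the final block and the gzip trailer
            // ToArray remains valid even after the MemoryStream has been closed.
            return output.ToArray();
        }
    }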

This is one of several bugs in GZipStream. No self-respecting gzip compressor should ever produce 133 bytes of output from 11 bytes of input. See my comments at Why does BCL GZipStream (with StreamReader) not reliably detect Data Errors with CRC32?.

What is happening internally is that GZipStream is not using the static or stored methods, both of which would produce compressed data about the same size as the input data (on top of which would be added 18 bytes of gzip header and trailer). Instead it is using the dynamic method, which creates a very large code descriptor header for a very small number of codes. It is simply a bug / very bad implementation.

Update:

With the hex dumps, I can provide some analysis. First, both the Fiddler and php output are correct and proper. The only difference between them is in the gzip header, in particular the timestamp set in Fiddler but not in php, and the originating operating system set in php but not in Fiddler. For both, the 13 bytes of compressed data are identical and can be represented as (using my infgen program to disassemble deflate streams):

last
static
literal 'Hello World
end

which is exactly as it should be. A single static block, which requires no code descriptors, and simply coding all of the bytes as literals. (No matches of previous strings with lengths and distances.)
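
Putting numbers on that: 13 bytes of static-block deflate data plus the 18 bytes of gzip header and trailer mentioned above comes to roughly 31 bytes of output for the 11-byte input, compared to the 133 bytes that GZipStream produces.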

The output of GZipStream on the other hand is a horrible mess in several ways. The compressed data is:

dynamic
code 3 5
code 4 5
code 5 4
code 6 4
code 7 4
code 8 3
code 9 3
code 10 4
code 11 4
code 12 4
code 13 4
code 14 3
code 16 3
litlen 0 14
litlen 1 14
litlen 2 14
litlen 3 14
litlen 4 14
litlen 5 14
litlen 6 14
litlen 7 14
litlen 8 14
litlen 9 12
litlen 10 6
litlen 11 14
litlen 12 14
litlen 13 14
litlen 14 14
litlen 15 14
litlen 16 14
litlen 17 14
litlen 18 14
litlen 19 14
litlen 20 14
litlen 21 14
litlen 22 14
litlen 23 14
litlen 24 14
litlen 25 14
litlen 26 14
litlen 27 14
litlen 28 14
litlen 29 14
litlen 30 13
litlen 31 14
litlen 32 6
litlen 33 14
litlen 34 10
litlen 35 12
litlen 36 14
litlen 37 14
litlen 38 13
litlen 39 10
litlen 40 8
litlen 41 9
litlen 42 11
litlen 43 10
litlen 44 7
litlen 45 8
litlen 46 7
litlen 47 9
litlen 48 8
litlen 49 8
litlen 50 8
litlen 51 9
litlen 52 8
litlen 53 9
litlen 54 10
litlen 55 9
litlen 56 8
litlen 57 9
litlen 58 9
litlen 59 8
litlen 60 9
litlen 61 10
litlen 62 8
litlen 63 14
litlen 64 14
litlen 65 8
litlen 66 9
litlen 67 8
litlen 68 9
litlen 69 8
litlen 70 9
litlen 71 10
litlen 72 11
litlen 73 8
litlen 74 11
litlen 75 14
litlen 76 9
litlen 77 10
litlen 78 9
litlen 79 10
litlen 80 9
litlen 81 12
litlen 82 9
litlen 83 9
litlen 84 9
litlen 85 10
litlen 86 12
litlen 87 11
litlen 88 14
litlen 89 14
litlen 90 12
litlen 91 11
litlen 92 14
litlen 93 11
litlen 94 14
litlen 95 14
litlen 96 14
litlen 97 6
litlen 98 7
litlen 99 7
litlen 100 7
litlen 101 6
litlen 102 8
litlen 103 8
litlen 104 7
litlen 105 6
litlen 106 12
litlen 107 9
litlen 108 6
litlen 109 7
litlen 110 7
litlen 111 6
litlen 112 7
litlen 113 13
litlen 114 6
litlen 115 6
litlen 116 6
litlen 117 7
litlen 118 8
litlen 119 8
litlen 120 9
litlen 121 8
litlen 122 11
litlen 123 13
litlen 124 12
litlen 125 13
litlen 126 13
litlen 127 14
litlen 128 14
litlen 129 14
litlen 130 14
litlen 131 14
litlen 132 14
litlen 133 14
litlen 134 14
litlen 135 14
litlen 136 14
litlen 137 14
litlen 138 14
litlen 139 14
litlen 140 14
litlen 141 14
litlen 142 14
litlen 143 14
litlen 144 14
litlen 145 14
litlen 146 14
litlen 147 14
litlen 148 14
litlen 149 14
litlen 150 14
litlen 151 14
litlen 152 14
litlen 153 14
litlen 154 14
litlen 155 14
litlen 156 14
litlen 157 14
litlen 158 14
litlen 159 14
litlen 160 14
litlen 161 14
litlen 162 14
litlen 163 14
litlen 164 14
litlen 165 14
litlen 166 14
litlen 167 14
litlen 168 14
litlen 169 14
litlen 170 14
litlen 171 14
litlen 172 14
litlen 173 14
litlen 174 14
litlen 175 14
litlen 176 14
litlen 177 14
litlen 178 14
litlen 179 14
litlen 180 14
litlen 181 14
litlen 182 14
litlen 183 14
litlen 184 14
litlen 185 14
litlen 186 14
litlen 187 14
litlen 188 14
litlen 189 14
litlen 190 14
litlen 191 14
litlen 192 14
litlen 193 14
litlen 194 14
litlen 195 14
litlen 196 14
litlen 197 14
litlen 198 14
litlen 199 14
litlen 200 14
litlen 201 14
litlen 202 14
litlen 203 14
litlen 204 14
litlen 205 14
litlen 206 14
litlen 207 14
litlen 208 14
litlen 209 14
litlen 210 14
litlen 211 14
litlen 212 14
litlen 213 14
litlen 214 14
litlen 215 14
litlen 216 14
litlen 217 14
litlen 218 14
litlen 219 14
litlen 220 14
litlen 221 14
litlen 222 14
litlen 223 14
litlen 224 14
litlen 225 14
litlen 226 14
litlen 227 14
litlen 228 14
litlen 229 14
litlen 230 14
litlen 231 14
litlen 232 14
litlen 233 14
litlen 234 14
litlen 235 14
litlen 236 14
litlen 237 14
litlen 238 14
litlen 239 14
litlen 240 14
litlen 241 14
litlen 242 14
litlen 243 13
litlen 244 13
litlen 245 13
litlen 246 14
litlen 247 13
litlen 248 14
litlen 249 13
litlen 250 14
litlen 251 13
litlen 252 14
litlen 253 14
litlen 254 14
litlen 255 14
litlen 256 14
litlen 257 4
litlen 258 3
litlen 259 4
litlen 260 4
litlen 261 4
litlen 262 5
litlen 263 5
litlen 264 5
litlen 265 5
litlen 266 5
litlen 267 6
litlen 268 6
litlen 269 5
litlen 270 6
litlen 271 7
litlen 272 8
litlen 273 8
litlen 274 9
litlen 275 10
litlen 276 9
litlen 277 10
litlen 278 12
litlen 279 11
litlen 280 12
litlen 281 14
litlen 282 14
litlen 283 14
litlen 284 12
litlen 285 11
dist 0 6
dist 1 10
dist 2 11
dist 3 11
dist 4 9
dist 5 8
dist 6 8
dist 7 8
dist 8 7
dist 9 7
dist 10 5
dist 11 6
dist 12 4
dist 13 5
dist 14 4
dist 15 5
dist 16 4
dist 17 5
dist 18 4
dist 19 4
dist 20 4
dist 21 4
dist 22 4
dist 23 4
dist 24 4
dist 25 5
dist 26 4
dist 27 5
dist 28 5
dist 29 5
literal 'Hello World
end
!
last
stored
end

So what is all that? The actual data is just the line near the end, "literal 'Hello World", which codes each byte of the input as a literal. What precedes it is a description of a set of Huffman codes for literals, lengths, and distances. Here are the things wrong with it:

  • First off, it should not be using dynamic at all. Describing the set of codes takes about 100 bytes. This is precisely why the deflate format provides a pre-defined set of codes used in static blocks. The compressor should select a static block in this case (which is what php and Fiddler are doing).
  • Second, every single possible code is defined, even though the vast majority are never used! When using a dynamic block, a proper compressor will only define codes for the literals, lengths, and distances actually used in that block. In this case there are no lengths or distances used, and only eight different literals (H, e, l, o, space, W, r, and d). Instead it proceeds to define 256 literal codes, 29 length codes, and 30 distance codes. I am guessing that some experimentation would show that the dynamic header from GZipStream is always the same, in which case it's not even dynamic, which is the whole point!
  • Third, it throws in an unnecessary empty stored block at the end. The first block should have been marked as the last block.

All of this points to the simple fact that whoever wrote this GZipStream code was, to put it as politely as I can, lacking in any understanding of the deflate format or compression in general. They elected to produce only dynamic blocks (except for an empty stored block at the end), to produce the same dynamic header every time (I think), defeating the purpose of dynamic blocks, and to not bother to figure out whether the current block is the last one, which requires putting out an empty block to mark the end.

As noted elsewhere, those aren't the only problems with GZipStream. It can't even properly use the CRC-32 as intended to detect corrupt streams.

The truly perplexing thing is not why Microsoft assigned someone incompetent to write a gzip compressor and decompressor, but rather why they assigned anyone at all to write it! There is freely available code, zlib, that has an extremely liberal license that permits commercial use with no attribution. This code has been deployed widely for almost two decades, and does all the things it's supposed to do correctly and efficiently. Most everything else uses zlib, including php and I suspect Fiddler as well.

.NET 6 failing at Decompress large gzip text

Just confirmed that the article linked in the comments below the question contains a valid clue about the issue.

The corrected code would be:

string Decompress(string compressedText)
{
    var gZipBuffer = Convert.FromBase64String(compressedText);

    using var memoryStream = new MemoryStream();
    int dataLength = BitConverter.ToInt32(gZipBuffer, 0);
    memoryStream.Write(gZipBuffer, 4, gZipBuffer.Length - 4);

    var buffer = new byte[dataLength];
    memoryStream.Position = 0;

    using var gZipStream = new GZipStream(memoryStream, CompressionMode.Decompress);

    int totalRead = 0;
    while (totalRead < buffer.Length)
    {
        int bytesRead = gZipStream.Read(buffer, totalRead, buffer.Length - totalRead);
        if (bytesRead == 0) break;
        totalRead += bytesRead;
    }

    return Encoding.UTF8.GetString(buffer);
}

This approach changes

    gZipStream.Read(buffer, 0, buffer.Length);

to

    int totalRead = 0;
    while (totalRead < buffer.Length)
    {
        int bytesRead = gZipStream.Read(buffer, totalRead, buffer.Length - totalRead);
        if (bytesRead == 0) break;
        totalRead += bytesRead;
    }

which correctly takes Read's return value into account.

Without the change, the issue is easily reproducible on any string random enough to produce a gzip payload larger than about 10 KB.
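
As an alternative (a sketch, not from the original answer), the manual loop can be avoided entirely by letting CopyTo drain the GZipStream, since CopyTo keeps calling Read until it returns 0. It assumes the same 4-byte length prefix written by the Compress method shown below:

    using System;
    using System.IO;
    using System.IO.Compression;
    using System.Text;

    string DecompressWithCopyTo(string compressedText)
    {
        var gZipBuffer = Convert.FromBase64String(compressedText);

        // Skip the 4-byte length prefix that the Compress method writes.
        using var memoryStream = new MemoryStream(gZipBuffer, 4, gZipBuffer.Length - 4);
        using var gZipStream = new GZipStream(memoryStream, CompressionMode.Decompress);
        using var output = new MemoryStream();

        gZipStream.CopyTo(output); // loops internally until Read returns 0
        return Encoding.UTF8.GetString(output.ToArray());
    }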

Here's the compressor, in case anyone is interested in testing this on their own:

string Compress(string plainText)
{
    var buffer = Encoding.UTF8.GetBytes(plainText);
    using var memoryStream = new MemoryStream();

    var lengthBytes = BitConverter.GetBytes(buffer.Length);
    memoryStream.Write(lengthBytes, 0, lengthBytes.Length);

    // Dispose the GZipStream before reading the MemoryStream so that the
    // final deflate block and the gzip trailer are written out; leaveOpen
    // keeps the MemoryStream usable afterwards.
    using (var gZipStream = new GZipStream(memoryStream, CompressionMode.Compress, leaveOpen: true))
    {
        gZipStream.Write(buffer, 0, buffer.Length);
    }

    var gZipBuffer = memoryStream.ToArray();

    return Convert.ToBase64String(gZipBuffer);
}

Why is deflate making my data BIGGER?

The degree to which your data is expanding is due to a bug in the DeflateStream class. The bug also exists in the GZipStream class. See my description of this problem here: Why does my C# gzip produce a larger file than Fiddler or PHP?.

Do not use the DeflateStream class provided by Microsoft. Use DotNetZip instead, which provides replacement classes.

Incompressible data will expand slightly when you try to compress it, but only by a small amount. The maximum expansion from a properly written deflate compressor is five bytes plus a small fraction of a percent. zlib's expansion of incompressible data (with the default settings for raw deflate) is 5 bytes + 0.03% of the input size. Your 304 bytes, if incompressible, should come out as 309 bytes from a raw deflate compressor like DeflateStream. A factor of 1.9 expansion on something more than five or six bytes in length is a bug.
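
As a quick sanity check, here is a sketch (not from the original answer) that assumes DotNetZip's Ionic.Zlib.DeflateStream behaves like zlib on incompressible input; the output should be only a few bytes larger than the input, not nearly double:

    using System;
    using System.IO;
    using Ionic.Zlib;

    // Sketch: 304 random bytes are effectively incompressible, so a
    // zlib-based raw deflate should emit roughly input + 5 bytes (about 309).
    var input = new byte[304];
    new Random().NextBytes(input);

    var output = new MemoryStream();
    using (var deflate = new DeflateStream(output, CompressionMode.Compress))
    {
        deflate.Write(input, 0, input.Length);
    } // closing the stream flushes the final deflate block

    byte[] compressed = output.ToArray(); // valid even after the stream is closed
    Console.WriteLine($"{input.Length} -> {compressed.Length} bytes");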

tail | gzip gives different file length than gzip after tail

http://www.zlib.org/rfc-gzip.html#header-trailer

If you compress a file, the original filename is recorded in the FNAME field.
If you compress a stream, there is no original file name.

That seems to account for the difference in your case.
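
If you want to verify this yourself, here is a small sketch (not part of the original answer) that reads the FLG byte of a gzip header and checks the FNAME bit defined in RFC 1952; "example.gz" is a placeholder path:

    using System;
    using System.IO;

    // Gzip header layout (RFC 1952): ID1 ID2 CM FLG MTIME(4) XFL OS,
    // followed by optional fields such as FNAME when FLG bit 3 is set.
    var header = new byte[4];
    using (var fs = File.OpenRead("example.gz"))
    {
        if (fs.Read(header, 0, header.Length) == header.Length &&
            header[0] == 0x1f && header[1] == 0x8b)
        {
            bool hasName = (header[3] & 0x08) != 0; // FLG.FNAME
            Console.WriteLine(hasName
                ? "FNAME present: gzip was given a file name to record"
                : "no FNAME: input came from a stream or pipe");
        }
    }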

GZIP outputs and sizes between C#/dot Net and Java

This is one of several bugs in the .NET gzip code. That code should be avoided. Use DotNetZip instead. See answer here: Why does my C# gzip produce a larger file than Fiddler or PHP? .

.NET GZipStream decompress producing empty stream

You need to close the GZipStream before getting the compressed bytes from the memory stream. In this case the closing is handled by the Dispose call that the using statement triggers.

using (var compressed = new MemoryStream())
{
    using (var compressor = new GZipStream(compressed, CompressionMode.Compress))
    {
        uncompressed.CopyTo(compressor);
    }
    // Get the compressed bytes only after closing the GZipStream
    compressedBytes = compressed.ToArray();
}

This works, and you could even remove the using for the MemoryStream, since it will be disposed by the GZipStream unless you use the constructor overload that lets you specify that the underlying stream should be left open (sketched below). That does mean this code calls ToArray on a disposed stream, but that is allowed because the bytes are still available, which makes disposing memory streams a bit odd; if you don't dispose them, though, FxCop will complain at you.
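
Here is what that overload looks like (a sketch reusing the names from the snippet above): leaveOpen keeps the MemoryStream usable after the GZipStream is disposed, so ToArray is no longer called on a disposed stream.

    using (var compressed = new MemoryStream())
    {
        using (var compressor = new GZipStream(compressed, CompressionMode.Compress, leaveOpen: true))
        {
            uncompressed.CopyTo(compressor);
        }
        // The MemoryStream is still open here because of leaveOpen: true.
        compressedBytes = compressed.ToArray();
    }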

Gzip magic number problem while decompressing it

I found out the problem. After researching more, I found this:

Why does my C# gzip produce a larger file than Fiddler or PHP?

And I used the Ionic.Zlib library. With that, I can decompress and get the original file without any problem.

Here is my code snippet:

byte[] bytes = Convert.FromBase64String(voucher.Document.Value);
var zippedStream = new MemoryStream(bytes);
var decompressed = new MemoryStream();
var zOut = new ZlibStream(decompressed, Ionic.Zlib.CompressionMode.Decompress, true);
CopyStream(zippedStream, zOut);
byte[] byteArray = decompressed.ToArray();
return File(byteArray, "application/pdf", "voucher.pdf");

CopyStream function:

static void CopyStream(System.IO.Stream src, System.IO.Stream dest)
{
    byte[] buffer = new byte[1024];
    int len;
    while ((len = src.Read(buffer, 0, buffer.Length)) > 0)
        dest.Write(buffer, 0, len);
    dest.Flush();
}
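
Incidentally, CopyStream does the same job as the built-in Stream.CopyTo, which could be used here instead.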

