How to Do a SHA1 File Checksum in C#

How would I perform a SHA1 hash on a file?

using (FileStream stream = File.OpenRead(@"C:\File.ext"))
{
    using (SHA1Managed sha = new SHA1Managed())
    {
        // Hash the whole file and render the digest as a hex string.
        byte[] checksum = sha.ComputeHash(stream);
        string sendCheckSum = BitConverter.ToString(checksum)
                                          .Replace("-", string.Empty);
    }
}

Recalculate the checksum whenever you need to verify that the file has not changed since you last computed it.
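
Note that SHA1Managed is marked obsolete in newer .NET versions. Here is a minimal sketch of the same operation using the SHA1.Create() factory instead; the path is just the example path from above, and using System.IO and System.Security.Cryptography are assumed to be in scope:

using (FileStream stream = File.OpenRead(@"C:\File.ext"))
using (SHA1 sha = SHA1.Create())
{
    // Same hash, produced via the algorithm factory rather than SHA1Managed.
    byte[] checksum = sha.ComputeHash(stream);
    string sendCheckSum = BitConverter.ToString(checksum)
                                      .Replace("-", string.Empty);
}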

Different SHA1 checksum of image in C#

Thanks @Alexei Levenkov, you were right. I was being passed an image that had already been through some conversion. I get the same checksum if I create the byte array from the original file path:

e.g.
byte[] image = File.ReadAllBytes(originalFilePath);

Now I just have to hope that I always have access to this path in the final implementation.
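
For reference, a minimal sketch of hashing the raw bytes read from that original path (originalFilePath is the same placeholder as above, and the same using directives as the earlier examples are assumed). Hashing the unmodified bytes should match the checksum of the file on disk, whereas hashing a re-encoded copy of the image generally will not:

// Hash the untouched bytes from disk; any re-encode or resize of the image
// would change the bytes and therefore the checksum.
byte[] image = File.ReadAllBytes(originalFilePath);
using (SHA1 sha = SHA1.Create())
{
    string checksum = BitConverter.ToString(sha.ComputeHash(image))
                                  .Replace("-", string.Empty);
}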

Get a file SHA256 Hash code and Checksum

  1. My best guess is that there's some additional buffering in the Mono implementation of the File.Read operation. Having recently looked into checksumming a large file, I can say that on a decent-spec Windows machine you should expect roughly 6 seconds per GB if all is running smoothly.

    Oddly, it has been reported in more than one benchmark test that SHA-512 is noticeably quicker than SHA-256 (see 3 below). One other possibility is that the problem is not in allocating the data, but in disposing of the bytes once read. You may be able to use TransformBlock (and TransformFinalBlock) on a single reusable buffer rather than reading the stream in one big gulp (see the sketch after this list); I have no idea if this will work, but it bears investigating.

  2. The difference between hashcode and checksum is (nearly) semantics. They both calculate a shorter 'magic' number that is fairly unique to the data in the input, though if you have 4.6GB of input and 64B of output, 'fairly' is somewhat limited.

    • A checksum is not secure: with a bit of work you can figure out the input from enough outputs, work backwards from output to input, and do all sorts of insecure things.
    • A Cryptographic hash takes longer to calculate, but changing just one bit in the input will radically change the output and for a good hash (e.g. SHA-512) there's no known way of getting from output back to input.
  3. MD5 is broken: collisions (two different inputs that produce the same output) can be fabricated, if needed, on a PC. SHA-256 is (probably) still secure, but won't necessarily stay that way; if your project has a lifespan measured in decades, then assume you'll need to change it. SHA-512 has no known attacks and probably won't have for quite a while, and since it's quicker than SHA-256 on 64-bit hardware I'd recommend it anyway. Benchmarks show it takes about 3 times longer to calculate SHA-512 than MD5, so if your speed issue can be dealt with, it's the way to go.

  4. No idea, beyond those mentioned above. You're doing it right.
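
Referring back to point 1, here is a minimal sketch of chunked hashing with TransformBlock and TransformFinalBlock; the file path, buffer size and choice of SHA-512 are only illustrative assumptions:

// Hash a file in fixed-size chunks instead of one ComputeHash call over the
// whole stream; path and buffer size are illustrative assumptions.
using (var sha = System.Security.Cryptography.SHA512.Create())
using (var stream = File.OpenRead(@"C:\File.ext"))
{
    byte[] buffer = new byte[81920];
    int read;
    while ((read = stream.Read(buffer, 0, buffer.Length)) > 0)
    {
        // Feed each chunk into the running hash.
        sha.TransformBlock(buffer, 0, read, null, 0);
    }
    // Finish the hash; an empty final block is fine.
    sha.TransformFinalBlock(Array.Empty<byte>(), 0, 0);
    string checksum = BitConverter.ToString(sha.Hash).Replace("-", string.Empty);
}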

For a bit of light reading, see Crypto.SE: SHA512 is faster than SHA256?

Edit in response to a question in the comments

The purpose of a checksum is to allow you to check if a file has changed between the time you originally wrote it, and the time you come to use it. It does this by producing a small value (512 bits in the case of SHA512) where every bit of the original file contributes at least something to the output value. The purpose of a hashcode is the same, with the addition that it is really, really difficult for anyone else to get the same output value by making carefully managed changes to the file.

The premise is that if the checksums are the same at the start and when you check it, then the files are the same, and if they're different the file has certainly changed. What you are doing above is feeding the file, in its entirety, through an algorithm that rolls, folds and spindles the bits it reads to produce the small value.

As an example: in the application I'm currently writing, I need to know if parts of a file of any size have changed. I split the file into 16K blocks, take the SHA-512 hash of each block, and store it in a separate database on another drive. When I come to see if the file has changed, I reproduce the hash for each block and compare it to the original. Since I'm using SHA-512, the chances of a changed file having the same hash are unimaginably small, so I can be confident of detecting changes in 100s of GB of data whilst only storing a few MB of hashes in my database. I'm copying the file at the same time as taking the hash, and the process is entirely disk-bound; it takes about 5 minutes to transfer a file to a USB drive, of which 10 seconds is probably related to hashing.
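
A minimal sketch of that per-block approach, assuming a 16 KB block size, an in-memory list of hashes (the answer above stores them in a separate database instead), a placeholder file path, and the usual using directives (System, System.IO, System.Collections.Generic):

// Hash a file in 16 KB blocks so that individual changed blocks can be
// detected later by comparing against the stored block hashes.
var blockHashes = new List<string>();
using (var sha = System.Security.Cryptography.SHA512.Create())
using (var stream = File.OpenRead(@"C:\BigFile.bin"))
{
    byte[] block = new byte[16 * 1024];
    int read;
    while ((read = stream.Read(block, 0, block.Length)) > 0)
    {
        // ComputeHash over just the bytes actually read for this block.
        byte[] hash = sha.ComputeHash(block, 0, read);
        blockHashes.Add(BitConverter.ToString(hash).Replace("-", string.Empty));
    }
}
// Later: recompute the block hashes and compare them with the stored list
// to find which blocks (if any) have changed.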

Lack of disk space to store hashes is a problem I can't solve in a post—buy a USB stick?

How to get SHA1 and MD5 checksum from HttpPostedFileBase file.InputStream

If you are hashing a stream, you need to set the current position of the stream to 0 before computing the hash.

file.InputStream.Seek(0, SeekOrigin.Begin);

For me, this is a great place for an extension method, e.g.:

//compute hash using extension method:
string checksumMd5 = file.InputStream.GetMD5hash();

This is backed by the following extension class:

using System;
using System.IO;

public static class Extension_Methods
{
    public static string GetMD5hash(this Stream stream)
    {
        // Rewind, hash the whole stream, then rewind again so callers can reuse it.
        stream.Seek(0, SeekOrigin.Begin);
        using (var md5Instance = System.Security.Cryptography.MD5.Create())
        {
            var hashResult = md5Instance.ComputeHash(stream);
            stream.Seek(0, SeekOrigin.Begin);
            return BitConverter.ToString(hashResult).Replace("-", "").ToLowerInvariant();
        }
    }
}
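
The question also asks about SHA1; a minimal companion method in the same style can go into the same Extension_Methods class (the name GetSHA1hash is just an illustrative choice):

// Companion SHA1 extension following the same pattern: rewind, hash, rewind.
public static string GetSHA1hash(this Stream stream)
{
    stream.Seek(0, SeekOrigin.Begin);
    using (var sha1Instance = System.Security.Cryptography.SHA1.Create())
    {
        var hashResult = sha1Instance.ComputeHash(stream);
        stream.Seek(0, SeekOrigin.Begin);
        return BitConverter.ToString(hashResult).Replace("-", "").ToLowerInvariant();
    }
}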

How would I hash a folder using SHA1

Just change every MD5 reference to SHA1 in your source code (for example, MD5.Create() becomes SHA1.Create()); the rest of the folder-hashing logic stays the same.
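
If you don't have existing code to adapt, here is a rough illustration of one common way to hash a folder with SHA1: feed each file's relative path and contents into a single running hash. The folder path, ordering rule and encoding below are assumptions, not the original question's code:

// Feed each file's relative path and contents into one running SHA1 so the
// final digest covers the whole folder. Path, ordering and encoding are
// illustrative assumptions.
string folder = @"C:\SomeFolder";
using (var sha = System.Security.Cryptography.SHA1.Create())
{
    string[] files = Directory.GetFiles(folder, "*", SearchOption.AllDirectories);
    Array.Sort(files, StringComparer.OrdinalIgnoreCase); // deterministic order
    foreach (string path in files)
    {
        byte[] pathBytes = System.Text.Encoding.UTF8.GetBytes(
            path.Substring(folder.Length).ToLowerInvariant());
        sha.TransformBlock(pathBytes, 0, pathBytes.Length, null, 0);

        byte[] contentBytes = File.ReadAllBytes(path);
        sha.TransformBlock(contentBytes, 0, contentBytes.Length, null, 0);
    }
    sha.TransformFinalBlock(Array.Empty<byte>(), 0, 0);
    string folderChecksum = BitConverter.ToString(sha.Hash).Replace("-", string.Empty);
}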


