Why is the Python calculated “hashlib.sha1” different from “git hash-object” for a file?

I’m trying to calculate the SHA-1 value of a file.

I’ve written this script:

    def hashfile(filepath):
        import hashlib
        sha1 = hashlib.sha1()
        f = open(filepath, 'rb')
        try:
            sha1.update(f.read())
        finally:
            f.close()
        return sha1.hexdigest()
    

    For a specific file I get this hash value:
    8c3e109ff260f7b11087974ef7bcdbdc69a0a3b9
    But when I calculate the value with git hash-object, I get this value:
    d339346ca154f6ed9e92205c3c5c38112e761eb7

    How come they differ? Am I doing something wrong, or can I just ignore the difference?

2 Solutions for “Why is the Python calculated “hashlib.sha1” different from “git hash-object” for a file?”

    git calculates hashes like this:

    sha1("blob " + filesize + "\0" + data)
    

    Reference
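
    For instance, here is a minimal sketch (assuming Python 3) of how one could reproduce that calculation; for an ordinary file it should match the output of git hash-object:

    def git_blob_hash(filepath):
        import hashlib
        # git hashes the header "blob <size-in-bytes>\0" followed by the raw file contents.
        with open(filepath, 'rb') as f:
            data = f.read()
        header = b'blob ' + str(len(data)).encode('ascii') + b'\0'
        return hashlib.sha1(header + data).hexdigest()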

    For reference, here’s a more concise version:

    def sha1OfFile(filepath):
        import hashlib
        with open(filepath, 'rb') as f:
            return hashlib.sha1(f.read()).hexdigest()
    

    On second thought: although I’ve never seen it, I think there’s potential for f.read() to return less than the full file, and, for a many-gigabyte file, for f.read() to run out of memory. For everyone’s edification, let’s consider how to fix that. A first attempt is:

    def sha1OfFile(filepath):
        import hashlib
        sha = hashlib.sha1()
        with open(filepath, 'rb') as f:
            for line in f:
                sha.update(line)
            return sha.hexdigest()
    

    However, there’s no guarantee that '\n' appears in the file at all: the for loop hands us chunks terminated by '\n', so a file with no newlines comes back as one enormous chunk and we’re right back at the original problem. Sadly, I don’t see any similarly Pythonic way to iterate over blocks of the file as large as possible, which, I think, means we are stuck with a while True: ... break loop and with a magic number for the block size:

    def sha1OfFile(filepath):
        import hashlib
        sha = hashlib.sha1()
        with open(filepath, 'rb') as f:
            while True:
                block = f.read(2**20)  # Magic number: one-megabyte blocks.
                if not block:
                    break
                sha.update(block)
            return sha.hexdigest()
    

    Of course, who’s to say we can store one-megabyte strings? We probably can, but what if we are on a tiny embedded computer?

    I wish I could think of a cleaner way that is guaranteed to not run out of memory on enormous files and that doesn’t have magic numbers and that performs as well as the original simple Pythonic solution.
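
    One option (an addition here, not from the original answer) is the two-argument form of iter() with a sentinel, which keeps the read loop Pythonic even though a block size still has to be chosen:

    def sha1OfFile(filepath, blocksize=2**20):
        import hashlib
        from functools import partial
        sha = hashlib.sha1()
        with open(filepath, 'rb') as f:
            # iter() keeps calling f.read(blocksize) until it returns the sentinel b''.
            for block in iter(partial(f.read, blocksize), b''):
                sha.update(block)
        return sha.hexdigest()

    On Python 3.11 and later, hashlib.file_digest(f, 'sha1') does the chunked reading internally, so the explicit loop is no longer needed.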
