Git Branching – Trusted on large projects?

I’m pleasantly surprised by how easy branching is with Git. What worries me is: with the hundreds of files I may have in my directory structure, can I really trust Git to put every file into the right state when I check out another branch? It seems too quick to be true.

Has anyone experienced a time where they checked out a different branch and some files were skipped, or simply not changed, when you knew they should have been?

3 Answers

    Because Git addresses files by the hash of their contents, it never needs to rewrite a nearly identical directory, no matter how many files are in there. If you look at the structure of commits and the concept of a tree, you will see that Git walks these structures very efficiently and only changes the parts of your working directory that actually need to change. Its power lies in its simplicity.
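
    A quick way to see this content-addressed structure for yourself (run inside any Git repository; the file path below is just a placeholder):

        # the commit object records the author, the message, and a single tree hash
        git cat-file -p HEAD

        # the tree object lists each entry together with the hash of its contents;
        # entries that did not change keep the same hash across commits and branches
        git cat-file -p 'HEAD^{tree}'

        # the hash is derived purely from the file's content, so identical files
        # always map to the same blob object
        git hash-object path/to/some/file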

    I’ve never had any issues with Git on large projects. The Linux kernel and other large projects are versioned with Git without problems, with fast and reliable performance.

    Yes, you can trust it. And if something really does go wrong, any other clone of the repository (besides the main repository itself) serves as a backup 😉
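
    As a rough sketch of the “every clone is a backup” point (the path and URL here are made up):

        # a clone carries the full history, not just the latest snapshot,
        # so any colleague's clone can be used to restore a lost repository
        git clone /backups/colleague-copy/project.git recovered-project

        # and from inside that clone you can recreate the shared repository,
        # including all branches and tags
        git push --mirror git@example.com:team/project.git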

    It’s so fast because:

    • Every operation is local in the first place, which means no slow network operations are required.
    • It only writes the necessary changes into the working tree: it finds the first common ancestor of the current branch and the branch being checked out, reverts the changes from the current checkout back to that ancestor, and then applies the changes from the ancestor to the other branch. This all happens behind the scenes, but it greatly reduces the number of files that actually have to be touched in the working tree (see the sketch below).

    There are probably other optimizations on top of that, too.
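
    One way to convince yourself that checkout only rewrites what actually differs (the branch names here are hypothetical):

        # list the paths that differ between the two branches;
        # these are the only working-tree files checkout needs to touch
        git diff --name-only master feature-branch

        # switching branches leaves every other file (and its timestamp) alone
        git checkout feature-branch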

    I’ve been using Git for several years and have found it to be highly reliable in this area. I’ve used it on projects with thousands of files.
