Git Branching – Trusted on large projects?

I’m pleasantly surprised by how easy branching is with Git. What worries me is: with the hundreds of files I may have in my directory structure, can I really trust Git to put every file into the right state when I check out another branch? It seems too quick to be true.

Has anyone experienced a time when they checked out a different branch and some files were skipped, or simply not changed when you know they should have been?

3 Answers:

    Because Git addresses files by the hash of their contents, it does not need to rewrite a nearly identical directory, no matter how many files it contains. If you look at the structure of commits and the concept of a tree, you will see that Git walks these structures quite efficiently and changes only the parts of your working directory that need to change. Its power lies in its simplicity.
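    That content addressing is easy to see from the command line. A minimal sketch (the filenames here are made up): `git hash-object` prints the blob ID Git would store for a file, and identical content always yields the identical ID, regardless of filename.

    ```shell
    # Hash two files with identical content but different names;
    # the IDs match because Git hashes content only, not names.
    cd "$(mktemp -d)"
    printf 'hello\n' > a.txt
    printf 'hello\n' > b.txt
    HASH_A=$(git hash-object a.txt)
    HASH_B=$(git hash-object b.txt)
    echo "$HASH_A"
    echo "$HASH_B"
    ```

    This is why a checkout can skip most of a large tree: any file whose blob ID is the same on both branches provably has identical content and never needs to be touched.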

    I’ve never had any issues with Git on large projects. The Linux kernel and other large projects are versioned with Git without problems, with fast and reliable performance.

    Yes, you can trust it. And if something really does go wrong, every other clone of the repository (besides the main repository itself) is a backup 😉
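    The “every clone is a backup” point is straightforward to check: a clone carries the full history, so its HEAD matches the original’s. A throwaway sketch (the paths, names, and commit message are invented):

    ```shell
    # Create a tiny repo, clone it, and confirm the clone holds
    # exactly the same commit as the original.
    set -e
    SRC=$(mktemp -d)
    cd "$SRC"
    git init -q
    git config user.email you@example.com
    git config user.name "You"
    echo data > file.txt
    git add file.txt
    git commit -qm 'first commit'
    BACKUP=$(mktemp -d)/copy
    git clone -q "$SRC" "$BACKUP"
    HEAD_SRC=$(git rev-parse HEAD)
    HEAD_BAK=$(git -C "$BACKUP" rev-parse HEAD)
    echo "$HEAD_SRC"
    echo "$HEAD_BAK"
    ```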

    It’s so fast because:

    • Every operation is local in the first place, which means no slow network operations are required.
    • It only applies the changes to the working tree: it takes the first common ancestor of the current branch and the branch to check out, reverts the working tree from the current checkout back to that ancestor, and then applies the changes from the ancestor to the other branch. This happens in the background, but it greatly reduces the number of files, or changes, that have to be written to the working tree in the end.
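    The second point can be sketched with a throwaway repo (the branch and file names are invented): two branches share one file and differ in another, so switching between them only has to rewrite the file that actually differs.

    ```shell
    # Build a repo where 'feature' differs from the base branch in
    # exactly one file, then show that only that file differs.
    set -e
    cd "$(mktemp -d)"
    git init -q
    git config user.email you@example.com
    git config user.name "You"
    BASE=$(git symbolic-ref --short HEAD)   # 'master' or 'main'
    echo same > shared.txt
    echo v1 > changed.txt
    git add .
    git commit -qm base
    git checkout -qb feature
    echo v2 > changed.txt
    git commit -qam tweak
    # The merge base is the commit where the branches diverged:
    git merge-base "$BASE" feature
    # Only changed.txt differs between the branches, so a checkout
    # between them leaves shared.txt untouched:
    DIFFERS=$(git diff --name-only "$BASE" feature)
    echo "$DIFFERS"
    ```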

    Maybe (possibly, probably ;)) there are other optimizations I don’t know about.

    I’ve been using Git for several years and have found it to be highly reliable in this area, including on projects with thousands of files.
