How do you organise multiple git repositories, so that all of them are backed up together?
With SVN, I had a single big repository that I kept on a server and checked out on a few machines. This was a pretty good backup system, and allowed me to work easily on any of the machines. I could check out a specific project, commit, and it updated the ‘master’ project, or I could check out the entire thing.
Now, I have a bunch of git repositories for various projects, several of which are on github. I also have the SVN repository I mentioned, imported via the git-svn command.
Basically, I like having all my code (not just projects, but random snippets and scripts, some things like my CV, articles I’ve written, websites I’ve made and so on) in one big repository I can easily clone onto remote machines, or memory-sticks/harddrives as backup.
The problem is that since it’s a private repository, git doesn’t allow checking out a specific folder (in a way that I could push to github as a separate project, while having the changes appear in both the master repo and the sub-repos).
I could use the git submodule system, but it doesn’t act how I want it to (submodules are pointers to other repositories, and don’t really contain the actual code, so they’re useless for backup).
Currently I have a folder of git repos (for example, ~/code_projects/proj1/.git/ and ~/code_projects/proj2/.git/). After making changes to proj1 I do git push github, then I copy the files into ~/Documents/code/python/projects/proj1/ and do a single commit (instead of the numerous ones in the individual repos). Then I do git push backupdrive1, git push mymemorystick, etc.
So, the question: how do you organise your personal code and projects in git repositories, and keep them synced and backed up?
6 Solutions for “How do you organise multiple git repositories, so that all of them are backed up together?”
I would strongly advise against putting unrelated data in a given
Git repository. The overhead of creating new repositories is quite
low, and that is a feature that makes it possible to keep
different lineages completely separate.
Fighting that idea means ending up with unnecessarily tangled history,
which renders administration more difficult and, more
importantly, “archeology” tools less useful because of the resulting
dilution. Also, as you mentioned, Git assumes that the “unit of
cloning” is the repository, and practically has to do so because of
its distributed nature.
One solution is to keep every project/package/etc. as its own bare
repository (i.e., without working tree) under a blessed hierarchy,
```
/repos/a.git
/repos/b.git
/repos/c.git
```
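Such a hierarchy can be created with git init --bare. A sketch, assuming this layout; the $REPOS prefix and the project names are illustrative (using a directory under $HOME avoids needing root for /repos):

```shell
#!/bin/sh
set -e
# Create a "blessed" hierarchy of bare repositories.
# $REPOS and the project names are illustrative.
REPOS=${REPOS:-$HOME/repos}
mkdir -p "$REPOS"
for name in a b c; do
    git init --quiet --bare "$REPOS/$name.git"
done
```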
Once a few conventions have been established, it becomes trivial to
apply administrative operations (backup, packing, web publishing) to
the complete hierarchy, which serves a role not entirely dissimilar to
“monolithic” SVN repositories. Working with these repositories also
becomes somewhat similar to SVN workflows, with the addition that one
can use local commits and branches:
```
svn checkout  -->  git clone
svn update    -->  git pull
svn commit    -->  git push
```
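The administrative operations over the whole hierarchy mentioned above are easy to script once the convention is in place. A sketch, assuming the layout shown earlier (the function takes the hierarchy and a backup destination as arguments; the commented-out example paths are illustrative):

```shell
#!/bin/sh
# Repack every bare repo under a hierarchy, then mirror the whole
# tree to a backup location. Both paths are passed in by the caller.
backup_repos() {
    repos=$1
    dest=$2
    mkdir -p "$dest"
    for repo in "$repos"/*.git; do
        [ -d "$repo" ] || continue
        git --git-dir="$repo" gc --quiet    # pack loose objects first
    done
    cp -a "$repos/." "$dest/"
}

# Example (illustrative paths):
# backup_repos /repos /media/backupdrive1/repos
```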
You can have multiple remotes in each working clone, for ease of
synchronizing between the multiple parties:
```
$ cd ~/dev
$ git clone /repos/foo.git    # or the one from github, ...
$ cd foo
$ git remote add github ...
$ git remote add memorystick ...
```
You can then fetch/pull from each of the “sources”, work and commit
locally, and then push (“backup”) to each of these remotes when you
are ready with something like (note how that pushes the same commits
and history to each of the remotes!):
```
$ for remote in origin github memorystick; do git push $remote; done
```
The easiest way to turn an existing working repository
into such a bare repository is probably:
```
$ cd ~/dev
$ git clone --bare foo /repos/foo.git
$ mv foo foo.old
$ git clone /repos/foo.git
```
which is mostly equivalent to an svn import, but does not throw the
existing “local” history away.
Note: submodules are a mechanism to include shared related
lineages, so I indeed wouldn’t consider them an appropriate tool for
the problem you are trying to solve.
I want to add to Damien’s answer where he recommends:
```
$ for remote in origin github memorystick; do git push $remote; done
```
You can set up a special remote that pushes to all the individual real remotes with one command; I found it at http://marc.info/?l=git&m=116231242118202&w=2:
So for “git push” (where it makes
sense to push the same branches
multiple times), you can actually do
what I do:
```
[remote "all"]
    url = master.kernel.org:/pub/scm/linux/kernel/git/torvalds/linux-2.6
    url = login.osdl.org:linux-2.6.git
```
Then git push all master will push the “master” branch to both
of those remote repositories.
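The same multi-URL remote can also be built from the command line instead of editing the config file by hand. A sketch in a scratch repository, reusing the URLs from the quoted config (substitute your own):

```shell
#!/bin/sh
set -e
# Build a multi-URL "all" remote with git remote add / set-url --add.
# Done in a throwaway repo; the URLs are from the quoted config.
tmp=$(mktemp -d) && cd "$tmp"
git init --quiet
git remote add all master.kernel.org:/pub/scm/linux/kernel/git/torvalds/linux-2.6
git remote set-url --add all login.osdl.org:linux-2.6.git
git remote get-url --all all    # lists both URLs; "git push all" hits each
```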
You can also save yourself typing the URLs twice by using the construction:
```
[url "<actual url base>"]
    insteadOf = <other url base>
```
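For example, with the following in ~/.gitconfig (the github: shorthand is an illustrative name):

```
[url "git@github.com:"]
    insteadOf = github:
```

git clone github:user/repo.git is then rewritten to git clone git@github.com:user/repo.git, so each URL base only has to be spelled out once.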
I haven’t tried nesting git repositories yet because I haven’t run into a situation where I need to. As I’ve read on the #git channel, git seems to get confused by nesting repositories, i.e. when you try to git-init inside a git repository. The only way to manage a nested git structure is to use either
git-submodule or Android’s repo tool.
As for the backup responsibility you’re describing, I say delegate it… For me, I usually put the “origin” repository for each project on a network drive at work that is backed up regularly by the IT techs using their backup strategy of choice. It is simple and I don’t have to worry about it. 😉
I also am curious about suggested ways to handle this, and will describe the current setup that I use (with SVN). I have basically created a repository that contains a mini-filesystem hierarchy, including its own bin and lib dirs. There is a script in the root of this tree that sets up your environment by adding these bin, lib, etc. dirs to the proper environment variables. So the root directory essentially looks like:
```
./bin/            # prepended to $PATH
./lib/            # prepended to $LD_LIBRARY_PATH
./lib/python/     # prepended to $PYTHONPATH
./setup_env.bash  # sets up the environment
```
Now inside ./bin and ./lib there are the multiple projects and their corresponding libraries. I know this isn’t a standard project layout, but it is very easy for someone else in my group to check out the repo, run the setup_env.bash script, and have the most up-to-date versions of all of the projects locally in their checkout. They don’t have to worry about installing/updating /usr/bin or /usr/lib, and it keeps it simple to have multiple checkouts and a very localized environment per checkout. Someone can also just rm the entire repository and not worry about uninstalling any programs.
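A minimal sketch of what such a setup_env script can look like, assuming the layout above (meant to be sourced, not executed; this simplified version assumes it is sourced from the checkout root, whereas a real one would usually derive the root from the script’s own path):

```shell
#!/bin/sh
# setup_env: prepend the checkout's bin/ and lib/ dirs to the
# relevant search paths. Assumes it is sourced from the tree root.
root=$(pwd)
PATH="$root/bin:$PATH"
LD_LIBRARY_PATH="$root/lib:${LD_LIBRARY_PATH:-}"
PYTHONPATH="$root/lib/python:${PYTHONPATH:-}"
export PATH LD_LIBRARY_PATH PYTHONPATH
```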
This is working fine for us, and I’m not sure if we’ll change it. The problem with this is that there are many projects in this one big repository. Is there a git/Hg/bzr standard way of creating an environment like this and breaking out the projects into their own repositories?
What about using mr for managing your multiple Git repos at once:
The mr(1) command can checkout, update, or perform other actions on a
set of repositories as if they were one combined repository. It
supports any combination of subversion, git, cvs, mercurial, bzr,
darcs, vcsh, fossil and veracity repositories, and support for
other revision control systems can easily be added. […]
It is extremely configurable via simple shell scripting. Some examples
of things it can do include:
- When updating a git repository, pull from two different upstreams and merge the two together.
- Run several repository updates in parallel, greatly speeding up the update process.
- Remember actions that failed due to a laptop being offline, so they can be retried when it comes back online.
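With mr, the set of repositories is described in an ~/.mrconfig file; a minimal sketch (the paths and URLs are placeholders):

```
[code/proj1]
checkout = git clone 'git@github.com:you/proj1.git' 'proj1'

[code/proj2]
checkout = git clone 'git@github.com:you/proj2.git' 'proj2'
```

mr checkout then clones anything that is missing, and mr update (or mr -j 4 update for four parallel jobs) pulls every repository in one go.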
There is another method for having nested git repos, but it doesn’t solve the problem you’re after. Still, for others who are looking for what I was:
In the top level git repo just hide the folder in .gitignore containing the nested git repo. This makes it easy to have two separate (but nested!) git repos.
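A quick sketch of that setup, done in a scratch directory (the repo names are illustrative):

```shell
#!/bin/sh
set -e
# Nest one git repo inside another, and hide the inner one from
# the outer repo via .gitignore.
tmp=$(mktemp -d) && cd "$tmp"
git init --quiet outer
cd outer
git init --quiet inner          # independent nested repository
echo '/inner/' >> .gitignore    # outer repo now ignores it entirely
git status --porcelain          # inner/ no longer shows up as untracked
```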