On Sat, 25 Jul 2009 13:10:41 -0400 Simon wrote:

> I have tried using git in the past and found that it doesn't work in
> my 'space constrained' scenario. The need for a repository is a
> problem. The use of the usbkey however is nice, since it allows git
> to work without having each computer maintain its own repository...
> but still... I don't currently have a usbkey that's large enough to
> hold all my data, and even if I could compress it I doubt it would
> fit.
>
> Another thing is, I wonder if it retains the attributes of the file
> (creation date, mod date, owner/group, permissions)? As this can be
> important for some aspects of my synchronisation needs.

Vanilla git doesn't, apart from the executable bit.
Due to the highly modular structure of git, one can easily implement
this as a wrapper or replacement binary at some level, storing metadata
in some form (a plain list, a mirror tree, or just alongside each file)
when pushing changes to the repo, and applying it on each pull.
Then there are also git hooks, which should be a better way than a
wrapper in theory, but I found them much harder to use in practice.
(A sketch of the hook approach follows below.)

> Still, git is a very good solution that works incrementally in a
> differential manner (makes patches from previous versions). But when
> I tried it, I found that to suit my needs it would require
> programming a big wrapper to interface git, making some daily quick
> actions simpler than a few git commands.

That's another advantage of a wrapper, but note that git commands
themselves can be quite extensible via aliases, configurable in
gitconfig at any level (repo, home, system-wide):

  [alias]
    ci = commit -a
    co = checkout
    st = status -a
    br = branch
    ru = remote update
    ui = update-index --refresh
    cp = cherry-pick

Still, things such as "git ui && git cp X" are quite common, so a
wrapper, or at least a set of shell aliases, is quite handy (example
below).

>> I apologize if the existence of a bare repo as an intermediary is a
>> problem. This can be done on a server as well.
>
> It is... it makes all my computers dependent on that repo... syncing
> computers at home can be done alright, but will still require walking
> around plugging/unplugging. That makes it practically impossible to
> do over the network (or to sync my host on the internet; not all my
> PCs are connected to the internet, so the repo can't just be on the
> server - I would have to maintain several repositories to work this
> out...). It may be possible to adapt it to my scenario, but I think
> it will require a lot of design in advance... but I'll check it
> out... at worst it will convince me I should program my own, better
> it will give me some good ideas or fortify some of my own good ideas,
> and at best it will be the thing I've been looking for!

Why keep a bare repo at all? That's certainly not a prerequisite with a
distributed VCS like git.
You can fetch / merge / rebase / cherry-pick commits with git via ssh
just as easily as with rsync, using some intermediate media only if the
machines aren't connected at all - but then there's just no way around
that.
And even there, knowing the approximate date of the last sync, you can
use commands like git-bundle to create a single pack of new objects,
which the remote(s) can easily import, transferring it via any
applicable method or protocol between / to any number of hosts.

As you've noted already, git is quite efficient when it comes to
storage, keeping only the changes.
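For instance (hostnames and paths here are made up, of course):

  # direct sync over ssh, no bare repo involved:
  git remote add desktop ssh://user@desktop/home/user/stuff
  git fetch desktop && git merge desktop/master

  # offline machine - pack everything new since the last sync
  # (assuming it happened within a week) onto whatever media:
  git bundle create /mnt/usbkey/sync.bundle --since=1.week master
  # then, on the other machine:
  git fetch /mnt/usbkey/sync.bundle master:refs/heads/desktop
  git merge desktop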
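Shell aliases here being something like this (name is arbitrary):

  alias gu='git ui && git cp'

...which turns the sequence above into just "gu X".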
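And the metadata sketch promised above - two hooks along these lines
might do. A rough sketch only: GNU stat, naive whitespace format, and
mtime alone (creation time isn't really accessible on linux anyway);
the 'metastore' tool does roughly the same thing, more carefully:

  #!/bin/sh
  # .git/hooks/pre-commit (chmod +x) - record uid:gid, mode, mtime
  git ls-files | while read f; do
      stat -c '%u:%g %a %Y %n' "$f"
  done >.metadata
  git add .metadata

  #!/bin/sh
  # .git/hooks/post-merge (and/or post-checkout) - apply it back
  # (chown will obviously need root)
  while read owner mode mtime f; do
      chown "$owner" "$f"
      chmod "$mode" "$f"
      touch -d "@$mtime" "$f"
  done <.metadata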
When storage does become a problem, due to a long history of
long-obsoleted changes, you can drop them all, effectively 'squashing'
all the commits in one of the repos and rebasing the rest against it
(sketch in the P.S. below). So that should cover requirement one.

Cherry-picking commits or checking out individual files / dirs on top
of any base from any other repo / revision is pretty much what is
stated in the next three requirements (also in the P.S.).
One gotcha here is that you should get used to making individual
commits consistent and atomic, so that each set of changes serves one
purpose and you never end up needing "half of a commit" anywhere.

Conflict resolution is what you get with merge / rebase (just look at
the fine git-merge manpage), but due to the absence of an "ultimate
AI", these are better used repeatedly against the same tree - which is
what git's rerere can help with (P.S. again).

About the last point of the original post... I don't think git is
"intuitive" until you understand exactly how it works - that's when it
becomes one, with all the high-level and intermediate interfaces having
great manpages and a sole, clear purpose.

That said, I don't think git is the best way to sync everything.
I don't mix binary files with configuration, because the latter alone
suffices with gentoo: you have a git-synced portage tree (emerge can
sync via VCS out of the box), a git-controlled overlay on top of it,
and you pull the world / sets / flags / etc changes... then just run
emerge and you're set, without having to worry about architectural
incompatibilities of binaries or missing misc libs against which
they're linked here and there.
That's what portage is made for, after all.
And just think of the tremendous space efficiency here - no binaries
are backed up at all, and all you need to restore a 3G root from a 2M
pack is "git clone (or receive-pack) && emerge -uDN @world" ;)

--
Mike Kazantsev // fraggod.net
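P.S. The sketches promised above. Squashing the whole history into a
single baseline commit can be done with plumbing like this (commit
message and such are arbitrary):

  # single parent-less commit, holding the current tree:
  root=$(echo 'squashed baseline' |
      git commit-tree "$(git rev-parse 'HEAD^{tree}')")
  git reset --hard "$root"
  # expire reflogs, then actually drop the old objects:
  git reflog expire --expire=now --all
  git gc --aggressive --prune=now

Other repos can then rebase whatever extra commits they have onto that
root (see git-rebase --onto), or just re-clone.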
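Picking individual changes or paths (revisions and paths are made up):

  # one commit from another repo:
  git fetch desktop && git cherry-pick desktop/master~2
  # individual files / dirs from any revision, on top of current tree:
  git checkout desktop/master -- etc/X11/
  git checkout HEAD~5 -- .zshrc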
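And for the repeated resolution, rerere ("reuse recorded resolution")
records how you resolve each conflict and replays that whenever the
same hunks conflict again:

  git config rerere.enabled true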