> Large object promisors are special Git remotes that only house large files.
I like this approach. If I could configure my repos to use something like S3, I would switch away from using LFS. S3 seems like a really good fit for large blobs in a VCS. The intelligent tiering feature can move data into colder tiers of storage as history naturally accumulates and old things are forgotten. I wouldn't mind a historical checkout taking half a day (i.e., restored from a robotic tape library) if I am pulling in stuff from a decade ago.
TFA asserts that Git LFS is bad for several reasons, including that it's proprietary with vendor lock-in, which I don't think is a fair claim. GitHub provided an open client and server, which negates that.
LFS does break disconnected/offline/sneakernet operations which wasn't mentioned and is not awesome, but those are niche workflows. It sounds like that would also be broken with promisors.
The `git partial clone` examples are cool!
The description of Large Object Promisors makes it sound like they take the client-side complexity in LFS, move it server-side, and then increase the complexity? Instead of the client uploading to a git server and to an LFS server, it uploads to a git server, which in turn uploads to an object store, but the client will download directly from the object store? Obviously different tradeoffs there. I'm curious how often people will get bit by uploading to public git servers which upload to hidden promisor remotes.
LFS is bad. The server implementations suck. It conflates object contents with the storage method. It's opt-in, in a terrible way - if you do the obvious thing you get tiny text files instead of the files you actually want.
I dunno if their solution is any better but it's fairly unarguable that LFS is bad.
It does seem like this proposal has exactly the same issue. Unless this new method blocks cloning when unable to access the promisors, you'll end up with similar problems of broken large files.
Git LFS didn't work with SSH; you had to get an SSL cert, which GitHub knew was a barrier for people self-hosting at home. I think GitLab finally got it patched for SSH, though.
While git LFS is just a kludge for now, writing a filter argument during the clone operation is not the long-term solution either.
Git clone is the very first command most people will run when learning how to use git. Emphasized for effect: the very first command.
Will they remember to write the filter? Maybe, if the tutorial to the cool codebase they're trying to access mentions it. Maybe not. What happens if they don't? It may take a long time without any obvious indication. And if they do? The cloned repo might not be compilable/usable since the blobs are missing.
Say they do get it right. Will they understand it? Most likely not. We are exposing the inner workings of git on the very first command they learn. What's a blob? Why do I need to filter on it? Where are blobs stored? It's classic abstraction leakage.
This is a solved problem: Rsync does it. Just port the bloody implementation over. It does mean supporting alternative representations or moving away from blobs altogether, which git maintainers seem unwilling to do.
I totally agree. This follows a long tradition of Git "fixing" things by adding a flag that 99% of users won't ever discover. They never fix the defaults.
And yes, you can fix defaults without breaking backwards compatibility.
So what was the second time the stopped watch was right?
I agree with GP. The git community is very fond of doing checkbox fixes for team problems that aren’t or can’t be set as defaults and so require constant user intervention to work. See also some of the sparse checkout systems and adding notes to commits after the fact. They only work if you turn every pull and push into a flurry of activity. Which means they will never work from your IDE. Those are non fixes that pollute the space for actual fixes.
Can you explain what the solution is? I don't mean the details of the rsync algorithm, but rather what it would look like from the users' perspective. What files are on your local filesystem when you do a "git clone"?
When you do a shallow clone, no files would be present. However, when doing a full clone you’ll get a full copy of each version of each blob, and what is being suggested is to treat each revision as an rsync operation upon the last. And the more times you muck with a file, which can happen a lot both with assets and if you check in your deps to get exact snapshotting of code, that’s a lot of big file churn.
Would it be incorrect to say that most of the bloat relates to historical revisions? If so, maybe an rsync-like behavior starting with the most current version of the files would be the best starting point. (Which is all most people will need anyhow.)
> Would it be incorrect to say that most of the bloat relates to historical revisions?
Based on my experience (YMMV), I think it is incorrect, yes, because any time I've performed a shallow clone of a repository, the saving wasn't as much as one would intuitively imagine (in other words: history is stored very efficiently).
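One way to sanity-check that on a repository you care about (with `<url>` as a placeholder):

```sh
# Clone the same repository twice: full history vs. a single commit
git clone <url> full-copy
git clone --depth=1 <url> shallow-copy

# Compare what the history actually costs on disk
du -sh full-copy/.git shallow-copy/.git
```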
It is a solution. The fact that beginners might not understand it doesn't really matter; solutions need not perish on that alone. Clone is a command people usually run once while setting up a repository. Maybe the case could be made that this behavior should be the default and that full clones should be opt-in, but that's a separate issue.
So this filter argument will reduce the repo size when cloning, but how will one reduce the repo size after a long stint of local commits of changing binary assets? Delete the repo and clone again?
It's really not clear which behaviour you want though. For example when you do lots of bisects you probably want to keep everything downloaded locally. If you're just working on new things, you may want to prune the old blobs. This information only exists in your head though.
yeah, this isn't really solving the problem. It's just punting it. While I welcome a short-circuit filter, I see dragons ahead. Dependencies. Assets. Models... won't benefit at all as these repos need the large files - hence why there are large files.
There seems to be a misunderstanding around this thread. The --filter option merely doesn't populate content in the .git directory that is not required for the checkout. If there is a large file that is needed for the current checkout (i.e., the parts not in the .git folder), it will be fetched regardless of the filter option.
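A minimal sketch of that behaviour, assuming the server supports partial clone and with `<url>` as a placeholder:

```sh
# Blobless clone: commits and trees come down, file contents do not
git clone --filter=blob:none <url> repo

# The working tree is still complete: blobs needed for the current
# checkout are fetched during the clone regardless of the filter.
# Commands that need historical contents (e.g. git log -p) fetch the
# missing blobs lazily over the network.

# Alternatively, only omit blobs above a size threshold
git clone --filter=blob:limit=1m <url> repo-small
```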
I'm really happy to see large file support in Git core. Any external solution would have similar opt-in procedures. I really wanted it to work seamlessly with as few extra commands as possible, so the API was constrained to the smudge and clean filters in the '.gitattributes' file.
Though I did work hard to remove any vendor lock-in by working directly with Atlassian and Microsoft pretty early in the process. It was a great working relationship, with a lot of help from Atlassian in particular on the file locking API. LFS shipped open source with compatible support in 3 separate git hosts.
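For readers who haven't seen the mechanism being described, the opt-in looks roughly like this (the tracked pattern and file names are just examples):

```sh
# One-time setup: wires the LFS clean/smudge filters into git config
git lfs install

# Track a pattern; this writes a line like
#   *.psd filter=lfs diff=lfs merge=lfs -text
# into .gitattributes
git lfs track "*.psd"
git add .gitattributes

# Matching files are committed as small pointer files; the real
# contents are uploaded to the LFS endpoint on push
git add design.psd
git commit -m "Add design asset"
git push
```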
We had a repo that was at one point 25GB. It had Git LFS turned on but the files weren’t stored outside of BitBucket. Whenever a build was run in Bamboo, it would choke big time.
We found that we could move the large files to Artifactory as it has Git LFS support.
But the problem was the entire history that did not have Artifactory pointers. Every clone included the large files (for some reason the filter functionality wouldn’t work for us - it was a large repo and it had hundreds of users, amongst other problems)
Anyways what we ended up doing was closing that repo and opening a new one with the large files stripped.
Nitpick on the author’s page:
“Nowadays, there’s a free tier, but you’re dependent on the whims of GitHub to set pricing. Today, a 50GB repo on GitHub will cost $40/year for storage”
This is not true, as you don’t need GitHub to get LFS support.
I was just using git LFS and was very concerned with how bad the help message was compared to the rest of git. I know it seems small, but it just never felt like a team player, and now I'm very happy to hear this.
I'm just dipping my toe into Data Version Control - DVC. It is aimed towards data science and large digital asset management using configurable storage sources under a git meta layer. The goal is separation of concerns: git is used for versioning and the storage layers are dumb storage.
Does anyone have feedback about personally using DVC vs LFS?
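For comparison, the basic DVC flow looks roughly like this (the remote URL and file paths are placeholders):

```sh
# Initialise DVC alongside git and point it at dumb storage
dvc init
dvc remote add -d storage s3://my-bucket/dvc-cache

# Track a large file: DVC writes a small .dvc pointer file and tells
# git to ignore the data itself
dvc add data/model.bin
git add data/model.bin.dvc data/.gitignore
git commit -m "Track model with DVC"

# Upload the file contents to the configured remote
dvc push
```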
Is Git ever going to get proper support for binary files?
I’ve never used it for anything serious but my understanding is that Mercurial handles binary files better? Like it supports binary diffs if I understand correctly.
I would think that git would need a parallel storage scheme for binaries. Something that does binary chunking and deduplication between revisions, but keeps the same merkle referencing scheme as everything else.
Are there many binaries that people would store in git where this would actually help? I assume most files end up with compression or some other form of randomization between revisions making deduplication futile.
I don't know, it's all probability in the dataset that makes one optimization strategy better over another. Git annex iirc does file level dedupe. That would take care of most of the problem if you're storing binaries that are compressed or encrypted. It's a lot of work to go beyond that, and probably one reason no one has bothered with git yet. But borg and restic both do chunked dedupe I think.
What I would love to see in an SCM that properly supports large binary blobs is storing the contents using Prolly trees instead of a simple SHA hash.
Prolly trees are very similar to Merkle trees or the rsync algorithm, but they support mutation and version history retention with some nice properties. For example: you always obtain exactly the same tree (with the same root hash) irrespective of the order of incremental edit operations used to get to the same state.
In other words, two users could edit a subset of a 1 TB file, both could merge their edits, and both will then agree on the root hash without having to re-hash or even download the entire file!
Another major advantage on modern many-core CPUs is that Prolly trees can be constructed in parallel instead of having to be streamed sequentially on one thread.
Then the really big brained move is to store the entire SCM repo as a single Prolly tree for efficient incremental downloads, merges, or whatever. I.e.: a repo fork could share storage with the original not just up to the point-in-time of the fork, but all future changes too.
Git has had a good run. Maybe it’s time for a new system built by someone who learned about DX early in their career, instead of via their own bug database.
If there’s a new algorithm out there that warrants a look…
As it should be! If it's not native to git, it's not worth using. I'm glad these issues are finally being solved.
These new features are pretty awesome too. Especially separate large object remotes. They will probably enable git to be used for even more things than it's already being used for. They will enable new ways to work with git.
May I humbly suggest that those files probably belong in an LFS submodule called "assets" or "vendor"?
Then you can clone without checking out all the unnecessary large files to get a working build. This also helps on the legal side to correctly license your repos.
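A minimal sketch of that layout (repository URLs and the submodule name are hypothetical):

```sh
# In the main repository: keep the large files in a separate repo,
# which can itself use LFS or a large-object remote
git submodule add <assets-repo-url> assets
git commit -m "Add assets submodule"

# A plain clone leaves the submodule directory empty
git clone <main-repo-url> project
cd project

# Only people who actually need the assets opt in
git submodule update --init assets
```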
I'm struggling to see how this is a problem with git and not just antipatterns that arise from badly organized projects.
The user shouldn't have to think about such a thing. Version control should handle everything automatically and not force the user into doing extra work to workaround issues.
I always hated the “write your code like the next maintainer is a psychopath” mantra because it makes the goal unclear. I prefer the following:
Write your code/tools as if they will be used at 2:00 am while the server room is on fire. Because sooner or later they will be.
A lot of our processes are used like emergency procedures. Emergency procedures are meant to be brainless as much as possible. So you can reserve the rest of your capacity for the actual problem. My version essentially calls out Kernighan’s Law.
Organizing your files sensibly is not necessary to use LFS nor is it a "workaround". It's just a pattern I am suggesting to make life easier regardless of what tools you decide to use. I can't think of a case where organizing your project to fail gracefully is a bad idea.
Git does the responsible thing and lets the user determine how to proceed with the mess they've made.
I must say I'm increasingly suspicious of the hate that git receives these days.
> A good version control system would support petabyte scale history and terabyte scale clones via sparse virtual filesystem.
I like this idea in principle, but I always wonder what that would look like in practice, outside a FAANG company: How do you ensure the virtual file system works equally well on all platforms, without root access, possibly even inside containers? How do you ensure it's fast? What do you do in case of network errors?
Yeah we're at the CVS stage where everyone uses it because everyone uses it.
But most people don't need most of its features and many people need features it doesn't have.
If you look up git worktrees, you'll find a lot of blog articles referring to worktrees as a "secret weapon" or similar. So git's secret weapon is a mode that lets you work around the ugliness of branches. This suggests that many people would be better suited by an SCM that isn't branch-based.
It's nice having the full history offline. But the scaling problems force people to adopt a workflow where they have a large number of small git repos instead of keeping the history of related things together. I think there are better designs out there for the typical open source project.
Completely disagree. Git is fundamentally functional and good. All projects are local and decentralized, and any "centralization" is in fact just git hosting services, of which there are many options which are not even mutually exclusive.
I agree! They are excellent git backup services. I use several of them: github, codeberg, gitlab, sourcehut. I can easily set up remotes to push to all of them at once. I also have copies of my important repositories on all my personal computers, including my phone.
This is only possible because git is decentralized. Claiming that git is centralized is a complete falsehood.
> They never fix the defaults.

Not strictly true. They did change the default push behaviour from "matching" to "simple" in Git 2.0.
Only the histories of the blobs are filtered out.
> if I git clone a repo with many revisions of a noisome 25 MB PNG file
FYI ‘noisome’ is not a synonym for ‘noisy’ - it’s more of a synonym for ‘noxious’; it means something smells bad.
> Like it supports binary diffs if I understand correctly.

Any reason Git couldn’t get that?
Nice to see some Microsoft and Google emails contributing.
A good version control system would support petabyte scale history and terabyte scale clones via sparse virtual filesystem.
Git’s design is just bad for almost all projects that aren’t Linux.
Re network errors. How many things break when GitHub is down? Quite a lot! This isn’t particularly special. Prefetch and clone are the same operation.
Tom Lyon: NFS Must Die! From NLUUG 2024:
https://www.youtube.com/watch?v=ZVF_djcccKc
> Why NFS must die, and how to get Beyond File Sharing in the cloud.
Slides:
https://nluug.nl/bestanden/presentaties/2024-05-21-tom-lyon-...
Eminent Sun alumnus says NFS must die:
https://blocksandfiles.com/2024/06/17/eminent-sun-alumnus-sa...