> If your data is stored in a database that a company can freely read and access (i.e. not end-to-end encrypted), the company will eventually update their ToS so they can use your data for AI training — the incentives are too strong to resist
The “do it first, apologize later” will be the general principle with anything. It’s going to be hard and futile to prove even if they don’t do it through ToS first. Amazon has one of the largest corporate training sets out there:)
Yes I think you are right. Even a super ethical company can be taken over. There may be exceptions but it is more luck. I work for a SP500 that absolutely won't dont this and locks down prod access so a rogue staff can't do it. But if Larry or Zuck or Bezos buys them out, who knows.
I'd rather the symbol be there and occasionally see this discussion happen then the symbol be omitted and occasionally have the discussion where we try and figure out if the person was serious. When talking in person there are all sorts of visual and vocal cues and the speaker has cues in response to confirm the sarcasm was received. There are two parties that can correct that misunderstanding and have well established tools to do so.
/s is basically the internet-enabled equivalent of a sarcasm tone or a wink - it is much more difficult to detect genuine subtle sarcasm on the internet because of the absence of common communication tools. /s is also a valuable accessibility tool for those that might have difficulty with social cues and subtlety so, for all my autistic friends, I'm happy to defend it.
I'm sure had you omitted it - instead of that reply there would have been a series of comments talking about how Microsoft actually has a track record of doing things like this. It's impossible to please everyone on the internet but I very much appreciate when people lean towards making their communication clearer.
I’m still concerned about MS using the code I write on my laptop to train AI. Tinfoil hat wearing Linux users are starting to make a lot of sense to me.
It's been interesting the past year or so watching myself turn more and more into one of the tin-foil wearing linux users. I'm not sure how it happened, but self-hosting became more and more alluring and hyperfocusing on taking as much data as I can offline became worth spending entire weekends on.
I thought that’s more what the CoPilot change is really about - not your repo, but all the code CoPilot read while it is offering helpful completions, etc - so literally the code on your laptop. I cancelled my account.
Back in 2003 he was advocating for legalization of child sexual abuse material. In 2006 he said he was skeptical of the harm caused by “voluntary pedophilia”, a statement that presupposes that children can consent to sex with adults.
About communication with other humans he’s pretty much always wrong.
Imagine we’d had a better communicator who wasn’t a gross toe nail picking troll fronting free software? It shouldn’t matter. Only the ideas should matter . But the reality is different.
Mass scale internet censorship in Russia also started with the premise of "protecting the children"
When you put in law that ISPs should adhere to some government-provided blocklist, this is already a game over. No matter how sane your government is. The government in 10 years might be vastly different, and the ability to control the ISPs is too alluring to not abuse
I'd rather live in a world where you could find words like "kill all russians", or child porn, or blatant propaganda than to live with the government censorship. I lived in Russia and the experience was nightmare. Who knows, maybe if the government didn't have the tools they had then the independent media would still be reachable by an average russian, the pictures of the pointless massacre would be public and the war would be over in a week
Thank you for your service. We really need more "canaries in the mine" giving out early warnings of things that might not be evident on a first glance.
Any takes on what 2029 will look like? (related to this topic, ofc)
Pro tip: You could instead spend that money to spin up a forgejo instance for as little as $2 a month https://www.pikapods.com/apps#development (not affiliated, just a happy customer)
I did exactly that. Containerized it and Forgejo simply became a small instance part of the fleet. UI is much snappier then GitHub. And more importantly: zero outages.
An enterprise licence won't save you, Google, Microsoft, et al have happily been breaking copyright laws for years.
If the publishing industry can't win a case against the AI firms then you don't stand a chance when you finally find out they've been training on your private data the whole time.
They can tell you one thing and do the opposite and there's effectively nothing you can do about it. You'd be a fool to trust them.
Or, they don't train on it, but who's to say they're not harvesting analytics which may or may or not code samples, prompt data, etc. Which are then laundered through some sort of anonymization pipeline, to the point where they can argue that it no longer qualifies as your data, and can be freely trained upon.
Conspiratorial thinking? Sure. But if you've been around for a couple decades and seen the games these people play (and you aren't a complete sucker), then you'll at least be aware that there's at least slight possibility that these companies can get things from their customers that they (the customers) did not knowingly agree to.
Nothing conspirational about it. Getting data that their users or customers don't actually intend to give is the bread and butter of these companies. And they will do what they can to get it.
Github's enterprise version "starts at" $21.99/seat, and requires you to "contact sales".
And I don't see any mention that that exempts you from being trained on. (Yes, the blog says you're still covered, but at that price I'd like to see a contract saying that)
For users of Free, Pro and Pro+ Copilot, if you don’t opt out then we will start collecting usage data of Copilot for use in model training.
If you are a subscriber for Business or Pro we do not train on usage.
The blog post covers more details but we do not train on private repo data at rest, just interaction data with Copilot. If you don’t use Copilot this will not affect you. However you can still opt out now if you wish and that preference will be retained if you decide to start using Copilot in the future.
Hey Martin, can you please work with Product to significantly clarify what is meant by the following language in the settings? Because right now it's nearly impossible for a layperson (or even an average programmer) to understand what this means:
"""
Allow GitHub to use my data for AI model training
Allow GitHub to collect and use my Inputs, Outputs, and associated context to train and improve AI models. Read more in the Privacy Statement.
"""
If the reality is less scary than how it sounds, then the wording needs to be less scary-sounding. It may be that GitHub isn't training models on private repos, but the language certainly suggests that it is.
Finally, I read the Privacy Statement, and it's unclear what the applicable language is. "Inputs," "Outputs," and "Associated Context" are terms of art that have no matching definitions in the Statement. (The terms "Outputs" and "Associated Context" don't even appear in the Statement at all.)
First response: It doesn't matter if I use copilot right now. It matters if I will ever use copilot in the future. Opting-out is future-focused. What if I said "no, I don't use copilot, so I don't need to opt out", then a year from now start using copilot, completely forgetting about this whole debacle? That's the evil of opt-out. My inaction only benefits them, never me.
Second response: Maybe? I press the little button to auto-generate commit titles and messages that showed up in my Github Desktop. Does that count?
I'm asking sincerely. I don't "use Copilot" as in using it in VS Code or while writing code, so I'm honestly not sure if I am.
Do we get a choice? I did not ever explicitly enable it yet GitHub's web UI by default uses copilot to autofill my web-based edit commit messages. It also shows up on the home screen by default now.
I'm pretty sure if you use the site you're using GitHub Copilot in some way, so your question becomes irrelevant.
> interaction data—specifically inputs, outputs, code snippets, and associated context [...] will be used to train and improve our AI models
So using Copilot in a private repo, where lots of that repo will be used as context for Copilot, means GitHub will be using your private repo as training data when they were not before.
No it isn't. Most people don't use Copilot, so this term change won't effect most people. You can reasonably be unhappy about it anyways (or unreasonably still be using Copilot in 2026), but it's still ultra-useful information for them to add to the discussion.
Next step they'll rebrand search as "Copilot Search" or auto enable pull-request AI reviews (unless you hear about it and turn each off) and we'll all be "users".
So? This feature is available to everyone and you have zero idea how many people actually use it.
If I go to one of your GPL projects and I ask a simple question to find out what this project is about, you will be perfectly "ok" that this interaction (that includes most of the code that is required to answer my dumb the question) will be used for training?
They "gift you" a free standard plan if you have above a certain (non-transparent) level of stars, I don't think you can even disable your "subscription" if you get it for free.
So why do any of this at all? You're putting a large part of your customer base on edge in order to improve a service that "most people don't use." The erosion of trust this brings doesn't seem like a worthwhile or prudent sacrifice.
Isn't this pretty standard, using your interaction data for training and making it opt-out? Claude Code, Codex, Antigravity etc. all do the same. Private repo doesn't make a difference as they have a local copy to work from.
I think you're well aware that people aren't upset at the distinction between training on Copilot data versus training on private repo data (at rest). People are upset because GH is using an opt-out model.
The initial title and your reply are both too broad to be fully accurate. By April 24th Github will train on private repos (assuming a flag isn't set) but this change is limited to just non-Business/Pro users. So a number of private repos will be effected but it won't automatically affect all private repos (so my panic check on our corporate account wasn't necessary yet).
I am not certain if you're a spokesperson for github - but it's good to be careful in your language. Instead of "No we won't" a lead like "That isn't entirely accurate" would be more suitable. In the end both the original post title and your reply have ended up being misleading.
> By April 24th Github will train on private repos
This statement itself is misleading. Also, GitHub probably should have seen this coming.
They are not doing what I initially thought, which is slurping up your private repo, wholesale, into its training set. You don't have to opt out of anything to prevent that.
They are slurping any context and input containing code from your private repo which is provided to them as part of using Copilot.
So, in addition to the opt-out setting, there is an even easier way to avoid providing them your private repository data to train AI models, and that's by continuing to not use Copilot.
Thats still pretty bad. Its no longer private if all your code goes through LLM training set and is resurfable to everyone publicly.
Why would I ever use copilot on any code Id want to be kept private? Labling it a private repo and having a tiny clause in the TOS saying we can take your code and show it to everybody is just an upright lie
I mean, you shouldn't send data to any SaaS LLM for code you want to be private, unless you have had them sign some sort of contract saying they will not train on your use. In fact, it is probably never a good idea to send anything you want to be private off premises unencrypted.
Every Git commit is likely to contain personal data, in the form of the author’s name and email address usually present in a commit’s metadata. Furthermore, unless GitHub is prohibiting users from submitting personal data via their ToS (which, given the above, would be impractical), the only thing that matters is whether the data in fact contains personal data or not. GitHub cannot just assume that it doesn’t. And processing that data for new purposes requires user consent.
For example, license files often contain names and many package managers require a contact person.
When this goes to court, GitHub will probably make the excuse that they somehow did not know that people upload personal data, but the fact that this happens so often that they had to make a secret scanner to stop people from uploading their private keys will prove them as liars.
Yes, you will. This is what the setting says on my account when I clicked the link:
> model training
> Allow GitHub to collect and use my Inputs, Outputs, and associated context to train and improve AI models. Read more in the Privacy Statement
Are you seriously trying to claim that the code isn't input, output, or associated context of Copilot operating on a private repo? What term do you think better applies to the code that's being read as input, used as context, and potentially produced as output?
I don't like that they are training on any interactions with Copilot by default but training on something that you've put through Copilot yourself is much different than them just shoving all the private repos currently on Github into the training data.
I don't use Copilot, and I don't have anything I particularly care about in private repos on my account on Github. My reaction here is entirely based on principles, not how I'm going to be personally affected.
If Copilot later adds a feature like "Scan your repo for vulnerabilities using Copilot <opt-out>", then that would both fit your criteria, and the baiting outrage of the original poster, in one swoop! Of course, Microsoft would _never_ do that, right?
How do you handle accounts that have copilot managed by an organisation? I've seen several cases where people cannot opt out their account because of the org connection (the option just isn't there in the settings). What happens to their account the moment they leave that org?
Right, but it shouldn't be opt-out only to begin with. It's a dishonest pattern that relies on people not noticing. Honest use of data is a "Caesar's wife must be above suspicion" moment for me -- if this is how you're acting when engaging with customers explicitly, I don't trust you to resist the temptation to tap into my data privately. AI companies already have trained their models illegally against the intellectual property of all of humanity with little consent along the way.
Honestly, if you work at GitHub, maybe you should focus on your uptime -- it's awful.
Say someone has a very sensitive secret (say, a Bitcoin private key) in their free private Github repo, and uses Copilot on that repo and touches the secret with it. Would you be willing to assure here that toggling that setting would not affect the likelihood of that secret leaking, and that that likelihood is also unaffected by whether the account is Business or Free?
I think the problem is more with using PRIVATE repos. My letters are also private and I would be pretty pissed if the mail carrier was reading them. Why does GitHub think it has the right to do this?
This affects anyone using VS Code or Copilot with proprietary data, including all the users automating workflows through the Copilot SDK and the like. A perfect storm.
Did anyone from GitHub's legal team actually authorise this, or did they use Copilot to sign off on it?
Under GDPR, opt-out is not considered informed consent, and repositories can contain personally identifiable information, which fall under GDPR. Do you think differently, or do you think ignoring the law will be worth it?
This is a distinction without a difference, according to the text of that enable/disable dialog,
> Allow GitHub to use my data for AI model training: Allow GitHub to collect and use my Inputs, Outputs, and associated context to train and improve AI models. Read more in the Privacy Statement.
“Associated Context” is the repo. If I use copilot, I’m giving it access to my repo.
I don’t know in all the ways copilot can be triggered, and I’m not certain that I could stop it from being triggered, given Microsoft’s past behaviors in slapping Copilot on everything that exists.
Yes you do? If a user uses any form of copilot in one of his repos except ofc enterprise, says so right in the blog post. These aktshually corporate technicality defense posts aren’t helping, they just end up making you personally look a bit fishy.
What a wildly disingenuous take. Speaking earnestly from one human to another: your behavior and work is shameful, and you should feel embarrassed by your actions, Martin.
You’re laundering the code of users who don’t opt-in through Copilot users who do, to read in as many LoC as possible. It’s clear as day to everyone not morally bankrupt.
This headline is false; it will not go take your private repos and dump them into a training dataset. Rather, GitHub will train on your copilot interactions with your private repos. If you do not use copilot, this makes no difference to you, though you should probably still turn it off.
Then GitHub will train on their inputs, which includes your code.
Doesn’t seem to leave non-enterprise projects with much choice but to ban contributors from using copilot (to whatever extent they can - company policy, etc.)
That's also my read of the flag. But if they can train co pilot on input, I don't see what prevents them from training copilot on the code itself. In a court case they would simply say the opt in meant we can train from input. That's all we did.
To be fair, they display it reasonable prominently in GitHub when you are logged in. Given that, I feel the post title fall under the click bait category. I was fully aware of the Co-pilot opt-out change, but still clicked due the phrasing of the title.
I think this kind of nuance is useless or even harmful. That might be how it is now but they'll change it when you're not looking.
You see coders have this reasoning flaw where they go "Oh I've understood the system, now I can work out all the ramifications of my actions", and then they get tricked at every step of the life.
To be precise: the opt-out is for GitHub Copilot training specifically, which has always required opt-in for public repos under their policy. The change Apr 24 is about private repos being included by default unless you opt out. If you're using Copilot in your private repos, definitely opt out unless you're comfortable with that. The setting is at github.com/settings/copilot — takes 30 seconds.
It should take 0 seconds, because I shouldn't have to do it.
That's my bar. My time is my time, and anything that takes time from me better have a damn good excuse. Github is not bringing any good reasons to the table to justify making me take my time to protect privacy I've had by default up to now.
No, it takes an hour of perusing HN every day to stumble upon this. That's 20 hours per month, 240 hours per year, shall I bill it to GitHub or to Microsoft directly?
Corrupting Steinmetz' quip to Ford: it's 30 seconds to flip the switch, 240 hours to know that a switch needs to be flipped.
Previously we didn’t do any training on usage. However as other products have come into the market they do train on usage. We’ve been training on our internal usage for just over a year and have seen some major improvements. For details see of the types of improvements we’ve seen from training on our internal usage check out this article: https://github.blog/news-insights/product-news/copilot-new-e...
You can always ask your parent company to train on their usage. I hear they have incredibly massive codebases: Windows, Office, MSSQL, which stay out of training data for some reason.
I thought neural nets never repeat the training data verbatim, and copyright does not pass through them, so what's the problem?
> If they want to incentivise people to contribute their sources and copilot sessions, they could easily make it opt-in on a per-repository basis and provide some incentive, like an increased token quota.
The setting isn't even visible to everyone. If you're currently in an org that manages copilot business, it's gone. I imagine it instantly opts you back in when you leave an org.
If even one person in a repo does not disable this will copilot have full access to the repo? How can I determine if other members of my team have turned this off or not?
The only setting I'm seeing is on a per-user basis. Does anyone know how to blanket disable training on an organizational basis?
Is there any information about how much information from an organization managed repo may be trained on if an individual user has this flag enabled? Will one leaky account cause all of our source code to be considered fair game?
And even if you read the banner on the site, the email they sent, and the announcement itself, you would not see instructions that mention the specific thing(s) you must change in order to opt out.
Sure, you can poke around in the settings and find one that you believe opts you out, but in lieu of clear and explicit instructions from GitHub, you'll have no way to find out. Only the possibility of finding out later that you guessed wrong.
So? You guarantee that this setting is durable and will never revert? Or you guarantee that no client-side bug on that page will not override the setting with null value when you click save on something else? Please.
There’s a lot of furor in this thread, but people felt the same way when Google Street View came out. Eventually they worked through most of the thorny bits and people use Street View now.
I suspect MSFT is in a similar spot. If they don’t train on more data, they’ll be left behind by Anthropic/OAI. If they do, they’ll annoy a few diehards for a while, they’ll work through the kinks, then everyone will get used to it.
Jokes on them, my private repos are total dog dookie. If nobody but me can see the code then I don't have to worry about style, structure, comments, or any other best practices.
You don't want an LLM trained on my private repos. Trust me.
By migrating to another code forge and paying them so they're sustainable.
Which doesn't answer your question at all, but it is the metric they'll pay attention to. And it is the the thing that actually addresses the underlying problem.
And how many people who use git on github go to the website? I only do when my token has expired and I need to grab a new one to push again. Which is every 90 days. Github.com is mostly invisible infrastructure to me.
If they were being honest they would ask explicitly for permission instead of advertising opt-out. Now you might ask: who will explicitly give Microsoft permission to train on their private works? No one will -- and that's the point: this is a form of theft.
I have an individual GitHub Copilot Pro subscription and also am a member of an Enterprise account that has one of its GitHub Copilot Business seats assigned to me. The opt-out setting doesn't appear on my individual profile anymore. However, I want to be able to use individual GitHub Copilot subscription for my individual work, and it seems like I can't do it anymore as Enterprise has taken over all my preferences. What a mess.
I'm sure this is just me, but I don't mind if AI trains on my public or private repos. I suspect my imagination is just not good enough to come up with downsides.
So far it's been a benefit because coding agents seems to understand my code and can follow my style.
I don't store client data (much less credentials) in my repos (public or private) so I'm not worried about data leaks. And I don't expect any of my clients to decide to replace me and vibe code their way to a solution.
I do worry (slightly) about large company competitors using AI to lower their prices and compete with me, but that's going to happen regardless of whether anyone trains on my code. And my own increases in efficiency due to AI have made up for that.
Rather than defending this absurd decision, GitHub could instantly win back trust by admitting they f*** up and reversing it entirely.
If they want to incentivise people to contribute their sources and copilot sessions, they could easily make it opt-in on a per-repository basis and provide some incentive, like an increased token quota.
While I understand the network effect of github for public project, I don't really understand why one would want to use it for private repos.
There are tons of git providers including free ones that include full gitlab/gitea/forgejo to get similar features to github and there is nothing more easy to self host or host on a vps with near zero maintenance.
Lots of hair splitting in the comments. The service is so unreliable at this point that I don’t trust them to not train on private repos even accidentally. You’re one vibe-coded PR away from having all your data scooped up regardless of any policy or intention.
I've recently started hosting my own forgejo instance. It works so well! Free tailscale for connectivity. I expose mine over fly.io proxy, also free, but not to be done without caution.
It doesn't take much power or time to run your own local git server. My first one which lasted years was parts I mangled together from old computers from garage sales.
Just spitballing, don’t use these tools myself, but isn’t this something that should be encrypted to really prevent them from training? I personally don’t trust anyone with my data when they pivot to building AI products yet claim my data wasn’t a part of that strategy. It’s too easy to hide/lie.
But it always seemed to me that the UI should run locally with encryption keys that are shared and the service just manages encrypted blobs of diffs that can roll from version to version of encrypted data and that’s about it. Granted I probably don’t know the full workflow, i typically am a single dev on simple projects where I don’t need 99% of the overhead these introduce.
GitLab would be a good bet here. We started on their free tier and used that for a couple of years, I was very happy with it. Not sure how the tiers might have evolved since.
And according to their PM and privacy policy, they're not training their models on your code[0].
I use Fossil for mine. Dead easy to set up, and while the workflow might not be great for public contributions like Github is, that doesn't matter on something where I'm the only user.
They just lost my repos. I can not believe they snuck this in. My level of anger right now is far higher that I ever wanted to feel. I went to API access for anthropic, paying more in the process, to avoid them training on my code. And GH just -adds- this, without telling me? Without a prompt. They are dead to me.
make sure you opt-out anyway before deleting your account. they'll probably train on some archived version if it sees your profile didn't opt-out at some point.
honest question: is there any realistic mechanism that will make them accountable if let's say they just train on 100% of repos without regards to opt-ins? I operate under the premise these tech companies can do whatever they want and there's very little oversight.
Don't give your code to Microsoft if you don't want them to have your code.
This setting will make no difference to whether your code is fed into their training set. "Oops we accidentally ignored the private flag years ago and didn't realise, we are very sorry, we were trying to not do that".
I'm curious about specific consequences of this. I tend to think the importance of code secrecy has always been exaggerated (there are specific exceptions like hedge fund strategies and malware), even more so now in this post-Claude world. Does anyone have specific things they're trying to avoid by opting out of this?
Algorithms and models for a proprietary trading system? My personal notes? The latex text of my phd thesis?
I will go screaming and kicking and fighting into this dystopian nightmare post-privacy shithole world that so many people seem fine with. If I have to move off of every service or technology to maintain some semblance of privacy so be it.
Well, mostly I was thinking about code, and aside from the specific exceptions of trading algorithms (which I was trying to get at when I said hedge fund strategies), and now PhD theses (good point, at least if you're talking pre-publication), I'm still having trouble understanding the threat model even if AI did train on most proprietary, private business code. Can AI training on a CRUD app's code damage a business?
And I have the same question about private notes, or even a diary. Can an AI training on a bunch of personal stuff damage the person that wrote it?
It's very easy to set up and integrates nicely into git. Obviously only works if you don't need Actions or anything that requires Github to know what's in your repo (duh).
How do I opt out of this for my own private repos? I don't see anything related to this as I've got a ton of settings for Copilot itself (I have access to Copilot through my work org)
At least they are finally being honest about the direction of the business. I have thought for a long while that they were already doing this and just not telling anyone...
When Louis Rossmann started describing tech leadership as having a "rapist mentality" I brushed him off as being sensationalist. But actions like this make me think more and more he's right. The product managers pushing for changes like this are despicable scum.
The situation you describe has dynamics that don't apply when your windows laptop is trying to get you to install an update. A woman can't have 100% confidence that saying no won't trigger a man into rage, so just the question being asked at all is already a bit unpleasant. WinRAR trying to get me to buy a license is not as offensive because I know it won't beat me up for saying no.
However, do you think people accept Microsoft backup because they want a backup?
Or do you think they click yes because it makes the popup go away for good?
Wearing me down until I say yes isn’t the same as just yes.
It’s the same dark pattern for the 10-11 upgrade. My father in law managed to upgrade by accident because it kept popping up. He didn’t really make an informed choice for himself. One day he just couldn’t figure out why everything was different.
There is this distinct lack of giving a shit about the user that you see coming through in a lot of big tech nowadays.
Take this extremely simple example about antenna pod. I can change the order and what buttons show up in the app nav bar. For example I can remove the "home" button or put other things there instead like playback history.
This is a small minor point of the bigger picture. Yet there is this distinct sense in which when using that app I don't feel like I'm beholden to some chain of management in some company deciding they get to decide what I get to do.
Like its almost unthinkable that the YouTube app let you remove shorts or reorder the navigation bar and decide what you wanted to have there.
TLDR: As long as you aren't using Copilot, your code should be safe (according to GitHub).
What data are you collecting?
When an individual user has this setting enabled, the interaction data we may collect includes:
- Outputs accepted or modified by the user
- Inputs sent to GitHub Copilot, including code snippets shown to the model
- Code context surrounding the user’s cursor position
- Comment and documentation that the user wrote
- File names, repository structure, and navigation patterns
- Interactions with Copilot features including Chat and inline suggestions
I wonder how effective it would be to sabotage the training by publishing deliberately bad code. A FizzBuzz with O(n^2) complexity. A function named "quicksort" that actually implements bogosort. A "filter_xss" function that's a no-op or just does something else entirely.
The possibilities are endless. I thought of this after remembering seeing a post a couple months ago about how it doesn't take a significant amount of bad data to poison an LLM's training.
Probably extremely ineffective, it's an issue of scale and unless you really automate the terrible code generation and somehow manage to make it distinct enough in style that it isn't easy to detect and eliminate wholesale then you just won't have the volume to significantly impact the result set.
I'm absolutely sure that there are state actors with gigantic budgets that are putting a lot of effort into similar attacks, though.
If you use Github, you should have an email from ~2 days ago with the subject "Important Update to GitHub Copilot Interaction Data Usage Policy". Easy to skip over assuming it's just one of a million private policy update emails.
If you don't use Github Copilot, this shouldn't effect you, and may be why you got no email. The current headline is fairly misleading--it's about Copilot usage, not private repos per se.
I saw that too, it feels like it's worded to make it sound like it's mandatory for Copilot. Based on their blog post the "feature" is them training on your data.
I started self hosting my own git on a digital ocean droplet with Gitea (1). It’s been unbelievably fantastic and trivially easy to manage experience and I can make them public and invite contrib ans do integrations … I see zero downsides
I see no reason to ever go back to holding my code elsewhere.
Don’t forget git is fairly new
When I first started doing production code it was pre-github so we used some other kind of repo management system
This is a perfect example of where the they’re starting to cannibalize their base and now we have the ability to get away from them entirely.
Wow. This is theft. Should be illegal! It's like if I own a vault storage business and I am keeping other people's gold in my vaults and then I just take all the gold for myself and claim that the customers should have opted out of me stealing their gold but they missed the deadline...
> If your data is stored in a database that a company can freely read and access (i.e. not end-to-end encrypted), the company will eventually update their ToS so they can use your data for AI training — the incentives are too strong to resist
https://news.ycombinator.com/item?id=37124188
(-:
Like using that /s or using that smiling emoji sign you used.
A good joke would land even if some other people miss it because of the text format.
"Microsoft would never do this" would have landed for me.
/s is basically the internet-enabled equivalent of a sarcasm tone or a wink - it is much more difficult to detect genuine subtle sarcasm on the internet because of the absence of common communication tools. /s is also a valuable accessibility tool for those that might have difficulty with social cues and subtlety so, for all my autistic friends, I'm happy to defend it.
If you can tell sarcasm from text, that doesn't meam everyone can.
For my part, the smiley face was much-appreciated as I've seen people who genuinely would think that with a straight face.
:)
I didn't become paranoid, everybody else didn't!
Stallman is always right.
So I dunno bout that.
Not really. Almost always right....
About communication with other humans he’s pretty much always wrong.
Imagine we’d had a better communicator who wasn’t a gross toe nail picking troll fronting free software? It shouldn’t matter. Only the ideas should matter . But the reality is different.
When you put in law that ISPs should adhere to some government-provided blocklist, this is already a game over. No matter how sane your government is. The government in 10 years might be vastly different, and the ability to control the ISPs is too alluring to not abuse
I'd rather live in a world where you could find words like "kill all russians", or child porn, or blatant propaganda than to live with the government censorship. I lived in Russia and the experience was nightmare. Who knows, maybe if the government didn't have the tools they had then the independent media would still be reachable by an average russian, the pictures of the pointless massacre would be public and the war would be over in a week
Any takes on what 2029 will look like? (related to this topic, ofc)
Pro tip: sign up for the business/enterprise version when reasonable in price.
I do this with Google Workspace. You can also do it with GitHub.
(Google doesn’t train on Workspace, Github doesn’t train on business customers, etc)
Please don't reward these companies with money.
If the publishing industry can't win a case against the AI firms then you don't stand a chance when you finally find out they've been training on your private data the whole time.
They can tell you one thing and do the opposite and there's effectively nothing you can do about it. You'd be a fool to trust them.
...yet
Conspiratorial thinking? Sure. But if you've been around for a couple decades and seen the games these people play (and you aren't a complete sucker), then you'll at least be aware that there's at least slight possibility that these companies can get things from their customers that they (the customers) did not knowingly agree to.
The belief of business users that this will remain true is grounded more in hope than in cold, dispassionate, business based decision making.
If it's not life or death, encrypt every byte of data you send to the cloud.
If it is life or death, you should probably not be letting that data traverse the open internet in any form.
And I don't see any mention that that exempts you from being trained on. (Yes, the blog says you're still covered, but at that price I'd like to see a contract saying that)
For users of Free, Pro and Pro+ Copilot, if you don’t opt out then we will start collecting usage data of Copilot for use in model training.
If you are a subscriber for Business or Pro we do not train on usage.
The blog post covers more details but we do not train on private repo data at rest, just interaction data with Copilot. If you don’t use Copilot this will not affect you. However you can still opt out now if you wish and that preference will be retained if you decide to start using Copilot in the future.
Hope that helps.
""" Allow GitHub to use my data for AI model training
Allow GitHub to collect and use my Inputs, Outputs, and associated context to train and improve AI models. Read more in the Privacy Statement. """
If the reality is less scary than how it sounds, then the wording needs to be less scary-sounding. It may be that GitHub isn't training models on private repos, but the language certainly suggests that it is.
Finally, I read the Privacy Statement, and it's unclear what the applicable language is. "Inputs," "Outputs," and "Associated Context" are terms of art that have no matching definitions in the Statement. (The terms "Outputs" and "Associated Context" don't even appear in the Statement at all.)
> Should you decide to participate in this program, the interaction data we may collect and leverage includes:
> - Outputs accepted or modified by you
> - Inputs sent to GitHub Copilot, including code snippets shown to the model
> - Code context surrounding your cursor position
> - Comments and documentation you write
> - File names, repository structure, and navigation patterns
> - Interactions with Copilot features (chat, inline suggestions, etc.)
> - Your feedback on suggestions (thumbs up/down ratings)
"should you decide to participate.."??? You didn't ask if I wanted to participate. You asked if I didn't.
I didn't get to decide to participate. I had to decide not to. You made me do work to prevent my privacy from being violated.
Second response: Maybe? I press the little button to auto-generate commit titles and messages that showed up in my Github Desktop. Does that count?
I'm asking sincerely. I don't "use Copilot" as in using it in VS Code or while writing code, so I'm honestly not sure if I am.
I'm pretty sure if you use the site you're using GitHub Copilot in some way, so your question becomes irrelevant.
> interaction data—specifically inputs, outputs, code snippets, and associated context [...] will be used to train and improve our AI models
So using Copilot in a private repo, where lots of that repo will be used as context for Copilot, means GitHub will be using your private repo as training data when they were not before.
Boiling the frog with a Venn diagram.
I don't have to be a Copilot user to click on it.
This change is malicious, and it doesn't only affect Copilot users. It affects everyone on the platform!
If I go to one of your GPL projects and I ask a simple question to find out what this project is about, you will be perfectly "ok" that this interaction (that includes most of the code that is required to answer my dumb the question) will be used for training?
This is not ok.
So why do any of this at all? You're putting a large part of your customer base on edge in order to improve a service that "most people don't use." The erosion of trust this brings doesn't seem like a worthwhile or prudent sacrifice.
I am not certain if you're a spokesperson for github - but it's good to be careful in your language. Instead of "No we won't" a lead like "That isn't entirely accurate" would be more suitable. In the end both the original post title and your reply have ended up being misleading.
This statement itself is misleading. Also, GitHub probably should have seen this coming.
They are not doing what I initially thought, which is slurping up your private repo, wholesale, into its training set. You don't have to opt out of anything to prevent that.
They are slurping any context and input containing code from your private repo which is provided to them as part of using Copilot.
So, in addition to the opt-out setting, there is an even easier way to avoid providing them your private repository data to train AI models, and that's by continuing to not use Copilot.
Why would I ever use copilot on any code Id want to be kept private? Labling it a private repo and having a tiny clause in the TOS saying we can take your code and show it to everybody is just an upright lie
https://grep.app/search?regexp=true&q=%5Ba-z%5D%7B8%2C%7D%5C...
For example, license files often contain names and many package managers require a contact person.
When this goes to court, GitHub will probably make the excuse that they somehow did not know that people upload personal data, but the fact that this happens so often that they had to make a secret scanner to stop people from uploading their private keys will prove them as liars.
> model training
> Allow GitHub to collect and use my Inputs, Outputs, and associated context to train and improve AI models. Read more in the Privacy Statement
Are you seriously trying to claim that the code isn't input, output, or associated context of Copilot operating on a private repo? What term do you think better applies to the code that's being read as input, used as context, and potentially produced as output?
How does this work for a private repository with access granted to additional contributors? Which setting is consulted then?
To the PM behind this - developers are sensitive to this kind of thing. Just make it opt-in instead?
Honestly, if you work at GitHub, maybe you should focus on your uptime -- it's awful.
This affects anyone using VS Code or Copilot with proprietary data, including all the users automating workflows through the Copilot SDK and the like. A perfect storm.
Did anyone from GitHub's legal team actually authorise this, or did they use Copilot to sign off on it?
> Allow GitHub to use my data for AI model training: Allow GitHub to collect and use my Inputs, Outputs, and associated context to train and improve AI models. Read more in the Privacy Statement.
“Associated Context” is the repo. If I use copilot, I’m giving it access to my repo.
I don’t know in all the ways copilot can be triggered, and I’m not certain that I could stop it from being triggered, given Microsoft’s past behaviors in slapping Copilot on everything that exists.
Honestly, what the fuck? This changes was already pretty bad but this being the apparent corporate response is insane.
Done with Github and Microsoft after this. Just disgusting how little you care for users, ethics, or morals.
This suspect denial is why I will get my clients moved off of github.
No? Because no one would opt-in, you say?
Wow. It's almost like this is a user-hostile feature that breaks the implicit promise behind a "private" repo.
You’re laundering the code of users who don’t opt-in through Copilot users who do, to read in as many LoC as possible. It’s clear as day to everyone not morally bankrupt.
That’s fucking terrifying.
Why the smug sarcastic attitude? nah, fuck github i'm out.
I'm not bidding against you to not train on my data.
I didn’t think Github had much of a brand left to damage, but here we are.
Doesn’t seem to leave non-enterprise projects with much choice but to ban contributors from using copilot (to whatever extent they can - company policy, etc.)
“HALT IMMEDIATELY. Copilot is banned on this project.”
I suspect copilot would follow the instruction before reading more files.
Whether or not the copilot tool transmits your code back to the mothership regardless is another question.
[1] https://docs.github.com/en/copilot/how-tos/configure-custom-...
You see coders have this reasoning flaw where they go "Oh I've understood the system, now I can work out all the ramifications of my actions", and then they get tricked at every step of the life.
That's my bar. My time is my time, and anything that takes time from me better have a damn good excuse. Github is not bringing any good reasons to the table to justify making me take my time to protect privacy I've had by default up to now.
Corrupting Steinmetz' quip to Ford: it's 30 seconds to flip the switch, 240 hours to know that a switch needs to be flipped.
Previously we didn’t do any training on usage. However as other products have come into the market they do train on usage. We’ve been training on our internal usage for just over a year and have seen some major improvements. For details see of the types of improvements we’ve seen from training on our internal usage check out this article: https://github.blog/news-insights/product-news/copilot-new-e...
I thought neural nets never repeat the training data verbatim, and copyright does not pass through them, so what's the problem?
> If they want to incentivise people to contribute their sources and copilot sessions, they could easily make it opt-in on a per-repository basis and provide some incentive, like an increased token quota.
It's convenient for MS to make this opt in by default for sure.
Is there any information about how much information from an organization managed repo may be trained on if an individual user has this flag enabled? Will one leaky account cause all of our source code to be considered fair game?
Sure, you can poke around in the settings and find one that you believe opts you out, but in lieu of clear and explicit instructions from GitHub, you'll have no way to find out. Only the possibility of finding out later that you guessed wrong.
You might have closed it...
Just go to your account settings and find the opt-out option.
The chip on your shoulder doesn't make for productive conversation here.
I suspect MSFT is in a similar spot. If they don’t train on more data, they’ll be left behind by Anthropic/OAI. If they do, they’ll annoy a few diehards for a while, they’ll work through the kinks, then everyone will get used to it.
Or, perhaps more directly, training their image-gen models on your private Google Photos.
They’re training (with an opt out) on stuff people feel is an invasion of their privacy to make their service better.
You don't want an LLM trained on my private repos. Trust me.
Which doesn't answer your question at all, but it is the metric they'll pay attention to. And it is the the thing that actually addresses the underlying problem.
The feature to opt out is at the bottom under privacy: "Allow GitHub to use my data for AI model training"
TIL: you cannot opt out of a copilot-pro subscription. How is it a subscription if I can't cancel?
(Honestly, who has time to evade all these traps? Or to migrate 150+ repo's on 6+ machines...)
So far it's been a benefit because coding agents seems to understand my code and can follow my style.
I don't store client data (much less credentials) in my repos (public or private) so I'm not worried about data leaks. And I don't expect any of my clients to decide to replace me and vibe code their way to a solution.
I do worry (slightly) about large company competitors using AI to lower their prices and compete with me, but that's going to happen regardless of whether anyone trains on my code. And my own increases in efficiency due to AI have made up for that.
If they want to incentivise people to contribute their sources and copilot sessions, they could easily make it opt-in on a per-repository basis and provide some incentive, like an increased token quota.
This is not hard.
There are tons of git providers including free ones that include full gitlab/gitea/forgejo to get similar features to github and there is nothing more easy to self host or host on a vps with near zero maintenance.
Microsoft services are tech debt. I moved the moment they were acquired and never regretted it.
"Finally, AI for the entire software lifecycle."
Not very trust inspiring, that.
Can I even have git hosting without anything else being crammed down my throat, or it's just like Microsoft?
If it's really important to you that the repo is private, I'd self-host.
There's instructions on running a Git server in the git book: https://git-scm.com/book/en/v2/Git-on-the-Server-The-Protoco...
But it always seemed to me that the UI should run locally with encryption keys that are shared and the service just manages encrypted blobs of diffs that can roll from version to version of encrypted data and that’s about it. Granted I probably don’t know the full workflow, i typically am a single dev on simple projects where I don’t need 99% of the overhead these introduce.
Apparently someone has developed something similar to this
I just looked up gitosis on github though and it was last updated 12 years ago.... still works for me though.
Overall, hosting your own repos is very easy.
And according to their PM and privacy policy, they're not training their models on your code[0].
[0]: https://forum.gitlab.com/t/can-i-opt-out-from-my-code-being-...
I definitely feel like more can be done within this space and that there is space for more competitors (even forgejo instances for that matter)
This setting will make no difference to whether your code is fed into their training set. "Oops we accidentally ignored the private flag years ago and didn't realise, we are very sorry, we were trying to not do that".
I will go screaming and kicking and fighting into this dystopian nightmare post-privacy shithole world that so many people seem fine with. If I have to move off of every service or technology to maintain some semblance of privacy so be it.
And I have the same question about private notes, or even a diary. Can an AI training on a bunch of personal stuff damage the person that wrote it?
Do you really keep trading algorithms on github?
https://github.com/flolu/git-gcrypt
It's very easy to set up and integrates nicely into git. Obviously only works if you don't need Actions or anything that requires Github to know what's in your repo (duh).
// todo… remove this before it goes to prod lol
Settings->Copilot->Features->Privacy=>[ Allow GitHub to use my data for AI model training
Allow GitHub to collect and use my Inputs, Outputs, and associated context to train and improve AI models. Read more in the Privacy Statement. ]
> Allow GitHub to use my data for AI model training
https://postimg.cc/LJD5w1rv
It's not a new setting, fwiw. I opted out years(??) ago.
Not for commercial success, just wanted a git and github like experience for my new game project.
Then I started getting into features specific to game dev like moving away from LFS and properly diffing binaries.
paganartifact.com/benny/artifact
Mirror: GitHub bennyschmidt/artifact
I don't have much hope, but I wish that ignoring software licensing and attribution at scale becomes harder than it currently seems.
Imagine a man asking a woman “want to have sex? Or maybe later?” out of the blue, then asking her again every 3 days until she says “yes”
Yeah, it ain't sex, but it does still come down to basic respect.
However, do you think people accept Microsoft backup because they want a backup?
Or do you think they click yes because it makes the popup go away for good?
Wearing me down until I say yes isn’t the same as just yes.
It’s the same dark pattern for the 10-11 upgrade. My father in law managed to upgrade by accident because it kept popping up. He didn’t really make an informed choice for himself. One day he just couldn’t figure out why everything was different.
Take this extremely simple example about antenna pod. I can change the order and what buttons show up in the app nav bar. For example I can remove the "home" button or put other things there instead like playback history.
This is a small minor point of the bigger picture. Yet there is this distinct sense in which when using that app I don't feel like I'm beholden to some chain of management in some company deciding they get to decide what I get to do.
Like its almost unthinkable that the YouTube app let you remove shorts or reorder the navigation bar and decide what you wanted to have there.
And it is absolute dogshit. And offensive to actual copilots.
TLDR: As long as you aren't using Copilot, your code should be safe (according to GitHub).
The possibilities are endless. I thought of this after remembering seeing a post a couple months ago about how it doesn't take a significant amount of bad data to poison an LLM's training.
I'm absolutely sure that there are state actors with gigantic budgets that are putting a lot of effort into similar attacks, though.
If you don't use Github Copilot, this shouldn't effect you, and may be why you got no email. The current headline is fairly misleading--it's about Copilot usage, not private repos per se.
I meant it in the sense of "bringing it to our collective attention."
Enabled - "You will have access to this feature" as help text. Disabled - "You will not have access to this feature".
WTF does that mean?
I see no reason to ever go back to holding my code elsewhere.
Don’t forget git is fairly new
When I first started doing production code it was pre-github so we used some other kind of repo management system
This is a perfect example of where the they’re starting to cannibalize their base and now we have the ability to get away from them entirely.
(1) https://about.gitea.com/
If so, this might be illegal.
Or am I missing some trick / dark GUI pattern? Just want to make sure.