How many stories like this do we have to come across before we as a community come together? Apple and Google's monopolies have to be broken. It's insane that your livelihood depends on the mercy of one organization.
If they cannot support their customers at the scale at which they operate, they should not be allowed to do business at that scale. Google clearly cannot, and they trivially mow people down, as ruthlessly as any careless driver plowing through a street cart, with no accountability for their actions, and no recourse for the customer.
Yes, they shouldn't be dependent on Alphabet; they should back up their content and diversify across platforms. But because we decided to allow the monetization of the web to be monopolized, and to vigorously encourage the surveillance-based adtech of Google and Facebook, they control the full stack and effectively hold audiences hostage; you have to play on their platforms in order to engage with the audience you build, and the vast majority of content consumers are ignorant of the roles platforms play. If you leave the platform, you lose access; if you have multiple channels, you get shadowbans and other soft penalties to discourage people from being disloyal to Google.
We should have a massive diversity of federated, decentralized platforms, with compatible protocols and tools. People should have to think about CDNs and platforms as little as they think about what particular ISP is carrying their traffic between a server and their home.
There should be a digital bill of rights that curtails platforms' power to control access and reach, forces interoperability, eliminates arbitrary algorithmic enforcement, and guarantees due process, with mandatory wind-down periods giving people a reasonable opportunity to recover digital assets, communicate with their audience, and migrate to a new platform.
The status quo is entirely untenable; these companies should not have the power to so casually and arbitrarily destroy people's livelihoods.
It's not really that simple. There are already alternative video hosting and streaming sites; the article mentions that this creator is in fact already using one. The reason YouTube is such a big deal is its market dominance: everyone watches there, and therefore it is valuable. "Breaking it up" just turns it into another one of the many, many competitors that already exist.
Don't get me wrong, I'm not defending Youtube's behavior here. It's bad and shouldn't just be shrugged off. I just don't think that shouting "monopoly!" actually fixes anything. If you want a video hosting and streaming site with less market dominance and better moderation policies, those already exist. Everyone is free to use them.
> "breaking it up" just turns it into another one of the many many competitors that already exist.
That's very much the point: collaring and tranquilizing the 900-pound gorilla in the room so that the reasons people might have to interact with the 30 other monkeys become relevant.
Except that that still doesn't fix the problem. This behavior is downstream of bad laws and regulations. Do you think that Youtube wants to delete a random channel with hundreds of thousands of subscribers? No, that is obviously against its interests. However, dealing with copyright law in an intelligent, nuanced way is too expensive and difficult at scale, and so they resort to these very bad methods. There is a reason they are probably the only profitable ad-supported platform. Right now, copyright holders aren't focusing on any of the other platforms because 99% of all activity is on youtube. If youtube went away, and the traffic was split up among the other competitors, the same bad dynamics would suddenly get pointed at them, and in 5-10 years we'd be having the same conversation.
You need to address the underlying causes of this kind of behavior.
Nobody forced google to maintain a single coherent identity for users across all their services, such that a ban on one service risks impacting several unrelated ones.
There's a sort of circular problem where basically every creator's videos are on YouTube, but many don't replicate their videos to other video platforms. Viewers won't leave in part because other sites lack content, creators won't cross-post because other sites lack viewers.
Some of that would be alleviated if we separated hosting/serving videos from the frontend and indexing, perhaps with a radio-like agreement on what the host gets paid for serving the video to a customer of the frontend. Frontend/index makes money off ads, and then pays some of that back to the host. Creators could in theory be paid by the video hosts, since views make the host money.
Heavy-handed moderation could then become a disadvantage, because they would be lacking content other sites have (though some of that content would be distasteful enough that most frontends would ban it).
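The "radio-like" split described above can be sketched in a few lines; everything here (function name, per-view rates, the view-log shape) is hypothetical, just to make the money flow concrete:

```python
# Hypothetical sketch of the frontend/host split described above:
# the frontend earns ad revenue per view and pays a negotiated
# per-view rate back to whichever host served each video.

def settle_views(view_log, ad_revenue_per_view, host_rate_per_view):
    """Return (frontend_profit, payouts_by_host) for a batch of views.

    view_log: list of (video_id, host_name) pairs, one per served view.
    """
    payouts = {}
    for _video_id, host in view_log:
        payouts[host] = payouts.get(host, 0.0) + host_rate_per_view
    gross = len(view_log) * ad_revenue_per_view
    frontend_profit = gross - sum(payouts.values())
    return frontend_profit, payouts

views = [("v1", "hostA"), ("v2", "hostB"), ("v3", "hostA")]
profit, payouts = settle_views(views, ad_revenue_per_view=0.004,
                               host_rate_per_view=0.001)
# gross is roughly 0.012; hosts get roughly 0.003 in total,
# and the frontend keeps the rest.
```

In this shape, creators would be paid out of the host's side of the ledger, since views are what make the host money.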
Or maybe breaking up YouTube allows for a syndication standard to take its place, and we'd get an explosion of value for consumers like we got in podcasting.
These companies simply have too much influence on a global scale for the US to ever kneecap them. For every valid worry the West has about TikTok, the exact same argument could be made about YouTube in reverse.
There's some kind of basic theorem about situations like this: doing something about injustice happens at a rate proportional to both (a) the injustice and (b) the ease of doing something about it. The injustice is pervasive (low-level, but constant, and indicative of a situation in which people have unaccountable power over the public). But doing something about it requires a type of organizing that... nobody knows how to do. Or at least nobody remembers how to do. So the barrier to it happening is extremely high.
And do what exactly? Personally I avoid youtube as much as possible, I might watch two or three short videos per month. I also never bought an Apple product save for an ipod years ago. No one needs any of those things.
It's not a consumer issue. The fix we would need is laws that are analogous to laws that protect workers from their employers, though that is pretty far away in the current US political economy, and would presumably require creators bringing some kind of organized pressure on Big Tech or their government, analogous to a union.
If an automated system is making the decision to cancel a customer's account, then companies should be required to give cancelled customers a way to speak to a human about the inevitable false positives.
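As a minimal sketch of that requirement (all names and thresholds here are invented for illustration): the automated system may only flag an account, and termination waits for an explicit human decision.

```python
# Hypothetical sketch: automated decisions may flag accounts,
# but only a human reviewer's decision can actually terminate one.

review_queue = []

def automated_decision(account_id, risk_score, threshold=0.9):
    """The automated system may only *flag* accounts, never close them."""
    if risk_score >= threshold:
        review_queue.append(account_id)
        return "pending_human_review"
    return "ok"

def human_review(account_id, reviewer_confirms):
    """Termination requires an explicit human confirmation."""
    review_queue.remove(account_id)
    return "terminated" if reviewer_confirms else "reinstated"

status = automated_decision("channel_123", risk_score=0.95)
# The account stays live until a person actually looks at the flag.
final = human_review("channel_123", reviewer_confirms=False)
```

The point of the structure is that the false positive is caught at the review step instead of after the account is already gone.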
They do what they want and you (or me or anyone else not on the board of directors) don't have any say over it. They could even have a daily lottery that randomly chooses a couple of people and have all their accounts permanently frozen/closed/cancelled with no recourse at all, ever.
We often make fun of stupid European regulations, like the AI ones, but this is exactly the kind of case where they are useful: ensuring that things like this cannot happen when companies hold such a monopoly that users have no power.
Do those regulations really "ensure that [incidents like this] could not happen"?
I ask this in good faith, because my observation of the last few years is that the incidents still occur, with all of the harms to individuals also occurring. Then, after N number of incidents, the company pays a fine*, and the company does not necessarily make substantive changes. Superficial changes, but not always meaningful changes that would prevent future harms to individuals.
*Do these fines tend to be used to compensate the affected individuals? I am not educated on that detail, and would appreciate info from someone who is.
I don't recall the full stack of EU regulations in detail, but a requirement that appeal to an actual human be possible after automated decisions is in there somewhere, AFAIK.
> The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.
They can, but what incentive would they have to do so? They are probably measured by the number of cases they close, and the fastest way to close them is to agree with the conclusions of the algorithm.
That'd likely be a violation of some kind of laws, but you could probably work to have HR ensure that various teams were aligned in the goals of the operational attributes the company finds necessary to produce an environment which maximizes the opportunities for individuals to contribute without fear of repression.
Because humans cost a lot of money and I don’t want to train my users to think they can get a more favourable answer by asking to have a human review the decision.
I don't think there's any regulation that can really help here. You can't force a plumber to do business with Rita, or American Airlines to accept Steve, who's been super rude to the stewards on board; you can't really force anybody to do business with you.
The only exception I know of, for which there is some regulation where they can't just say "no", legally, are banks. And trust me, if banks don't want you as a customer they will do everything in their power to maliciously comply to the point your account is useless and perma frozen.
What is this lunacy about Google regulation about? If Google doesn't want Enderman, you can't force them to have him.
I get that what you really mean is regulation forcing companies to process and communicate via non-automated, non-AI systems for whatever a, b, c issue or reason, but this doesn't change anything because of how simple and cheap malicious compliance is.
All Google needs to do is say "yeah, okay, we'll also review it with a human" and have some intern press a green button manually.
Unless you can prove discrimination, it's their house, it's their business, they can and should do what they want.
The issue is that Youtube is one of the strongest and hardest to break monopolies on the internet. It's the hardest part of the degoogling process.
Then they shouldn't be permitted to operate at a scale where their unwillingness to do business with you causes you to be unable to transact with entire business sectors.
If DigiKey decides they don't want to do business with me, I am not suddenly unable to buy from 30% of the world's manufacturers, unable to sell to 70% of my customers, and locked out of my manufacturing line's PLC.
If Safeway decides to decline my business, I am not locked out of eating bread from anyone who buys their flour from them.
If Coca-Cola doesn't want to renew our contract because I mentioned to my customers that we also stock Pepsi, I can still buy Coca-Cola from the wholesaler and resell it, and regardless I don't lose access to my accountant and mailbox when they terminate that relationship.
> Then they shouldn't be permitted to operate at a scale where their unwillingness to do business with you causes you to be unable to transact with entire business sectors.
That I agree with 100%.
But Youtube really did nothing special to build or preserve its monopoly. It's a self-reinforcing loop: most creators -> most users -> most money -> most creators -> most users.
> If Google or any other platform doesn't want you on their platform, nobody can force them to have you.
This is demonstrably false.
Where I live, stores aren't allowed to refuse a sale under most circumstances (barring some specifically-listed exceptions like selling alcohol to minors). Same for schools, we don't have a concept of "expulsion" unless it's court-mandated. There's no reason a similar regulation couldn't be applied to digital platforms.
Whether such a regulation should exist is a different matter entirely. Fighting fraud and scams is difficult enough already, making them harder to fight means we get more of them. Either that, or Google starts demanding rigorous ID verification from everybody who wants a Youtube channel.
No, it's not: in most of the world, if a business doesn't want you as a customer, they can refuse you, end of story.
That's not only true for B2C: most legal codes have at best laws about public utilities (you can't be denied electricity for no reason), sometimes banks, and sometimes regulated professionals (lawyers, insurers, etc.).
This is particularly true for B2B, which is what transactions between Youtube and creators are.
Any government which will assert it has the right to force you to platform people will absolutely also assert that it has the right to force you to deplatform people.
> If Google or any other platform doesn't want you on their platform, nobody can force them to have you.
That's just not true.
Up till now, no government has (to my knowledge) tried to dictate to a major American platform owner that they may not ban certain users or classes of users, but that doesn't mean that they can't.
It's really not the same thing as the issue of forcing an employer to rehire an illegally-fired employee—where the employee then remains there under a cloud, because they have to continually interact with the people who wanted them gone. In 99.999% of cases, when a platform removes a user, there's zero relationship between that user and the people involved in making that decision.
If Congress made a law tomorrow (laughable in the current environment, I know) that said that any public video platform provider with over X users couldn't ban anyone except for specific reasons, then YouTube would, indeed, have to keep such people on their platform.
Prove the contrary: find me a single law that forces any business to do business with any other, regardless of whether they want to.
I'm 100% sure nobody can force me to do business with people I don't want, and if you're a professional, I can't force you to do business with me either. Why would you think this would be a good law to have? Discrimination would be the only valid exception.
If Google (business) doesn't want to platform a creator (another business), that's their right.
Of course we can question the morals or ethics, but that's about it.
> If Congress made a law tomorrow (laughable in the current environment, I know) that said that any public video platform provider with over X users couldn't ban anyone except for specific reasons, then YouTube would, indeed, have to keep such people on their platform.
But such laws do not exist in pretty much any part of the world: you can't force a business (Youtube) to do business with another one (a creator).
The reason why this is obviously different is because Youtube is a de facto monopoly on large parts of internet content.
Sorry, but either you've phrased yourself poorly for what you actually want to say, or you're genuinely unaware of the many anti-discrimination laws in the US, a substantial number of which explicitly prohibit businesses from refusing service to people in protected categories.
> unless you can prove discrimination, it's their house, it's their business, they can and should do what they want
None of the links you posted (I skimmed them quickly) says anything about my point: that you cannot be forced to do business if you don't want to (unless the reason you refuse can be proven illegal).
Ah, so as long as you're allowed to ignore any laws that actually prohibit the thing you're talking about (because you have to actually prove that they were broken!), you can say that no laws prohibit the thing you're talking about!
Yeah, I don't think that's a particularly strong statement anymore.
What's up with the TeamYouTube account advising him to delete his X post for security reasons because the post contains a channel ID? As if a channel ID weren't public information but some secret private key or something?
The reason this happened will probably never be revealed, but I predict it's probably because the channel was uploading videos through a VPN, and wound up sharing an IP address with someone who was using the same VPN for piracy.
This is a very difficult situation for creators. It is a hassle, but spreading videos out across different platforms seems to be the only viable solution. This is not great as some platforms bring more revenue than others.
Because the current hype cycle of "AI" has subsumed the terms "algorithm" and "machine learning" to classify automated decision-making processes that rely on some level of modeling / applied statistics instead of deterministic code.
I don't love the way language around this is evolving, as it is mostly a marketing tool to make these tools seem much more than they are. Primarily this is driven by the current generative "AI" bubble.
"A popular tech YouTuber with over 350,000 subscribers has lost his channel after YouTube’s automated systems flagged him for an alleged connection to a completely unrelated Japanese channel that received copyright strikes."
The term "the algorithm" has been replaced by "the AI" in modern parlance, and doesn't refer to any specific AI architecture - just something that makes a decision without a human in the loop.
Wouldn’t a heuristic score be AI? It could very probably *not* be an LLM or Stable Diffusion or similar which has coopted the overall term “AI”; but that doesn’t make it not an expert system, or an SLM used for categorization, or even A* search, all of which fell under the umbrella of “AI” for a long time.
Artificial Intelligence is a thriving and active discipline of the Computer Science field.
It includes things like A* search and expert systems even now, despite current popular parlance shoving LLMs into the spotlight as "AI" and implying that that's all the term means.
Heuristics are just rules of thumb without necessarily having a rigid law or clean classification.
You can derive heuristics from mathematically modeling something or even applying machine learning, but they need not necessarily involve either set of techniques.
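For instance, a heuristic score of the kind discussed here can be nothing more than hand-picked rules and weights; every number and name in this sketch is an invented judgment call, with no statistics or learning involved:

```python
# A rule-of-thumb risk score built from hand-written rules.
# Nothing is trained or fitted; every threshold and weight is
# just a human judgment call baked into the code.

def channel_risk_score(account_age_days, strikes, shares_ip_with_flagged):
    score = 0.0
    if account_age_days < 30:           # assume new accounts are riskier
        score += 0.4
    score += 0.2 * min(strikes, 3)      # each strike adds risk, capped
    if shares_ip_with_flagged:          # guilt by association
        score += 0.5
    return score

# An old, strike-free channel that merely shares an IP with a
# flagged account already picks up 0.5 on association alone.
```

Whether you call that "AI" is exactly the terminology dispute in this thread, but it is a heuristic in the plain sense: a rule of thumb, derived from nothing more rigorous than experience.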
Having been told to implement this sort of thing at least once: it's not even heuristic. It was exactly things like "if firstName == 'Saddam' and lastName == 'Hussein' then lockAccount()". Keep in mind this was years after he was dead; for some reason the big list of prohibited persons the US circulates includes lots of dead people.
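That kind of exact-match screening is easy to reproduce, and its failure mode falls straight out of it; the list contents and function name below are invented for illustration, not taken from any real screening system:

```python
# Hypothetical sketch of the naive screening logic described above:
# an exact name match against a denied-persons list locks the account,
# so anyone who merely shares a listed name is a false positive,
# while a one-letter spelling variant of a real match sails through.

DENIED_PERSONS = {("Saddam", "Hussein")}  # illustrative entry only

def screen_account(first_name, last_name):
    if (first_name, last_name) in DENIED_PERSONS:
        return "locked"
    return "ok"

assert screen_account("Saddam", "Hussein") == "locked"  # any namesake
assert screen_account("Sadam", "Hussein") == "ok"       # trivial evasion
```

Both failure directions at once, which is why "it's not even heuristic" is the right complaint: there isn't even a rule of thumb here, just string equality.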
It's automated. It's based on information. Why is it not intelligent? Why is it demonstrably less intelligent than an LLM, which may make no attempt to retrieve information from other sources but merely has what it was created with?
This is only the beginning of fucking around and finding out how putting "AI" into everything will create all kinds of problems for humanity.
Relevant Idiocracy clip:
https://www.youtube.com/watch?v=7THG28GprSM
https://gdpr-info.eu/art-22-gdpr/
Regulations never prevent things from happening; they offer recompense when they do. Laws don't either.
As for distribution of fines to affected individuals, it is rare.
https://en.wikipedia.org/wiki/List_of_anti-discrimination_la...
https://x.com/TeamYouTube/status/1985378776562168037
That could be as simple as a database lookup against flagged accounts or a simple heuristic score.
We're over-AI-ing everything.
I'm always confused, because people have seen special effects for decades, and now the very same explosions, filters, and CGI are suddenly "AI".