Years ago, I joined a company, took over a dev team and was asked to launch the product in 3 months.
They were using AWS, so I logged in to the account to add a few more machines. Right there, in front of my eyes, were the signs of an adversarial, abusive relationship.
The UI to fire up a new machine did not show me the price. I had to look up the price in another table that did not have the specs.
I had to have the two tables open, cross check the specs and price.
If I had learned one thing from my past life, it was this: if you see the signs of an abusive relationship, you have the option to walk out, and if you don't, all that follows is your own fault.
Created a DigitalOcean account, moved everything over. Set up our CI/CDs to deploy there, and spent the next two months on the product, launching one month earlier than promised.
Some years before that I saw a video online where a person digs a hole near a river and puts in a pipe connecting the river and the hole. The fish push themselves hard through the pipe, straight into the trap. Choosing the path of least resistance, and never backing off from a mistake: recipes for ending up like those fish. The video left a big impression on me.
That's one of the things I like about Azure, they don't overwhelm you with listing prices beside every individual item as you're creating it, but they seem to always present a price on things that could be expensive. It's a good balance, I have yet to be surprised by a charge.
AWS actually has a pretty good price calculator with some decent presets (but FFS, can I have an "uncheck all" button?) but of course it's an entirely separate app. Amazon naturally wants some friction to having this pricing information handy, though I suspect the main reason has to do with Conway's Law: AWS still ships their org chart.
I agree with you to some degree, but I would like to point out that AWS pricing is so complicated that you can't calculate what you will pay from a static number showing up in the UI.
If it bothers you that you need to open two tabs for cross-checking the costs, you may want to avoid every cloud provider, not just AWS.
Once you have NAT gateways, CloudFront, S3, auto scaling, load balancers, etc., calculating the cost becomes an art rather than an exact science. And if you don't use these, there is no point in using AWS; there are plenty of "cheap" VPS providers.
They don't know in advance how much bandwidth you will use, how much traffic you will have, which auto-scaling rules will trigger, etc. It's not obfuscation, it's billing based on your usage. And as with everything in life, there are tradeoffs.
Sure, but providing estimated costs based on reasonable pricing buckets alongside the options to add the new machine is something that every other vendor manages to do.
Price simulators are fine. They also know the distribution of use. They can do cost plus pricing (many cloud providers do). You're defending deliberately obfuscated pricing when it need not be obfuscated.
Not showing the price was not "my problem". It was the sign of a product packed with traps, footguns and all kind of things that would go wrong and the blame goes to the user.
> They were using AWS, so I logged in the account to add a few more machines. Right there, in front of my eyes, were the signs of an adversarial, abusive relationship.
> The UI to fire up a new machine did not show me the price. I had to look up the price in another table that did not have the specs.
I don’t want to be the one defending AWS, but I don’t think that this is a valid reason not to like them. I mean, pricing depends on so many factors; reserved, dedicated, spot, and on-demand instances all have different prices.
I don’t even think that using the UI to spin up the machine is the right way to do that in an enterprise setting, you should always do that through Infrastructure as Code, to know exactly what you have up and running, just by looking at that as you would with any program. I’d suggest to use the UI for simple testing, for which the costs are often (but not always) negligible.
Jeff Bezos if you see this please send me some cash.
I must disagree heavily with you here. Prices can depend on many factors, but when that particular account is choosing that particular machine, AWS knows what it will cost, and they can show it to them dynamically. It's very difficult to be convinced, in this day and age, that you cannot have a dynamic price chart right beside the machine selector, showing or calculating prices in real time for that particular product.
About using IaC to set up the infrastructure, sure, but sometimes you just need to browse stuff before actually writing code, to get a feel for it.
Tell me how I can easily determine the price from my IaC deployment as well.
Heck, I even have a hard time telling the price I pay on an account by account basis; because we have savings plans, those get charged against the root account and then I see $0 spent on EC2 in the individual account because it's all covered with a savings plan.
And when I'm putting together that IaC and trying to decide which new instance type to upgrade to, I have to dig through multiple confusing interfaces to figure out that what I want is to upgrade from m8a.4xlarge to c8a.8xlarge and how much that is going to cost me.
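For what it's worth, the numbers are at least scriptable, if not discoverable. A rough sketch against the Pricing API - not anything AWS surfaces in the console, and it assumes the newer instance types have actually landed in the price list, which is not a given:

    # Compare on-demand rates for two instance types via the AWS Pricing API.
    # Digging through the nested JSON is, fittingly, the awkward part.
    import json
    import boto3

    # The Pricing API is only exposed in a couple of regions.
    pricing = boto3.client("pricing", region_name="us-east-1")

    def on_demand_usd_per_hour(instance_type: str) -> float:
        resp = pricing.get_products(
            ServiceCode="AmazonEC2",
            Filters=[
                {"Type": "TERM_MATCH", "Field": "instanceType", "Value": instance_type},
                {"Type": "TERM_MATCH", "Field": "location", "Value": "US East (N. Virginia)"},
                {"Type": "TERM_MATCH", "Field": "operatingSystem", "Value": "Linux"},
                {"Type": "TERM_MATCH", "Field": "tenancy", "Value": "Shared"},
                {"Type": "TERM_MATCH", "Field": "preInstalledSw", "Value": "NA"},
                {"Type": "TERM_MATCH", "Field": "capacitystatus", "Value": "Used"},
            ],
        )
        product = json.loads(resp["PriceList"][0])  # each entry is a JSON string
        term = next(iter(product["terms"]["OnDemand"].values()))
        dim = next(iter(term["priceDimensions"].values()))
        return float(dim["pricePerUnit"]["USD"])

    for itype in ("m8a.4xlarge", "c8a.8xlarge"):
        print(itype, on_demand_usd_per_hour(itype))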
I've been in a similar situation - a surprising number of companies really do just click to create instances. The last time I encountered that at a customer, I improved things a bit by creating templates and scripting instance creation based on those templates - but ideally we'd have had the templates themselves, as well as the network side, generated by Ansible.
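The scripting part is roughly this shape - a minimal sketch assuming a launch template (here named "web-standard") already exists:

    # Launch an EC2 instance from an existing launch template (the name is
    # hypothetical), so the specs live in one reviewed place instead of
    # being re-clicked in the console every time.
    import boto3

    ec2 = boto3.client("ec2")

    resp = ec2.run_instances(
        LaunchTemplate={"LaunchTemplateName": "web-standard", "Version": "$Latest"},
        MinCount=1,
        MaxCount=1,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "Name", "Value": "web-04"}],
        }],
    )
    print(resp["Instances"][0]["InstanceId"])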
But that's the problem: The complexity of doing that properly is pretty much the same as just doing your own hardware (which is what I'm working with most of the time - handling stuff on physical servers). And at that point the question should be why you're paying AWS so much money and pay your people to automate AWS workflows when you could just pay them to automate workflows on physical hardware, which would be way cheaper to run than the AWS instances.
I'm with you. Nobody serious uses the UI to make changes with AWS. At the very least, use the AWS CLI. IaC is the norm though.
I'm tired of people acting like complex infrastructure tooling is adversarial because it's not completely intuitive. Infrastructure is hard. AWS can give you tooling and docs with patterns to follow, but they can't read your mind. Neither can the PaaS providers - they just make choices on your behalf and hope it won't matter to you.
> AWS stomped on open source projects - despite the clear desire of projects like Elasticsearch, Redis, and MongoDB not to be cloned and monetized, AWS pushed ahead with OpenSearch, Valkey, and DocumentDB anyway, capturing the hosted-service money after those communities and companies had built the markets; the result was a wave of defensive licenses like SSPL, Elastic License, RSAL, and other source-available models designed less to stop ordinary users than to stop AWS from stripping open-source infrastructure for parts, owning the customer relationship.
This is completely backwards, at least with OpenSearch and Valkey. AWS didn't create the forks until after the upstream projects changed their license, so it's really weird to say that the forks "resulted" in the license changes when those forks were a response to the license changes. With Valkey in particular, it was members of the former Redis core development team that created Valkey.
A lot of these projects work on a business model where they open-source their core product, and provide advanced services, installation, maintenance or fully-managed services around their product. AWS was bypassing them by providing fully-managed services. On this, I am on the side of the people behind the projects. Basically AWS was eating their lunch. They had no choice but to change the licenses.
They have a problem with their business model, then. License changes to a formerly open source project are costly. The community reacts very strongly when license terms change after they've come to depend on a product, and they should.
Why do we apply this standard to MongoDB but not to Apache, Linux, Postgres, or MariaDB? One purpose of an open source license is to allow many providers to provide the service. As I've talked about here previously, Elasticsearch wasn't able to provide the service I needed, so I had to move to AWS.
It's weird to me that the Hacker News community doesn't think that sort of competition is good. The narrative seems to be that all these businesses are somehow victims of AWS, when it seems the truth is much more straightforward: they provided open source software and people used it. The fact that their business had no working plan to actually monetize that foundation should not be taken out on the community.
Competition would mean Amazon creating their own software. Taking software others made and using your monopoly ecosystem and scale to drive the original creator out of the game kills the product.
Many support breaking up Amazon so that others could compete, not killing small entities while Amazon grows.
> Taking software others made and using your monopoly eco-system and scale to drive the original creator out of the game kills the product
They took software that others gave away for free without restriction and did what they wanted with it. It took time but the community figured out this exploit path and patched it in subsequent license versions.
It's not just Amazon, it's also smaller providers like Dreamhost, which I've been using for 20 years. I feel like people are in favor of killing the hosting ecosystem so that we can support businesses that didn't have a working plan to monetize their open source offering.
agreed. i’m no aws apologist but if you’re going to try to monetize open source and then complain when someone else does it more efficiently/effectively, it really feels disingenuous. “we were going to do that, but they got there first. it’s not fair.”
i’m only familiar with the postgres side, but it seems like a more nuanced view of this debate would be to discuss aws monetizing open source relative to their upstream, community-beneficial contributions.
We are supportive of 3rd party ink cartridges, and there's little concern for the business model of the printer manufacturers. We instead care about the rights of the folks using the printers.
With Postgres, no one bats an eye that there are thousands of hosting companies providing Postgres as an offering, and they give nothing back to the project. Same with Apache, Nextcloud, Linux, Nginx, Sqlite, and thousands of other pieces of open-source software. Are folks against hosting companies like https://yunohost.org/?
It's only when (1) the software is open-source, and (2) the entity behind it doesn't know how to sustain itself with open-source, that we suddenly change positions and view the project as a victim. This doesn't happen with printers, it doesn't happen with other open source software. I'm not even against a change in the license, but claiming that AWS is evil for doing this doesn't track.
A lot of those projects are not companies selling software. They're effectively public infrastructure projects, often governed by non-profit foundations or community institutions.
Also, many of them predate hyperscalers and developed governance/economic structures that make them harder for AWS to capture or destabilize, whereas AWS free-riding a vendor-controlled project can destroy the economic engine sustaining the project itself.
Quite ironically, the only example from your list that doesn't predate hyperscalers (Nextcloud) is fundamentally a self-hosting/federation product. It exists largely as an alternative to hyperscaler-native platforms, not as a cloud primitive AWS can easily commoditise into its own stack.
So, treating PostgreSQL, Linux, Elasticsearch and Nextcloud as interchangeable "open source projects" ignores the completely different institutional and economic realities behind the projects.
That's the desired outcome of competition but the effects can go all over the place and the second-order effects in fragile towns can matter more than the price drop. As an extreme example, some people may lose their jobs, local spending may fall, some small shops may close and Aldi may pull out too, so everybody loses (here's [0] as an approximate example).
Usually a community can tolerate changes only when it's not already near the bottom. When you're near the bottom, almost any destabilisation can kill your little system.
[0] https://www.fox32chicago.com/news/aldi-closes-west-pullman-c...
Arguably the town is at fault for choosing to permit Walmart to open in their town, in that analogy. If you want to control the negative externalities of capitalism, you can't just decline to regulate and hope things will work out.
Even if it weren't AWS, someone else with enough determination could use the same open source code to create a compelling alternative taking away business from the original authors. Trying to use social norms to make people not do that is not effective. You need mechanisms that can be enforced via legal procedures to be effective.
Just because they picked a bad business model doesn’t mean they deserve to avoid competition. Don’t give away your source code if you don’t want someone else to provide hosting.
> But those license changes were a response to how AWS was monetizing their work in ways unsustainable for the upstream projects
Or, seen from the other side, these projects chose initial licenses that didn't fit with how they wanted others to use their project.
If you use a license that gives people the freedom to host your project as a service and make money that way, without paying you, and your goal was to make money that specific way, it kind of feels like you chose the wrong license here.
What was unsustainable (considering this perspective) was less that outside actors did what they were allowed to do, and more that they chose a license that was incompatible with their actual goals.
Sometimes I wonder how much it would hurt Amazon to pay the creators and maintainers of the OSS software they sell 1 cent per billing period of use (1 hr?). I also wonder how much money that would offer an OSS team to contribute, risk-free, to improving the product.
Of course AWS didn't create the forks until the projects changed their license to disallow AWS from making money from their code! That's the whole point here.
When they changed their license, they were no longer open source. They could have chosen open source licenses such as the AGPL, but they did not. They were a non-open source company at that point, and AWS was putting out a product built on open source. Simple as that.
Redis was not an open source company when AWS moved to Valkey.
Companies are free to license under the AGPL if they want. Or other open source licenses.
Sorry, but non-open source companies aren't getting sympathy from me because they are hating on open source projects.
These were open source projects that had to change licenses away from open source because of AWS. I'm not sure how the OSS companies are the bad guy here.
I think there's plenty of room for people to object to the "had to change licenses" framing. They chose to change licenses, same as they chose the original license.
That original license probably helped them with goodwill and to gain a community; when those benefits no longer exceeded the downsides of using that license, they changed licenses to one that suited them better.
Naturally, this change costs them some amount of goodwill, the very goodwill that they harvested by choosing an open-source license in the first place.
I don't see this as an issue with the company. They were happy to release their code as OSS, as long as that allowed them to make enough money to develop the software. It was a win/win, and then AWS came and took advantage of that.
If you leave some apples at the side of the road, with a sign "$1 per apple" or whatever, and people largely pay enough for you to continue to pick apples, that's great. If someone starts coming every day and taking the entire crate, I don't blame you for discontinuing the convenient apple sales, I blame the thief.
I've transitioned between cloud services and self-hosting a few times:
1. Vercel Phase
My first project used Vercel. Since my project was Next.js, the experience was decent. But as my project gained some users, I found that even for projects under 100 users, I needed to pay $20 per month. Since my service didn't require high performance, this cost felt steep.
2. Self-host Phase (Hetzner + Coolify)
Later, I started setting up my own server with Hetzner and deploying with Coolify. Since Coolify is open-source and free, I only had to cover the cost of a VPS (even $5 a month was sufficient). I could deploy PostgreSQL instances and run a web server on it.
But later I discovered that even this way, I still had to spend a lot of effort maintaining PostgreSQL and Redis. Even though they were containerized with Docker, managing them was still troublesome. I needed to pass various system and environment variables between services, which was very tedious.
3. Cloudflare Phase
So later I switched to Cloudflare. With Cloudflare Workers, I can deploy fullstack applications and use D1 Database and Cloudflare KV to replace Redis. These features can be called directly within the Worker without needing to pass environment variables.
Plus, the local development experience is excellent and the pricing is very reasonable, so I've been using Cloudflare's entire suite ever since.
Yeah, Cloudflare's offering has become everything I wanted from AWS. So much simpler to deploy a basic full stack app + files. AWS has become considerably more difficult than self hosting.
I'm surprised by the author's hate towards DynamoDB. It's probably one of my favorite AWS Services. Great availability and no operational overhead. Cost was pretty minimal too each time I've used it, but you do need to spend some time architecting your data model up front, and that requires reading service docs and understanding it.
That's pretty much what I came into this thread to say. The thing I'd add is, DynamoDB is pretty nice if you understand how it's meant to be used - a relatively dumb key-value store with good persistence and table-size scaling to the sky. Definitely don't attempt to use it as a SQL database.
The best way I can come up with to rack up a $75 bill for some prototype code is to vibe-code a thing that attempts to treat it like a SQL database with JOINs and GROUP BYs etc. Or similarly write code against it absent-mindedly with about as much understanding as a 2-year-old free AI tool.
Where it really shines is use cases like: I need one or two simple, relatively small tables of persistent storage and don't want to deal with a full RDBMS. Or: I need one ridiculously huge table queried in a relatively simple way, and don't want to deal with fitting that data into a RDBMS.
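That usage pattern really is about this boring, which is the point - a rough sketch, with the table name and schema invented for illustration:

    # Plain key-value usage: one table, put and get by primary key, no JOINs.
    import boto3

    ddb = boto3.resource("dynamodb")
    table = ddb.Table("game_scores")  # hypothetical table with pk "player_id"

    table.put_item(Item={"player_id": "p123", "score": 9001, "level": 4})

    resp = table.get_item(Key={"player_id": "p123"})
    print(resp.get("Item"))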
I don't work in that area, so I only touch AWS once in a while for personal fun projects.
And every time it's a nightmare. I'm just banging out a server for my experimental card game, not setting up a new financial institution. Everything looks as if I'm preparing to scale to infinity tomorrow, with a staff of a thousand and a budget backed by VCs.
Fortunately there's Netlify and similar, who put a gloss on it so that I don't have to boil the ocean. I figure that one of these days I might actually be forced to learn IAM and VPNs and God only knows what else. Meantime, every time I touch it my eyes bug out.
You can just spin up a raw VPS on EC2 or Lightsail, give it a public IP, and call it a day. You aren't required to implement every enterprise pattern in the book.
If there is any single service I'd avoid on AWS it's Lightsail, it'll cost you a lot more than almost anything out there, is slow as molasses (even tiny services can need tens of minutes to deploy) and you'll experience random failures not even AWS reps can explain to you. Avoid at all costs.
It's a ghost of its former self, but I'd probably still rather use Heroku today than being forced to use Lightsail even once again.
For 90% of the time when they're up.
But that's costly. Speaking from my own experience: going from a webapp fully hosted on an EC2 instance to a Railway and Vercel setup reduced my costs 10x.
I miss Heroku dearly. Somewhere at Salesforce there is an exec who killed the product and shifted it to enterprise, and who is now looking at the vibe coding revolution and seeing the opportunity they missed.
AWS is aimed at enterprise, not personal projects. Personal projects wouldn’t give them any meaningful revenue because the only thing that matters is cost.
AWS has been systematically hollowed out of technical staff since 2023, either through mass layoffs or via two cycles of performance improvement plans. Often I find the most skilled peers in presales or support are no longer with AWS, whilst the ones with the most ambiguous work history have been retained and promoted.
Use AWS at your own risk, Paul Vixie is not there to save you.
> If you're using AWS Lambda then you have to work to keep convincing yourself this is better than your own web servers. Keep convincing yourself that using AWS Lambda is not a horrible mistake.
lol ok. I have ~50 lambdas running in my personal aws account. Some of them are web servers running behind an api gateway or using a lambda function url to expose them to the internet. Some are running on a schedule, some are triggered from s3 events. The cost to run these for me is less than the cost of the cheapest vps (my total requests per month stay under the free tier limit). There is also zero maintenance I need to do for these functions (ok, this year I did have to find-replace al2 to al2023 in my terraform config). I don't have to worry about making sure the os is patched for the latest vulnerabilities. And I don't have to worry about the specific hardware my code is running on at any time. Doing maintenance for old projects sucks. It is great to have servers I deployed years ago continue to chug along without me needing to think about it.
Now, all of my lambdas are written in Go and I suspect if I was using one of the managed runtime libraries I would find the language upgrades to be quite annoying. Go also helps quite a lot with cold start times.
Then again maybe I have just drank the koolaid. In my quest to use lambdas for as much as I can as cheaply as I can, I made a library[0] to use sqlite on top of s3 (not just readonly). It uses the sqlite session extension plus s3 compare-and-swap to allow you to write updates safely to s3, even if you have concurrent writers.
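The core trick there is S3's fairly recent conditional writes. A rough sketch of the compare-and-swap loop in Python - not the library itself, just the primitive it builds on; bucket and key are made up:

    # Optimistic-concurrency update of an S3 object: re-read and retry if
    # someone else wrote between our GET and PUT. Assumes a recent boto3,
    # since S3 conditional writes (IfMatch) only shipped in 2024.
    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client("s3")
    BUCKET, KEY = "my-bucket", "state/counter.txt"  # hypothetical

    def cas_update(transform):
        while True:
            obj = s3.get_object(Bucket=BUCKET, Key=KEY)
            etag = obj["ETag"]
            new_body = transform(obj["Body"].read())
            try:
                # Succeeds only if the object still has the ETag we read.
                s3.put_object(Bucket=BUCKET, Key=KEY, Body=new_body, IfMatch=etag)
                return
            except ClientError as e:
                if e.response["Error"]["Code"] != "PreconditionFailed":
                    raise
                # Lost the race; loop to re-read and retry.

    cas_update(lambda body: str(int(body) + 1).encode())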
As you yourself said, your load is so light you stay in the free tier. Their entire business model is to capture you while your load is light, and then when you scale, the price goes up.
+1 on the IAM over-engineering, though to AWS's credit, I suspect it evolved rather than being designed, and that's what you get when evolution has to maintain some level of backward compatibility (think humans still having to be able to lay eggs).
Another thing that happens occasionally to SaaS companies is AWS creating a copy of their product in a bit of a sus way - but that's not a technical problem, it's a business model problem.
The billing footguns are a major pain point for anyone that doesn't have the capital to just throw blind faith and a credit card at the problem. This of course is not limited to AWS...
Something that has always bothered me an outsized amount is Elasticache.
I will bite the bullet and pay for RDS because it adds a lot of value - scalability, a reasonably optimized config, backups I don’t have to worry about.
But Elasticache is exploitatively priced with almost no value add.
It is slower, less optimized, less stable, and only supports one DB compared to a vanilla redis install with zero configuration.
There are some scalability improvements, but it’s extremely rare they’re even required because vanilla redis so wildly outperforms elasticache on a similar instance.
the A.I (LLM) merchants will tell you that AI is now writing software (agentic coding, they call it) - yet they can't bill you properly, or their billing mechanism is broken.
their dashboards are trash & don't work - Google Cloud, AWS Console, Google Ads, Meta Ad manager
I won't even mention the hyped up LLM vendors.
but here we are - people being laid off due to A.I, money being funneled into gigawatt datacenters
I don't think that's the real issue. The problems with billing and dashboards at cloud vendors are not new within the past few years, they have existed far longer than the LLM coding.
This is always the weird thing in those rants. He's complaining that after 4 days his mail is offline.
Now I'm doing a mix of physical servers in rented rackspace, and rented servers - but even there I can have billing mixups where they deactivate servers for no good reason. And to get email working again the limiting factor would be the DNS TTL - new servers would be online somewhere else within hours of it going down. (And yes, I tested that just last year - one hoster threatened cutoff due to non-payment on a paid invoice, which prompted me to move the mail server just in case while getting this resolved).
The only way that email is down for days for a competent sysadmin is if their DNS is also with AWS, so I assumed that was the case. Assuming that is true, what is weird to me is that, after deciding he hated AWS and leaving it, he still kept his business DNS (the most important service there is) with AWS.
If you had read the article, you would know that the writer had DNS hosted at AWS, would have read why he made that choice, and would know of his plans to migrate off.
You can accomplish a lot by just having a basic knowledge of Linux sysadmin. I was clueless and then learned some systemd-and-curl-fu. Will never forget the "holy sh*t, this is deceptively simple" moment. A bit more research and I found that beyond convenience and specialty APIs, you really just don't need a lot of this stuff to run a healthy system (since reducing absolute cloud dependence, my reliability has gone through the roof).
100%. I'm not really sure why we all agreed that deployment is somehow the hardest thing, something you need to outsource, when setting up a Linux server is one of the richest experiences you can get, and it will pay dividends forever.
How is this an issue in a world where load balancers exist? I was part of a Unicorn that ran prod on 8 boxes and literally never had customer facing outages due to infrastructure updates.
> I am reminded why I left AWS and how I need to finish the job, get off AWS Workmail, move my domains from Route53 and never return.
Well, besides the fact that the author's account got suspended for no reason, WorkMail is being shut down in March 2027 anyway. I recommend checking out Purelymail for a budget, batteries-included option. Another option is to run your own server but have it use something like AWS SES to send externally, avoiding the IP reputation issue.
My current favorites on AWS, in no particular order:
1. IAM and policies. I’m not convinced that anyone knows how IAM rules and policy rules interact. There’s a flow chart that appears to be incomplete. There is not obviously a complete enough spec that one could, say, write a test suite to confirm that the actual behavior follows the spec. (The closest approximation is sketched after this list.) LLMs, of course, don’t know either because the training data does not exist.
2. Utter nonsense pricing. The cost of listing an S3 bucket goes up by an order of magnitude if you set the default storage class to archive despite this having nothing whatsoever to do with the operation in question. (But GCS adds two orders of magnitude for the same offense.) Conclusion: NEVER EVER set your default storage class to an archive tier.
3. Boto. It’s an Unbelievable Piece Of Crap. It’s not a library at all — it’s a meta-library that generates itself at runtime because someone had fun doing that and because Python didn’t stop them. Python type checkers, of course, just give up (a partial workaround is sketched after this list). And Boto is, um, a community project that AWS claims not to care about. Which is, of course, why its maintainers refused to fix an interop bug with GCS (I fully documented the entire bug for them, and the fix would have been the removal of a bit of pointless code).
4. Egress pricing. And the way it multiplies if you use any advanced VPC features. Why on Earth is it cheaper to send an object to S3 from my own machine than to send the same object to the same endpoint from within a different AWS region nearby?
5. Authentication. It’s so bad that they invented Identity Center to try to unsuck it. But if you use Identity Center you get logged out even while actively using the console, and you get a helpful link to the WRONG PLACE to sign back in. Because of course core AWS isn’t even aware that Identity Center exists.
I don’t even use AWS very much. I’m sure I would fall in love with more of it if I did.
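On 1: to be fair, there is at least a policy simulator API you could build a crude test suite on - whether its verdicts always match production behavior is exactly the open question. A hedged sketch, with the policy, action, and ARN purely illustrative:

    # Ask IAM's simulator whether a custom policy would allow an action.
    import json
    import boto3

    iam = boto3.client("iam")

    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::example-bucket",
        }],
    }

    resp = iam.simulate_custom_policy(
        PolicyInputList=[json.dumps(policy)],
        ActionNames=["s3:ListBucket"],
        ResourceArns=["arn:aws:s3:::example-bucket"],
    )
    for r in resp["EvaluationResults"]:
        # EvalDecision is "allowed", "explicitDeny", or "implicitDeny".
        print(r["EvalActionName"], r["EvalDecision"])

And on 3: the type-checking gap has a community workaround in the third-party boto3-stubs / mypy-boto3 packages (not AWS's doing), which generate static types for the runtime-generated clients:

    # pip install 'boto3-stubs[s3]'
    import boto3
    from mypy_boto3_s3 import S3Client  # stub-only type from boto3-stubs

    client: S3Client = boto3.client("s3")  # mypy/IDEs now see the real methods
    print(client.list_buckets()["Buckets"])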
I'd tend to agree with the author. If forced to choose a cloud platform though (and that often is the case) then AWS is probably the best of the bunch in terms of reliability. Have heard and experienced some real horror stories with Azure & GCP by comparison.
Slightly different but related topic - for people who work with people vibe coding, what is the easiest way to allow that for non tech users (and reducing risk)? AWS or something like vercel? Coolify?
I'm old and bitter about this, but you're not reducing risk by going with PaaS, you're just outsourcing it. That recent "My AI Agent deleted my prod DB" story was only possible because the PaaS they were using allowed for 1-click permanent delete. At least AWS has a "prevent accidental termination" checkbox.
Nobody wants to hear this, but as things stand, there's no escaping risk for vibe coders right now. Personally, I think AWS is still a good choice for the long run, but don't make the mistake of thinking current LLMs will actually be able to manage the environment on par with a decent infra engineer. That's one of their weaker areas right now. Good news is there are million managed service providers and AWS-competent humans still in existence. Also Premium Support is a good resource.
Whatever you do, make a lot of backups and store them on a different service somewhere. Then if you get to a situation where you need to do something with sensitive data, or need to raise money, engage with someone who can do a proper review.
Vercel and supabase seems to be the norm around here.
DX is simple, integrations between the two, and the stack is well understood by the LLM.
Lovable uses supabase, and is surprisingly easy to eject from too; I've done the lovable to Vercel + supabase a couple of times, even managing to keep it syncing via the Git integration. You can get proper scalable infra and minimal vendor lock in whilst the vibe coder gets to play with the pretty.
The set of core services on AWS remains amazing: EC2, S3, IAM, EKS, Route53, RDS etc.
AWS IAM is extremely well designed when you compare it with the spaghetti monster IAM systems of other clouds.
Every time I try the new cool thing supposed to replace these services on some other provider - I understand how mature and polished the AWS ones are.
With that said, the other 90% of AWS services, like WorkMail, Cognito, and API Gateway, are absolute hot garbage which no well-meaning AWS expert will touch with a 10 meter stick.
> Cloud computing was an absolutely mind blowing revolution - suddenly your startup could run its own computer systems in minutes without need to install and run your own systems in a data center. This was an absolute game changer, and I really drank the AWS Kool Aid down to every last drop then I licked out the cup. I was all in on AWS in a big way.
Am I the only one who remembers that VPSes and dedicated hosting services were a thing before AWS came around? Yes you had to pay for a month at a time and scaling wasn’t as instant, but it wasn’t like the only option before cloud computing was having to drive to the datacentre and install your own server.
> suddenly your startup could run its own computer systems in minutes without need to install and run your own systems in a data center.
The “in minutes” is doing a lot of the work in that sentence above.
I also used dedicated servers in the late ’90s (and they still offer great value today). But before AWS, provisioning new hardware typically took days, not minutes.
AWS changed that, and the rest of the industry eventually followed.
No you could rent virtualised servers way before AWS.
AWS simply had good marketing.
The virtualised server thing was not an AWS thing; what was new were their other services. For example, instead of renting a virtual server and installing a database on it, you could rent the database itself; that was sort of a new thing that AWS made into a thing.
It was never cheaper. What you paid for was a promise of fire-and-forget: you would no longer need to worry about updating the server or the database, because the AWS crew took care of that.
> I also used dedicated servers in the late ’90s (and they still offer great value today). But before AWS, provisioning new hardware typically took days, not minutes.
VPSes and non-custom configs for dedicated servers were pretty instant as far as I know, I think the advantage of AWS was more that you could scale up and down much more easily since you weren’t locked down in a monthly contract, and that you could automate server provisioning through an API.
If you recall AWS didn't scale instantly originally either.
We had super bursty traffic, and had to go with Google Cloud (very early days! [0]) because you'd need to communicate with AWS and pre-warm the ELB capacity for your expected bursts.
We did a dead launch to 60 million customers (0 to 60 million, no organic growth phase) this way. I wouldn't want to do that on a VPS.
Am I the only one who remembers how shady a lot of those VPS/hosting companies were? It seemed to be a race to the bottom, so a 'good' outfit might suck or completely disappear a couple of years later. (Also, pricing was all over the map; I had a client who was paying $150/mo for a VPS.) Hetzner survived, but for a long time they had a reputation as a spam farm. So I get the initial appeal of AWS, used tactically. But for larger companies, it's something like IBM or Oracle: if you are price-sensitive, it's not for you.
Here's a fun game to play. Every time you see a negative story on the front page about some US company or technology (be it Amazon, Google, OpenAI, you name it) go look at the submitter's info and see if they're European (or in this case, Australian, same difference). You'll find that in 99.99% of cases they are. Isn't that funny? Isn't that an interesting coincidence? How might you explain that?
At last my quest to find the stooge has come to a bitter end!
I saw some 192 core instances on Vultr, but I haven't tried them yet. What are you doing with all them cores?
I often fantasized about spinning up hundreds of nodes for various projects that needed number crunching. Then realized "wait I can just rent one big box for an hour" haha. It's really cool that we can do that now.
What do you find appealing about GCP? I occasionally hear positive sentiment like this but don’t entirely understand the reason, mostly because I haven’t used non-GCP clouds professionally. Is it just the least bad of all the big clouds?
>works for AWS
>quits the BS
>needs more money
>sells himself as a meat puppeteer once again to AWS
>big bs corpo is still the same surprisedpikatchu.jpeg
ok
There was a time when AWS was truly innovative, but it’s long since transformed into Amazon’s cash cow and is behaving like such.
Innovation has ground to a halt, with mostly just meh “hey, us too” launches. Pricing and design patterns feel increasingly focused on locking you in. AWS folks tell me that internally they talk a lot about making sure things are “sticky” with customers. The best engineering talent no longer wants to work there and it shows, especially in places like AI, where AWS has just released wave after wave of discombobulated nonsense.
As a core “rent-a-server” concept with a few add on services there’s still a lot of utility, but AWS is gradually becoming a boring baseline utility with a ton of distracting half baked stuff jammed on top. Most companies I talk to are no longer focused on single cloud and increasingly are bringing a lot of workloads back on prem or in colos. Not everything, but for a lot of stuff that just makes more sense and is a heck of a lot cheaper.
The chips business in Annapurna is probably the most interesting thing and that plays to its strength of the boring low level infrastructure stuff. Nearly everything AWS tries to do beyond chips and rent-a-server plays is a hot mess.
AWS isn’t going away, but its future looks a lot less exciting and inspiring than the story that got us to this point.
> Of course I do not pay for premium support, so I have to wait the 24 hours that they said it would take them to reply. It's 3 days and AWS support has not replied.
The writing has been on the wall for a few years now, and this is particularly evident to those that have worked at AWS: Amazon is in its day-2 era.
Amazon being in its day-2 era means that most of what has been written in the past twenty years about Amazon is not valid anymore.
“Customer obsession” is literally their first leadership principle, and stellar support was their defining characteristic.
I worked for AWS Premium Support over a decade ago. Waiting 3 days or more as a non-premium support customer would not have been unusual back then either. But at least the quality of response was typically pretty good; I haven't seen what it's like lately.
They've always struggled to hire for those roles. The people who are best at Engineering Support also tend to be the people who move on to other roles after a year or two.
This belies the fact that AWS is so far in the lead in cloud market share and even hosts so much of Anthropic's business. If you dabble, it's confusing. If you're an enterprise with a lot of expertise, then it's indispensable.
I’ve a couple of apps doing a few million a day. I am using Hetzner and before that used DigitalOcean. Mind you, for close to a decade.
People are unnecessarily complicating stuff, and these clouds can get very expensive very quickly.
Recently, I came across a company and they were spending $20k a month on GCP. I am like, are you kidding me, $20k for the kind of stuff you do??? It seems you do not understand how CPU, RAM, and disk work if you plaster "autoscaling hyper solutions" everywhere, burning money in the cloud.
I moved their stuff out of the GCP managed solution and ended up with a $200-400 per month bill. The CEO can still not believe how it's even possible.
I suggested they move to dedicated servers, but they didn't want that; they said they must show they are on a hyperscaler cloud.
OK fine, we'll stay on a hyperscaler, but not use any of their services other than VMs.
They had racked up a ton of bills by using cloud monitoring, Datastore, autoscalers (with no proper tuning), and Kubernetes.
I replaced all of it with Prometheus, Grafana, and Loki, and moved most stuff from Datastore to Postgres and Mongo with replicas. I added Redis.
I implemented a custom scaler where you can scale off of app metrics, not just a random peg on CPU.
I implemented hot data reloads by packing the data updates into a gzip file, uploading it to GCS, and pulling it from the autoscaled units. I moved the stuff to Spot VMs.
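The custom scaler is less exotic than it sounds. A rough sketch of its shape, assuming a GCP managed instance group and an internal endpoint exposing queue depth - all names and thresholds are invented:

    # Resize a managed instance group based on an application metric
    # instead of a CPU threshold.
    import requests
    from googleapiclient import discovery

    PROJECT, ZONE, MIG = "my-project", "us-central1-a", "worker-group"  # made up
    compute = discovery.build("compute", "v1")

    def desired_size(queue_depth: int) -> int:
        # One worker per 100 queued jobs, clamped between 2 and 20.
        return max(2, min(20, queue_depth // 100 + 1))

    depth = int(requests.get("http://metrics.internal/queue_depth").text)  # made up
    compute.instanceGroupManagers().resize(
        project=PROJECT, zone=ZONE, instanceGroupManager=MIG,
        size=desired_size(depth),
    ).execute()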
The complexity of stuff in cloud is high for nothing.
At my previous startup: because AWS gave us a bunch of credits and helped us design the infra. It meant we ran for free what they designed for free.
At a previous bigger company, getting procurement to sign up to a new provider requires writing a business case, justifying the spend, and then getting multiple competing quotes and speaking to their sales teams. Signing up to a new service takes _months_ even for $10/mo, as they’ll negotiate for bulk discounts and the best possible terms for something that will literally cost less per year than one of the meetings they hold to discuss the “value”. Meanwhile on AWS I can click a button in the marketplace and it gets thrown in the AWS account, which is pre-approved spending.
Many a big company migrated because they have those very same slow procurement problems with internal data centers. I saw multiple cloud migrations because internal friction was at a level that the price didn't matter: 6 months for the smallest VM kind of thing. Very adversarial relationships, often with very poor incentives, as the service setup costs for other business units were way inflated, but then the maintenance costs didn't pay enough. Paying 3x-4x more a year for just a semblance of reliability was seen as a big plus.
If you let it go that far then you were going to blow it one way or another. It’s not an excuse to totally ignore the cloud spend, but it is an excuse to defer it to a later date. If you’re successful, fix it; if you’re not, then AWS aren’t getting paid anyway!
I think AWS is liked because, when AWS started, being able to get a new VPS up in minutes was still quite unusual. Many hosts would require about 24hr, I suspect, for getting a new VM up. At least those are some experiences I had. But nowadays, there are probably many options for getting a VM instantly.
I agree that it's overcomplicated. Although having the self-service portal also for assigning IPs is useful. But most of it seems overkill. Although, being able to detach storage from VMs and such is also quite flexible. But still.
It’s flexible but slow. We ran our C++ CI/CD on AWS at a previous company, and we used spot instances with volumes attached and detached dynamically. The performance was absolutely abysmal, because in effect you’re running compilation across a networked file system, no matter what AWS says your throughput is.
Our 64-core spot instances on Windows were taking 8-10x longer than our developer machines with the same core count, and a bunch of engineering went into the scaling, queue management, etc. If we’d just had a single bare metal machine from Hetzner we could have saved money _and_ reduced our iteration times.
One is just the disassociation from opex that seems ever present in the VC model. The other is that many startups settle on an ops solution before hiring ops, and the cost of switching isn't that attractive until they're faced with a dwindling runway and a down round.
The ease of getting things set up quickly and usually for free when starting up is very tempting. Later, migration is usually considered risky and not worth it because of maintenance overhead - which I would argue has become very easy.
Grafana (and especially Loki) is hot garbage compared to what you get out of the box in GCP. I'm in a Grafana organization today and the sheer amount of developer and devops time it wastes is mind boggling.
You moved something from a single datastore to three different database technologies? I don't know your domain, but that sure doesn't sound like a complexity reduction.
>You moved something from a single datastore to three different database technologies? I don't know your domain, but that sure doesn't sound like a complexity reduction.
what's bad about grafana? it's simply used for some alerts and monitoring. i've used it for a really long time and it has never failed me, not even once.
it's much simpler to query postgres or mongo than to duplicate data dozens of times in datastore.
I worked for a startup company - the founders were really nice people and had put their own money in - quite a lot of money - to get the software built for the vision they had.
By the time I joined, 18 months after development had started, a giant, complex, hideously tentacled software beast had been built that used every possible AWS service that the massive offshore team of developers could find to use.
It should have been built on a single Linux box by a single senior developer with Python and Postgres or nodejs or Ruby or whatever.
They went out of business after not too long and I couldn't help wondering if things might have been different if they hadn't spent a fortune building a giant money making machine for AWS, instead of making a web application on a Linux box.
Every AWS project I have worked on has had some significant work put into programming AWS instead of writing business functionality.
> hideously tentacled software beast had been built that used every possible AWS service that the massive offshore team of developers could find to use
To be fair, if they had an AWS Solutions Architect involved, they heavily push you down this road, and if they manage to get in management's ear they'll push the idea that serverless AWS features are vastly cheaper.
If you're only responding to a handful of requests that's true, but once things ramp up you get "nickel and dimed" for everything: API Gateway requests, lambda execution time, DynamoDB read/write units, CloudWatch logs, outgoing data, step function transitions, S3 requests.
I understand all those services cost money and they shouldn't be free, but I question whether paying all those micro-transactions is worse than paying for your own VMs, especially once your customers complain about the cold starts and you think you can fix it with "lambda warming".
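To make the nickel-and-diming concrete, a back-of-envelope in Python. Every rate below is an illustrative ballpark I'm assuming, not a current list price - check the calculator before trusting any line of this:

    # Hypothetical serverless stack at 50M requests/month; all rates are
    # assumed ballpark figures for illustration only.
    requests_pm = 50_000_000
    apigw   = requests_pm / 1e6 * 1.00     # ~$1 per million HTTP API calls
    lam_req = requests_pm / 1e6 * 0.20     # ~$0.20 per million invocations
    lam_gbs = requests_pm * 0.05 * 0.5 * 0.0000167  # ~50 ms at 512 MB each
    ddb     = requests_pm / 1e6 * 1.25     # assume ~1 write unit per request
    logs    = 50 * 0.50                    # ~50 GB of CloudWatch ingestion
    total = apigw + lam_req + lam_gbs + ddb + logs
    print(f"${total:,.0f}/month")  # each line item looks tiny; the sum is not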
To be fair, that’s an AWS problem, not a Lambda problem. If you replace Lambda with EC2, the only things you save on are Lambda and Step Functions (and maybe API Gateway, but then you need to pay for a load balancer or a public IP); the rest you need to pay for anyway.
This isn’t a like for like comparison though, is it.
You removed all of their logging and all of their redundancy and reliability and replaced it with shitters that will all explode if the small providers one data centre goes down.
And if someone penetrates this mega server, they’ll be able to wipe all your logs or tamper with them, to hide the attack.
If your storage servers go down, everything they have is gone. And these providers don’t offer the finest hardware. How do you know all of those drives aren’t from the same batch? They will be, because they’re a bulk buyer with a single data centre.
>You removed all of their logging and all of their redundancy and reliability and replaced it with shitters that will all explode if the small providers one data centre goes down.
they'll never need it; a misconfiguration on those services ends up costing several grand.
>If your storage servers go down, everything they have is gone
It’s just logs for an app server, not some banking critical info that will cause a panic if lost. Most of what they are using for logging is for finding some errors, not for mission-critical things which must not be lost.
Lambda is incredibly simple to use, it just runs a function for you.
Not sure how you could burn so much with DynamoDB. It’s serverless and incredibly cheap. Must have been doing something insane, like scanning through a huge dataset over and over.
Being salty that Gary couldn’t sell enough of his paid service and AWS is competing with it isn’t a meaningful complaint. I want something in AWS, not on Gary’s servers.
I wish our Lambdas would spin up faster, but otherwise I've been very happy with them over the past six years. We seldom run over the free tier limits, and when we do we get a bill for a couple of dollars. Dead simple to code for, dead simple to spin up a new instance or scale an instance if we need to.
This is false. The price shows up right away when you select a machine. I don't work for AWS...
Some devs are really good at that but the vast majority are horrendously bad at UX.
My point is that maybe it wasn't intentional, just bad UX culture.
No thank you
i just use vantage (https://instances.vantage.sh/) now. their api is functional and reasonable.
Or you can have your own negotiated private pricing which is a whole different story in itself.
It should really be a read-only layer for metadata and logs.
Just try a little bit of understanding.
https://www.eff.org/deeplinks/2019/06/felony-contempt-busine...
We are supportive of 3rd party ink cartridges, and there's little concern for the business model of the printer manufacturers. We instead care about the rights of the folks using the printers.
With Postgres, no one bats an eye that there are thousands of hosting companies providing Postgres as an offering, and they give nothing back to the project. Same with Apache, Nextcloud, Linux, Nginx, Sqlite, and thousands of other pieces of open-source software. Are folks against hosting companies like https://yunohost.org/?
It's only when (1) the software is open-source, and (2) the entity behind it doesn't know how to sustain itself with open-source, that we suddenly change positions and view the project as a victim. This doesn't happen with printers, it doesn't happen with other open source software. I'm not even against a change in the license, but claiming that AWS is evil for doing this doesn't track.
Also, many of them predate hyperscalers and developed governance/economic structures that make them harder for AWS to capture or destabilize, whereas AWS free-riding a vendor-controlled project can destroy the economic engine sustaining the project itself.
Quite ironically, the only example from your list that doesn't predate hyperscalers (Nextcloud) is fundamentally a self-hosting/federation product. It exists largely as an alternative to hyperscaler-native platforms, not as a cloud primitive AWS can easily commoditise into its own stack.
So, treating PostgreSQL, Linux, Elasticsearch and Nextcloud as interchangeable "open source projects" ignores the completely different institutional and economic realities behind the projects.
Usually a community can tolerate changes only when it's not already near the bottom. When you're near the bottom, almost any destabilisation can kill your little system.
[0] https://www.fox32chicago.com/news/aldi-closes-west-pullman-c...
Even if it weren't AWS, someone else with enough determination could use the same open source code to create a compelling alternative taking away business from the original authors. Trying to use social norms to make people not do that is not effective. You need mechanisms that can be enforced via legal procedures to be effective.
But those license changes were a response to how AWS was monetizing their work in ways unsustainable for the upstream projects.
Or seen from the other side, these projects chose initial licenses that didn't fit with their wants for how others should use their project, in this mind.
If you use a license that gives people the freedom to host your project as a service and make money that way, without paying you, and your goal was to make money that specific way, it kind of feels like you chose the wrong license here.
What was unsustainable (considering this perspective) was less that outside actors did what they were allowed to do, and more that they chose a license that was incompatible with their actual goals.
Redis was not an open source company when AWS moved to Valkey.
Companies are free to license under the AGPL if they want. Or other open source licenses.
Sorry, but non-open source companies aren't getting sympathy from me because they are hating on open source projects.
That original license probably helped them with goodwill and to gain a community; when those benefits no longer exceeded the downsides of using that license, they changed licenses to one that suited them better.
Naturally, this change costs them some amount of goodwill, the very goodwill that they harvested by choosing an open-source license in the first place.
If you leave some apples at the side of the road, with a sign "$1 per apple" or whatever, and people largely pay enough for you to continue to pick apples, that's great. If someone starts coming every day and taking the entire crate, I don't blame you for discontinuing the convenient apple sales, I blame the thief.
1. Vercel Phase: My first project used Vercel. Since my project was Next.js, the experience was decent. But as my project gained some users, I found that even with fewer than 100 users I needed to pay $20 per month. Since my service didn't require high performance, this cost felt steep.
2. Self-host Phase (Hetzner + Coolify): Later, I started setting up my own server on Hetzner and deploying with Coolify. Since Coolify is open source and free, I only had to cover the cost of a VPS (even $5 a month was sufficient). I could deploy PostgreSQL instances and run a web server on it. But I discovered that even this way I still had to spend a lot of effort maintaining PostgreSQL and Redis. Even though they were containerized with Docker, managing them was still troublesome: I needed to pass various system and environment variables between services, which was very tedious.
3. Cloudflare Phase: So later I switched to Cloudflare. With Cloudflare Workers, I can deploy full-stack applications and use the D1 database and Cloudflare KV in place of Redis. These resources can be called directly within the Worker, without needing to pass environment variables between services.
Plus, the local development experience is excellent and the pricing is very reasonable, so I've been using Cloudflare's entire suite ever since.
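For anyone unfamiliar with the model: resources are declared as bindings in the project config and show up on the Worker's env object, so there are no connection strings to shuttle around. A minimal sketch (all names and IDs below are placeholders, not from the comment above):

    # wrangler.toml (sketch)
    name = "my-worker"
    main = "src/index.ts"

    # D1 database, exposed to the Worker as env.DB
    [[d1_databases]]
    binding = "DB"
    database_name = "app-db"
    database_id = "00000000-0000-0000-0000-000000000000"

    # KV namespace, exposed as env.CACHE (the Redis replacement)
    [[kv_namespaces]]
    binding = "CACHE"
    id = "00000000000000000000000000000000"

Inside the Worker, env.DB.prepare("SELECT ...") and env.CACHE.get(key) are then directly available.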
The best way I can come up with to rack up a $75 bill with some prototype code is to vibe-code something that tries to treat it like a SQL database, with JOINs and GROUP BYs and so on. Or, similarly, to write code against it absent-mindedly, with about as much understanding as a two-year-old free AI tool.
Where it really shines is use cases like: I need one or two simple, relatively small tables of persistent storage and don't want to deal with a full RDBMS. Or: I need one ridiculously huge table queried in a relatively simple way, and don't want to deal with fitting that data into an RDBMS.
And every time it's a nightmare. I'm just banging out a server for my experimental card game, not setting up a new financial institution. Everything looks as if I'm preparing to scale to infinity tomorrow, with a staff of a thousand and a budget backed by VCs.
Fortunately there's Netlify and similar, who put a gloss on it so that I don't have to boil the ocean. I figure one of these days I might actually be forced to learn IAM and VPCs and God only knows what else. Meantime, every time I touch it my eyes bug out.
It's a ghost of its former self, but I'd probably still rather use Heroku today than being forced to use Lightsail even once again.
For the 90% of the time when they're up.
Use AWS at your own risk, Paul Vixie is not there to save you.
lol ok. I have ~50 lambdas running in my personal AWS account. Some of them are web servers running behind an API gateway or using a Lambda function URL to expose them to the internet. Some run on a schedule, some are triggered by S3 events. The cost of running these for me is less than the cheapest VPS (my total requests per month stay under the free-tier limit). There is also zero maintenance I need to do for these functions (ok, this year I did have to find-replace al2 with al2023 in my terraform config). I don't have to worry about keeping the OS patched against the latest vulnerabilities, and I don't have to worry about the specific hardware my code is running on at any time. Doing maintenance for old projects sucks. It is great to have servers I deployed years ago continue to chug along without me needing to think about them.
Now, all of my lambdas are written in Go, and I suspect that if I were using one of the managed runtimes I would find the language upgrades quite annoying. Go also helps a lot with cold start times.
Then again, maybe I have just drunk the Kool-Aid. In my quest to use lambdas for as much as I can, as cheaply as I can, I made a library[0] to use SQLite on top of S3 (not just read-only). It uses the SQLite session extension plus S3 compare-and-swap to let you write updates safely to S3, even with concurrent writers.
[0]: https://github.com/psanford/s3db
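The compare-and-swap part rests on S3's relatively recent conditional-write support. This is not the library's actual code, just a minimal sketch of the primitive, assuming aws-sdk-go-v2 and an ETag you read earlier:

    package main

    import (
        "bytes"
        "context"
        "errors"
        "log"

        "github.com/aws/aws-sdk-go-v2/aws"
        "github.com/aws/aws-sdk-go-v2/config"
        "github.com/aws/aws-sdk-go-v2/service/s3"
        "github.com/aws/smithy-go"
    )

    // casPut overwrites key only if the object's ETag still matches the
    // one we saw when we read it. If another writer got there first, S3
    // rejects the request with 412 Precondition Failed.
    func casPut(ctx context.Context, c *s3.Client, bucket, key, etag string, body []byte) error {
        _, err := c.PutObject(ctx, &s3.PutObjectInput{
            Bucket:  aws.String(bucket),
            Key:     aws.String(key),
            Body:    bytes.NewReader(body),
            IfMatch: aws.String(etag), // the "compare" in compare-and-swap
        })
        var apiErr smithy.APIError
        if errors.As(err, &apiErr) && apiErr.ErrorCode() == "PreconditionFailed" {
            return errors.New("concurrent writer won: re-read and retry")
        }
        return err
    }

    func main() {
        cfg, err := config.LoadDefaultConfig(context.Background())
        if err != nil {
            log.Fatal(err)
        }
        // The etag would come from a prior GetObject/HeadObject call;
        // bucket and key here are placeholders.
        err = casPut(context.Background(), s3.NewFromConfig(cfg),
            "my-bucket", "db-state", `"d41d8cd98f00b204e9800998ecf8427e"`, []byte("..."))
        if err != nil {
            log.Println(err)
        }
    }

Retrying on PreconditionFailed gives you optimistic concurrency; merging the changesets is where the session extension comes in.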
I will bite the bullet and pay for RDS because it adds a lot of value - scalability, a reasonably optimized config, backups I don’t have to worry about.
But ElastiCache is exploitatively priced with almost no value add.
It is slower, less optimized, less stable, and only supports one DB, compared to a vanilla Redis install with zero configuration.
There are some scalability improvements, but it's extremely rare that they're even required, because vanilla Redis so wildly outperforms ElastiCache on a similar instance.
Their dashboards are trash and don't work: Google Cloud, AWS Console, Google Ads, Meta Ads Manager.
I won't even mention the hyped up LLM vendors.
But here we are: people being laid off due to AI, money being funneled into gigawatt datacenters.
This is always the weird thing in these rants. He's complaining that after 4 days his mail is offline.
Now I'm doing a mix of physical servers in rented rackspace, and rented servers - but even there I can have billing mixups where they deactivate servers for no good reason. And to get email working again the limiting factor would be the DNS TTL - new servers would be online somewhere else within hours of it going down. (And yes, I tested that just last year - one hoster threatened cutoff due to non-payment on a paid invoice, which prompted me to move the mail server just in case while getting this resolved).
That he is complaining about his email being down or that he trusted AWS at all with email?
Yeah, no, that's not how it works with email. You have to build up reputation for weeks or receivers will throttle you.
Well, besides the fact that the author got suspended for no reason, WorkMail is being shut down in March 2027 anyway. I recommend checking out Purelymail as a budget, batteries-included option. Another option is to run your own server but have it use something like AWS SES to send externally, avoiding the IP reputation issue.
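That split is mostly a relay configuration. A minimal sketch for Postfix, assuming you have already provisioned SES SMTP credentials (region and endpoint are placeholders):

    # /etc/postfix/main.cf (sketch) -- relay all outbound mail via SES
    relayhost = [email-smtp.us-east-1.amazonaws.com]:587
    smtp_sasl_auth_enable = yes
    # sasl_passwd maps the relayhost to your SES SMTP user/password
    smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
    smtp_sasl_security_options = noanonymous
    smtp_tls_security_level = encrypt

Inbound mail still lands on your own box; only outbound delivery reputation is outsourced.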
1. IAM and policies. I'm not convinced that anyone knows how IAM rules and policy rules interact. There's a flow chart that appears to be incomplete. There is no obviously complete spec against which one could, say, write a test suite to confirm that the actual behavior matches. LLMs, of course, don't know either, because the training data does not exist. (The one well-documented rule is sketched after this list.)
2. Utter nonsense pricing. The cost of listing an S3 bucket goes up by an order of magnitude if you set the default storage class to an archive tier, despite that having nothing whatsoever to do with the operation in question. (But GCS adds two orders of magnitude for the same offense.) Conclusion: NEVER EVER set your default storage class to an archive tier.
3. Boto. It’s an Unbelievable Piece Of Crap. It’s not a library at all — it’s a meta-library that generates itself at runtime because someone had fun doing that and because Python didn’t stop them. Python type checkers, of course, just give up. And Boto is, um, a community project that AWS claims not to care about. Which is, of course, why its maintainers refused to fix an interop bug with GCS (I fully documented the entire bug for them, and the fix would have been the removal of a bit of pointless code).
4. Egress pricing. And the way it multiplies if you use any advanced VPC features. Why on Earth is it cheaper to send an object to S3 from my own machine than to send the same object to the same endpoint from within a different AWS region nearby?
5. Authentication. It’s so bad that they invented Identity Center to try to unsuck it. But if you use Identity Center you get logged out even while actively using the console, and you get a helpful link to the WRONG PLACE to sign back in. Because of course core AWS isn’t even aware that Identity Center exists.
I don’t even use AWS very much. I’m sure I would fall in love with more of it if I did.
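To be fair, one part of the evaluation logic is well documented: requests are denied by default, and an explicit Deny beats any Allow. A minimal illustration (bucket name is hypothetical), where the broad Allow still cannot delete from the log bucket:

    {
      "Version": "2012-10-17",
      "Statement": [
        { "Effect": "Allow", "Action": "s3:*", "Resource": "*" },
        { "Effect": "Deny", "Action": "s3:DeleteObject",
          "Resource": "arn:aws:s3:::prod-logs/*" }
      ]
    }

Where it gets murky is how identity policies interact with resource policies, permission boundaries, SCPs and session policies, which is presumably the incomplete flow chart the parent refers to.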
Nobody wants to hear this, but as things stand, there's no escaping risk for vibe coders right now. Personally, I think AWS is still a good choice for the long run, but don't make the mistake of thinking current LLMs can actually manage the environment on par with a decent infra engineer. That's one of their weaker areas right now. The good news is there are a million managed service providers and AWS-competent humans still in existence. Also, Premium Support is a good resource.
Whatever you do, make a lot of backups and store them on a different service somewhere. Then if you get to a situation where you need to do something with sensitive data, or need to raise money, engage with someone who can do a proper review.
The DX is simple, the integrations between the two are solid, and the stack is well understood by the LLM.
Lovable uses Supabase, and is surprisingly easy to eject from too; I've done the Lovable-to-Vercel-plus-Supabase move a couple of times, even managing to keep it syncing via the Git integration. You can get properly scalable infra and minimal vendor lock-in while the vibe coder gets to play with the pretty.
AWS IAM is extremely well designed when you compare it with the spaghetti monster IAM systems of other clouds.
Every time I try the cool new thing that's supposed to replace these services on some other provider, I understand how mature and polished the AWS ones are.
With that said, the other 90% of AWS services, like WorkMail, Cognito and API Gateway, are absolute hot garbage which no well-meaning AWS expert will touch with a 10-meter stick.
Am I the only one who remembers that VPSes and dedicated hosting services were a thing before AWS came around? Yes you had to pay for a month at a time and scaling wasn’t as instant, but it wasn’t like the only option before cloud computing was having to drive to the datacentre and install your own server.
The “in minutes” is doing a lot of the work in that sentence above.
I also used dedicated servers in the late ’90s (and they still offer great value today). But before AWS, provisioning new hardware typically took days, not minutes.
AWS changed that, and the rest of the industry eventually followed.
The virtualised server thing was not an AWS invention; what was new were their other services. For example, instead of renting a virtual server and installing a database on it, you could rent the database itself; that was more or less a new thing that AWS made into a thing.
It was never cheaper; what you paid for was a promise of fire-and-forget. You no longer had to worry about updating the server or the database, because the AWS crew took care of that.
VPSes and non-custom configs for dedicated servers were pretty instant as far as I know, I think the advantage of AWS was more that you could scale up and down much more easily since you weren’t locked down in a monthly contract, and that you could automate server provisioning through an API.
We had super bursty traffic, and had to go with Google Cloud (very early days! [0]) because on AWS you'd need to contact them in advance and pre-warm the ELB capacity for your expected bursts.
We did a dead launch to 60 million customers (0 to 60 million, no organic growth phase) this way. I wouldn't want to do that on a VPS.
[0] https://cloudplatform.googleblog.com/2013/11/?m=1
I miss the Media Temple days.
US Americans don't like to complain? They are moderated? They prefer to pay extra to help the cause?
You’ll have to explain that because it doesn’t seem very logical to me.
I saw some 192 core instances on Vultr, but I haven't tried them yet. What are you doing with all them cores?
I often fantasized about spinning up hundreds of nodes for various projects that needed number crunching. Then realized "wait I can just rent one big box for an hour" haha. It's really cool that we can do that now.
The ancient forgotten art of Vertical Scaling.
Innovation has ground to a halt, with mostly just meh "hey, us too" launches. Pricing and design patterns feel increasingly focused on locking you in. AWS folks tell me that internally they talk a lot about making sure things are "sticky" with customers. The best engineering talent no longer wants to work there, and it shows, especially in places like AI, where AWS has released wave after wave of discombobulated nonsense.
As a core "rent-a-server" offering with a few add-on services there's still a lot of utility, but AWS is gradually becoming a boring baseline utility with a ton of distracting, half-baked stuff jammed on top. Most companies I talk to are no longer focused on a single cloud and are increasingly bringing workloads back on-prem or into colos. Not everything, but for a lot of stuff that just makes more sense and is a heck of a lot cheaper.
The chips business in Annapurna is probably the most interesting thing, and it plays to AWS's strength in boring low-level infrastructure. Nearly everything AWS tries to do beyond chips and rent-a-server plays is a hot mess.
AWS isn’t going away, but its future looks a lot less exciting and inspiring than the story that got us to this point.
The writing has been on the wall for a few years now, and it is particularly evident to those that have worked at AWS: Amazon is in its day-2 era.
Amazon being in its day-2 era means that most of what has been written about Amazon in the past twenty years is not valid anymore.
“Customer obsession” is literally their first leadership principle, and stellar support was their defining characteristic.
They've always struggled to hire for those roles. The people who are best at Engineering Support also tend to be the people who move on to other roles after a year or two.
I’ve a couple of apps doing a few million a day. I am using Hetzner and before that used DigitalOcean. Mind you, for close to a decade.
People are unnecessarily complicating stuff, and these clouds can get very expensive very quickly.
Recently, I came across a company that was spending $20k a month on GCP. I was like, are you kidding me, $20k for the kind of stuff you do??? If you understood how CPU, RAM and disk actually work, you wouldn't plaster "autoscaling hyper solutions" everywhere, burning money in the cloud.
I moved their stuff out of the GCP managed solutions and ended up with a $200-400 per month bill. The CEO still can't believe how that's even possible.
I suggested they move to dedicated servers, but they didn't want that; they said they must be able to show they're on a hyperscaler cloud.
OK, fine: we'd stay on the hyperscaler but not use any of their services other than VMs.
They had racked up a ton of bills using cloud monitoring, Datastore, autoscalers (with no proper tuning), and Kubernetes.
I replaced all of it with Prometheus, Grafana and Loki, moved most things from Datastore to Postgres and Mongo with replicas, and added Redis.
I implemented a custom scaler that scales off of app metrics rather than an arbitrary CPU threshold (sketched below).
I implemented hot data reloads by packing the data updates into a gzip file, uploading it to GCS, and pulling it from the autoscaled units. And I moved everything to Spot VMs.
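A minimal sketch of that kind of scaler, with hypothetical hooks for the metric source and the resize call; a real implementation would put the GCE managed-instance-group API behind setTargetSize, and the per-100-jobs ratio is an assumed example, not the commenter's actual rule:

    package main

    import (
        "context"
        "log"
        "time"
    )

    // Hypothetical hooks: wire these to your app's metrics endpoint and
    // to your cloud's instance-group resize API. Placeholders only.
    func queueDepth(ctx context.Context) (int, error)    { return 0, nil }
    func setTargetSize(ctx context.Context, n int) error { return nil }

    // targetFor scales off an app-level metric instead of a blanket CPU
    // threshold: one worker per 100 queued jobs, clamped to [lo, hi].
    func targetFor(depth, lo, hi int) int {
        n := depth/100 + 1
        if n < lo {
            n = lo
        }
        if n > hi {
            n = hi
        }
        return n
    }

    func main() {
        ctx := context.Background()
        for range time.Tick(30 * time.Second) {
            depth, err := queueDepth(ctx)
            if err != nil {
                log.Println("metric poll failed:", err)
                continue
            }
            if err := setTargetSize(ctx, targetFor(depth, 1, 20)); err != nil {
                log.Println("resize failed:", err)
            }
        }
    }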
The complexity of stuff in the cloud is high for nothing.
At a previous, bigger company, getting procurement to sign up with a new provider required writing a business case, justifying the spend, and then getting multiple competing quotes and speaking to their sales teams. Signing up for a new service takes _months_, even for $10/mo, as they'll negotiate for bulk discounts and the best possible terms for something that will literally cost less per year than one of the meetings they hold to discuss the "value". Meanwhile on AWS I can click a button in the marketplace and it gets thrown onto the AWS account, which is pre-approved spending.
We use it because we don’t want to deal with slow procurement process. It kills all the momentum.
Watched one company end up with a $250k AWS bill when their credits expired (which they could not pay).
As an example, they needed a lot of proxy servers. Instead of just using a proxy service, there was a fleet of ec2 instances.
I agree that it's overcomplicated. Having the self-service portal for things like assigning IPs is useful, and being able to detach storage from VMs and such is also quite flexible. But most of it seems overkill. But still.
Our 64-core spot instances on Windows were taking 8-10x longer than our developer machines with the same core count, and a bunch of engineering went into the scaling, queue management, etc. If we'd just had a single bare-metal machine from Hetzner we could have saved money _and_ reduced our iteration times.
> burning money in cloud
I suspect there are two reasons why this happens.
One is just the dissociation from opex that seems ever-present in the VC model. The other is that many startups settle on an ops solution before hiring ops, and the cost of switching isn't that attractive until they're faced with a dwindling runway and a down round.
You moved something from a single datastore to three different database technologies? I don't know your domain, but that sure doesn't sound like a complexity reduction.
What's bad about Grafana? It's simply used for some alerts and monitoring; I've used it for a really long time and it has never failed me, not even once.
It's much simpler to query Postgres or Mongo than to duplicate data dozens of times in Datastore.
By the time I joined, 18 months after development had started, a giant, complex, hideously tentacled software beast had been built that used every possible AWS service that the massive offshore team of developers could find to use.
It should have been built on a single Linux box by a single senior developer with Python and Postgres or nodejs or Ruby or whatever.
They went out of business before long, and I couldn't help wondering if things might have been different had they not spent a fortune building a giant money-making machine for AWS instead of making a web application on a Linux box.
Every AWS project I have worked on has had some significant work put into programming AWS instead of writing business functionality.
To be fair, if they had an AWS Solutions Architect involved, they heavily push you down this road, and if they manage to get in management's ear they'll push the idea that serverless AWS features are vastly cheaper.
If you're only responding to a handful of requests that's true, but once things ramp up you get "nickel and dimed" for everything: API Gateway requests, lambda execution time, DynamoDB read/write units, CloudWatch logs, outgoing data, step function transitions, S3 requests.
I understand all those services cost money and they shouldn't be free, but I question whether paying all those micro-transactions is worse than paying for your own VMs, especially once your customers complain about the cold starts and you think you can fix that with "lambda warming".
You removed all of their logging and all of their redundancy and reliability, and replaced it with shitters that will all explode if the small provider's one data centre goes down.
And if someone penetrates this mega server, they’ll be able to wipe all your logs or tamper with them, to hide the attack.
If your storage servers go down, everything they have is gone. And these providers don’t offer the finest hardware. How do you know all of those drives aren’t from the same batch? They will be, because they’re a bulk buyer with a single data centre.
They'll never need it; a misconfiguration on those services ends up costing several grand.
>If your storage servers go down, everything they have is gone
It’s just logs for an app server, not some banking critical info that will cause a panic if lost. Most of what they are using for logging is for finding some errors, not for mission-critical things which must not be lost.
AWS CLI??? Holy guacamole, what a mess. Using the AWS CLI now feels like going through digital identification just to get the basics done.
While GCP CLI is like "sure, here"!
You're also putting your business at risk with Google randomly banning accounts and not providing timely appeals. [1]
[1]: https://news.ycombinator.com/item?id=45798827
Hey good lookin'
Lambda is incredibly simple to use, it just runs a function for you.
Not sure how you could burn so much with DynamoDB. It's serverless and incredibly cheap. You must have been doing something insane, like scanning through a huge dataset over and over.
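That failure mode is easy to hit: Scan bills read capacity for every item it examines, whether or not the filter keeps it, while Query reads only the matching partition. A hedged sketch using aws-sdk-go-v2 and a hypothetical orders table keyed on user_id:

    package main

    import (
        "context"
        "log"

        "github.com/aws/aws-sdk-go-v2/aws"
        "github.com/aws/aws-sdk-go-v2/config"
        "github.com/aws/aws-sdk-go-v2/service/dynamodb"
        "github.com/aws/aws-sdk-go-v2/service/dynamodb/types"
    )

    func main() {
        ctx := context.Background()
        cfg, err := config.LoadDefaultConfig(ctx)
        if err != nil {
            log.Fatal(err)
        }
        client := dynamodb.NewFromConfig(cfg)
        vals := map[string]types.AttributeValue{
            ":u": &types.AttributeValueMemberS{Value: "alice"},
        }

        // Expensive: Scan touches every item in the table and bills
        // read units for all of them, even those the filter discards.
        if _, err := client.Scan(ctx, &dynamodb.ScanInput{
            TableName:                 aws.String("orders"),
            FilterExpression:          aws.String("user_id = :u"),
            ExpressionAttributeValues: vals,
        }); err != nil {
            log.Fatal(err)
        }

        // Cheap: Query reads only the matching partition via the key.
        if _, err := client.Query(ctx, &dynamodb.QueryInput{
            TableName:                 aws.String("orders"),
            KeyConditionExpression:    aws.String("user_id = :u"),
            ExpressionAttributeValues: vals,
        }); err != nil {
            log.Fatal(err)
        }
    }

Run the Scan version repeatedly over a big table and the read units add up fast; the Query version stays cheap because cost tracks items read, not items stored.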
Being salty that Gary couldn’t sell enough of his paid service and AWS is competing with it isn’t a meaningful complaint. I want something in AWS, not on Gary’s servers.