This is an article written by a company/LLM trying to justify huge increases in its pricing.
Oh! Y'know that thing we were charging you $200 a month for? We're now going to start charging you for the value we provide, and it will be $5,000 a month.
Meanwhile, the metrics for "value" are completely gamed.
The price will be whatever you are willing to pay. No justification required, except perhaps fairness (and what else?). The article is written by me. Unfunded, bootstrapped; call it dire straits.
At the same time, I actually wouldn’t mind a world in which AI agents cost $5000 a month if that’s what companies want to charge.
I feel like at some level that would remove the possibility of making “just as good as humans but basically free” arguments and move the discussion in a direction that feels more productive: discussing the real benefits and shortcomings of both. E.g., loss of context with agents vs. HR costs with humans, etc…
Take, for instance, a customer support agent that is supposed to resolve tickets. Assume it resolves around 30% of tickets by an objective measure. Do you think that cannot be captured and agreed upon by both sides?
Already, today, human customer support agents' performance is measured in ticket resolution, and the Goodhart's Law consequences of that are trivially visible to anyone who's ever tried to get a ticket actually resolved, as opposed to simply marked "resolved" in a ticketing system somewhere…
If your customer base is so broad that you can't define a clear outcome for your niche, your company probably isn't focused enough. Especially for a startup.
I'd assume an outcome is a negotiated agreement between buyer and Agent provider.
Think of all the n8n workflows. If we take a simple example of an expense receipt processing workflow, or a lead sourcing workflow, I'd think the outcomes can be counted pretty well. In these cases: receipts successfully entered into the ERP, or the number of entries captured in Salesforce.
I am sure there are cases where outcomes are fuzzy, for instance an employer-employee agreement.
But in some cases they aren't: for instance, my accounting agent only gets paid if he successfully uploads my tax returns.
Surely not applicable in all cases. But in cases where a human is measured on outcomes, the same should be applicable for agents too, I guess.
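To make the counting concrete, here is a rough sketch in Python of how such an outcome-based invoice could be computed; the run records, field names, and the per-receipt price are all made up for illustration, not taken from n8n or any real contract.

    # Bill only for receipts that verifiably landed in the ERP.
    # The run data and the $0.50 per-outcome price are hypothetical.
    runs = [
        {"receipt_id": "r-001", "erp_entry_id": "E-881"},  # success
        {"receipt_id": "r-002", "erp_entry_id": None},     # failed run: not billed
        {"receipt_id": "r-003", "erp_entry_id": "E-882"},  # success
    ]
    PRICE_PER_OUTCOME = 0.50  # whatever buyer and provider negotiated

    billable = [r for r in runs if r["erp_entry_id"] is not None]
    print(f"{len(billable)} billable outcomes -> ${len(billable) * PRICE_PER_OUTCOME:.2f}")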
The obvious solution is just to throw more LLMs at it to verify the other LLM's output and confirm it is doing its job...
\s (mostly because you know this will be the "Solution" that many will just run with despite the very real issue of how "persuadable" these systems are)...
The real answer is that even that will fail, and there will have to be a feedback loop with a human, which in many cases will likely lead to more churn trying to fix the AI's work than if the human had just done it in the first place.
Instead, focus on the places where using an AI tool can truly cut down on time spent, like searching for something (which can still fail, but at least the risk when it fails is far lower than when it's producing output).
This is the problem with this: in simple cases like “you add N employees” you can vaguely approximate it, like they do in the article.
But for anything that’s not this trivial example, the person who knows the value most accurately is … the customer! Who is also the person who is paying the bill, so there’s a strong financial incentive for them not to reveal this info to you.
I often go back to the customer support voice AI agent example. Let's say the bot can resolve tickets successfully at a certain rate. This is easily capturable. Why is this difficult? What cases am I missing?
The one wrinkle this might have is that it incentivizes the agent developer to over-resolve or “over-outcome” to ensure they hit targets.
This risks the end customer experience for your agent buyer, which might not be worth it to a company that wants to keep customers very happy.
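One way to blunt that wrinkle is to define "resolved" so the agent provider cannot claim it unilaterally, e.g. only count tickets that stay closed for a while and clear a satisfaction bar. A rough sketch; the ticket fields, the 7-day window, and the CSAT threshold are assumptions, not anything from the article:

    from datetime import timedelta

    REOPEN_WINDOW = timedelta(days=7)  # assumed cooling-off period
    MIN_CSAT = 4                       # assumed floor on a 1-5 satisfaction scale

    def billable_resolutions(tickets):
        """Count only resolutions the buyer would likely accept as real:
        closed by the agent, not reopened within the window, and rated well."""
        count = 0
        for t in tickets:
            stayed_closed = (t["reopened_at"] is None
                             or t["reopened_at"] - t["closed_at"] > REOPEN_WINDOW)
            if t["closed_by"] == "ai_agent" and stayed_closed and (
                    t["csat"] is not None and t["csat"] >= MIN_CSAT):
                count += 1
        return count

The exact window and threshold would just be part of the negotiated agreement mentioned upthread.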
You can apply the same philosophy to employees and if you dare to do so you will quickly find out that it does not work. When a measure becomes a target, it ceases to be a good measure - Goodhart's law. I cannot see why AI agents should be treated differently when it comes to fuzzy measurements of performance.
Because the performance is usually not fuzzy, and also the law only applies to certain jobs -- you would not apply the law to salesmen or customer support agents.
Salesmen making bad deals that boost their numbers but don't make money in the long term is one of the first things you learn about when you work in an org that sells into the enterprise market.
You're in a software bubble; there are millions of sales jobs where you sell a simple product and the only thing that matters is sales volume and maybe "don't be a dick". The really strategic sales process we employ in tech is the exception.
Salesmen are an absolutely perfect example. They quite often have even greater incentives, as they can directly benefit financially. So selling products that are not needed, that are overpriced, or that are entirely misrepresented is extremely common.
ok... how do you measure the performance of a coding assistant? Counting the lines of code written, bugs closed, PRs reviewed, some fuzzy measurement of quality or something else?
Maybe it's not as fresh a story for him, since he's from India, but outside India people like to talk about the cobra problem and its failed solution (retold below). This feels like that. If it's a ticket system, it could close them all as unresolvable overnight. If it cares about customer satisfaction, it could give everybody thousand-dollar gift cards. The point is, the AI's existence is predicated on finding a way to improve its score by any means necessary, and that needs very careful bounding.
I believe it was under British rule: they offered a reward for people bringing in dead cobras as proof of culling, which worked until people started breeding them just to get the reward. Humans gamed the system and it made the problem worse.
Humans respect the rules because if they don't, then they lose their jobs, can't pay their mortgages, and become homeless. That's quite a powerful incentive not to fudge the numbers too much.
Right, it doesn't work the same for humans as it does for AI agents.
If you finetune a model and it starts misbehaving, what are you going to do to it exactly? PIP it? Fire it? Of course not. AIs cannot be managed the same ways as humans (and I would argue that's for the best). Best you can do is try using a different model, but you have no guarantee that whatever issue your model has is actually solved in the new one.
It really makes sense, and the best part — customers love it. It’s the simple form of pricing, and it’s simple to understand.
In many cases, though, you don’t know whether the outcome is correct or not, but we just have evals for that.
Our product is a SOTA recall-first web search for complex queries. For example, let’s say your agent needs to find all instances of product launches in the past week.
“Classic” web search would return top results, while ours returns a full dataset where each row is a unique product (with citations to web pages).
We charge a flat fee per record. So, if we found 100 records, you pay us for 100. If it's 0, then it's free.
I get sad when I read comments like these, because I feel like HN is the only forum left where real discussion between real people providing real thoughts is happening. I think that is changing, unfortunately. The em-dashes and the strange ticks immediately trigger my antibodies and devalue it, whether that is appropriate or not.
Not the writing style, but the fact that the em-dashes and strange ticks make it indistinguishable from something AI-generated. At least take the time to replace them with something you can produce easily on a physical keyboard.
Edit:
Well, actually - this kind of writing style does feel quite AI-ish:
> It really makes sense, and the best part — customers love it
It might be a Windows vs. MacOS/Linux thing, but regardless - it's becoming a similar kind of pattern that I'm subconsciously learning to ignore/filter out, similar to banner blindness and ads/editorials.
Is this actually different from just guaranteeing some metrics? Like if you have a document processing “agent” that extracts fields from forms, you’d have an accuracy threshold and have some checks set up to verify this?
Does “outcome billing” amount to anything different?
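For comparison, the "guaranteed metric" version might look like checking extraction accuracy against a hand-labelled sample and only invoicing (or issuing credits) when the threshold is met. A sketch under assumed names and an assumed 95% threshold, not any particular vendor's SLA:

    ACCURACY_FLOOR = 0.95  # hypothetical SLA threshold

    def meets_sla(extracted, ground_truth):
        """Compare extracted form fields against a hand-labelled sample of documents."""
        total = correct = 0
        for doc_id, truth in ground_truth.items():
            for field, expected in truth.items():
                total += 1
                if extracted.get(doc_id, {}).get(field) == expected:
                    correct += 1
        accuracy = correct / total if total else 0.0
        return accuracy >= ACCURACY_FLOOR, accuracy

In that sense, outcome billing arguably just moves the same check from a pass/fail gate onto the unit of invoicing.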
I think what you described would be a good definition of an outcome. But who bills customers that way, if you think about software providers? The prevailing models are a fixed fee, an hourly fee, or an infra-spend fee.
There is an argument to be made that SaaS tools tap the tool budget whereas AI agents can tap the worker budget of companies.
This is actually what I thought. AI agent developers can capture 1:10 of the value delivered - assuming AI agents deliver - but with competition among agent builders, the value capture will go down. That is one possibility.
> If AI agents help each support employee handle 30% more tickets, that's like adding 30 new hires to a 100-person team, without the cost.
I think this is an oversimplification designed to make LLMs seem more profitable than they actually are.
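For what it's worth, the arithmetic behind that framing (and the 1:10 capture ratio floated upthread) looks something like this; the $50k fully loaded cost per support employee is an assumption for illustration, not a figure from the article:

    team_size = 100
    productivity_gain = 0.30    # "handles 30% more tickets"
    cost_per_employee = 50_000  # assumed fully loaded annual cost, illustration only

    implied_value = team_size * productivity_gain * cost_per_employee  # $1,500,000 per year
    vendor_price = implied_value / 10                                  # 1:10 value capture
    print(f"implied value ${implied_value:,.0f}/yr, vendor asks ${vendor_price:,.0f}/yr")

The oversimplification is arguably in treating the 30% as if it converted one-for-one into avoided hires; the counterfactual headcount is never actually removed, so the "value" is easy to overstate.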
Well, of course. One of the huge advantages of agents is that they will actually help you game metrics to almost any extent.
Unlike people, who have ...
As much as I hate the assumptions, the worst-case scenario is that AI is surely affecting some jobs.
Sounds great in theory, until you realize everyone has a different definition of outcome.
The same oversight mechanism that applies to humans cannot correct the flaws of AI agents?
And how do you programmatically measure it?
I don’t think this will work …
But, again, such systems already exist. The folk theorem guarantees this. In a repeated game, people crave reputation.
For instance, a seller who over-resolves will suffer in the long run, I guess.
The same oversight mechanism that applies to humans cannot correct the flaws of AI agents? What do you think is the catch?
I am not saying things are clearly defined in most settings. But my accounting agent (a real person) gets paid only when he files my tax returns.
There's no LLM equivalent.
Or just my writing style?
I am looking to understand more nuances here.
Maybe the pricing model makes sense in the beginning.
Until people realize the big secret - AI is still just software.
A new category of software.
The price of software generally only goes in one direction, and that’s a race to the bottom.