whatisthiseven
today at 6:00 PM
> But will it?
My prediction is no, because productivity gains must benefit the lower classes to see a multiplier in the economy.
For example, ATMs did cause a drop in teller jobs, but automated access to cash at any hour increases the velocity of money in the economy. It decreases the savings rate and encourages spending among the class of people whose money imparts the highest multiplier.
AI does not. All the spending on AI goes to a very small minority, who have a high savings rate. Junior employees that would have productively joined the labor force at good wages, must now compete to join the labor force at lower wages, depressing their purchasing power and reducing the flow of money.
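The multiplier argument above can be made concrete with the textbook spending-multiplier formula, 1/(1 − MPC), where MPC is the marginal propensity to consume. This is a minimal sketch, not from the thread itself, and the MPC values are illustrative assumptions rather than measured data:

```python
def spending_multiplier(mpc: float) -> float:
    """Total activity generated per dollar of initial spending,
    assuming each recipient re-spends a fraction `mpc` of it.
    Simple geometric-series (Keynesian) multiplier: 1 / (1 - mpc)."""
    if not 0 <= mpc < 1:
        raise ValueError("MPC must be in [0, 1)")
    return 1 / (1 - mpc)

# Illustrative assumption: a household that spends ~95% of each
# marginal dollar vs. a high saver who spends ~40% of it.
paycheck_to_paycheck = spending_multiplier(0.95)  # ~20x
high_saver = spending_multiplier(0.40)            # ~1.7x
```

Under these assumed numbers, routing a dollar toward a high-saving recipient instead of a paycheck-to-paycheck one shrinks the downstream spending it generates by roughly an order of magnitude, which is the mechanism the comment is pointing at.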
Look at all the most used things for AI: cutting out menial decisions such as customer service. There are no "productivity" gains for the economy here. Each person in the US hired to do that job would spend their entire paycheck. Now instead, that money goes to a mega-corp and the savings is passed on to execs. The price of the service provided is not dropping (yet). Thus, no technology savings is occurring, either.
In my mind, the outcomes are:
* Lower quality services
* Higher savings rate
* K-shaped economy catering to the high earners
* Sticky prices
* Concentration of compute in AI companies
* Increased price of compute prevents new entrants from utilizing AI without paying rent-seekers, the AI companies
* Cycle continues all previous steps
We may reach a point where the only ones able to afford compute are AI companies and those that can pay AI companies. Where is the innovation then? It is a unique failure outcome I have yet to see anyone talk about, even though the supply and demand issues are present right now.
mullingitover
today at 6:18 PM
> My prediction is no, because productivity gains must benefit the lower classes to see a multiplier in the economy.
Baumol's cost disease hurts the lower classes by restricting their access to services like health care and education, and LLMs/agents make it possible to increase productivity in these areas in ways which were once unimaginable. The problem with services is that they're typically resistant to productivity growth, and that's finally changing.
If you can get high quality medical advice for effectively nothing, if you can get high quality individualized tutoring for free, that's a pretty big game changer for a lot of people. Prices on these services have been rising to the stratosphere over the past few decades because it's so difficult to increase the productivity of individual medical practitioners and educators. We're entering an era that could finally break this logjam.
bwestergard
today at 6:30 PM
"Baumol's cost disease hurts the lower classes by restricting their access to services like health care and education, and LLMs/agents make it possible to increase productivity in these areas in ways which were once unimaginable."
You've expressed very clearly what LLMs would have to do in order to be economically transformative.
"If you can get high quality medical advice for effectively nothing, if you can get high quality individualized tutoring for free, that's a pretty big game changer for a lot of people. Prices on these services have been rising to the stratosphere over the past few decades because it's so difficult to increase the productivity of individual medical practitioners and educators. We're entering an era that could finally break this logjam."
It's not that process innovations are lacking, it's that product innovations are perceived as an indignity by most people. Why should one child get an LLM teacher or doctor while others get individualized attention by a skilled human being?
mullingitover
today at 6:50 PM
> Why should one child get an LLM teacher or doctor while others get individualized attention by a skilled human being?
Is the value in the outcome of receiving medical advice and care, and becoming educated, or is the value just in the co-opting of another human being's attention?
If the value is in the outcome, the means to achieving that aren't of much consequence.
More subtly, what is an education? What is care? As you point out, the LLMs are (or probably will become) perfectly good at the measurable parts of those services; but I think the residual edge of "good" education/care is more than just the other human's co-opted attention.
How many of us have a reminiscence that starts "looking back, the most life-changing part of my primary or secondary education was ________," where the blank is a person, not a curriculum module? How many doctors operate, at least in part, on hunches: on totalities of perception-filtered-through-experience that they can't fully put into words?
I'm reminded of the recent account of homebound elderly Japanese people relying on the Yakult delivery lady partly for tiny yoghurt drinks, but mainly for a glimmer of human contact [0]. Although I guess that cuts to your point: the value in that example really is just co-opting another human's attention.
In most of these caring professions, some of the value is in the measurable outcome (bacterial infection? Antibiotic!), but different means really do create different collections of value that don't fully overlap (fine, I'll actually lay off the wine because the doctor put the fear of the lord in me).
I guess the optimistic case is, with the rote mechanical aspects automated away, maybe humans have more time to give each other the residual human element...
[0] https://news.ycombinator.com/item?id=47287344
bwestergard
today at 7:39 PM
The premise of your argument is that "the outcome" can be separated from the process. This is true enough for manufacturing bricks: I don't much care what process was used to create a brick if it has a certain compressive strength, mass, etc.
But Baumol's argument, which you introduced to the conversation, is that outcome and process cannot actually be distinguished, even if a distinction in thought is possible among economic theorists.
Even if you have perfect medical information and advice through an LLM, can you perform surgery on yourself? Can you prescribe yourself whatever medication you think you need?
For education, if you know as much as the average Harvard grad, can you give yourself a Harvard degree that will be as readily accepted in a job application or raising funds for a new business?
somekyle2
today at 6:38 PM
It also seems like the value of quality tutoring, beyond its function as social/class signaling, goes down as tools capable of automating high-quality intellectual work become more widely available.
mullingitover
today at 6:54 PM
It depends on outcome again: is the value of tutoring the social class elevation, or is it in the outcome of becoming more skilled and knowledgeable?
There's also the deeper philosophical question of what is the meaning of life, and if there's inherent value in learning outside of what remunerative advantages you reap from it.
I'm sick of this idea that "free" services are beneficial to society. There is no such thing as a free lunch; users are essentially bartering their time, attention, IP (contributed content) and personal/behavioral data in exchange for access to the service.
By selling those services at a cost of "free", hyperscalers eliminate competition by forcing market entrants to compete against a unit price of 0. They have to have a secondary business to subsidize the losses from servicing the "free" users, which of course is usually targeted advertising to capitalize on the resources paid by users for access. Or simply selling to data brokers.
With the importance of training data and network effects, "free" services even further concentrate market power. Everyone talks about how AI is going to take away jobs, but no one wants to confront how badly the anticompetitive practices in big tech are hurting the economy. Less competition means less opportunity for everyone else, regardless of consumer benefit.
The only way it works is if the "free" service for tutoring or healthcare is through government subsidies or an actual non-profit. Otherwise it's just going to concentrate market power with the megacorps.
This 1000x. "Free" is only a viable business model if the govt funds it. Otherwise, the $$ has to come from somewhere else in the company - how long will it take for the company to lose interest in a loss-leader when they're making $$ from other parts?
Look at all the deprecated Google products. What happens when Gemini-SaaS makes billions from licensing to other companies, and Gemini-Charity-for-the-poors starts losing money?
Sadly, the bigger the $$ in the tech pie, the more we have attracted robber barons, etc.
> We may reach a point where the only ones able to afford compute are AI companies
Nah. I think "good enough AI for 95% of people" will be able to run locally within 3-5 years on consumer-accessible devices. There will be concentration of the best compute in AI companies for training, but inference will always become cheaper over time. Decommissioned training chips will also become inference chips, adding even more compute capacity to inference.
This is like computing once again. In 1990 only the upper class could afford computers, as of 2000 only the upper class owned mobile phones, as of now more or less everyone and their kid has these things.
1990? We were solid lower-middle class, and I got a computer for Christmas in 1983. I bought my own, from $$ saved by working in 1987.
> because productivity gains must benefit the lower classes to see a multiplier in the economy
by this logic, the invention of mechanized farm equipment, which displaced farm labor, didn't increase productivity
trollbridge
today at 7:51 PM
The benefits largely accrued to the poorest people.
babypuncher
today at 6:47 PM
I would argue we've even already seen this play out with productivity gains across the economy over the last 40 years. The American middle class has been gradually declining since the '80s. AI seems likely to accelerate that trend for the exact reasons you point out.
A lot of people recognize this pattern even if they can't articulate it, and that's why they hate AI so much. To them, it doesn't matter if AI lives up to the hype or not. Either it does and we're staring down a future of 20%+ unemployment, or it doesn't and the economy crashes because we put all our eggs in this basket.
No matter what happens, the middle class is likely fucked, and anyone pushing AI as "the future" will be despised for it whether or not they're right.
Personally, I think the solution here might be to artificially constrain the supply of productivity. If AI makes the average middle-class worker twice as productive, then maybe we should cut the number of work hours expected from them in a given week.
The complete unwillingness of people in power to even acknowledge this problem is disheartening, and is highly reminiscent of the rampant corruption and wealth inequality of the Gilded Age.
Technological progress that hurts more people than it helps isn't progress, it's class warfare.
ElevenLathe
today at 7:44 PM
> Technological progress that hurts more people than it helps isn't progress, it's class warfare.
I think this is right. The historical analogue I keep drifting toward is Enclosure. LLM tech is like Enclosure for knowledge work. A small class of capital-holding winners will benefit. Everyone else will mostly get more desperate and dependent on those few winners for the means of subsistence. Productivity may eventually rise, but almost nobody alive today will benefit from it, since either our livelihood will be decimated (knowledge workers, for now) or we will be forced into AI slop hell-world where our children are taught by right-wing robo-propagandists, we are surveilled to within an inch of our lives, and our doctor is replaced by an iPad (everyone who isn't fabulously wealthy). Maybe we can eke out a living being the meat arms of the World Mind, or maybe we'll be turned into hamburger by robotic concentration camp guards.