LLMs are breaking 20 year old system design
27 points - today at 6:39 AM
grugdev42
today at 8:23 AM
Article doesn't make sense. Some of the "horizontally scaled" servers have their own state: a local cache, a temporary filesystem, etc.
Also, has the author never heard of long-running queued jobs? Or long-running scheduled jobs? They ultimately report back into the DB (updating their status etc.), as in the sketch below.
This article reeks of someone using AI to make huge leaps of logic. The "single source of truth" rule has survived this long for a reason. It works!
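Something like this toy version is what I mean by jobs reporting back into the DB (the "queue" and "DB" here are just in-memory stand-ins for SQS/Postgres/whatever):

    // Toy version of the long-running queued job pattern: the worker
    // pulls a job, does the slow work, and reports status back to the "DB".
    type JobStatus = 'queued' | 'running' | 'done' | 'failed';
    interface Job { id: string; payload: string; }

    const db = new Map<string, JobStatus>(); // stand-in for a jobs table
    const queue: Job[] = [];                 // stand-in for SQS/RabbitMQ/etc.

    function enqueue(job: Job) {
      db.set(job.id, 'queued');
      queue.push(job);
    }

    async function worker() {
      const job = queue.shift();
      if (!job) return;
      db.set(job.id, 'running');
      try {
        await new Promise((resolve) => setTimeout(resolve, 1000)); // the slow work
        db.set(job.id, 'done');   // report back into the DB
      } catch {
        db.set(job.id, 'failed');
      }
    }

    enqueue({ id: 'job-1', payload: 'convert this file' });
    worker().then(() => console.log(db.get('job-1'))); // "done"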
manueltgomes
today at 7:43 AM
> Long running work: an agent doing a 10 minute task isn’t a ‘request’, it’s a long-running async process.
Correct, but we solved this a long time ago, when we started sending files to servers to be converted, for example. We either got a 'job_id' back or a call to a webhook when the job was finished (roughly the sketch below).
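The route names and fake "conversion" below are made up, but the job_id / poll-or-webhook flow is the standard shape:

    import express from 'express';
    import { randomUUID } from 'node:crypto';

    const app = express();
    const jobs = new Map<string, { status: string; result?: string }>();

    // Client submits work and immediately gets a job_id back (202 Accepted).
    app.post('/convert', (_req, res) => {
      const id = randomUUID();
      jobs.set(id, { status: 'running' });
      // Pretend this is the 10-minute conversion/agent task.
      setTimeout(() => jobs.set(id, { status: 'done', result: 'converted.pdf' }), 10_000);
      res.status(202).json({ job_id: id });
    });

    // Client polls with the job_id (or we POST to their webhook when it finishes).
    app.get('/jobs/:id', (req, res) => {
      const job = jobs.get(req.params.id);
      if (!job) {
        res.status(404).end();
        return;
      }
      res.json(job);
    });

    app.listen(3000);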
bilbo-b-baggins
today at 7:52 AM
Claude Code runs as a nearly stateless server, using session JSONL files as a conversation database and sending stateless API requests to Anthropic, etc.
This post doesn't seem to understand how the systems at the core of agent harnesses actually work.
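Not the actual file format, just the shape of the idea: each turn appended as one JSON line, with the process rebuilding state from the file before each stateless API call:

    import { appendFileSync, readFileSync, existsSync } from 'node:fs';

    // The session file *is* the conversation database; the process itself
    // stays (nearly) stateless and rebuilds history from disk each turn.
    const sessionFile = 'session-abc123.jsonl';

    function appendTurn(role: 'user' | 'assistant', content: string) {
      appendFileSync(sessionFile, JSON.stringify({ role, content, ts: Date.now() }) + '\n');
    }

    function loadConversation(): Array<{ role: string; content: string }> {
      if (!existsSync(sessionFile)) return [];
      return readFileSync(sessionFile, 'utf8')
        .split('\n')
        .filter(Boolean)
        .map((line) => JSON.parse(line));
    }

    appendTurn('user', 'refactor this function');
    // Load the full history before making the next stateless API request.
    console.log(loadConversation());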
To me this makes no sense. Nothing in web development changes because of long-running requests; there are plenty of solutions for this. The easiest one is just to listen long enough on an HTTP request for the answer.
The routing problem can be mitigated with session pinning.
HTTP/2 and HTTP/3 have solutions for streaming data, WebSockets can be used, and so can pub/sub.
Heck, we could push the LLM response into a KV store/Redis and read it from there (sketched below).
"State is in the DB" is running strong and will be for decades to come.
It feels like virtual actors are the primitive the author is reaching for. As an erstwhile Elixir hobbyist I've often found myself wishing for the simplicity of actors when solving problems in my day job. I tend to work in an AWS environment, but I believe over in Azure they have something like it. I think it was called Orleans when I read about it, though it may have a more corporate name now. A toy sketch of the idea is below.
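This isn't the Orleans or Elixir API, just the shape of the virtual-actor idea in TypeScript: the actor logically always exists, is keyed by ID, gets activated on first message, and handles one message at a time:

    // Toy "virtual actor": addressed by ID, activated lazily, single-threaded
    // per actor because messages are chained through its mailbox.
    class ConversationActor {
      private history: string[] = [];
      private mailbox: Promise<void> = Promise.resolve();

      constructor(readonly id: string) {}

      send(message: string): Promise<string[]> {
        const result = this.mailbox.then(() => {
          this.history.push(message);   // the actor's private state
          return [...this.history];
        });
        this.mailbox = result.then(() => undefined);
        return result;
      }
    }

    const actors = new Map<string, ConversationActor>();
    function getActor(id: string): ConversationActor {
      let actor = actors.get(id);
      if (!actor) {
        actor = new ConversationActor(id); // "activation" on first use
        actors.set(id, actor);
      }
      return actor;
    }

    getActor('conv-42').send('hello').then(console.log); // ['hello']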
mattjoyce
today at 7:29 AM
"Durable" is used 13 times in this article.
NitpickLawyer
today at 7:31 AM
> LLMs just make this problem more visible.
This theme keeps popping up everywhere. Lots of things were "the way we did things" for a lot of reasons. LLMs just amplify some of those things and give them more visibility. It can be a good thing if you're able to understand what/why/how changed, or it can be a bad thing if you insist that "this is how we do things, because this is how we've always done things".
Or maybe it is a bad thing, because right now the model is "throw it against the wall and see what sticks, or how many billions we need to make it stick"?
NitpickLawyer
today at 7:49 AM
> right now the model is "throw it against the wall and see what sticks
When was it not? We've been doing this for decades. Something usually sticks.
endofreach
today at 7:34 AM
> or it can be a bad thing if you insist that "this is how we do things, because this is how we've always done things"
Or... maybe... just maybe... it can be a bad thing, because it's a bad thing.
NitpickLawyer
today at 7:40 AM
Many things can be wrong, for many reasons. The problem is when people think LLMs make it wrong, instead of understanding that LLMs just expose the thing for what it was. It's like shooting the messenger just because the messenger is an LLM. That was my point, in case I worded it badly.
If I'm reading it correctly, the TL;DR of the article is: given the client and the server, we need to be able to ingest messages into the client-server communication channel, and this channel should survive a disconnection. The article suggests using named pub/sub channels for communication, so that the "connection" between a given client and a given (cloud) server has a name and data chunks can be ingested into that named channel (roughly the sketch after this comment).
I would suggest that there is a much, much older technology than pub/sub that can be used for this kind of data transfer: UDP, documented in 1980.
I can't stop thinking about how overcomplicated our software engineering reality is, such that we need to reinvent layers and layers of stuff on top of other stuff. We must make applications for browsers; browsers disallow basic network communication for the code they execute; so sending a chunk of data from a client to a server becomes a real adventure.
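For reference, a rough sketch of the article's named-channel idea (Redis Streams via ioredis assumed, names made up): the channel is durable, so after a disconnect the client just resumes from the last ID it saw instead of losing the stream.

    import Redis from 'ioredis';

    const redis = new Redis();
    const channel = 'conversation:42'; // the "named" client-server channel

    // Producer: append each chunk to the named stream; it persists in Redis,
    // so it outlives any single HTTP connection or app server.
    async function push(chunk: string) {
      await redis.xadd(channel, '*', 'chunk', chunk);
    }

    // Consumer: read from the last ID we saw. After a disconnect, the client
    // reconnects and resumes from that ID instead of starting over.
    async function readFrom(lastId: string): Promise<string> {
      const reply = await redis.xread('BLOCK', 5000, 'STREAMS', channel, lastId);
      if (!reply) return lastId;
      for (const [, entries] of reply) {
        for (const [id, fields] of entries) {
          process.stdout.write(fields[1]); // fields = ['chunk', '<text>']
          lastId = id;
        }
      }
      return lastId;
    }

    await push('partial answer...');
    let cursor = '0'; // or whatever ID the client last persisted
    cursor = await readFrom(cursor);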
The premise is incorrect and ignorant of the history: this is sticky sessions, and the idea has been around longer than 20 years.
The "cloud native" (as the author refers to it) idea that app servers should be stateless is actually the new idea.
The industry eventually reached a consensus that sticky sessions are a bad idea a lot of the time. That's why stateless app servers became the norm.
skywhopper
today at 8:01 AM
This article is clearly written by someone who's never done any work on actually complex web applications. Nothing here is a new problem, nor is it unsolved. The pattern identified as "LLM-specific" (long-running async jobs) is not particularly unusual.
throwaway27448
today at 7:24 AM
Yes, if you treat LLMs like deterministic computation you'll get fucked, news at eleven. In terms of apps, "shitty but uncannily useful search" seems like a better fit.