lambdaone
yesterday at 7:24 AM
I built my own LLM-based interactive text adventure system last year, but in addition to LLMs, it used an auxiliary database to keep track of where everything was, what state it was in, and so on.
The LLM thus did two things: first, it combined the prompt, the user input, the recent chat history, and relevant extracts from the database state to update the database to a new state; then, after updating the database, it used all of the above to synthesize the output response.
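To make the two-phase turn concrete, here is a minimal sketch of that control flow. The LLM calls are stubbed out with plain functions that return canned results; all names and the triple-shaped update format are my own illustration, not the actual system.

```python
# Two-phase turn: (1) LLM proposes state updates, (2) LLM narrates
# from the *updated* state. Both LLM calls are faked for illustration.

def propose_updates(context):
    # A real system would call an LLM here and parse its output into
    # structured updates; we return a canned update instead.
    return [("lantern", "location", "player")]

def narrate(context):
    # A real system would call an LLM here; we fake the narration.
    return "You pick up the lantern."

def game_turn(state, user_input, history):
    """One turn: update the world state, then narrate from it."""
    context = {"input": user_input,
               "history": history[-10:],     # recent chat only
               "state": dict(state)}
    for subj, prop, obj in propose_updates(context):   # phase 1
        state[(subj, prop)] = obj
    context["state"] = dict(state)   # narration sees the new state
    return narrate(context)                            # phase 2

state = {("lantern", "location"): "cellar"}
reply = game_turn(state, "take the lantern", [])
```

The key design point is that narration happens strictly after the state update, so the prose can never describe a world that the database disagrees with.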
A major component of the system was that, in addition to the usual blocks-world subject-property-object representation of the game state, the system also used the database to store textual fragments that encoded things like thoughts, plans, moods and plot points.
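A toy version of that hybrid store might look like the following: structured subject-property-object triples for the blocks-world facts, plus tagged free-text fragments for thoughts, plans, moods, and plot points. The schema and names are my assumptions, not the original implementation.

```python
# Hybrid store: exact facts as (subject, property) -> object triples,
# softer narrative material as tagged free-text fragments.

class WorldStore:
    def __init__(self):
        self.triples = {}     # (subject, property) -> object
        self.fragments = []   # (kind, text), e.g. ("mood", "...")

    def set_fact(self, subj, prop, obj):
        self.triples[(subj, prop)] = obj

    def get_fact(self, subj, prop):
        return self.triples.get((subj, prop))

    def add_fragment(self, kind, text):
        self.fragments.append((kind, text))

    def fragments_of(self, kind):
        return [t for k, t in self.fragments if k == kind]

store = WorldStore()
store.set_fact("key", "location", "drawer")
store.add_fragment("plan", "The innkeeper intends to hide the key tonight.")
```

Splitting the state this way lets the triples be checked mechanically while the fragments stay in the LLM's native medium of prose.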
Adding programmatic constraints to the database and its updates made a vast difference to the quality of the output: it prevented logical absurdity of the sort described above, because attempts by the LLM to do absurd things were simply rejected.
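As a sketch of that rejection mechanism: each proposed update is validated against the current state before it is written, and invalid ones are dropped. The specific rule here (you can only take an object that is in your current room) is an invented example of such a constraint.

```python
# Validate each proposed update against the current state; absurd
# updates are rejected rather than written to the database.

def valid_take(state, actor, item):
    """An actor may only take an item located in the same room."""
    return state.get((item, "location")) == state.get((actor, "location"))

def apply_update(state, update):
    """Apply one (actor, 'take', item) update, or reject it."""
    actor, verb, item = update
    if verb == "take" and not valid_take(state, actor, item):
        return False              # absurd: leave the state untouched
    state[(item, "location")] = actor
    return True

state = {("player", "location"): "cellar",
         ("sword", "location"): "armory"}
ok = apply_update(state, ("player", "take", "sword"))  # wrong room
```

Because the check runs before the write, the narration phase never sees (and so never describes) an impossible state.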
Finally, there was a hierarchical element to the storytelling, not just simulation. There is much more to storytelling than simulation, and experimenting with this became the most interesting part of the whole enterprise; there is plenty of room for future research here.
The downside was that all this complexity made the system _extremely_ compute-intensive, and thus very slow. But the gameplay was fascinating. We will no doubt see more things like this in future as LLMs get faster; it's an obvious application for open source software.
Fascinating that you actually went through with an implementation. I've been throwing around the idea of LLMs somehow having a SAT solver built into them (maybe trained to have one as an emergent property), but something like you describe is the next best thing.
bo1024
yesterday at 12:51 PM
Is it by chance open source?