majormajor
today at 3:56 PM
> Discriminating good answers is easier than generating them.
I don't think this is true for many fields - especially outside of math/programming. Let's say the task is "find the ten most promising energy startups in Europe." (This is essentially the sort of work I frequently see people here or on LinkedIn talking about using the research modes of models for.)
In ye olden days pre-LLM you'd be able to easily filter out a bunch of bad answers from lazy humans, since they'd be short, contain no detail, and have a bunch of typos and copy-paste formatting inconsistencies. You can't do that for LLM output.
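For concreteness, this is the kind of cheap surface filter that used to work (the thresholds and typo list are made up for illustration); fluent LLM output passes every one of these checks while still being unvetted on substance:

```python
# Illustrative only: surface checks that used to catch lazy human answers.
import re

COMMON_TYPOS = re.compile(r"\b(teh|recieve|seperate|definately)\b", re.IGNORECASE)

def looks_lazy(answer: str) -> bool:
    too_short = len(answer.split()) < 150        # barely any detail
    has_typos = bool(COMMON_TYPOS.search(answer))
    paste_junk = "\u00a0" in answer              # non-breaking spaces from copy-paste
    return too_short or has_typos or paste_junk
```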
So unless you're a domain expert on European energy startups you can't check for a good answer without doing a LOT of homework. And if you're using a model that usually only looks at, say, the top two pages of Google results to try to figure this out, how is the validator going to do better than the original generator?
And what about when the top two pages of Google results start turning into model-generated blogspam?
If your benchmark can't evaluate prospective real-world tasks like this, it's of limited use.
A larger issue is that once your benchmark, which used this task as a criterion based on an expert's knowledge, is published, anyone making an AI agent is heavily incentivized (intentionally or not!) to train specifically on this answer without actually getting better at the fundamental steps of the task.
IMO you can never use an AI agent benchmark that is published on the internet more than once.
jgraettinger1
today at 5:09 PM
> You can't do that for LLM output.
That's true if you're just evaluating the final answer. However, wouldn't you evaluate the context -- including internal tokens -- built by the LLM under test?
In essence, the evaluator's job isn't to do separate fact-finding, but to evaluate whether the under-test LLM made good decisions given the facts at hand.
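As a rough sketch of what that could look like, assuming the evaluator gets the agent's trace (the TraceStep structure and the grounding rule below are invented for illustration, not any vendor's real schema):

```python
# Hypothetical sketch: score an agent on whether its decisions are grounded in
# the facts it actually retrieved, rather than re-doing the research yourself.
from dataclasses import dataclass, field

@dataclass
class TraceStep:
    retrieved_facts: list[str]   # snippets the agent pulled in at this step
    decision: str                # what the agent concluded or did next
    cited_facts: list[str] = field(default_factory=list)  # facts it claims support the decision

def grounding_score(trace: list[TraceStep]) -> float:
    """Fraction of steps whose decision cites only facts present in that step's retrieval."""
    if not trace:
        return 0.0
    grounded = sum(
        1 for step in trace
        if step.cited_facts and all(f in step.retrieved_facts for f in step.cited_facts)
    )
    return grounded / len(trace)
```

This doesn't tell you whether the retrieved facts were themselves any good, only whether the agent used what it had sensibly, which is the division of labor being proposed.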
majormajor
today at 6:19 PM
I would if I were the developer, but if I'm the user being sold the product, or a third-party benchmarker, I don't think I'd have full access to that context if most of it is happening inside the vendor's internal services.
alextheparrot
today at 8:20 PM
> Good evaluations write test sets for the discriminators to show when this is or isn’t true.
If they can't write an evaluation for the discriminator, I agree. But all the input data issues you highlight apply to generators too.
> IMO you can never use an AI agent benchmark that is published on the internet more than once.
This is a long-solved problem far predating AI.
You do it by releasing 90% of the benchmark publicly and holding back 10% for yourself or closely trusted partners.
Then benchmark performance can be independently evaluated to determine whether performance on the held-back 10% matches performance on the public 90%.
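As a minimal sketch of that check, assuming the benchmark is just a list of task IDs and you already have per-task scores (the 90/10 ratio, names, and comparison are illustrative assumptions):

```python
# Sketch of the public/holdout comparison described above.
import random
from statistics import mean

def split_benchmark(task_ids, holdout_frac=0.1, seed=0):
    rng = random.Random(seed)
    shuffled = list(task_ids)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - holdout_frac))
    return shuffled[:cut], shuffled[cut:]        # (public split, private holdout)

def contamination_gap(scores: dict, public_ids, holdout_ids) -> float:
    """Positive gap = the agent does better on tasks it could have trained against."""
    return mean(scores[t] for t in public_ids) - mean(scores[t] for t in holdout_ids)
```

If the gap is much larger than run-to-run noise on the holdout, that's a strong signal the public split leaked into training.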