> I propose a new kind of type inference algorithm that always prioritises the type unifications the end-user is most likely to care about, independent from the types order in source code.
> If we could instead unify types in the order the end-user deems most important
I think the problem with this is that the desirable priority is context- and programmer-dependent, and no real reason is given for why ordering expressions is unacceptable. This approach also doesn't appear to actually be independent of expression order; it just imposes additional structure on top of it via multiple inference passes, making the whole affair more rigid and more complicated. I can only guess this makes reasoning about higher-order polymorphism much harder.
Ranking on source order alone is simple, easy to internalize, easy to reason about, and gives the utmost control to the programmer. For instance, with the example given, this is the intuitive way I'd have constructed that function without even really thinking about it:
```
fn example(x) -> Point[int] {
    let p = Point(x, x);
    log(["origin: ", x]);
    return p;
}
```
netting the "ideal" error.
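To make concrete what I mean by blame following source order, here's a toy sketch (mine, not from the post; the constraint tuples and site labels are made up) of a unifier that processes constraints strictly in the order they appear, so the *later* conflicting use is the one the error points at:

```python
# Toy order-driven unifier: constraints are solved in source order,
# so whichever expression disagrees with what's already been fixed
# gets the blame. Types are just strings here for simplicity.

class Var:
    def __init__(self, name):
        self.name = name

def resolve(t, subst):
    # Chase substitutions until we hit a concrete type or an unbound var.
    while isinstance(t, Var) and t.name in subst:
        t = subst[t.name]
    return t

def unify(a, b, subst, site):
    a, b = resolve(a, subst), resolve(b, subst)
    if isinstance(a, Var):
        subst[a.name] = b
    elif isinstance(b, Var):
        subst[b.name] = a
    elif a != b:
        raise TypeError(f"{site}: expected {a}, found {b}")

def infer(constraints):
    # Walk the constraints in source order; the first conflict wins.
    subst = {}
    for a, b, site in constraints:
        unify(a, b, subst, site)
    return subst

# Mirroring the example above: x is pinned to int first (via the
# Point[int] return type), so the later string use in log() is blamed.
x = Var("x")
constraints = [
    (x, "int", "Point(x, x)"),
    (x, "str", 'log(["origin: ", x])'),
]
try:
    infer(constraints)
except TypeError as e:
    print(e)  # the later, conflicting use gets the blame
```

Reorder the two constraints and the blame flips to the Point construction, which is exactly the "move the line you trust most to the top" control I'm arguing for.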
I think the problem here stems from an expectation that type inference is a kind of magic wand for getting something like dynamic typing. When no thought is put into the actual structure of the program WRT type inference, what you get instead is an icky system that isn't Just Doing The Thing. Vibes-based typing? If that's the goal, I wonder if it might be better served by fuzzy inference: multiple passes generating potential conclusions with various confidence scores.
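By way of illustration, a purely speculative sketch of what I mean by confidence-scored conclusions (everything here is invented for the example): instead of failing on the first conflict, collect the type each use site demands and rank candidate typings by how many sites agree.

```python
# Speculative "vibes-based" pass: gather the type each use site
# suggests for a variable, then rank candidates by the share of
# sites that agree, flagging the minority sites as likely errors.

from collections import Counter

def rank_candidates(uses):
    # `uses` maps a use-site description to the type it demands.
    counts = Counter(uses.values())
    total = sum(counts.values())
    return sorted(
        ((ty, n / total) for ty, n in counts.items()),
        key=lambda pair: -pair[1],
    )

# Same example as before: two sites want int, one wants str.
uses = {
    "Point(x, x)": "int",
    "return type Point[int]": "int",
    'log(["origin: ", x])': "str",
}
for ty, confidence in rank_candidates(uses):
    print(f"x : {ty} (confidence {confidence:.2f})")
# int wins with 2/3 of the sites agreeing, so the log call is
# reported as the outlier rather than necessarily "the" error.
```

Whether that's actually pleasant to use is another question; it trades a crisp, order-based rule for heuristics, which is the trade-off the original proposal seems to be circling anyway.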