@tao@mathstodon.xyz:

This is an expansion of a point I initially wrote in the context of a MathOverflow question https://mathoverflow.net/questions/487041/collaborative-repositories-on-open-problems/487065#487065, but also has relevance to the role of AI in activities such as mathematics where ideation is important.

It is commonly accepted that one of the impediments to further progress in mathematics is a shortage of new ideas. Naively, one can model this hypothesis by proposing that

(number of new ideas) (*)

is the key factor determining the rate of progress, and then try to support efforts to maximize the quantity (*).

However, in an era of increasingly large amounts of AI-generated mathematics, the quality of these ideas becomes increasingly relevant. Only a small fraction of new ideas tend to be good and fruitful ones; a bad idea can actually impede progress by wasting more time than it saves. So, a more realistic model would be that it is actually the product

(number of good new ideas) * (signal-to-noise ratio of the idea pool) (**)

that is the important factor which is worth maximizing. (This is still a massive oversimplification - for instance, it assumes a binary classification of ideas into "good" and "bad" - but will serve as a minimal toy model that suffices to illustrate the broader points that I wish to make here.) (1/3)
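The contrast between (*) and (**) can be illustrated with a minimal sketch of the toy model. The function name `progress_factor` and the specific numbers below are illustrative assumptions, not from the original post; the sketch keeps the binary good/bad classification and measures signal-to-noise as the ratio of good to bad ideas.

```python
def progress_factor(total_ideas: int, good_fraction: float) -> float:
    """Toy model (**): (number of good ideas) * (signal-to-noise ratio).

    Assumes a binary split of ideas into good and bad, with the
    signal-to-noise ratio taken as good/bad. Illustrative only.
    """
    good = total_ideas * good_fraction
    bad = total_ideas - good
    snr = good / bad if bad > 0 else float("inf")
    return good * snr

# Under the naive model (*), generating 10x as many ideas is always better.
# Under (**), a flood of mostly-bad ideas can make things worse:
baseline = progress_factor(100, 0.10)   # 10 good ideas, SNR 10/90
flooded = progress_factor(1000, 0.02)   # 20 good ideas, but SNR 20/980
print(baseline, flooded)
```

In this hypothetical scenario, the flooded pool contains twice as many good ideas, yet its progress factor is lower, because the signal-to-noise ratio degrades faster than the count of good ideas grows.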

@commaok@inuh.net:

@tao thanks for writing these notes. This one inspired an entire blog post: https://commaok.xyz/ai/llm-language/