It is commonly accepted that one of the impediments to further progress in mathematics is a shortage of new ideas. Naively, one can model this hypothesis by proposing that
(number of new ideas) (*)
is the key factor determining the rate of progress, and then try to support efforts to maximize the quantity (*).
However, in an era of increasingly large amounts of AI-generated mathematics, the quality of these ideas becomes ever more relevant. Only a small fraction of new ideas turn out to be good and fruitful; a bad idea can actually impede progress by wasting more time than it saves. So, a more realistic model would be that it is actually the product
(number of good new ideas) * (signal-to-noise ratio of the idea pool) (**)
that is the important factor worth maximizing. (This is still a massive oversimplification - for instance, it assumes a binary classification of ideas into "good" and "bad" - but it will serve as a minimal toy model that suffices to illustrate the broader points that I wish to make here.) (1/3)
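To make the contrast between (*) and (**) concrete, here is a minimal numeric sketch of the two metrics. All function names and numbers are illustrative assumptions of mine, not part of the toy model itself; in particular, I take "signal-to-noise ratio" to mean the ratio of good to bad ideas in the pool, matching the binary classification above.

```python
def naive_metric(good: int, bad: int) -> int:
    # (*): total number of new ideas, regardless of quality.
    return good + bad

def refined_metric(good: int, bad: int) -> float:
    # (**): (number of good ideas) * (signal-to-noise ratio),
    # where the signal-to-noise ratio is taken to be good / bad.
    return good * (good / bad) if bad else float("inf")

# A small pool of mostly-good ideas...
print(naive_metric(10, 5), refined_metric(10, 5))      # 15 20.0
# ...versus a pool flooded with low-quality ideas: (*) goes up,
# but (**) collapses, reflecting the time wasted sorting out the noise.
print(naive_metric(20, 200), refined_metric(20, 200))  # 220 2.0
```

The second scenario doubles the number of good ideas yet scores far worse on (**), which is the point: maximizing raw idea count alone can be counterproductive.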