A “camera lucida” is a device that uses a prism held by a metal armature to project an image of the scene in front of it onto a piece of paper below it, sort of like a modern-day projector hooked up to a live camera stream.
These were probably invented in the early 1400s, although published accounts of them don’t appear until the late 1500s. Part of the reason for that is likely that they were closely guarded trade secrets of artists, who used them to achieve a degree of accuracy that was previously impossible, or at least extremely difficult, to achieve freehand.
The artist David Hockney became very interested in this subject years ago and wrote a book about it in 2001. His basic theory was that the remarkable improvement in accuracy and realism was directly attributable to secret use of the camera lucida (and also an earlier device called a camera obscura).
As he pointed out, before that period, you would never see a painting of a lute in perspective that didn’t look distorted and wrong. While you could use the “rules of perspective” to draw simple rectilinear shapes realistically, the more complex geometry of a lute was beyond normal human ability to portray realistically in space. This theory is known as the Hockney-Falco thesis.
Ever since I learned about this in college in the early 2000s, I sort of mentally applied an asterisk to the works of certain painters. For instance, as much as I respect and admire Ingres and Caravaggio, the awe I had for their skills was tempered by the realization that they likely availed themselves of this sort of mechanical aid.
And sure, much of the artistry is in the concept, the composition and framing, the colors, the paint strokes, etc. But that breathtaking lifelike realism is what impressed me the most, and that part of it was shattered at least partly by this revelation. It also made me respect even more the sculptural realism of Michelangelo (and also his studies which are clearly sketches made from life).
In any case, the reason I bring this up now is that I believe we are on the verge of the same kind of thing happening in mathematical research fields with the advent of models like GPT-5 Pro.
I’ve already used it to do what I suspect is genuinely new and interesting research (as I’ve detailed in recent threads), and we just today got an update from Sebastien Bubeck at OpenAI showing that the model was able to prove an interesting result in contemporary mathematics using a new proof, in a single shot no less.
So this new age is suddenly upon us. We just saw a result from Chinese computer scientists last week beating a record for optimal sorting that stood for 45 years.
I mused at the time about how I wondered whether AI was used in some way to generate that result.
Also see the recent paper in the quoted tweet, which has a similar character in that it is both surprising and yet also elementary. These seem to me to be hallmarks of results that may have benefited from AI in some way.
Now, I don’t want to accuse these authors of anything. For all I know, they did everything manually, just like the painters in the 1300s did.
And even if they did use AI to help them, we don’t yet have accepted mores about how to deal with that: what disclosures are warranted, and how credit should be parceled out and considered. The whole concept of authorship must be reconsidered today.
In my recent thread where I investigated alongside GPT-5 Pro about the use of Lie theory in deep learning, I devised the prompts myself, even though I would never in a million years be able to generate the theory and code that the model developed as a result of those prompts. Do I get the credit for the result if it turns out to revolutionize the field?
What about my subsequent experiment, where I used my original prompts, which I wrote myself, along with a “meta prompt” to get GPT-5 Pro to come up with 10 more pairs of prompts modeled loosely after my own, but involving totally different branches of mathematics that developed in totally different directions?
Do I get credit for those theories if they turn out to be important? I sure hope so, because I already published the ideas and code on GitHub and publicized them widely, so if anyone does follow up on those lines of inquiry, academic ethics would require them to cite me.
But even if you think I deserve credit for steering the AI with my own prompt, then surely my claims to priority are somewhat weakened for the 10 other theories that are the result of using my prompt as a model for meta prompting, right? After all, I didn’t even know that “Tropical Geometry” was a thing a couple of days ago, yet now I have a theory and code that applies it to AI research.
I would posit that, just as I started mentally applying an asterisk to certain artworks and artists akin to the asterisk next to Barry Bonds’ name in the Hall of Fame, I suspect that most scientists will start doing the same thing for any new mathematical theory-based papers in the next year or so.
I suspect people will soon enough say things like “this guy is the real deal; he wrote his best papers pre-2025!” to distinguish between those who did all their work manually using their own brains versus those who used AI assistance.
And that is absolutely a valid way to think about things if AI is able to not only answer hard theoretical problems but even pose the interesting questions on its own.
If I’m right, we should prepare for a coming tsunami of shocking research papers that beat long-standing records and limits, and shatter walls that were long assumed to be relatively impenetrable short of some breakthrough theory.
And I believe many of these results will share something in common: that they were always right there in front of us, but required combining theory from different areas of math and applied subjects in new ways that weren’t pursued before because of human, sociological reasons: different fields splitting across lines, with different terminology, journals, practices, departments, conferences, social networks, etc.
The other form I suspect these results will take is to leverage elementary results in bizarre ways that, for whatever reason, don’t come naturally to human brains, but which we can understand once they are clearly explicated for us.
Another form they might take is results that leverage long-forgotten esoterica from late 19th century analysis. The kinds of tricks that allowed Feynman to solve integrals that no one else could solve.
These results are known and exist in books, but no one reads those books anymore, and the original theories they were developed for have been largely superseded by our modern machinery that operates at many levels of increased abstraction and generality.
Another form they might take is simply applying known mathematics that is understood by just a few hundred specialized geniuses in the world who focus only on theory and not at all on applications.
This math may have just been “sitting around” since the 1950s or 1970s, waiting for someone to apply it to a practical problem such as those in AI research. Many of the ideas I investigated with GPT-5 seem to fall into this category.


20 August, 20:43
holy shit they found a power series solution to ALL polynomial equations!! (bypassing Galois which says you can’t solve them in radicals)
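To make the quoted claim concrete: for certain depressed quintics like x^5 + x = a, the root is known to be expressible as a power series in a (the classical Bring radical), even though Galois theory rules out a solution in radicals. This is just an illustrative numerical sketch of that idea, not the construction from the paper the tweet refers to: a fixed-point iteration x ← a − x^5 whose iterates reproduce the series’ partial sums for small |a|, checked against a plain bisection root.

```python
# Illustration only: the root of x^5 + x = a as a "series-like" fixed point.
# For small |a| the iteration x <- a - x^5 converges to the real root,
# mirroring the power series x = a - a^5 + 5a^9 - ... (Bring radical).

def series_root(a, iters=50):
    """Fixed-point iteration that tracks the power-series solution."""
    x = 0.0
    for _ in range(iters):
        x = a - x**5
    return x

def bisect_root(a, lo=-2.0, hi=2.0, iters=200):
    """Reference root of f(x) = x^5 + x - a by bisection."""
    f = lambda x: x**5 + x - a
    for _ in range(iters):
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

a = 0.3
print(abs(series_root(a) - bisect_root(a)) < 1e-9)  # the two roots agree
```

The point is only that “no solution in radicals” never forbade other closed or series forms; what’s newsworthy is extending series-type solutions to all polynomial equations.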
