Generative AI 2025
Posted August 12, 2025
It is an empirical fact that generative AI has surpassed human intelligence in many areas.
It is also an empirical fact that the use of AI does not need any skill. It does not even need
reading skills to submit a PDF to a machine and get a PDF back. My
chickens could have done that. That is a huge difference from other
assistants like computer algebra systems. Now, you can take an assignment (the more convoluted the better)
and give it directly to the machine. Any moron can do that.
AI is even good at highly convoluted riddles that are designed to
deter AI, so that the recommendation to write problems which AI cannot solve becomes very tricky to follow.
There was recently an article
about Tran Nam Dung, who teaches gifted students and recommends that teachers should not become dependent
on AI. This will be a big challenge. Many seem already to depend on generating course work automatically. It is
like shooting your future self.
AI has started to do well in math competitions. It can solve problems nicely. Strangely, even mathematicians seem not too concerned.
A Scientific American article of August 7 by Emily Riehl contains the quote: "Yet I'm still not worried".
It is also an empirical fact that most students use AI to do their work (probably already since the beginnings in fall 2022).
From a year ago already:
86 percent of students use AI in their studies, even to summarize, paraphrase, or do first drafts.
Since these tools have emerged as decent tutors which are patient, never lose their temper, and have
rather decent knowledge in basic areas, the question could come up for schools: why do we need teachers at all?
On a large scale, this leads to natural worries about whether humans will fade out of education. Why not cut
out the middleman? One already speaks about the collapse of higher education.
The danger is that AI demotivates honing basic skills like programming, learning languages, or
even learning problem-solving skills. My own feelings about this
have fluctuated over the last 25 years. I started to think seriously about AI during a project in 2003/2004.
At the moment, I'm very worried, first of all about education, and I think soon also
about research.
If you can write an essay or book or report in a few seconds, if you can compose a song or paint a painting
on the spot, if you can write a paper with the push of a button, what is its value?
Of course, the use is actually quite exciting. One can get surprising effects quite easily.
My concern is that the use of these tools needs to be acknowledged, even for tiny parts.
At the moment, I see things as follows:
- It will become more and more difficult to evaluate and value work. The nature of evaluations and value criteria will change.
- The ability to think without AI, to solve problems independently, will be valued more.
- Testing will happen more and more in person. Maybe this is an opportunity for humans to stay in the game.
The problem, of course, is that not everybody is able to do human evaluation well
(there is the Dr. Fox effect, for example).
- Courses taken online and tested online will hardly earn any credit any more, even though they
can still have high value. But giving credit will be tricky.
- We might arrive at a moral imperative that the use of AI needs to be acknowledged, even for small things,
even for spell checking, proofreading, or administration. Otherwise, it is just cheating.
As for me personally, at the moment, if I see anything that appears even remotely to have been authored, painted, or filmed by AI
(and which is not declared as such!),
I push the dislike button and cancel the author or channel. As a referee, I refuse to read papers that appear
to be AI generated. I simply have no interest in reading random ramblings by a machine, no interest in hearing
music that has been written by a machine. I turn off AI settings on my phone. I try to scroll past AI-generated summaries
by search engines.
Will AI in the future be able to identify humans who use the work of somebody else (AI is just "somebody else")
and claim it as their own? But this opens a can of worms. How can we trust that these tools are not manipulated to
favor those who might just pay enough to buy a good evaluation?
Maybe the steps to be taken are more drastic. Many schools have already started to ban cell phones during
class. It is still an outrageous thought, but we might actually see in the future that the use of generative AI will be
forbidden, similarly to performance-enhancing drugs in sports, once one realizes that the side effects of these
drugs are much more severe than anticipated. The biggest negative side effect is "nihilism": why do we still do things
at all?
Generative AI in the classroom
Oliver Knill
Posted 04/08/22
Having experimented with AI in
education 20 years ago (we were programming bots from scratch too, but the bulk of the work was
data writing), I talked about some
math and AI in a math circle in 2007
and
a computer vision project in 2007.
I remained interested in the subject but did not work in AI. That changed last year, when deep learning models started
to generate texts or images that were astoundingly good and got better within a few months.
- Quality in math: last fall the math was still rather poor: 1, 2, 3, and an illustration in higher math.
- A major problem is the lack of references. The tools are opaque black boxes which need to be taken with a grain of salt.
A journalist has documented hallucinations.
- Related is the problem that the machines do not give credit for ideas. They grab without attribution.
- Detection tools have appeared quickly, but they are too weak.
- There is some hope:
the models have become worse over time, due to filtering, manipulation, purposeful misinformation, and possibly forced restraint.
- Humanity faces not only the well-known dangers of nuclear annihilation, climate change, or a malicious AI,
but also the danger that the machines become so good at everything that we might not see any purpose any more.