November 10, 2025: Niayesh Afshordi on the Big Bang

Oliver Knill, November 11, 2025
It was a fun event and quite well attended. Remarkable only that the audience was not that young. As a high school student, I would have made a pilgrimage even to another town to see such a thing. But we live in a world of YouTube and shorts. Melissa Franklin as usual made some fun jokes (which of course at Harvard do not resonate so much; Harvard is after all known to be a rather toxic place for humor; I myself had to defend it once). Lisa Randall also did not disappoint, as she is known to be someone who can be quite direct and state her honest opinion. She started with a rather harsh critique of the book, essentially saying that she did not like it. Randall even answered a question from the audience herself at the end, rather than letting the guest speak. But Niayesh Afshordi handled Harvard pretty well and navigated things gracefully. He also tried to bring in some fun, like showing himself and his co-author as Einstein and Oppenheimer. Thank god, that part had not been AI generated, but hand drawn. There was at least one slide (which I unfortunately did not catch on photo) where a joke had been drawn by AI. The colorized picture of Einstein and Lemaitre did not bother me. One can look up the picture, and it is rather obvious that it has been colorized.
[P.S. Including AI generated parts in a presentation has become more and more tricky. I have done that myself but always try to label it as such. One can still defend this today with "it is clear that this was AI", but it has become increasingly difficult to find out. I am now more and more locked into a defensive attitude: AI generated material is ok if labeled as such, but it should not be used without disclosure. The reason is that it will become more and more difficult to find out whether a paper, book, or piece of art has been generated in a few seconds by a bot, or whether it has been seriously researched and fought with by a human. What if machines start to produce better research or write better books than humans? How can we then distinguish? Do humans have to prove that they wrote something by videotaping their work? If a machine can write one book in a few seconds, it can write thousands of books in a day. Even the "proof" that a book has been written by a human could be AI generated.]
Another thing, more on the meta level: Afshordi hinted at some political statements about the current US administration. This is natural, given that funding in science is political. But even if one agrees, this does not feel appropriate in a talk, even in front of a mostly liberal audience. Critics of very expensive machines have a point and should be listened to, even if one does not agree. Especially in a field like fundamental theoretical physics, a lot of public relations (PR) work is necessary to get the billions. It was a bit difficult for Afshordi to talk about the critics (Sabine Hossenfelder was mentioned as one of the entities that are anti-science), because Hossenfelder and Afshordi know each other from the Perimeter Institute. Afshordi voiced the opinion that it is good to take critics seriously and address them. I agree. Science needs to be about arguments. To the right is part of the discussion about Hossenfelder. The fact that Hossenfelder is critical of spending billions on a new machine that will not necessarily deliver can be understood. The physicists themselves are to blame for hyping fields which then do not deliver. It is again a matter of trust. If you claim for decades that one should find supersymmetric particles in the LHC and then does not find them, this is a PR problem. On the other hand, we live in a time where ideas are cheap (thousands of papers are written every day) and real experiments become more and more important. But one should be able to ask whether it is reasonable to spend so much money on something which might not deliver.


[P.S. It is not only in particle physics. During the last couple of years, the arrogance of so called "experts" has backfired. There is nothing worse than telling the public something for political, policy, or PR reasons (even if it is with good intentions like "public safety" or "the greater good") and then being proven wrong. In recent times, critics of health policies backed up by "science" have been silenced, not with arguments and reasoning, but with statements like "because I told you so", or by calling the critic "anti science" and playing the "expert" game. This does not work in the long term. You can be right 10 times and lie once, and the "taste of the lie" wins. The public trust in science has eroded during the last couple of years, and it will take decades to regain this trust. Also in high energy physics, things get political because we deal with projects which are costly. Silencing critics with promises that might not turn out to be fulfilled is never a good strategy. Lisa Randall (who also can be funny) joked during the event about LISA, the laser interferometer planned in space. LISA has been put on ice. Also on the funny side: if you google LISA, then the K-pop artist Lisa Manobal comes first. It is brutal to be a scientist. A humble 28 year old Thai pop star currently has maybe a million times more influence and reach than an accomplished American theoretical physicist. The only way to cope with this is humor, and maybe the hope that in 100 years, only the physicist will be remembered. But there is the nagging suspicion that in 100 years, there will be nobody anymore who is interested in the history of human scientists. Maybe we will all have chosen to go into rigor mortis and die swiping videos, letting AI do the thinking, writing and talking. It does not look good for the survival of human culture.]

Slides (click to see them larger)
