Jesmond shared path bridge

I was heavily involved in the campaign in 2016 to preserve a separated shared pathway across the new inner city bypass at Jesmond, in particular by setting up the Kiss Your Path Goodbye website.

It was very gratifying today to see visible physical evidence that the campaign was a success, with the overhead pedestrian bridge put in place over the weekend.

The new shared path bridge, looking from Newcastle Rd to the south.

The AI Con

I’ve just finished reading the excellent book “The AI Con” by Emily M Bender and Alex Hanna. I was made aware of the book when David Marr interviewed Emily on the Late Night Live radio show on 3 July 2025. The book eloquently and powerfully, with much considered insight, and with doses of humour, exposes the vacuous and dangerous hype of AI promoters.

The book analyses many egregious claims, and one that particularly resonated with me is the way big tech positions AI as the authoritative information source … you ask a question and you get back the answer. Bender and Hanna describe this as the fantasy of “frictionless” access. (I have edited the text below slightly so that it reads more fluently as a stand-alone quote.)

Many of the proposed use cases of Large Language Models (LLMs) are as information access systems, often as a direct replacement for search engines. This use case trades on a long-standing fantasy of making information access “frictionless”: you type in your question and you get the answer. But text synthesis machines are a terrible match for this use case, on two levels. 1. They are inherently unreliable and inclined to make s**t up. 2. Friction in information access is not only beneficial, but critically important.

“The AI Con”, Emily M Bender and Alex Hanna, 2025, page 171.

The point here is that learning is not the simplistic “ask a question, get an answer” model that AI chatbots espouse. Learning involves asking questions, getting multiple answers, evaluating the answers, comparing and contrasting them, rating their authority, identifying gaps in them, and refining our understanding so that we can ask follow-up questions. AI systems, in their mode of presenting “the answer”, devalue, demote, and dissuade the essential human components required for authentic information enquiry – analysis, comparison, evaluation, reflection and so on.

Take for example Google’s “AI Overview” that they have unhelpfully foisted upon us all with no way to disable. Recently I had an occasion where my laptop rebooted overnight after a Windows Update and I lost the set of web pages open in my browser while doing ongoing research. No big deal, but just a bit annoying. Let’s ask Google “How to disable automatic restart in Windows 10”.

The AI Overview is then confidently presented as the authoritative answer in four ways …

  1. Positioned at the top of the page
  2. With an icon
  3. In large font
  4. With the instructions highlighted

The answer in the AI Overview is correct, but it is not the answer to the question I wanted answered. The AI Overview provided instructions on how to prevent an automatic restart after a system failure, whereas I wanted to know about preventing an automatic restart after a Windows Update. The list of conventional Google search results quickly reveals (by the use of human intelligence) that my question covered a number of scenarios and that I needed to refine my question. The AI Overview, taking up considerable space at the top of the page, was a prominent impediment to getting to the actual information I was seeking.
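For what it’s worth, the setting I was actually after lives under the Windows Update policy key. A hedged sketch follows – the registry path and value are as I recall them from Microsoft’s Group Policy documentation, and behaviour may vary across Windows 10 editions:

```reg
Windows Registry Editor Version 5.00

; Setting NoAutoRebootWithLoggedOnUsers to 1 tells Windows Update not to
; automatically restart the machine while a user is logged on.
[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU]
"NoAutoRebootWithLoggedOnUsers"=dword:00000001
```

Finding this, of course, took exactly the kind of comparing and refining of search results that the AI Overview gets in the way of.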

Clearly AI Overview performs badly when presented with questions that are ambiguous, or where there are multiple legitimate answers. But even for questions where there is no ambiguity, AI Overview’s results are egregiously bad. Take the example of postcodes used in mailing addresses. In Australia each 4 digit postcode is associated with one or more suburb/town names. There is a canonical and unambiguous correlation of postcodes to suburb/town names maintained by Australia Post. How does AI Overview fare in this space? Let’s try “What is the postcode for Lambton?”

Full marks to the system for using location information to know that I am asking about “Lambton, NSW, Australia” and not “Lambton, Quebec, Canada”, and the answer of 2299 is correct, and the comment that “this also applies to North Lambton” is also correct. But the answer omits the information that 2299 is also the postcode for Jesmond. So while the AI Overview answer is correct, it only provides two-thirds of the relevant information about postcode 2299 while presenting itself as the authoritative answer.

It gets worse if we pose the question the other way round and ask “What suburb has postcode 2299?”.

In this case the answer of Lambton and North Lambton is correct, but New Lambton is incorrect! (Its postcode is 2305.)

So in the three examples preceding, how has Google AI Overview fared?

  1. It provided a correct answer, but not to the question I was seeking an answer for.
  2. It provided a correct but misleadingly incomplete answer.
  3. It provided an incorrect answer.
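The irony is that the postcode relationship the AI Overview fumbles is a plain many-to-one lookup. A minimal Python sketch, using only the suburbs and postcodes mentioned above (the full canonical list is maintained by Australia Post, and this tiny table is just for illustration):

```python
# Each suburb/town name maps to exactly one 4-digit postcode,
# but several suburbs can share the same postcode.
POSTCODES = {
    "Lambton": "2299",
    "North Lambton": "2299",
    "Jesmond": "2299",
    "New Lambton": "2305",
}

def suburbs_for(postcode: str) -> set[str]:
    """Return every suburb that shares the given postcode."""
    return {suburb for suburb, pc in POSTCODES.items() if pc == postcode}

print(suburbs_for("2299"))        # all three suburbs, not just two
print(POSTCODES["New Lambton"])   # 2305, not 2299
```

A deterministic lookup like this answers both directions of the question completely and correctly every time – something a text synthesis machine, by construction, cannot guarantee.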

Interestingly I tried the postcode queries on other devices and browsers, and the answers returned were not always the same, and sometimes the answer was correct. But regardless of whether the answer was correct, partially correct or totally incorrect – it was presented with the same level of confident self-assertive authority.

And that illustrates perfectly the essence of the AI Con: it is a confidence trick. It claims to deliver something (the answer) that it often does not deliver, and by design cannot deliver reliably. At the heart of this confidence trick is the “I” in “AI”, promising intelligence where there is none.

A Large Language Model AI is just a machine with inputs and outputs. It takes text training data as input, does lots of pattern matching, accepts a text question, spins its internal algorithmic pattern matching, and then emits a text “answer” as the output. But there is no thinking involved: no meaningful analysis or comparison of alternative answers, no understanding of me as a person and my current state of knowledge, no real-world human experience or plain common sense brought to bear in producing the answer. There is no intelligence. None. Zero.

It’s just like a mechanical sausage making machine, where you add inputs such as meat and spices, turn the handle, and the machine unthinkingly extrudes a sausage.

Sausage stuffer. Wikimedia Commons.

One of my favourite things gained from Bender and Hanna’s book is their designation of large language model AIs as “synthetic text extruding machines”. We wouldn’t describe a machine that extrudes sausages as “intelligent”, nor should we describe an algorithmic pattern matching machine that extrudes text as “intelligent”.

From now on I’m taking “AI” to stand for “Accelerated Ineptitude”.

Bad battery

Here’s why you should never purchase from asusbatteryshop.com.au …

Swelling battery failed after just 20 months.

I purchased a replacement battery for an Asus laptop, and 20 months later it failed. Inspecting the battery shows considerable swelling of the battery cells. I notified the seller of the problem, but they expressed no concern over the poor quality product they had supplied, merely offering “a small discount” if I wanted to buy another one.

Conclusion: Avoid the Asus Battery Shop online store.

Spring 2024

In what seemed a colder than usual August, the budding of my mulberry tree is significantly later this year compared to last year.

  • Checking the weather data for Williamtown shows that average daily maximum temperatures for August 2024 were unchanged from last year, and the minimums were on average 1.5 degrees warmer than last year. This is a good reminder that impressions and recollections about weather are not the same as data about weather.
  • Data from my solar panel system, however, shows that Lambton received 19% less sunshine in August this year compared with last year, so maybe that accounts for the later budding.

Spring 2023

Following my usual custom of noting the start of Spring when the first leaf buds appear on my mulberry tree, this year is unseasonably early. One could almost say unreasonably early, coming 15 days earlier than last year.

This is now the seventh year I’ve photographed the first buds of spring on this tree. I’ve graphed the results below, and while this is admittedly just one collection point of climatic related data, the trend is clear.