My article for the August 2025 edition of “The Local” is now out. This month’s story is on the Lahay butcher shop in Morehead St, Lambton, and the development of the Newcastle District Abattoir.
I have a HP EliteBook laptop that caused me a bit of head-scratching today. The inbuilt display had become quite bright, with low contrast, and was difficult to see unless I was directly in front of the display. Adjusting the brightness down made things worse as the contrast became very low. Looking at the screen, it was as though my glasses had fogged over.
I started trawling through all the possible Windows display settings: the color profile settings, the color calibration wizard, and the Intel graphics control panel. None of this improved matters. Googling for an answer didn’t help, as it only suggested things I’d already tried. I was just about to conclude that the display had some kind of hardware failure when I noticed an icon on the F2 key (next to the brightness down/up keys) that I didn’t recognise. What does that key do?
On pressing the key, the display was back to normal. Hurray!
Unknown to me before now, that button toggles the HP Sure View feature, which is supposed to be a privacy guard feature that makes the display hard to read by someone sitting beside you. (I say ‘supposed to be’ because it makes the screen so irritating to look at that users will quickly switch it off again.)
It was somewhat annoying to discover that what is supposed to be a feature is, when activated accidentally, indistinguishable from a hardware failure. It would have been better if HP had done something like what Microsoft does with its Sticky Keys feature, where if you accidentally try to activate it (by pressing the Shift key 5 times) it:
Explains the feature.
Gives you the option to enable it or cancel.
Tells you how to disable the warning in future, if that’s what you’d like to do.
A good example of how to handle accidental activation of a hidden feature.
I was heavily involved in the campaign in 2016 to preserve a separated shared pathway across the new inner city bypass at Jesmond, in particular by setting up the Kiss Your Path Goodbye website.
It was very gratifying today to see visible physical evidence that the campaign was a success, with the overhead pedestrian bridge put in place over the weekend.
The new shared path bridge, looking from Newcastle Rd to the south.
I’ve just finished reading the excellent book “The AI Con” by Emily M Bender and Alex Hanna. I was made aware of the book when David Marr interviewed Emily on the Late Night Live radio show on 3 July 2025. The book eloquently and powerfully, with much considered insight, and with doses of humour, exposes the vacuous and dangerous hype of AI promoters.
The book analyses many egregious claims, and one that particularly resonated with me is the way big tech positions AI as the authoritative information source … you ask a question and you get back the answer. Bender and Hanna describe this as the fantasy of “frictionless” access. (I have edited the text below slightly so that it reads more fluently as a stand-alone quote.)
Many of the proposed use cases of Large Language Models (LLMs) are as information access systems, often as a direct replacement for search engines. This use case trades on a long-standing fantasy of making information access “frictionless”: you type in your question and you get the answer. But text synthesis machines are a terrible match for this use case, on two levels. 1. They are inherently unreliable and inclined to make s**t up. 2. Friction in information access is not only beneficial, but critically important.
“The AI Con”, Emily M Bender and Alex Hanna, 2025, page 171.
The point here is that learning is not the simplistic “ask a question, get an answer” model that AI chatbots espouse. Learning involves asking questions, getting multiple answers, evaluating the answers, comparing and contrasting the answers, rating the authority of the answers, identifying gaps in the answers, and refining our understanding so that we can ask follow-up questions. AI systems, in their mode of presenting “the answer”, devalue, demote, and dissuade the essential human components required for authentic information enquiry: analysis, comparison, evaluation, reflection, and so on.
Take for example Google’s “AI Overview” that they have unhelpfully foisted upon us all with no way to disable it. Recently my laptop rebooted overnight after a Windows Update and I lost the set of web pages open in my browser while doing ongoing research. No big deal, but just a bit annoying. Let’s ask Google “How to disable automatic restart in Windows 10”.
The AI Overview is then confidently presented as the authoritative answer in 4 ways …
Positioned at the top of the page
With an icon
In large font
With the instructions highlighted
The answer in the AI Overview is a correct answer, but it is not the answer to the question I wanted answered. The AI Overview provided instructions on how to prevent an automatic restart after a system failure, whereas I wanted to know about preventing an automatic restart after a Windows Update. The list of conventional Google search results quickly reveals (by the use of human intelligence) that my question covers a number of scenarios and that I need to refine it. The AI Overview, taking up considerable space at the top of the page, was a prominent impediment to getting to the actual information I was seeking.
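(For anyone chasing the same annoyance: as far as I can tell, the restart-after-update behaviour is governed by a Windows Update policy, not by the startup-and-recovery setting the AI Overview described. A sketch of the registry policy follows, assuming an edition of Windows that honours update policies; verify against Microsoft’s documentation before applying.)

```
Windows Registry Editor Version 5.00

; Don't automatically restart after installing updates
; while a user is logged on.
[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU]
"NoAutoRebootWithLoggedOnUsers"=dword:00000001
```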
Clearly AI Overview performs badly when presented with questions that are ambiguous, or where there are multiple legitimate answers. But even for questions where there is no ambiguity, AI Overview’s results are egregiously bad. Take the example of postcodes used in mailing addresses. In Australia each 4 digit postcode is associated with one or more suburb/town names. There is a canonical and unambiguous correlation of postcodes to suburb/town names maintained by Australia Post. How does AI Overview fare in this space? Let’s try “What is the postcode for Lambton?”
Full marks to the system for using location information to know that I am asking about “Lambton, NSW, Australia” and not “Lambton, Quebec, Canada”, and the answer of 2299 is correct, and the comment that “this also applies to North Lambton” is also correct. But the answer omits the information that 2299 is also the postcode for Jesmond. So while the AI Overview answer is correct, it only provides two-thirds of the relevant information about postcode 2299 while presenting itself as the authoritative answer.
It gets worse if we pose the question the other way round and ask “What suburb has postcode 2299?”.
In this case the answer of Lambton and North Lambton is correct, but New Lambton is incorrect! (Its postcode is 2305.)
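The contrast is worth spelling out: because the postcode-to-suburb correlation is canonical, a plain deterministic lookup gives the complete and correct answer every time, with no pattern-matching guesswork. A minimal sketch in Python (the table below contains only the suburbs discussed above, not the full Australia Post dataset):

```python
# A toy canonical lookup table: suburb -> postcode.
# Entries are limited to the examples discussed above; the real
# canonical dataset is maintained by Australia Post.
POSTCODES = {
    "Lambton": "2299",
    "North Lambton": "2299",
    "Jesmond": "2299",
    "New Lambton": "2305",
}

def suburbs_for_postcode(postcode):
    """Return every suburb sharing a postcode - all of them, every time."""
    return sorted(s for s, p in POSTCODES.items() if p == postcode)

print(suburbs_for_postcode("2299"))  # ['Jesmond', 'Lambton', 'North Lambton']
print(POSTCODES["New Lambton"])      # 2305
```

Unlike the AI Overview, a lookup like this cannot silently omit Jesmond or misfile New Lambton: the answer is read from the data, not synthesised from patterns.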
So in the three preceding examples, how has Google AI Overview fared?
It provided a correct answer, but not to the question I was seeking an answer for.
It provided a correct but misleadingly incomplete answer.
It provided an incorrect answer.
Interestingly I tried the postcode queries on other devices and browsers, and the answers returned were not always the same, and sometimes the answer was correct. But regardless of whether the answer was correct, partially correct or totally incorrect – it was presented with the same level of confident self-assertive authority.
And that illustrates perfectly the essence of the AI Con: it is a confidence trick. It claims to deliver something (the answer) that it often does not deliver, and by design cannot deliver reliably. At the heart of this confidence trick is the “I” in “AI”, promising intelligence where there is none.
A Large Language Model AI is just a machine with inputs and outputs. It takes in text training data, does lots of pattern matching, accepts a text question as input, spins its internals around doing algorithmic pattern matching, and then emits a text “answer” as the output. But there is no thinking involved; there is no meaningful analysis or comparison of alternative answers; there is no understanding of me as a person and my current state of knowledge; there is no real-world human experience or plain common sense brought to bear in producing the answer. There is no intelligence. None. Zero.
It’s just like a mechanical sausage making machine, where you add inputs such as meat and spices, turn the handle, and the machine unthinkingly extrudes a sausage.
One of my favourite things I gained from Bender and Hanna’s book is their designation of large language model AIs as “synthetic text extruding machines”. We wouldn’t describe a machine that extrudes sausages as “intelligent”, nor should we describe an algorithmic pattern matching machine that extrudes text as “intelligent”.
From now on I’m taking “AI” to stand for “Accelerated Ineptitude”.
Here’s why you should never purchase from asusbatteryshop.com.au …
Swelling battery failed after just 20 months.
I purchased a replacement battery for an Asus laptop, and 20 months later it failed. Inspecting the battery showed considerable swelling of the battery cells. I notified the seller of the problem, but they expressed no concern over the poor quality product they had supplied, merely offering “a small discount” if I wanted to buy another one.
Conclusion: Avoid the Asus Battery Shop online store.
In researching for my March 2025 article on the New Lambton real estate riot, I came across information that enabled me to identify the location and approximate date of this Ralph Snowball photo. See my page on Horsfield’s Lease for more details.