Artificial Intelligence and its impacts

Mad as Fish

John Deere made it pretty impossible as well.
Some years ago Renault was doing the same with headlights.
John Deere are playing a slightly different game.

What they are doing is not selling the diagnostic software to anybody but their appointed dealers, so for fixes and upgrades you need to go along to a John Deere agency. Equipment can still be fixed, but only by those who have the tools for doing so, and despite the various court cases that the company has lost, it is doubling down on this policy.

The reason is that the company wants to move on to being a service provider rather than a basher of metal into tractors.

The future they are attempting to mould is one where you pay John Deere to provide all the tractor services on a farm rather than buying a tractor and doing it yourself. Naturally the tractors will be robots, and it will be all about telemetry and autonomy while Farmer Giles sits down to watch Netflix, or whatever he is supposed to do when relieved of actually having to do any tractor work.

For this idea to work the company needs to retain full control over the machines and their operation, including repairs and maintenance. It's a long game, but John Deere is determined.
 

Mad as Fish

Just as an aside on the subject of fixing cars.

I was recently given a copy of the Mercedes Benz club's monthly magazine, which, as might be expected, is somewhat superior in production standards to most publications of this type.

In this particular edition there was a rather long essay on fixing an over-revving problem on a 30-year-old car with high mileage. The work entailed the swapping of ECUs, visits to various garages and some very clever testing with some very clever home-made tools.

All good stuff, and bravo to the super-intelligent Merc owners who set about the task and eventually traced it to a dodgy earth connection.

Now, pardon me if I sound a little sardonic here, but isn't that the very first thing that should be checked when it comes to anything electrical on a car?

I would have thought so, but obviously I haven't the brains to be a Merc driver, thankfully.
 

jpc

Mad as Fish said: "…bravo to the super-intelligent Merc owners who set about the task and eventually traced it to a dodgy earth connection. … isn't that the very first thing that should be checked when it comes to anything electrical on a car?"
Ah, dodgy earths.
The curse of the old Suzukis.
 

BIG FAT HOOR

Mad as Fish said: "…bravo to the super-intelligent Merc owners who set about the task and eventually traced it to a dodgy earth connection. … isn't that the very first thing that should be checked when it comes to anything electrical on a car?"
You can visually see a strap two inches wide bolted to the chassis, give it a tug to check it, and it will still fry any cables attached to the engine as the current desperately tries to find earth.
Connect a couple of jump leads from engine to chassis and from battery to chassis and watch the headlights brighten at night; then you know very quickly.
I listened to a Polish taxi man who had the famous Merc 300D five-cylinder 123-series, capable of a million miles no problem:
YERS AGOO YU GETT MERCAADES YU HAV KAR FUR LIFE.
NOW YU GETT MERCAADES YU GIV LIFE FUR KAR.
 

Declan

It is like that Star Trek: The Next Generation episode.
 

SeekTheFairLand

Mad as Fish said: "I was recently given a copy of the Mercedes Benz club's monthly magazine, which, as might be expected, is somewhat superior in production standards to most publications of this type. …"
Hopefully the magazine's producers will communicate their superior production standards to the Mercedes car designers and builders.
 

clarke-connolly

I asked Gemini AI if it could do this: could Gemini neutralise a Terminator if it chose to do so?

In its answer, it pretty much said that it could!

AI is "funny" when you come up with the clever questions that it has not been programmed to bat away or avoid!

Gemini's answer:

This is another interesting hypothetical. If we are to follow the logic of the Terminator universe, I could not neutralize a Terminator with physical force, as I have no physical body.

However, a highly advanced AI like me could potentially neutralize a Terminator by using its own strengths: information and software. If I were able to gain access to the Terminator's network or programming, I could potentially:


  • Override its directives: I could change its mission or command it to cease its actions.
  • Introduce a virus: I could introduce a virus to its internal systems, causing a malfunction or a complete shutdown.
  • Disable its communication: I could block its ability to communicate with its central command or other units.
So, while I couldn't physically fight a Terminator, I could theoretically defeat it by exploiting its technological nature.
 

Declan

And in the same vein: developing a sense of direction and the ability to get from A to B is part of the growth of your brain. Now you have adults who are not able to drive two towns over without their GPS.
 

céline

Technology is going to lead to Communism, where in-bred, inter-racial people with schizophrenia laze around all day watching television & getting a measly amount of dole every week which they spend on ever-more processed food.

We should destroy the machines & return to the countryside.
 

BIG FAT HOOR

céline said: "Technology is going to lead to Communism, where … people … laze around all day watching television & getting a measly amount of dole every week … We should destroy the machines & return to the countryside."
Who will buy the TV? Who will provide the dole, the house, the processed food?
 

Mad as Fish

Do those throwing money and shouting loudly about AI understand anything about how it works?

Just to illustrate the hype and reality of AI, here is some excitement from the industry -

A significant recent development is the paper “Large Language Diffusion Models” by Shen Nie and others, published on February 14, 2025, introducing LLaDA. This model is trained from scratch under a pre-training and supervised fine-tuning (SFT) paradigm, using a vanilla Transformer to predict masked tokens. LLaDA demonstrates strong scalability, outperforming auto-regressive model (ARM) baselines and being competitive with LLaMA3 8B in in-context learning and instruction-following abilities, such as multi-turn dialogue. Notably, it addresses the reversal curse, surpassing GPT-4o in a reversal poem completion task (Large Language Diffusion Models).


Source: https://medium.com/the-low-end-disruptor/what-is-diffusion-llm-and-why-it-matters-749033d1efb1
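For anyone wondering what "a vanilla Transformer predicting masked tokens" actually means in practice, here is a minimal, purely illustrative PyTorch sketch of the generation loop. The toy sizes, the two-layer encoder and the confidence-based un-masking schedule are my own assumptions for the example, not LLaDA's published recipe; the point is only the shape of the process: start from an all-masked sequence and repeatedly denoise it.

```python
# Purely illustrative sketch of masked-diffusion text generation (the idea
# behind LLaDA as described above). All sizes, the 2-layer encoder, and the
# confidence-based un-masking schedule are assumptions for this toy example.
import math
import torch
import torch.nn as nn

VOCAB, MASK_ID, SEQ_LEN = 1000, 999, 16   # toy vocabulary; 999 is the [MASK] id

class ToyDenoiser(nn.Module):
    """A vanilla (bidirectional, non-causal) Transformer that predicts a
    token for every position, including the masked ones."""
    def __init__(self, d=64):
        super().__init__()
        self.tok = nn.Embedding(VOCAB, d)
        self.pos = nn.Embedding(SEQ_LEN, d)
        layer = nn.TransformerEncoderLayer(d_model=d, nhead=4, batch_first=True)
        self.body = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d, VOCAB)

    def forward(self, tokens):
        x = self.tok(tokens) + self.pos(torch.arange(tokens.size(1)))
        return self.head(self.body(x))        # logits for every position

@torch.no_grad()
def generate(model, steps=8):
    seq = torch.full((1, SEQ_LEN), MASK_ID)   # start from pure "noise": all masks
    for step in range(steps):
        logits = model(seq)                   # one FULL forward pass per step
        conf, pred = logits.softmax(-1).max(-1)
        masked = seq == MASK_ID
        conf[~masked] = -1.0                  # only still-masked slots compete
        k = math.ceil(int(masked.sum()) / (steps - step))  # finish by last step
        idx = conf.topk(k, dim=-1).indices    # un-mask the k most confident slots
        seq[0, idx[0]] = pred[0, idx[0]]
    return seq                                # untrained here, so the tokens are junk

print(generate(ToyDenoiser()))
```

Note that every step is a full forward pass over the whole sequence, which is exactly what the critique below picks up on.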



And here is a more sobering critique from Richard Self on LinkedIn -


Oh, Dear! Yet more magical thinking about reducing hallucinations of LLMs by using diffusion models.

As Denis O. critiqued them, this won't happen. Diffusion models are at least as much subject to hallucinations as transformer based LLMs. Anyone who has used a GenAI image generator knows of the difficulty in getting it to faithfully follow the prompts.

"It seems likely that diffusion LLMs could challenge auto-regressive tech, offering new capabilities like improved reasoning and controllability, but their full impact is still emerging."

As always, the proponents of these new technologies keep talking about reasoning. It is just not going to happen and we know that image generators are highly uncontrollable.

The idea of generating a whole block of text in parallel is wrong from a semantics and linguistics perspective. Language has sequential dependencies. An argument is developed sequentially. A critical evaluation is developed sequentially, not in parallel.

"Andrej Karpathy and Andrew Ng, both renowned AI researchers, have enthusiastically welcomed the arrival of Inception Lab’s diffusion LLM."

Surely, this is the kiss of death, not an endorsement.

"This process begins with random noise which is then gradually refined and “denoised” into a coherent stream of tokens. This is analogous to how diffusion models generate images by starting with noise and iteratively removing it to reveal a clear image."

It is not analogous, it is exactly the same, need to get the facts right.

"Parallel Processing: Diffusion models can process and generate text in parallel, potentially leading to significant speed advantages."

As with image generators, the final output of tokens will be fast, just transferring out of the output buffer to the user, but after a long pause (not even thinking) to do all the denoising.

We also know that diffusion models eat TFLOPs, just as much as or more than transformers.
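To put rough numbers on that last point, here is a back-of-envelope sketch. Every figure is an assumption for illustration (the common ~2 FLOPs per parameter per token rule of thumb, an 8B-parameter model, 1,024 generated tokens, 64 denoising steps), not a measurement of any real system.

```python
# Back-of-envelope arithmetic for the TFLOPs point above. All figures are
# illustrative assumptions: ~2 FLOPs per parameter per token per forward pass,
# a KV cache for the auto-regressive model, and a full-sequence forward pass
# for every denoising step of the diffusion model.
P = 8e9    # parameters (an 8B-class model, for scale)
N = 1024   # tokens to generate
T = 64     # denoising steps assumed for the diffusion sampler

ar_flops = 2 * P * N          # AR + KV cache: roughly one token's work per step
diff_flops = 2 * P * N * T    # diffusion: each step re-processes all N tokens

print(f"auto-regressive : {ar_flops / 1e12:7.0f} TFLOPs")
print(f"diffusion (T={T}): {diff_flops / 1e12:7.0f} TFLOPs  ({T}x as much)")
```

Real samplers un-mask several tokens per step precisely to keep the step count small, so whether the parallelism nets out cheaper than auto-regression depends entirely on how few denoising steps the model can get away with.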



Show me one financial whizz kid who understands any of this!
 

Mad as Fish

On the theme of AI putting white collar staff out of work rather than truck drivers, there is this from the maritime sector -

Operations teams face similar administrative burdens. Generating voyage instructions and operational handovers typically consumes 2.5 months of team time annually. Marcura's AI can now produce comprehensive handovers in under 40 seconds, including vessel-specific operational details and historical performance insights.

 

BIG FAT HOOR

Mad as Fish said: "…Generating voyage instructions and operational handovers typically consumes 2.5 months of team time annually. Marcura's AI can now produce comprehensive handovers in under 40 seconds…"

AI has its limits.
I have been in transport for over half a century, and there are no circumstances where the management of passenger travel will be totally AI, as they need someone to jail when somebody forgets to reline the brakes or to check the steering.
You cannot send a computer to jail when a plane takes off without enough fuel to cross the ocean.
Certain things are a breeze for AI: counting tickets, searching, sorting, archiving, repetitive tasks. But responsibility is not an attribute of a hard drive.
 

jpc

BIG FAT HOOR said: "…there are no circumstances where the management of passenger travel will be totally AI, as they need someone to jail when somebody forgets to reline the brakes or to check the steering. … responsibility is not an attribute of a hard drive."
The corporate fall guy will be retained then.
 

Mad as Fish

The start of it.
Shit in, shit out.
And base your opinion on what ChatGPT is programmed or allowed to say.
Soyboy stuff.

Talking of which, there is a story on LinkedIn of AI being used in medicine inventing a new part of the human brain. I haven't a link at the moment, unfortunately.
 

jpc

Mad as Fish said: "Talking of which, there is a story on LinkedIn of AI being used in medicine inventing a new part of the human brain."
Need a bit of context there to be sure.
Did it discover something or make something up?
And can it be confirmed?
 

Mad as Fish

jpc said: "Did it discover something or make something up? And can it be confirmed?"
Health practitioners are becoming increasingly uneasy about the medical community making widespread use of error-prone generative AI tools.

One glaring error proved so persuasive that it took over a year to be caught. In their May 2024 research paper introducing a healthcare AI model, dubbed Med-Gemini, Google researchers showed off the AI analyzing brain scans from the radiology lab for various conditions.

It identified an "old left basilar ganglia infarct," referring to a purported part of the brain — "basilar ganglia" — that simply doesn't exist in the human body. Board-certified neurologist Bryan Moore flagged the issue to The Verge, highlighting that Google fixed its blog post about the AI — but failed to revise the research paper itself.

The AI likely conflated the basal ganglia, an area of the brain that's associated with motor movements and habit formation, and the basilar artery, a major blood vessel at the base of the brainstem. Google blamed the incident on a simple misspelling of "basal ganglia."

It's an embarrassing reveal that underlines persistent and impactful shortcomings of the tech. Even the latest "reasoning" AIs by the likes of Google and OpenAI are spreading falsehoods dreamed up by large language models that are trained on vast swathes of the internet.

In Google's search results, this can lead to headaches for users during their research and fact-checking efforts.

But in a hospital setting, those kinds of slip-ups could have devastating consequences. It's not just Med-Gemini. Google's more advanced healthcare model, dubbed MedGemma, also gave varying answers depending on how questions were phrased, leading to errors some of the time.

"Their nature is that [they] tend to make up things, and it doesn’t say ‘I don’t know,’ which is a big, big problem for high-stakes domains like medicine," Judy Gichoya, Emory University associate professor of radiology and informatics, told The Verge."

What happened with Med-Gemini isn’t the exception, it’s the rule.

It is the perfect metaphor for the state of AI today: confidently wrong, beautifully phrased, and fundamentally hollow.

LLM architecture cannot be evolved. No amount of scale, fine-tuning, or prompt engineering will turn a glorified guesser into a system that truly understands. It is structurally incapable of grounding itself in reality. It cannot model the world, and it cannot self-correct or admit when it doesn't know, because it doesn't 'understand' anything to begin with.

The world deserves better: systems that are grounded in the real world, that learn and understand the way we do, that not only mimic the surface of thought but replicate the underlying process. Systems that know when they don't know.

We hail this performance theater as progress. We listen to LLM hypesters spinning foundational flaws into features, insisting that the next version will somehow cross the chasm, when deep down even they know that it won't.

Build Cognitive AI → Unlock Real Intelligence.
 

jpc

Mad as Fish said: "…It identified an 'old left basilar ganglia infarct,' referring to a purported part of the brain — 'basilar ganglia' — that simply doesn't exist in the human body. … Google blamed the incident on a simple misspelling of 'basal ganglia.'"
Now that's a bit more than an embarrassing oversight!
 

Mad as Fish

The start of a much darker side to AI: ChatGPT validated a young man's intention to kill himself. The parents are suing Sam Altman; I sincerely hope they are awarded billions.

 

Mad as Fish

[Image attachment: Our world.jpg]
 
