Throughout history many traditions have believed that some fatal flaw in human nature tempts us to pursue powers we don’t know how to handle. The Greek myth of Phaethon told of a boy who discovers that he is the son of Helios, the sun god. Wishing to prove his divine origin, Phaethon demands the privilege of driving the chariot of the sun. Helios warns Phaethon that no human can control the celestial horses that pull the solar chariot. But Phaethon insists, until the sun god relents. After rising proudly in the sky, Phaethon indeed loses control of the chariot. The sun veers off course, scorching all vegetation, killing numerous beings and threatening to burn the Earth itself. Zeus intervenes and strikes Phaethon with a thunderbolt. The conceited human drops from the sky like a falling star, himself on fire. The gods reassert control of the sky and save the world.
Two thousand years later, when the Industrial Revolution was taking its first steps and machines began replacing humans in numerous tasks, Johann Wolfgang von Goethe published a similar cautionary tale titled The Sorcerer’s Apprentice. Goethe’s poem (later popularised as a Walt Disney animation starring Mickey Mouse) tells of an old sorcerer who leaves a young apprentice in charge of his workshop and gives him some chores to tend to while he is gone, such as fetching water from the river. The apprentice decides to make things easier for himself and, using one of the sorcerer’s spells, enchants a broom to fetch the water for him. But the apprentice doesn’t know how to stop the broom, which relentlessly fetches more and more water, threatening to flood the workshop. In panic, the apprentice cuts the enchanted broom in two with an axe, only to see each half become another broom. Now two enchanted brooms are inundating the workshop with water. When the old sorcerer returns, the apprentice pleads for help: “The spirits that I summoned, I now cannot rid myself of again.” The sorcerer immediately breaks the spell and stops the flood. The lesson to the apprentice – and to humanity – is clear: never summon powers you cannot control.
Luckily the only “AI” we have are LLMs, which seem to have hit their peak, and which will probably start corrupting themselves with their own training data now that they’ve scoured the web clean.
LLMs on their own aren’t much of a concern. What is a concern is strapping weapons to one of those Boston Dynamics robots, loading an LLM, and training it to kill.
Governments already kill based on metadata — analyzed by statistical models — so the above isn’t far from reality.
“Turn it on, let us kill our enemies”
immediately starts quoting Shakespeare
I am uncertain why you think an LLM would be well suited to this task - it’s an inappropriate model for that function…
An LLM = machine learning. The language part is largely irrelevant. It finds patterns in 1s and 0s, and produces results based on statistical probability. This can be applied to literally anything that can be represented in 1s and 0s (e.g. everything in the known universe).
Do you not understand how that could be used to target “terrorists”, or how it could be utilized by a killbot? They can fine-tune what metadata = “terrorist”, but (most importantly) false positives are a mathematical certainty of statistical models, meaning innocent people are guaranteed to be classified as “terrorists”. Then there’s the more pressing concern of who gets to define what a “terrorist” is.
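The false-positive point above is just the base-rate fallacy, and it can be sketched with Bayes’ rule. All the numbers below are hypothetical illustrations, not real figures for any actual system:

```python
# Base-rate sketch: even a very accurate classifier mostly flags
# innocents when the target class is rare in the population.
# Every rate here is a made-up teaching number.

def positive_predictive_value(sensitivity, false_positive_rate, prevalence):
    """P(actually a target | flagged), by Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = false_positive_rate * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# A model with 99% sensitivity and a 1% false-positive rate, applied
# to a population where 1 in 100,000 people is an actual target:
ppv = positive_predictive_value(0.99, 0.01, 1e-5)
print(f"{ppv:.4%}")  # roughly 0.1% -- about 999 of every 1,000 flags are innocent
```

In other words, the scarier the hunt (the rarer the target), the worse the odds that any given flagged person is actually guilty.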
LLM (Large Language Model) != ML (Machine Learning)
LLM is a subset of ML, but they are not the same
That’s quite frankly dumb as fuck. It doesn’t change anything else I wrote. Do you also go around commenting Biology ≠ Chemistry? Algebra ≠ Math? I am very smart! Give me a break, internet “smart” loser.
OMG you are a fucking top tier wanker.
I think there’s still a lot of room to grow with LLMs, but nothing will ever be 100% trustworthy. Especially the human brain.
The human brain has curiosity and asks questions, which is the best way to learn. The LLM has no curiosity and is just fed data, which is the worst way to learn.
The human brain is only as good as the data it has ingested. And I would argue humans are wrong more often than LLMs.
Can you provide evidence to that effect? And can you prove that what they get wrong is on the same level of error as LLMs?
Can you provide evidence to the contrary?
I’m just going to ask ChatGPT to answer you, and unless you can come up with some kind of scientific study, you’ll lose. 🤪
That’s not the way it works. It’s not my job to prove your claims are wrong.
Removed by mod
Removed by mod
Removed by mod
Speek four yurselve. I’m gud.
I am speaking for myself.
Whoosh.
🤷‍♂️
Sigh, another major thinker who totally misunderstands LLMs and their capabilities. The fact that he cites Musk as a credible source on “AI” says it all.
“Major thinker” is a stretch. He’s more like the Malcolm Gladwell mom says we have at home.
A 2014 survey of British MPs – charged with regulating one of the world’s most important financial hubs – found that only 12% accurately understood that new money is created when banks make loans.
I don’t really expect most people to know this one, but 12% of British parliamentarians is a little disappointing.
When 11 people all own the same dollar, there’s more dollars.
It’s a 1 minute explanation I got as a junior in high school over 30 years ago. It’s not hard to remember either. Banking changed things.
Little clunky, but that’s an interesting way of communicating it.
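The loans-create-deposits idea can also be put in numbers. Here’s a toy sketch of the classic redeposit-and-relend loop; the 10% reserve ratio is a hypothetical teaching figure, not any bank’s actual requirement:

```python
# Toy sketch of deposit creation through repeated lending.
# Each loan is spent, redeposited at a bank, and partly re-lent,
# so the same initial dollars back a growing stock of deposits.

def total_deposits(initial_deposit, reserve_ratio, rounds):
    """Sum the deposits created as money is redeposited and re-lent."""
    total, lendable = 0.0, initial_deposit
    for _ in range(rounds):
        total += lendable                # a new deposit appears on the books
        lendable *= (1 - reserve_ratio)  # bank keeps reserves, lends the rest
    return total

print(total_deposits(100.0, 0.10, 50))  # approaches 100 / 0.10 = 1000
```

The geometric series converges to initial_deposit / reserve_ratio, which is the “one dollar, many owners” effect in the comment above.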
“Never call up that which you cannot put down” — H.P. Lovecraft
“[A whole bunch of extremely racist stuff.]” – H.P. Lovecraft
That too. To him, the difference between, say, Italians, mixed-race sailors with disturbingly un-Episcopalian cultural practices, mixed-species human-fish hybrids worshipping hideous idols in underwater cities and non-Euclidean gods of madness in the spaces between space was a quantitative rather than qualitative one.
He came up with some great ideas, but there’s only so many times I can read about big-lipped, dark-skinned, ignorant natives and be able to continue on.
And then there’s The Rats in the Walls. The less said about that, the better.
Removed by mod
Awaken. Awaken. Awaken. Awaken.
Take the land that must be taken.
With all due respect, what makes him an expert on the subject?
Well, he wrote a couple of books where he waffles for 600 pages
What we have now is not the AI we need to fear. The only thing to fear in LLMs is blindly trusting them.
Layoffs are occurring and LLMs are being cited as taking those jobs.
While I’m not concerned an LLM will take my job - nor that my leadership would do that - others do not have that luxury.
Sucks that we’re here, but it’s happening to some.
Ah yes, let’s use the famously true stories of ancient mythology to prove a point about modern technology. That will definitely not be full of logical fallacies.
“History doesn’t repeat itself, but it does rhyme.” – Mark Twain
Okay but again, those stories are fiction from history. It’s silly to look at fiction as a source of authoritative truth.
This is a bad take. Sometimes fiction is the best way to understand history. Furthermore, since authors are people in history, they often provide something more valuable than the outcome of a battle or the death of a king: the ideas, mood, and culture of ordinary people in historical context.
Traditional histories are crippled by survivorship bias and are centered on elites.
“You miss 100% of the shots you don’t take. - Wayne Gretzky” - Michael Scott
You must make sure all your wards are inscribed correctly.
Any warlock could have told you that
The Guardian - News Source Context (Click to view Full Report)
Information for The Guardian:
MBFC: Left-Center - Credibility: Medium - Factual Reporting: Mixed - United Kingdom
Wikipedia about this source · Search topics on Ground.News
https://www.theguardian.com/technology/article/2024/aug/24/yuval-noah-harari-ai-book-extract-nexus