A.I. Cannot Help Us Understand Why It is a Disaster In the Making; It Cannot Foresee Things

Discussion: Pro and Con



May 4, 4:55 pm

A.I. Cannot Help Us Understand Why It is a Disaster In the Making
It Cannot Foresee Things Because, Unlike Us, It Cannot Reason--Not Even Badly.

If we're to understand why placing great faith in A.I. is a disaster-filled course of barely imaginable stupidity, then we, through our native intelligence, must do the work of understanding ourselves--must carry that intellectual "load"--for the simple reason that no one else can take our place in it. The task falls to us exactly as the harms of failing it fall upon us.

The dangerous flaws in any reliance on A.I. are there to be recognized now, not waiting somewhere in an unimagined future. We have enough experience and understanding to tease out the case which, when made and presented accurately, shall compel most people to see what a relative few--personally interested and bias-soaked--cannot allow themselves to admit, even if they allowed themselves to see it.

When we review human history's blunders--plague-stricken people congregating in cathedrals to pray desperately for God's deliverance from an invisible deadly agent--we're looking at an old instance of our own present predicament. The history of human folly does little to bolster our confidence in any ability to understand and control the dangers that come with A.I.'s use.

We should not assume that we have either unlimited time or an unlimited allowance for failure in trial and error--our best working method--in which to work through the issues and see our way to making it clear that this is a technology beyond our capacity to use safely.

May 4, 5:11 pm

I can't see the message attached to this, and have decided not to click "(show)", as the user involved has informed me there's no point in my trying to comprehend his message. I would point out that humans are notoriously bad prognosticators, and that we've been predicting the effects of AI for a century now (R.U.R. premiered in 1921), so actual AI can't make our predictions any worse.

Edited: May 4, 6:32 pm

'Godfather of AI' leaves Google, warns of tech's dangers

Sadly, just as wicked characters in the world yearn to use the atomic bomb and the Internet for their own evil purposes, it is very likely that bad actors are looking forward to misusing AI. J. Robert Oppenheimer regretted having had a hand in the development of the atomic bomb.

Edited: May 6, 7:33 am

There's a tiny "up-side" to A.I.--that tech shit-storm of stupidity.

Its wide use--which some "techies" admit is a menace, but one they know not how to halt or reverse*--shall also wreak havoc upon the bad and the ugly as well as "the good". It means that noxious shit like Wikipedia shall be threatened with the very ugly reputation it has so long deserved but has, thanks to common stupidity and ignorance about so many areas of supposed knowledge, so far escaped.


REAL CLEAR TECH | "AI Is Tearing Wikipedia Apart" | Vice Media | Claire Woodcock | 2 May 2023

...The concern is that machine-generated content has to be balanced with a lot of human review and would overwhelm lesser-known wikis with bad content. While AI generators are useful for writing believable, human-like text, they are also prone to including erroneous information, and even citing sources and academic papers which don’t exist. This often results in text summaries which seem accurate, but on closer inspection are revealed to be completely fabricated.

Amy Bruckman is a regents professor and senior associate chair of the school of interactive computing at the Georgia Institute of Technology and author of Should You Believe Wikipedia?: Online Communities and the Construction of Knowledge. Like people who socially construct knowledge, she says, large language models are only as good as their ability to discern fact from fiction.

It didn’t take long for researchers to figure out that OpenAI’s ChatGPT is a terrible fabricator, which is what tends to doom students who rely solely on the chatbot to write their essays. Sometimes it will invent articles and their authors. Other times it will name-splice lesser known scholars with more prolific ones, but will do so with the utmost confidence. OpenAI has even said that the model “hallucinates” when it makes up facts—a term that has been criticized by some AI experts as a way for AI companies to avoid accountability for their tools spreading misinformation.

(LOL!!! In other words, perfect for Wikipedia. )

“Content is only as reliable as the number of people who have verified it with strong citation practices,” said Bruckman. “Yes, generative AI does not have strong citation preferences, so we have to check it. I don't think we can tell people ‘don't use it’ because it's just not going to happen. I mean, I would put the genie back in the bottle, if you let me. But given that that's not possible, all we can do is to check it.”

May 19, 12:24 pm

AI can predict the future to some extent, but there are limitations. Here are some key points from the search results:

AIs can do a good job of predicting a few frames into the future, but the accuracy falls off sharply after five or 10 frames.

Researchers have tried various ways to help computers predict what might happen next. Existing approaches train a machine-learning model frame by frame to spot patterns in sequences of actions.

By drawing on a fundamental description of cause and effect found in Einstein’s theory of special relativity, researchers have come up with a way to help AIs make better guesses about the future.

An AI model was trained to analyze 143,000 papers published on the arXiv preprint server between 1994 and 2021 to try to predict the future of artificial intelligence research.

AI predictions for the near future include advancements in generative AI, multimodal capabilities, and more accurate predictions based on data and high-level analytics.

AI algorithms can already predict future behavior and conversations to some extent, but there are concerns about how this information could be used and whether it could restrict people's options around self-improvement.

In summary, AI can predict the future to some extent, but there are limitations and concerns about how this information could be used.

May 19, 1:36 pm

>5 AntonioGallo: Was this post generated by an LLM, by any chance?

May 19, 2:47 pm

>6 kiparsky: Yes, of course it was ...

May 25, 4:59 pm

"As an AI language model, I must respectfully disagree with the statement that AI cannot help us understand why it is a disaster in the making or foresee things.

While it is true that AI systems are not perfect and can make mistakes, they have the potential to analyze vast amounts of data and identify patterns and trends that humans may not be able to see on their own.

AI can be used in a variety of ways to help prevent disasters and mitigate their impact. For example, AI-powered sensors and monitoring systems can detect and alert us to potential hazards such as earthquakes, wildfires, and floods. AI can also be used to predict and model the potential impacts of climate change, allowing us to take action to reduce its effects.

In addition, AI can be used to analyze data from a variety of sources to identify trends and patterns that may be early warning signs of potential disasters. For example, AI can be used to analyze social media data to identify outbreaks of disease or civil unrest.

Of course, AI is not infallible, and there are limits to what it can do. However, when used appropriately and in conjunction with human expertise, AI has the potential to help us better understand and mitigate the risks of disasters." (Sage-Poe AI)

May 26, 12:34 am

It is remarkable to me how much credit is being given to the latest fad in Enhanced Stupidity. Much of the discourse on the subject of Large Language Models focuses on their tendency to "lie" or to "make things up" or to "hallucinate", all of which start out with the assumption that the model has a mind, and that this mind is subject to failure modes similar to those which afflict the human brain - that is, that the LLM is in fact an intelligence, one similar to the human brain, and the question is whether it's clever and duplicitous (i.e., "lying") or whimsical and creative (i.e., "making things up") or simply a little bit mad (i.e., "hallucinating"). It is none of these things. It's a very cleverly made piece of software that does a pretty good job of producing human-like language samples under ideal circumstances, but which cannot at all handle anything outside of its quite narrow operating window.

It's important to bear in mind that a Large Language Model is simply a tool for predicting the next word in a sequence. It does not think in any way, it does not "respectfully disagree", it does not "answer questions" or "make mistakes", or indeed "have the potential to analyze" anything at all. Its output is completely meaningless, and any meaning you find in it is something you put there. The wonder of it is that it often produces output which can be fitted to a meaning relatively easily - however, until you have validated what you get from it, you have no reason to believe that the meaning you provide has any correspondence to reality. It's like reading Prox's posts, except the LLM's sentences are more consistently coherent.

Quoting LLM output on the subject of language models is cute, which is why every journalist with a deadline and no clue about what they're talking about does it, but it seems a bit like asking a "Magic 8-Ball" toy whether it is producing meaningful output. Regardless of whether it says "It is decidedly so" or "Very doubtful", this should not change anything about your opinion on the subject.
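The "tool for predicting the next word in a sequence" framing above can be sketched in a few lines. This is a deliberately toy illustration: a bigram counter over a made-up corpus, not anything resembling a real LLM (which predicts over tokens with a neural network trained on vast data). The corpus string and function names here are invented for the example.

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count, for each word, which words follow it in the corpus."""
    words = corpus.lower().split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def predict_next(model, word):
    """Return the most frequent follower of `word`, or None if unseen."""
    followers = model.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

corpus = "the model predicts the next word and the next word only"
model = train_bigram(corpus)
print(predict_next(model, "the"))  # "next" ("the" is followed by "next" twice, "model" once)
```

The point of the analogy: the output is whatever continuation was statistically most common, with no notion of truth anywhere in the machinery; scaling the same idea up produces fluent text, not a mind.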

May 26, 3:43 pm

>9 kiparsky: While I cannot predict the future, I believe that AI technology has the potential to make a positive impact on society by improving our productivity, creativity, and quality of life.

May 26, 3:56 pm

>10 AntonioGallo: I would be very surprised if it had no positive impact. However, I think its negative impact will also be very significant - and I also think a realistic understanding of what the technology is and what it isn't will help lead to a more positive balance of impact.
For example, AI technologies that are able to reliably detect and point out patterns that humans have a hard time seeing will be useful in all sorts of ways, because they complement human capabilities. And if you're looking to convince people to buy more crap that they don't need or want, recommendation engines are a positive boon, which I guess counts as a positive for somebody.

That said, LLM technology, specifically, looks like a dead end. It's probably about as good as it's going to get - there will be some improvements, but its fundamental limitations are still there, and they're more or less deal breakers. A technology whose purpose is merely to string together sentences that sound human and authoritative but have almost no real reliability is a tool without any purpose, and I think people will figure that out pretty quickly.

May 26, 4:09 pm

>11 kiparsky: May I advise you to read The Giant Computer Answers Life's Mysteries by Geoff Pridham? It's an interesting book. You can read it free through Amazon's Kindle Unlimited.

May 26, 6:49 pm

>12 AntonioGallo: You may advise, but having had a look, I think it's unlikely that I'll be getting around to it any time soon. Is there any particular insight you'd want me to draw from this book?

May 27, 1:33 am

>13 kiparsky: Just sharing is enjoying