In Gulliver's Travels, our protagonist visits a university where one professor has had a large wooden frame constructed. Wooden rollers are inserted into it, forming a grid. Cubes of various sizes, with letters, syllables and whole words on them, are then poured onto the rollers. The students turn the rollers a number of times and jot down any sentences formed.
With a big and fast enough frame, and enough cubes, any text would be produced sooner or later.
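For fun, the professor's engine fits in a few lines of Python (word list and frame size invented, of course):

```python
import random

# Swift's Lagado engine in miniature: words on rollers, shuffled each turn.
words = ["time", "flies", "like", "an", "arrow", "fruit", "cube", "roller"]

def turn_the_rollers(frame_width=5):
    return " ".join(random.choice(words) for _ in range(frame_width))

for _ in range(3):
    print(turn_the_rollers())  # jot down anything that reads like a sentence
```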
Which is all ChatGPT is: the same nonsense already exposed centuries ago. It is no more intelligent than the Mechanical Turk.
Also, humans manage to move beyond 'yes/no' considerations between the ages of 2 and 4.
AI generated poetry contains as much food for the soul as a bowl of artificial fruit contains for the body.
AI is way beyond yes or no questions.
I asked it what it thought of that idea:
'How would you respond to someone who says that AI like you is "just answering yes or no questions"? And please don't put "it's important to remember" in your answer...'
ChatGPT
Certainly, I appreciate your desire for a direct response. AI systems, including GPT-4, do much more than simply answer yes or no questions.
AI's capabilities extend beyond binary responses; it generates nuanced and complex responses by analyzing a broad context. Its ability to understand and generate human-like text allows it to engage in meaningful conversations, answer detailed questions, write essays, summarize articles, generate creative stories, and even compose poetry.
Furthermore, AI can provide insights based on a large volume of data, identify patterns, and offer predictions in various fields such as weather forecasting, financial market trends, healthcare diagnostics, and many more.
However, it's crucial to understand that an AI's responses are generated based on patterns it learned during its training phase and it does not possess consciousness or subjective experiences. It's a sophisticated tool designed to assist and enhance human capabilities.
User
You avoided "it's important to remember" by using "it's crucial to understand", didn't you?
ChatGPT
I see your point, and you're correct. Both phrases serve a similar function in expressing the importance of a certain piece of information. In the interest of variety and directness, I'll attempt to refrain from using similar phrases.
To the point, AI systems like me do much more than simply answer yes or no questions. These systems can generate nuanced responses, answer complex queries, and even compose creative content. AI can also offer insights based on vast data, predict trends in various fields, and assist in complex tasks. However, AI responses are generated from patterns learned during training and do not come from consciousness or personal experience. They're advanced tools designed to enhance human capabilities.
...
Does that sound like 'yes or no' to you, really?
Huhum, funny answers, esp. the follow-up.
Perhaps I should've been more precise about what I meant by that statement: ultimately, even things as developed as, say, ChatGPT are little more than large language models (LLMs) based on probabilistic 'guesswork' about which word(s) follow next. An LLM repeatedly asks, in effect, 'what is the most likely word (answer) to follow…?', which ultimately rests on the admittedly impressive ability of the algorithm behind ChatGPT to run tens, if not hundreds, of thousands of words through an essentially 'yes/no' questionnaire.
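To make that concrete, here's a toy sketch (the probabilities are made up; this is nothing like ChatGPT's actual code) of what 'pick the most likely next word' looks like at its simplest:

```python
import random

# Hypothetical model output: probability of each candidate word
# following the prompt "Time flies like an ..." (numbers invented).
next_word_probs = {
    "arrow": 0.72,
    "eagle": 0.11,
    "express": 0.09,
    "onion": 0.08,
}

def pick_next_word(probs):
    """Sample one word, weighted by its probability."""
    words = list(probs)
    weights = [probs[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

print(pick_next_word(next_word_probs))  # most runs: "arrow"
```

A real model does this over a vocabulary of tens of thousands of tokens, once per word of output, but the loop is the same shape.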
The one salient aspect of such LLMs, I think, is this: as 'tools designed to enhance human capabilities', they will quite likely take over the writing of tedious things such as letters, adverts, media pieces, etc. That offers a host of opportunities, such as having more time to do other things; but then again, that would raise the issue of (equal) access to such services, right?
Mixed bag of things, I'd say.
I wear two hats: I'm a marketing writer ('copywriter') and a hypnotherapist. Both deal with the human mind and how it works.
ChatGPT is actually pretty shit at marketing, because by the time you have given it all the information it needs, you may as well have written the thing yourself. Where it can be useful is for editing what you've already done. You can give it some copy and tell it "Make this more about the features instead of the benefits" or "Make it less formal", etc.
Recently 'Writesonic' claimed to have produced what I want, which is a writing system that you can train with your data. Cool... but when I tried it, nah. It just tries to rewrite whatever material you give it, rather than using the information to create new material. It's a subtle distinction, but bottom line: it's not ready to replace me any time soon.
Ironically enough, I feel my therapy practice is more threatened by AI, as it's so good at spotting patterns and following rules, and people may be more open to expressing themselves to an AI than to a human. One big issue is getting people to open up to another person, but why be shy with an AI?
It does ultimately boil down to yes/no behind the scenes when the weights are evaluated in a model: basically, should one neural-network path be taken vs. another. And, by the way, all of that is set up by humans in the first place.
For example: understanding "time" and "flies". You have the definitions for each word, but combined? The model needs context to understand; it's beyond simple grammatical syntax. Yet that context is not only provided by human labor initially supplying all the proper data labels, it also again boils down to binary decision-making at a very granular scale (that's literally what the logic gates are doing; you can convert this entire string to a binary representation too). "Time flies" might still describe a type of flying insect. "Time flies like" could still describe what these 'time flies' insects like. "Time flies like an arrow" makes it more probable you are not referring to insects, whereas "time flies like fruit" makes it more probable you are.
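To illustrate the 'yes/no at the bottom' point, here is a minimal sketch of a single artificial neuron: a weighted sum followed by a threshold, i.e. a binary decision. The weights and the context features are invented for the example; real models stack millions of these and use smoother activations:

```python
# One artificial neuron: weighted sum of inputs, then a yes/no threshold.
def neuron(inputs, weights, bias):
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if activation > 0 else 0  # fire / don't fire

# Hypothetical context features: [saw the word "arrow", saw the word "fruit"]
print(neuron([1, 0], weights=[0.9, -0.9], bias=-0.5))  # 1 -> take the 'idiom' path
print(neuron([0, 1], weights=[0.9, -0.9], bias=-0.5))  # 0 -> take the 'insect' path
```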
And to provide meaning, since ChatGPT can't know on its own -- hell, the weighting (billions of parameters) of a model is determined by people -- it relies on people first labeling all the context, connotations, etc. See: https://time.com/6247678/openai-chatgpt-kenya-workers/
And fine-tuning is also done via this continuous human feedback. It's like a route on a map: at the level of streets, you can say every decision to turn one way or another is a binary decision.
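As a very rough sketch of that feedback idea (this is not OpenAI's actual fine-tuning pipeline, just the core loop in miniature): a human picks the better of two candidate answers, and the preferred one's score is nudged up while the other's is nudged down:

```python
# Toy preference feedback: nudge scores toward what humans prefer.
scores = {"answer_a": 0.5, "answer_b": 0.5}

def apply_preference(preferred, rejected, lr=0.1):
    scores[preferred] += lr * (1 - scores[preferred])  # pull toward 1
    scores[rejected] -= lr * scores[rejected]          # pull toward 0

apply_preference("answer_a", "answer_b")  # a human preferred answer A
print(scores)  # answer_a drifts up toward 1, answer_b down toward 0
```

Repeat that over enormous numbers of human judgments and you steer which turns the model tends to take.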
Also to note: don't completely rely on ChatGPT or any LLM to find or discover something correct that isn't overwhelmingly represented on the scraped web, since all LLMs suffer from hallucinations.
https://spectrum.ieee.org/gpt-4-calm-down
"Brooks: It gives a really good technical understanding. What the large language models are good at is saying what an answer should sound like, which is different from what an answer should be."
"I’ve been using large language models for the last few weeks to help me with the really arcane coding that I do, and they’re much better than a search engine. And no doubt, that’s because it’s 4,000 parameters or tokens. Or 60,000 tokens. So it’s a lot better than just a 10-word Google search. More context. So when I’m doing something very arcane, it gives me stuff.
But what I keep having to do, and I keep making this mistake—it answers with such confidence any question I ask. It gives an answer with complete confidence, and I sort of believe it. And half the time, it’s completely wrong. And I spend 2 or 3 hours using that hint, and then I say, “That didn’t work,” and it just does this other thing. Now, that’s not the same as intelligence. It’s not the same as interacting. It’s looking it up."
(E.g., if you're asking it to check or generate HTML code, it will likely get it all correct. But assembly programming for, say, a C64 or Amiga? Or programming system I/O interfaces at the register level? You'll spend all your time literally rewriting everything.)
https://www.howtogeek.com/890540/dont-trust-chatgpt-to-do-math/
https://www.forbes.com/sites/mattnovak/2023/05/27/lawyer-uses-chatgpt-in-federal-court-and-it-goes-horribly-wrong/?sh=276226823494
https://www.forbes.com/sites/mollybohannon/2023/06/08/lawyer-used-chatgpt-in-court-and-cited-fake-cases-a-judge-is-considering-sanctions/?sh=69d1f2d67c7f
"ChatGPT was launched in late 2022 and instantly became a hit. The chatbot is part of a family of new technologies called generative AI that can hold conversations with users for hours on end. The conversations feel so organic and normal that sometimes ChatGPT will seem to have a mind of its own. But the technology is notoriously inaccurate and will often just invent facts and sources for facts that are completely fake. Google’s competitor product Bard has similar problems."
It's like the map-route example: at the granular street level, it makes one wrong turn but doesn't realize it, so every decision after that takes you further away from the real route.
I generally agree, but everything you said applies to humans too. We also learn via language we pick up from other humans, we also make mistakes, and we also make shit up...
Regarding your example:
"What does "time flies" mean?
ChatGPT
The phrase "time flies" is an idiomatic expression that means time seems to pass very quickly. It's often used when someone is so engrossed in an activity or enjoying themselves that they lose track of time. For example, you might say "Time flies when I'm reading a good book" to express that you're so absorbed in the book that you don't notice how much time has passed."
I once had a similar conversation where someone declared it too stupid to operate a lawnmower, saying it would drive the mower straight over a rock. So I asked it what would happen if it drove a lawnmower over a rock. It gave a detailed answer about possible damage to the blade, the risk of the rock becoming a projectile, etc.
The 'making crap up' aspect is a real problem, but these things are way, way smarter than most people realize.
We think that because we can understand the basics ("autocomplete on steroids") it can't be smart, but it does seem we have discovered how intelligence works. It's autocomplete, on steroids. We can say "it's just language", but language is what differentiates us from other animals. It's how we communicate, and learn. It's how we THINK.
I look at it this way: evolution took billions of years to create a pigeon, which can feed and breed itself, self-repair, etc. In contrast, a wind-up rubber-band toy plane is stupidly simple, but they can both fly.
If you're not trying to evolve a human brain and just want the thinking part, it's not as difficult or complex as we thought.
It's just autocomplete.
On steroids.
...and here in the UK they want to abolish new building projects once they have herded people into the cities.
Yeah, like that's gonna happen. Seriously.
They can dream, but a dream doesn't automagically happen, does it now? Not when people throw those spanners around, anyway.
Thanks Epimetheus.
I think so, too. Just spoke with a bunch of German tourists in my neck of the woods: they had no idea that zoning rules often prevent (disallow) the construction of new single-family housing units. Large apartment blocks (condos) or the like: sure thing.
To be fair to our Western-style high-rises: they look a bit less drab than those built in the former Soviet bloc, but the end result is likely similar.
Sigh.