OpenAI reportedly spends about $700,000 a day just to keep ChatGPT running, and that figure doesn't include other AI products like GPT-4 and DALL-E 2. Right now it's staying afloat largely thanks to Microsoft's $10 billion investment.
It’s shit on because it is not actually AI as the general public tends to use the term. This isn’t Data from Star Trek, or anything even approaching Asimov’s three laws.
The immediate defense against this statement is mental gymnastics and hand-waving: "well, we don't have a formal definition of intelligence, so you can't say they aren't intelligent." That's rhetorical nonsense, because the inverse holds as well: you can't label something intelligent without a formal definition either. Or they point at various arbitrary tests ChatGPT has passed and claim that clearly something without intelligence could never have passed the bar exam, in complete and utter ignorance of how well suited LLMs are to those types of problem domains.
Also, I find that anyone bringing up the limitations and dangers is immediately lumped into this "AI haters" group, as if belief in AI were some sort of black-and-white religion, or required some sort of ideological purity. As if having honest conversations about these systems' problems intrinsically meant you want them to fail. That's BS.
Machine learning and large language models are amazing, they're game-changing, but they aren't magical panaceas, and they aren't even an approximation of intelligence, despite appearances. LLMs are especially dangerous because of how intelligent they appear to a layperson, which is why we see everyone rushing to apply them to entirely unfitting use cases in a race to be the first to project the appearance of success and suck down those juicy VC bux.
Anyone trying to say different either isn't familiar with the field or is trying to sell you something. It's the classic divide between tech developers/workers and tech news outlets/enthusiasts.
The frustrating part is that people caught up in the hype train of AI will say the same thing: “You just don’t understand!” But then they’ll start citing the unproven potential future that is being bandied around by people who want to keep you reading their publication or who want to sell you something, not any technical details of how these (amazing) tools function.
At least in my opinion that’s where the negativity comes from.
Personally, having been in tech for almost three decades, I am massively skeptical when the usual suspects put out yet another incredible claim backed by overly positive, one-sided evaluations of something they own. Worse, it's in an area I actually know quite a lot about, where I can see through a lot of the bullshit. And then it gets picked up by mindless fanboys who don't have the expertise to understand jack-shit of what they're parroting, and by greedy fuckers using sales-speak because they stand to personally gain if enough useful idiots jump on the hype train.
You don’t even need to be old enough to remember that “revolution in human transportation” was how the Segway was announced: all it takes is to look at the claims about Bitcoin and the blockchain and remember the fraud-ridden shitshow the whole area became.
As I see it, anybody who is not skeptical towards "yet another 'world-changing' claim from the usual types" is either dumb as a doorknob, young and naive, or a greedy fucker invested in it, trying to make money off any "suckers" who jump on that hype train.
It’s not even negativity (except towards the greedy fuckers trying to take advantage of others and who can Burn In Hell), it’s informed (both historically and by domain knowledge) skepticism.
I've been working on AI projects on and off for about 30 years now. Honestly, for most of that time I didn't think neural nets were the way to go, so when LLMs and transformers got popular, I was super skeptical. After learning the architecture and using them myself, I'm convinced they're part of, but not the whole, solution to AGI. As they are now, yes, they are world-changing. They're capable of improving productivity in a wide range of industries, and that seems pretty world-changing to me. There are already products out there proving this (GitHub Copilot, Jasper, even ChatGPT). You're welcome to downplay it and be skeptical, but I'd highly recommend giving it an honest try. If you're right, you'll have more to back up your opinion; if you're wrong, you'll have learned to use the tech and won't be left behind.
In my experience they're a great tool for wrapping and unwrapping knowledge in and out of language envelopes with different characteristics, and I wouldn't at all be surprised if they replace certain jobs that deal mostly with communicating with people. For example, I suspect the kind of reporting news agencies do doesn't really need human writers to compose articles: just data in bullet-point format and an LLM to turn it into a "story".
What LLMs are not is AGI, and using them as knowledge engines, or even just knowledge sources, is a recipe for frustration: you end up either going down the wrong route by believing the AI, or spending more time validating its output than it would take to find the knowledge yourself from reliable sources.
Whilst I've gone back and forth on the whole "might they be the starting point from which AGI comes" question (which really comes down to "what is intelligence?"), what I am certain of is that nobody truly knowledgeable about them can honestly and assuredly state that "they are the seed from which AGI will come". And that kind of crap (or worse, people flatly stating that LLMs already are intelligent) makes up almost all of the AI hype we get at the moment.
At the moment, judging by the developments we're seeing, I'm more inclined to think that at least the reasoning part of intelligence won't be solved by this path, though the intuition part might be, since that is mainly about pattern recognition.
Yeah, I generally agree there. And you’re right. Nobody knows if they’ll really be the starting point for AGI because nobody knows how to make AGI.
In terms of usefulness, I do use it for knowledge retrieval and have a very good success rate with that. Yes, I have to double-check certain things to make sure it didn't make them up, but on the whole, GPT-4 is right a large percentage of the time. Just yesterday I'd been Googling to find a specific law or regulation on whether airlines were required to refund passengers. I spent half an hour with no luck. ChatGPT with GPT-4 pointed me to the exact document, down to the right subsection, on the first try. If you try that with GPT-3.5 or really anything else out there, there's a much higher rate of failure, and I suspect a lot of people who use the "it gets stuff wrong" argument probably haven't spent much time with GPT-4. Not saying it's perfect: it still confidently says incorrect things and will even double down if you press it, but GPT-4 is really impressive.
Edit: Also agree, anyone saying LLMs are AGI or sentient or whatever doesn’t understand how they work.
I've been thinking about the possibility of LLMs revolutionizing search (basically search engines), which are not authoritative sources of information (far from it), but which get you to those sources much faster.
LLMs hold most of the same information search engines do, plus the extra layer of letting you query it in natural language. And because of their massive training sets, even if your question is slightly off, the nearest cluster of textual tokens in token space (an oversimplified description of how LLMs work, I know) to that incorrect question may well be where the correct questions and answers sit, so you still get the correct answer. Funnily enough, the more naturally you pose the question, the better.
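The "nearest cluster" intuition can be illustrated with a deliberately crude retrieval sketch. Everything here is made up for illustration (the tiny knowledge base, the answers), and it uses bag-of-words cosine similarity instead of the learned dense embeddings real LLMs and search systems use, but it shows the same effect: a slightly "wrong" question still lands nearest the right entry.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Crude "embedding": just lowercase word counts (bag of words).
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# A tiny, entirely hypothetical "knowledge base" of question/answer pairs.
kb = {
    "how do i cancel a flight refund request": "See the airline's refund policy page.",
    "what regulation requires airlines to refund passengers": "DOT rules cover involuntary refunds.",
    "how do i bake sourdough bread": "Start with an active starter.",
}

def nearest_answer(question: str) -> str:
    # Return the answer whose stored question is closest to the query.
    q = embed(question)
    best = max(kb, key=lambda k: cosine(q, embed(k)))
    return kb[best]

# Differently (even imprecisely) worded, yet it still lands on the right entry.
print(nearest_answer("which regulation makes airlines refund a passenger"))
```

The point of the sketch is only the geometry: a question phrased "incorrectly" still shares enough with the right cluster to beat the wrong ones.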
However, as a direct provider of answers, certainly in a professional setting, it quickly becomes something that produces more work than it saves, because you always have to check the answers: there are no cues about how certain or uncertain a given result is.
I suspect many if not most of us have also had human colleagues who were just like that: delivering even the wildest guess as an assured "this is the way things are". And I suspect most of those who had such colleagues quickly learned not to go to them for answers, and to always double-check the answer when they did.
This is why I doubt it will do things like revolutionize programming, or in fact replace humans in producing output in hard-knowledge domains that operate mainly on logic. It might very well replace humans whose work is to wrap things up in the appropriate language for a target audience, though (I suspect it's going to revolutionize the production of highly segmented, even individually targeted, propaganda on social networks).