I find this very offensive, wait until my chatgpt hears about this! It will have a witty comeback for you just you watch!
Let me ask chatgpt what I think about this
Counterpoint - if you must rely on AI, you have to constantly exercise your critical thinking skills to parse through all its bullshit, or AI will eventually Darwin your ass when it tells you that bleach and ammonia make a lemon cleanser to die for.
It’s going to remove all individuality and turn us into a homogeneous jelly-like society. We all think exactly the same since AI “smoothes out” the edges of extreme thinking.
Copilot told me you’re wrong and that I can’t play with you anymore.
Vs text books? What’s the difference?
The variety of available textbooks, reviewed for use by educators, vs autocrat-loving tech bros pushing black-box solutions to the masses.
Just off the top of my head.
Tech Bros aren’t really reviewing it individually.
I know
I’ve only used it to write cover letters for me. I tried to also use it to write some code but it would just cycle through the same 5 wrong solutions it could think of, telling me “I’ve fixed the problem now”
I use it to write code for me sometimes, saving me from remembering the different syntax and syntactic sugar when I hop between languages. And I use it to answer questions about things I wonder about - it always provides references. So far it’s been quite useful. And for all that people bitch and piss and cry giant crocodile tears while gnashing their teeth - I quite enjoy Apple AI. Its summaries have been amazing and even scarily accurate. No, it doesn’t mean Siri’s good now, but the rest of it’s pretty amazing.
so no real chinese LLMs…who would have thought…not the chinese apparently…but yet they think their “culture” of oppression and stone-like thinking will get them anywhere. the honey badger Xi calls himself an anti-intellectual. this is how i perceive most students from china i get to know. i pity the chinese kids for the regime they live in.
Quickly, ask AI how to improve or practice critical thinking skills!
Chat GPT et al; “To improve your critical thinking skills you should rely completely on AI.”
That sounds right. Lemme ask Gemini and DeepSink just in case.
“Deepsink” lmao sounds like some sink cleaner brand
Improving your critical thinking skills is a process that involves learning new techniques, practicing them regularly, and reflecting on your thought processes. Here’s a comprehensive approach:
1. Build a Foundation in Logic and Reasoning
• Study basic logic: Familiarize yourself with formal and informal logic (e.g., learning about common fallacies, syllogisms, and deductive vs. inductive reasoning). This forms the groundwork for assessing arguments objectively.
• Learn structured methods: Books and online courses on critical thinking (such as Lewis Vaughn’s texts) provide a systematic introduction to these concepts.
2. Practice Socratic Questioning
• Ask open-ended questions: Challenge assumptions by repeatedly asking “why” and “how” to uncover underlying beliefs and evidence.
• Reflect on responses: This method helps you clarify your own reasoning and discover alternative viewpoints.
3. Engage in Reflective Practice
• Keep a journal: Write about decisions, problems, or debates you’ve had. Reflect on what went well, where you might have been biased, and what could be improved.
• Use structured reflection models: Approaches like Gibbs’ reflective cycle guide you through describing an experience, analyzing it, and planning improvements.
4. Use Structured Frameworks
• Follow multi-step processes: For example, the Asana article “How to build your critical thinking skills in 7 steps” suggests: identify the problem, gather information, analyze data, consider alternatives, draw conclusions, communicate solutions, and then reflect on the process.
• Experiment with frameworks like Six Thinking Hats: This method helps you view issues from different angles (facts, emotions, positives, negatives, creativity, and process control) by “wearing” a different metaphorical hat for each perspective.
5. Read Widely and Critically
• Expose yourself to diverse perspectives: Reading quality journalism (e.g., The Economist, FT) or academic articles forces you to analyze arguments, recognize biases, and evaluate evidence.
• Practice lateral reading: Verify information by consulting multiple sources and questioning the credibility of each.
6. Participate in Discussions and Debates
• Engage with peers: Whether through formal debates, classroom discussions, or online forums, articulating your views and defending them against criticism deepens your reasoning.
• Embrace feedback: Learn to view criticism as an opportunity to refine your thought process rather than a personal attack.
7. Apply Critical Thinking to Real-World Problems
• Experiment in everyday scenarios: Use critical thinking when making decisions—such as planning your day, solving work problems, or evaluating news stories.
• Practice with “what-if” scenarios: This helps build your ability to foresee consequences and assess risks (as noted by Harvard Business’s discussion on avoiding the urgency trap).
8. Develop a Habit of Continuous Learning
• Set aside regular “mental workout†time: Like scheduled exercise, devote time to tackling complex questions without distractions.
• Reflect on your biases and update your beliefs: Over time, becoming aware of and adjusting for your cognitive biases will improve your judgment.
By integrating these strategies into your daily routine, you can gradually sharpen your critical thinking abilities. Remember, the key is consistency and the willingness to challenge your own assumptions continually.
Happy thinking!
Sounds a bit bogus to call this causation. Much more likely that people who are more gullible in general also believe whatever AI says.
This isn’t a profound extrapolation. It’s akin to saying “Kids who cheat on the exam do worse in practical skills tests than those who read the material and did the homework.” Or “kids who watch TV lack the reading skills of kids who read books.”
Asking something else to do your mental labor for you means never developing your brain muscle to do the work on its own. By contrast, regularly exercising the brain muscle yields better long term mental fitness and intuitive skills.
This isn’t predicated on the gullibility of the practitioner. The lack of mental exercise produces gullibility.
It’s just not something particular to AI. If you use any kind of 3rd party analysis in lieu of personal interrogation, you’re going to suffer in your capacity for future inquiry.
All tools can be abused tbh. Before chatgpt was a thing, we called those programmers the StackOverflow kids, copy the first answer and hope for the best memes.
After searching for a solution for a bit and not finding jack shit, asking an LLM about some specific API thing or a simple implementation example, so you can extrapolate it into your complex code and confirm what it does by reading the docs, both enriches the mind and teaches you new techniques for the future.
Good programmers do what I described, bad programmers copy and run without reading. It’s just like SO kids.
Seriously, ask AI about anything you have expert knowledge in. It’s laughable sometimes… However you need to know, to know it’s wrong. At face value, if you have no expertise it sounds entirely plausible, however the details can be shockingly incorrect. Do not trust it implicitly about anything.
Corporations and politicians: “oh great news everyone… It worked. Time to kick off phase 2…”
- Replace all the water trump wasted in California with brawndo
- Sell mortgages for eggs, but call them patriot pods
- Welcome to Costco, I love you
- All medicine replaced with raw milk enemas
- Handjobs at Starbucks
- Ow my balls, Tuesdays this fall on CBS
- Chocolate rations have gone up from 10 to 6
- All government vehicles are cybertrucks
- trump nft cartoons on all USD, incest legal, Ivanka new first lady.
- Public executions on pay per view, lowered into deep fried turkey fryer on white house lawn, your meat is then mixed in with the other mechanically separated protein on the Tyson foods processing line (run exclusively by 3rd graders) and packaged without distinction on label.
- FDA doesn’t inspect food or drugs. Everything approved and officially change acronym to F(uck You) D(umb) A(ss)
that “ow, my balls” reference caught me off-guard
I love how you mix in the Idiocracy quotes :D
I hate how it just seems to slide in.
A savvy consumer, glad you mentioned. Felt better than hitting it on the nose.
- Handjobs at Starbucks
Well that’s just solid policy right there, cum on.
It would wake me up more than coffee that’s for sure
Bullet point 3 was my single issue vote
You mean an AI that literally generated text based on applying a mathematical function to input text doesn’t do reasoning for me? (/s)
I’m pretty certain every programmer alive knew this was coming as soon as we saw people trying to use it years ago.
It’s funny because I never get what I want out of AI. I’ve been thinking this whole time “am I just too dumb to ask the AI to do what I need?” Now I’m beginning to think “am I not dumb enough to find AI tools useful?”
You can either use AI to just vomit dubious information at you or you can use it as a tool to do stuff. The more specific the task, the better LLMs work. When I use LLMs for highly specific coding tasks that I couldn’t do otherwise (I’m not a [good] coder), it does not make me worse at critical thinking.
I actually understand programming much better because of LLMs. I have to debug their code, do research so I know how to prompt it best to get what I want, do research into programming and software design principles, etc.
Like any tool, it’s only as good as the person wielding it.
I use a bespoke model to spin up pop quizzes, and I use NovelAI for fun.
Legit, being able to say “I want these questions. But… not these…” and get them back in a moment’s notice really does let me say “FUCK it. Pop quiz. Let’s go, class.” And be ready with brand new questions on the board that I didn’t have before I said that sentence. NAI is a good way to turn writing into an interactive DnD session, and is a great way to force a ram through writer’s block, with a “yeah, and—!” machine. If for no other reason than saying “uhh… no, not that, NAI…” and then correct it my way.
I’ve spent all week working with DeepSeek to write DnD campaigns based on artifacts from the game Dark Age of Camelot. This week was just on one artifact.
AI/LLMs are great for bouncing ideas off of and using to tweak things. I gave it a prompt on what I was looking for (the guardian of dusk steps out and says: “the dawn brings the warmth of the sun, and awakens the world. So does your trial begin.” He is a druid and the party is a party of 5 level 1 players. Give me a stat block and XP amount for this situation.)
I had it help me fine-tune puzzles and traps, fine-tune the story behind everything, and fine-tune the artifact at the end (it levels up 5 levels as the player does specific things to gain leveling points for just the item).
I also ran a short campaign with it as the DM. It did a great job at acting out the different NPCs that it created and adjusting to both the tone and situation of the campaign. It adjusted pretty well to what I did too.
Can the full-size DeepSeek handle dice and numbers? I have been using the distilled 70b of DeepSeek, and it definitely doesn’t understand how dice work, nor the ranges I set out in my ruleset. For example, a 1d100 being used to determine character class, with the classes falling into certain parts of the distribution. I did it this way, since some classes are intended to be rarer than others.
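A weighted 1d100 class table like the one described is easy to check deterministically in code, which is one way to verify whether a model is actually applying your ranges. A minimal sketch, with made-up class names and illustrative ranges (not the commenter’s actual ruleset):

```python
import random

# Hypothetical class table: each entry maps a band of a 1d100 roll to a class.
# Rarer classes get narrower bands, so they come up less often.
CLASS_TABLE = [
    (range(1, 51), "Fighter"),   # 1-50: common
    (range(51, 81), "Rogue"),    # 51-80: uncommon
    (range(81, 96), "Cleric"),   # 81-95: rare
    (range(96, 101), "Wizard"),  # 96-100: very rare
]

def roll_class(roll=None):
    """Roll 1d100 (or take a given roll) and look up the resulting class."""
    if roll is None:
        roll = random.randint(1, 100)
    for band, cls in CLASS_TABLE:
        if roll in band:
            return roll, cls
    raise ValueError(f"roll {roll} is outside 1-100")
```

Feeding a model a few fixed rolls and comparing its answers against a table like this makes the “does it understand the ranges” question testable instead of vibes-based.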
I ran a campaign by myself with 2 of my characters. I had DS act as DM. It seemed to handle it all perfectly fine. I tested it later and gave it scenarios. I asked it to roll the dice and show all its work. Dice rolls, any bonuses, any advantage/disadvantage. It got all of it right.
I then tested a few scenarios to check and see if it would follow the rules as they are supposed to be from 5e. It got all of that correct as well. It did give me options as if the rules were corrected (I asked it to roll damage as a barbarian casting fireball, it said barbs couldn’t, but gave me reasons that would allow exceptions).
What it ended up flubbing on later was forgetting the proper initiative order. I had to remind it a couple times that it messed it up. This only happened way later in the campaign. So I think I was approaching the limits of its memory window.
I tried the distilled model locally. It didn’t even realize I was asking it to DM. It just repeated the outline of the campaign.
It is good to hear what a full DeepSeek can do. I am really looking forward to having a better, localized version in 2030. Thank you for relating your experience, it is helpful. :)
I’m anxious to see it as well. I would love to see something like this implemented into games, and focused solely on whatever game it’s in. I imagine something like Skyrim but with a LLM on every character, or at least the main ones. I downloaded the mod that adds it to Skyrim now, but I haven’t had the chance to play with it. It does require prompts for the NPC to let you know you’re talking to it. I’d love to see a natural thing. Even NPCs carrying out their own natural conversations with each other and not with the PC.
I’ve also been watching the Vivaladirt people. We need a 4th wall breaking npc in every game when we get a llm like above.
Looking up Vivaladirt, I am guessing it is a group of Let’s Players who do a Mystery Science Theater 3000 take on their gameplay? If so, that would be neat.
These guys. Greg the garlic farmer is their 4th wall breaking guy.
I literally created an iOS app with zero experience and distributed it on the App Store. AI is an amazing tool and will continue to get better. Many people bash the technology but it seems like those people misunderstand it or think it’s all bad.
But I agree that relying on it to think for you is not a good thing.
Good. Maybe the dumbest people will forget how to breathe, and global society can move forward.
Oh you can guarantee they won’t forget how to vote 😃
Microsoft will just make a subscription AI for that, BaaS.
Which we will rebrand “Bullshit as a service”!
I thought that’s what it means?
No, he said Breath as a service, which is funny!
Well thank goodness that Microsoft isn’t pushing AI on us as hard as it can, via every channel that it can.
Learning how to evade and disable AI is becoming a critical thinking skill unto itself. Feels a bit like how I’ve had to learn to navigate around advertisements and other intrusive 3rd party interruptions while using online services.
Well at least they communicate such findings openly and don’t try to hide them. Other than ExxonMobil who saw global warming coming due to internal studies since the 1970s and tried to hide or dispute it, because it was bad for business.
No shit.