• 11 Posts
  • 27 Comments
Joined 4 months ago
Cake day: October 9th, 2024

  • From DeepSeek:

    Chain of Thought (CoT) in LLMs refers to a prompting technique that guides large language models to articulate intermediate reasoning steps when solving a problem, mimicking human-like logical progression. Here’s a concise breakdown:

    1. Purpose: Enhances performance on complex tasks (e.g., math, logic, commonsense reasoning) by breaking problems into sequential steps, reducing errors from direct, unstructured answers.

    2. Mechanism:

      • Prompting Strategy: Users provide examples (few-shot) or explicit instructions (zero-shot, e.g., “Let’s think step by step”) to encourage step-by-step explanations.
      • Output Structure: The model still generates tokens one at a time, laying out a reasoned pathway to the answer; the parallelism of the transformer applies within each forward pass over the context, not to the order of the output.
    3. Benefits:

      • Accuracy: Improves results on multi-step tasks by isolating and addressing each component.
      • Transparency: Makes the model’s “thinking” visible, aiding debugging and trust.
    4. Variants:

      • Few-Shot CoT: Examples with detailed reasoning are included in the prompt.
      • Zero-Shot CoT: Direct instructions trigger step-by-step output without examples.
      • Self-Consistency: Aggregates answers from multiple CoT paths and selects the most consistent one (sketched in code right after this list).
    5. Effectiveness: Particularly impactful for tasks that require structured reasoning, and less critical for simple queries. Research shows marked accuracy gains on math-word-problem benchmarks such as GSM8K.
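
    To make the self-consistency variant concrete, here is a rough Python sketch. sample_cot_answer() is a hypothetical stand-in for "draw one CoT completion from your model and extract its final answer"; the stub fakes that step so the snippet is self-contained and runs as-is:

      import random
      from collections import Counter

      def sample_cot_answer(question: str) -> str:
          # Hypothetical stand-in: sample one CoT completion (temperature > 0
          # so different runs take different reasoning paths) and extract the
          # final answer. Faked here so the example runs without a model.
          return random.choice(["9", "9", "9", "3"])

      def self_consistency(question: str, n_samples: int = 10) -> str:
          # Run several independent CoT paths, then majority-vote the final
          # answers; the most consistent answer is returned.
          answers = [sample_cot_answer(question) for _ in range(n_samples)]
          return Counter(answers).most_common(1)[0][0]

      print(self_consistency("23 - 20 + 6 = ?"))  # almost always prints "9"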

    In essence, CoT leverages the model’s generative capability to externalize reasoning, bridging the gap between opaque model decisions and interpretable human problem-solving.

    Example:
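
    A minimal Python sketch of what the few-shot and zero-shot prompts look like in practice. The word problems are the standard illustrations from the CoT literature, and no particular model or API is assumed:

      # Build zero-shot and few-shot CoT prompts. How you send them to a
      # model (OpenAI client, llama.cpp, Ollama, ...) is up to you.

      QUESTION = (
          "A cafeteria had 23 apples. They used 20 to make lunch "
          "and bought 6 more. How many apples do they have?"
      )

      def zero_shot_cot(question: str) -> str:
          # The trailing cue alone is what triggers step-by-step output.
          return f"Q: {question}\nA: Let's think step by step."

      def few_shot_cot(question: str) -> str:
          # One worked example with explicit reasoning, then the real question.
          demo = (
              "Q: Roger has 5 tennis balls. He buys 2 more cans of tennis "
              "balls. Each can has 3 tennis balls. How many does he have?\n"
              "A: Roger started with 5 balls. 2 cans of 3 balls each is 6 "
              "balls. 5 + 6 = 11. The answer is 11.\n\n"
          )
          return demo + f"Q: {question}\nA:"

      print(zero_shot_cot(QUESTION))
      print(few_shot_cot(QUESTION))

      # A typical CoT completion: "The cafeteria started with 23 apples.
      # They used 20, so 23 - 20 = 3. They bought 6 more, so 3 + 6 = 9.
      # The answer is 9."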

  • Holy fuck, the more I read into the bot campaign, the more infuriated and horrified I get.

    “As it relates to Covid-19 disinformation, China [in 2020] initiated a disinformation campaign to falsely blame the United States for the spread of Covid-19,” Lawrence noted. “In line with the US National Defense Strategy, the DoD continues to build integrated deterrence against critical challenges to US national security, including deterring the PRC’s spread of disinformation under the scrutiny of the Department’s coordination and deconfliction process,” she said.

    They are literally using “Whataboutism”, except spreading vaccine denial actually gets people KILLED.

    According to the document, the Pentagon also conceded it had “made some missteps in our COVID related messaging” but assured the Philippines that the military “has vastly improved oversight and accountability of information operations” since 2022.

    Oops-poopsies, we promise we won’t cause suffering and death the next time we deploy our bot army, because we’re “Transparent” and “Accountable” (especially after we allotted $1.6 billion to badmouth China). Tee-hee-hee.

    Who gives a flying fuck about “spooky wumaos” when the US bot army actually got people KILLED?

    https://www.pna.gov.ph/articles/1227042

    https://www.reuters.com/world/us-told-philippines-it-made-missteps-secret-anti-vax-propaganda-effort-2024-07-26/