• WasPentalive@lemmy.one · 22 points · 1 year ago

    Every time I have asked ChatGPT to code something, it seems to lose the thread halfway through and starts giving nonsensical code. I asked it to do something simple in HP-41C calculator code and it invented functions out of whole cloth.

    • averagedrunk@lemmy.ml · 8 points · 1 year ago

      I asked it for something in PowerShell and it did the same thing. I asked how it came up with that function, and it said the function doesn’t exist, but if it did, that’s how it would work.

    • Ubermeisters@discuss.online · 2 points · 1 year ago (edited)

      When it starts going off the rails like that, I also ask it to “check its work when it’s done,” and that seems to extend the amount of usable time before it loses the plot and suggests I use VBA or something.

    • CloverSi@lemmy.comfysnug.space · 2 points · 1 year ago

      Quality of output depends a lot on how common the code is in its training data. I would guess it’d be best at something like Python, with its wealth of teaching materials and examples out there.

      • Cethin@lemmy.zip · 5 points · 1 year ago

        It depends on how common the language is and how novel the idea is. It cannot create something new. It isn’t creative. It spits out what is predictable based on what other people have written before. It isn’t intelligent. It’s glorified auto-complete.
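        The “glorified auto-complete” idea can be illustrated with a toy sketch: a bigram model that predicts the next word purely from counts of what followed it in its training text. (This is a deliberately crude stand-in — real LLMs are neural networks over tokens — but the training objective, predicting the next token from past text, is the same, and the toy model shows the point above: it can only emit continuations it has seen.)

        ```python
        from collections import Counter, defaultdict

        # Tiny "training corpus"; the model only ever knows these words.
        corpus = "the cat sat on the mat the cat ate the fish".split()

        # Count, for each word, which words followed it and how often.
        follows = defaultdict(Counter)
        for word, nxt in zip(corpus, corpus[1:]):
            follows[word][nxt] += 1

        def predict(word):
            """Return the most frequent word seen after `word`, or None."""
            if word not in follows:
                return None  # never seen in training: nothing to offer
            return follows[word].most_common(1)[0][0]

        print(predict("the"))  # "cat" — the most common continuation seen
        print(predict("dog"))  # None — it can't produce what it never saw
        ```

        Scaling the corpus up (and swapping the counter for a neural network) improves the predictions enormously, but the mechanism stays statistical: common patterns in the training data come out well, genuinely novel ones don’t.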