ekZepp@lemmy.world to Technology@lemmy.ml · English · 5 months ago
Study Finds That 52 Percent of ChatGPT Answers to Programming Questions Are Wrong (futurism.com)
slacktoid@lemmy.ml · 5 months ago
We need a comparison against an average coder. Some fucking baseline ffs.
hayes_@sh.itjust.works · 5 months ago
Why would we compare it against an average coder?
ChatGPT wants to be a coding aid/reference material. A better baseline would be the top-rated answer for the question on Stack Overflow, or whether the answer appears in the first 3 Google search results.
Or a textbook’s explanation
anachronist@midwest.social · 5 months ago
“Self-driving cars will make the roads safer. They won’t be drunk or tired or make a mistake.”
Self-driving cars start killing people.
“Yeah but how do they compare to the average human driver?”
Goal post moving.