TheCheddarCheese (@TheCheddarCheese@lemmy.world)

chatgpt only generates text. that’s how it was designed to work. it doesn’t care whether the text it’s generating is true, or even whether it makes sense, so sometimes it will produce false statements (with the same confidence as the ‘linux gatekeepers’ you mentioned, except with no comments underneath to correct it), no matter how well you train it. and if there’s enough wrong information in the training data, it will start repeating that too, because again, its only real job is to pick the next word in a sequence based on the data it was trained on. sometimes it gets things right, sometimes it doesn’t, so we can’t blindly trust it. pointing that out is not gatekeeping.
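
if it helps, here’s a toy sketch of what ‘pick the next word’ means. this is just a tiny bigram model in python, nowhere near chatgpt’s scale (the training text, the model, and every name in it are made up for illustration), but the core idea is the same: sample the next word from learned frequencies, with no truth check anywhere in the loop. if the wrong answer dominates the data, the wrong answer dominates the output.

```python
import random
from collections import Counter, defaultdict

# toy next-word model: count which word follows which in the training text.
# note there is no notion of "true" anywhere in here, only frequency.
training_text = (
    "the moon is made of cheese . "
    "the moon is made of cheese . "
    "the moon is made of rock . "
)

bigrams = defaultdict(Counter)
tokens = training_text.split()
for prev, nxt in zip(tokens, tokens[1:]):
    bigrams[prev][nxt] += 1

def next_word(prev):
    # sample the next word in proportion to how often it followed `prev`
    # in the training data; no fact-checking happens at this step.
    counts = bigrams[prev]
    words = list(counts)
    weights = list(counts.values())
    return random.choices(words, weights=weights)[0]

# generate a sentence starting from "the". because "cheese" outnumbers
# "rock" 2:1 in the data, the wrong completion comes out most of the time.
word, out = "the", ["the"]
while word != "." and len(out) < 10:
    word = next_word(word)
    out.append(word)
print(" ".join(out))
```

run it a few times and you’ll mostly get ‘the moon is made of cheese’, delivered with the exact same confidence as the occasional ‘rock’. scaling the model up makes the statistics better, not the epistemics.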
