• Vanth@reddthat.com · 3 days ago

    Is environmental impact at the top of anyone's list of reasons they don't like ChatGPT? It's not at the top of mine, nor of anyone's I have talked to.

    The two most common reasons I hear are 1) no trust in the companies hosting the tools to protect consumers, and 2) rampant theft of IP to train LLMs.

    The author moves away from a strict environmental focus despite claims to the contrary in their intro:

    This post is not about the broader climate impacts of AI beyond chatbots, or about whether AI is bad for other reasons

    […]

    Other Objections: This is all a gimmick anyway. Why not just use Google? ChatGPT doesn’t give better information

    … yet doesn’t address the most common criticisms.

    Worse, the author accuses anyone who pauses to consider the downsides of ChatGPT of being absurdly illogical.

    Being around a lot of adults freaking out over 3 Wh feels like I’m in a dream reality. It has the logic of a bad dream. Everyone is suddenly fixating on this absurd concept or rule that you can’t get a grasp of, and scolding you for not seeing the same thing. Posting long blog posts is my attempt to get out of the weird dream reality this discourse has created.

    IDK what logical fallacy this is, but claiming people are “freaking out over 3 Wh” is very disingenuous; the quick arithmetic below puts that number in context.

    Rating as basic content: 2/10, poor and disingenuous argument

    Rating as example of AI writing: 5/10, I’ve certainly seen worse AI slop
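
    For what it's worth, the 3 Wh figure the author anchors on is easy to put in perspective with a back-of-envelope sketch; the per-query figure is the blog post's, and the daily query counts below are illustrative assumptions:

    ```python
    # Back-of-envelope: what 3 Wh per query adds up to over a year.
    # 3 Wh is the blog post's figure; the query volumes are assumptions.
    WH_PER_QUERY = 3.0

    for queries_per_day in (10, 100, 1000):
        kwh_per_year = queries_per_day * WH_PER_QUERY * 365 / 1000
        print(f"{queries_per_day:>5} queries/day ≈ {kwh_per_year:.1f} kWh/year")
    ```

    Even very heavy personal use lands in the tens of kWh per year, which is the scale the author argues over; the objections above are about trust and IP, not that number.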

    • Saik0@lemmy.saik0.com · 2 days ago

      The two most common reasons I hear are 1) no trust in the companies hosting the tools to protect consumers, and 2) rampant theft of IP to train LLMs.

      My reason is that you can’t trust the answers regardless. Hallucinations are a rampant problem. Even if we managed to cut it down to 1 in 100 queries hallucinating, you still couldn’t trust ANYTHING (sketched below). We’ve seen well-trained, narrowly targeted AIs that don’t directly take user input (so can’t easily be manipulated) recommending, in Google search results, that people put glue on their pizza to make the cheese stick better… or that geologists recommend eating a rock a day.

      If a custom-tailored AI can’t cut it… the general ones are not going to be all that valuable without significant external validation/moderation.
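
      To make the “even 1 in 100” point concrete, here is a quick sketch of how a small, independent per-query hallucination rate compounds; the 1% rate is the hypothetical from the comment above:

      ```python
      # If each answer independently hallucinates with probability p,
      # the chance that at least one of n answers is wrong is 1 - (1 - p)**n.
      # p = 0.01 is the hypothetical "1 in 100" rate from the comment above.
      p = 0.01

      for n in (1, 10, 100, 500):
          at_least_one = 1 - (1 - p) ** n
          print(f"{n:>4} queries -> {at_least_one:.1%} chance of at least one bad answer")
      ```

      By 100 queries you are past a 60% chance of having been misled at least once, which is why “can’t trust ANYTHING” follows even from a low per-query rate.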

      • Justas🇱🇹@sh.itjust.works · 2 days ago

        There is also the argument that a downpour of AI-generated slop is making the Internet in general less usable, hurting everyone (except the slop makers) by making true or genuine information harder to find and verify.

    • anus@lemmy.worldOP · 3 days ago

      Thank you for your considered and articulate comment.

      What do you think about the significant difference in attitude between the comments here and those in (quite serious) programming communities like https://lobste.rs/s/bxixuu/cheat_sheet_for_why_using_chatgpt_is_not

      Are we in different echo chambers? Is ChatGPT a uniquely powerful tool for programmers? Is social media a fundamentally Luddite mechanism?

      • Vanth@reddthat.com · 2 days ago (edited)

        I’m curious whether you can articulate the difference between being critical of how a particular technology is owned and managed, and simply being a Luddite.

      • Rooki@lemmy.world · 3 days ago

        I would say GitHub Copilot (which uses a GPT model) uses more Wh than ChatGPT, because it gets hit with far more queries on average: the “AI” autocomplete triggers almost every time you stop typing, and on random occasions besides.
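
        A rough sketch of the query-volume difference that claim rests on; every number here is an illustrative assumption except the 3 Wh figure quoted earlier in the thread, and real autocomplete requests are smaller than chat prompts, so this overstates their cost:

        ```python
        # Illustrative comparison of query counts: deliberate chat prompts
        # vs. autocomplete firing on nearly every typing pause.
        WH_PER_QUERY = 3.0          # figure quoted in the thread (chat-sized)

        chat_queries_per_day = 20   # assumed deliberate prompts
        pauses_per_hour = 200       # assumed pauses that trigger a completion
        coding_hours_per_day = 6

        autocomplete_queries = pauses_per_hour * coding_hours_per_day
        print(f"chat:         {chat_queries_per_day * WH_PER_QUERY:.0f} Wh/day")
        print(f"autocomplete: {autocomplete_queries * WH_PER_QUERY:.0f} Wh/day")
        ```

        Even if each completion cost a tenth as much as a chat query, the sheer trigger rate would dominate, which is the point being made here.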

          • Saik0@lemmy.saik0.com · 2 days ago

            I don’t think this answers the question

            They’re specifically showing you that, in the use case you asked about, the assertions have to change. Your question is poorly framed for the very case you’re asking about.

            So no, it doesn’t answer the question… but your question carries a bunch of extra caveats that have to be accounted for, and you’re just straight up missing them.