• TankovayaDiviziya@lemmy.world · 53 minutes ago

    I don’t work in IT, but I do know you need creativity to work in the industry, something the current generation of LLMs doesn’t possess.

    Linguists also dismiss LLMs in a similar vein, because LLMs can’t grasp context. It is always funny to be sarcastic and ironic with an LLM.

    Soft skills and culture are what the current iteration of LLMs lacks. However, I do think there is still huge potential for AI development in the decades to come, but I want this AI bubble to burst as an “in your face” moment for companies.

  • TuffNutzes@lemmy.world · 23 hours ago

    The LLM worship has to stop.

    It’s like saying a hammer can build a house. No, it can’t.

    It’s useful to pound in nails and automate a lot of repetitive and boring tasks but it’s not going to build the house for you - architect it, plan it, validate it.

    It’s similar to the whole 3D printing hype. You can 3D print a house! No you can’t.

    You can 3D print a wall, maybe a window.

    Then you have a skilled craftsman put it all together for you, ensure fit and finish, and essentially build the final product.

    • frog_brawler@lemmy.world · 48 minutes ago

      You’re making a great analogy with the 3D printing of a house.

      However, if we consider the 3D-printed house scenario, that skilled craftsman is now able to do things on his own that he would have needed a team for in the past. Most, if not all, of the less skilled members of that team are not getting any experience within the craft at that point. They’re no longer necessary when one skilled person can now do things on their own.

      What happens when the skilled and highly experienced craftsmen that use AI as a supplemental tool (and subsequently earn all the work) eventually retire, and there have been no juniors or mid-levels for a while? No one is really going to be qualified without having had exposure to the trade for several years.

      • TuffNutzes@lemmy.world · 22 hours ago

        Yeah, I’ve seen that before, and it’s basically what I’m talking about. Again, that’s not “printing a 3D house” as the hype would lead one to believe. It is extruding cement to build the walls around very carefully placed framing, heavily managed and coordinated by people, and finished with plumbing, electrical, etc.

        It’s cool that they can bring in this huge piece of equipment to extrude cement and form some kind of wall. It’s a neat proof of concept. I personally wouldn’t want to live in a house that looked like that or was constructed that way. Would you?

        • scarabic@lemmy.world · 8 hours ago

          it’s basically what I’m talking about

          Well, a minute ago you were saying that AI worship is akin to saying

          a hammer can build a house

          Now you’re saying that a hammer is basically the same thing as a machine that can create a building frame unattended? Come on. You have a point to be made here but you’re leaning on the stick a bit too hard.

        • Nate Cox@programming.dev · 21 hours ago

          I mean, “to 3d print a wall” is a massive, bordering on disingenuous, understatement of what’s happening there. They’re replacing all of the construction work of framing and finishing all of the walls of the house, interior and exterior, plus attaching them and insulating them, with a single step.

          My point is if you want to make a good argument against LLMs, your metaphor should not have such an easy argument against it at the ready.

          • DireTech@sh.itjust.works · 14 hours ago

            Did you see another video about this? The one linked only showed the walls and still showed them doing interior framing. Nothing about windows, electrical, plumbing, insulation, etc.

            What they showed could speed up construction but there are tons of other steps involved.

            I do wonder how sturdy it is since it doesn’t look like rebar or anything else is added.

            • Nate Cox@programming.dev · 12 hours ago

              I’m not an expert on it; I’ve only watched a few videos, but from what I’ve seen they add structural elements between the layers at certain points, which act like rebar.

              There’s no framing of the walls, but they do set up scaffolds to support overhangs (because you can’t print onto nothing).

              • scarabic@lemmy.world · 8 hours ago

                I’m with you on this. We can’t just casually brush aside a machine that can create the frame of a house unattended - just because it can’t also do wiring. It was a bad choice of image to use to attack AI. In fact it’s a perfect metaphor for what AI is actually good for: automating certain parts of the work. Yes, you still need an electrician to come in, just like you also need a software engineer to wire up the UI code their LLM generated to the back end, etc.

  • black_flag@lemmy.dbzer0.com · 1 day ago

    I think it’s going to require a change in how models are built and optimized. Software engineering requires models that can do more than just generate code.

    You mean to tell me that language models aren’t intelligent? But that would mean all these people cramming LLMs in places where intelligence is needed are wasting their time?? Who knew?

    Me.

  • isaacd@lemmy.world · 22 hours ago

    Clearly LLMs are useful to software engineers.

    Citation needed. I don’t use one. If my coworkers do, they’re very quiet about it. More than half the posts I see promoting them, even as “just a tool,” are from people with obvious conflicts of interest. What’s “clear” to me is that the Overton window has been dragged kicking and screaming to the extreme end of the scale by five years of constant press releases masquerading as news and billions of dollars of market speculation.

    I’m not going to delegate the easiest part of my job to something that’s undeniably worse at it. I’m not going to pass up opportunities to understand a system better in hopes of getting 30-minute tasks done in 10. And I’m definitely not going to pay for the privilege.

    • frog_brawler@lemmy.world · 29 minutes ago

      I’m not a “software engineer” but a lot of people that don’t work within tech would probably call me one.

      I’m in Cloud Engineering, but came from the sys/network admin and ops side of things rather than starting off in dev or anything like that.

      Up until about 5 years ago, I really only knew Powershell and a little bit of bash. I’ve gotten up to speed in a lot of things but never officially learned python, js, go or any other real development language that would be useful to me. I’ve spent way more time focusing on getting good with IaC, and probably more of the SRE type stuff.

      In my particular situation, LLMs are incredibly useful. It’s fair to say that I use them daily now. I’ve had it convert bash scripts to python for me very quickly. I don’t know python, but now that I’m able to look at a python script next to my bash, I’m picking up on stuff a lot faster. I’m using Lambda way more often as a result.
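
      As a rough illustration of the kind of conversion I mean (a made-up snippet, not one of my actual scripts):

      ```python
      # Original bash (simplified):
      #   for f in /var/log/app/*.log; do
      #     grep -q ERROR "$f" && echo "$f"
      #   done
      from pathlib import Path

      for log_file in Path("/var/log/app").glob("*.log"):
          # Same idea as `grep -q ERROR`: print the file if any line matches
          if any("ERROR" in line for line in log_file.open()):
              print(log_file)
      ```

      Seeing the two versions side by side like that is how things started clicking for me.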

      Also, there’s a lot of mundane filling out forms shit that I delegate to an LLM. I don’t want to spend my time filling out a form that I know no one is actually going to read. F it, I’ll have the AI write a report for an AI. It’s dumb as shit, but that’s the world today.

    • Phegan@lemmy.world · 1 hour ago

      I’ve only found two effective uses for them. Every time I tried them otherwise, they fell flat and took me longer than it would have taken to write the code myself.

      The first was a greenfield personal project where I let code quality wane, since I was the only person maintaining it and wanted to test LLMs. The other was to write highly repetitive data tests, where the model can simply type faster than me.

      For anything that requires writing code that needs to be maintained by multiple people, or for systems older than 2 years, it has fallen completely flat. In cases like that I spend more time telling the LLM it is doing it wrong than it would have taken me to write the code in the first place. In 95% of cases, I am still faster than an LLM at solving a problem and writing the code.

    • skisnow@lemmy.ca · 8 hours ago

      I’ve found them useful, sometimes, but nothing like a fraction of what the hype would suggest.

      They’re not adequate replacements for code reviewers, but getting an AI code review does let me occasionally fix a couple of blunders before I waste another human’s time with them.

      I’ve also had the occasional bit of luck with “why am I getting this error” questions, where it saved me 10 minutes of digging through the code myself.

      “Create some test data and a smoke test for this feature” is another good timesaver for what would normally be very tedious drudge work.
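
      The output is usually something like this pytest boilerplate (everything below is a made-up example), which is exactly the drudge work I’m happy to hand off:

      ```python
      import pytest

      # Stand-in for the feature under test; in reality this would be
      # imported from the app code. All names here are invented.
      def summarize_orders(orders):
          totals = {}
          for order in orders:
              totals[order["status"]] = totals.get(order["status"], 0.0) + order["total"]
          return totals

      @pytest.fixture
      def sample_orders():
          # Test data covering the obvious cases: paid, refunded, zero-total
          return [
              {"id": 1, "total": 19.99, "status": "paid"},
              {"id": 2, "total": 5.00, "status": "refunded"},
              {"id": 3, "total": 0.00, "status": "paid"},
          ]

      def test_summarize_orders_smoke(sample_orders):
          # Smoke test: the feature runs end-to-end and produces sane numbers
          summary = summarize_orders(sample_orders)
          assert summary["paid"] == pytest.approx(19.99)
          assert summary["refunded"] == pytest.approx(5.00)
      ```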

      What I have given up on is “implement a feature that does X” requests, because they invariably create more work than they save. Companies selling “type in your app idea and it’ll write the code” solutions are snake-oil salesmen.

    • jj4211@lemmy.world · 12 hours ago

      I have been using it a bit, still can’t decide if it is useful or not though… It can occasionally suggest a blatantly obvious couple of lines of code here and there, but along the way I get inundated with annoying suggestions that are useless and I haven’t gotten used to ignoring them.

      I mostly work with a niche area the LLMs seem broadly clueless about, and prompt driven code is almost always useless except when dealing with a super boilerplate usage of a common library.

      I do know some people that deal with amazingly mundane and common functions and they are amazed that it can pretty much do their jobs, but they never really impressed me before anyway and I wondered how they had a job…

    • Feyd@programming.dev · 20 hours ago

      I don’t use one, and my coworkers that do use them are very loud about it, and worse at their jobs than they were a year ago.

      • mojofrododojo@lemmy.world · 15 hours ago

        47% daily use

        That is NOT what that says. It says 47% of STACK OVERFLOW RESPONDENTS REPORT using AI. That does not represent 47% of devs.

        If you go to 4chan and run a poll of chuds, you’re going to get a high percentage of respondents affirming your query. You went to Stack Overflow and asked about AI. Think about the user base.

        • Aatube@kbin.melroy.org · 13 hours ago

          thanks but i felt like that’d be obvious from the URL lol. the SO survey is probably the largest sample size we have for this…

          …that isn’t outright from an AI company (not that SO doesn’t have AI, but they’re still an answers company as opposed to, say, Cursor AI, whose main selling point is the AI. even Zed, the company behind the blog linked in the post, has a much higher emphasis on AI) and their sample should be pretty close to all online devs, maybe slightly exclusionary of very experienced ones. SO’s evangelist proportion is not even close to 4chan’s chud proportion; not sure why you felt that comparison was needed.

          it’s not like Codidact has a dev survey and even if they had one they’d have as much bias as this comment section

    • hisao@ani.social · 22 hours ago

      If my coworkers do, they’re very quiet about it.

      Gee, guess why. Given the current culture of hate and ostracism I would never outright say IRL that I like it or use it a lot. I would say something like “yeah, I think it can sometimes be useful when used carefully and I sometimes use it too”. While in reality it would mean that it actually writes 95% of code under my micromanagement.

      • Feyd@programming.dev · 20 hours ago

        Wut. At software shops the prevailing atmosphere is that you should use it and broadcast it as much as possible. This person’s experience is not normal.

        • hisao@ani.social · 19 hours ago

          Okay, to be fair, my knowledge of the current culture in the industry is very limited. It’s mostly an impression formed by online conversations, not limited to Lemmy. At the last project I worked on, it was forbidden to use public LLMs because of intellectual property (and maybe even GDPR) concerns. We had a local, scope-limited LLM integration, though, and that one was allowed, but there was literally a single person across multiple departments who used it. It was a mid-level frontend dev, and it was only for autocomplete. Backenders wouldn’t even consider it.

  • Frezik@lemmy.blahaj.zone · 1 day ago

    To those who have played around with LLM code generation more than me, how are they at debugging?

    I’m thinking of Kernighan’s Law: “Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it.” If vibe coding reduces the complexity of writing code by 10x, but debugging remains just as difficult as before, then Kernighan’s Law needs to be updated to say debugging is 20x as hard as vibe coding. Vibe coders have no hope of bridging that gap.

    • frog_brawler@lemmy.world · 15 minutes ago

      How are they at debugging? In a silo, they’re shit.

      I’ve been using one LLM to debug the other this past week for a personal project, and it can be a bit tedious sometimes, but it eventually does a decent enough job. I’m pretty much vibe coding things that are a bit outside my immediate knowledge and skill set, but I know how they’re supposed to work. For example, I’ve got some python scripts using Rekognition to scan photos for porn or other explicit stuff before they get sent to an S3 bucket. After that happens, there’s now a dashboard that gives me results on how many images were scanned and then marked as either acceptable or flagged as inappropriate. Once a threshold of too many inappropriate images is crossed, it’ll shadowban the sender from submitting any more dick pics.
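
      The moderation check itself boils down to a few lines of boto3, roughly like this (a sketch of what the generated code does; the function name and threshold here are illustrative):

      ```python
      import boto3

      rekognition = boto3.client("rekognition")

      def is_image_acceptable(image_bytes: bytes, min_confidence: float = 80.0) -> bool:
          """Return False if Rekognition flags the image as explicit."""
          response = rekognition.detect_moderation_labels(
              Image={"Bytes": image_bytes},
              MinConfidence=min_confidence,
          )
          # Rekognition returns one label per detected moderation category;
          # an empty list means nothing was flagged.
          return not response["ModerationLabels"]
      ```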

      For someone that’s never taken a coding course, I’m relatively happy with the results I’m getting so far. Granted, this may be small potatoes for someone with an actual development background, but as someone that’s been working adjacent to those folks for several years, I’m happy with the output.

    • very_well_lost@lemmy.world · 22 hours ago

      The company I work for has recently mandated that we must start using AI tools in our workflow and is tracking our usage, so I’ve been experimenting with it a lot lately.

      In my experience, it’s worse than useless when it comes to debugging code. The class of errors that it can solve is generally simple stuff like typos and syntax errors — the sort of thing that a human would solve in 30 seconds by looking at a stack trace. The much more important class of problem, errors in the business logic, it really really sucks at solving.

      For those problems, it very confidently identifies the wrong answer about 95% of the time. And if you’re a dev who’s desperate enough to ask AI for help debugging something, you probably don’t know what’s wrong either, so it won’t be immediately clear if the AI just gave you garbage or if its suggestion has any real merit. So you go check and manually confirm that the LLM is full of shit, which costs you time… then you go back to the LLM with more context and ask it to try again. Its second suggestion will sound even more confident than the first (“Aha! I see the real cause of the issue now!”), but it will still be nonsense. You go waste more time to rule out the second suggestion, then go back to the AI to scold it for being wrong again.

      Rinse and repeat this cycle enough times until your manager is happy you’ve hit the desired usage metrics, then go open your debugging tool of choice and do the actual work.

      • HarkMahlberg@kbin.earth · 21 hours ago

        we must start using AI tools in our workflow and is tracking our usage

        Reads to me as “Please help us justify the very expensive license we just purchased and all the talented engineers we just laid off.”

        I know the pain. Leadership’s desperation is so thick you can smell it. They got FOMO’d, now they’re humiliated, so they start lashing out.

        • frog_brawler@lemmy.world · 11 minutes ago

          Funny enough, the AI shift is really just covering for the over-hiring mistakes of 2021. They can’t admit they fucked up by hiring too many people during Covid, so they’re using AI as the scapegoat. We all know it’s not able to actually replace people yet, but that’s happening anyway.

          There won’t be any immediate ramifications, we’ll start to see that in probably 12-18 months or so. It’s just another form of kicking the can down the road.

      • HubertManne@piefed.social · 22 hours ago

        Maybe it’s just me, but I find typos to be the most difficult, because my brain can easily see them as correct, so the whole code looks correct. It’s like the way you can take the vowels out of sentences and people can still immediately read them.

        • ganryuu@lemmy.ca · 1 hour ago

          That’s probably why they talked about looking at a stack trace: you’ll see immediately that you made a typo in a variable name or a language keyword when compiling or executing.

      • TrooBloo@lemmy.dbzer0.com · 21 hours ago

        As it seems to be the case in all of these situations, AI fails hard at tasks when compared to tools specifically designed for that task. I use Ruff in all my Python projects because it formats my code and finds (and often fixes) the kind of low complexity/high probability problems that are likely to pop up as a result of human imperfection. It does it with great accuracy, incredible speed, using very little computing resources, and provides levels of safety in automating fixes. I can run it as an automation step when someone proposes code changes, adding all of 3 or 4 seconds to the runtime. I can run it on my local machine to instantly resolve my ID10T errors. If AI can’t solve these problems as quickly, and if it can’t solve anything more complicated reliably, I don’t understand why it would be a tool I would use.
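
        For reference, the whole setup is a few lines in pyproject.toml plus one command in the automation step (a typical configuration, not necessarily my exact one):

        ```toml
        # pyproject.toml (typical Ruff configuration; illustrative)
        [tool.ruff]
        line-length = 100

        [tool.ruff.lint]
        select = ["E", "F", "I"]  # pycodestyle errors, pyflakes, import sorting
        ```

        Then `ruff check --fix .` and `ruff format .` do everything described above in seconds.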

    • Ledivin@lemmy.world · 1 day ago

      They’re not good at debugging. The article is pretty spot on, IMO - they’re great at doing the work, but you are still the brain. You’re still deciding what to do, and maybe 50% of the time how to do it; you’re just not executing at the lowest level anymore. The same goes for debugging: it is not an exercise at the lowest level, and it needs you to run it.

      • hisao@ani.social · 24 hours ago

        deciding what to do, and maybe 50% of the time how to do it, you’re just not executing the lowest level anymore

        And that’s exactly what I want. I don’t get why people want more. Having more means you have less and less control or influence over the result. What I want is for other fields to become like programming is now, so that you can micromanage every step and have great control over the result.

    • Pechente@feddit.org · 1 day ago

      Definitely not good. Sometimes they can solve issues but you gotta point them in the direction of the issue. Other times they write hacky workarounds that do the job for the moment but crash catastrophically with the next major dependency update.

      • HarkMahlberg@kbin.earth · 24 hours ago

        I saw an LLM override the casting operator in C#. An evangelist would say “genius! what a novel solution!” I said “nobody at this company is going to know what this code is doing 6 months from now.”

        It didn’t even solve our problem.

        • hisao@ani.social · 21 hours ago

          I saw an LLM override the casting operator in C#. An evangelist would say “genius! what a novel solution!” I said “nobody at this company is going to know what this code is doing 6 months from now.”

          Before LLMs, people were often saying this about people smarter than the rest of the group: “Yeah, he was too smart and overengineered solutions that no one could understand after he left.” This is btw one of the reasons why I’ve increasingly disliked programming as a field over the years and happily delegate the coding part to AI nowadays. This field celebrates conformism, and that’s why humans shouldn’t write code manually. A perfect field to automate away via LLMs.

          • very_well_lost@lemmy.world · 21 hours ago

            Before LLMs people were often saying this about people smarter than the rest of the group.

            Smarter by whose metric? If you can’t write software that meets the bare minimum of comprehensibility, you’re probably not as smart as you think you are.

            Software engineering is an engineering discipline, and conformity is exactly what you want in engineering — because in engineering you don’t call it ‘conformity’, you call it ‘standardization’. Nobody wants to hire a maverick bridge-builder, they wanna hire the guy who follows standards and best practices because that’s how you build a bridge that doesn’t fall down. The engineers who don’t follow standards and who deride others as being too stupid or too conservative to understand their vision are the ones who end up crushed to death by their imploding carbon fiber submarine at the bottom of the Atlantic.

            AI has exactly the same “maverick” tendencies as human developers (because, surprise surprise, it’s trained on human output), and until that gets ironed out, it’s not suitable for writing anything more than the most basic boilerplate — which is stuff you can usually just copy-paste together in five minutes anyway.

            • hisao@ani.social · 19 hours ago

              You’re right, of course, and engineering as a whole is a first-line subject for AI. Everything that has strict specs, standards, and invariants will benefit massively from it, and conforming is what AI inherently excels at, as opposed to humans. Complaints like the one this subthread started with are usually people being bad at writing requirements rather than AI being bad at following them. If you approach requirements like in actual engineering fields, you will get corresponding results, while humans will struggle to fully conform, or will even try to find tricks and loopholes in your requirements to sidestep them and assert their will while technically remaining in “barely legal” territory.

          • Feyd@programming.dev · 19 hours ago

            Wow you just completely destroyed any credibility about your software development opinions.

            • hisao@ani.social · 19 hours ago

              Why though? I think hating and maybe even disrespecting programming, and wanting your job to be made as redundant and automated as possible, is actually the best mindset for a programmer. Maybe in the past it was a good mindset for becoming a team lead or a project manager, but nowadays, with AI, it’s a mindset for programmers.

              • Feyd@programming.dev · 19 hours ago

                Before LLMs people were often saying this about people smarter than the rest of the group. “Yeah he was too smart and overengineered solutions that no one could understand after he left,”.

                This part.

                • hisao@ani.social · 19 hours ago

                  The fact that I dislike that software engineering turned out not to be a good place for self-expression, or for demonstrating your power level and the beauty and depth of your intricate thought patterns through the advanced constructs and structures you come up with, doesn’t mean I disagree that it’s true.

    • hisao@ani.social · 24 hours ago

      My first level of debugging is logging things to the console. LLMs do a decent job here at “reading your mind” and autocompleting “pri” into something like println!("i = {}, x = {}, y = {}", i, x, y); with very good context awareness of what and how exactly it makes most sense to debug-print at the current location in the code.

    • 0x01@lemmy.ml · 1 day ago

      I use it extensively daily.

      It cannot step through code right now, so true debugging is not something you use it for. Most of the time the llm will take the junior engineer approach of “guess and check” unless you explicitly give it better guidance.

      My process is generally to start with unit tests and type definitions, then a large multipage prompt for every segment of the app the llm will be tasked with. Then I’ll make a snapshot of the code, give the tool access to the markdown prompt, and validate its work. When there are failures and the project has extensive unit tests it generally follows the same pattern of “I see that this failure should be added to the unit tests” which it does and then re-executes them during iterative development.
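
      To make that concrete, the starting scaffold looks something like this (all names here are invented for illustration):

      ```python
      # Types and a failing test come first; the llm is then prompted
      # to implement until the tests pass. Everything here is a toy example.
      from dataclasses import dataclass

      @dataclass
      class Invoice:
          subtotal: float
          tax_rate: float

      def total_due(invoice: Invoice) -> float:
          raise NotImplementedError  # intentionally left for the llm to fill in

      def test_total_due():
          assert total_due(Invoice(subtotal=100.0, tax_rate=0.2)) == 120.0
      ```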

      If tests are not available or if it is not something directly accessible to the tool then it will generally rely on logs either directly generated or provided by the user.

      My role these days is to provide long, well-thought-out prompts, verify the integrity of the code after every commit, and generally just treat the llm as a reckless junior dev. Sometimes junior devs can surprise you. Yesterday I was very surprised by a one-shot result: I asked for a mobile RN app for taking my rambling voice recordings and summarizing them into prompts, and it was immediately, remarkably successful; now I’ve been walking around mic’d up to generate prompts.

    • Zexks@lemmy.world · 1 day ago

      Working just fine. It one-shot a Kodi TV channel addon for me last weekend. Used it to integrate Kofax into DocuSign. Building 2 Blazor apps, one new, one an upgrade. Used it to create a stack of MC servers for the kids, with a dashboard of statuses and control switches. My son is working on his own MC mod with it. I use it almost daily for random file organization and management scripts. Using it to clean up my media library metadata. Anytime I have to do something to more than 5 or so files, I pull it up and ask for a script.

      It’s a tool like any other. There will be people who adapt and people who fail to, just like we had with computers and the internet. It seems to be long forgotten now, but literally ALL of these anti-AI arguments were made against computers and the internet 30-50 years ago. Very similar ones were made when books and writing became commonplace as well.

      • Feyd@programming.dev · 23 hours ago

        “Some random people were wrong about something in the past so nobody is allowed to speculate that any technology isn’t as revolutionary as it’s hyped to be ever again” is not a useful or compelling argument.

      • TheFinn@discuss.tchncs.de · 23 hours ago

        Apparently I’m not up to date. I’ve been impressed by some things and turned off by others. But I haven’t seen any workflows or setups that enabled access to my file system. How is that accomplished, and are there any safeguards around it?

        • Feyd@programming.dev · 19 hours ago

          You’re looking for an MCP server, which is the standard way to hook things into chatbots now, and safeguards would depend on the particular server.
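
          For example, a typical client config wiring in the reference filesystem server looks something like this (the directory path is just an example; the safeguard is that the server can only touch the directories you explicitly list):

          ```json
          {
            "mcpServers": {
              "filesystem": {
                "command": "npx",
                "args": ["-y", "@modelcontextprotocol/server-filesystem", "/home/me/projects"]
              }
            }
          }
          ```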

  • NoiseColor @lemmy.world · 23 hours ago

    Good article, I couldn’t agree with it more, it’s exactly my experience.

    The tech is being developed really fast, and that is the main issue when talking about AI. Most AI haters are using the issues we might have today to discredit the whole technology, which makes no sense to me.

    And the issue the article talks about is real, and whoever solves it will be rich.

    However, it’s interesting to think about the issues that come next.

    • Aceticon@lemmy.dbzer0.com · 2 hours ago

      Like the guy whose baby doubled in weight in 3 months and thus he extrapolated that by the age of 10 the child would weigh many tons, you’re assuming that this technology has a linear rate of improvement of “intelligence”.

      This is not at all what’s happening. The evolution of things like LLMs in the last year or so (say, between GPT-4 and GPT-5) is far smaller than it was earlier in that tech, and we keep seeing more and more news about problems with training it further and getting it improved, including the big one: training LLMs on the output of LLMs makes them worse, and the more LLM output is out there, the harder it gets to train new iterations on clean data.

      (And, interestingly, no tech has ever had a rate of improvement that didn’t eventually tail off, so it’s a peculiar expectation to have for one specific tech that it will keep on steadily improving.)

      With this specific path taken in implementing AI, the question is not “when will it get there” but rather “can it get there or is it a technological dead-end”, and at least for things like LLMs the answer increasingly seems to be that it is a technological dead-end for the purpose of creating reasoning intelligence and doing work that requires it.

      (For all your preemptive defense by implying that critics are “ai haters”, no hate is required to do this analysis, just analytical ability and skepticism, untainted by fanboyism)

    • HarkMahlberg@kbin.earth · 22 hours ago

      It’s true, the tech will get better in the future, we just need to believe and trust the plan.

      Same thing with crypto and NFTs. They were 99% scam by volume, but who wouldn’t love moving their life savings into a digital ecosystem controlled by a handful of rich gambling addicts with no consumer protections? Imagine: you’ll never need to handle dirty paper money ever again, we’ll just put it all in a digital wallet somewhere controlled by someone else coughmastercardcough.

      And another thing, we were too harsh on the Metaverse. Sure, spending 8 hours in VR could make you vomit, and the avatars made ET for the Atari look like Uncharted 4, but it was just in its infancy!

      I too want to outsource all my critical thinking to a chatbot controlled by a wealthy, insular narcissist who throws Nazi salutes. The technology just needs time to mature. Who knows, maybe it can automate the exile of birthright citizens for us too!

      /s

      • NoiseColor @lemmy.world · 21 hours ago

        That’s exactly the hyperbole I was talking about. Your post is full of obvious fallacies, but the fact that you are pushing everything to the absolutes is the silliest one.

        • Aceticon@lemmy.dbzer0.com · 2 hours ago

          Your whole point discounts the experience of 50 years of technological evolution (all technological branches invariably slow down and stop improving) and the last 20 years of hype in tech (literally everything is pushed like crazy as “the next big thing” by people trying to make a lot of money from it, and almost all of it isn’t), so that specific satirical take on your post is well deserved.

  • Nighed@feddit.uk · 20 hours ago

    The “mental model” idea CAN be done by AI.

    In my experience, if you get it to build a requirements doc first, then ask it to implement that while updating the doc as required (effectively its mental state), you will get a pretty good output with decent ‘debugging’ ability.

    This even works ok with the older ‘dumber’ models.

    That only works when you have a comprehensive set of requirements available though. It works when you want to add a new screen/process (mostly) but good luck updating an existing one! (I haven’t tried getting it to convert existing code to a requirements doc - anyone tried that?)

    • flop_leash_973@lemmy.world · 17 hours ago

      I tried feeding ChatGPT a Terraform codebase once and asked it to produce an architecture diagram of what the code base would deploy to AWS.

      It got most of the little blocks right for the services that would get touched. But the layout and traffic direction flow between services was nonsensical.

      Truth be told it did do a better job than I thought it would initially.

      • Nighed@feddit.uk · 15 hours ago

        The trick is to split up the tasks into chunks.

        Ask it to identify the blocks.

        Then ask it to identify the connections.

        Then ask it to produce the diagram.

        • Pumasuedeblue@sh.itjust.works · 15 hours ago

          Which means you just did four things to help the AI which the AI can’t do itself. That makes it a tool: useful in some applications, not useful in others, and constantly requiring a human to properly utilize it.

  • hisao@ani.social · 1 day ago

    I love how the article baits AI-haters into upvoting it, even though it’s very clearly pro-AI:

    At Zed we believe in a world where people and agents can collaborate together to build software. But, we firmly believe that (at least for now) you are in the drivers seat, and the LLM is just another tool to reach for.