Now they just use LLMs to generate formulas to calculate tariffs that fit their fantasy. Gosh, I wish they'd actually taste their own failure for once instead of constantly failing up.
Liars want lying software to run the grifts
Ah this must be because it did such a good job taxing penguins
Now I’m frightened to my core.
AI doesn’t scare me.
People making decisions off of AI scare me.
The government mandating people use AI to make decisions frightens me to my core.
AI should scare you. People will just dump everything on AI and then let it fuck over your life. What happens if AI flags you as a terrorist and your driver's license is suspended, or your health care is cancelled? What happens if AI says you're fraudulently collecting Social Security? There's nobody to blame, because they'll just blame the computer.
So far the Trump administration and the Federal government under him don’t need AI to justify stupid, globe-wrecking policy.
"AI told me to do [wrongful action]" is no more a valid excuse than "I was just following orders." At least not to an international tribunal or a (seriously peckish) public.
It might not be a valid excuse, but it gets that kind of play in the press (and, therefore, in public opinion/support), as discussed in this Citations Needed podcast episode: "A.I. Mysticism as Responsibility-Evasion PR Tactic"
AI keeps sending funding to the asteroid detection guy, the DNA vaccine people, the bee people and other climate change people too. It wants to send money to the education department but we fixed that! We’re so good at AI! Oh look, it keeps saying stuff about Louisiana under water! Crazy! Let’s fix that!
if america is so dominant in ai why did one chinese open source llm take billions off the market cap
LLMs aren’t the enemy, malicious politicians are
Why not both?
Because it’s a naive take. Technology can help us make the world a better place for all - standing in the way are greedy pigs and asshole, ignorant politicians.
“AI” isn’t the problem, our approach to it is.
Kinda sorta.
AI, or rather LLMs, can barf out a lot of passable text quickly. That can be a useful starting point, if a human mind is willing and able to review and repair it. It's like having an idiot intern whom you can never really trust.
But the number of people who use LLMs in a way that reflects an understanding of their limitations is vanishingly small. Most people just assume that something that looks valid doesn't need to be fully and critically reviewed. That's why we've had multiple cases of lawyers having ChatGPT write their legal briefs based on hallucinated legal precedent.
That’s not a problem of the technology though, that’s human idiocy.
On the one hand, absolutely, human idiocy.
On the other hand, as a society it behooves us to think about how to stop idiots from hurting themselves and others. With IT, and in the context of corpo marketing hype, I am deeply concerned about politicians using AI, or allowing AI to be used, to do things poorly and thus hurt people, simply because they have too much faith in the tool or its salesmen. Like, for example, rewriting the Social Security database.
IBM helped enable the Holocaust. AI companies will do the same now.
Not just AI companies, but pretty much most, if not all, big tech companies in the US. Wait, still not broad enough…