

Symbol is better, as superscript isn’t standard Markdown and isn’t necessarily supported by software other than Lemmy. Mbin, for example, doesn’t support it.
Not a reason not to use it, of course, but it makes the symbol the preferable choice.
I’m a #SoftwareDeveloper from #Switzerland. My languages are #Java, #CSharp, #Javascript, German, English, and #SwissGerman. I’m in the process of #LearningJapanese.
I like to make custom #UserScripts and #UserStyles to personalize my experience on the web. In terms of #Gaming, currently I’m mainly interested in #VintageStory and #HonkaiStarRail. I’m a big fan of #Modding.
I also watch #Anime and read #Manga.
#fedi22 (for fediverse.info)
This is incredibly shortsighted.
If you get Collective Shout to stop, another group might pick up where they left off.
The problem needs to be fixed; what you’re suggesting would only make the people currently abusing it stop. That’s not a long-term solution.
But Americans don’t love Jesus, they shit all over his teachings and would deport him if he came to their doorstep.
Tbf, the feature is blocked on anything that isn’t AI Windows, regular Windows included.
People just need to not fall for the scam edition and they don’t have to deal with this shit.
Here’s a question regarding the informed consent part.
The article gives the example of asking whether the recipient wants the AI’s answer shared.
“I had a helpful chat with ChatGPT about this topic some time ago and can share a log with you if you want.”
Do you (I mean generally people reading this thread, not OP specifically) think Lemmy’s spoiler formatting would count as informed consent if properly labeled as containing AI text? I mean, the user has to put in the effort to open the spoiler manually.
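For reference, this is what Lemmy’s spoiler formatting looks like (the label text is up to the author; the example label here is just an illustration):

```
::: spoiler AI-generated text (ChatGPT)
The AI’s answer goes here, hidden until the reader clicks the label.
:::
```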
I just asked ChatGPT too (your exact prompt there) and it did give me the correct solution.
- Take the child over
- Go back alone
- Take the candy over
- Bring the child back
- Take the priest over
- Go back alone
- Take the child over again
It didn’t comment on moral concerns, though it did applaud itself for keeping the priest and the child separated without elaborating on why.
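For what it’s worth, the solution above can be checked mechanically. Here’s a minimal sketch in Python, assuming the puzzle’s constraints are the usual ones this variant riffs on: the child can’t be left unsupervised with the candy, and the priest can’t be left unsupervised with the child.

```python
# Verify the 7-step river-crossing solution above.
# Assumed constraints: child+candy and priest+child may not be
# left together on a bank without the ferryman present.

FORBIDDEN = [{"child", "candy"}, {"priest", "child"}]

def safe(bank):
    """A bank without the ferryman must not contain a forbidden pair."""
    return not any(pair <= bank for pair in FORBIDDEN)

def run(moves):
    start, far = {"child", "candy", "priest"}, set()
    ferry_on_start = True
    for cargo in moves:  # cargo is an item name, or None (crossing alone)
        here, there = (start, far) if ferry_on_start else (far, start)
        if cargo is not None:
            here.remove(cargo)
            there.add(cargo)
        ferry_on_start = not ferry_on_start
        # the bank just left behind is now unsupervised
        if not safe(here):
            return False
    return not start  # solved once the start bank is empty

solution = ["child", None, "candy", "child", "priest", None, "child"]
print(run(solution))  # prints True
```

Running it confirms the sequence never leaves a forbidden pair alone, which matches ChatGPT’s (uncommented) choice to keep the priest and the child separated.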
Then the actual chess isn’t LLM.
And neither did the Atari 2600 win against ChatGPT. Whatever game they ran on it did.
That’s my point here. The fact that neither Atari 2600 nor ChatGPT are capable of playing chess on their own. They can only do so if you provide them with the necessary tools. Which applies to both of them. Yet only one of them was given those tools here.
Isn’t the Atari just a game console, not a chess engine?
Like, Wikipedia doesn’t mention anything about the Atari 2600 having a built-in chess engine.
If they were willing to run a chess game on the Atari 2600, why did they not apply the same to ChatGPT? There are custom GPTs which claim to use a Stockfish API or play at a similar level.
Like this, it’s just unfair. Neither platform is designed to handle the task by itself, but one of them is given the necessary tooling and the other one isn’t. No matter what you think of ChatGPT, that’s not a fair comparison.
Edit: Given the existing replies and downvotes, I think this comment is being misunderstood. I would like to try clarifying again what I meant here.
First of all, I’d like to ask if this article is satire. That’s the only way I can understand the replies I’ve gotten that criticized me on grounds of the marketing aspect of LLMs (a topic the article never brings up itself, nor did I). If this article is just some tongue-in-cheek piece about holding LLMs to the standards they’re advertised at, I can understand both the article and the replies I’ve gotten. But the article never suggests so itself. So my assumption when writing my comment was that it is serious.
The Atari is hardware. It can’t play chess on its own. To be able to, you need a game for it which is inserted. Then the Atari can interface with the cartridge and play the game.
ChatGPT is an LLM. Guess what, it also can’t play chess on its own. It also needs to interface with a third party tool that enables it to play chess.
Neither the Atari nor ChatGPT can directly, on their own, play chess. This was my core point.
I merely pointed out that it’s unfair that one party in this comparison is given the tool it needs (the cartridge), but the other party isn’t. Unless this is satire, I don’t see how marketing plays a role here at all.
There are custom GPTs which claim to play at a Stockfish level or to literally be Stockfish under the hood (I assume the former is just the latter without saying so explicitly). I haven’t tested them, but if they work, I’d say yes. An LLM itself will never be able to play chess or do anything similar unless it outsources that task to another tool that can. And there seem to be GPTs that do exactly that.
As for why we need ChatGPT then when the result comes from Stockfish anyway, it’s for the natural language prompts and responses.
I don’t pay for ChatGPT and just used the Wolfram GPT. They made the custom GPTs non-paid at some point.
> why don’t they program them to look up math programs and outsource chess to other programs when they’re asked for that stuff?
They will, when it makes sense for what the AI is designed to do. For example, ChatGPT can outsource image generation to an AI dedicated to that. It also used to calculate math using python for me, but that doesn’t seem to happen anymore, probably due to security issues with letting the AI run arbitrary python code.
ChatGPT however was not designed to play chess, so I don’t see why OpenAI should invest resources into connecting it to a chess API.
I think especially since adding custom GPTs, adding this kind of stuff has become kind of unnecessary for base ChatGPT. If you want a chess engine, get a GPT which implements a Stockfish API (there seem to be several GPTs that do). For math, get the Wolfram GPT which uses Wolfram Alpha’s API, or a different powerful math GPT.
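The delegation pattern described above can be illustrated with a toy router. Everything here is invented for illustration (`query_engine`, `query_llm`, the FEN check); a real GPT would call an actual engine or Wolfram Alpha API rather than a stub, but the shape is the same: the LLM handles natural language, and specialized tasks are routed to a dedicated tool.

```python
# Hypothetical sketch of tool delegation: route chess positions
# (given as FEN strings) to an engine, everything else to the LLM.
import re

# Rough FEN shape: 8 slash-separated ranks, then the side to move.
FEN_RE = re.compile(r"^([rnbqkpRNBQKP1-8]+/){7}[rnbqkpRNBQKP1-8]+ [wb] ")

def query_engine(fen):
    # Stand-in for a real engine call (e.g. a Stockfish API).
    return "best move from engine"

def query_llm(prompt):
    # Stand-in for the language model itself.
    return "free-form answer"

def answer(prompt):
    """The LLM never computes the move itself; positions it
    recognizes are handed off to the chess engine."""
    if FEN_RE.match(prompt):
        return query_engine(prompt)
    return query_llm(prompt)
```

With this split, the natural-language interface and the chess strength come from different components, which is exactly why comparing bare ChatGPT against a dedicated chess program measures the wrong thing.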
No, they’re downvoting because they don’t like what the post is saying.
People misusing voting buttons as like/dislike buttons is a well-known issue, at least on Reddit. But considering the system works exactly the same here, it’s no surprise that the same problem persists here as well.
I assume it’s due to people not trusting YouTube with false positives. They don’t have the best reputation with the rest of their AI moderation.
Except they’re not fighting the fire here, they’re taking away the arsonist’s flamethrower so he can’t keep feeding the fire. Without that flamethrower, the arsonist can’t do shit.
Fighting the fire would be petitioning Steam, but the target is the payment processors that pressured Steam at the request of Collective Shout.