

“It’s such a big issue that I’m going to do absolutely nothing as a parent to stop it from happening!”
- These people, probably
Could you elaborate on how it’s ableist?
As far as I’m aware, not only are they making a version that doesn’t even require JS, but the JS is only needed for the challenge itself, and the browser can then view the page(s) afterwards entirely without JS being necessary to parse the content in any way. Things like screen readers should still do perfectly fine at parsing content after the browser solves the challenge.
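For context, challenges like this are usually a small proof-of-work puzzle: the browser's JS burns a bit of CPU once, the server verifies the result cheaply, and everything after that is plain HTML. A rough sketch of the general idea in Python (the seed and difficulty values are hypothetical, not the project's actual implementation):

```python
import hashlib
import itertools

def solve_challenge(seed: str, difficulty: int = 4) -> int:
    """Find a nonce so that sha256(seed + nonce) starts with `difficulty`
    hex zeros. This is the work the browser's JS does exactly once."""
    target = "0" * difficulty
    for nonce in itertools.count():
        digest = hashlib.sha256(f"{seed}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce

def verify(seed: str, nonce: int, difficulty: int = 4) -> bool:
    """The server-side check is a single cheap hash; no JS is involved
    in serving the actual content afterwards."""
    digest = hashlib.sha256(f"{seed}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)
```

The asymmetry (thousands of hashes to solve, one hash to verify) is what makes this cheap for real visitors but expensive for scrapers hitting millions of pages.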
Because the easiest solution for them is a simple web scraper. If they don’t give a shit about ethics, something that just crawls every page it can find is loads easier to set up than custom implementations for each source: torrenting Wikipedia dumps, spinning up Lemmy/Mastodon/Pixelfed instances to pull from the fediverse, using RSS feeds and checking whether they carry full or only partial articles, implementing proper checks to avoid downloading the same content more than once, etc.
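To illustrate how low the bar is, a naive breadth-first crawler with dedup fits in a couple dozen lines of standard-library Python. This is a hedged sketch: fetching is stubbed out as a caller-supplied function, and any URLs are hypothetical placeholders.

```python
# Minimal sketch of the "just crawl every page you can find" approach.
# Standard library only; `fetch(url) -> html` is supplied by the caller.
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    """Collect absolute URLs from every <a href=...> on a page."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(urljoin(self.base_url, value))

def crawl(start_url, fetch, limit=100):
    """Breadth-first crawl. The `seen` set is the only dedup logic a
    naive scraper needs -- compare that to per-source integrations."""
    seen, queue, pages = set(), [start_url], {}
    while queue and len(pages) < limit:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        html = fetch(url)
        pages[url] = html
        parser = LinkExtractor(url)
        parser.feed(html)
        queue.extend(parser.links)
    return pages
```

A real scraper would swap in an HTTP client for `fetch` and add politeness delays, but the core logic really is this small.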
It seems like the point is that Microsoft would be developing some sort of alternative with similar functionality for antivirus providers that doesn’t need kernel-level access. Anticheat uses a lot of the same low-level techniques as kernel-level antivirus uses to detect malware, so it would probably have to adapt to this new system as well.
I think the article is more commenting on how Microsoft is directly partnering with antivirus companies for this new system right now, while they’re not directly partnering with anticheat companies, even though they’d probably have to migrate to this new system regardless.
To be fair, it certainly still makes cheating harder. If it didn’t exist, you’d just see even more people cheating, but it’s a pretty overkill form of system monitoring for such a relatively small benefit.
Massive privacy risk for only slightly better detection than non-kernel monitoring.
I was thinking this too! Gait recognition can completely bypass facial coverings as a means of identification, but I also don’t think it’ll be much help here.
Gait recognition can be defeated by something as simple as putting a rock in your shoe so you walk differently. Considering how much extra heavy gear, different footwear, and different overall movement ICE agents will possibly be dealing with, it might not hold up well at tracking them down. On top of that, to recognize someone by gait you’d need footage in which you can already identify them, to then train the model on.
In the case of fucklapd.com, this was easy because they could just get public record data for headshot photos, but there isn’t a comparable database with names directly tied to it for gait. I will say though, a lot of these undercover agents might be easier to track by gait since they’ll still generally be wearing more normal attire, and it might be more possible to associate them with who they are outside of work since it’s easier to slip up when you’re just wearing normal clothes.
This wouldn’t be an issue if Reddit always attached relevant posts, including negative ones even if those were the minority, to actually help people make a more informed judgement about an ad based on community sentiment, but I think we all know that won’t be the way this goes.
Posts will inevitably only be linked if they are positive, or at the very least neutral, about the product being advertised, because that’s what would allow Reddit to sell advertisers on a higher ROI. The bandwagon effect is a real psychological phenomenon, and Reddit knows it.
Fair enough. SEO was definitely one of the many large steps Google has taken toward slowly crippling the open web, but I never truly expected it to get this bad. At least with SEO there was still some incentive left to create quality sites, and it didn’t necessarily kill monetizability for them.
This feels like an exponentially larger threat, and I truly hope I’m proven wrong about its potential effects, because if it does come true, we’ll be in a much worse situation than we already are now.
Not to mention that the remaining sites that can still hold on, but have to cut costs, will just start using language models like Google’s to generate their content, which will worsen the quality of Google’s own answers over time, which will then generate even worse articles, and so on.
It doesn’t just create a monetization death spiral, it also makes it harder and harder for answers to be sourced reliably, making Google’s own service worse while all the sites hanging on rely on their worse service to exist.
This is fundamentally worse than a lot of what we’ve seen already though, is it not?
AI overviews are parasitic to traffic itself. If AI overviews are where people begin to go for information, websites get zero ad revenue, subscription revenue, or even traffic that can change their ranking in search.
Previous changes just did things like pulling better context previews from sites, which only somewhat decreased traffic, or adding more ads, which just made browsing worse. This eliminates the entire business model of every website if Google continues down this path.
It centralizes all actual traffic solely into Google, yet Google would still be relying on the very sites whose traffic it’s eliminating for its information. Those sites cut costs by replacing human writers with more and more AI models, search quality gets progressively worse as it sources from articles that were themselves sourced from nothing, and then most websites, no longer receiving enough traffic to be profitable, collapse.
Even if you want AI answers, you can use DuckDuckGo. They have an AI assistant too, and it hallucinates less than Google’s.
My VPN’s perfectly fine. To be fair, it’s not a heavily throttled free plan, but I can even play multiplayer FPS games with only a few milliseconds of added delay, and my maximum upload and download speeds are almost exactly the same as with the VPN off.
This seems like it could be a viable replacement for many plastics, but it isn’t the silver bullet the article makes it out to be.
From the linked article in the post:
> the new material is as strong as petroleum-based plastics but breaks down into its original components when exposed to salt.
> Those components can then be further processed by naturally occurring bacteria, thereby avoiding generating microplastics
> The plastic is non-toxic, non-flammable, and does not emit carbon dioxide, he added.
This is great. Good stuff. Wonderful.
From another article (which also shows this isn’t recent news; it’s from many months ago):
> the team was able to generate plastics that had varying hardnesses and tensile strengths, all comparable or better than conventional plastics.
> Plastics like these can be used in 3D printing as well as medical or health-related applications.
Wide applications and uses, much better than a lot of other proposed solutions. Still good so far.
> After dissolving the initial new plastic in salt water, they were able to recover 91% of the hexametaphosphate and 82% of the guanidinium as powders, indicating that recycling is easy and efficient.
Easy to recycle and reclaim material from. Great! Not perfect, but still pretty damn good.
> In soil, sheets of the new plastic degraded completely over the course of 10 days, supplying the soil with phosphorous and nitrogen similar to a fertilizer.
You could compost these in your backyard. Who needs the local recycling pickup for plastics when you can just chuck it in a bin in the back? Still looking good.
> using polysaccharides that form cross-linked salt bridges with guanidinium monomers.
Polysaccharides are literally carbohydrates found in food.
This is really good: a commonly found compound, easy to re-integrate back into the environment. But now the problems start. The research doesn’t specify which guanidinium monomers are used, so it’s hard to say the exact implications, but…
…they appear to often be toxic, sometimes especially to marine life, soil quality, and plant growth, and have been used in medicine with mixed results as to their effectiveness and safety.
I’m a bit disappointed they didn’t talk about this more in the articles, to be honest. It seems this would definitely be better than traditional plastic in terms of its ecological effects, but still much worse than not dumping it in the ocean at all. In practice, it looks like this would mainly make the recycling process much more efficient (as mentioned before, 91% and 82% recovery rates are far better than the current average of less than 10%) while reducing, though not eliminating, the overall harm from plastic dumped in the ocean.
I think the key reason this was seen as not being terribly offensive was the fact that women are disproportionately more likely than men to be on the receiving end of tons of different negative consequences when dating, thus to a degree justifying them having more of a safe space where their comfort and safety is prioritized.
However, I think a lot of people are also recognizing that such an app has plenty of downsides that come with that kind of structure: false allegations being given too much legitimacy, large amounts of sensitive data storage, negative interactions being blown out of proportion, etc. I also think this is yet another signature case of a “private market solution to a systemic problem” that only somewhat addresses the symptoms, not the actual causes rooted in our societal standards and expectations of the genders, upbringing, depictions in media, etc.