• 0 Posts
  • 77 Comments
Joined 2 months ago
Cake day: June 7th, 2025

  • So then you object to the premise that any LLM setup that isn’t local can ever be “secure,” and can’t seem to articulate that.

    What exactly is dishonest here? The language on their site is factually accurate; I’ve had to read it 7 times today because of you all. You just object to the premise of non-local LLMs and are, IMO, disingenuously framing that as a “brand issue” because…why? It sounds like a very emotional argument, since it’s not backed by any technical discussion beyond “local only secure, nothing else.”

    Beyond the fact that you yourself said:

    “They are not supposed to be able to, and well-designed e2ee services can’t be.”

    So then you already trust that their system is well-designed? What is this cognitive dissonance where they can secure the relatively insecure format of email, but can’t figure out TLS and flushing logs for an LLM on their own servers? It’s not even a complicated setup: TLS to the context window, don’t keep logs, flush the data. How do you think no-log VPNs work? This isn’t far off from that.
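    For what it’s worth, the no-log flow I’m describing is dead simple. Here’s a rough Python sketch (all names are hypothetical, not Proton’s actual code): the prompt arrives over TLS, inference happens in memory, and nothing gets written or retained.

```python
# Hypothetical sketch of a "no-log" inference handler: decrypt in transit,
# process in memory, respond, retain nothing. Illustrative only.

def handle_request(prompt: str, run_inference) -> str:
    """Process one prompt entirely in memory; no disk writes, no log lines."""
    response = run_inference(prompt)  # plaintext exists only inside this frame
    del prompt                        # drop the reference so nothing lingers
    return response

# Stand-in for the model: the server must see plaintext tokens to run inference.
fake_llm = lambda text: text.upper()

print(handle_request("hello", fake_llm))  # -> HELLO
```

    The point is that “no logs” is a policy choice about what the handler does after it responds, same as with a no-log VPN.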


  • My friend, I think the confusion stems from you thinking you have deep technical understanding on this, when everything you say demonstrates that you don’t.

    First off, you don’t even know the terminology. A local LLM is one YOU run on YOUR machine.

    Lumo apparently runs on Proton’s servers - where their email and docs all live as well. So I’m not sure what “Their AI is not local!” even means, other than that you don’t know what LLMs do or what they actually are. Do you expect a 32B LLM that would need about a 32GB video card to get downloaded and run in a browser? Buddy…just…no.
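    The back-of-envelope math, in case it helps (illustrative numbers, assuming 8-bit quantization):

```python
# Rough VRAM estimate for a 32B-parameter model (illustrative only).
params = 32e9            # 32 billion parameters
bytes_per_param = 1      # 8-bit quantization; fp16 would be 2 bytes each
vram_gb = params * bytes_per_param / 1e9
print(f"~{vram_gb:.0f} GB")  # roughly a 32GB card, before activations/KV cache
```

    And that’s before the KV cache and activations, which only push the requirement higher.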

    Look, Proton can MITM your email at any time, or MITM your VPN traffic if you use them as a VPN and they feel like it. Any VPN or secure email provider can do that. Mullvad can, Nord can, take your pick. That’s just a fact. Google’s business model is to MITM your entire life, so we already have the counterexample. So your threat model needs to include how much you trust the entity handling your data not to do that, whether intentionally or by letting others in through negligence.

    There is no such thing as an e2ee LLM. That’s not how any of this works. The best you can do without a local LLM - which, remember, means on YOUR machine - is e2ee for the chat transport: what you type reaches the LLM context window, the model processes plaintext tokens the only way it can, you get your response back, and the service keeps no logs or data. If that’s unacceptable for you, then don’t use it. But don’t brandish your ignorance like you’re some expert whose made-up “standards” everyone on earth needs to adhere to.

    Also, clearly you aren’t using Proton anyway, because if you need to search the text of your emails, you have to process that locally, and you have to click through 2 separate warnings that tell you in bold text: “This breaks the e2ee! Are you REALLY sure you want to do this?” So your complaint about warnings is just a flag that you don’t actually know and are guessing.



  • Both your take and the author’s suggest you don’t understand how LLMs work. At all.

    At some point, yes, an LLM model has to process clear text tokens. There’s no getting around that. Anyone who creates an LLM that can process 30 billion parameters while encrypted will become an overnight billionaire from military contracts alone. If you want absolute privacy, process locally. Lumo has limitations, but goes farther than duck.ai at respecting privacy. Your threat model and equipment mean YOU make a decision for YOUR needs. This is an option. This is not trying to be one size fits all. You don’t HAVE to use it. It’s not being forced down your throat like Gemini or CoPilot.

    And their LLMs - Mistral, OpenHands, and OLMo - are all open source. It’s in their documentation. So this article is straight-up lying about that. Like… did Google write this article? It’s simply propaganda.

    Also, Proton does have some circumstances where it lets you decrypt your own email locally - otherwise it’s basically impossible to search your email for text in the message body. That already exists as an option, and if users want AI assistants, that’s obviously their bridge. But it’s not a default setup; it’s an option you have to enable, and it’s not forced on anyone. Some users want it. Chill TF out.



  • OK… So, the initial question was “how could anyone support this?” right?

    I’m simply explaining how some people see the argument. I never said I see it like this.

    So I’m by no means defending any of this other than it being technically possible, and at that, this falls far short of anything resembling acceptable in my book.

    Parents who vote and would support this would do so based on limited technical knowledge and a total ideological investment in “preventing” any exposure. Which, we agree, is idiotic.

    Y’all really need to chill out with your pitchforks.


  • SMH

    Fine, changed the search term to “sex.” Fewer letters in fact. I was trying to just provide a subtle example, I didn’t expect people to need to be hit over the head with it.

    So you love the idea of young children seeing porn? Because studies and surveys routinely find that kids as young as 7 are seeing porn online, and many under age 12. Really? You think that’s perfectly fine for a 12, 10, or 7 year old with grandma’s iPad doing an image search and getting even accidental porn?

    And hey, I spent my teen years scouring the earth for Playboys and staying up until 3 am to catch boobs in R-rated movies. I get it. I’m not saying that any system or method will prevent anyone from seeing all adult content their whole life, short of being Amish. But as a tender 13 year old, did I need to see all the porn in the universe? Probably not. Adding friction (pun not intended) to general access, without violating privacy, is all I’m saying might be a good idea.


  • Saying “boobs” was my attempt to be subtle about it - any child of any age is, unless their parents filter their device, at all times 3 clicks and 3 letters (autocomplete could even oopsie it for them) away from seeing very explicit images. It’s absurd to call it “puritanical” to want anything at all in between 10-year-olds (or younger) and being able to so easily pull up porn. This isn’t about what you personally want or care about; every country in the world has this same issue. Taboos are cultural, but you don’t set the culture of Honduras, or Gabon, or France, or India. So each cultural context needs to be respected, not only your personal cultural context.

    It shouldn’t need to be a slippery slope, is the thing. In technical terms, this isn’t even a heavy lift. To my original point, it’s the in-theory part of this I support, because in a perfect world, giving everyone the tools to effectively accomplish this isn’t hard. But placing adult-content filters on a child’s devices today is either fairly technical work or fairly terrible from a privacy standpoint. Not every parent has the skills to do it, so when a blanket option is sold as a solution like this, of course they’ll go for it. But, as I said before, in our current shitty reality we get the worst of all worlds - a system that exists to exploit trying to limit a system that exists to exploit, all baked into a system that exists to exploit, and kids still able to see porn online easily.

    I’m very much a staunch privacy advocate, and I won’t fucking touch a digital ID system, because at this point it’s nothing but a surveillance-state lever to persecute specifically trans people and brown people - for now. I see the writing on the wall with this, and it’s terrifying. And no one is going to force this into the working-system category, so it’s just going to be the shitpile system designed to victimize, added onto the existing systems of exploitation.



  • hansolo@lemmy.today to Technology@lemmy.world · The Age-Checked Internet Has Arrived
    It’s more like who supports this in theory vs. who supports this how it’s written and implemented.

    Realistically, no one should love how easy it is for anyone of any age to go to any search engine and search for (Edit) “sex” and just get a million images of genitals and porn. I’m not a parent, but I know my parents when I was a teenager would have loved something like this. Kids are sneaky and smart, and this is a blanket thing parents think will once again put porn behind a barrier.

    In a perfect world, a system could easily exist that would 1) provide a super-secure, government-owned digital ID system that isn’t a surveillance nightmare, and 2) have that system use a hash to verify over-18 status anonymously in real time. That’s how it’s supposed to work with digital IDs - only the data needed to verify is shown to a vendor. Over 18 is a binary yes/no; a full DOB or name isn’t even needed.

    The government ID wallet or site would use a no-log system to generate a hash value when you ask for one. You request an age-verification hash from your ID app or site and get one that’s valid for about 2 minutes. Copy, paste as needed. The site uses the hash to learn only “is this person over 18 or not?” and nothing else, and the ID system shouldn’t keep logs of which site called back to confirm “is this hash valid?” This is exactly as secure as going to a liquor store with your passport or ID card with tape over the name, address, and document number. It’s even better, because neither your face nor your actual DOB is displayed.

    However, in our present shitty reality, the companies trying to win contracts for these systems can’t help but feed their existing, lucrative addiction to selling our data and storing it with poor security. So they want your Google/Apple/Samsung wallet connected to a government system that is actually run by a 3rd-party vendor with questionable security practices, and they want far more information than necessary, because no one has set an international standard for digital ID checks, or for IDs in general, that would make this anything less than a surveillance-state nightmare: holding up a government ID with all your info while you move your face around and hand over a 3D face scan that the platform doesn’t keep - but the verification company does.