• 0 Posts
  • 13 Comments
Joined 3 months ago
Cake day: March 31st, 2025

  • Those are some good nuances that forced me to refine my thinking, thank you! I’m actually not claiming that the brain is the sole boundary of the real me; rather, it is the majority of me, with my body as a contributor. The real me does change as my body changes, just in less meaningful ways. Likewise, some changes in the brain change the real me more than others. However, regardless of what does or doesn’t constitute the real me (and believe me, the philosophical rabbit hole there is one I love to explore), in this case I’m really just talking about the straightforward, immediate implications of a brain implant for my privacy. An arm implant would also be quite bad in this regard, but a brain implant is clearly worse.

    There have already been systems that can display very rough, garbled images of what people are thinking of. I’m less worried about an implant that tells me what to do or controls me directly, and more worried about an implant that has a pretty accurate picture of my thoughts and reports it to the authorities. It’s surely possible to build a system that can approximate positive or negative mood states, and in combination this is very dangerous. If the government can tell that I’m happy when I think about Luigi Mangione, then they can respond to that information however they want. Eventually, in the same way that the panopticon has conditioned me to stop at a stop sign even in the middle of a desolate desert, where I can see for miles around that there are no cars, no police, no cameras, nothing that could possibly make a difference to my running the stop sign, the system will similarly condition automatic compliance in thoughts themselves. That is, compliance is brought about not by any actual exertion of power or force, but merely by the omnipresent possibility of its exertion.

    (For this we only need moderately complex brain implants, not sophisticated ones that actually control us physiologically.)


  • I am not depressed, but I will never get a brain implant for any reason. The brain is the final frontier of privacy; it is the one place I am free. If that is taken away, I am no longer truly autonomous, and I am no longer truly myself.

    I understand this is how older generations feel about lots of things, like smartphones, which I am writing this from, and I understand how stupid it sounds to say “but this is different!”, but like… really. This is different. Whatever scale smartphones, driver’s licenses, personalized ads, the internet, smart home speakers… whatever scale all these things lie on in terms of “panopticon-ness”, a brain implant is so exponentially further along that scale as to make all the others vanish to nothingness. You can’t top a brain implant. A brain implant is a fundamentally unspeakable horror that would inevitably be used to subjugate entire peoples in a way so systematically flawless as to be almost irreversible.

    This is how it starts. First it will be used for undeniable goods: curing depression, psychological ailments, anxiety, and so on. Next thing you know, it’ll be an optional way to pay your check at restaurants, file your taxes, read a recipe - convenience. Then it will be the main way to do those things, and then suddenly it will be the only way to do those things. And once you have no choice but to use a brain implant to function in society, you’ll have no choice but to accept “thought analytics” being reported to your government and to corporations. No benefit is worth a brain implant, so don’t even think about it (but luckily, I can’t tell if you do).





  • Sorry, I can see why my original post was confusing, but I think you’ve misunderstood me. I’m not claiming that I know the way humans reason. In fact, you and I are in total agreement that it is unscientific to assume hypotheses without evidence. That is exactly what I am saying is the mistake in the statement “AI doesn’t actually reason, it just follows patterns”. That statement is unscientific if we don’t know whether “actually reasoning” consists of following patterns or something else. As far as I know, the jury is out on the fundamental nature of how human reasoning works. It’s my personal, subjective feeling that human reasoning works by following patterns. But I’m not saying “AI does actually reason like humans because it follows patterns like we do”. Again, I see how what I said could have come off that way. What I mean more precisely is:

    It’s not clear whether AI’s pattern-following techniques are the same as human reasoning, because we aren’t clear on how human reasoning works. My intuition tells me that humans doing pattern following seems just as valid an initial guess as humans not doing pattern following, so shouldn’t we have studies to back up whichever direction we lean?

    I think you and I are in agreement; we’re upholding the same principle, just in different directions.


  • But for something like solving a Towers of Hanoi puzzle, which is what this study is about, we’re not looking for emotional judgements - we’re trying to evaluate logical reasoning capabilities. A sociopath would be just as capable of solving logic puzzles as a non-sociopath. In fact, simple computer programs do a great job of solving these puzzles (see the sketch at the end of this comment), and they certainly have nothing like emotions. So I’m not sure that emotions have much relevance to the topic of AI or human reasoning and problem solving, at least not to this particular aspect of it.

    As for analogizing LLMs to sociopaths, I think that’s a bit odd too. The reason why we (stereotypically) find sociopathy concerning is that a person has their own desires which, in combination with a disinterest in others’ feelings, incentivize them to be deceitful or harmful in some scenarios. But LLMs are largely designed specifically to be servile, having no will or desires of their own. If people find it concerning that LLMs imitate emotions, then I think we’re giving them far too much credit as sentient autonomous beings - and this is coming from someone who thinks they think in the same way we do! They think like we do, IMO, but they lack a lot of the other subsystems that are necessary for an entity to function in a way that can be considered autonomous, having free will, having desires of its own choosing, etc.
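    To make that point about simple programs concrete, here is roughly what such a program looks like: the classic recursive Towers of Hanoi solution, as a minimal Python sketch (the peg labels and the 3-disk example are just illustrative choices, not anything from the study):

    ```python
    def hanoi(n, source, target, spare):
        # Solve an n-disk Towers of Hanoi puzzle by printing each move.
        if n == 0:
            return
        hanoi(n - 1, source, spare, target)  # park the n-1 smaller disks on the spare peg
        print(f"move disk {n} from {source} to {target}")
        hanoi(n - 1, spare, target, source)  # bring the smaller disks onto the target peg

    hanoi(3, "A", "C", "B")  # 3 disks, solved in the minimum 7 moves
    ```

    No emotions, no understanding of anything beyond the move rules - just a handful of lines that follow the puzzle’s recursive structure.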




  • So then, can anything that produces dopamine be addictive? Can I get addicted to hugging my girlfriend, or addicted to reading books, or to jogging? Or is there some threshold? Does the intensity per unit of time matter, or just the intensity, or just the duration? What about the frequency of exposure? Does any amount of dopamine release make me slightly more addicted to whatever it is, or is there some threshold that needs to be exceeded? Do dopamine-based addictions produce physical withdrawal symptoms always, never, or only sometimes? Depending on what? And are physical withdrawal symptoms necessary to constitute addiction, or are there different tiers of addiction?

    You see what I’m getting at. There’s sooo many questions that need to be answered before just saying “this produces lots of dopamine therefore it’s addictive and bad and should be limited”. While I appreciate and empathize with your sentiment about people cherry-picking the studies they like (sounding like an LLM here lol), it’s not as if science doesn’t know how to deal with that problem, and it certainly isn’t a reason to stop caring about or citing studies at all, or say “well you’ve got your studies and I’ve got mine”. Just because both sides have studies that give evidence in their favor doesn’t mean both sides are equally valid or that it’s impossible to reach an informed conclusion one way or the other.

    My next biggest question (and what I’m trying to drive at with the semi-rhetorical slew of questions I opened with) would be: what makes something an addiction or not? Am I addicted to staying alive, because I’ll do anything to stay alive as long as possible? That seems silly to call an addiction, since it doesn’t do any harm. And how do we delineate between, say, someone who is addicted to playing with Rubik’s Cubes vs. someone who just really likes Rubik’s Cubes and has poor self-control? Or what about someone with some other mental quirk, like someone who plays with Rubik’s Cubes a lot due to OCD, or maybe an autistic person who plays with Rubik’s Cubes a lot out of a special interest? Does the existence of such people mean that “Rubik’s Cube addiction” is a real concern that can happen to anyone who plays with Rubik’s Cubes too much? Or perhaps Rubik’s Cubes are not addictive at all, and it is separate traits driving people to engage with them in a way that appears addictive to others.

    I know I’ve written a long post and asked lots of questions. It’s not my intention to “gish gallop” you, just to convey my variety of questions. The Rubik’s example is the one thing I’m most curious to hear your thoughts on. (There I go sounding like an LLM again)