• 0 Posts
  • 17 Comments
Joined 11 months ago
Cake day: January 8th, 2025


  • Wee, another person who doesn’t understand how analogies work. You see, class, saying A is analogous to B is not a statement of equivalence but a statement of structural similarity. The response ‘welcome to the internet’ is sarcasm meant to imply that the previous responder is foolish for having standards of any kind and should simply accept and expect that people cannot be held to any standard. The further response, asking via analogy whether more people doing something is all it takes to make it acceptable, invites them to consider the implications of the belief that repetition makes right.

    Now that you have a fuller understanding of things, will you demonstrate the maturity and intelligence to integrate that knowledge into action, or will you demonstrate an inability to comprehend and merely raise a pretense of reason?





  • Depending on where you live, this might be because of pricing regulations that require payments to be equal to the most expensive source used in a given period plus a preset margin (roughly the sketch below). Some of the regulatory systems don’t know how to cope with the differences in generation that come from renewables. …not that they’re great at managing the non-renewables these days either.
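    As a rough, hedged sketch of that mechanism (pay-as-clear-style pricing; the exact rules vary by jurisdiction and every number here is made up):

        # Toy model of marginal pricing: every generator used in the period is
        # paid the price of the most expensive source dispatched, plus a
        # regulated margin. Not any specific market's actual rules.
        def payment_per_mwh(dispatched_offer_prices, regulated_margin):
            if not dispatched_offer_prices:
                return 0.0
            return max(dispatched_offer_prices) + regulated_margin

        # Cheap wind and solar are in the mix, but a gas peaker at 180/MWh sets
        # the price everyone is paid; a margin of 5 goes on top.
        print(payment_per_mwh([12.0, 15.0, 180.0], 5.0))  # -> 185.0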


  • Why…?

    Because why bother saying anything if you aren’t going to say anything of substance? Offering correct information gives the other person a chance to correct and improve. Just saying ‘WRONG!’ is a slap in the face that only serves to let you feel superior: masturbatory pretense.

    As for the rest, those are all clearly issues, but none of them are of a sort where handling the one I raised and handling them are mutually exclusive. And at least the second item actually follows from the one I mentioned. People being tricked into thinking LLMs are capable of thought feeds decision-makers’ belief that people can simply be replaced. Viewing the systems as intelligent is a big part of what makes people trust them enough to blindly accept biases in the results. Ideally, I’d say AI should be kept purely in the realm of research until it’s developed enough for isolated use as a tool, but good luck getting that to happen. Post hoc adjustments are probably the best we can hope for, and my little suggestion is a fun way to at least try to mitigate some of the effects. It’s certainly more likely to address some element of the issues than just saying ‘WRONG!’

    The fun part is, while the issues you mentioned all have the potential to create broad, hard-to-define harm if left unchecked, there are already examples of direct harm coming from people treating LLM outputs as meaningful.



  • No, it’s pretty much the opposite. As it stands, one of the biggest problems with ‘AI’ is when people perceive it as an entity saying something that has meaning. Phrasing LLM output as ‘I think…’ or ‘I am…’ makes it easier for people to assign meaning to the semi-random outputs because it suggests there is an individual whose thoughts are being verbalized. That framing is part of the trick the AI bros are pulling. Making it harder for the outputs to maintain the pretense of sentience would, I suspect, make them less harmful to people who engage with them in a naive manner.


  • There’s a lot of ink spilled on ‘AI safety’, but I think the most basic regulation that could be implemented is that no model is allowed to output the word “I”, and if it does, the model designer owes their local government the equivalent of the median annual income for each violation (something like the naive filter sketched below). There is no ‘I’ for an LLM.
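    Purely as a hypothetical illustration of what enforcing such a rule might look like (this is not an existing tool or API, just a toy check):

        import re

        # Flag any output containing the standalone word "I"; word boundaries
        # keep words like "API" or "Internet" from triggering it.
        def violates_no_i_rule(model_output: str) -> bool:
            return re.search(r"\bI\b", model_output) is not None

        print(violates_no_i_rule("I think, therefore I am."))       # True
        print(violates_no_i_rule("The model produced an answer."))  # False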