We Avoid Temptation But It Keeps Finding Us

  • 0 Posts
  • 10 Comments
Joined 1 year ago
Cake day: July 4th, 2023





  • we_avoid_temptation@lemmy.zip to Science Memes@mander.xyz · Burning Up · 10 days ago

    That’s not either scale being inherently intuitive or unintuitive; that’s just your familiarity with one over the other.

    I got curious so I did some research on the definitions and why everything is this way. It looks like they originally picked the coldest thing they had (brine, possibly inspired by the coldest weather), the freezing point of water, human body temperature, and the boiling point of water. It was supposed to be brine at 0, water freezing at 30, the human body at 90, and water boiling at 240. Fahrenheit then recalibrated his scale slightly to make his math (and thermometer design and production) easier, and also because he noticed water actually boiled at 212 by his newly modified scale.

    Looking at it like that, with the context of what they had at the time and what they were trying to do, it makes a lot of sense.

    https://en.m.wikipedia.org/wiki/Fahrenheit#History
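    A rough sketch of the “easier math” part, using the numbers from that history section (treat this as illustrative arithmetic, not a claim about Fahrenheit’s exact reasoning):

    ```python
    # Recalibrated reference points: freezing nudged from 30 up to 32,
    # body temperature from 90 up to 96.
    freezing, body = 32, 96
    interval = body - freezing        # 64 == 2**6
    # 64 degrees between two easy-to-reproduce reference points lets a
    # thermometer maker lay out the scale by halving the interval six times.
    print(interval, interval == 2 ** 6)

    boiling = 212                     # what Fahrenheit observed on the new scale
    print(boiling - freezing)         # a tidy 180 degrees from freezing to boiling
    ```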





  • Maybe it doesn’t need to be.

    I’m not going to say AI is completely useless, because it’s objectively not. I’ve heard it’s incredibly helpful in pharmaceutical development and science and shit like that.

    LLMs are fancy autocomplete. They guess the most likely next word, based on some fancy-ass math I won’t claim to understand. They don’t understand the code, or anything really. It’s all math and weighted probabilities internally. How can something that doesn’t understand what it’s actually saying write good, usable, actually correct code all of the time? Sometimes it gets lucky and you get a working snippet, or it rips off someone else’s code (possibly verbatim), and sometimes it just generates nonsense.
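    To make the “weighted probabilities” bit concrete, here’s a toy sketch of next-word sampling; the candidate words and scores are made up, and a real model does this over a huge vocabulary with billions of learned weights:

    ```python
    import math
    import random

    # Made-up scores a model might assign to candidate next words.
    scores = {"lights": 2.1, "sensor": 0.7, "banana": -1.5}

    def softmax(logits):
        """Turn raw scores into probabilities that sum to 1."""
        m = max(logits.values())
        exps = {tok: math.exp(v - m) for tok, v in logits.items()}
        total = sum(exps.values())
        return {tok: e / total for tok, e in exps.items()}

    probs = softmax(scores)
    # Pick the next word at random, weighted by probability; nothing in here
    # knows what any of the words mean.
    next_word = random.choices(list(probs), weights=list(probs.values()))[0]
    print(probs, "->", next_word)
    ```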

    I’ve had the last option happen to me personally. I asked it to generate a script in Home Assistant (fantastic software by the way) to dim my lights automatically over half an hour. It worked, sometimes, but the math that actually stepped the light down in brightness wasn’t correct and the script failed intermittently.
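    For what it’s worth, the step-down math it kept getting wrong is simple; here’s a sketch of what the script was supposed to compute, assuming a 0–255 brightness range and a linear fade (the once-a-minute step interval is just an example, and this is plain Python, not Home Assistant’s YAML):

    ```python
    # Hypothetical example values, not Home Assistant settings.
    START_BRIGHTNESS = 255        # full brightness on a 0-255 scale
    DURATION_S = 30 * 60          # fade out over half an hour
    STEP_INTERVAL_S = 60          # adjust the light once a minute
    steps = DURATION_S // STEP_INTERVAL_S

    for step in range(1, steps + 1):
        # Linear interpolation from START_BRIGHTNESS down to exactly 0.
        brightness = round(START_BRIGHTNESS * (1 - step / steps))
        print(f"t={step * STEP_INTERVAL_S:4d}s -> brightness {brightness}")
    ```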

    Maybe it can help write boilerplate, but realistically it will slip in additional errors that an actual living, breathing human has to find and fix.

    This is all beside the point, though: I think you’re mixing up the FSF and the GNU Project. They’re related but not actually the same. The FSF is fundraisers and lawyers; GNU writes code. That might be why the FSF never responded, though to be perfectly honest, given what you’ve said, GNU will likely want nothing to do with it if it’s tainted with AI-generated code. That’s a lawsuit waiting to happen (literally) for no benefit from their perspective.