• 0 Posts
  • 55 Comments
Joined 1 year ago
Cake day: June 11th, 2023



  • Looked like the least worst alternative to reddit (which was the least worst replacement for what the internet used to be before reddit, and facebook, and the like, killed it).

    Turns out it’s mostly Reddit reposts (often by bots, which is ironic, since the originals were also Reddit reposts posted by bots) and US politics garbage, and it’s even more susceptible than Reddit to power-hungry mods and echo chambers.

    I guess I’m just addicted to doomscrolling. Which is almost as depressing as the fact that this inane crumb of utterly useless and purposeless garbage is by far the least worst furuncle in the rotting, bot-infested corpse of the internet.










  • leftzero@lemmynsfw.com to Science Memes@mander.xyz · Lord of the SCIENCE
    22 up · edited · 10 days ago

    The Trees fell during the First Age; the Downfall of Númenor and the Changing of the World happened towards the end of the Second Age, almost three thousand years later, and involved Sauron, not Morgoth, who’d been defeated and exiled from the world at the end of the First Age.

    As for Legolas, he was a Sindarin elf born in the Third Age. He never saw the Trees, and had never even been to Aman by the time of The Lord of the Rings.

    The only named characters in The Lord of the Rings (other than ones mentioned in songs and legends) who had ever seen the Trees were Galadriel, possibly Celeborn and Glorfindel, technically Gandalf and Saruman (provided the Istari count as the same people as the Maiar they were back in Aman), and, maybe, Tom Bombadil and Goldberry, but who knows, really, with those two. (I don’t think Sauron was ever in Aman, at least during or after the time of the Trees.)









  • leftzero@lemmynsfw.com to Science Memes@mander.xyz · AI Artefacting
    3 up · 1 down · 14 days ago

    > LLMs have legitimate uses today

    No, they don’t. The only thing they can be somewhat reliable for is autocomplete, and the slight improvement in quality doesn’t compensate for the massive increase in costs.

    > In the future they will have more legitimate and illegitimate uses

    No. Thanks to LLM peddlers being excessively greedy and saturating the internet with LLM-generated garbage, newly trained models will be poisoned and only get worse with every iteration.

    > The capabilities of current LLMs are often oversold

    LLMs have only one capability: to produce the most statistically likely token after a given chain of tokens, according to their model.

    Future LLMs will still only have this capability, but since their models will have been trained on LLM-generated garbage, their results will quickly diverge from anything even remotely intelligible.
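
    (For what it’s worth, here’s a minimal sketch of that single capability, using the Hugging Face transformers library and GPT-2 purely as an illustration; the prompt, model choice and greedy decoding loop are mine, not anything specific to the argument above. Given a chain of tokens, the model scores every possible next token, and the most likely one is appended to the chain.)

    ```python
    # Hypothetical illustration: greedy next-token prediction with GPT-2.
    # Any causal language model works the same way at its core: it only ever
    # scores which token is most likely to come next, given the chain so far.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    prompt = "The internet used to be"
    ids = tokenizer(prompt, return_tensors="pt").input_ids

    with torch.no_grad():
        for _ in range(10):                    # extend the chain by ten tokens
            logits = model(ids).logits[0, -1]  # scores for the next token only
            next_id = torch.argmax(logits)     # the most statistically likely token
            ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

    print(tokenizer.decode(ids[0]))
    ```

    Sampling tricks, fine-tuning and chat wrappers change which token ends up being picked, but not what the model fundamentally does.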