That’s where I assumed it was going.
Unlikely. You would probably ingest the poison and die, and depending on whether the poison also acts as a venom, they may or may not.
It’s probably more accurate to say “Venoms are injected. Poisons are ingested.”
I don’t view free-use models as open-source. Open-source means I can rebuild it from scratch, and I can’t, because I don’t know what the training data is or have access to it.
The difference is the scale, and who is harmed.
Billion dollar company losing $100. Who cares?!
Billion dollar company stealing from all artists in the world. We care.
Points 2 and 3, basically: make restrictions on normal user accounts that are fine for humans but that will make bots swear and curse.
Unless you mean “what should the registration process be” I think API keys via a user account would do.
…but if they don’t know, I expect them to say so. An LLM isn’t trustworthy until it says “I don’t know”.
Exactly the reason I suggest it.
An LLM’s “intent” is always to give you a plausible response, even if it doesn’t have the “knowledge”. The same behaviour in a human would be classed as lying, IMHO.
Make bot accounts a separate type of account so legitimate bots don’t appear as users. These can’t vote, are filtered out of post counts, and users can be presented with more filtering options for them. Bot accounts are clearly marked.
Heavily rate limit any API that enables posting to a normal user account.
Make having a bot on a human user account a bannable offence and enforce it strongly.
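The rate-limiting idea in the second point can be sketched with a standard token-bucket limiter. This is a minimal illustration, not any platform’s actual implementation; the capacity and refill numbers are hypothetical placeholders.

```python
import time


class TokenBucket:
    """Token-bucket rate limiter: allows bursts up to `capacity` posts,
    then refills at `refill_rate` tokens per second."""

    def __init__(self, capacity: int, refill_rate: float):
        self.capacity = capacity
        self.refill_rate = refill_rate
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False


# One bucket per account: e.g. a burst of 5 posts, then 1 post per minute.
bucket = TokenBucket(capacity=5, refill_rate=1 / 60)
results = [bucket.allow() for _ in range(7)]
print(results)  # → [True, True, True, True, True, False, False]
```

A limit like this barely inconveniences a human poster but caps what a bot on a normal account can flood out, which is the asymmetry the point above relies on.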
Agreed, it wouldn’t be a good thing. However, it’s their own failures and mismanagement that are causing it.
It’s certainly arguable that the algorithm constitutes an editorial process and so that opens them up to libel laws and to liability.
Fair point.
Stupid sharks lose their teeth, not their fins, which actually do the work.
Errr…wat!!!
The shark dies either way.
Under what law?
UK currently holds the people that post things liable for their own words. X, the platform, just relays what is said. Same as Lemmy. Same as Mastodon.
If you ban X I don’t see why those other platforms wouldn’t be next.
Now, should people/organisations/companies leave X? Absolutely! Evacuate like it’s a house on fire. Should it be shut down by legal means? No.
Plot twist: they are aware, but just don’t care.
“Aldabra went under the sea and everything was gone,” Julian Hume, paleontologist and author of the study, said in a press release from the Natural History Museum in London. “There was an almost complete turn over in the fauna. Everything … went extinct. Yet as the Aldabra rail still lives on today, something must have happened for it to have returned.”
It swam.
Training data is the source, not the 20 lines of Python that get supplied with a model.
A generative AI’s only purpose is to generate “works”. So its only purpose in consuming “works” is to use them as reference. It exists to produce derivative works. Therefore the person feeding the original work into the machine is the one making the choice on how that work will be used.
A human can consume a “work” for no reason other than to admire it, to be entertained by it, to be educated by it, to have an emotion evoked, or finally to produce another work based on it. Here the consumer of the work is the one deciding how it will be used. They are the ones responsible.
I would disagree, because I don’t see the research into AI as something of value to preserve.
People talk about open source models, but there’s no such thing. They are all black boxes where you have no idea what went into them.
So are they really mammals?