Oh, the artificial humanity!
I had heard that. Maybe I’ll get my hands on one someday. I hear Commodore makes one.
(I do wonder now if whatever variable is being used to denote time is signed or unsigned, because that would make a big difference, too.)
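For illustration, a minimal C sketch (assuming a 32-bit seconds-since-epoch counter, like a classic 32-bit Unix `time_t`; the variable name is made up) of why signedness would matter:

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* One tick past the 32-bit signed maximum, i.e. one second after
       2038-01-19 03:14:07 UTC on systems with a 32-bit signed time_t. */
    uint32_t raw = (uint32_t)INT32_MAX + 1u;  /* unsigned wrap is well defined */

    /* Unsigned interpretation: the counter just keeps climbing (until 2106). */
    printf("as unsigned: %" PRIu32 "\n", raw);

    /* Signed interpretation: converting an out-of-range value is
       implementation-defined, but on typical two's-complement platforms
       it lands at INT32_MIN, i.e. a timestamp back in December 1901. */
    printf("as signed:   %" PRId32 "\n", (int32_t)raw);
    return 0;
}
```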
Doh! You are absolutely right.
It’s already hard not to write buggy code, and I don’t think you’ll catch bugs just by reviewing LLM output, because spotting an issue during code review is much harder than catching it while you’re writing the code yourself.
Definitely. That’s what I was trying to drive at, but you said it well.
Yeah, they are. They’re not the ones getting banned, because they maintain an air of plausible deniability.
Not saying they don’t deserve to be banned, but they’re not overt Russian propaganda—simply the regular, alt-right Conservative kind, and apparently Facebook is totally fine with that.
They’re just better at hiding it.
Are you asking how much I donate per month?
I help pay for my instance to operate, and it’s a cost I’m happy to help shoulder.
Ironically, they don’t eat honey. They eat wasp and bee larvae.
Though let’s be frank: movie theaters want you to smell popcorn so you’ll buy snacks. Smell-o-Vision would have to be more lucrative than a $15 bucket of popcorn.
I appreciate the effort you put into the comment and your kind tone, but I’m not really interested in increasing LLM presence in my life.
I said what I said, and I experienced what I experienced. Providing an example where it works in no way falsifies the core of my original comment: LLMs have no place generating code for secure applications without human review, because they have no mechanism to comprehend or proofread their own work.
I didn’t say that. However, if delegation is too risky, do the work yourself.
Honey Buzzards are fucking awesome.
Neuralink test subject: Why do I smell burnt toast?
Who would I jail? The C-officers. Your shit show, your responsibility. If you can’t trust your employees, figure out why or do the work yourself.
This will never happen. Smell-o-Vision and its successors have been in development for decades, and they all have the same issue: where to store the numerous scent liquids. You can’t just digitize scent and generate it on demand with some kind of solid-state device. You can’t just combine three liquids to make 1000 scents; the article’s analogy of combining light to make colors is overly optimistic, bordering on delusional.
The other two related problems are convenience and cost. This is 1000% a novelty, and novelties quickly lose their appeal after the first experience. Who is seriously going to go out and buy replacement cartridges for a thing that is essentially a toy?
Seems like a paltry amount, given what savvy social engineers could do with that data.
If you don’t use proper security practices, you should be on the hook for prison time at a minimum.
It was ChatGPT from earlier this year. It wasn’t a huge deal for me that it made mistakes, because I had a very specific use case and just wanted to save some time; I knew I’d have to troubleshoot grafting the result into my function. But even after I pointed out that it was using deprecated syntax (and how to correct it), it just spat out the code again with even more errors, still using the deprecated syntax.
All LLMs will fail like this in some way, because they don’t actually understand what they’re generating (i.e. they have no mechanism for self-evaluating the veracity of their statements).
No, I just thought they were vaguely similar enough words to make a dumb internet joke.