I think it’s an AI summary (if you read just the highlighted part)
It baffles me that these types of jobs exist in the same area as mine. My company doesn’t care what hours I work as long as I get things done, has gone fully remote and is never going back, encourages people not to burn themselves out and to take time off, we have actual unlimited PTO (i.e. nobody coming after me for using too much), etc. I always thought that’s just the Silicon Valley mentality, but I keep seeing news of big tech companies doing all kinds of crazy backwards things and I don’t get it. All the perks I get aren’t because my company is run by angels; it’s because they understand we’re actually more productive that way.
He didn’t get arrested for AI generated music. He got arrested for faking multiple accounts to upload music and using bots to generate fake listens, thus stealing millions of dollars. If he did the same thing with music he actually wrote and played, he would still be arrested.
So really no excuse when the Vogons come
I just looked up the other day how to take care of the Venus flytrap my son insisted we buy. It said it needs poor soil, do not fertilize it, and that it gets its nutrients from its prey (and should be fed if kept indoors).
I never use my mouse at all in vim
I rushed to just grab that code block from Wikipedia. But the selection of which characters are considered emoji is not arbitrary. The Unicode Consortium (their Unicode Emoji Standard and Research Working Group, to be exact) publishes those lists and guidelines on how they should be rendered. I believe the most recent version of the standard is Emoji 15.1.
Edit: I realized I’m going off track here by just reacting to comments and forgetting my initial point. The difference I was initially alluding to is in selection criteria. The criteria for assigning a character a Unicode codepoint are very different from the criteria for creating a new emoji. Bitcoin has a unique symbol and there is a real need to use that symbol in written material. Having a Unicode character for it solves that problem, and indeed one was added. The Emoji working group has other selection criteria (which is why you have emoji for eggplant and flying money, and other things that are not otherwise characters). So the fact that a certain character exists, despite its very limited use, has no bearing on whether something else should have an emoji to represent it.
There’s no ambiguity. Emoji are characters in the emoticons code block (U+1F600…U+1F64F). Emoji are indeed a subset of characters, but anything outside that block is not an emoji.
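For what it’s worth, that block test is trivial to write down (purely illustrative; as my edit below notes, the real emoji lists come from Unicode’s published data files, not a single code block):

```python
# Sketch of the "is it in the Emoticons block?" test described above.
# Note: the range U+1F600..U+1F64F is NOT the full emoji set; Unicode
# publishes the authoritative lists in its emoji data files.
def in_emoticons_block(ch: str) -> bool:
    return 0x1F600 <= ord(ch) <= 0x1F64F

print(in_emoticons_block("\U0001F600"))  # grinning face emoji -> True
print(in_emoticons_block("\u20BF"))      # Bitcoin sign -> False
print(in_emoticons_block("\u2708"))      # airplane: an emoji, yet outside the block
```

The airplane case is exactly why block membership alone doesn’t settle the question.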
Edit: jumped the gun on that definition, just took the code block from Wikipedia. But there is no ambiguity about which characters are emoji and which are not. The Unicode Consortium publishes lists of emoji and guidelines on how they should be rendered.
I don’t disagree with the overall comment, but there’s a difference between character and emoji. ⅌ got a character, but so did ₿ already.
Deep learning did not shift any paradigm. It’s just more advanced programming. But gen AI is not intelligence. It’s just really well trained ML. ChatGPT can generate text that looks true and relevant. And that’s its goal. It doesn’t have to be true or relevant, it just has to look convincing. And it does. But there’s no form of intelligence at play there. It’s just advanced ML models taking an input and guessing the most likely output.
Here’s another interesting article about this debate: https://ourworldindata.org/ai-timelines
What we have today does not exhibit even the faintest signs of actual intelligence. Gen AI models don’t actually understand the output they are providing, that’s why they so often produce self-contradictory results. And the algorithms will continue to be fine-tuned to produce fewer such mistakes, but that won’t change the core of what gen AI really is. You can’t teach ChatGPT how to play chess or a new language or music. The same model can be trained to do one of those tasks instead of chatting, but that’s not how intelligence works.
See the sources above and many more. We don’t need one or two breakthroughs, we need a complete paradigm shift. We don’t even know where to start for AGI. There’s a bunch of research, but nothing has really come out of it yet. Weak AI has made impressive strides in the past few years, but the only connection between weak and strong AI is the name. Weak AI will not become strong AI as it continues to evolve. The two are completely separate avenues of research. Weak AI is still advanced algorithms. You can’t get AGI with just code. We’ll need a completely new type of hardware for it.
https://www.lifewire.com/strong-ai-vs-weak-ai-7508012
Strong AI, also called artificial general intelligence (AGI), possesses the full range of human capabilities, including talking, reasoning, and emoting. So far, strong AI examples exist in sci-fi movies.
Weak AI is easily identified by its limitations, but strong AI remains theoretical since it should have few (if any) limitations.
https://en.m.wikipedia.org/wiki/Artificial_general_intelligence
As of 2023, complete forms of AGI remain speculative.
Boucher, Philip (March 2019). How artificial intelligence works
Today’s AI is powerful and useful, but remains far from speculated AGI or ASI.
https://www.itu.int/en/journal/001/Documents/itu2018-9.pdf
AGI represents a level of power that remains firmly in the realm of speculative fiction as on date
That would be a danger if real AI existed. We are very far away from that and what is being called “AI” today (which is advanced ML) is not the path to actual AI. So don’t worry, we’re not heading for the singularity.
If you read the whole text and interpret the highlights as emphasis then it’s just annoying and hard to read (sort of like those people who add random commas everywhere). If you read just the highlighted text then it sounds like a summary, but there are mistakes in it, which is why I assumed AI.