I like using it like a rubber ducky. I even have it respond almost entirely in quacks.
Note: it’s a local model running for free. Don’t pay anyone for this slop.
“When asked about buggy AI, a common refrain is ‘it is not my code,’ meaning they feel less accountable because they didn’t write it.”
That’s… That’s so fucking cool…
You said open source. Open source is a type of licensing.
The entire point of licensing is legal pedantry.
And as far as your metaphor is concerned, pre-trained models are closer to pre-compiled binaries, which are expressly not considered Open Source according to the OSD.
From the approach section:
A Transformer sequence-to-sequence model is trained on various speech processing tasks, including multilingual speech recognition, speech translation, spoken language identification, and voice activity detection. These tasks are jointly represented as a sequence of tokens to be predicted by the decoder, allowing a single model to replace many stages of a traditional speech-processing pipeline. The multitask training format uses a set of special tokens that serve as task specifiers or classification targets.
This is not sufficient data information to recreate the model.
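For a concrete sense of what that multitask token format looks like, here’s a minimal sketch using Hugging Face’s transformers wrapper (the checkpoint name and printed tokens are what I’d expect from the tiny model; treat them as illustrative):

```python
# Minimal sketch of Whisper's multitask prompt format via the Hugging Face
# transformers wrapper (assumes `pip install transformers`).
from transformers import WhisperProcessor

processor = WhisperProcessor.from_pretrained("openai/whisper-tiny")

# The decoder is steered by special tokens that act as task specifiers:
# a language token like <|en|> plus <|transcribe|> or <|translate|>.
forced_ids = processor.get_decoder_prompt_ids(language="en", task="transcribe")
tokens = processor.tokenizer.convert_ids_to_tokens([tid for _, tid in forced_ids])
print(tokens)  # expected: ['<|en|>', '<|transcribe|>', '<|notimestamps|>']
```

That tells you how the model is *steered*, but nothing about how to build it.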
From the training data section:
The models are trained on 680,000 hours of audio and the corresponding transcripts collected from the internet. 65% of this data (or 438,000 hours) represents English-language audio and matched English transcripts, roughly 18% (or 126,000 hours) represents non-English audio and English transcripts, while the final 17% (or 117,000 hours) represents non-English audio and the corresponding transcript. This non-English data represents 98 different languages. As discussed in the accompanying paper, we see that performance on transcription in a given language is directly correlated with the amount of training data we employ in that language.
This is also insufficient data information, and it defers to the paper itself for those details.
Additionally, model cards ≠ data cards. It’s an important distinction in AI training.
There are guides on how to fine-tune the model yourself: https://huggingface.co/blog/fine-tune-whisper
Fine-tuning is not re-creating the model. This is an important distinction.
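For reference, a rough fine-tuning sketch along the lines of that guide (it fine-tunes whisper-small on Common Voice Hindi; the hyperparameters and the simplified collator here are illustrative, not the guide’s exact recipe):

```python
# Rough sketch of fine-tuning Whisper with Hugging Face, loosely following
# the linked guide. Dataset, language, and hyperparameters are illustrative.
import torch
from datasets import Audio, load_dataset
from transformers import (Seq2SeqTrainer, Seq2SeqTrainingArguments,
                          WhisperForConditionalGeneration, WhisperProcessor)

processor = WhisperProcessor.from_pretrained("openai/whisper-small")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")

# Any speech dataset with audio + transcript columns works; the guide uses
# Common Voice (which requires accepting its terms on the Hub first).
ds = load_dataset("mozilla-foundation/common_voice_11_0", "hi", split="train")
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))

def prepare(batch):
    audio = batch["audio"]
    batch["input_features"] = processor(
        audio["array"], sampling_rate=audio["sampling_rate"]
    ).input_features[0]
    batch["labels"] = processor.tokenizer(batch["sentence"]).input_ids
    return batch

ds = ds.map(prepare, remove_columns=ds.column_names)

def collate(features):
    # Mel features are fixed-size; pad the labels and mask padding with -100
    # so the loss ignores it (the guide uses a fuller collator class).
    input_features = torch.tensor([f["input_features"] for f in features])
    labels = processor.tokenizer.pad(
        [{"input_ids": f["labels"]} for f in features], return_tensors="pt")
    masked = labels["input_ids"].masked_fill(labels["attention_mask"] == 0, -100)
    return {"input_features": input_features, "labels": masked}

trainer = Seq2SeqTrainer(
    model=model,
    args=Seq2SeqTrainingArguments(output_dir="whisper-small-hi",
                                  per_device_train_batch_size=8,
                                  learning_rate=1e-5, max_steps=4000),
    train_dataset=ds,
    data_collator=collate,
)
trainer.train()  # adjusts the published weights; it does not recreate them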
The OSI has a pretty simple checklist for the OSAID definition: https://opensource.org/deepdive/drafts/the-open-source-ai-definition-checklist-draft-v-0-0-9
To go through the list of materials required to fit the OSAID:
- Datasets (available under OSD-compliant license): Whisper does not provide the datasets.
- Research paper (available under OSD-compliant license): the research paper is available, but not under an OSD-compliant license.
- Technical report (available under OSD-compliant license): Whisper does not provide a technical report.
- Data card (available under OSD-compliant license): Whisper provides a model card, but not a data card.
Oh, and for the OSAID part: the only issue stopping Whisper from being considered open source per the OSAID is that the information on the training data is published through arXiv, so using that information as written could present licensing issues.
The problem with just shipping AI model weights is that they run up against the issue of point 2 of the OSD:
The program must include source code, and must allow distribution in source code as well as compiled form. Where some form of a product is not distributed with source code, there must be a well-publicized means of obtaining the source code for no more than a reasonable reproduction cost, preferably downloading via the Internet without charge. The source code must be the preferred form in which a programmer would modify the program. Deliberately obfuscated source code is not allowed. Intermediate forms such as the output of a preprocessor or translator are not allowed.
AI models can’t be distributed purely as source, because the weights are a pre-trained artifact. It’s the same as distributing pre-compiled binaries.
It’s the entire reason the OSAID exists.
Edit: also, the information about the training data has to be published under an OSD-equivalent license (such as Creative Commons) so that using it doesn’t cause licensing issues with research-paper publishers (like arXiv).
Whisper’s code and model weights are released under the MIT License. See LICENSE for further details. So that definitely meets the Open Source Definition on your first link.
Model weights by themselves do not qualify as “open source”, as the OSAID makes clear. Weights are not source.
Additional WER/CER metrics corresponding to the other models and datasets can be found in Appendix D.1, D.2, and D.4 of the paper, as well as the BLEU (Bilingual Evaluation Understudy) scores for translation in Appendix D.3.
This is not training data. These are testing metrics.
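To make the distinction concrete: WER is just an error rate computed against reference transcripts at evaluation time. A sketch using the jiwer package (the example strings are mine):

```python
# WER is an evaluation metric, not training data: it compares model output
# against a reference transcript. Sketch using the jiwer package.
import jiwer

reference = "the quick brown fox jumps over the lazy dog"
hypothesis = "the quick brown fox jumped over a lazy dog"

# WER = (substitutions + insertions + deletions) / words in reference
print(jiwer.wer(reference, hypothesis))  # -> 0.222... (2 subs / 9 words)
```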
Edit: additionally, assuming you were talking about the link to the research paper: it’s not published under an OSD-compliant license. If it were, that would qualify the model.
Those aren’t open source, neither by the OSI’s Open Source Definition nor by the OSI’s Open Source AI Definition.
The important part for the latter being a published listing of all the training data. (Trainers don’t have to provide the data, but they must provide at least a way to recreate the model given the same inputs).
Data information: Sufficiently detailed information about the data used to train the system, so that a skilled person can recreate a substantially equivalent system using the same or similar data. Data information shall be made available with licenses that comply with the Open Source Definition.
They are model-available if anything.
LLMs as they stand are already approaching the flat part of the sigmoid improvement curve, because the marginal data requirements increase exponentially.
It’s a known problem in the actual AI research field that nobody in private industry likes to talk about.
If it scores 40% this year, it’ll marginally increase by 10% next year, then by 5% three years later, and so on.
AI doesn’t follow Moore’s law.
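A toy logistic curve shows the shape of that claim (all the constants here are made up for illustration):

```python
# Toy illustration of diminishing returns: benchmark score as a logistic
# function of data doublings. All constants are made up for illustration.
import math

def score(doublings, midpoint=2.0, steepness=1.0):
    return 100 / (1 + math.exp(-steepness * (doublings - midpoint)))

for d in range(7):
    gain = score(d) - score(d - 1)
    print(f"{2**d:>2}x data -> {score(d):5.1f}% (gain {gain:+5.1f})")
# Past the midpoint, each further doubling of the data buys a smaller gain.
```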
You’re anthropomorphizing LLMs.
There’s a philosophical and neuroscience concept called “qualia,” which helps define the human experience. LLMs have no qualia.
How do you do this with a tablet? Can you buy like a wheel sensor or something?
Unless something has changed recently, you still have to submit builds to Nvidia to have them train the DLSS kernel for you, so FSR is substantially easier to integrate.
This is why I think FSR will eventually win out over DLSS, despite DLSS having better performance.
Firefly needs to hurry up and make a human-rated capsule instead of cargo fairings.
I have high hopes for a company that can set up a rocket almost from scratch in 24 hours.
AI technically already won this debate because autonomous war drones are somewhat ubiquitous.
I doubt jets are going to have the usefulness in war that they used to.
It’s much more economical to have 1,000 cheap drones with bombs overwhelm defenses than to bet on one “special boi” trying to slip through with stealth capabilities that are constantly being defeated.
Yeah, it’s the same for us. The complaints from people when they see a kid in public on a tablet are weird to me, because as kids we always brought things like toys into restaurants (or we went to restaurants that had coloring maps and the like).
Parents have been desperately trying for years to find things to occupy kids in public so they don’t disturb the people around them, and now that smartphones/iPads are universal, it seems like there’s finally something that will keep kids quiet for a while without a lot of effort.
I think it’s important to pay attention to how much “screen time” you/your kids are getting, but it feels really disingenuous to say things like “the current generation is cooked because of iPads.”
I don’t know if assuming that training data won’t get more and more poisoned by unsupervised training data from this point on counts as “in practice”.