Comment on The Irony of 'You Wouldn't Download a Car' Making a Comeback in AI Debates
WalnutLum@lemmy.ml 2 months ago

> Whisper’s code and model weights are released under the MIT License. See LICENSE for further details. So that definitely meets the Open Source Definition on your first link.
Model weights by themselves do not qualify as “open source”, as the OSAID makes clear. Weights are not source.
> Additional WER/CER metrics corresponding to the other models and datasets can be found in Appendix D.1, D.2, and D.4 of the paper, as well as the BLEU (Bilingual Evaluation Understudy) scores for translation in Appendix D.3.
This is not training data. These are testing metrics.
QuadratureSurfer@lemmy.world 2 months ago
I don’t understand. What’s missing from the code, model, and weights provided to make this “open source” by the definition in your first link? It seems to meet all of those requirements.
As for the OSAID, the exact training dataset is not required; per your quote, they just need to provide enough information that someone else could train the model using a “similar dataset”.
WalnutLum@lemmy.ml 2 months ago
Oh, and for the OSAID part: the only thing stopping Whisper from being considered open source under the OSAID is that the information on the training data is published through arXiv, so using the data as written could present licensing issues.
QuadratureSurfer@lemmy.world 2 months ago
Ok, but the most important part of that research paper is published in the GitHub repository, which explains how to provide audio and text data to recreate any speech-to-text (STT) model in the same way that they have done.
See the “Approach” section of the GitHub repository: github.com/openai/whisper?tab=readme-ov-file#appr…
And the “Training Data” section of their GitHub: github.com/openai/whisper/blob/…/model-card.md#tr…
With this, you don’t really need the paper hosted on arXiv; you have enough information to train/modify the model.
There are guides on how to fine-tune the model yourself: huggingface.co/blog/fine-tune-whisper
Which, from what I understand of the link to the OSAID, is exactly what they are asking for. The ability to retrain/fine-tune a model fits this definition very well:
All 3 of those have been provided.
WalnutLum@lemmy.ml 2 months ago
From the approach section:
This is not sufficient information about the training data to recreate the model.
From the training data section:
This is also insufficient information about the data, and it links back to the paper itself for those details.
Additionally, model cards =/= data cards. It’s an important distinction in AI training.
Fine-tuning is not re-creating the model. This is an important distinction.
The OSI provides a pretty simple checklist for the OSAID: opensource.org/…/the-open-source-ai-definition-ch…
To go through the list of materials required to fit the OSAID:
- Whisper does not provide the datasets.
- The research paper is available, but it is not under an OSD-compliant license.
- Whisper does not provide the technical report.
- Whisper provides the model card, but not the data card.
WalnutLum@lemmy.ml 2 months ago
The problem with just shipping AI model weights is that they run up against the issue of point 2 of the OSD:
AI models can’t be distributed purely as source, because the weights are the output of training. Shipping them is the same as distributing pre-compiled binaries.
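The binaries analogy can be made concrete with nothing but the Python standard library: you can byte-compile a module, delete the source, and still run the compiled artifact, just as you can run published weights without ever having the training data or pipeline. The file names here are hypothetical:

```python
# Sketch of the "weights are binaries" analogy using only the stdlib.
import importlib.util
import pathlib
import py_compile
import tempfile

tmp = pathlib.Path(tempfile.mkdtemp())
src = tmp / "model.py"
src.write_text("def predict(x):\n    return 2 * x\n")

# "Train" once: compile the source into an opaque artifact (.pyc).
pyc = py_compile.compile(str(src), cfile=str(tmp / "model.pyc"))

# Delete the source (the "training data/code") entirely...
src.unlink()

# ...and the artifact alone still runs, but can't be meaningfully studied
# or modified the way source can.
spec = importlib.util.spec_from_file_location("model", pyc)
mod = importlib.util.module_from_spec(spec)
spec.loader.exec_module(mod)
print(mod.predict(21))
```

You can redistribute the `.pyc` freely, but nobody receiving it can reproduce or audit how it was built, which is the objection being raised about weight-only releases.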
It’s the entire reason the OSAID exists: