transformers.js
Add Whisper language detection
#1097
Open


ae9is wants to merge 7 commits into huggingface:main from ae9is:add-whisper-language-detection
ae9is · 209 days ago

See #302

Adds support for automatically detecting language to Whisper tasks.

The existing HuggingFace and Whisper implementations in Python were used as reference:
  • Hugging Face Transformers
  • Original Whisper

Also updates the existing Whisper test suites, including adding a string-similarity check on actual model output (as opposed to just output length). Please note that the "new" development dependency for these tests, "fastest-levenshtein", is already used by "webpack-cli".
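For context, a string-similarity check of this kind can be built from a normalized Levenshtein distance. The sketch below uses a plain-JS implementation purely for illustration; the PR's tests use the fastest-levenshtein package instead:

```javascript
// Illustrative Levenshtein edit distance (the PR's tests use the
// fastest-levenshtein package; this plain-JS version is just a sketch).
function levenshtein(a, b) {
  // dp[i][j] = edit distance between a[0..i) and b[0..j)
  const dp = Array.from({ length: a.length + 1 }, (_, i) => [i]);
  for (let j = 1; j <= b.length; j++) dp[0][j] = j;
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1, // deletion
        dp[i][j - 1] + 1, // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1), // substitution
      );
    }
  }
  return dp[a.length][b.length];
}

// Similarity in [0, 1]: 1 means the strings are identical.
function similarity(expected, actual) {
  const maxLen = Math.max(expected.length, actual.length) || 1;
  return 1 - levenshtein(expected, actual) / maxLen;
}

console.log(similarity("kitten", "sitting")); // ≈ 0.571 (distance 3, max length 7)
```

A test could then assert that similarity(expected, actual) stays above some threshold rather than comparing exact strings.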

xenova · 209 days ago (edited 209 days ago)

Thanks for the PR! This will certainly be a useful feature. Regarding the implementation, I think it can be greatly simplified as follows:

  • Instead of using .generate, perform a single forward pass of the inputs
  • Then, consider all logits which correspond to the language token ids
  • Choose the language with the highest score

Currently, the implementation seems to perform a full generation step (could be hundreds of forward passes).
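The suggested simplification, restricting attention to the language-token logits after one forward pass, reduces to an argmax over those token ids. A minimal sketch (the logits vector and language-token id map below are toy stand-ins, not real transformers.js objects):

```javascript
// Toy sketch of "argmax over language-token logits": `logits` stands in for
// the final-position logit vector from a single forward pass, and
// `langTokenIds` for a map of language codes to their token ids.
// Both are made-up values, not the actual transformers.js internals.
function pickLanguage(logits, langTokenIds) {
  let best = null;
  let bestScore = -Infinity;
  for (const [lang, id] of Object.entries(langTokenIds)) {
    if (logits[id] > bestScore) {
      bestScore = logits[id];
      best = lang;
    }
  }
  return best;
}

const logits = [0.1, 2.5, -1.0, 0.3, 4.2]; // tiny fake 5-token vocab
const langTokenIds = { en: 1, fr: 3, de: 4 };
console.log(pickLanguage(logits, langTokenIds)); // "de" (highest score, 4.2)
```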

ae9is · 208 days ago

Sorry about that, it was simpler to code and the performance impact for my app was minimal! I've reworked things now to only run one pass for language detection.

Thanks for all the work on this library.

ae9is Add language detection support with Whisper tasks
242d33cf
ae9is Add tests for Whisper language detection
467851a8
ae9is Add test utility to compare string similarity
88bba08e
ae9is Quality check output for some Whisper pipeline tests
fc56bdc1
ae9is Add a new logits processor to only generate allowed token IDs
7cd642c3
ae9is Improve Whisper language detection performance
ecdd5987
ae9is Fix Whisper language detection pipeline test
db845409
ae9is force-pushed from 7bbc92f1 to db845409 · 206 days ago
ZhangPeng4242 · 196 days ago

Hey there, please approve this feature; it's quite useful :)

xenova commented on 2024-12-28
src/models.js, lines 3145–3158:

```javascript
const stopping_criteria = new StoppingCriteriaList();
stopping_criteria.push(new AlwaysStopCriteria());
const good_words_ids = [all_lang_ids];
const output = await this.generate({
    ...options,
    generation_config: {
        ...generation_config,
        good_words_ids,
        num_beams: 1,
        do_sample: false,
    },
    stopping_criteria,
    decoder_input_ids,
});
```
xenova · 195 days ago

We should be able to replace this with a single forward pass (by calling this.forward(...) instead of using a generation step).

ae9is · 194 days ago 👍 1

There are a lot of user options for (and logic in) generate, and I wanted to respect them while running language detection. It was simpler to extend generate to just stop after one pass than to duplicate that logic and use forward directly.

Hypothetically, say a user adds a logits processor that suppresses the first 10 seconds' worth of tokens, and there's a 15-second audio clip in two languages where the context switches at 10 seconds. Language detection should then detect the second language, not the first.
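The "stop after one pass" extension described above can be approximated with a stopping criterion that returns true for every sequence on its first call. This is a hypothetical sketch in the spirit of the PR's AlwaysStopCriteria; the exact transformers.js StoppingCriteria interface is assumed here, not quoted:

```javascript
// Hypothetical sketch of an "always stop" criterion (the real
// transformers.js StoppingCriteria interface and the PR's
// AlwaysStopCriteria may differ in detail).
class AlwaysStopCriteria {
  // Called once per generation step with the batch of sequences so far;
  // returning true for every sequence halts generation immediately,
  // so generate() performs exactly one forward pass.
  _call(input_ids, scores) {
    return input_ids.map(() => true);
  }
}

const criteria = new AlwaysStopCriteria();
console.log(criteria._call([[50258], [50258]])); // [true, true]
```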

edbrdi commented on 2025-01-12
src/models.js, lines 3130–3133:

```javascript
 * @returns {Promise<number[]>} A list of language token IDs detected.
 */
async _detect_language(options) {
    const inputs = options.inputs
```
edbrdi · 180 days ago

When testing this PR, "inputs" was not present in my case; instead I had "input_features".

I noticed the type documentation reads: "(Tensor of varying shape depending on the modality, optional): The sequence used as a prompt for the generation or as model inputs to the encoder. If null the method initializes it with bos_token_id and a batch size of 1. For decoder-only models inputs should be in the format of input_ids. For encoder-decoder models inputs can represent any of input_ids, input_values, input_features, or pixel_values."

By the way thanks for adding language detection, hope it will be merged soon :)

ae9is · 179 days ago (edited 179 days ago)

Sorry, I can't reproduce. And reading the typing, it sounds like input_ids/input_values/input_features should always be stored as inputs.

Even if the typing is sometimes wrong, patching _detect_language() to use, for example, options?.inputs ?? options?.input_features still won't fix the generate() function that's currently in main. So it sounds like this is worth filing as a separate issue and/or PR.

But if you're interested in just trying an alternative build, my "develop" branch is a fork of v3.0.2 with the language detection patch applied that works for me in a real app. Hope it helps!

edbrdi · 179 days ago

I think it depends on the model used; you can probably reproduce it with https://huggingface.co/onnx-community/whisper-large-v3-turbo. I was able to fix it on my side with const inputs = options.inputs ?? options.input_features; in _detect_language.
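The suggested fix relies on nullish coalescing: ?? keeps inputs when it is present and falls back to input_features only when inputs is null or undefined. A minimal illustration (resolveInputs is a hypothetical helper for this example, not code from the PR):

```javascript
// Minimal illustration of the proposed fallback (resolveInputs is a
// hypothetical helper, not code from the PR): `??` keeps `inputs` when
// present and falls back to `input_features` only if it is null/undefined.
function resolveInputs(options) {
  return options.inputs ?? options.input_features;
}

console.log(resolveInputs({ inputs: "A", input_features: "B" })); // "A"
console.log(resolveInputs({ input_features: "B" })); // "B"
```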

ae9is · 177 days ago 👍 1

I've already used turbo and it works fine for me, sorry! (I do get an unrelated error when using turbo instead of small in the test suite.)

I guess it's up to the maintainer to decide what to do with this PR, and edits are enabled.

But I don't understand why you're not also getting issues with the generate() code that's currently in main. If you are, that's worth a separate issue and PR.

vklyukin · 94 days ago

Hello! I wanted to check in on this PR. Would it be possible to get it approved and merged soon? It would significantly improve the user experience with automatic language detection.

SpeedyGonzaless · 93 days ago 👀 1

Hi, is there any chance to have this feature soon?
