ruff
50f14d01 - Use `tokenize` for linter benchmark (#11417)

## Summary

This PR updates the linter benchmark to use the `tokenize` function instead of the lexer. The linter expects the token list to run up to and including the first error, which is what `ruff_python_parser::tokenize` returns. This was not a problem before because the benchmarks only use valid Python code.