tokie

15-29x faster than HuggingFace, 100% accurate drop-in replacement

Install • Quick Start • Examples • Benchmarks • Why tokie?

Caution

tokie is in its alpha stage and might produce misaligned output. Please report any issues you encounter.

tokie is a Rust tokenizer library (with Python bindings) that can load any tokenizer on HuggingFace and tokenize 15-29x faster. It supports every major algorithm — BPE, WordPiece, SentencePiece, and Unigram — and is 100% token-accurate, every time.


Install

Python

```bash
pip install tokie
```

Rust

```toml
[dependencies]
tokie = { version = "0.0.8", features = ["hf"] }
```

Quick Start

Python

```python
import tokie

# Load any HuggingFace tokenizer
tokenizer = tokie.Tokenizer.from_pretrained("bert-base-uncased")

# Encode — returns Encoding with ids, attention_mask, type_ids, tokens
encoding = tokenizer("Hello, world!")  # or tokenizer.encode("Hello, world!")
print(encoding.ids)             # [101, 7592, 1010, 2088, 999, 102]
print(encoding.tokens)          # ['[CLS]', 'hello', ',', 'world', '!', '[SEP]']
print(encoding.attention_mask)  # [1, 1, 1, 1, 1, 1]

# Decode
text = tokenizer.decode(encoding.ids)  # "hello , world !"

# Count tokens without allocating
count = tokenizer.count_tokens("Hello, world!")  # 6

# Batch encode (parallel across all cores)
encodings = tokenizer.encode_batch(["Hello!", "World"], add_special_tokens=True)
```

Rust

```rust
use tokie::Tokenizer;

let tokenizer = Tokenizer::from_pretrained("bert-base-uncased")?;
let encoding = tokenizer.encode("Hello, world!", true);
println!("{:?}", encoding.ids);             // [101, 7592, 1010, 2088, 999, 102]
println!("{:?}", encoding.attention_mask);  // [1, 1, 1, 1, 1, 1]

let text = tokenizer.decode(&encoding.ids).unwrap();
```

Examples

Padding & Truncation

For ML inference, you need fixed-length inputs. tokie supports padding and truncation just like HuggingFace:

```python
tokenizer = tokie.Tokenizer.from_pretrained("bert-base-uncased")

# Truncate to max length
tokenizer.enable_truncation(max_length=128)

# Pad to fixed length (or use BatchLongest for dynamic padding)
tokenizer.enable_padding(length=128, pad_id=0)

# All outputs are now exactly 128 tokens
results = tokenizer.encode_batch(["Short text", "A much longer piece of text for testing"])
assert all(len(r) == 128 for r in results)

# attention_mask shows which tokens are real (1) vs padding (0)
print(results[0].attention_mask)  # [1, 1, 1, 1, 0, 0, 0, ...]
```
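Under the hood, padding and truncation are simple list operations. Here is a minimal sketch of what the two settings do to a single sequence (`pad_and_truncate` is a hypothetical helper for illustration, not part of the tokie API):

```python
def pad_and_truncate(ids, length, pad_id=0):
    # Truncate to at most `length` tokens...
    ids = ids[:length]
    mask = [1] * len(ids)          # 1 = real token
    pad = length - len(ids)
    # ...then pad with pad_id; padded positions get mask 0.
    return ids + [pad_id] * pad, mask + [0] * pad

ids, mask = pad_and_truncate([101, 7592, 102], length=5)
print(ids)   # [101, 7592, 102, 0, 0]
print(mask)  # [1, 1, 1, 0, 0]
```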

Cross-Encoder Pair Encoding

For rerankers and cross-encoders that need sentence pairs with token type IDs:

```python
pair = tokenizer("How are you?", "I am fine.")  # or tokenizer.encode_pair(...)
pair.ids               # [101, 2129, 2024, 2017, 1029, 102, 1045, 2572, 2986, 1012, 102]
pair.attention_mask    # [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
pair.type_ids          # [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
pair.special_tokens_mask  # [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1]
```

Byte Offsets

Track where each token maps back to in the original text:

```python
enc = tokenizer.encode_with_offsets("Hello world")
for token_id, (start, end) in zip(enc.ids, enc.offsets):
    print(f"  token {token_id}: bytes [{start}:{end}]")
```

Vocabulary Access

```python
tokenizer.vocab_size          # 30522
tokenizer.id_to_token(101)    # "[CLS]"
tokenizer.token_to_id("[SEP]")  # 102
vocab = tokenizer.get_vocab()   # {"[CLS]": 101, "[SEP]": 102, ...}
```

Save and Load .tkz Files

tokie's binary format is ~10x smaller than tokenizer.json and loads in ~5ms:

```python
tokenizer.save("model.tkz")
tokenizer = tokie.Tokenizer.from_file("model.tkz")
```

from_pretrained() automatically tries .tkz first, falling back to tokenizer.json.

Benchmarks

All benchmarks run on 1 MB of enwik8 on an Apple M3 Pro. tokie produces identical output to HuggingFace tokenizers — every token matches, every time.

BPE Encoding (GPT-2, Llama, Qwen, ModernBERT)

For tiktoken-style BPE models, tokie uses a backtracking encoder built on an Aho-Corasick automaton. Instead of iteratively merging byte pairs, it does a greedy longest-match in O(n) time, backtracking only when adjacent tokens form invalid pairs. Combined with parallel chunking across all cores and hand-coded pretokenizers from pretokie, this makes tokie 15-24x faster than HuggingFace.
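The greedy longest-match idea can be sketched in a few lines of Python. This is a toy illustration with a hash-set vocabulary; the real encoder matches against the Aho-Corasick automaton and backtracks when adjacent tokens do not form a valid BPE segmentation, a step omitted here for brevity:

```python
def greedy_longest_match(text: bytes, vocab: set) -> list:
    """Toy greedy longest-match tokenizer over a byte vocabulary."""
    tokens, i = [], 0
    max_len = max(len(t) for t in vocab)
    while i < len(text):
        # Try the longest candidate first, shrinking until a vocab hit.
        for j in range(min(len(text), i + max_len), i, -1):
            if text[i:j] in vocab:
                tokens.append(text[i:j])
                i = j
                break
        else:
            # No match: fall back to a single-byte token.
            tokens.append(text[i:i + 1])
            i += 1
    return tokens

vocab = {b"low", b"lower", b"er", b"l", b"o", b"w"}
print(greedy_longest_match(b"lower", vocab))   # [b'lower']
print(greedy_longest_match(b"lowest", vocab))  # [b'low', b'e', b's', b't']
```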

BPE encoding speed

WordPiece (BERT, MiniLM, BGE, GTE)

WordPiece tokenizers use a different algorithm — greedy longest-match prefix search over a vocabulary trie. tokie uses a pre-built Double-Array trie for O(n) lookup with excellent cache locality, combined with a specialized BERT pretokenizer. The result is 20-29x faster than HuggingFace on BERT, with identical output.
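The greedy longest-match prefix search itself is simple to sketch. This toy version tests substrings against a set where the real implementation walks the Double-Array trie:

```python
def wordpiece_encode(word: str, vocab: set, unk: str = "[UNK]") -> list:
    """Toy WordPiece: repeatedly take the longest matching prefix."""
    tokens, i = [], 0
    while i < len(word):
        end, cur = len(word), None
        while end > i:
            piece = word[i:end]
            if i > 0:
                piece = "##" + piece  # continuation pieces carry the ## prefix
            if piece in vocab:
                cur = piece
                break
            end -= 1
        if cur is None:
            return [unk]  # an unmatchable word becomes a single [UNK]
        tokens.append(cur)
        i = end
    return tokens

vocab = {"un", "##aff", "##able", "aff"}
print(wordpiece_encode("unaffable", vocab))  # ['un', '##aff', '##able']
```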

WordPiece encoding speed

SentencePiece BPE & Unigram (Gemma, XLM-R, T5)

SentencePiece-style models use a different merge algorithm with non-topological rank orders. tokie uses a radix heap with O(1) amortized operations that exploits BPE's monotonic rank property. tokie is 2-3x faster than HuggingFace on Gemma 3.
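The radix heap itself is beyond a snippet, but the rank-ordered merge loop it accelerates can be sketched as follows (using a linear scan to find the lowest-rank pair in place of the heap):

```python
def bpe_merge(word: list, ranks: dict) -> list:
    """Toy BPE: repeatedly merge the adjacent pair with the lowest rank."""
    while True:
        best = None
        for i in range(len(word) - 1):
            r = ranks.get((word[i], word[i + 1]))
            if r is not None and (best is None or r < best[0]):
                best = (r, i)
        if best is None:
            return word  # no mergeable pair remains
        _, i = best
        word = word[:i] + [word[i] + word[i + 1]] + word[i + 2:]

ranks = {("l", "o"): 0, ("lo", "w"): 1, ("e", "r"): 2}
print(bpe_merge(list("lower"), ranks))  # ['low', 'er']
```

Because ranks assigned to newly created pairs only increase over a merge sequence, a radix heap can pop the next merge in O(1) amortized time, which is the monotonicity property the paragraph above refers to.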

SentencePiece BPE speed

Python Benchmarks

All results on Apple M3 Pro, single-string encode, median of 10 runs.

tokie vs HuggingFace tokenizers

| Model | Text Size | tokie | HF tokenizers | vs HF |
|---|---|---|---|---|
| BERT | 45 KB | 0.56 ms | 10.9 ms | 20x |
| BERT | 900 KB | 9.84 ms | 281 ms | 29x |
| GPT-2 | 45 KB | 0.51 ms | 8.5 ms | 17x |
| GPT-2 | 900 KB | 9.42 ms | 209 ms | 22x |
| Llama 3 | 45 KB | 0.58 ms | 8.5 ms | 15x |
| Llama 3 | 900 KB | 9.45 ms | 211 ms | 22x |
| Qwen 3 | 45 KB | 0.55 ms | 9.3 ms | 17x |
| Qwen 3 | 900 KB | 9.58 ms | 230 ms | 24x |
| ModernBERT | 45 KB | 0.52 ms | 10.2 ms | 20x |
| ModernBERT | 900 KB | 9.75 ms | 236 ms | 24x |
| Gemma 3 | 45 KB | 5.20 ms | 11.6 ms | 2x |
| Gemma 3 | 900 KB | 131 ms | 330 ms | 3x |

tokie vs tiktoken (OpenAI models)

| Model | Text Size | tokie | tiktoken | Speedup |
|---|---|---|---|---|
| cl100k (GPT-4) | 45 KB | 0.69 ms | 2.37 ms | 3.5x |
| cl100k (GPT-4) | 900 KB | 9.63 ms | 45.7 ms | 4.7x |
| o200k (GPT-4o) | 45 KB | 0.52 ms | 4.10 ms | 7.9x |
| o200k (GPT-4o) | 900 KB | 9.83 ms | 81.5 ms | 8.3x |

100% token-accurate across all models. Batch encoding is 6-9x faster than HF.

Tokenizer Loading

Loading a tokenizer from tokenizer.json requires JSON parsing, vocabulary construction, and — for BPE models — building the Aho-Corasick automaton from scratch. tiktoken similarly has to parse its BPE data and compile regex patterns on every load. tokie's .tkz binary format stores all of this pre-built: the Double-Array Aho-Corasick (DAAC) automaton state, the normalized vocabulary, and the encoder configuration are serialized directly. Loading becomes a near-zero-cost deserialization — no parsing, no construction — achieving 2x-8x faster cold load times than HuggingFace.
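The build-once, load-fast idea can be illustrated generically. This sketch uses pickle purely for illustration; .tkz is a custom flat binary format, not pickle:

```python
import pickle

# Building a lookup structure is the expensive part...
def build_index(vocab: list) -> dict:
    return {tok: i for i, tok in enumerate(vocab)}

vocab = [f"tok{i}" for i in range(1000)]
index = build_index(vocab)

# ...so serialize it pre-built, as .tkz does for the DAAC automaton.
blob = pickle.dumps(index)

# Loading is now pure deserialization: no parsing, no reconstruction.
loaded = pickle.loads(blob)
assert loaded == index
```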

Tokenizer loading time

Verified Tokenizers

Every tokenizer below is tested against the original HuggingFace tokenizer on 1 MB of enwik8 (~300K tokens) in CI. Pass = every token matches.

View full accuracy table (74 models)
| Model | Type | Status |
|---|---|---|
| GPT-2 | BPE | ✅ Pass |
| cl100k | BPE | ✅ Pass (vs tiktoken-rs) |
| o200k | BPE | ✅ Pass (vs tiktoken-rs) |
| RoBERTa | BPE | ✅ Pass |
| Phi-2 | BPE | ✅ Pass |
| Phi-3 Mini | BPE | ✅ Pass |
| ModernBERT | BPE | ✅ Pass |
| CodeLlama 7B | BPE | ✅ Pass |
| DeepSeek-V3 | BPE | ✅ Pass |
| DeepSeek-R1 | BPE | ✅ Pass |
| Gemma 2 2B | SentencePiece BPE | ✅ Pass |
| Gemma 3 4B | SentencePiece BPE | ✅ Pass |
| Llama 3.2 1B | BPE | ✅ Pass |
| Llama 4 Scout | BPE | ✅ Pass |
| Mistral 7B | BPE | ✅ Pass |
| Mistral Nemo | BPE | ✅ Pass |
| Mixtral 8x7B | BPE | ✅ Pass |
| NV-Embed-v2 | SentencePiece BPE | ✅ Pass |
| Qwen2 7B | BPE | ✅ Pass |
| Qwen3 Embed 0.6B | BPE | ✅ Pass |
| Qwen3 Embed 4B | BPE | ✅ Pass |
| Qwen3 Embed 8B | BPE | ✅ Pass |
| Qwen3 0.6B | BPE | ✅ Pass |
| Qwen3 8B | BPE | ✅ Pass |
| Qwen3 Coder 30B | BPE | ✅ Pass |
| Qwen3.5 0.8B | BPE | ✅ Pass |
| Qwen3.5 4B | BPE | ✅ Pass |
| SmolLM2 135M | BPE | ✅ Pass |
| StableLM 2 1.6B | BPE | ✅ Pass |
| Nomic Embed v1 | WordPiece | ✅ Pass |
| BERT base | WordPiece | ✅ Pass |
| all-MiniLM-L6-v2 | WordPiece | ✅ Pass |
| all-MiniLM-L12-v2 | WordPiece | ✅ Pass |
| all-mpnet-base-v2 | WordPiece | ✅ Pass |
| BGE base en v1.5 | WordPiece | ✅ Pass |
| BGE large en v1.5 | WordPiece | ✅ Pass |
| BGE small en v1.5 | WordPiece | ✅ Pass |
| BGE en ICL | BPE | ✅ Pass |
| BGE M3 | SentencePiece BPE | ✅ Pass |
| E5 base v2 | WordPiece | ✅ Pass |
| E5 large v2 | WordPiece | ✅ Pass |
| E5 small v2 | WordPiece | ✅ Pass |
| GTE base | WordPiece | ✅ Pass |
| GTE large | WordPiece | ✅ Pass |
| GTE small | WordPiece | ✅ Pass |
| GTE Qwen2 7B | BPE | ✅ Pass |
| MS MARCO MiniLM L-4 | WordPiece | ✅ Pass |
| MS MARCO MiniLM L-6 | WordPiece | ✅ Pass |
| mxbai embed large v1 | WordPiece | ✅ Pass |
| mxbai embed 2d large v1 | WordPiece | ✅ Pass |
| mxbai embed xsmall v1 | WordPiece | ✅ Pass |
| deepset mxbai embed de large | Unigram | ✅ Pass |
| Jina v2 base en | BPE | ✅ Pass |
| Jina v2 base code | BPE | ✅ Pass |
| Jina v3 | Unigram | ✅ Pass |
| Jina v4 | BPE | ✅ Pass |
| Cohere embed english v3 | BPE | ✅ Pass |
| Cohere embed english light v3 | BPE | ✅ Pass |
| Cohere embed multilingual v3 | Unigram | ✅ Pass |
| Cohere embed multilingual light v3 | Unigram | ✅ Pass |
| Voyage 3 | BPE | ✅ Pass |
| Voyage 3 large | BPE | ✅ Pass |
| Voyage 3 lite | BPE | ✅ Pass |
| Voyage 3.5 | BPE | ✅ Pass |
| Voyage 3.5 lite | BPE | ✅ Pass |
| Voyage Code 2 | BPE | ✅ Pass |
| Voyage Code 3 | BPE | ✅ Pass |
| Voyage Finance 2 | BPE | ✅ Pass |
| Voyage Law 2 | BPE | ✅ Pass |
| Voyage Multilingual 2 | BPE | ✅ Pass |
| Voyage Multimodal 3 | BPE | ✅ Pass |
| Snowflake Arctic Embed v2 | SentencePiece BPE | ✅ Pass |
| T5 base | Unigram | ✅ Pass |
| XLM-RoBERTa | SentencePiece BPE | ✅ Pass |

Summary: 74 pass, 0 fail out of 74 tested. Every tokenizer produces identical output to HuggingFace.

Why tokie?

When I started building Chonkie, the biggest bottleneck wasn't chunking — it was tokenization. We were spending more time counting tokens than actually chunking text.

tokie uses hand-written parsers for each pretokenization pattern — GPT-2, cl100k, o200k, BERT — that understand the exact character classes needed without the overhead of a general-purpose regex engine. That alone gets you a 3.5x speedup on pretokenization.
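A hand-rolled character-class scanner can be sketched as follows. This toy version covers a simplified GPT-2-style pattern (an optional leading space joined to a run of letters, digits, or other characters); the real pretokenizers also handle contractions and full Unicode categories:

```python
def pretokenize_simple(text: str) -> list:
    """Toy hand-rolled pretokenizer scanning by character class."""
    out, i, n = [], 0, len(text)
    while i < n:
        start = i
        if text[i] == " " and i + 1 < n and not text[i + 1].isspace():
            i += 1  # a single leading space joins the next run
        if i < n and text[i].isalpha():
            while i < n and text[i].isalpha():
                i += 1  # letter run
        elif i < n and text[i].isdigit():
            while i < n and text[i].isdigit():
                i += 1  # digit run
        elif i < n and not text[i].isspace():
            while i < n and not text[i].isspace() and not text[i].isalnum():
                i += 1  # punctuation run
        else:
            while i < n and text[i].isspace():
                i += 1  # whitespace run
        out.append(text[start:i])
    return out

print(pretokenize_simple("Hello world 123!"))
# ['Hello', ' world', ' 123', '!']
```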

The second problem was that no single library could load everything. I actually tried to solve this before with AutoTikTokenizer, believing tiktoken's BPE engine could handle all of HuggingFace. I was wrong — you need fundamentally different algorithms for each encoder type: backtracking BPE for tiktoken-style models, heap-based BPE for models with non-topological merge orders, radix-heap BPE for SentencePiece, plus WordPiece and Unigram each with their own tricks.

The third insight was parallelism. Tokenization is embarrassingly parallel if you split text at the right boundaries. We use chunk to SIMD-split text into chunks that respect token boundaries, then encode each chunk on a separate core and concatenate. This gives near-linear scaling — about 5x on 8 cores.
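The split-then-encode-then-concatenate pattern can be sketched in pure Python. This toy version cuts at whitespace (so no token straddles a chunk boundary) and uses threads in place of the SIMD-accelerated, core-pinned implementation:

```python
from concurrent.futures import ThreadPoolExecutor

def split_at_boundaries(text: str, n_chunks: int) -> list:
    """Split near equal-size offsets, but only at whitespace."""
    target = max(1, len(text) // n_chunks)
    chunks, start = [], 0
    while start < len(text):
        end = min(start + target, len(text))
        while end < len(text) and not text[end].isspace():
            end += 1  # push the cut forward to the next whitespace
        chunks.append(text[start:end])
        start = end
    return chunks

def encode_parallel(text: str, encode, n_chunks: int = 4) -> list:
    """Encode each chunk concurrently, then concatenate the results."""
    chunks = split_at_boundaries(text, n_chunks)
    with ThreadPoolExecutor() as pool:
        parts = pool.map(encode, chunks)
    return [tok for part in parts for tok in part]

# With a whitespace "tokenizer", parallel output matches serial output.
text = "the quick brown fox jumps over the lazy dog"
print(encode_parallel(text, lambda s: s.split()))
```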

Finally, we built the .tkz format to eliminate load-time overhead. A tokenizer.json file has to be parsed, validated, and used to reconstruct all the internal data structures (including the Aho-Corasick automaton, which is expensive to build for large vocabularies). The .tkz format stores the pre-built DAAC automaton, vocabulary, and configuration as a flat binary — loading is just deserialization, no construction required. This cuts load times from 283ms to 118ms for large models like o200k, and from 101ms to 12ms for BERT.

The result is tokie — one tokenizer to rule them all.

Acknowledgements

tokie builds on ideas from HuggingFace tokenizers, tiktoken, GitHub's rust-gems (backtracking BPE via Aho-Corasick), and chunk (SIMD text splitting).

Citation

If you use tokie in your research, please cite it as follows:

```bibtex
@software{tokie2025,
  author = {Minhas, Bhavnick},
  title = {tokie: Fast, correct tokenizer library for every HuggingFace model},
  year = {2025},
  publisher = {GitHub},
  howpublished = {\url{https://github.com/chonkie-inc/tokie}},
}
```

License

Dual-licensed under Apache-2.0 (LICENSE-APACHE) and MIT (LICENSE-MIT).
