
Byte-Level BPE Tokenizer: numeric (100K)

A Byte-Level BPE tokenizer trained on numeric data from Fineweb-2-HQ.

Training Details

Parameter             Value
Algorithm             Byte-Level BPE
Language              numeric
Target Vocab Size     100,007
Final Vocab Size      100,007
Pre-tokenizer         byte_level
Number handling       rtl_5digit (see the sketch below)
Contraction handling  False
Normalizer            NONE
Special Tokens        <s>, </s>, <pad>, <unk>
Training Shards       1
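
The rtl_5digit number-handling setting is not documented in detail here. As a rough illustration, assuming it means digit runs are chunked right-to-left into groups of at most five digits before BPE merges apply, a hypothetical helper might look like the following (the tokenizer's actual pre-tokenization rules live in tokenizer.json):

import re

def rtl_5digit_split(text: str) -> list[str]:
    # Hypothetical sketch of 'rtl_5digit' number handling: each digit run
    # is chunked from the right into groups of at most five digits, so any
    # short leftover chunk lands at the front ("123500119" -> "1235", "00119").
    pieces = []
    for part in re.split(r"(\d+)", text):
        if part.isdigit():
            chunks = []
            i = len(part)
            while i > 0:
                chunks.append(part[max(0, i - 5):i])
                i -= 5
            pieces.extend(reversed(chunks))
        elif part:
            pieces.append(part)
    return pieces

print(rtl_5digit_split("123500119 mod 67"))
# ['1235', '00119', ' mod ', '67']; BPE merges then split "00119" into "00" + "119"

This is consistent with the sample encoding below, where the 9-digit number "123500119" starts with the token "1235" rather than a left-to-right 5-digit chunk.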

Usage

from transformers import AutoTokenizer

# Load the tokenizer from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("flexitok/mod-tokenizers-rtl_5digit")
tokens = tokenizer.encode("123500119 mod 67")  # numeric input, matching the sample below

Files

  • tokenizer.json — Full HuggingFace tokenizer
  • vocab.json — Vocabulary mapping
  • merges.txt — BPE merge rules
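
The serialized tokenizer.json can also be loaded directly with the tokenizers library, independent of transformers. A minimal sketch, assuming a local copy of the file:

from tokenizers import Tokenizer

# Load the full serialized tokenizer from a local tokenizer.json
tok = Tokenizer.from_file("tokenizer.json")
enc = tok.encode("123500119 mod 67")
print(enc.tokens)  # surface forms of the tokens
print(enc.ids)     # their vocabulary IDs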

Sample Encoding

Text              Tokens                                      Token IDs
123500119 mod 67  "1235", "00", "119", " ", "mod", " ", "67"  2294, 87, 134, 6, 4, 6, 53

(The " " entries are single-space tokens, both mapping to ID 6.)
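
The row above can be reproduced with the tokenizer loaded in the Usage section; note that byte-level tokenizers typically report space tokens with the Ġ marker:

ids = tokenizer.encode("123500119 mod 67", add_special_tokens=False)
print(ids)                                   # [2294, 87, 134, 6, 4, 6, 53]
print(tokenizer.convert_ids_to_tokens(ids))  # space tokens render as 'Ġ'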