# Text2CAD-Bench
Text2CAD-Bench is the first comprehensive benchmark for evaluating text-to-CAD generation across geometric complexity and application diversity.
## News
- [2026.02] v1.0 released with a 30% preview of the prompts
- [Coming Soon] v1.1 will include additional evaluation scripts and expanded documentation
## Overview
Text2CAD-Bench comprises 600 human-curated examples organized into four benchmark levels:
| Level | Description | Examples | Key Features |
|---|---|---|---|
| L1 | Basic | 200 | Primitives, simple spatial relationships |
| L2 | Intermediate | 200 | Boolean operations, chamfer, fillet, patterns |
| L3 | Advanced | 100 | Sweep, loft, shell, complex surfaces |
| L4 | Real-world | 100 | Multi-domain applications |
Each example includes dual-style prompts:
- Geometric (Geo): Appearance-based descriptions mimicking non-expert users
- Sequence (Seq): Procedural descriptions aligned with expert-level CAD conventions
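As a purely hypothetical illustration of the two styles (the wording and dimensions below are invented, not taken from the benchmark), a single part might be described as:

```python
# Hypothetical prompt pair for one part; not an actual benchmark entry.
example = {
    # Geo: appearance-based, as a non-expert might describe the shape
    "geo": "A flat rectangular plate with a single round hole through its center.",
    # Seq: procedural, following expert CAD modeling conventions
    "seq": "Sketch a 40 x 20 mm rectangle on the XY plane, extrude it 5 mm, "
           "then cut a through-hole using an 8 mm diameter circle at the center.",
}
```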
## Dataset Structure
```
Text2CAD-Bench/
├── prompts/              # 30% sample prompts (preview)
│   ├── L1/
│   │   ├── L1_001_geo
│   │   ├── L1_001_seq
│   │   └── ...
│   ├── L2/
│   ├── L3/
│   └── L4/
├── evaluation/           # Evaluation scripts
│   ├── metrics.py
│   ├── evaluate.py
│   └── requirements.txt
├── examples/             # Example outputs
│   └── visualizations/
└── README.md
```
⚠️ Note: Ground truth STEP files are not publicly released to prevent benchmark contamination. The 30% prompt samples are provided to demonstrate the data distribution and format. For full benchmark access, please contact us.
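Assuming the layout shown above, with flat text files named `<level>_<id>_<style>` (an assumption based on the directory tree, not a documented API), the preview prompts can be loaded with a few lines of Python:

```python
from pathlib import Path


def load_prompts(root: str = "Text2CAD-Bench/prompts") -> dict:
    """Map (level, example_id, style) -> prompt text for the preview files."""
    prompts = {}
    for path in sorted(Path(root).rglob("L*_*_*")):
        if not path.is_file():
            continue
        # File names are assumed to follow <level>_<id>_<style>, e.g. L1_001_geo
        level, example_id, style = path.name.split("_", 2)
        prompts[(level, example_id, style)] = path.read_text(encoding="utf-8")
    return prompts
```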
## Leaderboard
Interactive leaderboard: see the leaderboard page for results sortable by metric.
Final results are weighted by sample count: L1 (200, 40%), L2 (200, 40%), L3 (100, 20%).
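The weighting above is a sample-count-weighted mean over L1–L3; a minimal sketch (the function name and call shape are illustrative, not part of the evaluation package):

```python
def weighted_overall(per_level_scores: dict, counts: dict = None) -> float:
    """Sample-count-weighted average of per-level scores (L1/L2/L3)."""
    counts = counts or {"L1": 200, "L2": 200, "L3": 100}
    total = sum(counts.values())
    return sum(per_level_scores[lvl] * counts[lvl] for lvl in counts) / total
```

For example, per-level scores of 10, 20, and 30 combine to (10·200 + 20·200 + 30·100) / 500 = 18.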
### General-purpose LLMs (Sorted by CD ↓)
| Rank | Model | CD ↓ | IR ↓ | IoU ↑ |
|---|---|---|---|---|
| 1 | GPT-5.2 | 63.97 | 30.6% | 0.45 |
| 2 | Claude-4.5-Sonnet | 66.90 | 41.3% | 0.43 |
| 3 | DeepSeek-V3.2 | 76.25 | 29.7% | 0.37 |
| 4 | MiniMax M2.11 | 83.16 | 42.7% | 0.37 |
| 5 | GLM-4.7 | 84.98 | 35.0% | 0.34 |
| 6 | Qwen3-max | 99.21 | 43.2% | 0.28 |
### Domain-specific Models (Sorted by CD ↓)
| Rank | Model | CD ↓ | IR ↓ | IoU ↑ |
|---|---|---|---|---|
| 1 | CADFusion | 224.35 | 60.5% | 0.03 |
| 2 | Text2CAD | 248.66 | 7.0% | 0.05 |
| 3 | Text2CADQuery | 250.27 | 51.0% | 0.04 |
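The tables report Chamfer Distance (CD), IR (read here as an invalid-generation rate), and volumetric IoU. As a point of reference, a brute-force symmetric Chamfer Distance between two sampled point clouds can be computed as below; the scaling, sampling density, and normalization used by the benchmark's `metrics.py` may differ:

```python
import numpy as np


def chamfer_distance(p: np.ndarray, q: np.ndarray) -> float:
    """Symmetric Chamfer Distance between point clouds p (N, 3) and q (M, 3)."""
    # Pairwise Euclidean distances via broadcasting, shape (N, M)
    d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=-1)
    # Mean nearest-neighbor distance in both directions
    return float(d.min(axis=1).mean() + d.min(axis=0).mean())
```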
## Quick Start
### Installation
```bash
git clone https://github.com/xxx/Text2CAD-Bench.git
cd Text2CAD-Bench
pip install -r evaluation/requirements.txt
```
### Evaluation
```python
from evaluation import evaluate

# Load your model outputs
results = evaluate(
    predictions_dir="path/to/your/outputs",
    metrics=["CD", "IR", "IoU"],
)
print(results.summary())
```
### Submit to Leaderboard
To submit your results to the leaderboard:
1. Run evaluation on the full benchmark by uploading your model.
2. Generate a results file using our evaluation script.
3. Submit it via Google Form or email.
```bash
python evaluation/generate_submission.py \
    --predictions_dir path/to/outputs \
    --output submission.json
```
## License
This work is licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0).
You are free to:
- Share: copy and redistribute the material in any medium or format
- Adapt: remix, transform, and build upon the material for any purpose, even commercially
Under the following terms:
- Attribution: You must give appropriate credit, provide a link to the license, and indicate if changes were made.
## Contact
- Email:
- Issues: Please use GitHub Issues for bug reports and feature requests
- Full benchmark access: Contact us with your affiliation and intended use
## Acknowledgements
We thank all annotators and reviewers who contributed to the construction of Text2CAD-Bench.
Text2CAD-Bench: A Benchmark for LLM-based Text-to-Parametric CAD Generation