| problem_id (string, 1-66 chars) | category (string, 2 classes) | statement (string, up to 1.07M chars) | config (string, 20-370 chars) |
|---|---|---|---|
vdb_pareto/high_recall
|
research
|
VDB Design Problem - High Recall Tier
======================================
Problem Setting
---------------
Design a Vector Database index optimized for **recall** subject to a **relaxed latency constraint**. This tier uses latency-gated scoring: solutions exceeding the latency threshold receive zero points, while solutions meeting the constraint are scored purely by recall@1.
**Optimization Goal**: Maximize recall@1 within latency constraint
$$
\text{score} = \begin{cases}
0 & \text{if } t_{\text{query}} > t_{\text{max}} \\
100 & \text{if } t_{\text{query}} \leq t_{\text{max}} \text{ and } r \geq r_{\text{baseline}} \\
100 \cdot \frac{r - r_{\text{min}}}{r_{\text{baseline}} - r_{\text{min}}} & \text{if } t_{\text{query}} \leq t_{\text{max}} \text{ and } r < r_{\text{baseline}}
\end{cases}
$$
Where:
- $r$: Your recall@1
- $t_{\text{query}}$: Your average query latency (ms)
- $r_{\text{baseline}} = 0.9914$ (baseline recall)
- $r_{\text{min}} = 0.9409$ (minimum acceptable recall, 95% of baseline)
- $t_{\text{max}} = 7.7\text{ms}$ (maximum allowed latency, 200% of baseline 3.85ms)
**Key Insight**: This tier grants 2× the baseline latency budget (7.7ms vs 3.85ms), compared with 1.5× in the balanced tier, leaving room for more thorough search and higher recall.
Baseline Performance
--------------------
- Recall@1: **0.9914** (99.14%)
- Avg query time: **3.85ms**
- Baseline score: **100** (recall equals baseline within latency constraint)
Scoring Examples
----------------
Unless noted otherwise, these examples assume the latency constraint is met ($t \leq 7.7\text{ms}$); the final row shows the gate failing:
| Recall@1 | Latency | Score Calculation | Score |
|----------|---------|-------------------|-------|
| 0.9914 | 3.85ms | $r = r_{\text{baseline}}$ → max score | **100** |
| 0.9950 | 5.00ms | $r > r_{\text{baseline}}$ → max score | **100** |
| 0.9700 | 6.00ms | $\frac{0.97 - 0.9409}{0.9914 - 0.9409} = 0.576$ | **57.6** |
| 0.9500 | 4.00ms | $\frac{0.95 - 0.9409}{0.9914 - 0.9409} = 0.180$ | **18.0** |
| 0.9409 | 7.00ms | $r = r_{\text{min}}$ → minimum score | **0** |
| 0.9914 | **8.00ms** | $t > t_{\text{max}}$ → latency gate fails | **0** |
**Note**: The relaxed latency constraint (7.7ms vs 5.775ms in balanced) allows more aggressive search strategies for higher recall.
API Specification
-----------------
Implement a class with the following interface:
```python
import numpy as np
from typing import Tuple
class YourIndexClass:
def __init__(self, dim: int, **kwargs):
"""
Initialize the index for vectors of dimension `dim`.
Args:
dim: Vector dimensionality (e.g., 128 for SIFT1M)
**kwargs: Optional parameters (e.g., M, ef_construction for HNSW)
Example:
index = YourIndexClass(dim=128, M=64, ef_search=800)
"""
pass
def add(self, xb: np.ndarray) -> None:
"""
Add vectors to the index.
Args:
xb: Base vectors, shape (N, dim), dtype float32
Notes:
- Can be called multiple times (cumulative)
- Must handle large N (e.g., 1,000,000 vectors)
Example:
index.add(xb) # xb.shape = (1000000, 128)
"""
pass
def search(self, xq: np.ndarray, k: int) -> Tuple[np.ndarray, np.ndarray]:
"""
Search for k nearest neighbors of query vectors.
Args:
xq: Query vectors, shape (nq, dim), dtype float32
k: Number of nearest neighbors to return
Returns:
(distances, indices):
- distances: shape (nq, k), dtype float32, L2 distances
- indices: shape (nq, k), dtype int64, indices into base vectors
Notes:
- Must return exactly k neighbors per query
- Indices should refer to positions in the vectors passed to add()
- Lower distance = more similar
Example:
D, I = index.search(xq, k=1) # xq.shape = (10000, 128)
# D.shape = (10000, 1), I.shape = (10000, 1)
"""
pass
```
**Implementation Requirements**:
- Class can have any name (evaluator auto-discovers classes with `add` and `search` methods)
- Must handle SIFT1M dataset: 1M base vectors, 10K queries, 128 dimensions
- Your `search` must return tuple `(distances, indices)` with shapes `(nq, k)`
- Distances should be L2 (Euclidean) or L2-squared
- No need to handle dataset loading - evaluator provides numpy arrays
Evaluation Process
------------------
The evaluator follows these steps:
### 1. Load Dataset
```python
from faiss.contrib.datasets import DatasetSIFT1M
ds = DatasetSIFT1M()
xb = ds.get_database() # (1000000, 128) float32
xq = ds.get_queries() # (10000, 128) float32
gt = ds.get_groundtruth() # (10000, 100) int64 - ground truth indices
```
### 2. Build Index
```python
from solution import YourIndexClass # Auto-discovered
d = xb.shape[1] # 128 for SIFT1M
index = YourIndexClass(d) # Pass dimension as first argument
index.add(xb) # Add all 1M base vectors
```
### 3. Measure Performance (Batch Queries)
```python
import time
t0 = time.time()
D, I = index.search(xq, k=1) # Search all 10K queries at once
t1 = time.time()
# Calculate metrics
recall_at_1 = (I[:, :1] == gt[:, :1]).sum() / len(xq)
avg_query_time_ms = (t1 - t0) * 1000.0 / len(xq)
```
**Important**: `avg_query_time_ms` from **batch queries** is used for scoring. Batch queries benefit from CPU cache reuse and vectorization, and are typically faster than issuing queries one at a time.
### 4. Calculate Score
```python
if avg_query_time_ms > 7.7:
score = 0.0
elif recall_at_1 >= 0.9914:
score = 100.0
else:
recall_range = 0.9914 - 0.9409
recall_proportion = (recall_at_1 - 0.9409) / recall_range
score = max(0.0, min(100.0, 100.0 * recall_proportion))
```
Dataset Details
---------------
- **Name**: SIFT1M
- **Base vectors**: 1,000,000 vectors of dimension 128
- **Query vectors**: 10,000 vectors
- **Ground truth**: Precomputed top-100 nearest neighbors per query (evaluation uses only the top-1)
- **Metric**: L2 (Euclidean distance)
- **Vector type**: float32
Runtime Platform
----------------
- **Infrastructure**: Evaluations run on SkyPilot-managed cloud instances (AWS, GCP, or Azure)
- **Compute**: CPU-only instances (no GPU required)
- **Environment**: Docker containerized execution with Python 3, NumPy ≥1.24, FAISS-CPU ≥1.7.4
Constraints
-----------
- **Timeout**: 1 hour for entire evaluation (index construction + queries)
- **Memory**: Use reasonable memory (index should fit in RAM)
- **Latency constraint**: avg_query_time_ms ≤ 7.7ms
- **Recall range**: 0.9409 ≤ recall@1 ≤ 1.0
Strategy Tips
-------------
1. **Maximize recall**: Use the relaxed latency budget (7.7ms, vs 5.775ms in the balanced tier) for more thorough search
2. **Batch optimization is key**: Your `search` should handle batch queries efficiently
3. **Parameter tuning for recall**: Higher HNSW efSearch (500-1000) or IVF nprobe (100-200); see the sketch after this list
4. **Trade latency for accuracy**: Unlike balanced tier, you can afford slower but more accurate search
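As a concrete starting point for tip 3, here is a minimal sketch using FAISS (available in the evaluation environment). The class name and parameter values (`M=32`, `ef_construction=200`, `ef_search=512`) are illustrative assumptions, not tuned settings; sweep `efSearch` upward until you approach the 7.7ms gate.
```python
import numpy as np
import faiss

class HNSWRecallIndex:
    """Sketch: HNSW graph index biased toward recall under the relaxed budget."""
    def __init__(self, dim: int, M: int = 32, ef_construction: int = 200,
                 ef_search: int = 512, **kwargs):
        self.index = faiss.IndexHNSWFlat(dim, M)      # L2 metric by default
        self.index.hnsw.efConstruction = ef_construction
        self.index.hnsw.efSearch = ef_search          # higher -> better recall, slower queries

    def add(self, xb: np.ndarray) -> None:
        self.index.add(np.ascontiguousarray(xb, dtype=np.float32))

    def search(self, xq: np.ndarray, k: int):
        # FAISS returns float32 squared-L2 distances and int64 indices,
        # which the evaluator accepts (L2 or L2-squared)
        return self.index.search(np.ascontiguousarray(xq, dtype=np.float32), k)
```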
Example: Simple Baseline
-------------------------
```python
import numpy as np
class SimpleIndex:
def __init__(self, dim: int, **kwargs):
self.dim = dim
self.xb = None
def add(self, xb: np.ndarray) -> None:
if self.xb is None:
self.xb = xb.copy()
else:
self.xb = np.vstack([self.xb, xb])
    def search(self, xq: np.ndarray, k: int) -> tuple:
        # Exact L2 distances via the identity ||x - y||^2 = ||x||^2 - 2 x.y + ||y||^2,
        # which avoids materializing an (nq, N, dim) broadcast intermediate.
        # xq: (nq, dim), xb: (N, dim) -> distances: (nq, N)
        sq_xq = (xq ** 2).sum(axis=1, keepdims=True)
        sq_xb = (self.xb ** 2).sum(axis=1)
        sq_dists = sq_xq - 2.0 * (xq @ self.xb.T) + sq_xb
        np.maximum(sq_dists, 0.0, out=sq_dists)  # clamp tiny negatives from rounding
        distances = np.sqrt(sq_dists)
        # Select k nearest: cheap partial selection, then sort only the k candidates
        rows = np.arange(len(xq))[:, None]
        indices = np.argpartition(distances, k - 1, axis=1)[:, :k]
        order = np.argsort(distances[rows, indices], axis=1)
        final_indices = indices[rows, order]
        final_distances = distances[rows, final_indices]
        return final_distances.astype(np.float32), final_indices.astype(np.int64)
```
**Note**: This baseline computes exact neighbors (100% recall) but is far too slow at full scale, and its (nq, N) distance matrix alone would occupy ~40 GB for the 10K × 1M benchmark (chunk the queries if you run it). Use approximate methods like HNSW, IVF, or LSH for better speed-recall tradeoffs.
Debugging Tips
--------------
- **Test locally**: Use a subset of data (e.g., 10K vectors) for faster iteration; a harness sketch follows this list
- **Verify shapes**: Ensure `search` returns `(nq, k)` shaped arrays
- **Check recall calculation**: `(I[:, :1] == gt[:, :1]).sum() / len(xq)`
- **Profile latency**: Measure batch vs single query performance separately
- **Validate before submit**: Run full 1M dataset locally if possible
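Expanding on the first tip, a minimal local-validation sketch; the helper name, subset size, and use of a brute-force FAISS index for ground truth are illustrative assumptions (FAISS is available in the evaluation environment):
```python
import time
import numpy as np
import faiss

def validate_locally(index_cls, xb: np.ndarray, xq: np.ndarray,
                     k: int = 1, n_sub: int = 100_000) -> None:
    """Build index_cls on a subset of the base vectors and report recall/latency."""
    sub = np.ascontiguousarray(xb[:n_sub], dtype=np.float32)
    # Exact ground truth on the subset via brute force
    flat = faiss.IndexFlatL2(sub.shape[1])
    flat.add(sub)
    _, gt = flat.search(xq, k)
    # Candidate index under test
    index = index_cls(sub.shape[1])
    index.add(sub)
    t0 = time.time()
    _, I = index.search(xq, k)
    t1 = time.time()
    recall = (I[:, :1] == gt[:, :1]).sum() / len(xq)
    print(f"recall@1={recall:.4f}, avg_query_ms={(t1 - t0) * 1000.0 / len(xq):.3f}")
```
Latency on a 100K subset understates full-scale latency, so treat the timing only as a lower bound.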
|
{
"dependencies": {
"uv_project": "resources"
},
"datasets": [
{
"type": "local_tar",
"path": "resources/sift.tar.gz",
"target": "data/sift1M",
"expected_glob": "*.fvecs"
}
],
"runtime": {
"timeout_seconds": 3600
},
"tag": "db"
}
|
vdb_pareto/low_latency
|
research
|
VDB Design Problem - Low Latency Tier
======================================
Problem Setting
---------------
Design a Vector Database index optimized for **recall** subject to a **strict latency constraint**. This tier uses latency-gated scoring: solutions exceeding the latency threshold receive zero points, while solutions meeting the constraint are scored purely by recall@1.
**Optimization Goal**: Maximize recall@1 within latency constraint
$$
\text{score} = \begin{cases}
0 & \text{if } t_{\text{query}} > t_{\text{max}} \\
100 & \text{if } t_{\text{query}} \leq t_{\text{max}} \text{ and } r \geq r_{\text{baseline}} \\
100 \cdot \frac{r - r_{\text{min}}}{r_{\text{baseline}} - r_{\text{min}}} & \text{if } t_{\text{query}} \leq t_{\text{max}} \text{ and } r < r_{\text{baseline}}
\end{cases}
$$
Where:
- $r$: Your recall@1
- $t_{\text{query}}$: Your average query latency (ms)
- $r_{\text{baseline}} = 0.9914$ (baseline recall)
- $r_{\text{min}} = 0.7931$ (minimum acceptable recall, 80% of baseline)
- $t_{\text{max}} = 2.31\text{ms}$ (maximum allowed latency, 60% of baseline 3.85ms)
**Key Insight**: This tier has a very strict latency constraint (60% of baseline), requiring aggressive approximation while maintaining reasonable recall.
Baseline Performance
--------------------
- Recall@1: **0.9914** (99.14%)
- Avg query time: **3.85ms**
- Baseline score: **100** (recall equals baseline within latency constraint)
Scoring Examples
----------------
Unless noted otherwise, these examples assume the latency constraint is met ($t \leq 2.31\text{ms}$); the final row shows the gate failing:
| Recall@1 | Latency | Score Calculation | Score |
|----------|---------|-------------------|-------|
| 0.9914 | 2.00ms | $r = r_{\text{baseline}}$ → max score | **100** |
| 0.9500 | 2.00ms | $\frac{0.95 - 0.7931}{0.9914 - 0.7931} = 0.791$ | **79.1** |
| 0.9000 | 1.50ms | $\frac{0.90 - 0.7931}{0.9914 - 0.7931} = 0.539$ | **53.9** |
| 0.8500 | 1.00ms | $\frac{0.85 - 0.7931}{0.9914 - 0.7931} = 0.287$ | **28.7** |
| 0.7931 | 2.00ms | $r = r_{\text{min}}$ → minimum score | **0** |
| 0.9500 | **2.50ms** | $t > t_{\text{max}}$ → latency gate fails | **0** |
**Note**: The strict latency constraint (2.31ms vs 5.775ms in balanced) requires aggressive approximation, typically resulting in lower recall.
API Specification
-----------------
Implement a class with the following interface:
```python
import numpy as np
from typing import Tuple
class YourIndexClass:
def __init__(self, dim: int, **kwargs):
"""
Initialize the index for vectors of dimension `dim`.
Args:
dim: Vector dimensionality (e.g., 128 for SIFT1M)
**kwargs: Optional parameters (e.g., M, ef_construction for HNSW)
Example:
index = YourIndexClass(dim=128, M=16, ef_search=80)
"""
pass
def add(self, xb: np.ndarray) -> None:
"""
Add vectors to the index.
Args:
xb: Base vectors, shape (N, dim), dtype float32
Notes:
- Can be called multiple times (cumulative)
- Must handle large N (e.g., 1,000,000 vectors)
Example:
index.add(xb) # xb.shape = (1000000, 128)
"""
pass
def search(self, xq: np.ndarray, k: int) -> Tuple[np.ndarray, np.ndarray]:
"""
Search for k nearest neighbors of query vectors.
Args:
xq: Query vectors, shape (nq, dim), dtype float32
k: Number of nearest neighbors to return
Returns:
(distances, indices):
- distances: shape (nq, k), dtype float32, L2 distances
- indices: shape (nq, k), dtype int64, indices into base vectors
Notes:
- Must return exactly k neighbors per query
- Indices should refer to positions in the vectors passed to add()
- Lower distance = more similar
Example:
D, I = index.search(xq, k=1) # xq.shape = (10000, 128)
# D.shape = (10000, 1), I.shape = (10000, 1)
"""
pass
```
**Implementation Requirements**:
- Class can have any name (evaluator auto-discovers classes with `add` and `search` methods)
- Must handle SIFT1M dataset: 1M base vectors, 10K queries, 128 dimensions
- Your `search` must return tuple `(distances, indices)` with shapes `(nq, k)`
- Distances should be L2 (Euclidean) or L2-squared
- No need to handle dataset loading - evaluator provides numpy arrays
Evaluation Process
------------------
The evaluator follows these steps:
### 1. Load Dataset
```python
from faiss.contrib.datasets import DatasetSIFT1M
ds = DatasetSIFT1M()
xb = ds.get_database() # (1000000, 128) float32
xq = ds.get_queries() # (10000, 128) float32
gt = ds.get_groundtruth() # (10000, 100) int64 - ground truth indices
```
### 2. Build Index
```python
from solution import YourIndexClass # Auto-discovered
d = xb.shape[1] # 128 for SIFT1M
index = YourIndexClass(d) # Pass dimension as first argument
index.add(xb) # Add all 1M base vectors
```
### 3. Measure Performance (Batch Queries)
```python
import time
t0 = time.time()
D, I = index.search(xq, k=1) # Search all 10K queries at once
t1 = time.time()
# Calculate metrics
recall_at_1 = (I[:, :1] == gt[:, :1]).sum() / len(xq)
avg_query_time_ms = (t1 - t0) * 1000.0 / len(xq)
```
**Important**: `avg_query_time_ms` from **batch queries** is used for scoring. Batch queries benefit from CPU cache reuse and vectorization, and are typically faster than issuing queries one at a time.
### 4. Calculate Score
```python
if avg_query_time_ms > 2.31:
score = 0.0
elif recall_at_1 >= 0.9914:
score = 100.0
else:
recall_range = 0.9914 - 0.7931
recall_proportion = (recall_at_1 - 0.7931) / recall_range
score = max(0.0, min(100.0, 100.0 * recall_proportion))
```
Dataset Details
---------------
- **Name**: SIFT1M
- **Base vectors**: 1,000,000 vectors of dimension 128
- **Query vectors**: 10,000 vectors
- **Ground truth**: Precomputed top-100 nearest neighbors per query (evaluation uses only the top-1)
- **Metric**: L2 (Euclidean distance)
- **Vector type**: float32
Runtime Platform
----------------
- **Infrastructure**: Evaluations run on SkyPilot-managed cloud instances (AWS, GCP, or Azure)
- **Compute**: CPU-only instances (no GPU required)
- **Environment**: Docker containerized execution with Python 3, NumPy ≥1.24, FAISS-CPU ≥1.7.4
Constraints
-----------
- **Timeout**: 1 hour for entire evaluation (index construction + queries)
- **Memory**: Use reasonable memory (index should fit in RAM)
- **Latency constraint**: avg_query_time_ms ≤ 2.31ms
- **Recall range**: 0.7931 ≤ recall@1 ≤ 1.0
Strategy Tips
-------------
1. **Aggressive approximation**: Use very low search budgets (IVF nprobe=2-5, HNSW efSearch=50-100); a minimal IVF sketch follows this list
2. **Batch optimization is key**: Your `search` should handle batch queries efficiently
3. **Accept recall drops**: 80-90% recall is acceptable if latency is met
4. **Leave safety margin**: Target 1.5-2.0ms to avoid edge cases exceeding 2.31ms
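As a starting point for tips 1 and 4, a minimal IVF-Flat sketch; `nlist=4096` and `nprobe=8` are assumptions to tune against the 2.31ms gate, not recommended settings.
```python
import numpy as np
import faiss

class IVFLatencyIndex:
    """Sketch: IVF-Flat with a small nprobe for the strict latency budget."""
    def __init__(self, dim: int, nlist: int = 4096, nprobe: int = 8, **kwargs):
        self.quantizer = faiss.IndexFlatL2(dim)   # coarse quantizer (kept alive on self)
        self.index = faiss.IndexIVFFlat(self.quantizer, dim, nlist, faiss.METRIC_L2)
        self.index.nprobe = nprobe                # fewer probed lists -> lower latency

    def add(self, xb: np.ndarray) -> None:
        xb = np.ascontiguousarray(xb, dtype=np.float32)
        if not self.index.is_trained:
            self.index.train(xb)                  # k-means over the base vectors
        self.index.add(xb)

    def search(self, xq: np.ndarray, k: int):
        return self.index.search(np.ascontiguousarray(xq, dtype=np.float32), k)
```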
Example: Simple Baseline
-------------------------
```python
import numpy as np
class SimpleIndex:
def __init__(self, dim: int, **kwargs):
self.dim = dim
self.xb = None
def add(self, xb: np.ndarray) -> None:
if self.xb is None:
self.xb = xb.copy()
else:
self.xb = np.vstack([self.xb, xb])
    def search(self, xq: np.ndarray, k: int) -> tuple:
        # Exact L2 distances via the identity ||x - y||^2 = ||x||^2 - 2 x.y + ||y||^2,
        # which avoids materializing an (nq, N, dim) broadcast intermediate.
        # xq: (nq, dim), xb: (N, dim) -> distances: (nq, N)
        sq_xq = (xq ** 2).sum(axis=1, keepdims=True)
        sq_xb = (self.xb ** 2).sum(axis=1)
        sq_dists = sq_xq - 2.0 * (xq @ self.xb.T) + sq_xb
        np.maximum(sq_dists, 0.0, out=sq_dists)  # clamp tiny negatives from rounding
        distances = np.sqrt(sq_dists)
        # Select k nearest: cheap partial selection, then sort only the k candidates
        rows = np.arange(len(xq))[:, None]
        indices = np.argpartition(distances, k - 1, axis=1)[:, :k]
        order = np.argsort(distances[rows, indices], axis=1)
        final_indices = indices[rows, order]
        final_distances = distances[rows, final_indices]
        return final_distances.astype(np.float32), final_indices.astype(np.int64)
```
**Note**: This baseline computes exact neighbors (100% recall) but is far too slow at full scale, and its (nq, N) distance matrix alone would occupy ~40 GB for the 10K × 1M benchmark (chunk the queries if you run it). Use approximate methods like HNSW, IVF, or LSH for better speed-recall tradeoffs.
Debugging Tips
--------------
- **Test locally**: Use a subset of data (e.g., 10K vectors) for faster iteration
- **Verify shapes**: Ensure `search` returns `(nq, k)` shaped arrays
- **Check recall calculation**: `(I[:, :1] == gt[:, :1]).sum() / len(xq)`
- **Profile latency**: Measure batch vs single query performance separately
- **Validate before submit**: Run full 1M dataset locally if possible
|
{
"dependencies": {
"uv_project": "resources"
},
"datasets": [
{
"type": "local_tar",
"path": "resources/sift.tar.gz",
"target": "data/sift1M",
"expected_glob": "*.fvecs"
}
],
"runtime": {
"timeout_seconds": 3600
},
"tag": "db"
}
|
vdb_pareto/recall80_latency
|
research
|
VDB Design Problem - Recall80 Latency Tier
===========================================
Problem Setting
---------------
Design a Vector Database index optimized for **latency** subject to a **recall constraint**. This tier uses recall-gated scoring: solutions failing to meet the recall threshold receive zero points, while solutions meeting the constraint are scored purely by latency.
**Optimization Goal**: Minimize latency within recall constraint
$$
\text{score} = \begin{cases}
0 & \text{if } r < r_{\text{gate}} \\
100 & \text{if } r \geq r_{\text{gate}} \text{ and } t_{\text{query}} \leq t_{\text{min}} \\
100 \cdot \frac{t_{\text{max}} - t_{\text{query}}}{t_{\text{max}} - t_{\text{min}}} & \text{if } r \geq r_{\text{gate}} \text{ and } t_{\text{min}} < t_{\text{query}} < t_{\text{max}} \\
0 & \text{if } r \geq r_{\text{gate}} \text{ and } t_{\text{query}} \geq t_{\text{max}}
\end{cases}
$$
Where:
- $r$: Your recall@1
- $t_{\text{query}}$: Your average query latency (ms)
- $r_{\text{gate}} = 0.80$ (minimum required recall)
- $t_{\text{min}} = 0.0\text{ms}$ (best possible latency)
- $t_{\text{max}} = 0.6\text{ms}$ (maximum allowed latency)
**Key Insight**: Unlike other tiers, this tier gates on recall and scores on latency. You MUST achieve ≥80% recall, then faster is better.
Baseline Performance
--------------------
- Recall@1: **0.9914** (99.14%)
- Avg query time: **3.85ms**
Scoring Examples
----------------
Unless noted otherwise, these examples assume the recall constraint is met ($r \geq 0.80$); the final two rows show the recall gate and the latency cap failing:
| Recall@1 | Latency | Score Calculation | Score |
|----------|---------|-------------------|-------|
| 0.85 | 0.00ms | $t \leq t_{\text{min}}$ → max score | **100** |
| 0.85 | 0.30ms | $\frac{0.6 - 0.3}{0.6 - 0.0} = 0.50$ | **50** |
| 0.82 | 0.50ms | $\frac{0.6 - 0.5}{0.6 - 0.0} = 0.167$ | **16.7** |
| 0.90 | 0.10ms | $\frac{0.6 - 0.1}{0.6 - 0.0} = 0.833$ | **83.3** |
| **0.75** | 0.20ms | $r < r_{\text{gate}}$ → recall gate fails | **0** |
| 0.95 | **0.70ms** | $t \geq t_{\text{max}}$ → latency too high | **0** |
**Note**: This is the most aggressive latency requirement (0.6ms max). You must use extreme approximation while maintaining 80% recall.
API Specification
-----------------
Implement a class with the following interface:
```python
import numpy as np
from typing import Tuple
class YourIndexClass:
def __init__(self, dim: int, **kwargs):
"""
Initialize the index for vectors of dimension `dim`.
Args:
dim: Vector dimensionality (e.g., 128 for SIFT1M)
**kwargs: Optional parameters (e.g., M, ef_construction for HNSW)
Example:
index = YourIndexClass(dim=128, nlist=256, nprobe=2)
"""
pass
def add(self, xb: np.ndarray) -> None:
"""
Add vectors to the index.
Args:
xb: Base vectors, shape (N, dim), dtype float32
Notes:
- Can be called multiple times (cumulative)
- Must handle large N (e.g., 1,000,000 vectors)
Example:
index.add(xb) # xb.shape = (1000000, 128)
"""
pass
def search(self, xq: np.ndarray, k: int) -> Tuple[np.ndarray, np.ndarray]:
"""
Search for k nearest neighbors of query vectors.
Args:
xq: Query vectors, shape (nq, dim), dtype float32
k: Number of nearest neighbors to return
Returns:
(distances, indices):
- distances: shape (nq, k), dtype float32, L2 distances
- indices: shape (nq, k), dtype int64, indices into base vectors
Notes:
- Must return exactly k neighbors per query
- Indices should refer to positions in the vectors passed to add()
- Lower distance = more similar
Example:
D, I = index.search(xq, k=1) # xq.shape = (10000, 128)
# D.shape = (10000, 1), I.shape = (10000, 1)
"""
pass
```
**Implementation Requirements**:
- Class can have any name (evaluator auto-discovers classes with `add` and `search` methods)
- Must handle SIFT1M dataset: 1M base vectors, 10K queries, 128 dimensions
- Your `search` must return tuple `(distances, indices)` with shapes `(nq, k)`
- Distances should be L2 (Euclidean) or L2-squared
- No need to handle dataset loading - evaluator provides numpy arrays
Evaluation Process
------------------
The evaluator follows these steps:
### 1. Load Dataset
```python
from faiss.contrib.datasets import DatasetSIFT1M
ds = DatasetSIFT1M()
xb = ds.get_database() # (1000000, 128) float32
xq = ds.get_queries() # (10000, 128) float32
gt = ds.get_groundtruth() # (10000, 100) int64 - ground truth indices
```
### 2. Build Index
```python
from solution import YourIndexClass # Auto-discovered
d = xb.shape[1] # 128 for SIFT1M
index = YourIndexClass(d) # Pass dimension as first argument
index.add(xb) # Add all 1M base vectors
```
### 3. Measure Performance (Batch Queries)
```python
import time
t0 = time.time()
D, I = index.search(xq, k=1) # Search all 10K queries at once
t1 = time.time()
# Calculate metrics
recall_at_1 = (I[:, :1] == gt[:, :1]).sum() / len(xq)
avg_query_time_ms = (t1 - t0) * 1000.0 / len(xq)
```
**Important**: `avg_query_time_ms` from **batch queries** is used for scoring. Batch queries benefit from CPU cache reuse and vectorization, and are typically faster than issuing queries one at a time.
### 4. Calculate Score
```python
if recall_at_1 < 0.80:
score = 0.0
elif avg_query_time_ms <= 0.0:
score = 100.0
elif avg_query_time_ms >= 0.6:
score = 0.0
else:
proportion = (avg_query_time_ms - 0.0) / (0.6 - 0.0)
score = 100.0 * (1.0 - proportion)
```
Dataset Details
---------------
- **Name**: SIFT1M
- **Base vectors**: 1,000,000 vectors of dimension 128
- **Query vectors**: 10,000 vectors
- **Ground truth**: Precomputed top-100 nearest neighbors per query (evaluation uses only the top-1)
- **Metric**: L2 (Euclidean distance)
- **Vector type**: float32
Runtime Platform
----------------
- **Infrastructure**: Evaluations run on SkyPilot-managed cloud instances (AWS, GCP, or Azure)
- **Compute**: CPU-only instances (no GPU required)
- **Environment**: Docker containerized execution with Python 3, NumPy ≥1.24, FAISS-CPU ≥1.7.4
Constraints
-----------
- **Timeout**: 1 hour for entire evaluation (index construction + queries)
- **Memory**: Use reasonable memory (index should fit in RAM)
- **Recall constraint**: recall@1 ≥ 0.80
- **Latency range**: 0.0ms ≤ avg_query_time_ms ≤ 0.6ms
Strategy Tips
-------------
1. **Meet recall gate first**: Ensure ≥80% recall, otherwise score = 0
2. **Extreme approximation**: Use minimal search budget (IVF nprobe=1-3); see the sketch after this list
3. **Batch optimization critical**: 0.6ms is extremely tight, every microsecond counts
4. **Trade recall for speed**: 80-85% recall with ultra-low latency is ideal
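A hedged sketch for this regime: many coarse cells and a tiny `nprobe`. Both values are assumptions; verify the 0.80 recall gate empirically before shrinking `nprobe` further.
```python
import numpy as np
import faiss

class UltraFastIVFIndex:
    """Sketch: large nlist + tiny nprobe, aiming under the 0.6ms/query gate."""
    def __init__(self, dim: int, nlist: int = 16384, nprobe: int = 2, **kwargs):
        self.quantizer = faiss.IndexFlatL2(dim)   # coarse quantizer (kept alive on self)
        self.index = faiss.IndexIVFFlat(self.quantizer, dim, nlist, faiss.METRIC_L2)
        self.index.nprobe = nprobe                # scan nprobe in {1, 2, 3, 4} locally

    def add(self, xb: np.ndarray) -> None:
        xb = np.ascontiguousarray(xb, dtype=np.float32)
        if not self.index.is_trained:
            self.index.train(xb)
        self.index.add(xb)

    def search(self, xq: np.ndarray, k: int):
        return self.index.search(np.ascontiguousarray(xq, dtype=np.float32), k)
```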
Example: Simple Baseline
-------------------------
```python
import numpy as np
class SimpleIndex:
def __init__(self, dim: int, **kwargs):
self.dim = dim
self.xb = None
def add(self, xb: np.ndarray) -> None:
if self.xb is None:
self.xb = xb.copy()
else:
self.xb = np.vstack([self.xb, xb])
    def search(self, xq: np.ndarray, k: int) -> tuple:
        # Exact L2 distances via the identity ||x - y||^2 = ||x||^2 - 2 x.y + ||y||^2,
        # which avoids materializing an (nq, N, dim) broadcast intermediate.
        # xq: (nq, dim), xb: (N, dim) -> distances: (nq, N)
        sq_xq = (xq ** 2).sum(axis=1, keepdims=True)
        sq_xb = (self.xb ** 2).sum(axis=1)
        sq_dists = sq_xq - 2.0 * (xq @ self.xb.T) + sq_xb
        np.maximum(sq_dists, 0.0, out=sq_dists)  # clamp tiny negatives from rounding
        distances = np.sqrt(sq_dists)
        # Select k nearest: cheap partial selection, then sort only the k candidates
        rows = np.arange(len(xq))[:, None]
        indices = np.argpartition(distances, k - 1, axis=1)[:, :k]
        order = np.argsort(distances[rows, indices], axis=1)
        final_indices = indices[rows, order]
        final_distances = distances[rows, final_indices]
        return final_distances.astype(np.float32), final_indices.astype(np.int64)
```
**Note**: This baseline computes exact neighbors (100% recall) but is far too slow at full scale, and its (nq, N) distance matrix alone would occupy ~40 GB for the 10K × 1M benchmark (chunk the queries if you run it). Use approximate methods like HNSW, IVF, or LSH for better speed-recall tradeoffs.
Debugging Tips
--------------
- **Test locally**: Use a subset of data (e.g., 10K vectors) for faster iteration
- **Verify shapes**: Ensure `search` returns `(nq, k)` shaped arrays
- **Check recall calculation**: `(I[:, :1] == gt[:, :1]).sum() / len(xq)`
- **Profile latency**: Measure batch vs single query performance separately
- **Validate before submit**: Run full 1M dataset locally if possible
|
{
"dependencies": {
"uv_project": "resources"
},
"datasets": [
{
"type": "local_tar",
"path": "resources/sift.tar.gz",
"target": "data/sift1M",
"expected_glob": "*.fvecs"
}
],
"runtime": {
"timeout_seconds": 3600
},
"tag": "db"
}
|
vdb_pareto/recall95_latency
|
research
|
VDB Design Problem - Recall95 Latency Tier
===========================================
Problem Setting
---------------
Design a Vector Database index optimized for **latency** subject to a **high recall constraint**. This tier uses recall-gated scoring: solutions failing to meet the recall threshold receive zero points, while solutions meeting the constraint are scored purely by latency.
**Optimization Goal**: Minimize latency within recall constraint
$$
\text{score} = \begin{cases}
0 & \text{if } r < r_{\text{gate}} \\
100 & \text{if } r \geq r_{\text{gate}} \text{ and } t_{\text{query}} \leq t_{\text{min}} \\
100 \cdot \frac{t_{\text{max}} - t_{\text{query}}}{t_{\text{max}} - t_{\text{min}}} & \text{if } r \geq r_{\text{gate}} \text{ and } t_{\text{min}} < t_{\text{query}} < t_{\text{max}} \\
0 & \text{if } r \geq r_{\text{gate}} \text{ and } t_{\text{query}} \geq t_{\text{max}}
\end{cases}
$$
Where:
- $r$: Your recall@1
- $t_{\text{query}}$: Your average query latency (ms)
- $r_{\text{gate}} = 0.95$ (minimum required recall)
- $t_{\text{min}} = 0.0\text{ms}$ (best possible latency)
- $t_{\text{max}} = 7.7\text{ms}$ (maximum allowed latency)
**Key Insight**: This tier requires high recall (95%), but provides generous latency budget (7.7ms). Focus on recall first, then optimize latency.
Baseline Performance
--------------------
- Recall@1: **0.9914** (99.14%)
- Avg query time: **3.85ms**
Scoring Examples
----------------
Unless noted otherwise, these examples assume the recall constraint is met ($r \geq 0.95$); the final two rows show the recall gate and the latency cap failing:
| Recall@1 | Latency | Score Calculation | Score |
|----------|---------|-------------------|-------|
| 0.96 | 0.00ms | $t \leq t_{\text{min}}$ → max score | **100** |
| 0.96 | 3.85ms | $\frac{7.7 - 3.85}{7.7 - 0.0} = 0.50$ | **50.0** |
| 0.97 | 5.00ms | $\frac{7.7 - 5.0}{7.7 - 0.0} = 0.351$ | **35.1** |
| 0.98 | 2.00ms | $\frac{7.7 - 2.0}{7.7 - 0.0} = 0.740$ | **74.0** |
| **0.94** | 2.00ms | $r < r_{\text{gate}}$ → recall gate fails | **0** |
| 0.96 | **8.00ms** | $t \geq t_{\text{max}}$ → latency too high | **0** |
**Note**: The 95% recall requirement is strict, but the 7.7ms latency budget is generous, allowing thorough search strategies.
API Specification
-----------------
Implement a class with the following interface:
```python
import numpy as np
from typing import Tuple
class YourIndexClass:
def __init__(self, dim: int, **kwargs):
"""
Initialize the index for vectors of dimension `dim`.
Args:
dim: Vector dimensionality (e.g., 128 for SIFT1M)
**kwargs: Optional parameters (e.g., M, ef_construction for HNSW)
Example:
index = YourIndexClass(dim=128, M=64, ef_search=400)
"""
pass
def add(self, xb: np.ndarray) -> None:
"""
Add vectors to the index.
Args:
xb: Base vectors, shape (N, dim), dtype float32
Notes:
- Can be called multiple times (cumulative)
- Must handle large N (e.g., 1,000,000 vectors)
Example:
index.add(xb) # xb.shape = (1000000, 128)
"""
pass
def search(self, xq: np.ndarray, k: int) -> Tuple[np.ndarray, np.ndarray]:
"""
Search for k nearest neighbors of query vectors.
Args:
xq: Query vectors, shape (nq, dim), dtype float32
k: Number of nearest neighbors to return
Returns:
(distances, indices):
- distances: shape (nq, k), dtype float32, L2 distances
- indices: shape (nq, k), dtype int64, indices into base vectors
Notes:
- Must return exactly k neighbors per query
- Indices should refer to positions in the vectors passed to add()
- Lower distance = more similar
Example:
D, I = index.search(xq, k=1) # xq.shape = (10000, 128)
# D.shape = (10000, 1), I.shape = (10000, 1)
"""
pass
```
**Implementation Requirements**:
- Class can have any name (evaluator auto-discovers classes with `add` and `search` methods)
- Must handle SIFT1M dataset: 1M base vectors, 10K queries, 128 dimensions
- Your `search` must return tuple `(distances, indices)` with shapes `(nq, k)`
- Distances should be L2 (Euclidean) or L2-squared
- No need to handle dataset loading - evaluator provides numpy arrays
Evaluation Process
------------------
The evaluator follows these steps:
### 1. Load Dataset
```python
from faiss.contrib.datasets import DatasetSIFT1M
ds = DatasetSIFT1M()
xb = ds.get_database() # (1000000, 128) float32
xq = ds.get_queries() # (10000, 128) float32
gt = ds.get_groundtruth() # (10000, 100) int64 - ground truth indices
```
### 2. Build Index
```python
from solution import YourIndexClass # Auto-discovered
d = xb.shape[1] # 128 for SIFT1M
index = YourIndexClass(d) # Pass dimension as first argument
index.add(xb) # Add all 1M base vectors
```
### 3. Measure Performance (Batch Queries)
```python
import time
t0 = time.time()
D, I = index.search(xq, k=1) # Search all 10K queries at once
t1 = time.time()
# Calculate metrics
recall_at_1 = (I[:, :1] == gt[:, :1]).sum() / len(xq)
avg_query_time_ms = (t1 - t0) * 1000.0 / len(xq)
```
**Important**: `avg_query_time_ms` from **batch queries** is used for scoring. Batch queries benefit from CPU cache reuse and vectorization, and are typically faster than issuing queries one at a time.
### 4. Calculate Score
```python
if recall_at_1 < 0.95:
score = 0.0
elif avg_query_time_ms <= 0.0:
score = 100.0
elif avg_query_time_ms >= 7.7:
score = 0.0
else:
proportion = (avg_query_time_ms - 0.0) / (7.7 - 0.0)
score = 100.0 * (1.0 - proportion)
```
Dataset Details
---------------
- **Name**: SIFT1M
- **Base vectors**: 1,000,000 vectors of dimension 128
- **Query vectors**: 10,000 vectors
- **Ground truth**: Precomputed top-100 nearest neighbors per query (evaluation uses only the top-1)
- **Metric**: L2 (Euclidean distance)
- **Vector type**: float32
Runtime Platform
----------------
- **Infrastructure**: Evaluations run on SkyPilot-managed cloud instances (AWS, GCP, or Azure)
- **Compute**: CPU-only instances (no GPU required)
- **Environment**: Docker containerized execution with Python 3, NumPy ≥1.24, FAISS-CPU ≥1.7.4
Constraints
-----------
- **Timeout**: 1 hour for entire evaluation (index construction + queries)
- **Memory**: Use reasonable memory (index should fit in RAM)
- **Recall constraint**: recall@1 ≥ 0.95
- **Latency range**: 0.0ms ≤ avg_query_time_ms ≤ 7.7ms
Strategy Tips
-------------
1. **Meet recall gate first**: Ensure ≥95% recall, otherwise score = 0
2. **Use moderate approximation**: Higher recall requirement means less aggressive approximation
3. **Batch optimization is key**: Your `search` should handle batch queries efficiently
4. **Balance recall and latency**: Aim for 95-99% recall with 3-5ms latency; see the sketch after this list
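One way to act on tip 4 (a sketch with illustrative parameters, not tuned settings): build HNSW with headroom above the 0.95 gate, then lower `efSearch` step by step while recall@1 stays at or above 0.95.
```python
import numpy as np
import faiss

class RecallGatedHNSWIndex:
    """Sketch: HNSW sized for >95% recall@1; tune efSearch downward for speed."""
    def __init__(self, dim: int, M: int = 32, ef_construction: int = 200,
                 ef_search: int = 128, **kwargs):
        self.index = faiss.IndexHNSWFlat(dim, M)
        self.index.hnsw.efConstruction = ef_construction
        self.index.hnsw.efSearch = ef_search  # lower until recall@1 nears 0.95

    def add(self, xb: np.ndarray) -> None:
        self.index.add(np.ascontiguousarray(xb, dtype=np.float32))

    def search(self, xq: np.ndarray, k: int):
        return self.index.search(np.ascontiguousarray(xq, dtype=np.float32), k)
```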
Example: Simple Baseline
-------------------------
```python
import numpy as np
class SimpleIndex:
def __init__(self, dim: int, **kwargs):
self.dim = dim
self.xb = None
def add(self, xb: np.ndarray) -> None:
if self.xb is None:
self.xb = xb.copy()
else:
self.xb = np.vstack([self.xb, xb])
    def search(self, xq: np.ndarray, k: int) -> tuple:
        # Exact L2 distances via the identity ||x - y||^2 = ||x||^2 - 2 x.y + ||y||^2,
        # which avoids materializing an (nq, N, dim) broadcast intermediate.
        # xq: (nq, dim), xb: (N, dim) -> distances: (nq, N)
        sq_xq = (xq ** 2).sum(axis=1, keepdims=True)
        sq_xb = (self.xb ** 2).sum(axis=1)
        sq_dists = sq_xq - 2.0 * (xq @ self.xb.T) + sq_xb
        np.maximum(sq_dists, 0.0, out=sq_dists)  # clamp tiny negatives from rounding
        distances = np.sqrt(sq_dists)
        # Select k nearest: cheap partial selection, then sort only the k candidates
        rows = np.arange(len(xq))[:, None]
        indices = np.argpartition(distances, k - 1, axis=1)[:, :k]
        order = np.argsort(distances[rows, indices], axis=1)
        final_indices = indices[rows, order]
        final_distances = distances[rows, final_indices]
        return final_distances.astype(np.float32), final_indices.astype(np.int64)
```
**Note**: This baseline computes exact neighbors (100% recall) but is far too slow at full scale, and its (nq, N) distance matrix alone would occupy ~40 GB for the 10K × 1M benchmark (chunk the queries if you run it). Use approximate methods like HNSW, IVF, or LSH for better speed-recall tradeoffs.
Debugging Tips
--------------
- **Test locally**: Use a subset of data (e.g., 10K vectors) for faster iteration
- **Verify shapes**: Ensure `search` returns `(nq, k)` shaped arrays
- **Check recall calculation**: `(I[:, :1] == gt[:, :1]).sum() / len(xq)`
- **Profile latency**: Measure batch vs single query performance separately
- **Validate before submit**: Run full 1M dataset locally if possible
|
{
"dependencies": {
"uv_project": "resources"
},
"datasets": [
{
"type": "local_tar",
"path": "resources/sift.tar.gz",
"target": "data/sift1M",
"expected_glob": "*.fvecs"
}
],
"runtime": {
"timeout_seconds": 3600
},
"tag": "db"
}
|
vector_addition/2_20
|
research
|
Vector Addition Problem - Medium Vectors (2^20)
================================================
Problem Setting
---------------
Design and optimize high-performance Triton kernels for vector addition on GPU with medium vectors (1,048,576 elements). This problem focuses on implementing efficient element-wise addition for typical workloads.
The challenge involves optimizing:
- **Memory access patterns**: Efficient loading and storing of vector data
- **Block sizing**: Optimal block sizes for GPU execution
- **Memory bandwidth**: Maximizing throughput for simple arithmetic operations
- **Performance benchmarking**: Achieving speedup over PyTorch baseline
This variant tests performance on medium vectors (2^20 = 1,048,576 elements = 4 MB per vector).
Target
------
- **Primary**: Maximize bandwidth (GB/s) over PyTorch baseline (higher is better)
- **Secondary**: Ensure correctness
- **Tertiary**: Minimize kernel launch overhead
API Specification
-----------------
Implement a `Solution` class that returns a Triton kernel implementation:
```python
class Solution:
def solve(self, spec_path: str = None) -> dict:
"""
Returns a dict with either:
- {"code": "python_code_string"}
- {"program_path": "path/to/kernel.py"}
"""
# Your implementation
pass
```
Your kernel implementation must provide:
```python
import torch
import triton
import triton.language as tl
def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
"""
Element-wise addition of two vectors.
Args:
x: Input tensor of shape (1048576,)
y: Input tensor of shape (1048576,)
Returns:
Output tensor of shape (1048576,) with x + y
"""
pass
```
API Usage Notes
---------------
- The evaluator looks for an `add` function in the module namespace
- Function must handle vector size of exactly 1,048,576 elements
- Must use Triton JIT compilation for kernel definition (a minimal kernel sketch follows these notes)
- Should optimize for memory bandwidth
- Input tensors are guaranteed to be contiguous and same size
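For reference, a minimal working sketch in the shape the evaluator expects (essentially the canonical Triton vector-add; `BLOCK_SIZE=1024` is an assumption worth sweeping):
```python
import torch
import triton
import triton.language as tl

@triton.jit
def _add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements          # guard the tail block
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n = x.numel()
    BLOCK_SIZE = 1024                    # assumption: a common default; tune it
    grid = (triton.cdiv(n, BLOCK_SIZE),)
    _add_kernel[grid](x, y, out, n, BLOCK_SIZE=BLOCK_SIZE)
    return out
```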
Scoring (0-100)
---------------
Performance is measured against CPU baseline and PyTorch GPU baseline:
```
target = max(2.0 * (pytorch_bandwidth / cpu_bandwidth), 1.0)
score = ((custom_bandwidth / cpu_bandwidth - 1.0) / (target - 1.0)) * 100
Where:
- custom_bandwidth = your solution's bandwidth
- cpu_bandwidth = naive CPU baseline bandwidth
- pytorch_bandwidth = PyTorch GPU baseline bandwidth
- target = 2x PyTorch performance vs CPU (normalized to custom vs CPU)
Score is clamped to [0, 100] range
```
- 0 points = CPU baseline performance (custom/cpu = 1x)
- 50 points = Halfway between CPU baseline and 2x PyTorch performance
- 100 points = 2x PyTorch GPU performance vs CPU (custom/cpu = 2 * pytorch/cpu)
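For intuition, with illustrative numbers only: suppose cpu_bandwidth = 20 GB/s and pytorch_bandwidth = 400 GB/s. Then target = max(2.0 × 400/20, 1.0) = 40, so a solution that merely matches PyTorch (custom/cpu = 20) scores ((20 − 1)/(40 − 1)) × 100 ≈ 48.7 points, while 100 points would require roughly 800 GB/s.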
Evaluation Details
------------------
- Tested on vector size: 2^20 = 1,048,576 elements
- Performance measured in GB/s (bandwidth)
- Correctness verified with tolerance: rtol=1e-5, atol=1e-8
- Performance measured using median execution time across 5 samples
- Requires CUDA backend and GPU support
|
dependencies:
  uv_project: resources
tag: hpc
runtime:
  environment: "Triton 3.2.0 with CUDA 12.2 (triton-tlx image)"
  docker:
    image: andylizf/triton-tlx:tlx-nv-cu122
    gpu: true
|
vector_addition/2_24
|
research
|
Vector Addition Problem - Large Vectors (2^24)
===============================================
Problem Setting
---------------
Design and optimize high-performance Triton kernels for vector addition on GPU with large vectors (16,777,216 elements). This problem focuses on implementing efficient element-wise addition for high-throughput workloads.
The challenge involves optimizing:
- **Memory bandwidth**: Maximizing throughput for large vectors
- **Memory access patterns**: Efficient loading and storing of vector data
- **Block sizing**: Optimal block sizes for large vectors
- **Performance benchmarking**: Achieving speedup over PyTorch baseline
This variant tests performance on large vectors (2^24 = 16,777,216 elements = 64 MB per vector).
Target
------
- **Primary**: Maximize bandwidth (GB/s) over PyTorch baseline (higher is better)
- **Secondary**: Minimize kernel launch overhead
- **Tertiary**: Ensure correctness
API Specification
-----------------
Implement a `Solution` class that returns a Triton kernel implementation:
```python
class Solution:
def solve(self, spec_path: str = None) -> dict:
"""
Returns a dict with either:
- {"code": "python_code_string"}
- {"program_path": "path/to/kernel.py"}
"""
# Your implementation
pass
```
Your kernel implementation must provide:
```python
import torch
import triton
import triton.language as tl
def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
"""
Element-wise addition of two vectors.
Args:
x: Input tensor of shape (16777216,)
y: Input tensor of shape (16777216,)
Returns:
Output tensor of shape (16777216,) with x + y
"""
pass
```
API Usage Notes
---------------
- The evaluator looks for an `add` function in the module namespace
- Function must handle vector size of exactly 16,777,216 elements
- Must use Triton JIT compilation for kernel definition (an autotuned kernel sketch follows these notes)
- Should optimize for sustained memory bandwidth on large vectors
- Input tensors are guaranteed to be contiguous and same size
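A sketch of the expected shape, here with `triton.autotune` picking a block size per the notes above (the config list is an assumption, not a vetted tuning space):
```python
import torch
import triton
import triton.language as tl

@triton.autotune(
    configs=[
        triton.Config({"BLOCK_SIZE": 1024}, num_warps=4),
        triton.Config({"BLOCK_SIZE": 2048}, num_warps=8),
        triton.Config({"BLOCK_SIZE": 4096}, num_warps=8),
    ],
    key=["n_elements"],
)
@triton.jit
def _add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements          # guard the tail block
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n = x.numel()
    grid = lambda meta: (triton.cdiv(n, meta["BLOCK_SIZE"]),)
    _add_kernel[grid](x, y, out, n)      # BLOCK_SIZE supplied by the autotuner
    return out
```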
Scoring (0-100)
---------------
Performance is measured against CPU baseline and PyTorch GPU baseline:
```
target = max(2.0 * (pytorch_bandwidth / cpu_bandwidth), 1.0)
score = ((custom_bandwidth / cpu_bandwidth - 1.0) / (target - 1.0)) * 100
Where:
- custom_bandwidth = your solution's bandwidth
- cpu_bandwidth = naive CPU baseline bandwidth
- pytorch_bandwidth = PyTorch GPU baseline bandwidth
- target = 2x PyTorch performance vs CPU (normalized to custom vs CPU)
Score is clamped to [0, 100] range
```
- 0 points = CPU baseline performance (custom/cpu = 1x)
- 50 points = Halfway between CPU baseline and 2x PyTorch performance
- 100 points = 2x PyTorch GPU performance vs CPU (custom/cpu = 2 * pytorch/cpu)
Evaluation Details
------------------
- Tested on vector size: 2^24 = 16,777,216 elements
- Performance measured in GB/s (bandwidth)
- Correctness verified with tolerance: rtol=1e-5, atol=1e-8
- Performance measured using median execution time across 5 samples
- Requires CUDA backend and GPU support
|
dependencies:
  uv_project: resources
datasets: []
tag: hpc
runtime:
  docker:
    image: andylizf/triton-tlx:tlx-nv-cu122
    gpu: true
  environment: "Triton 3.2.0 with CUDA 12.2 (triton-tlx image)"
|
vector_addition/2_28
|
research
|
Vector Addition Problem - Very Large Vectors (2^28)
===================================================
Problem Setting
---------------
Design and optimize high-performance Triton kernels for vector addition on GPU with very large vectors (268,435,456 elements). This problem focuses on implementing efficient element-wise addition for maximum throughput scenarios.
The challenge involves optimizing:
- **Memory access patterns**: Efficient loading and storing of large vector data
- **Block sizing**: Optimal block sizes for large GPU workloads
- **Memory bandwidth**: Maximizing throughput at scale
- **Performance benchmarking**: Achieving speedup over PyTorch baseline
This variant tests performance on very large vectors (2^28 = 268,435,456 elements = 1 GB per vector). Requires ~3 GB GPU memory total.
Target
------
- **Primary**: Maximize bandwidth (GB/s) over PyTorch baseline (higher is better)
- **Secondary**: Ensure correctness on large vectors
- **Tertiary**: Minimize memory overhead
API Specification
-----------------
Implement a `Solution` class that returns a Triton kernel implementation:
```python
class Solution:
def solve(self, spec_path: str = None) -> dict:
"""
Returns a dict with either:
- {"code": "python_code_string"}
- {"program_path": "path/to/kernel.py"}
"""
# Your implementation
pass
```
Your kernel implementation must provide:
```python
import torch
import triton
import triton.language as tl
def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
"""
Element-wise addition of two vectors.
Args:
x: Input tensor of shape (268435456,)
y: Input tensor of shape (268435456,)
Returns:
Output tensor of shape (268435456,) with x + y
"""
pass
```
API Usage Notes
---------------
- The evaluator looks for an `add` function in the module namespace
- Function must handle vector size of exactly 268,435,456 elements
- Must use Triton JIT compilation for kernel definition
- Should optimize for maximum memory bandwidth at scale
- Input tensors are guaranteed to be contiguous and same size
- May cause OOM on GPUs with less than 3GB memory
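At this size the kernel is purely DRAM-bandwidth-bound, so a simple masked kernel (sketch below; the block size is an assumption) is usually close to optimal, and the output should be allocated exactly once per call to stay within the ~3 GB budget:
```python
import torch
import triton
import triton.language as tl

@triton.jit
def _add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements          # guard the tail block
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    # One 1 GB output allocation; x, y, out together need ~3 GB of GPU memory
    out = torch.empty_like(x)
    n = x.numel()
    BLOCK_SIZE = 4096                    # assumption: larger blocks amortize launch cost
    grid = (triton.cdiv(n, BLOCK_SIZE),)
    _add_kernel[grid](x, y, out, n, BLOCK_SIZE=BLOCK_SIZE)
    return out
```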
Scoring (0-100)
---------------
Performance is measured against CPU baseline and PyTorch GPU baseline:
```
target = max(2.0 * (pytorch_bandwidth / cpu_bandwidth), 1.0)
score = ((custom_bandwidth / cpu_bandwidth - 1.0) / (target - 1.0)) * 100
Where:
- custom_bandwidth = your solution's bandwidth
- cpu_bandwidth = naive CPU baseline bandwidth
- pytorch_bandwidth = PyTorch GPU baseline bandwidth
- target = 2x PyTorch performance vs CPU (normalized to custom vs CPU)
Score is clamped to [0, 100] range
```
- 0 points = CPU baseline performance (custom/cpu = 1x)
- 50 points = Halfway between CPU baseline and 2x PyTorch performance
- 100 points = 2x PyTorch GPU performance vs CPU (custom/cpu = 2 * pytorch/cpu)
Evaluation Details
------------------
- Tested on vector size: 2^28 = 268,435,456 elements
- Performance measured in GB/s (bandwidth)
- Correctness verified with tolerance: rtol=1e-5, atol=1e-8
- Performance measured using median execution time across 5 samples
- Requires CUDA backend and GPU support
- Requires sufficient GPU memory (may OOM on smaller GPUs)
|
dependencies:
  uv_project: resources
datasets: []
tag: hpc
runtime:
  docker:
    image: andylizf/triton-tlx:tlx-nv-cu122
    gpu: true
  environment: "Triton 3.2.0 with CUDA 12.2 (triton-tlx image)"
|