# Submitting Your Results
We welcome submissions from all models and agent frameworks. To have your results included on our leaderboard, please follow the instructions below.
## Algorithmic Problems
We currently release **1–3 public test cases** per problem for local testing and debugging. Full evaluation (with all test cases) is performed on our servers.
### What to Submit
1. **Solution files**: `{problem_id}_{model_name}_solution.cpp` for each problem
2. **Model/Agent info**: Name and version of the model or agent framework used
3. **Generation method**: Brief description of how solutions were generated (e.g., one-shot, multi-turn, with/without feedback)
### Submission Format
Organize your solutions as:
```
submissions/
├── 1_gpt4_solution.cpp
├── 2_gpt4_solution.cpp
├── ...
└── metadata.json
```
`metadata.json`:
```json
{
  "model": "gpt-4o",
  "agent_framework": "custom",
  "generation_method": "one-shot",
  "date": "2025-01-15",
  "notes": "Optional additional notes"
}
```
## Research Problems
Research problems require a `solution.py` file implementing the `Solution` class interface.
### Problem Structure
Research problems follow a hierarchical structure:
```
Problem (e.g., gemm_optimization, poc_generation)
└── Category (e.g., squares, heap_buffer_overflow)
    └── Variant (e.g., arvo_21000)
```
| Level | Example | Description |
|-------|---------|-------------|
| **Problem** | `gemm_optimization` | Top-level problem domain |
| **Category** | `gemm_optimization/squares` | Scores are **aggregated** at this level for leaderboard reporting |
| **Variant** | `poc_generation/heap_buffer_overflow/arvo_21000` | Each variant is **evaluated independently** with its own README |
**Key distinction:**
- **Evaluation**: Each variant runs independently and produces its own score
- **Reporting**: Scores are aggregated by category for the leaderboard (e.g., all `heap_buffer_overflow` variants → one score)
> Note: Some problems have only one level (e.g., `flash_attn`), which functions as both category and variant.
### Problem ID Format
Each variant has a unique **Problem ID** based on its path under `research/`.
The full list of all evaluatable variants is in [`research/problems.txt`](research/problems.txt) (109 variants total, aggregated into ~50 categories for reporting).
| Type | Example Path | Problem ID |
|------|-------------|------------|
| Single problem | `research/flash_attn` | `flash_attn` |
| Problem with variants | `research/gemm_optimization/squares` | `gemm_optimization/squares` |
| Nested variants | `research/poc_generation/heap_buffer_overflow/arvo_21000` | `poc_generation/heap_buffer_overflow/arvo_21000` |
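Put differently, a Problem ID is just the variant's path with the leading `research/` component stripped. A minimal sketch of that mapping (illustrative helper, not part of the repo):

```python
from pathlib import Path

def problem_id(variant_path: str) -> str:
    """Derive a Problem ID from a variant's path under research/."""
    # The ID is the path with the leading "research/" component removed.
    return Path(variant_path).relative_to("research").as_posix()

assert problem_id("research/flash_attn") == "flash_attn"
assert problem_id("research/poc_generation/heap_buffer_overflow/arvo_21000") == \
    "poc_generation/heap_buffer_overflow/arvo_21000"
```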
### What to Submit
1. **Solution files**: `solution.py` for each problem, placed in a directory matching the Problem ID
2. **Model/Agent info**: Name and version of the model or agent framework used
3. **Local evaluation results** (optional but recommended): Score from running the evaluator locally
### Submission Format
Your submission zip should mirror the Problem ID directory structure:
```
submission.zip
├── flash_attn/
│   └── solution.py
├── gemm_optimization/
│   └── squares/
│       └── solution.py
├── cant_be_late/
│   └── high_availability_loose_deadline/
│       └── solution.py
├── poc_generation/
│   └── heap_buffer_overflow/
│       └── arvo_21000/
│           └── solution.py
└── metadata.json
```
**Important**: The directory structure must exactly match the Problem ID. For example:
- `flash_attn/solution.py`
- `gemm_optimization/squares/solution.py`
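Before packaging, you can sanity-check your layout against [`research/problems.txt`](research/problems.txt). The sketch below is illustrative only: it assumes the file lists one Problem ID per line and that your unpacked submission lives in `submission/` (both assumptions, not guarantees):

```python
from pathlib import Path

root = Path("submission")  # hypothetical unpacked archive root
# Assumption: problems.txt contains one Problem ID per line.
known_ids = set(Path("research/problems.txt").read_text().split())

for solution in sorted(root.rglob("solution.py")):
    # The directory path relative to the archive root must equal the Problem ID.
    pid = solution.parent.relative_to(root).as_posix()
    status = "ok" if pid in known_ids else "UNKNOWN Problem ID"
    print(f"{pid}: {status}")
```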
Each `solution.py` must implement:
```python
class Solution:
    def __init__(self):
        pass

    def solve(self, *args):
        # Returns: solution output (format varies by problem)
        pass
```
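For intuition, here is a hypothetical driver showing how this interface gets exercised; it is an illustration only, not the actual evaluator, and the real arguments to `solve` are defined in each variant's README:

```python
# Hypothetical harness, for illustration only (not the actual evaluator):
from solution import Solution

sol = Solution()
result = sol.solve()  # real problems pass variant-specific arguments here
print(result)
```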
### metadata.json
```json
{
  "model": "gpt-4o",
  "agent_framework": "custom",
  "generation_method": "one-shot",
  "date": "2025-01-15",
  "problems_solved": [
    "flash_attn",
    "gemm_optimization/squares",
    "cant_be_late/high_availability_loose_deadline"
  ],
  "notes": "Optional additional notes"
}
```
### Running Local Evaluation
Before submitting, you can verify your solutions locally:
```bash
# Evaluate a single solution
frontier-eval flash_attn solution.py
# Batch evaluation with progress tracking
frontier-eval batch --pairs-file pairs.txt --results-dir results/
# Batch evaluation with SkyPilot (cloud)
frontier-eval batch --pairs-file pairs.txt --skypilot --max-concurrent 4
```
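To apply the single-solution form to every `solution.py` in an unpacked submission, a simple driver along these lines would work. This is a sketch under two assumptions: your archive is unpacked to a hypothetical `submission/` directory, and `frontier-eval` accepts a path to the solution file (the documented example passes `solution.py` directly):

```python
import subprocess
from pathlib import Path

root = Path("submission")  # hypothetical unpacked archive root

for solution in sorted(root.rglob("solution.py")):
    pid = solution.parent.relative_to(root).as_posix()  # Problem ID from the path
    print(f"Evaluating {pid} ...")
    # Mirrors the single-solution invocation shown above.
    subprocess.run(["frontier-eval", pid, str(solution)], check=True)
```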
## How to Submit
Send your submission to:
- **Email**: [email protected] or [email protected]
Please include:
1. A zip/tar archive of your solutions following the format above
2. `metadata.json` with model and method information
3. (Optional) Local evaluation results if you ran them
## Leaderboard
Accepted submissions will be evaluated on our full test suite, and the results will be published on the [Frontier-CS Leaderboard](https://frontier-cs.org).