Lattice-3
Research configuration with causal reasoning and multi-hop exploit planning
| Spec | Value | Notes |
|---|---|---|
| Parameters | 33B | Current production checkpoint |
| Context window | 256k tokens | Rotary embeddings + chunk routing |
| Cost guidance | $0.42 / 1K tokens | H100 (80GB) required |
Operational envelope
- Architectural backbone: Hybrid transformer + retrieval adapters
- External tools: symbolic executor, fuzz harness broker
- Latency @ BF16 (batch=1, 512-token prompt, H100, CUDA 12.1): p50 280ms | p95 620ms | p99 1100ms
Production SLO thresholds
| SLO | Definition | Threshold |
|---|---|---|
| Model error rate | Inference requests flagged as erroneous (exceptions, timeouts, malformed SARIF) / total requests | < 2% over a 5-minute sliding window |
| SARIF reproduction success | Findings where automated sandbox validation confirms the vulnerability / total model findings | > 60% over a 1-hour window |
| p95 latency | 95th percentile inference duration, measured server-side | < 500ms (L1), < 800ms (L2), < 1500ms (L3) |
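As a concrete illustration of the first SLO, here is a minimal sketch of tracking the error rate over the 5-minute sliding window. The class and field names are illustrative only, not part of the Lattice serving stack.

```python
# Minimal sketch: model error-rate SLO (< 2% erroneous requests
# over a 5-minute sliding window). Names are illustrative.
from __future__ import annotations

import time
from collections import deque
from dataclasses import dataclass

ERROR_RATE_THRESHOLD = 0.02  # < 2% over the window
WINDOW_SECONDS = 5 * 60      # 5-minute sliding window


@dataclass
class RequestOutcome:
    timestamp: float
    is_error: bool  # exception, timeout, or malformed SARIF


class SlidingErrorRate:
    """Rolling error rate over the most recent WINDOW_SECONDS of requests."""

    def __init__(self, window_seconds: float = WINDOW_SECONDS) -> None:
        self.window_seconds = window_seconds
        self.outcomes: deque[RequestOutcome] = deque()

    def record(self, is_error: bool, now: float | None = None) -> None:
        now = time.time() if now is None else now
        self.outcomes.append(RequestOutcome(now, is_error))
        self._evict(now)

    def error_rate(self, now: float | None = None) -> float:
        now = time.time() if now is None else now
        self._evict(now)
        if not self.outcomes:
            return 0.0
        errors = sum(1 for o in self.outcomes if o.is_error)
        return errors / len(self.outcomes)

    def breaches_slo(self) -> bool:
        return self.error_rate() >= ERROR_RATE_THRESHOLD

    def _evict(self, now: float) -> None:
        cutoff = now - self.window_seconds
        while self.outcomes and self.outcomes[0].timestamp < cutoff:
            self.outcomes.popleft()
```

In a deployed stack these counts would normally come from the Prometheus metrics listed under Observability rather than an in-process buffer.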
Benchmark results
Latest evaluation window across public and internal datasets.
| Benchmark | Metric | Lattice-3 | Baseline |
|---|---|---|---|
| CWE-V Suite (1.4k vulns, test split) | F1 | 0.89 | 0.58 (Llama-3-70B) |
| OSS-Fuzz triage (2.1k crashes, held-out) | Recall @ top5 | 0.78 | 0.41 (GPT-4-turbo) |
| Internal exploit chain set (380 chains) | Step accuracy | 0.74 | N/A (proprietary) |
Run settings per benchmark:
- CWE-V Suite (1.4k vulns, test split): 3 runs, seeds 42, 1337, 9001; temperature 0.3
- OSS-Fuzz triage (2.1k crashes, held-out): single pass, no few-shot prompting
- Internal exploit chain set (380 chains): average over 3 seeds; eval script: github.com/evalops/eval-harness@v2.1.3
Sizing guidance
Reference pricing assumes production-hardened inference with observability enabled.
- Cost per 1K tokens: $0.42
- Reference hardware: H100 (80GB) required
- Autoscaling: reserved capacity for research workloads; spot instances for CI/CD
Deployment scenarios
| Scenario | Est. tokens | Recommendation | Cost per scan | Note |
|---|---|---|---|---|
| Monorepo (1.2M LOC, ~600MB) | ~420K tokens (chunked) | Lattice-3 with hierarchical summarization | $176.40 | Enable cross-file dependency analysis |
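The per-scan figure is simply the per-token price applied to the chunked token estimate. A minimal sketch of the arithmetic:

```python
COST_PER_1K_TOKENS = 0.42   # $ per 1K tokens (reference pricing)
ESTIMATED_TOKENS = 420_000  # monorepo scan, chunked token estimate


def cost_per_scan(tokens: int, rate_per_1k: float = COST_PER_1K_TOKENS) -> float:
    """Estimated cost of one scan at the reference per-token rate."""
    return tokens / 1_000 * rate_per_1k


print(f"${cost_per_scan(ESTIMATED_TOKENS):.2f}")  # $176.40
```

Other repositories scale linearly with their chunked token estimates.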
Runtime expectations
Consistent across the Lattice family with model-specific latency budgets.
Interfaces
- gRPC endpoint (stream + unary)
- REST inference proxy (example request below)
- CLI for batch audit runs
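As an illustration of the REST path, a minimal request sketch; the endpoint URL, payload fields, and auth header are assumptions made for illustration, not the documented Lattice-3 API.

```python
# Minimal sketch of a call to the REST inference proxy.
# The route (/v1/analyze), payload schema, and auth header are
# illustrative assumptions, not the documented Lattice-3 API.
import requests

PROXY_URL = "https://lattice.example.internal/v1/analyze"  # hypothetical endpoint

payload = {
    "artifact_type": "source_tree",           # e.g. Git source tree, SBOM, binary, image
    "artifact_uri": "git@host:org/repo.git",  # where the scanner should pull from
    "output_format": "sarif",                 # SARIF v2.1.0, custom JSON, or markdown brief
}

resp = requests.post(
    PROXY_URL,
    json=payload,
    headers={"Authorization": "Bearer <token>"},  # placeholder credential
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```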
Artifacts accepted
- Source trees (Git), SBOM manifests
- Compiled binaries (ELF/PE/Mach-O)
- Container images, IaC templates
Outputs
- SARIF v2.1.0 (parsing sketch below)
- Custom JSON (root-cause + reproduction steps)
- Markdown incident briefs
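SARIF v2.1.0 reports follow the standard schema, so findings can be consumed with ordinary JSON tooling. A minimal parsing sketch (the file name is illustrative):

```python
import json

# Minimal sketch: extract findings from a SARIF v2.1.0 report.
# "lattice_findings.sarif" is an illustrative file name.
with open("lattice_findings.sarif") as fh:
    report = json.load(fh)

for run in report.get("runs", []):
    for result in run.get("results", []):
        rule_id = result.get("ruleId", "unknown")
        message = result.get("message", {}).get("text", "")
        for loc in result.get("locations", []):
            phys = loc.get("physicalLocation", {})
            uri = phys.get("artifactLocation", {}).get("uri", "?")
            line = phys.get("region", {}).get("startLine", "?")
            print(f"{rule_id}: {uri}:{line} - {message}")
```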
Observability
- OpenTelemetry traces
- Metric export: Prometheus (export sketch below)
- Audit logs: S3/GCS
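A minimal sketch of wiring the SLO signals above into the Prometheus export path with prometheus_client; the metric names, label values, and port are assumptions rather than the shipped metric schema.

```python
# Minimal sketch of exporting the SLO signals with prometheus_client.
# Metric names, labels, and the port are illustrative assumptions.
from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter(
    "lattice_inference_requests_total",
    "Inference requests by outcome",
    ["outcome"],  # "ok" | "error"
)
LATENCY = Histogram(
    "lattice_inference_duration_seconds",
    "Server-side inference duration in seconds",
    buckets=(0.1, 0.25, 0.5, 0.8, 1.5, 3.0),  # aligned with the tiered p95 budgets
)


def observe_request(duration_s: float, ok: bool) -> None:
    """Record one inference request for SLO dashboards and alerting."""
    REQUESTS.labels(outcome="ok" if ok else "error").inc()
    LATENCY.observe(duration_s)


start_http_server(9102)  # exposes /metrics for Prometheus to scrape
```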
Engage the research team
Share repository scope, desired runtimes, and deployment constraints so we can arrange evaluation access.