🎛️ LLM Router Admin Panel

Model Configuration

Model Information

Select a model to view details

Generation Parameters

Temperature: Controls randomness (0 = deterministic, 2 = very random)
Max Tokens: Maximum response length in tokens
Top P: Nucleus sampling threshold
Top K: Limits vocabulary to the top K tokens
Repetition Penalty: Penalizes repeated tokens
Context Window: Model context window size
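
As a rough sketch, these settings might travel together as a single request payload; the field names below follow common LLM-serving conventions and are assumptions, not this panel's actual schema.

```python
# Hypothetical generation-parameter payload; field names assume common
# LLM-serving conventions, not this panel's actual schema.
from dataclasses import dataclass, asdict

@dataclass
class GenerationParams:
    temperature: float = 0.7         # 0 = deterministic, 2 = very random
    max_tokens: int = 1024           # maximum response length in tokens
    top_p: float = 0.9               # nucleus sampling threshold
    top_k: int = 40                  # limit vocabulary to the top K tokens
    repetition_penalty: float = 1.1  # penalize repeated tokens
    context_length: int = 4096       # model context window size

    def validate(self) -> None:
        # Keep values inside the ranges the panel's help text implies.
        if not 0.0 <= self.temperature <= 2.0:
            raise ValueError("temperature must be between 0 and 2")
        if not 0.0 < self.top_p <= 1.0:
            raise ValueError("top_p must be in (0, 1]")
        if self.max_tokens > self.context_length:
            raise ValueError("max_tokens cannot exceed the context window")

params = GenerationParams(temperature=0.2, max_tokens=512)
params.validate()
print(asdict(params))
```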

Routing Strategy
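
The panel does not spell out how routing is configured, so the sketch below is purely illustrative: a strategy that assigns requests to models by prompt length, with hypothetical rule and model names.

```python
# Hypothetical routing rules; model names and thresholds are illustrative
# assumptions, not values taken from the panel.
ROUTING_RULES = [
    {"name": "long-context", "min_prompt_tokens": 8000, "max_prompt_tokens": None,
     "model": "model-large-32k"},
    {"name": "default", "min_prompt_tokens": 0, "max_prompt_tokens": 8000,
     "model": "model-small-8k"},
]

def route(prompt_tokens: int) -> str:
    """Pick the first rule whose token bounds match the incoming request."""
    for rule in ROUTING_RULES:
        lo = rule["min_prompt_tokens"]
        hi = rule["max_prompt_tokens"]
        if prompt_tokens >= lo and (hi is None or prompt_tokens < hi):
            return rule["model"]
    return ROUTING_RULES[-1]["model"]  # fall back to the last rule

print(route(12_000))  # -> "model-large-32k"
```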

Chat Templates

Template Format

Preview:
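
The template syntax the panel previews is not shown here; a ChatML-style format is one common possibility, assumed purely for illustration.

```python
# One common chat template style (ChatML-like); assumed for illustration,
# not necessarily the format this panel previews.
def render_chatml(messages: list[dict[str, str]]) -> str:
    parts = []
    for msg in messages:
        parts.append(f"<|im_start|>{msg['role']}\n{msg['content']}<|im_end|>")
    parts.append("<|im_start|>assistant")  # cue the model to respond
    return "\n".join(parts)

print(render_chatml([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]))
```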

Saved Templates

System Instructions

This prompt is prepended to every conversation
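
A minimal sketch of what that prepending might look like, assuming an OpenAI-style message list; the SYSTEM_PROMPT value and helper name are hypothetical.

```python
# Hypothetical helper: prepend the configured system prompt to every
# conversation before it reaches the model.
SYSTEM_PROMPT = "You are a concise, accurate assistant."  # example preset

def with_system_prompt(messages: list[dict[str, str]]) -> list[dict[str, str]]:
    if messages and messages[0].get("role") == "system":
        return messages  # a system message is already present; leave it alone
    return [{"role": "system", "content": SYSTEM_PROMPT}] + messages

conversation = [{"role": "user", "content": "Summarize this log file."}]
print(with_system_prompt(conversation))
```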

Preset Instructions

Behavior Settings

Advanced Parameters

Sampling Parameters

Performance Settings

Memory Management

System Monitoring

Model Status

Memory Usage (MB)

Requests Processed

Average Latency (ms)
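
These status widgets presumably read from a rolling stats object; the sketch below shows one way to track them, with field names assumed from the labels and units above (MB, ms).

```python
# Hypothetical monitoring snapshot backing the status widgets; field names
# and units (MB, ms) are assumptions based on the labels above.
import time
from dataclasses import dataclass, field

@dataclass
class RouterStats:
    model_loaded: bool = False
    memory_usage_mb: float = 0.0
    requests_processed: int = 0
    latencies_ms: list[float] = field(default_factory=list)

    def record_request(self, latency_ms: float) -> None:
        self.requests_processed += 1
        self.latencies_ms.append(latency_ms)

    @property
    def average_latency_ms(self) -> float:
        if not self.latencies_ms:
            return 0.0
        return sum(self.latencies_ms) / len(self.latencies_ms)

stats = RouterStats(model_loaded=True, memory_usage_mb=5120.0)
start = time.perf_counter()
# ... handle a request here ...
stats.record_request((time.perf_counter() - start) * 1000)
print(stats.requests_processed, f"{stats.average_latency_ms:.1f} ms")
```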

Performance History

System Logs