Droid
Droid is Friday Dev's flexible execution backend that supports multiple AI model providers.
Overview
Droid provides:
- Multi-model support - Connect any compatible model
- Unified interface - Same tools across all models
- Flexible autonomy - Configure permissions per model
- Custom profiles - Create your own agent configurations
When to Use Droid
Ideal For
- ✅ Custom model configurations
- ✅ Self-hosted models
- ✅ Experimental models
- ✅ Specific autonomy needs
- ✅ Cost optimization
Less Ideal For
- ⚠️ Quick start (use preset agents)
- ⚠️ Beginners (more configuration)
How Droid Works
┌─────────────────────────────────────────────────────────────────┐
│ Friday Dev │
├─────────────────────────────────────────────────────────────────┤
│ │
│ Task → Droid Executor → AI Model → Tools → Output │
│ │ │ │
│ ▼ ▼ │
│ ┌─────────────┐ ┌─────────────┐ │
│ │ Model API │ │ File System │ │
│ │ (Any) │ │ Git │ │
│ │ │ │ Terminal │ │
│ │ - Gemini │ │ MCP │ │
│ │ - OpenAI │ │ │ │
│ │ - GLM │ │ │ │
│ │ - Local │ │ │ │
│ └─────────────┘ └─────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────┘
Setup
Default Profiles
Droid comes with pre-configured profiles:
{
"GEMINI_2_5_PRO": {
"DROID": {
"autonomy": "workspace-write",
"model": "gemini-2.5-pro-preview-06-05"
}
},
"GLM_4_7": {
"DROID": {
"autonomy": "workspace-write",
"model": "glm-4.7"
}
}
}
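How Droid resolves a profile name into a runnable configuration is not documented here, but the lookup can be sketched as follows. The resolve_profile helper is illustrative, not Friday Dev's actual API; only the profile shape and the workspace-write default come from the docs above.

```python
import json

# The default profiles shown above, as Droid might load them
# (illustrative; the real loader is not documented here).
PROFILES_JSON = """
{
  "GEMINI_2_5_PRO": {
    "DROID": {"autonomy": "workspace-write", "model": "gemini-2.5-pro-preview-06-05"}
  },
  "GLM_4_7": {
    "DROID": {"autonomy": "workspace-write", "model": "glm-4.7"}
  }
}
"""

def resolve_profile(profiles: dict, name: str) -> dict:
    """Look up a profile by name and apply the documented default autonomy."""
    if name not in profiles:
        raise KeyError(f"unknown profile: {name}")
    config = dict(profiles[name]["DROID"])
    config.setdefault("autonomy", "workspace-write")  # documented default
    return config

profiles = json.loads(PROFILES_JSON)
config = resolve_profile(profiles, "GEMINI_2_5_PRO")
print(config["model"])  # gemini-2.5-pro-preview-06-05
```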
Custom Profile
Create your own profile:
{
"MY_CUSTOM_AGENT": {
"DROID": {
"autonomy": "workspace-write",
"model": "your-model-id",
"apiKey": "YOUR_API_KEY",
"baseUrl": "https://your-api-endpoint.com/v1"
}
}
}
Configuration
Profile Options
| Option | Description | Default |
|---|---|---|
| model | Model identifier | Required |
| autonomy | Permission level | workspace-write |
| apiKey | API key (or environment variable) | - |
| baseUrl | Custom API endpoint | - |
| maxTokens | Maximum output tokens | 8192 |
| temperature | Sampling randomness (0-1) | 0.7 |
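Putting the options together, a fully specified profile might look like this (the profile name, model ID, key, and endpoint are placeholders):

```json
{
  "FULL_EXAMPLE": {
    "DROID": {
      "autonomy": "workspace-write",
      "model": "your-model-id",
      "apiKey": "YOUR_API_KEY",
      "baseUrl": "https://your-api-endpoint.com/v1",
      "maxTokens": 8192,
      "temperature": 0.7
    }
  }
}
```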
Autonomy Levels
| Level | Capabilities |
|---|---|
| workspace-write | Read/write workspace files |
| skip-permissions-unsafe | Full system access |
Environment Variables
# Set API keys via environment
export GEMINI_API_KEY="your-key"
export OPENAI_API_KEY="your-key"
export GLM_API_KEY="your-key"
Usage
From CLI
# Use specific profile
friday-dev run --task 123 --agent droid --profile GEMINI_2_5_PRO
# Use custom profile
friday-dev run --task 123 --agent droid --profile MY_CUSTOM_AGENT
From UI
- Open task
- Click "Run Agent"
- Select "Droid"
- Choose profile
- Start
Supported Models
Cloud Providers
| Provider | Models |
|---|---|
| Google | Gemini 2.5 Pro, Gemini 2.5 Flash |
| OpenAI | GPT-5, GPT-4 Turbo, GPT-4o |
| Anthropic | Claude 3.5 Sonnet, Claude 3 Opus |
| Zhipu | GLM-4.7, GLM-4 |
| Alibaba | Qwen-Coder |
Self-Hosted
| System | Models |
|---|---|
| Ollama | Llama, Mistral, CodeLlama |
| vLLM | Any compatible model |
| LocalAI | Any GGUF model |
Self-Hosted Setup
Ollama
# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh
# Pull a coding model
ollama pull codellama
# Configure Friday Dev
{
"LOCAL_CODELLAMA": {
"DROID": {
"autonomy": "workspace-write",
"model": "codellama",
"baseUrl": "http://localhost:11434"
}
}
}
vLLM
# Start vLLM server
python -m vllm.entrypoints.openai.api_server \
--model codellama/CodeLlama-34b \
--port 8000
{
"VLLM_CODELLAMA": {
"DROID": {
"autonomy": "workspace-write",
"model": "codellama/CodeLlama-34b",
"baseUrl": "http://localhost:8000/v1"
}
}
}
MCP Integration
Droid supports Model Context Protocol:
{
"mcp": {
"enabled": true,
"servers": [
{
"name": "filesystem",
"command": "mcp-filesystem",
"args": ["--root", "/workspace"]
}
]
}
}
Creating Custom Agents
Step 1: Define Profile
{
"MY_AGENT": {
"DROID": {
"autonomy": "workspace-write",
"model": "my-model",
"systemPrompt": "You are a helpful coding assistant..."
}
}
}
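Most chat-completion APIs accept the system prompt as the first message of the request. Assuming Droid follows the common OpenAI-style message format (an assumption, not confirmed by these docs), the payload it builds from systemPrompt might look like this:

```python
def build_messages(system_prompt: str, task: str) -> list[dict]:
    """Assemble an OpenAI-style message list from a profile's systemPrompt.

    This mirrors the widely used chat-completions shape; whether Droid
    sends exactly this structure is an assumption.
    """
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": task},
    ]

messages = build_messages(
    "You are a helpful coding assistant...",
    "Fix the failing test in task 123.",
)
print(messages[0]["role"])  # system
```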
Step 2: Add to Configuration
Edit ~/.friday-dev/profiles.json:
{
"profiles": {
"MY_AGENT": { ... }
}
}
Step 3: Use the Agent
friday-dev run --task 123 --agent droid --profile MY_AGENT
Troubleshooting
Model Not Responding
- Check API endpoint is reachable
- Verify API key is valid
- Check model name is correct
- Review error logs
Poor Performance
- Try a different model
- Increase maxTokens
- Adjust temperature
- Add more context to the task
Connection Issues
- Check network/firewall
- Verify base URL
- Test API endpoint directly
- Check rate limits
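To test an API endpoint directly, a small reachability check like the following can help separate network failures from model or auth problems. The URL is a placeholder and the helper is illustrative; any HTTP response (even 401) means the server is reachable, while connection-level errors (DNS, refused, timeout) mean it is not.

```python
import urllib.error
import urllib.request

def check_endpoint(url: str, timeout: float = 5.0) -> tuple[bool, str]:
    """Return (reachable, detail) for an HTTP endpoint.

    Any HTTP response counts as reachable; only connection-level
    failures (DNS, connection refused, timeout) count as down.
    """
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return True, f"HTTP {resp.status}"
    except urllib.error.HTTPError as e:
        return True, f"HTTP {e.code}"  # server answered, e.g. 401 without a key
    except (urllib.error.URLError, OSError) as e:
        return False, str(e)

# Placeholder URL: substitute your profile's baseUrl.
ok, detail = check_endpoint("http://localhost:1/v1/models")
print(ok, detail)
```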
Best Practices
- Start with presets - Use built-in profiles first
- Test thoroughly - Verify custom profiles work
- Use appropriate autonomy - Don't over-permission
- Monitor costs - Different models have different pricing
- Keep profiles versioned - Track configuration changes
Next Steps
- AI Agents Overview - Compare all agents
- Configuration - Setup guide
- CLI Reference - Command line usage