
Alfred Media Organizer 🎬

An AI-powered agent for managing your local media library with natural language. Search, download, and organize movies and TV shows effortlessly through a conversational interface.

Python 3.14 · Poetry · License: MIT · Code style: ruff

Features

  • 🤖 Natural Language Interface — Talk to your media library in plain language
  • 🔍 Smart Search — Find movies and TV shows via TMDB with rich metadata
  • 📥 Torrent Integration — Search and download via qBittorrent
  • 🧠 Contextual Memory — Remembers your preferences and conversation history
  • 📁 Auto-Organization — Keeps your media library tidy and well-structured
  • 🌐 OpenAI-Compatible API — Works with any OpenAI-compatible client
  • 🖥️ LibreChat Frontend — Beautiful web UI included out of the box
  • 🔒 Secure by Default — Auto-generated secrets and encrypted credentials

🏗️ Architecture

Built with Domain-Driven Design (DDD) principles for clean separation of concerns:

alfred/
├── agent/              # AI agent orchestration
│   ├── llm/            # LLM clients (Ollama, DeepSeek)
│   └── tools/          # Tool implementations
├── application/        # Use cases & DTOs
│   ├── movies/         # Movie search use cases
│   ├── torrents/       # Torrent management
│   └── filesystem/     # File operations
├── domain/             # Business logic & entities
│   ├── movies/         # Movie entities
│   ├── tv_shows/       # TV show entities
│   └── subtitles/      # Subtitle entities
└── infrastructure/     # External services & persistence
    ├── api/            # External API clients (TMDB, qBittorrent)
    ├── filesystem/     # File system operations
    └── persistence/    # Memory & repositories

See docs/architecture_diagram.md for detailed architectural diagrams.
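The layering above can be illustrated with a minimal sketch (all class names here are hypothetical, not Alfred's actual code): the domain defines a plain entity, the application layer depends only on an abstract port, and infrastructure supplies the concrete adapter (in Alfred's case, a TMDB client):

```python
from dataclasses import dataclass
from typing import Protocol

# domain/movies: a plain entity with no external dependencies
@dataclass(frozen=True)
class Movie:
    title: str
    year: int
    tmdb_id: int

# application/movies: a use case that depends on an abstract port, not an API
class MovieSearchPort(Protocol):
    def search(self, query: str) -> list[Movie]: ...

class SearchMoviesUseCase:
    def __init__(self, port: MovieSearchPort) -> None:
        self._port = port

    def execute(self, query: str) -> list[Movie]:
        return self._port.search(query)

# infrastructure/api: a concrete adapter (a real one would call TMDB over HTTP)
class InMemoryMovieSearch:
    def __init__(self, movies: list[Movie]) -> None:
        self._movies = movies

    def search(self, query: str) -> list[Movie]:
        return [m for m in self._movies if query.lower() in m.title.lower()]

catalog = InMemoryMovieSearch([Movie("Inception", 2010, 27205)])
use_case = SearchMoviesUseCase(catalog)
print([m.title for m in use_case.execute("incep")])  # ['Inception']
```

Because the use case only sees the port, the infrastructure adapter can be swapped (or faked in tests) without touching domain or application code.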

🚀 Quick Start

Prerequisites

  • Python 3.14+ (required)
  • Poetry (dependency manager)
  • Docker & Docker Compose (recommended for full stack)
  • API Keys:
    • TMDB API key (create one at themoviedb.org)
    • Optional: DeepSeek, OpenAI, Anthropic, or other LLM provider keys

Installation

# Clone the repository
git clone https://github.com/francwa/alfred_media_organizer.git
cd alfred_media_organizer

# Install dependencies
make install

# Bootstrap environment (generates .env with secure secrets)
make bootstrap

# Edit .env with your API keys
nano .env

# Start all services (LibreChat + Alfred + MongoDB + Ollama)
make up

# Or start with specific profiles
make up p=rag,meili      # Include RAG and Meilisearch
make up p=qbittorrent    # Include qBittorrent
make up p=full           # Everything

# View logs
make logs

# Stop all services
make down

The web interface will be available at http://localhost:3080

Running Locally (Development)

# Install dependencies
poetry install

# Start the API server
poetry run uvicorn alfred.app:app --reload --port 8000

⚙️ Configuration

Environment Bootstrap

Alfred uses a smart bootstrap system that:

  1. Generates secure secrets automatically (JWT tokens, database passwords, encryption keys)
  2. Syncs build variables from pyproject.toml (versions, image names)
  3. Preserves existing secrets when re-running (never overwrites your API keys)
  4. Computes database URIs automatically from individual components

# First time setup
make bootstrap

# Re-run after updating pyproject.toml (secrets are preserved)
make bootstrap

Configuration File (.env)

The .env file is generated from .env.example with secure defaults:

# --- CORE SETTINGS ---
HOST=0.0.0.0
PORT=3080
MAX_HISTORY_MESSAGES=10
MAX_TOOL_ITERATIONS=10

# --- LLM CONFIGURATION ---
# Providers: 'local' (Ollama), 'deepseek', 'openai', 'anthropic', 'google'
DEFAULT_LLM_PROVIDER=local

# Local LLM (Ollama - included in Docker stack)
OLLAMA_BASE_URL=http://ollama:11434
OLLAMA_MODEL=llama3.3:latest
LLM_TEMPERATURE=0.2

# --- API KEYS (fill only what you need) ---
TMDB_API_KEY=your-tmdb-key-here        # Required for movie search
DEEPSEEK_API_KEY=                       # Optional
OPENAI_API_KEY=                         # Optional
ANTHROPIC_API_KEY=                      # Optional

# --- SECURITY (auto-generated, don't modify) ---
JWT_SECRET=<auto-generated>
JWT_REFRESH_SECRET=<auto-generated>
CREDS_KEY=<auto-generated>
CREDS_IV=<auto-generated>

# --- DATABASES (auto-generated passwords) ---
MONGO_PASSWORD=<auto-generated>
POSTGRES_PASSWORD=<auto-generated>

Security Keys

Security keys are defined in pyproject.toml and generated automatically:

[tool.alfred.security]
jwt_secret = "32:b64"           # 32 bytes, base64 URL-safe
jwt_refresh_secret = "32:b64"
creds_key = "32:hex"            # 32 bytes, hexadecimal (AES-256)
creds_iv = "16:hex"             # 16 bytes, hexadecimal (AES IV)
mongo_password = "16:hex"
postgres_password = "16:hex"

Formats:

  • b64 — Base64 URL-safe (for JWT tokens)
  • hex — Hexadecimal (for encryption keys, passwords)
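The `size:format` specs above map naturally onto Python's `secrets` module. A minimal sketch of how such a spec could be expanded (an illustration of the idea, not the actual bootstrap code):

```python
import secrets

def generate_secret(spec: str) -> str:
    """Expand an 'N:fmt' spec (e.g. '32:b64') into a random secret."""
    size, fmt = spec.split(":")
    n = int(size)
    if fmt == "b64":
        return secrets.token_urlsafe(n)  # base64 URL-safe text from n random bytes
    if fmt == "hex":
        return secrets.token_hex(n)      # 2 hex characters per random byte
    raise ValueError(f"unknown secret format: {fmt}")

print(len(generate_secret("16:hex")))  # 32 hex characters for 16 bytes
```

`secrets` draws from the OS CSPRNG, which is why re-running bootstrap must preserve existing values: each call produces a fresh, unrelated secret.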

🐳 Docker Services

Service Architecture

┌─────────────────────────────────────────────────────────────┐
│                     alfred-net (bridge)                      │
├─────────────────────────────────────────────────────────────┤
│                                                              │
│  ┌──────────────┐    ┌──────────────┐    ┌──────────────┐  │
│  │  LibreChat   │───▶│    Alfred    │───▶│   MongoDB    │  │
│  │   :3080      │    │   (core)     │    │   :27017     │  │
│  └──────────────┘    └──────────────┘    └──────────────┘  │
│         │                   │                               │
│         │                   ▼                               │
│         │            ┌──────────────┐                       │
│         │            │    Ollama    │                       │
│         │            │   (local)    │                       │
│         │            └──────────────┘                       │
│         │                                                   │
│  ┌──────┴───────────────────────────────────────────────┐  │
│  │              Optional Services (profiles)             │  │
│  ├──────────────┬──────────────┬──────────────┬─────────┤  │
│  │ Meilisearch  │  RAG API     │  VectorDB    │qBittor- │  │
│  │   :7700      │   :8000      │   :5432      │  rent   │  │
│  │  [meili]     │   [rag]      │   [rag]      │[qbit..] │  │
│  └──────────────┴──────────────┴──────────────┴─────────┘  │
│                                                              │
└─────────────────────────────────────────────────────────────┘

Docker Profiles

Profile       Services                             Use Case
(default)     LibreChat, Alfred, MongoDB, Ollama   Basic setup
meili         + Meilisearch                        Fast search
rag           + RAG API, VectorDB                  Document retrieval
qbittorrent   + qBittorrent                        Torrent downloads
full          All services                         Complete setup

# Start with specific profiles
make up p=rag,meili
make up p=full

Docker Commands

make up              # Start containers (default profile)
make up p=full       # Start with all services
make down            # Stop all containers
make restart         # Restart containers
make logs            # Follow logs
make ps              # Show container status
make shell           # Open bash in Alfred container
make build           # Build production image
make build-test      # Build test image

🛠️ Available Tools

The agent has access to these tools for interacting with your media library:

Tool                         Description
find_media_imdb_id           Search for movies/TV shows on TMDB by title
find_torrent                 Search for torrents across multiple indexers
get_torrent_by_index         Get detailed info about a specific torrent result
add_torrent_by_index         Download a torrent by its index in search results
add_torrent_to_qbittorrent   Add a torrent via magnet link directly
set_path_for_folder          Configure folder paths for media organization
list_folder                  List contents of a folder
set_language                 Set preferred language for searches

💬 Usage Examples

Via Web Interface (LibreChat)

Navigate to http://localhost:3080 and start chatting:

You: Find Inception in 1080p
Alfred: I found 3 torrents for Inception (2010):
        1. Inception.2010.1080p.BluRay.x264 (150 seeders) - 2.1 GB
        2. Inception.2010.1080p.WEB-DL.x265 (80 seeders) - 1.8 GB
        3. Inception.2010.1080p.REMUX (45 seeders) - 25 GB

You: Download the first one
Alfred: ✓ Added to qBittorrent! Download started.
        Saving to: /downloads/Movies/Inception (2010)/

You: What's downloading right now?
Alfred: You have 1 active download:
        - Inception.2010.1080p.BluRay.x264 (45% complete, ETA: 12 min)

Via API

# Health check
curl http://localhost:8000/health

# Chat with the agent (OpenAI-compatible)
curl -X POST http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "alfred",
    "messages": [
      {"role": "user", "content": "Find The Matrix 4K"}
    ]
  }'

# List available models
curl http://localhost:8000/v1/models

# View memory state (debug)
curl http://localhost:8000/memory/state

# Clear session memory
curl -X POST http://localhost:8000/memory/clear-session

Via OpenWebUI or Other Clients

Alfred is compatible with any OpenAI-compatible client:

  1. Add as OpenAI-compatible endpoint: http://localhost:8000/v1
  2. Model name: alfred
  3. No API key required (or use any placeholder)
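The same endpoint can be driven programmatically. A minimal stdlib-only sketch that builds an OpenAI-style request body and (commented out) posts it to a locally running Alfred instance:

```python
import json

def chat_payload(content: str, model: str = "alfred", stream: bool = False) -> dict:
    """Build an OpenAI-style chat completion request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": content}],
        "stream": stream,
    }

body = chat_payload("Find The Matrix 4K")
print(json.dumps(body, indent=2))

# With the stack running, POST it to Alfred:
# import urllib.request
# req = urllib.request.Request(
#     "http://localhost:8000/v1/chat/completions",
#     data=json.dumps(body).encode(),
#     headers={"Content-Type": "application/json"},
# )
# print(urllib.request.urlopen(req).read().decode())
```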

🧠 Memory System

Alfred uses a three-tier memory system for context management:

Long-Term Memory (LTM)

  • Persistent — Saved to JSON files
  • Contents: Configuration, user preferences, media library state
  • Survives: Application restarts

Short-Term Memory (STM)

  • Session-based — Stored in RAM
  • Contents: Conversation history, current workflow state
  • Cleared: On session end or restart

Episodic Memory

  • Transient — Stored in RAM
  • Contents: Search results, active downloads, recent errors
  • Cleared: Frequently, after task completion
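Conceptually, the three tiers behave like a single object where only LTM round-trips through JSON on disk, while STM and episodic entries live in process memory. A simplified sketch of that idea (names here are illustrative, not Alfred's internal API):

```python
import json
from pathlib import Path

class TieredMemory:
    def __init__(self, ltm_path: Path) -> None:
        self._ltm_path = ltm_path
        # LTM: loaded from disk if present, survives restarts
        self.ltm: dict = json.loads(ltm_path.read_text()) if ltm_path.exists() else {}
        self.stm: list[dict] = []   # conversation history (RAM only)
        self.episodic: dict = {}    # search results, active downloads (RAM only)

    def save_ltm(self) -> None:
        self._ltm_path.write_text(json.dumps(self.ltm, indent=2))

    def clear_session(self) -> None:
        """What POST /memory/clear-session conceptually does: drop STM + episodic."""
        self.stm.clear()
        self.episodic.clear()

mem = TieredMemory(Path("data/memory.json"))
mem.stm.append({"role": "user", "content": "Find Inception"})
mem.episodic["search_results"] = ["Inception.2010.1080p.BluRay.x264"]
mem.clear_session()
print(len(mem.stm), len(mem.episodic))  # 0 0
```

Note that `clear_session` leaves `ltm` untouched, which matches the table above: preferences and configuration persist while conversational state is disposable.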

🧪 Development

Project Setup

# Install all dependencies (including dev)
poetry install

# Install pre-commit hooks
make install-hooks

# Run the development server
poetry run uvicorn alfred.app:app --reload

Running Tests

# Run all tests (parallel execution)
make test

# Run with coverage report
make coverage

# Run specific test file
poetry run pytest tests/test_agent.py -v

# Run specific test
poetry run pytest tests/test_config_loader.py::TestBootstrapEnv -v

Code Quality

# Lint and auto-fix
make lint

# Format code
make format

# Clean build artifacts
make clean

Adding a New Tool

  1. Create the tool function in alfred/agent/tools/:
# alfred/agent/tools/api.py
from typing import Any

def my_new_tool(param: str) -> dict[str, Any]:
    """
    Short description of what this tool does.
    
    This will be shown to the LLM to help it decide when to use this tool.
    """
    memory = get_memory()
    
    # Your implementation here
    result = do_something(param)
    
    return {
        "status": "success",
        "data": result
    }
  2. Register in the registry (alfred/agent/registry.py):
tool_functions = [
    # ... existing tools ...
    api_tools.my_new_tool,  # Add your tool here
]

The tool will be automatically registered with its parameters extracted from the function signature.
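The "parameters extracted from the function signature" step can be implemented with `inspect` and type hints. A simplified sketch of that idea (not the actual registry code, and the `my_new_tool` below is the hypothetical example from above):

```python
import inspect

def tool_schema(fn) -> dict:
    """Derive a JSON-schema-like parameter spec from a function's signature."""
    type_map = {str: "string", int: "integer", float: "number", bool: "boolean"}
    properties = {
        name: {"type": type_map.get(p.annotation, "string")}
        for name, p in inspect.signature(fn).parameters.items()
    }
    return {
        "name": fn.__name__,
        "description": (fn.__doc__ or "").strip(),  # the docstring shown to the LLM
        "parameters": {"type": "object", "properties": properties},
    }

def my_new_tool(param: str) -> dict:
    """Short description of what this tool does."""
    return {"status": "success", "data": param}

schema = tool_schema(my_new_tool)
print(schema["name"], schema["parameters"]["properties"])
```

This is why the docstring matters: it becomes the tool description the LLM uses to decide when to call the tool.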

Version Management

# Bump version (must be on main branch)
make patch    # 0.1.7 -> 0.1.8
make minor    # 0.1.7 -> 0.2.0
make major    # 0.1.7 -> 1.0.0

📚 API Reference

Endpoints

GET /health

Health check endpoint.

{
  "status": "healthy",
  "version": "0.1.7"
}

GET /v1/models

List available models (OpenAI-compatible).

{
  "object": "list",
  "data": [
    {
      "id": "alfred",
      "object": "model",
      "owned_by": "alfred"
    }
  ]
}

POST /v1/chat/completions

Chat with the agent (OpenAI-compatible).

Request:

{
  "model": "alfred",
  "messages": [
    {"role": "user", "content": "Find Inception"}
  ],
  "stream": false
}

Response:

{
  "id": "chatcmpl-xxx",
  "object": "chat.completion",
  "created": 1234567890,
  "model": "alfred",
  "choices": [{
    "index": 0,
    "message": {
      "role": "assistant",
      "content": "I found Inception (2010)..."
    },
    "finish_reason": "stop"
  }]
}

GET /memory/state

View full memory state (debug endpoint).

POST /memory/clear-session

Clear session memories (STM + Episodic).

🔧 Troubleshooting

Agent doesn't respond

  1. Check API keys in .env
  2. Verify LLM provider is running:
    # For Ollama
    docker logs alfred-ollama
    
    # Check if model is pulled
    docker exec alfred-ollama ollama list
    
  3. Check Alfred logs: docker logs alfred-core

qBittorrent connection failed

  1. Verify qBittorrent is running: docker ps | grep qbittorrent
  2. Check Web UI is enabled in qBittorrent settings
  3. Verify credentials in .env:
    QBITTORRENT_URL=http://qbittorrent:16140
    QBITTORRENT_USERNAME=admin
    QBITTORRENT_PASSWORD=<check-your-env>
    

Database connection issues

  1. Check MongoDB is healthy: docker logs alfred-mongodb
  2. Verify credentials match in .env
  3. Try restarting: make restart

Memory not persisting

  1. Check data/ directory exists and is writable
  2. Verify volume mounts in docker-compose.yaml
  3. Check file permissions: ls -la data/

Bootstrap fails

  1. Ensure .env.example exists
  2. Check pyproject.toml has required sections:
    [tool.alfred.settings]
    [tool.alfred.security]
    
  3. Run manually: python scripts/bootstrap.py

Tests failing

  1. Update dependencies: poetry install
  2. Check Python version: python --version (needs 3.14+)
  3. Run specific failing test with verbose output:
    poetry run pytest tests/test_failing.py -v --tb=long
    

🤝 Contributing

Contributions are welcome! Please follow these steps:

  1. Fork the repository
  2. Create a feature branch: git checkout -b feature/my-feature
  3. Make your changes
  4. Run tests: make test
  5. Run linting: make lint && make format
  6. Commit: git commit -m "feat: add my feature"
  7. Push: git push origin feature/my-feature
  8. Create a Pull Request

Commit Convention

We use Conventional Commits:

  • feat: New feature
  • fix: Bug fix
  • docs: Documentation
  • refactor: Code refactoring
  • test: Adding tests
  • chore: Maintenance

📖 Documentation

📄 License

MIT License — see LICENSE file for details.

🙏 Acknowledgments

📬 Support


Made with ❤️ by Francwa
