Parallel Processing (Beta)

Join our Discord community to discuss this feature and get help!

The lingo.dev run command enables parallel processing of localization tasks, significantly accelerating localization workflows for large projects.

Usage

```bash
npx lingo.dev@latest run [options]
```

Options

All of the following options are optional:

| Option | Description |
| --- | --- |
| `--concurrency <number>` | Set the number of parallel workers (default: 10) |
| `--locale <locale>` | Process specific locale(s) only |
| `--bucket <bucket>` | Process specific bucket(s) only |
| `--file <file>` | Process only files matching a pattern |
| `--key <key>` | Process a specific translation key only |
| `--force` | Ignore the lockfile and process all keys |

Note: Avoid setting extremely high concurrency values, as you may hit Lingo.dev API rate limits.
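
These filters can be combined. For example, to process only the Spanish locale within a single bucket using four workers (the locale and bucket names here are placeholders for your own configuration):

```bash
npx lingo.dev@latest run --locale es --bucket json --concurrency 4
```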

How It Works

Lingo.dev's parallel processing architecture distributes localization tasks across multiple concurrent workers, dramatically reducing the time required to localize large projects. This system is designed to prevent file corruption and race conditions while maximizing throughput.

Planning Phase

The process begins with a comprehensive analysis of your project:

```bash
npx lingo.dev@latest run
```

During planning, the system:

  1. Analyzes Configuration: Scans your i18n.json to identify buckets, locales, and file patterns (see the example after this list)
  2. Creates Tasks: Generates individual localization tasks for each target locale and file pattern
  3. Prepares for Execution: Organizes tasks for efficient distribution to worker processes
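
For reference, a minimal i18n.json might look like the following; the bucket type, file paths, and locales are illustrative and will differ per project:

```json
{
  "locale": {
    "source": "en",
    "targets": ["es", "fr"]
  },
  "buckets": {
    "json": {
      "include": ["locales/[locale].json"]
    }
  }
}
```

With a configuration like this, the planner creates one task per target locale and matched file, e.g. es for locales/es.json and fr for locales/fr.json.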

Worker Pool Architecture

The core of the parallel processing system is a sophisticated worker pool:

  • Dynamic Worker Allocation: Creates workers based on available system resources (default: 10, configurable with --concurrency)
  • Task Distribution Algorithm: Evenly assigns tasks to workers using modulo-based distribution (see the sketch below)
  • Progress Tracking: Each worker reports real-time progress for its assigned tasks
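
As a rough sketch of modulo-based distribution (illustrative TypeScript, not the actual lingo.dev source), task i lands on worker i % workerCount:

```ts
type Task = { locale: string; file: string };

// Assign task i to worker i % workerCount so tasks spread evenly
// across the pool regardless of how many tasks there are.
function distributeTasks(tasks: Task[], workerCount: number): Task[][] {
  const queues: Task[][] = Array.from({ length: workerCount }, () => []);
  tasks.forEach((task, i) => queues[i % workerCount].push(task));
  return queues;
}

// With 5 tasks and 2 workers, worker 0 receives tasks 0, 2, and 4,
// and worker 1 receives tasks 1 and 3.
```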

Concurrency Management & Race Condition Prevention

The system employs two distinct concurrency limiters to maximize throughput while preventing file corruption:

  1. Localization Limiter: Controls the number of simultaneous API calls to the localization engine, allowing multiple tasks to be processed in parallel

  2. I/O Limiter: Ensures file system operations occur sequentially to prevent race conditions when multiple workers need to access the same files (see the sketch below)
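
A minimal sketch of this two-limiter pattern, assuming a p-limit-style concurrency helper and hypothetical translate/writeResult functions (the actual implementation may differ):

```ts
import pLimit from "p-limit";

type Task = { locale: string; file: string };

const localizationLimit = pLimit(10); // up to 10 translation calls in flight
const ioLimit = pLimit(1);            // file operations run one at a time

// Hypothetical stand-ins for the real engine:
declare function translate(task: Task): Promise<string>;
declare function writeResult(task: Task, content: string): Promise<void>;

async function processTask(task: Task): Promise<void> {
  // Translation calls from many workers may overlap freely...
  const translated = await localizationLimit(() => translate(task));
  // ...but writes are serialized, so two workers can never touch
  // the same file at the same time.
  await ioLimit(() => writeResult(task, translated));
}
```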

This architecture includes several safeguards:

  • Mutex Locks: Synchronize access to shared resources like the lockfile (see the sketch after this list)
  • Atomic File Operations: Ensure file reads/writes complete fully before the next operation begins
  • Transactional Processing: Guarantees that either all changes for a task are applied or none are
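
For instance, lockfile updates can be wrapped in a mutex so that read-modify-write cycles never interleave. This sketch uses the async-mutex package and hypothetical readLockfile/writeLockfile helpers as stand-ins:

```ts
import { Mutex } from "async-mutex";

type Lockfile = { checksums: Record<string, string> };

// Hypothetical lockfile helpers, for illustration only:
declare function readLockfile(): Promise<Lockfile>;
declare function writeLockfile(lockfile: Lockfile): Promise<void>;

const lockfileMutex = new Mutex();

async function recordChecksum(key: string, checksum: string): Promise<void> {
  // Only one worker at a time may run this critical section,
  // so one worker's update can never clobber another's.
  await lockfileMutex.runExclusive(async () => {
    const lockfile = await readLockfile();
    lockfile.checksums[key] = checksum;
    await writeLockfile(lockfile);
  });
}
```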

Best Practices

  1. Optimize Concurrency: Adjust worker count based on your system's capabilities

     ```bash
     npx lingo.dev@latest run --concurrency 16
     ```
  2. Target Specific Files: Use the --file filter for quick iterations on specific content

     ```bash
     npx lingo.dev@latest run --file components/header
     ```
  3. Combine with Caching: Leverage the i18n.lock file so that only changed content is processed

     ```bash
     # First run caches all translations
     npx lingo.dev@latest run

     # Subsequent runs only process changes
     npx lingo.dev@latest run
     ```
  4. Monitor System Resources: The parallel processing engine is designed to be efficient, but very large projects with high concurrency settings may require additional system resources

Compatibility

The run command is compatible with existing i18n.json configuration and i18n.lock files. It supports all the same file formats and features as the standard i18n command, with the added benefit of parallel processing.

Note: The --frozen flag from the i18n command is not currently supported in the parallel processing mode.

Future Enhancements

As this feature is currently in beta, we're actively working on:

  1. Adaptive Concurrency: Automatically adjusting worker count based on system load
  2. Enhanced Reporting: More detailed analytics on localization performance and bottlenecks

We welcome your feedback on the parallel processing feature to help us refine and improve it!

Join our Discord community to share your experiences and suggestions.