The Crew Pkg
In the rapidly evolving landscape of R, the line between "script" and "orchestration" has never been thinner. For years, if you needed to run tasks in parallel, manage complex dependencies, or scale a workflow beyond the limits of your local memory, you reached for packages like future, foreach, or targets.

But crew (which stands for Coordinated Resource Execution Worker) isn't just another entry in the parallel-processing catalog. Created by William Landau, the author of the targets package, crew is a fundamental rethink of how R should talk to background jobs. And it changes the game for production-level R code.

The Problem crew Solves (That You Didn't Know You Had)

Traditional parallel backends in R share a common flaw: they are often too "chatty" or too fragile. foreach with doParallel works, but forked backends are unavailable on Windows, and shipping large objects to socket clusters is slow and fragile. future is elegant, but its nested parallelism and persistent-worker logic can be tricky to debug.

crew is not a parallel engine itself. It is a controller specification that leverages two incredibly fast lower-level packages: mirai (for asynchronous task execution) and nanonext (for low-level networking). Setting up a local controller takes just a few lines:
library(crew)

controller <- crew_controller_local(
  name = "my_cluster",
  workers = 4,
  tasks_max = 100  # Auto-restart workers after 100 tasks
)

# Start the workers
controller$start()

That’s it. The controller sits in your main R session. You push tasks to it, and it distributes them to persistent, resilient R sessions running in the background.

# Non-blocking push
controller$push(
  name = "long_compute",
  command = slow_function(data)
)

# Collect results later (pop() returns NULL until a task finishes)
result <- controller$pop()
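What pop() hands back deserves a quick look. Here is a minimal sketch of result handling, assuming the columns crew documents for popped tasks: name, result (a list column), and error, which is NA when the task succeeded:

task <- controller$pop()
if (!is.null(task)) {
  if (is.na(task$error)) {
    value <- task$result[[1]]  # the task's actual return value
  } else {
    message("Task ", task$name, " failed: ", task$error)
  }
}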
Now suppose you need to churn through a long list of files, any one of which might crash its R session. With crew:

controller <- crew_controller_local(workers = 8)
controller$start()

for (file in all_files) {
  controller$push(
    name = file,
    command = process_file(file),
    data = list(file = file)  # ship the file path to the worker explicitly
  )
}

# Block until every task finishes; crew auto-replaces crashed
# workers along the way.
controller$wait()

results <- list()
while (!is.null(task <- controller$pop())) {
  results[[task$name]] <- task$result[[1]]
}
Because workers auto-restart after a memory threshold or crash, the file that causes a segmentation fault only kills its worker. The other seven keep humming along, and a new worker spins up to retry the bad file.

crew is not for every use case. If you are doing interactive, exploratory work where you need to inspect every object in the global environment immediately, stick with lapply or furrr.
Furthermore, crew requires that your worker sessions be fully self-contained. Any library, function, or data object must be loaded or passed explicitly. There is no "magic" global environment inheritance.
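Here is a minimal sketch of what "explicit" means in practice, assuming crew's push() interface with its data argument for shipping objects (a packages argument plays the same role for libraries):

library(crew)

controller <- crew_controller_local(workers = 2)
controller$start()

cutoff <- 0.75  # exists only in the main session

# The worker session knows nothing about `cutoff`, so pass it
# explicitly via `data` rather than relying on globals.
controller$push(
  name = "above_cutoff",
  command = sum(values > cutoff),
  data = list(values = runif(100), cutoff = cutoff)
)

controller$wait()
controller$pop()$result[[1]]
controller$terminate()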
But the real magic happens when you pair crew with targets. In a _targets.R file, changing the controller is a one-line edit:

tar_option_set(
  controller = crew_controller_local(workers = 10)
)

Suddenly, your pipeline is running across a fleet of auto-healing workers without changing a single analysis step.
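For context, here is a hypothetical _targets.R showing where that line lives; process_file() stands in for whatever your pipeline actually does:

# _targets.R
library(targets)
library(crew)

tar_option_set(
  controller = crew_controller_local(workers = 10)
)

list(
  tar_target(files, list.files("data", full.names = TRUE)),
  # Each file becomes a branch, distributed across the crew workers
  tar_target(processed, process_file(files), pattern = map(files))
)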
For HPC users: Replace crew_controller_local() with crew_controller_slurm() from the crew.cluster extension package and define your job submission template. The API remains identical.
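A minimal sketch of that swap, assuming crew.cluster is installed; resource requests (memory, CPUs, walltime) go through the controller's Slurm-specific arguments, which are worth checking against the crew.cluster documentation for your version:

library(crew.cluster)

controller <- crew_controller_slurm(
  name = "genomes",
  workers = 50
)
controller$start()

# From here on, push(), wait(), and pop() behave exactly as they
# do with a local controller.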
For analysts running one-off scripts, the overhead of learning crew might not be worth it. But for data scientists building automated reports, for bioinformaticians processing thousands of genomes, and for production pipelines that must run at 3 AM without failing, crew is quietly becoming the gold standard.
crew is the industrial-grade conveyor belt that the R ecosystem has been missing. It doesn't try to be the flashiest parallel package; instead, it focuses on being the most reliable. And in 2025, that is precisely what robust data science demands.

Quick Start Summary

# Install
install.packages("crew")

# Local usage
library(crew)
ctrl <- crew_controller_local(workers = 4)
ctrl$start()
ctrl$push(name = "sum", command = sum(1:10))
ctrl$wait()               # Block until the task completes
ctrl$pop()$result[[1]]    # Returns 55
ctrl$terminate()




