mirai


ミライ

Minimalist Async Evaluation Framework for R

→ Event-driven core with microsecond messaging

→ Scale from laptop to HPC and cloud — add or remove compute on the fly

→ Built for production — bounded queues, cancellation, distributed tracing



Installation

install.packages("mirai")

Quick Start

library(mirai)
daemons(6)

# Async — non-blocking, returns immediately
m <- mirai({ Sys.sleep(1); mean(rnorm(1e6)) })
unresolved(m)
#> [1] TRUE

# Parallel map with progress, flattened (m runs concurrently)
mirai_map(1:9, \(x) { Sys.sleep(0.5); x^2 })[.progress, .flat]
#> [1]  1  4  9 16 25 36 49 64 81

# Collect — m finished during the map
m[]
#> [1] 0.0005734454

daemons(0)
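Daemons evaluate expressions in their own processes, so objects from the host session do not travel automatically: pass them as named arguments to mirai(), which places them in the evaluation environment. A minimal sketch:

```r
library(mirai)
daemons(1)

# Objects from the host session are passed explicitly as named arguments
x <- 10
m <- mirai(x + y, x = x, y = 2)
m[]
#> [1] 12

daemons(0)
```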

Architecture

mirai() sends tasks to daemons — persistent R worker processes. The host listens at a URL; daemons dial in and pull work via an in-process dispatcher thread that handles scheduling, cancellation, and bounded queues. Add or remove daemons at any time, and direct tasks to different compute profiles (CPU pool, GPU pool, remote cluster) from the same session.

Hub architecture diagram showing compute profiles with daemons connecting to host
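Compute profiles are addressed by name via the .compute argument; the profile names below ("cpu", "gpu") are purely illustrative:

```r
library(mirai)

# Independent compute profiles in one session, each with its own daemons
daemons(4, .compute = "cpu")
daemons(1, .compute = "gpu")

# Direct a task to a specific profile
m <- mirai(Sys.getpid(), .compute = "gpu")
m[]

# Shut down each profile independently
daemons(0, .compute = "cpu")
daemons(0, .compute = "gpu")
```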

Round-trip latency stays in the microseconds:

daemons(1)
bench::mark(mirai(1)[])
#> # A tibble: 1 × 6
#>   expression      min   median `itr/sec` mem_alloc `gc/sec`
#>   <bch:expr> <bch:tm> <bch:tm>     <dbl> <bch:byt>    <dbl>
#> 1 mirai(1)[]     67µs   97.3µs     9868.    9.68KB     2.01
daemons(0)

Deploy

| Where | Setup |
|---|---|
| Local machine | daemons(n) |
| SSH (direct or tunnelled) | ssh_config() |
| HPC scheduler (Slurm, SGE, Torque/PBS, LSF) | cluster_config() |
| HTTP API (Posit Workbench, custom) | http_config() |
| Anywhere else | remote_config() |
daemons(
  n = 6,
  url = host_url(tls = TRUE),
  remote = cluster_config(options = "#SBATCH --mem=10G")
)
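On the same pattern, an SSH deployment might look like the following sketch (the remote address is a placeholder):

```r
daemons(
  n = 2,
  url = host_url(tls = TRUE),
  remote = ssh_config(remotes = "ssh://10.75.32.90")
)
```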

See the reference vignette for the full deployment guide.

What’s inside

Across the R stack

R, Shiny, plumber2, tidyverse, purrr, tidymodels, tune, ragnar, targets, crew, Arrow, torch

mirai has become the shared async layer for the R ecosystem. It is the recommended async backend for Shiny and the only one for plumber2; the engine behind purrr::in_parallel() and, through crew, targets pipelines; and the first official alternative communications backend for base R’s parallel package.
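A mirai can be converted to a promise, so it composes with the promises package outside of any framework. A minimal sketch (not Shiny-specific):

```r
library(mirai)
library(promises)
daemons(1)

# Promise pipe: the callback runs when the mirai resolves,
# without blocking the session
mirai({ Sys.sleep(1); "async result" }) %...>% cat("\n")

# shut down daemons once work is complete
# daemons(0)
```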

Acknowledgements

Will Landau, for being instrumental in shaping the development of the package: from initiating the original request for persistent daemons through to orchestrating robustness testing for the high-performance computing requirements of crew and targets.

Joe Cheng, for integrating the ‘promises’ method so that mirai works seamlessly within Shiny, and for prototyping event-driven promises.

Luke Tierney of R Core, for discussions on L’Ecuyer-CMRG streams to ensure statistical independence in parallel processing, and for reviewing mirai’s implementation as the first ‘alternative communications backend for R’.

Travers Ching, for the novel idea behind extending the original custom serialization support in the package.

Hadley Wickham, Henrik Bengtsson, Daniel Falbel, and Kirill Müller for many deep insights and discussions.

mirai | nanonext | CRAN HPC Task View

AI coding agents: the r-lib agent skill from the posit-dev-skills plugin provides mirai-specific guidance.

Please note that this project is released with a Contributor Code of Conduct. By participating in this project you agree to abide by its terms.