Speeding up tidySummarizedExperiment through query optimisation and the plyxp backend
Performance optimisation of tidySummarizedExperiment and related benchmark.
Author
Stefano Mangiola
Published
March 22, 2026
tidySummarizedExperiment logo
Contributors: Michael Love, Justin Landis, Pierre-Paul Axisa
The generality of tidySummarizedExperiment makes it easy to interface with several tidyverse packages (e.g. dplyr, tidyr, ggplot2, purrr, plotly). This is possible thanks to its approach of converting SummarizedExperiment objects to tibbles, performing the requested operations, and converting back to the original format. This conversion process introduces substantial overhead when working with large-scale datasets: each operation requires multiple data transformations, with the conversion to tibble format creating memory copies of the entire dataset, followed by the reverse conversion back to SummarizedExperiment. For datasets containing hundreds of samples and tens of thousands of genes, these repeated conversions consume substantial memory and add significant computational overhead to even simple operations such as filtering or grouping.
With the new tidySummarizedExperiment release (v1.19.7), we have introduced optimisations that address these performance limitations. The optimisation is powered by:

- A check of the query domain (assay, colData, rowData), which routes each operation to a specialised implementation.
- plyxp, a tidyomics package developed by Justin Landis and first released as part of Bioconductor 3.20 in October 2024. It uses data-masking functionality from the rlang package to perform efficient operations on SummarizedExperiment objects.
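As a rough intuition for the data-masking approach, here is a base-R analogue (not plyxp's actual API): evaluating an expression with the data itself as the environment lets column names resolve directly, without materialising a tibble copy.

```r
# Base-R analogue of data masking: symbols in the expression resolve
# against the data object itself, so no tibble round-trip is needed.
# (Illustrative only -- plyxp/rlang generalise this idea with quosures.)
d <- list(counts = c(1, 2, 3), libsize = c(10, 10, 10))
scaled <- eval(quote(counts / libsize), envir = d)
scaled
# c(0.1, 0.2, 0.3)
```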
Motivation and design principles
This benchmark supports ongoing work to improve the performance of tidySummarizedExperiment. It shows up to a 30x improvement for operations such as mutate().
The current optimisation is grounded in three principles:
1. Decompose operation series: break mutate(a=..., b=..., c=...) into single operations for simpler handling and clearer routing. Reference implementation in R/mutate.R (decomposition step) at L146.
2. Analyse scope: infer whether each expression targets colData, rowData, assays, or a mix (noting that the current analyser is likely over-engineered and could be simplified). See L149.
3. Route mixed operations via plyxp: when an expression touches multiple slots, prefer the plyxp path for correctness and performance. See L155.
These design choices aim to preserve dimnames, avoid unnecessary tibble round-trips, and provide predictable performance across simple and mixed-slot scenarios.
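The decomposition principle can be sketched in a few lines of base R. Note that `decompose_mutate()` and its `eval()`-based application are hypothetical stand-ins, not the package's actual `decompose_tidy_operation()` machinery; the point is that each expression is applied on its own, so downstream scope analysis only ever sees one expression at a time.

```r
# Hypothetical sketch: rewrite a multi-expression mutate() as a chain of
# single-expression operations applied sequentially.
decompose_mutate <- function(.data, ...) {
    # Capture each named expression unevaluated
    exprs <- as.list(substitute(list(...)))[-1]
    for (nm in names(exprs)) {
        # Apply one expression at a time; later expressions can see
        # columns created by earlier ones, as in dplyr::mutate()
        .data[[nm]] <- eval(exprs[[nm]], envir = .data)
    }
    .data
}

df <- data.frame(x = 1:3)
out <- decompose_mutate(df, a = x + 1, b = a * 2)
# out$a is 2 3 4; out$b is 4 6 8
```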
Example of code optimisation
This was the mutate() method before optimisation. The previous implementation relied on the round-trip as_tibble() |> dplyr::mutate() |> update_SE_from_tibble(.data).
The function update_SE_from_tibble interprets the input tibble and converts it back to a SummarizedExperiment. Although this step provides great generality and flexibility, it is particularly expensive because it must infer whether each column is sample-wise or feature-wise.
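To see why this inference is costly, consider a minimal sketch of the kind of check involved (`is_sample_wise()` is a hypothetical helper, not the package's internal code): a column can only be moved to colData if it has exactly one distinct value per sample, which requires a pass over the full long-format table for every candidate column.

```r
# Hypothetical check: is a long-format column sample-wise (colData
# candidate) or does it vary per cell (assay candidate)?
is_sample_wise <- function(tbl, column, sample_key = ".sample") {
    # One distinct value per sample => the column belongs in colData
    n_distinct_per_sample <- tapply(
        tbl[[column]], tbl[[sample_key]],
        function(v) length(unique(v))
    )
    all(n_distinct_per_sample == 1L)
}

# Long-format toy data: 2 samples x 3 features
tbl <- data.frame(
    .sample  = rep(c("s1", "s2"), each = 3),
    .feature = rep(c("g1", "g2", "g3"), times = 2),
    dex      = rep(c("trt", "untrt"), each = 3),  # constant per sample
    counts   = c(5, 0, 3, 2, 7, 1)                # varies per cell
)
is_sample_wise(tbl, "dex")     # TRUE  -> colData candidate
is_sample_wise(tbl, "counts")  # FALSE -> assay candidate
```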
Show pre-optimization source
```r
mutate.SummarizedExperiment <- function(.data, ...) {
    # Legacy implementation of mutate() for SummarizedExperiment:
    # - Validates requested edits against special/view-only columns
    # - Performs mutate() via tibble round-trip, then reconstructs the SE

    # Check that we are not modifying a key column
    cols <- enquos(...) |> names()

    # Deprecation of special column names:
    # capture all quoted args to detect deprecated special-column usage
    .cols <- enquos(..., .ignore_empty = "all") %>%
        map(~ quo_name(.x)) %>%
        unlist()
    if (is_sample_feature_deprecated_used(.data, .cols)) {
        # Record deprecated usage into metadata for backward compatibility
        .data <- ping_old_special_column_into_metadata(.data)
    }

    # Identify view-only/special columns (sample/feature keys, etc.)
    # Use a small slice to reduce overhead while probing structure
    special_columns <- get_special_columns(
        # Decrease the size of the dataset
        .data[1:min(100, nrow(.data)), 1:min(20, ncol(.data))]
    ) |>
        c(get_needed_columns(.data))

    # Are any requested targets among special/view-only columns?
    tst <-
        intersect(cols, special_columns) |>
        length() |>
        gt(0)

    if (tst) {
        columns <- special_columns |> paste(collapse = ", ")
        stop(
            "tidySummarizedExperiment says:",
            " you are trying to rename a column that is view only ",
            columns,
            " (it is not present in the colData).",
            " If you want to mutate a view-only column,",
            " make a copy and mutate that one."
        )
    }

    # If Ranges column not in query, prefer faster tibble conversion
    # Skip expanding GRanges columns when not referenced
    skip_GRanges <- get_GRanges_colnames() %in% cols |> not()

    # Round-trip: SE -> tibble -> dplyr::mutate -> SE
    .data |>
        as_tibble(skip_GRanges = skip_GRanges) |>
        dplyr::mutate(...) |>
        update_SE_from_tibble(.data)
}
```
The new implementation captures all the easy cases, such as sample-only and feature-only metadata mutate(). If mutate() is a mixed operation that can be factored into sample- and feature-wise operations, it is handled by plyxp. Otherwise, the general solution is used.
Key components to compare:
- The pre-optimization code always uses a tibble round-trip (as_tibble() |> dplyr::mutate() |> update_SE_from_tibble()).
- The optimized code first analyzes scope (colData, rowData, assay, or mixed) and dispatches to specialized paths.
- The fallback (mutate_via_tibble) still exists for complex cases, preserving generality.
Show post-optimization source
```r
mutate.SummarizedExperiment <- function(.data, ...) {
    # Check if query is composed (multiple expressions)
    if (is_composed("mutate", ...))
        return(decompose_tidy_operation("mutate", ...)(.data))

    # Check for scope and dispatch elegantly
    scope_report <- analyze_query_scope_mutate(.data, ...)
    scope <- scope_report$scope

    result <-
        if (scope == "coldata_only") modify_samples(.data, "mutate", ...)
        else if (scope == "rowdata_only") modify_features(.data, "mutate", ...)
        else if (scope == "assay_only") mutate_assay(.data, ...)
        else if (scope == "mixed") modify_se_plyxp(.data, "mutate", scope_report, ...)
        else mutate_via_tibble(.data, ...)

    # Record latest mutate scope into metadata for testing/introspection
    meta <- S4Vectors::metadata(result)
    if (is.null(meta)) meta <- list()
    meta$latest_mutate_scope_report <- scope_report
    S4Vectors::metadata(result) <- meta

    return(result)
}
```
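The scope analysis can be pictured with a small, self-contained sketch. Here `classify_scope()` and its arguments are illustrative; the real analyze_query_scope_mutate() inspects the SummarizedExperiment object directly rather than taking name vectors.

```r
# Illustrative scope classifier: which slot(s) does an expression touch?
classify_scope <- function(expr, coldata_vars, rowdata_vars, assay_vars) {
    vars <- all.vars(expr)  # every symbol referenced in the expression
    hits <- c(
        coldata = any(vars %in% coldata_vars),
        rowdata = any(vars %in% rowdata_vars),
        assay   = any(vars %in% assay_vars)
    )
    if (sum(hits) > 1) return("mixed")        # spans slots -> plyxp path
    if (hits[["coldata"]]) return("coldata_only")
    if (hits[["rowdata"]]) return("rowdata_only")
    if (hits[["assay"]])   return("assay_only")
    "unknown"                                 # fall back to the tibble path
}

scope <- classify_scope(
    quote(counts * avgLength),
    coldata_vars = c("dex", "avgLength"),
    rowdata_vars = "gene_biotype",
    assay_vars   = "counts"
)
scope
# "mixed"
```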
Benchmarking Overview
This vignette benchmarks a set of mutate(), filter(), select(), and distinct() scenarios (see documentation), comparing performance before and after optimisation. It does so by explicitly checking out specific commits via git worktree, loading each commit's code with devtools::load_all(), running the same scenarios multiple times, and comparing the runtimes with ggplot boxplots.
```r
suppressPackageStartupMessages({
    library(ggplot2)
    library(dplyr)
    library(SummarizedExperiment)
    library(rlang)
    library(devtools)
    library(airway)
    library(microbenchmark)
    library(reactable)
    library(patchwork)
})

load_branch_code <- function(worktree_dir) {
    if (!requireNamespace("devtools", quietly = TRUE))
        stop("Please install devtools to run this vignette.")

    # Debug: print the directory we're looking for
    cat("Looking for worktree directory:", worktree_dir, "\n")
    cat("Directory exists:", dir.exists(worktree_dir), "\n")
    cat("Current working directory:", getwd(), "\n")

    # Check if directory exists
    if (!dir.exists(worktree_dir)) {
        stop(paste("Worktree directory does not exist:", worktree_dir))
    }

    suppressMessages(devtools::load_all(worktree_dir, quiet = TRUE))
}

create_airway_test_se <- function() {
    suppressPackageStartupMessages(library(airway))
    data(airway)
    se <- airway
    se[1:200, ]
}

benchmark_scenarios <- function() {
    list(
        coldata_simple_assignment = quo({ se |> mutate(new_dex = dex) }),
        coldata_arithmetic = quo({ se |> mutate(avgLength_plus_5 = avgLength + 5) }),
        coldata_concat = quo({
            se |> mutate(sample_info = paste(cell, dex, SampleName, sep = "_"))
        }),
        coldata_grouped_mean = quo({
            se |> group_by(dex) |>
                mutate(avgLength_group_mean = mean(avgLength)) |>
                ungroup()
        }),
        assay_simple_assignment = quo({ se |> mutate(counts_copy = counts) }),
        assay_plus_one = quo({ se |> mutate(counts_plus_1 = counts + 1) }),
        assay_log = quo({ se |> mutate(log_counts_manual = log2(counts + 1)) }),
        complex_conditional_coldata = quo({
            se |> mutate(length_group = ifelse(avgLength > mean(avgLength), "longer", "shorter"))
        }),
        complex_nested = quo({
            se |> mutate(complex_category = ifelse(
                dex == "trt" & avgLength > mean(avgLength), "treated_long",
                ifelse(dex == "untrt", "untreated", "other")
            ))
        }),
        mixed_assay_coldata = quo({ se |> mutate(new_counts = counts * avgLength) }),
        multiple_simple_assay = quo({
            se |> mutate(normalized_counts = counts / 1000, sqrt_counts = sqrt(counts))
        }),
        chained_mutates = quo({
            se |> mutate(tmp = avgLength * 2) |>
                mutate(flag = ifelse(tmp > mean(tmp), 1, 0))
        }),

        # Filter benchmarks (scoped and non-rectangular)
        filter_coldata_simple = quo({ se |> filter(dex == "trt") }),
        filter_coldata_numeric = quo({ se |> filter(avgLength > median(avgLength)) }),
        filter_assay_nonrect = quo({ se |> filter(counts > 0) }),

        # Select benchmarks (covering colData-only, rowData-only, assays-only, mixed)
        select_coldata_simple = quo({ se |> select(.sample, dex) }),
        select_rowdata_simple = quo({ se |> select(.feature) }),
        select_assay_only = quo({ se |> select(counts) }),
        select_mixed_keys_counts = quo({ se |> select(.sample, .feature, counts) }),
        select_coldata_wide = quo({ se |> select(.sample, dex, avgLength, SampleName) }),

        # Distinct benchmarks (covering colData-only, rowData-only, assays-only, mixed)
        distinct_coldata_simple = quo({ se |> distinct(dex) }),
        distinct_coldata_multiple = quo({ se |> distinct(dex, avgLength) }),
        distinct_rowdata_simple = quo({ se |> distinct(.feature) }),
        distinct_assay_only = quo({ se |> distinct(counts) }),
        distinct_mixed_keys_counts = quo({ se |> distinct(.sample, .feature, counts) }),
        distinct_coldata_wide = quo({ se |> distinct(.sample, dex, avgLength, SampleName) }),
        distinct_with_keep_all = quo({ se |> distinct(dex, .keep_all = TRUE) }),
        distinct_complex_expression = quo({ se |> distinct(dex, avgLength) })
    )
}

run_one <- function(expr_quo, reps = 5L) {
    se_base <- create_airway_test_se()
    mb <- microbenchmark::microbenchmark(
        eval_tidy(expr_quo),
        times = reps,
        setup = { se <- se_base },  # reuse the same input, avoid recreating inside the timed expr
        control = list(warmup = 2L)
    )
    # microbenchmark returns nanoseconds; convert to milliseconds
    as.numeric(mb$time) / 1e6
}

run_all_scenarios <- function(branch_label, reps = 7L) {
    scenarios <- benchmark_scenarios()
    out <- list()
    for (nm in names(scenarios)) {
        tms <- run_one(scenarios[[nm]], reps = reps)
        out[[length(out) + 1L]] <- data.frame(
            branch = branch_label,
            scenario = nm,
            replicate = seq_along(tms),
            elapsed_ms = tms,
            stringsAsFactors = FALSE
        )
    }
    bind_rows(out)
}

# Parallel version: run each scenario on a separate worker
run_all_scenarios_parallel <- function(branch_label, reps = 20L,
                                       workers = 1L, initializer = NULL) {
    scenarios <- benchmark_scenarios()
    nms <- names(scenarios)
    old_plan <- future::plan()
    on.exit(future::plan(old_plan), add = TRUE)
    future::plan(future::multisession, workers = workers)
    res <- future.apply::future_lapply(nms, function(nm) {
        if (!is.null(initializer)) initializer()
        tms <- run_one(scenarios[[nm]], reps = reps)
        data.frame(
            branch = branch_label,
            scenario = nm,
            replicate = seq_along(tms),
            elapsed_ms = tms,
            stringsAsFactors = FALSE
        )
    }, future.seed = TRUE)
    dplyr::bind_rows(res)
}
```
Run benchmarks on both branches
Show the code
```r
# Worktree directories (already exist in the post directory)
wt_before <- ".__bench_before__"
wt_after <- ".__bench_after__"

# Verify worktrees exist
if (!dir.exists(wt_before)) {
    stop("Worktree directory does not exist: ", wt_before)
}
if (!dir.exists(wt_after)) {
    stop("Worktree directory does not exist: ", wt_after)
}

# Before optimisation (commit 87445757)
load_branch_code(wt_before)
```
Looking for worktree directory: .__bench_before__
Directory exists: TRUE
Current working directory: /Users/a1234450/Documents/GitHub/tidyomicsBlog/posts/2025-10-25-tidySummarizedExperiment-optimization
Show the code
```r
res_before <- run_all_scenarios(branch_label = "before_optimization", reps = 10L)

# After optimisation (commit 9f7c26e)
load_branch_code(wt_after)
```
Looking for worktree directory: .__bench_after__
Directory exists: TRUE
Current working directory: /Users/a1234450/Documents/GitHub/tidyomicsBlog/posts/2025-10-25-tidySummarizedExperiment-optimization
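Once res_before and the corresponding post-optimisation run (res_after) have been collected, the two can be compared per scenario, for example with a median-based speedup summary. Here `summarise_speedup()` is a sketch assuming the data.frame layout produced by run_all_scenarios() above (branch, scenario, replicate, elapsed_ms), not code from the post's repository.

```r
# Sketch: per-scenario median speedup from two benchmark runs
summarise_speedup <- function(res_before, res_after) {
    med_b <- tapply(res_before$elapsed_ms, res_before$scenario, median)
    med_a <- tapply(res_after$elapsed_ms, res_after$scenario, median)
    shared <- intersect(names(med_b), names(med_a))
    data.frame(
        scenario = shared,
        speedup  = as.numeric(med_b[shared] / med_a[shared])
    )
}

# Toy example with two scenarios, three replicates each
rb <- data.frame(scenario = rep(c("a", "b"), each = 3),
                 elapsed_ms = c(10, 10, 10, 30, 30, 30))
ra <- data.frame(scenario = rep(c("a", "b"), each = 3),
                 elapsed_ms = c(5, 5, 5, 10, 10, 10))
summarise_speedup(rb, ra)
# speedups: scenario "a" -> 2x, scenario "b" -> 3x
```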