Experiments

Satori experiments let you deliver different feature configurations to defined player segments and measure how each performs against your goal metrics.

Use experiments to test a change against a narrow audience before full rollout, validate a pricing configuration against a specific segment, or run sequential phases to refine a configuration without resetting participant assignments. When a phase concludes, promote the winning variant directly to a feature flag.

How it works #

An experiment is built around variants. Each experiment has a control variant that represents the current player experience, and one or more test variants that deliver a different configuration.

Each variant overrides the values of one or more feature flags. You select the flags to test from your existing flag definitions. The experiment wizard exposes each variant’s configuration as a JSON object so you can set the specific parameter values each test group receives.
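A control/test pair might look like the following sketch. The flag name and values here are invented for illustration; they are not part of Satori's schema:

```python
import json

# Hypothetical experiment variants; the flag name ("daily_reward") and
# its values are invented. The control mirrors the live experience,
# while the test variant overrides the flag with new parameter values.
variants = {
    "control": {"daily_reward": {"coins": 100, "streak_bonus": False}},
    "test_a": {"daily_reward": {"coins": 150, "streak_bonus": True}},
}

# The wizard exposes each variant's configuration as a JSON object.
print(json.dumps(variants["test_a"], indent=2))
```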

When the game client requests configuration from the Satori API, Satori resolves the value using this priority order:

Experiment > Live Event > Flag Variants > Default Flag

Players enrolled in an active experiment receive the experiment variant value, which takes precedence over all other sources.
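The resolution order can be sketched as a first-match lookup. This is a minimal illustration of the documented priority order, not Satori's implementation:

```python
# Sketch of the documented priority order: Experiment > Live Event >
# Flag Variants > Default Flag. Sources are checked highest-priority
# first; the first source that defines the flag wins.
def resolve_flag(flag, experiment=None, live_event=None,
                 flag_variant=None, default=None):
    for source in (experiment, live_event, flag_variant):
        if source is not None and flag in source:
            return source[flag]
    return default

# A player enrolled in an active experiment receives the experiment's
# value even when a live event also overrides the same flag.
value = resolve_flag(
    "daily_reward",
    experiment={"daily_reward": 150},
    live_event={"daily_reward": 200},
    default=100,
)
print(value)  # 150
```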

Diagram showing how experiments work in Satori: an experiment contains a control variant representing the current player experience and one or more test variants, each overriding the values of selected feature flags. When the game client requests configuration, Satori resolves which value a player receives using the priority order above.

Key capabilities #

Run sequential phases without starting over: Adjust the variant split, update variant values, or extend the duration in a new phase while keeping the experiment container and accumulated results intact.
Control exactly who participates: Cap the number of participants, set an admission deadline, or lock participation mid-experiment to protect the integrity of your results.
Measure what you're optimizing and watch for side effects: Set a goal metric for the expected outcome of your experiment and add monitor metrics for signals you want to watch for potential side effects.
Keep player assignments stable: Once a player is assigned to a variant, they stay there for the phase regardless of whether their audience membership changes mid-experiment.
Test changes on a targeted audience before full rollout: Scope an experiment to any defined audience, such as churned players or new active users, so only the players you intend are affected.
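The participation controls above (cap, admission deadline, lock) amount to a simple admission check. The function and parameter names below are illustrative, not Satori's API:

```python
import datetime

# Sketch of the admission controls described above. An identity is
# admitted only if participation is not locked, the participant cap is
# not reached, and the admission deadline has not passed.
def can_admit(current_count, cap, now, deadline, locked):
    if locked:
        return False
    if cap is not None and current_count >= cap:
        return False
    if deadline is not None and now > deadline:
        return False
    return True

now = datetime.datetime(2024, 6, 1)
deadline = datetime.datetime(2024, 6, 15)
print(can_admit(999, 1000, now, deadline, locked=False))   # True
print(can_admit(1000, 1000, now, deadline, locked=False))  # False: cap reached
print(can_admit(0, 1000, now, deadline, locked=True))      # False: locked
```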

Audience assignment #

Select the audiences that participate in an experiment and define how identities are split across variants.

For example, you might target a non-spender audience and set a 50/50 split between a control variant and a new-game-mode variant. The split is probabilistic: each identity in the audience has a 50% chance of being assigned to either variant. The resulting groups will be approximately equal, but not guaranteed to be exactly equal.
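A common way to implement this kind of probabilistic split is to hash each identity into a bucket. Whether Satori uses this exact scheme is an assumption, but the observable behavior matches: each identity lands in a variant independently, so groups come out approximately, not exactly, equal:

```python
import hashlib

# Deterministic bucketing sketch: hash (experiment, identity) into one
# of 100 buckets, then walk the cumulative weights to pick a variant.
def assign_variant(identity_id, experiment_id,
                   split=(("control", 50), ("test", 50))):
    digest = hashlib.sha256(f"{experiment_id}:{identity_id}".encode()).digest()
    bucket = int.from_bytes(digest[:4], "big") % 100
    cumulative = 0
    for variant, weight in split:
        cumulative += weight
        if bucket < cumulative:
            return variant
    return split[-1][0]

counts = {"control": 0, "test": 0}
for i in range(10_000):
    counts[assign_variant(f"player-{i}", "exp-1")] += 1
print(counts)  # roughly 5000/5000, not exactly equal
```

Because the hash is deterministic, the same identity always resolves to the same variant within an experiment, which is what makes a sticky 50/50 split possible without storing every assignment up front.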

Diagram showing how identities are split across experiment variants: all players in the game narrow down to the non-spender audience, which is then split 50/50 between a control variant and a new-game-mode test variant.
Handling Audience Membership
Once an identity is assigned to a specific experiment variant, it remains in that variant for the duration of the experiment phase, regardless of whether the audience to which the identity belongs changes in the interim (e.g. from non-spender to spender).
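This stickiness can be sketched as an assign-once lookup: the stored variant always wins over any recomputation, so a mid-phase audience change never moves a player. The names here are illustrative, not Satori's API:

```python
# (experiment_id, identity_id) -> variant, stored on first assignment.
assignments = {}

def get_variant(experiment_id, identity_id, compute):
    key = (experiment_id, identity_id)
    if key not in assignments:
        assignments[key] = compute(identity_id)  # assign exactly once
    return assignments[key]

# The first call assigns based on current audience membership; later
# calls return the stored variant even if recomputing would now differ
# (e.g. the player moved from non-spender to spender mid-phase).
v1 = get_variant("exp-1", "player-42", lambda _id: "test")
v2 = get_variant("exp-1", "player-42", lambda _id: "control")
print(v1 == v2)  # True: assignment is stable for the phase
```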

How Satori is different #

Phase your experiment without starting over #

Satori structures experiments around sequential phases, each with its own time window, audience split, and variant configuration. To learn more about phases, see Sequence phases.
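The phase structure can be pictured as a list of records inside one persistent experiment container. The field names below are assumptions for illustration, not Satori's schema:

```python
from dataclasses import dataclass
import datetime

# Illustrative shape of a phased experiment: each phase carries its own
# time window, audience split, and variant values, while the experiment
# container and accumulated results persist across phases.
@dataclass
class Phase:
    starts: datetime.date
    ends: datetime.date
    split: dict           # variant name -> percentage
    variant_values: dict  # variant name -> flag overrides

phases = [
    Phase(datetime.date(2024, 6, 1), datetime.date(2024, 6, 14),
          {"control": 50, "test": 50},
          {"test": {"daily_reward": 150}}),
    # A follow-up phase adjusts the split and variant values without
    # resetting participant assignments from the first phase.
    Phase(datetime.date(2024, 6, 14), datetime.date(2024, 6, 28),
          {"control": 20, "test": 80},
          {"test": {"daily_reward": 175}}),
]
```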

Measure what you’re optimizing and watch for side effects #

Set a goal metric to track the outcome you expect the experiment to move, and add monitor metrics to catch unplanned signals. You can scope a retention or RoAS report to experiment participants, or send a message to players in the losing variant, without a separate analytics platform or data sync.

Experiment within the data model your game already uses #

Experiment variants can be delivered via feature flags, the configuration your game client already reads. Instead of adding experiment-specific logic to your game code, each variant overrides the values of existing flags.
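From the client's point of view this means flag reads look identical whether or not an experiment is running; the override happens server-side. In the sketch below, a plain dict stands in for the flag payload returned by the Satori API, and the function name is illustrative:

```python
# The client reads a flag the same way in both cases; the experiment
# override is resolved server-side before the payload is returned.
def get_flag(server_flags, name, default):
    return server_flags.get(name, default)

# Without an active experiment, the server returns the default flag
# value; with one, the same client call returns the variant's override.
print(get_flag({"daily_reward": 100}, "daily_reward", 100))  # 100
print(get_flag({"daily_reward": 150}, "daily_reward", 100))  # 150
```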