# Ref
`Ref<A>` is a purely functional mutable variable. It wraps a single value of type `A` and exposes every read and write as an `IO` effect, so mutations stay within the effect system and remain composable, testable, and easy to reason about.
:::tip
Because all access goes through `IO`, a `Ref` is safe to share across concurrent fibers: reads and writes are atomic.
:::
## Creating a Ref
Use `Ref.of(initialValue)` to allocate a ref inside `IO`, preserving referential transparency:
```dart
final IO<Ref<int>> ref = Ref.of(0);
```

## Core operations
| Method | Returns | Description |
|---|---|---|
| `value()` | `IO<A>` | Read the current value |
| `setValue(a)` | `IO<Unit>` | Overwrite the value |
| `update(f)` | `IO<Unit>` | Apply a pure function to the value |
| `updateAndGet(f)` | `IO<A>` | Apply `f` and return the new value |
| `getAndUpdate(f)` | `IO<A>` | Apply `f` and return the old value |
| `getAndSet(a)` | `IO<A>` | Replace the value and return the old one |
| `modify(f)` | `IO<B>` | Atomically update the value and produce a result |
## Basic read / write
```dart
IO<Unit> basics() => Ref.of(0).flatMap(
      (counter) => counter
          .update((n) => n + 1)
          .flatMap((_) => counter.update((n) => n + 1))
          .flatMap((_) => counter.value())
          .flatMap((n) => IO.print('counter: $n')),
    ); // counter: 2
```

## Modifying with a result
`modify` lets you update the value and return something else in a single atomic step. The function receives the current value and returns a tuple of `(newValue, result)`:
```dart
IO<Unit> modifyExample() => Ref.of(<String>[]).flatMap(
      (log) => log
          .modify((msgs) => ([...msgs, 'first'], Unit()))
          .flatMap((_) => log.modify((msgs) => ([...msgs, 'second'], Unit())))
          .flatMap((_) => log.value())
          .flatMap((msgs) => IO.print(msgs.toString())),
    );
```

## Swapping values
`getAndSet` replaces the value and returns what was there before:
```dart
IO<Unit> getAndSetExample() => Ref.of('initial').flatMap(
      (ref) => ref
          .getAndSet('updated')
          .flatMap((prev) => IO.print('was: $prev')) // was: initial
          .flatMap((_) => ref.value())
          .flatMap((cur) => IO.print('now: $cur')),
    ); // now: updated
```

## Concurrent counter
Because every `Ref` operation is atomic, a `Ref<int>` works correctly as a shared counter even when many fibers update it at the same time.

The example below spawns 10 fibers concurrently, each incrementing the counter 100 times, then reads the final value. `start` launches each worker as an independent fiber and returns an `IOFiber` handle; `join` then waits for that fiber to complete before the final value is read.
```dart
/// Spawn [fibers] fibers, each incrementing a shared counter [increments] times.
IO<int> concurrentCounter({int fibers = 10, int increments = 100}) =>
    Ref.of(0).flatMap((counter) {
      // Each worker performs the update action, replicated `increments` times.
      final worker = counter.update((n) => n + 1).replicate_(increments);

      // Start all fibers, join them all, then read the final value.
      return IList.fill(fibers, worker)
          .traverseIO((w) => w.start())
          .flatMap((handles) => handles.traverseIO_((f) => f.join()))
          .flatMap((_) => counter.value());
    });
```

No matter how the fiber scheduler interleaves the increments, the final result is always `fibers × increments` (1,000 by default): the atomicity of `update` makes lost updates impossible.
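For contrast, an increment composed from separate `value()` and `setValue` calls is not atomic: another fiber can slip in between the read and the write, and its increment is then overwritten. A minimal sketch of the two styles (`racyIncrement` and `safeIncrement` are illustrative names, not library API):

```dart
// Racy: the read and the write are two separate IO steps, so an update
// made by a concurrent fiber between them can be silently overwritten.
IO<Unit> racyIncrement(Ref<int> counter) =>
    counter.value().flatMap((n) => counter.setValue(n + 1));

// Safe: update applies the function in a single atomic step.
IO<Unit> safeIncrement(Ref<int> counter) => counter.update((n) => n + 1);
```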
## Real-world scenario: in-memory request cache
A common use for `Ref` is a simple cache that avoids redundant work. The `Ref` holds a `Map` of results; lookups check the map first and only perform the real fetch on a miss, updating the cache atomically before returning.
```dart
/// Stand-in for a real network call.
IO<String> fetchUser(int id) => IO.pure('user-$id');

/// A simple in-memory cache backed by a [Ref].
IO<Unit> requestCacheExample() {
  return Ref.of(<int, String>{}).flatMap((cache) {
    // Look up the cache; fetch and store on miss.
    IO<String> cachedFetch(int id) => cache.value().flatMap(
          (map) => map.containsKey(id)
              ? IO.pure(map[id]!)
              : fetchUser(id).flatMap(
                  (user) =>
                      cache.update((m) => {...m, id: user}).map((_) => user)),
        );

    return cachedFetch(1)
        .flatMap((_) => cachedFetch(1)) // cache hit
        .flatMap((_) => cachedFetch(2))
        .flatMap((_) => cache.value())
        .flatMap((m) => IO.print('cache: $m'));
  });
}
```

Because the cache is a `Ref`, it can be passed to any number of concurrent fibers without risk of conflicting writes: every update is applied atomically. One caveat: the lookup and the store are separate effects, so two fibers that miss on the same key at the same time may both call `fetchUser`. The cache stays consistent, since the second write overwrites the first with the same value, but the fetch itself is not deduplicated.