"Double buffered" (transactionally updated) game state is basically required if you want to avoid strange nondeterministic glitches as entities update based on partially updated world state. Persistent immutable data structures are great for this purpose as they typically attempt to minimize actual copying. UI logic in regular applications also greatly benefits from immutable state, which is indeed what things like React are based on.
Double buffering and determinism are not necessarily related. Determinism requires that everything that could affect the execution of logic is captured as part of the input, and (this is the annoying and error-prone part), that anything that is not captured as part of the input does NOT affect execution of logic.
If execution of logic is always performed in the same order, then the partially updated world state is not a problem for determinism (though it may be for correctness). Double buffering may let you reorder the execution of logic in different ways and still keep it deterministic. This would likely be needed for logic that executes in parallel threads, which is increasingly important for high-performance game engines.
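To make that concrete, here's a minimal Python sketch of the read-old/write-new pattern (the entity shape and movement rule are made up for illustration): every entity reads only the previous frame's snapshot and writes into a fresh one, so processing order can't change the result.

```python
# Double-buffered update: reads go against the old snapshot, writes
# into a new one, so entity update order cannot affect the outcome.
# Entity fields and the blocking rule are illustrative, not from any
# particular engine.

def step(entities):
    """Advance one tick; returns a new list, leaving the old one intact."""
    old = entities  # read-only snapshot of the previous frame
    new = []
    for e in old:
        target = e["x"] + e["vx"]
        # This rule reads OTHER entities' old positions: because every
        # entity sees the same consistent snapshot, the loop could run
        # in any order (or across threads) with identical results.
        blocked = any(o is not e and o["x"] == target for o in old)
        new.append({"x": e["x"] if blocked else target, "vx": e["vx"]})
    return new

# Two entities trying to swap positions: with in-place mutation the
# outcome would depend on which one updates first; here both block.
entities = [{"x": 0.0, "vx": 1.0}, {"x": 1.0, "vx": -1.0}]
entities = step(entities)
```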
Yes, true, "unpredictable" is probably better. Depending on how exactly entity lists are managed, from a player/dev point of view it can seem close to nondeterministic in practice, even if a theoretical 1:1 replay would give identical results.
Another option is to build a set of diffs (Unit::MoveTo, Score::Add, ...) based on the existing (immutably viewed) state, then apply them mutably (possibly checking for conflicts on MoveTo et al.):
    diff = differential(state)
    state.update(diff) # or state = update(state, diff)
    # or state += ∂/∂t(state) if you're feeling overly mathy
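Fleshed out as a toy Python version of that build-then-apply pattern (the diff names and state shape are hypothetical stand-ins):

```python
# Build diffs by reading the existing state immutably, then apply them
# in one mutable pass. Diff names (MoveTo, AddScore) are illustrative.

def differential(state):
    """Read the state without modifying it and emit a list of diffs."""
    diffs = []
    for name, unit in state["units"].items():
        diffs.append(("MoveTo", name, unit["x"] + unit["vx"]))
    diffs.append(("AddScore", 1))
    return diffs

def update(state, diffs):
    """Apply diffs mutably; a real engine might first check for
    conflicts (e.g. two MoveTo diffs targeting the same cell)."""
    for d in diffs:
        if d[0] == "MoveTo":
            _, name, x = d
            state["units"][name]["x"] = x
        elif d[0] == "AddScore":
            state["score"] += d[1]
    return state

state = {"units": {"u1": {"x": 0, "vx": 2}}, "score": 0}
state = update(state, differential(state))
```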
Is there a reason you specified partial derivative with respect to time? Why not just d/dt(state)?
I've been tempted lately to model application state as the integral over time of all events (deltas) that have occurred. For example, imagine a simple game state:
    type GameState = {
      x: float; // x coordinate
      t: float; // current time
      v: float; // velocity
    }
If initial game state =
    { x: 0, t: 0, v: 0 };
and you have some events (deltas):
    [{t: 1, v: 1}, {t: 2, v: -1 /* this is deltaV, so back to v: 0 */}, {t: 3, v: 2}, {t: 5, v: -2}]
You could "sum" them up (integrate over time) and get a game state of x = 5, v = 0 where t >= 5.
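A rough Python sketch of that summation, using the events above (the function shape is an assumption; x accumulates as the integral of v, and v as the running sum of the deltas):

```python
# "Integrate" a list of velocity-delta events up to time `until`.
# Each event carries a *delta* applied to v at time t; between events
# the position coasts at the current velocity.

def integrate(events, until):
    x, v, t = 0.0, 0.0, 0.0
    for ev in sorted(events, key=lambda e: e["t"]):
        if ev["t"] > until:
            break
        x += v * (ev["t"] - t)  # coast at current velocity until the event
        v += ev["v"]            # apply the velocity delta
        t = ev["t"]
    x += v * (until - t)        # coast from the last event to `until`
    return x, v

events = [{"t": 1, "v": 1}, {"t": 2, "v": -1},
          {"t": 3, "v": 2}, {"t": 5, "v": -2}]
x, v = integrate(events, until=5)
# matches the x = 5, v = 0 worked out above
```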
I've found this approach kinda sucks once you add things like collision detection. Your application would have to emit velocity deltas whenever objects collide. If you've got a point bouncing around a box with perfectly elastic collisions, then over time you'd have an infinite number of these collision/velocity-delta events. And as soon as new events arrive (say, your player logs out), all your precomputed collision events are invalidated. So the traditional update loop that simulates each tick as it occurs seems to work best.
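For contrast, the tick-based version of the bouncing point is trivial, since each collision is resolved in the tick where it happens instead of being predicted ahead of time (step size and box bounds here are arbitrary):

```python
# Fixed-timestep loop: no precomputed bounce events; each reflection
# is handled in the tick where the point crosses a wall.

def simulate(x, v, ticks, dt=0.1, lo=0.0, hi=1.0):
    for _ in range(ticks):
        x += v * dt
        if x < lo:            # reflect off the left wall
            x, v = lo + (lo - x), -v
        elif x > hi:          # reflect off the right wall
            x, v = hi - (x - hi), -v
    return x, v

x, v = simulate(x=0.5, v=1.0, ticks=100)
```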
Is there a more general way of thinking about this integration? Maybe with respect to another variable? Perhaps it would address this problem I'm having where I feel forced to quantize my game state onto ticks.
A related idea—representing state as pseudo-continuously varying values—is Conal Elliott's original formulation of functional reactive programming in the context of animation [1].