Raising the Stakes
Now that we've got a basic understanding of how to structure logic with Depends, let's start ramping up the complexity.
We can use the special DiagnosticsVisitor and resolve to identify when nodes have their internal values calculated.
let a = InputNode::new(SomeNumber { value: 1 });
let b = InputNode::new(SomeNumber { value: 2 });
let c = InputNode::new(SomeNumber { value: 3 });
let d = InputNode::new(SomeNumber { value: 4 });
let e = InputNode::new(SomeNumber { value: 2 });
let a_times_b = DerivedNode::new(
TwoNumbers::init(Rc::clone(&a), Rc::clone(&b)),
Multiply,
SomeNumber::default(),
);
let d_minus_c = DerivedNode::new(
TwoNumbers::init(Rc::clone(&d), Rc::clone(&c)),
Subtract,
SomeNumber::default(),
);
let d_squared = DerivedNode::new(
Dependency::new(Rc::clone(&d)),
Square,
SomeNumber::default(),
);
let e_squared = DerivedNode::new(
Dependency::new(Rc::clone(&e)),
Square,
SomeNumber::default(),
);
let a_times_b_plus_c_minus_d = DerivedNode::new(
TwoNumbers::init(Rc::clone(&a_times_b), Rc::clone(&d_minus_c)),
Add,
SomeNumber::default(),
);
let times_e_squared = DerivedNode::new(
TwoNumbers::init(Rc::clone(&a_times_b_plus_c_minus_d), Rc::clone(&e_squared)),
Multiply,
SomeNumber::default(),
);
let minus_d_squared = DerivedNode::new(
TwoNumbers::init(Rc::clone(&times_e_squared), Rc::clone(&d_squared)),
// Check out the `examples` directory to see the implementation of
// these new operations.
Subtract,
SomeNumber::default(),
);
let cube_and_change_type = DerivedNode::new(
Dependency::new(Rc::clone(&minus_d_squared)),
Cube,
// The graph can be constructed from all sorts of types.
AnotherNumber::default(),
);
// This visitor will track how many derived nodes have their input
// recalculated.
let mut visitor = DiagnosticsVisitor::new();
let output = cube_and_change_type.resolve(&mut visitor).unwrap();
assert_eq!(output.value, -64);
// We calculated all derived nodes.
assert_eq!(visitor.recalculated.len(), 8);
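Before moving on, it's worth confirming the expected value by hand. The following is plain Rust arithmetic, independent of the library, tracing the same operations the graph performs:

```rust
fn main() {
    let (a, b, c, d, e) = (1i64, 2, 3, 4, 2);
    let a_times_b = a * b; // 1 * 2 = 2
    let d_minus_c = d - c; // 4 - 3 = 1
    let sum = a_times_b + d_minus_c; // 2 + 1 = 3
    let times_e_squared = sum * e * e; // 3 * 4 = 12
    let minus_d_squared = times_e_squared - d * d; // 12 - 16 = -4
    // Cubing -4 gives the -64 asserted above.
    assert_eq!(minus_d_squared.pow(3), -64);
}
```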
We've now created this pretty gnarly graph:
Incremental Computation
Let's demonstrate incremental computation in action. Suppose now we wanted to update the value of node e.
// Since we didn't `resolve_root` last time, clear the visitor manually.
visitor.clear();
// Updating E will only update the nodes which depend on it.
e.update(3).unwrap();
// Resolve our root node.
let output = cube_and_change_type.resolve(&mut visitor).unwrap();
assert_eq!(output.value, 1331);
// We've _visited_ all nodes.
assert_eq!(visitor.visited.len(), 13);
// But only _recomputed_ the ones which are dependent on `e`.
// (Input nodes do not count towards this total).
assert_eq!(visitor.recalculated.len(), 4);
So, only 4 derived nodes were recomputed that time, saving us from all those costly math operations! A graphical representation of how this looks is below. The red nodes were dirty or recomputed, the amber nodes provided cached values to their dependents, and the green nodes were unchanged.
What would happen if we were to swap the values of a and b?
// Clear the visitor manually again before resolving.
visitor.clear();
// Let's swap the values of `a` and `b`.
a.update(2).unwrap();
b.update(1).unwrap();
// The end result hasn't changed. 1 * 2 == 2 * 1.
let output = cube_and_change_type.resolve(&mut visitor).unwrap();
assert_eq!(output.value, 1331);
// Only 1 node was recalculated this time.
assert_eq!(visitor.recalculated.len(), 1);
We can see that only a single node had to be recalculated this time. Although the node multiplying a and b was recalculated, its hash value remained the same. As a result, its dependent nodes did not consider their dependencies dirty and returned cached values.
Organizing nodes to effectively cache values of nodes further up the dependency path can lead to significant performance gains, particularly in large dependency graphs with costly calculations.
A graphical representation of this is below.
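This early-cutoff behaviour hinges on output hashes: a recalculated node whose value is unchanged produces the same hash, so its dependents stay clean. A minimal sketch in plain Rust (an illustration of the idea, not the library's internals):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

fn hash_of<T: Hash>(value: &T) -> u64 {
    let mut h = DefaultHasher::new();
    value.hash(&mut h);
    h.finish()
}

fn main() {
    // `a * b` before and after swapping the inputs.
    let before = 1i64 * 2;
    let after = 2i64 * 1;
    // The node was recalculated, but its output (and therefore its
    // hash) is identical, so dependents see a clean dependency and
    // return cached values.
    assert_eq!(hash_of(&before), hash_of(&after));
}
```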
Note that each Visitor will have its own Hasher, so using a different visitor will cause all nodes to be dirty. For best results, use a single visitor for the lifetime of a graph.
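Why does a fresh visitor dirty everything? Hash values are only comparable when produced by the same hasher state. The standard library's RandomState makes this concrete (an analogy, not the library's code): one state is self-consistent, but two independently keyed states are not comparable to each other.

```rust
use std::collections::hash_map::RandomState;
use std::hash::{BuildHasher, Hash, Hasher};

fn hash_with(state: &RandomState, value: i64) -> u64 {
    let mut h = state.build_hasher();
    value.hash(&mut h);
    h.finish()
}

fn main() {
    let (s1, s2) = (RandomState::new(), RandomState::new());
    // The same state always produces the same hash for a value...
    assert_eq!(hash_with(&s1, 42), hash_with(&s1, 42));
    // ...but two independently keyed states almost certainly do not,
    // which is why a hash recorded by one visitor means nothing to
    // another, and every node looks dirty to a new visitor.
    println!("{} vs {}", hash_with(&s1, 42), hash_with(&s2, 42));
}
```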