Lego with Algorithms

Part of the cognition series. Builds on The Parts Bin.

Cognitive architectures look like inventions. ACT-R. CLARION. DreamCoder. Cobweb. Each has a name, a lab, a body of literature. Each feels like a thing someone built from scratch.

They’re Lego. Assembled from algorithms that already had names, proofs, and provenance. The novelty lives in the coupling: which parts to connect, in what order, under what theory.

The parts

Four architectures, decomposed. Every row is a named algorithm with independent provenance. The Role column maps to the Parts Bin grid.

ACT-R — Anderson & Lebiere, 1998

| Component | What it does | Role | Borrowed from |
| --- | --- | --- | --- |
| Power-law decay | Tracks memory availability by recency and frequency | Remember | Ebbinghaus, 1885 |
| Spreading activation | Boosts contextually relevant memories | Attend | Quillian, 1967; Collins & Loftus, 1975 |
| Softmax / Luce choice rule | Converts activations into retrieval probabilities | Attend | Luce, 1959; Boltzmann, 1868 |
| Partial matching | Penalizes imperfect matches by similarity | Filter | Shepard, 1987 |
| Blended value | Averages across retrieved instances | Consolidate | Nadaraya-Watson, 1964 |
| Retrieval threshold | Gates out memories below a criterion | Filter | Neyman-Pearson, 1933 |

Original contribution: Rational analysis — a Bayesian argument for why these parts compose. Each term in the activation equation is a factor in the posterior odds that a memory will be needed.
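To make the coupling concrete, here is a minimal Python sketch of the first and third parts, using the standard ACT-R equations; the decay rate d = 0.5 is the conventional default, and the noise temperature tau is an illustrative value, not one taken from this post.

```python
import math

def base_level_activation(lags, d=0.5):
    """ACT-R base-level learning: B_i = ln(sum_j t_j^-d).
    `lags` are the times since each past use of the memory;
    d = 0.5 is the conventional decay rate."""
    return math.log(sum(t ** -d for t in lags))

def retrieval_probabilities(activations, tau=0.25):
    """Boltzmann / Luce softmax over activations.
    Smaller tau means more deterministic retrieval."""
    exps = [math.exp(a / tau) for a in activations]
    z = sum(exps)
    return [e / z for e in exps]
```

A memory used recently (small lags) gets a higher activation than the same memory used long ago, and the softmax turns those activations into a proper probability distribution over retrieval candidates.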

CLARION — Sun, Merrill & Peterson, 2001

| Component | What it does | Role | Borrowed from |
| --- | --- | --- | --- |
| Q-Learning + Backpropagation | Trains implicit (subsymbolic) level | Consolidate | Watkins, 1989; Rumelhart et al., 1986 |
| Boltzmann selection | Picks actions from Q-values | Attend | Luce, 1959 (same part as ACT-R) |
| Information gain | Evaluates rule quality for refinement | Filter | Lavrač & Džeroski, 1994 |
| Rule generalization / specialization | Traverses a generality lattice to broaden or narrow rules | Consolidate | Sun et al., 2001 |
| Weighted-sum integration | Combines implicit and explicit recommendations | Attend | Maclin & Shavlik, 1994 |
| Rule extraction (positivity gate) | Creates explicit rule when implicit level succeeds | Filter | Sun et al., 1996 |

Original contribution: The coupling — bottom-up extraction, top-down assimilation, and IG-based lattice traversal that refines rules across the implicit/explicit boundary.
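Two of the reused parts fit in a few lines of Python; this is a sketch under assumed signatures, where the temperature alpha and mixing weight w are illustrative parameters rather than CLARION's published values.

```python
import math
import random

def boltzmann_select(q_values, alpha=1.0, rng=random):
    """Boltzmann selection: sample an action index with probability
    proportional to exp(Q / alpha). alpha is a temperature; the
    parameter names here are illustrative, not CLARION's notation."""
    weights = [math.exp(q / alpha) for q in q_values]
    r = rng.random() * sum(weights)
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            return i
    return len(weights) - 1

def integrate(implicit, explicit, w=0.5):
    """Weighted-sum integration: blend the implicit and explicit
    levels' action recommendations with mixing weight w."""
    return [w * a + (1 - w) * b for a, b in zip(implicit, explicit)]
```

Note that `boltzmann_select` is literally the same part as ACT-R's softmax, applied to Q-values instead of activations; the table's "same part" claim is visible in the code.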

Cobweb — Fisher, 1987

| Component | What it does | Role | Borrowed from |
| --- | --- | --- | --- |
| Category utility | Scores competing tree operations | Attend | Gluck & Corter, 1985; Gini, 1912 |
| Incorporate | Routes instance down best branch | Filter | Kolodner, 1984; Lebowitz, 1987 |
| Create | Adds a new leaf node | Cache | Fisher, 1987 |
| Merge / Split | Restructures tree from accumulated evidence | Consolidate | Fisher, 1987 |
| Count update | Writes frequencies to each node | Remember | Frequency estimation (MLE) |

Original contribution: Category utility as the universal scoring function — one metric drives all four tree operators. Five roles in a single function call.
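The scoring function itself is compact. Here is a sketch of category utility for nominal attributes, following the Gluck & Corter formulation; the dict-of-attributes instance encoding is an assumption for illustration, not Cobweb's internal representation.

```python
def category_utility(clusters):
    """Category utility (Gluck & Corter, 1985) for nominal attributes.
    `clusters` is a list of clusters; each instance is a dict mapping
    attribute -> value. Returns the expected increase in correctly
    guessed attribute values, averaged over the K children."""
    all_items = [x for c in clusters for x in c]
    n = len(all_items)

    def sq_prob_sum(items):
        # sum over attribute-value pairs of P(A = v)^2 within `items`
        counts = {}
        for x in items:
            for a, v in x.items():
                counts[(a, v)] = counts.get((a, v), 0) + 1
        return sum((c / len(items)) ** 2 for c in counts.values())

    base = sq_prob_sum(all_items)
    k = len(clusters)
    return sum(len(c) / n * (sq_prob_sum(c) - base) for c in clusters) / k
```

A partition that sorts instances into internally uniform clusters scores higher than one that leaves them mixed, which is exactly what lets one function arbitrate between incorporate, create, merge, and split.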

DreamCoder — Ellis et al., PLDI 2021

| Component | What it does | Role | Borrowed from |
| --- | --- | --- | --- |
| Type-guided enumeration | Searches for programs that solve tasks | Attend | Ellis et al., 2018; Hindley, 1969 |
| Recognition model (GRU) | Prunes irrelevant productions per task | Filter | Helmholtz Machine, Dayan et al., 1995 |
| Helmholtz enumeration | Generates synthetic training data | Cache | Dayan et al., 1995 |
| Inside-Outside (EM for PCFGs) | Re-estimates grammar weights | Consolidate | Baker, 1979 |
| MDL scoring | Gates which fragments enter the library | Filter | Rissanen, 1978 |
| Version space compression | Compresses program library via inverse beta-reduction | Consolidate | Ellis et al., 2021 |

Original contribution: Version space compression — n-step inverse beta-reduction over hash-consed DAGs.
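The MDL gate in the table can be illustrated with a toy codelength model. The uniform per-token code below is a deliberate simplification of DreamCoder's grammar-weighted scoring; the function names and parameters are invented for this sketch.

```python
import math

def cost_bits(n_tokens, vocab):
    """Toy codelength: log2(vocab) bits per token (uniform code)."""
    return n_tokens * math.log2(vocab)

def mdl_gain(occurrences, frag_len, total_tokens, vocab):
    """Bits saved by abstracting a fragment of `frag_len` tokens that
    occurs `occurrences` times in a corpus of `total_tokens` tokens.
    Each occurrence collapses to one call to a new library symbol, so
    the vocabulary grows by one and the definition is stored once.
    Positive gain means the fragment earns its place in the library."""
    before = cost_bits(total_tokens, vocab)
    after_tokens = total_tokens - occurrences * (frag_len - 1) + frag_len
    after = cost_bits(after_tokens, vocab + 1)
    return before - after
```

A fragment used ten times pays for its own definition many times over; a fragment used once makes the description longer, so the gate rejects it.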

The pattern

Four architectures, twenty-three components, four original contributions.

Decomposition isn’t the whole story. Representation choices and training dynamics matter too. But when people call an architecture novel, the novelty almost always sits in the coupling. Anderson had rational analysis. Sun had dual-process theory. Fisher had category utility. Ellis had wake-sleep with library compression. Each invented a reason to connect these parts in this order.

The algorithms are cheap to look up. They have names. The composition is expensive to discover.

The Parts Bin

The Parts Bin makes the lookup explicit. Six roles, six data structures, thirty-six cells. Each cell lists the algorithms that satisfy that role’s contract over that structure.

If you’re building a cognitive architecture, you don’t need to invent algorithms. You need to:

  1. Diagnose which roles your architecture is missing.
  2. Look up which algorithms fill those roles in the grid.
  3. Wire them together — that’s your contribution.
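As a hypothetical sketch of step 2, the grid is just a lookup table. The role and structure names below are invented placeholders, and the cell contents are examples pulled from the tables above, not the actual thirty-six-cell Parts Bin.

```python
# Hypothetical grid: (role, data structure) -> candidate algorithms.
# Structure names ("vector", "trace", ...) are placeholders, not the
# Parts Bin's six actual structures.
PARTS_BIN = {
    ("Attend", "vector"): ["softmax / Luce choice rule", "Boltzmann selection"],
    ("Remember", "trace"): ["power-law decay", "count update"],
    ("Filter", "set"): ["retrieval threshold", "MDL scoring"],
    ("Consolidate", "tree"): ["merge / split"],
}

def lookup(role, structure):
    """Step 2: which algorithms fill this role over this structure?"""
    return PARTS_BIN.get((role, structure), [])
```

The point of the sketch: step 2 is a table lookup, not research. Steps 1 and 3 are where the work lives.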

The algorithms have names because someone already did the hard work. What’s left is the wiring. That’s the architecture. That’s yours to invent.