DYNAMIC THINKING

Oxford Series in Developmental Cognitive Neuroscience
Series Editor
Mark H. Johnson, Centre for Brain and Cognitive Development,
Birkbeck College, University of London, UK
Attention, Genes, and Development
Kim Cornish and John Wilding
Neuroconstructivism, Volume One: How the Brain Constructs Cognition
Denis Mareschal, Mark H. Johnson, Sylvain Sirois, Michael W. Spratling,
Michael S. C. Thomas, and Gert Westermann
Neuroconstructivism, Volume Two: Perspectives and Prospects
Edited by Denis Mareschal, Sylvain Sirois, Gert Westermann, and
Mark H. Johnson
Toward a Unified Theory of Development: Connectionism and Dynamic Systems
Theory Re-considered
Edited by John P. Spencer, Michael S. C. Thomas, and
James L. McClelland
Spatial Representation: From Gene to Mind
Barbara Landau and James E. Hoffman
Dynamic Thinking: A Primer on Dynamic Field Theory
Gregor Schöner, John P. Spencer, and the DFT Research Group

DYNAMIC THINKING
A Primer on Dynamic Field Theory
GREGOR SCHÖNER, JOHN P. SPENCER,
AND THE DFT RESEARCH GROUP

Oxford University Press is a department of the University of
Oxford. It furthers the University’s objective of excellence in research,
scholarship, and education by publishing worldwide.
Oxford New York
Auckland  Cape Town  Dar es Salaam  Hong Kong  Karachi
Kuala Lumpur Madrid Melbourne Mexico City Nairobi
New Delhi Shanghai Taipei Toronto
With offices in
Argentina Austria Brazil Chile Czech Republic France Greece
Guatemala Hungary Italy Japan Poland Portugal Singapore
South Korea Switzerland Thailand Turkey Ukraine Vietnam
Oxford is a registered trademark of Oxford University Press
in the UK and certain other countries.
Published in the United States of America by
Oxford University Press
198 Madison Avenue, New York, NY 10016
© Oxford University Press 2016
All rights reserved. No part of this publication may be reproduced, stored in
a retrieval system, or transmitted, in any form or by any means, without the prior
permission in writing of Oxford University Press, or as expressly permitted by law,
by license, or under terms agreed with the appropriate reproduction rights organization.
Inquiries concerning reproduction outside the scope of the above should be sent to the
Rights Department, Oxford University Press, at the address above.
You must not circulate this work in any other form
and you must impose this same condition on any acquirer.
Library of Congress Cataloging-in-Publication Data
Schöner, Gregor.
Dynamic thinking: a primer on dynamic field theory / Gregor Schöner,
John P. Spencer, DFT Research Group.
 pages cm.—(Oxford series in developmental cognitive neuroscience)
Includes bibliographical references and index.
ISBN 978–0–19–930056–3 (hardback)
1. Cognitive psychology.  2. Thought and thinking.  3. Neuropsychology.  I. Title.
BF201.S36 2015
153—dc23
2015012229
9 8 7 6 5 4 3 2 1
Printed in the United States of America
on acid-free paper

CONTENTS

Contributors
Abbreviations
General Introduction
  John P. Spencer and Gregor Schöner

PART 1: Foundations of Dynamic Field Theory
Introduction
  Gregor Schöner and John P. Spencer
1. Neural Dynamics
  Gregor Schöner, Hendrik Reimann, and Jonas Lins
2. Dynamic Field Theory: Foundations
  Gregor Schöner and Anne R. Schutte
3. Embedding Dynamic Field Theory in Neurophysiology
  Sebastian Schneegans, Jonas Lins, and Gregor Schöner
4. Embodied Neural Dynamics
  Gregor Schöner, Christian Faubel, Evelina Dineva, and Estela Bicho

PART 2: Integrating Lower-Level Perception-Action with Higher-Level Cognition
Introduction
  John P. Spencer and Gregor Schöner
5. Integration and Selection in Multidimensional Dynamic Fields
  Sebastian Schneegans, Jonas Lins, and John P. Spencer
6. Integrating Perception and Working Memory in a Three-Layer Dynamic Field Model
  Jeffrey S. Johnson and Vanessa R. Simmering
7. Sensory-Motor and Cognitive Transformation
  Sebastian Schneegans
8. Integrating “What” and “Where”: Visual Working Memory for Objects in a Scene
  Sebastian Schneegans, John P. Spencer, and Gregor Schöner
9. Dynamic Scene Representations and Autonomous Robotics
  Stephan K. U. Zibner and Christian Faubel

PART 3: Integrating Thinking over Multiple Timescales
Introduction
  John P. Spencer and Gregor Schöner
10. Developmental Dynamics: The Spatial Precision Hypothesis
  Vanessa R. Simmering and Anne R. Schutte
11. A Process View of Learning and Development in an Autonomous Exploratory System
  Sammy Perone and Joseph P. Ambrose
12. Grounding Word Learning in Space and Time
  Larissa K. Samuelson and Christian Faubel
13. The Emergence of Higher-Level Cognitive Flexibility: Dynamic Field Theory and Executive Function
  Aaron T. Buss, Timothy Wifall, and Eliot Hazeltine
14. Autonomous Sequence Generation in Dynamic Field Theory
  Yulia Sandamirskaya
15. Conclusions: A “How-to” Guide to Modeling with Dynamic Field Theory
  Joseph P. Ambrose, Sebastian Schneegans, Gregor Schöner, and John P. Spencer

Index

CONTRIBUTORS
Joseph P. Ambrose, Department of Mathematics,
University of Iowa, Iowa City, IA, USA
Estela Bicho, Departamento de Electrónica
Industrial, Universidade do Minho, Guimarães,
Portugal
Aaron T. Buss, Department of Psychology,
University of Tennessee, Knoxville, TN, USA
Evelina Dineva, Institut für
Lufttransportsysteme, Deutsches Zentrum für
Luft- und Raumfahrt (DLR), Hamburg, Germany
Christian Faubel, Laboratory for Experimental
Computer Science, Kunsthochschule für Medien,
Köln, Germany
Eliot Hazeltine, Department of Psychology,
University of Iowa, Iowa City, IA, USA
Jeffrey S. Johnson, Department of Psychology,
North Dakota State University, Fargo, ND, USA
Jonas Lins, Institut für Neuroinformatik,
Ruhr-Universität Bochum, Bochum, Germany
Sammy Perone, Institute of Child Development,
University of Minnesota, Minneapolis, MN, USA
Hendrik Reimann, Department of Kinesiology,
Temple University, Philadelphia, PA, USA
Larissa K. Samuelson, School of Psychology,
University of East Anglia, Norwich, United
Kingdom
Yulia Sandamirskaya, Institut für
Neuroinformatik, Universität Zürich/ETH
Zürich, Zürich, Switzerland
Sebastian Schneegans, Institut für
Neuroinformatik, Ruhr-Universität Bochum,
Bochum, Germany
Gregor Schöner, Institut für Neuroinformatik,
Ruhr-Universität Bochum, Bochum, Germany
Anne R. Schutte, Department of Psychology,
University of Nebraska, Lincoln, NE, USA
Vanessa R. Simmering, Department of
Psychology, University of Wisconsin, Madison,
WI, USA
John P. Spencer, School of Psychology,
University of East Anglia, Norwich, United
Kingdom
Timothy Wifall, Department of Psychology,
University of Iowa, Iowa City, IA, USA
Stephan K. U. Zibner, Institut für
Neuroinformatik, Ruhr-Universität Bochum,
Bochum, Germany

ABBREVIATIONS
CoS Condition of Satisfaction
DFT Dynamic Field Theory
DF Dynamic Field

GENERAL INTRODUCTION
JOHN P. SPENCER AND GREGOR SCHÖNER
This book describes a new theoretical approach—
dynamic field theory (DFT)—that explains how
people think and act.
DFT officially turned 20 years old in 2013.
Two decades earlier, in 1993, Gregor Schöner and
his colleagues published the first paper on DFT,
presenting a theory of how eye movements are
planned using dynamic fields (Kopecz, Engels,
& Schöner, 1993; Kopecz & Schöner, 1995;
Trappenberg, Dorris, Munoz, & Klein, 2001).
Since that time, DFT has been extended to a range
of topics including the planning of reaching move-
ments (Bastian, Riehle, Erlhagen, & Schöner,
1998; Bastian, Schöner, & Riehle, 2003; Erlhagen
& Schöner, 2002), the development of motor plan-
ning (Thelen, Schöner, Scheier, & Smith, 2001),
the perception of motion (Hock, Schöner, & Giese,
2003; Jancke, Erlhagen, Schöner, & Dinse, 2004),
the processes that underlie habituation in infancy
(Schöner & Thelen, 2006), the control of autono-
mous robots (Bicho, Mallet, & Schöner, 2000;
Schöner, Dose, & Engels, 1995), the processes
that underlie visuospatial cognition and spatial
language (Lipinski, Schneegans, Sandamirskaya,
Spencer, & Schöner, 2012; Lipinski, Spencer, &
Samuelson, 2009; Spencer, Simmering, Schutte,
& Schöner, 2007), the development of visuospa-
tial cognition (Simmering, Schutte, & Spencer,
2008), the processes that underlie visual work-
ing memory and change detection (Johnson,
Spencer, & Schöner, 2009), the fast learning of
object labels and other aspects of word learn-
ing (Faubel & Schöner, 2008; Samuelson, Smith,
Perry, & Spencer, 2011), the process of imitation
(Erlhagen, Mukovskiy, & Bicho, 2006), the devel-
opment of executive function (Buss & Spencer,
2014), and sequence learning (Sandamirskaya &
Schöner, 2010).
This list is meant to establish that DFT has a
track record for providing neural process accounts
for a broad swath of behaviors and cognitive abili-
ties. The list also suggests, however, that a tightly
knit group has been driving DFT forward. We are
keenly aware of barriers that researchers encoun-
ter when trying to use the concepts, measures, and
modeling tools of DFT. Some of these barriers have
to do with mathematical modeling—DFT can be
quite technical, and some of the detail is difficult
to understand from journal articles. Another bar-
rier lies at the conceptual level. In DFT we make
a number of conceptual commitments that are not
always obvious when we move from one domain to
another. For instance, the origins of DFT in motor
control have led to the conceptual commitment
that functional states of nervous systems must have
stability properties—the capacity to resist change
and counteract perturbations from, for instance,
the peripheral motor system. When we stick to that
same conceptual commitment in accounting for
serial order or object representation, it is not always
obvious to readers why we are doing that. As we
state in this book, we are convinced that this inte-
gration of principles across domains is fundamental
to creating a unified theory of cognition.
In response to the perceived entry barrier, we
have offered tutorials, one-day workshops, and
yearly summer schools, often with a “hands-on”
component in which participants could practice
using the tool of DFT. This book is the result of
such efforts and represents the culmination of our
drive to develop a new theoretical language for
understanding the dynamics of cognition. Before
we dive into the practicalities of how to approach
this book, we will position DFT in a brief history of
ideas to make explicit the theoretical commitments
we make in this book.

TOWARD A UNIFIED
THEORY OF COGNITIVE
DYNAMICS: A BRIEF HISTORY
OF IDEAS
The central concepts of DFT emerged in a scien-
tific context that we now briefly review here. For
a more detailed account, see the following papers
(Sandamirskaya, Zibner, Schneegans, & Schöner,
2013; Schöner, 2008, 2009, 2014; Spencer, Austin,
& Schutte, 2012; Spencer & Schöner, 2003).
The most important historical root of DFT lies
in the motor domain. Inspired by functional muscle
models (Feldman, 1966), researchers working with
Michael Turvey and Scott Kelso proposed dynami-
cal systems as a metaphor for how movements are
generated. Like a damped pendulum moves to its
equilibrium position, a set of virtual forces would
guide human movement to movement goals (Kelso,
Holt, Kugler, & Turvey, 1980). Coordination would
arise by coupling the virtual force fields govern-
ing the effectors. This metaphor resonated with
Gestalt ideas and drew on analogies with principles
of self-organization and pattern formation in phys-
ics (Haken, 1983).
The metaphor acquired new force with the
discovery of an instability in movement coordina-
tion (Kelso, 1984; Schöner & Kelso, 1988). When
humans move two fingers (or hands, or legs) rhyth-
mically, two coordination patterns are commonly
observed: in-phase (homologous muscles activated
at the same time) and anti-phase (homologous
muscles alternating). These patterns are stable: If
the fingers are perturbed during rhythmic move-
ment, for example, they recover from perturba-
tions and maintain the original pattern. When the
frequency of rhythmic movement is increased,
however, the anti-phase pattern loses stabil-
ity: Recovery from perturbations becomes slow and
resistance to change breaks down. This ultimately
leads participants to switch to the in-phase pattern.
The significance of this discovery lay in the recog-
nition that stability is both necessary and sufficient for
a pattern of coordination to emerge. Without stabil-
ity, the pattern is lost. And the mechanisms that
stabilize a pattern against perturbations also bring
about the pattern in the first place. This demysti-
fied the notion of emergence and gave an opera-
tional and mathematically formalized foundation
to the dynamical systems metaphor. Moreover, the
emergence of patterns of behavior during learning
could be understood as the result of a change of
the dynamics of the system (Schöner, Zanone, &
Kelso, 1992).
It is not surprising, then, that this form of
the dynamical systems metaphor was most
influential in development, starting with motor
development (Thelen, 1995). Esther Thelen,
Linda Smith, and their colleagues developed
this metaphor into a comprehensive theory of
development (Smith & Thelen, 2003; Thelen
& Smith, 1994). Among the attractive features
of the metaphor was the notion that change of
dynamics during development could start from
different initial conditions, leading to individual
paths in development that converge on the same
developmental outcome (Spencer et al., 2006).
Moreover, because the environment may act as
one contribution to a behavioral dynamics, skills
may be softly assembled in different environ-
ments, accounting for context effects. And the
same behavioral pattern may result from differ-
ent configurations of dynamic contributions,
providing an understanding of multicausality in
development: Developmental achievements may
emerge from the joint effect of multiple factors
rather than depend on a single, critical factor
alone (Schöner, 2014).
In the motor domain, the dynamical systems
metaphor with stability as a central concept was
particularly intuitive because movement itself is
time-continuous and its resistance to perturba-
tions can be directly observed. But even early in
the development of these concepts a central issue
arose: What about cognition? Some proponents of
dynamical systems thinking turned to a strong
anti-representationalist stance (Van Gelder,
1998). The notion was that cognition could arise
“directly,” analogous to how behavior arises from
a direct coupling of perception and action in a
closed loop. This view has been argued among phi-
losophers but has had limited impact on researchers
working on human cognition.
We have gone the other route, by making rep-
resentational states a natural part of dynamical
systems thinking (see the argument in Spencer &
Schöner, 2003). The first part of this book lays out
these ideas. In a nutshell, not only the overt state of
a motor system but also the inner state of nervous
systems may evolve in time under the influence
of dynamical systems that now govern the neural
processes on which cognition is based. Attractors
of such neural dynamics are the functionally sig-
nificant states of cognitive processes. Their stabil-
ity enables neural states to stay in the attentional
foreground, to resist distractors, and to achieve the
goals of thinking.

This view of neural dynamics is not fundamen-
tally different from how connectionists approach
representation (Rumelhart, McClelland, & PDP
Research Group, 1986). But DFT adds new prin-
ciples to neural processing accounts, in particular,
the fundamental space–time continuity of repre-
sentations from which categories may emerge, the
emphasis on the stability of cognitive states, and the
emergence of qualitatively new cognitive functions
from instabilities of the underlying neural dynam-
ics. The consensus that emerged from a first con-
versation about the relationship between dynamical
systems thinking and connectionism is summa-
rized in an edited volume by Spencer, Thomas, and
McClelland (2009). Many readers might think that
the stronger emphasis on learning in connection-
ism sets it apart from DFT. In Part 3 of this book, we
show how learning is central to DFT as well.
The formal mathematical framework of DFT
built on prior work of brilliant theoreticians in the
neural and behavioral sciences. Neural dynam-
ics was developed as a language to describe the
fundamental principles of neural information pro-
cessing in neural networks by Stephen Grossberg,
beginning in the 1970s (Grossberg, 1970, 1980).
Wilson and Cowan (1973) introduced the concept
of neural fields as an approximate description of the
neural dynamics within the homogeneous layers
of cortex in which dendritic trees of neighboring
neurons strongly overlap. Amari’s (1977) math-
ematical analysis of this class of dynamic neural
field models provided the building blocks for DFT.
In Chapter 3 we will review this heritage and show
that DFT is closely linked to cortical neurophysiol-
ogy, although the neural dynamics in DFT reside
at a somewhat more abstract level than in these
classical works.
The move toward neural dynamics that took
place when the dynamical systems metaphor crys-
tallized into DFT resolved a second question: Is
dynamical systems thinking primarily descriptive? Or
does it speak to how neural mechanisms bring about
cognition and behavior? The early work on coordi-
nation dynamics tended toward abstract descrip-
tions of patterns (Kelso, 1995). The quest for the
“order parameter” that would best characterize the
self-organization of coordination patterns looked
explicitly for invariant levels of description. For
instance, the relative phase between rhythmically
moving limbs was considered a good “pattern”
variable, because different values of that variable
describe different patterns. The stability of coordi-
nation could be assessed by observing fluctuations
of relative phase or its time course following a per-
turbation (Schöner & Kelso, 1988). In this early
work, there was a certain resistance to linking these
abstract descriptions to neural mechanism.
In contrast, the neural dynamics that form the
foundation of this book are intimately connected to
neural processes at the population level. Chapter 3
discusses this link in detail. As a result, models
within the framework of DFT are neural process
accounts that have been tied, in a number of cases,
directly to the experimental observation of neural
processing (Bastian et al., 2003; Jancke et al., 1999;
Markounikau, Igel, Grinvald, & Jancke, 2010).
Moreover, computational neuroscientists have
derived the neural dynamics modeled within DFT
from biophysically more detailed models of neural
processing, a link we discuss in Chapter 2. DFT has,
therefore, clearly moved beyond mere description.
Convergent evidence that dynamical systems
thinking can achieve more than description comes
from the fact that autonomous robots can be built
that act out the behavior modeled by dynami-
cal systems (Schöner et al., 1995). In these robots,
there are no information-processing algorithms, no
“if-then-else” statements. Behavior flows out of the
time- and space-continuous differential equations
that make up DFT. The robots have very simple
interfaces to sensors and to effectors (Bicho et al.,
2000). Chapter 4 uses robotic examples to show
how neural and behavioral dynamics work together
to generate overt behavior. Throughout this book
we use robotic demonstrations of DF models to
illustrate principles, to probe the demands that DF
models make of the sensory and motor interfaces,
and to demonstrate that a process account has been
provided. Robotic models are useful beyond such
demonstration as sources of ideas and new ques-
tions. Sometimes, building a robot demonstration
has enhanced our confidence in the principles of
DFT. For instance, in one early demonstration
we used a computational shortcut to read out the
state of a dynamic field. This shortcut failed mis-
erably: The robot switched among targets and did
not achieve its task. This revealed that we needed
to think more deeply about how neural dynamics are
coupled to behavioral dynamics—a key innovation
we highlight in Chapter 4.
The robotic demonstrations also highlight
that DFT provides an embodied account, that is,
neural processes are grounded in sensory and
motor processes that are anchored on a body situ-
ated in a physical environment. In development,
the sensory-motor origins of cognition have been
a theme since Piaget. The research program of
dynamical systems thinking in development was
based on the notion that cognition arises in the
here and now, as an infant or child experiences the
world through the body (Smith & Thelen, 1993;
Thelen & Smith, 1994). DFT has been critical in
showing how cognitive properties may emerge
from representational states grounded in the sen-
sory and motor domains (Simmering et al., 2008;
Smith & Thelen, 2003; Thelen et al., 2001). In fact,
DFT has been proposed more generally as the theo-
retical language for understanding embodied cognition
(Schneegans & Schöner, 2008).
How is DFT related to the line of work pos-
tulating that embodiment plays a central role in
adult cognitive activity? In that work, researchers
have found that higher cognition such as under-
standing language interacts with the motor system
(Glenberg, 1997). For example, if participants were
asked to move their hand toward their body when
a sentence made sense, they were slower when the
sentence implied a movement away from the body
(as in “close the drawer”; see Glenberg & Kaschak,
2002). The embodiment stance is the hypothesis
that all higher cognition builds on a sensory-motor
basis by using mappings across domains and modal-
ities, mental simulation, and embodied schemata
(Barsalou, 1999, 2008; Damasio & Damasio, 1994;
Feldman & Narayanan, 2004; Spivey, 2007). There
is debate about the exact extent of these claims
(Wilson, 2002). For instance, some researchers
claim that embodiment only goes so far and that
“real” higher-level cognition requires concepts such
as symbol manipulation. We contend that dynami-
cal systems principles, space–time continuity,
stability, and emergence of new states from insta-
bilities, are valid all the way from motor behavior
to higher cognition. The organization of this book
follows this trajectory, with strong grounding of
the theory in the sensory-motor domain in Part 1 to
increasingly abstract forms of cognitive processing
in Parts 2 and 3.
This positioning of DFT relative to the embodi-
ment stance provides a concrete answer to the ques-
tion of how much DFT is a “motor” theory. The
answer is that we are pursuing a general theory that
spans perception, action, and cognition. This is
reflected in the broad number of topics covered in
this book, from low-level psychophysics and motor
control to word learning, spatial language, execu-
tive control, and sequence generation. Ultimately,
integration across these domains is a fundamental
challenge. Throughout this book, we show how
DFT addresses integration directly by carrying
forward a set of common principles as we move
from lower- to higher-level cognition. This is why
the commitment to these principles is maintained
across domains, no matter how obvious their neces-
sity is in each case.
We discuss this voyage of DFT from the motor
to the higher cognitive domains. This voyage is not
over yet and the strong embodiment hypothesis
has not been fully tested. But we are firmly com-
mitted to that course, working toward accounts of
intentionality (Sandamirskaya et al., 2013), mental
hypothesis testing (Richter, Lins, Schneegans, &
Schöner, 2014), autonomous learning (Perone &
Spencer, 2013), and other elements of higher
cognition.
HOW TO APPROACH
THE BOOK
The goal of this book is to make the concepts and
methods of DFT accessible to you, the reader.
Thus, we have developed a book that is tutorial in
nature, designed to walk you through the theory
step-by-step. Even though each chapter is written
by different authors, they are all members of the
DFT research group and the writing and editing
was highly coordinated. Importantly, we wrote
the book to be read in order—each chapter builds
on the previous one. We have used this book sev-
eral times in graduate courses (thanks to all of the
graduate students who suffered through earlier
drafts!). One of them remarked that the book is a
bit like a math or physics textbook—if you don’t
understand the content from Chapter 1, you’ll be
lost when reading Chapter 2. While we honestly
don’t think you’ll be lost if you jump around a
bit, it is certainly the case that the knowledge you
build (or fail to build) as you read each chapter
will impact how you approach the chapters that
follow.
In truth, though, we don’t just want to make the
concepts of DFT accessible—we want to do our
job so well that you are seduced into thinking and
working with DFT. To facilitate that, we (actually,
Sebastian Schneegans) have also developed a new
(and really cool) simulation environment called
COSIVINA (Compose, Simulate, and Visualize
Neurodynamic Architectures), an open-source
toolbox for MATLAB. Exercises at the end of each
chapter give you a chance to play with dynamic
fields, to test your understanding of what DFT is
all about, to get your hands dirty, and, ultimately, to
learn how to create and innovate on your own. We
encourage you to do these exercises. The interactive
simulators are fun and relatively easy to get used to,
and they can go a long way toward revealing what
you understand and what you don’t. There’s noth-
ing like learning by doing.
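If you would like a taste before installing anything, the following is a minimal sketch of the kind of simulation the exercises involve. COSIVINA itself is a MATLAB toolbox; this sketch uses Python instead, and it integrates a one-dimensional dynamic field of the Amari (1977) type cited in the references. All parameter values are illustrative choices for this sketch, not COSIVINA defaults.

```python
import numpy as np

# Minimal sketch of a one-dimensional dynamic field (Amari, 1977):
#   tau * du/dt = -u + h + s(x) + integral of w(x - x') * sigmoid(u(x')) dx'
# Parameter values below are illustrative, not COSIVINA defaults.

n = 101                        # number of field sites
x = np.arange(n)
tau, h, dt = 10.0, -5.0, 1.0   # time constant, resting level, time step

def gauss(center, width, amp):
    return amp * np.exp(-0.5 * ((x - center) / width) ** 2)

def sigmoid(u, beta=4.0):
    return 1.0 / (1.0 + np.exp(-beta * u))

# Interaction kernel: local excitation with broader inhibition
kernel = gauss(n // 2, 5.0, 1.0) - gauss(n // 2, 12.0, 0.5)

u = h * np.ones(n)             # field starts at its negative resting level
s = gauss(40, 5.0, 6.0)        # localized external input

for _ in range(200):
    interaction = np.convolve(sigmoid(u), kernel, mode="same")
    u += (dt / tau) * (-u + h + s + interaction)

# A self-stabilized peak of activation has formed over the input location
print("peak location:", x[np.argmax(u)], " peak level:", round(u.max(), 2))
```

The interactive simulators in the exercises provide this kind of model with graphical controls for inputs and parameters, so you can watch peaks form, compete, and decay as you change them.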
The content of this book is supported on our
home page: www.dynamicfieldtheory.org. There,
you can find all the interactive simulators for the
exercises, source code, and papers. We have also
set up this website so users can interact, creating
an online community of researchers actively using
DFT. Thus, if you have a question, get stuck on an
exercise, or just want to rave about the awesome-
ness of DFT, you have ready access to a group of
people who will support your every need.
A strong commitment of DFT is to a tight inter-
face with behavior. We contend that the details of
behavior—the details of data—are fundamental
to the strength of any theory; thus, within DFT we
often obsess about such details. In multiple chap-
ters, you will encounter detailed discussions of
means and standard deviations, an obsessive focus
on small shifts in response biases across conditions,
excitement about subtle differences in the landing
position of an eye movement. These subtle details
matter because they often reflect unique “signa-
tures” of the type of processing taking place within
dynamic fields.
Perhaps due to this attention to quantita-
tive detail, some of our students didn’t appreciate
when they started reading the book that this entire
book—from start to finish—is about one unified
theory of how people think. Initially, they saw chap-
ters on different topics and assumed this would be
a survey about a general framework—a bit of visual
cognition over here, some word learning over there,
and some robotics thrown in for good measure.
What surprised them was that we actually try to
put all the pieces together. In the end, this creates
a sense of integration—unity—that is often lacking
in our respective fields.
This is no accident. DFT is not a modeling
approach (although we build models) or a tool for
explaining isolated findings. Rather, DFT is a sys-
tem of interconnected ideas that can be expressed
formally using a particular class of mathematical
equations (which we can simulate in the form of a
computer model). Our goal is to create a theory that
brings together so many interconnected facts and
explained observations that it becomes much more
than a model of X or Y—it becomes a unified theory
of cognition.
And with that, let the fun begin.
REFERENCES
Amari, S. (1977). Dynamics of pattern formation in lateral-inhibition type neural fields. Biological Cybernetics, 27(2), 77–87.
Barsalou, L. W. (1999). Perceptual symbol systems. Behavioral and Brain Sciences, 22, 577–660.
Barsalou, L. W. (2008). Grounded cognition. Annual Review of Psychology, 59, 617–645.
Bastian, A., Riehle, A., Erlhagen, W., & Schöner, G. (1998). Prior information preshapes the population representation of movement direction in motor cortex. NeuroReport, 9, 315–319.
Bastian, A., Schöner, G., & Riehle, A. (2003). Preshaping and continuous evolution of motor cortical representations during movement preparation. European Journal of Neuroscience, 18, 2047–2058.
Bicho, E., Mallet, P., & Schöner, G. (2000). Target representation on an autonomous vehicle with low-level sensors. International Journal of Robotics Research, 19, 424–447.
Buss, A. T., & Spencer, J. P. (2014). The emergent executive: A dynamic field theory of the development of executive function. Monographs of the Society for Research in Child Development, 79(2), 1–103.
Damasio, A. R., & Damasio, H. (1994). Cortical systems for retrieval of concrete knowledge: The convergence zone framework. In C. Koch & J. L. Davis (Eds.), Large-scale neuronal theories of the brain (pp. 61–74). Cambridge, MA: MIT Press.
Erlhagen, W., Mukovskiy, A., & Bicho, E. (2006). A dynamic model for action understanding and goal-directed imitation. Brain Research, 1083, 174–188.
Erlhagen, W., & Schöner, G. (2002). Dynamic field theory of movement preparation. Psychological Review, 109, 545–572.
Faubel, C., & Schöner, G. (2008). Learning to recognize objects on the fly: A neurally based dynamic field approach. Neural Networks, 21, 562–576.
Feldman, A. G. (1966). Functional tuning of the nervous system during control of movement or maintenance of a steady posture. III. Mechanographic analysis of the execution by man of the simplest motor acts. Biofizika, 11, 667–675.
Feldman, J., & Narayanan, S. (2004). Embodied meaning in a neural theory of language. Brain and Language, 89, 385–392.
Glenberg, A. M. (1997). What memory is for. Behavioral and Brain Sciences, 20, 1–55.
Glenberg, A. M., & Kaschak, M. P. (2002). Grounding language in action. Psychonomic Bulletin & Review, 9, 558–565.
Grossberg, S. (1970). Some networks that can learn, remember, and reproduce any number of complicated space-time patterns, II. Studies in Applied Mathematics, XLIX(2), 135–166.
Grossberg, S. (1980). Biological competition: Decision rules, pattern formation, and oscillations. Proceedings of the National Academy of Sciences USA, 77, 2338–2342.
Haken, H. (1983). Synergetics—An introduction (3rd ed.). Berlin: Springer Verlag.
Hock, H. S., Schöner, G., & Giese, M. (2003). The dynamical foundations of motion pattern formation: Stability, selective adaptation, and perceptual continuity. Perception & Psychophysics, 65, 429–457.
Jancke, D., Erlhagen, W., Dinse, H. R., Akhavan, A. C., Giese, M., Steinhage, A., & Schöner, G. (1999). Parametric population representation of retinal location: Neuronal interaction dynamics in cat primary visual cortex. Journal of Neuroscience, 19, 9016–9028.
Jancke, D., Erlhagen, W., Schöner, G., & Dinse, H. R. (2004). Shorter latencies for motion trajectories than for flashes in population responses of cat primary visual cortex. Journal of Physiology, 556(3), 971–982.
Johnson, J. S., Spencer, J. P., & Schöner, G. (2009). A layered neural architecture for the consolidation, maintenance, and updating of representations in visual working memory. Brain Research, 1299, 17–32.
Kelso, J. A. S. (1984). Phase transitions and critical behavior in human bimanual coordination. American Journal of Physiology: Regulatory, Integrative and Comparative Physiology, 15, R1000–R1004.
Kelso, J. A. S. (1995). Dynamic patterns: The self-organization of brain and behavior. Cambridge, MA: MIT Press.
Kelso, J. A. S., Holt, K. G., Kugler, P. N., & Turvey, M. T. (1980). On the concept of coordinative structures as dissipative structures. II. Empirical lines of convergence. In G. E. Stelmach & J. Requin (Eds.), Tutorials in motor behavior (pp. 49–70). Amsterdam: North-Holland.
Kopecz, K., Engels, C., & Schöner, G. (1993). Dynamic field approach to target selection in gaze control. In S. Gielen & B. Kappen (Eds.), International Conference on Artificial Neural Networks, Amsterdam (pp. 96–101). Berlin: Springer Verlag.
Kopecz, K., & Schöner, G. (1995). Saccadic motor planning by integrating visual information and pre-information on neural, dynamic fields. Biological Cybernetics, 73, 49–60.
Lipinski, J., Schneegans, S., Sandamirskaya, Y., Spencer, J. P., & Schöner, G. (2012). A neuro-behavioral model of flexible spatial language behaviors. Journal of Experimental Psychology: Learning, Memory, and Cognition, 38, 1490–1511.
Lipinski, J., Spencer, J. P., & Samuelson, L. K. (2009). Corresponding delay-dependent biases in spatial language and spatial memory. Psychological Research, 74, 337–351.
Markounikau, V., Igel, C., Grinvald, A., & Jancke, D. (2010). A dynamic neural field model of mesoscopic cortical activity captured with voltage-sensitive dye imaging. PLoS Computational Biology, 6(9), e1000919.
Perone, S., & Spencer, J. P. (2013). Autonomous visual exploration creates developmental change in familiarity and novelty seeking behaviors. Frontiers in Psychology, 4, 648.
Richter, M., Lins, J., Schneegans, S., & Schöner, G. (2014). Autonomous neural dynamics to test hypotheses in a model of spatial language. In P. Bello, M. Guarini, M. McShane, & B. Scassellati (Eds.), Proceedings of the 36th Annual Conference of the Cognitive Science Society (pp. 2847–2852). Austin, TX: Cognitive Science Society.
Rumelhart, D. E., McClelland, J. L., & the PDP Research Group (Eds.). (1986). Parallel distributed processing, Volume 1: Foundations. Cambridge, MA: MIT Press.
Samuelson, L. K., Smith, L. B., Perry, L. K., & Spencer, J. P. (2011). Grounding word learning in space. PLoS One, 6(12), e28095.
Sandamirskaya, Y., & Schöner, G. (2010). An embodied account of serial order: How instabilities drive sequence generation. Neural Networks, 23, 1164–1179.
Sandamirskaya, Y., Zibner, S. K. U., Schneegans, S., & Schöner, G. (2013). Using dynamic field theory to extend the embodiment stance toward higher cognition. New Ideas in Psychology, 31, 322–339.
Schneegans, S., & Schöner, G. (2008). Dynamic field theory as a framework for understanding embodied cognition. In P. Calvo & T. Gomila (Eds.), Handbook of cognitive science: An embodied approach (pp. 241–271). New York: Elsevier.
Schöner, G. (2008). Dynamical systems approaches to cognition. In R. Sun (Ed.), The Cambridge handbook of computational psychology (pp. 101–126). New York: Cambridge University Press.
Schöner, G. (2009). Development as change of system dynamics: Stability, instability, and emergence. In J. P. Spencer, M. Thomas, & J. L. McClelland (Eds.), Toward a unified theory of development: Connectionism and dynamic systems theory re-considered (pp. 25–47). New York: Oxford University Press.
Schöner, G. (2014). Dynamical systems thinking: From metaphor to neural theory. In P. C. M. Molenaar, R. M. Lerner, & K. M. Newell (Eds.), Handbook of developmental systems theory and methodology (pp. 188–219). New York: Guilford Publications.
Schöner, G., Dose, M., & Engels, C. (1995). Dynamics of behavior: Theory and applications for autonomous robot architectures. Robotics and Autonomous Systems, 16, 213–245.
Schöner, G., & Kelso, J. A. S. (1988). Dynamic pattern generation in behavioral and neural systems. Science, 239, 1513–1520.
Schöner, G., & Thelen, E. (2006). Using dynamic field theory to rethink infant habituation. Psychological Review, 113, 273–299.
Schöner, G., Zanone, P. G., & Kelso, J. A. S. (1992). Learning as change of coordination dynamics: Theory and experiment. Journal of Motor Behavior, 24, 29–48.
Simmering, V., Schutte, A. R., & Spencer, J. P. (2008). Generalizing the dynamic field theory of spatial cognition across real and developmental time scales. Brain Research, 1202, 68–86.
Smith, L. B., & Thelen, E. (Eds.). (1993). A dynamic systems approach to development: Applications. Cambridge, MA: MIT Press.
Smith, L. B., & Thelen, E. (2003). Development as a dynamical system. Trends in Cognitive Sciences, 7, 343–348.
Spencer, J. P., Austin, A., & Schutte, A. R. (2012). Contributions of dynamic systems theory to cognitive development. Cognitive Development, 27, 401–418.
Spencer, J. P., Clearfield, M., Corbetta, D., Ulrich, B., Buchanan, P., & Schöner, G. (2006). Moving toward a grand theory of development: In memory of Esther Thelen. Child Development, 77, 1521–1538.
Spencer, J. P., & Schöner, G. (2003). Bridging the representational gap in the dynamical systems approach to development. Developmental Science, 6, 392–412.
Spencer, J. P., Simmering, V. R., Schutte, A. R., & Schöner, G. (2007). What does theoretical neuroscience have to offer the study of behavioral development? Insights from a dynamic field theory of spatial cognition. In J. M. Plumert & J. P. Spencer (Eds.), The emerging spatial mind (pp. 320–361). New York: Oxford University Press.
Spencer, J. P., Thomas, M. S. C., & McClelland, J. L. (Eds.). (2009). Toward a unified theory of development: Connectionism and dynamic systems theory re-considered. New York: Oxford University Press.
Spivey, M. J. (2007). The continuity of mind. Oxford, UK: Oxford University Press.
Thelen, E. (1995). Motor development: A new synthesis. American Psychologist, 50, 79–95.
Thelen, E., Schöner, G., Scheier, C., & Smith, L. (2001). The dynamics of embodiment: A field theory of infant perseverative reaching. Behavioral and Brain Sciences, 24, 1–33.
Thelen, E., & Smith, L. B. (1994). A dynamic systems approach to the development of cognition and action. Cambridge, MA: MIT Press.
Trappenberg, T. P., Dorris, M. C., Munoz, D. P., & Klein, R. M. (2001). A model of saccade initiation based on the competitive integration of exogenous and endogenous signals in the superior colliculus. Journal of Cognitive Neuroscience, 13(2), 256–271.
Van Gelder, T. (1998). The dynamical hypothesis in cognitive science. Behavioral and Brain Sciences, 21, 615–628; commentary by W. Bechtel, R. Beer, N. Braisby, R. Cooper, B. Franks, et al., 629–654; authors' response, 654–661.
Wilson, H. R., & Cowan, J. D. (1973). A mathematical theory of the functional dynamics of cortical and thalamic nervous tissue. Kybernetik, 13, 55–80.
Wilson, M. (2002). Six views of embodied cognition. Psychonomic Bulletin & Review, 9, 625–636.

PART 1
Foundations of Dynamic
Field Theory
Introduction
GREGOR SCHÖNER AND JOHN P. SPENCER
The goal of this book is to understand how per-
ception, action, and cognition come together
to produce behavior. Achieving this goal requires
that we uncover the laws of behavior and under-
stand the processes from which behavior emerges.
There is no question that human behavior is gen-
erated by the nervous system, so a process under-
standing must be achieved in neural terms.
What does it mean to base an account of behav-
ior on neural principles? Valentino Braitenberg
introduced the metaphor of a “vehicle” that beauti-
fully illustrates the challenges of creating a neural
account of behavior (Figure I.1). His vehicles are
simple organisms that have four elements, all of
which are required to generate behavior:
1. They have sensors. Sensors transform physi-
cal variables, such as light intensity, the loudness
of a sound, or the concentration of a chemical, into
internal variables, such as the firing rate of a sen-
sory neuron.
2. They have effectors. Effectors transform
internal neural variables into physical variables,
like the force or torque of a muscle, or, in the vehicle
metaphor, the turning rate of a wheel.
3. They have nervous systems. The nervous
system links the internal variables together. In the
simplest case of a feed-forward nervous system, the
internal variables that arise from the sensors are
transmitted by the nervous system to the effectors.
4. They have bodies, a component that is,
ironically, often overlooked. The body links the
sensors to the effectors in the physical world. When
the effectors drive the body around, the sensors
move along with the body and sensory information
changes. This, of course, has major consequences
for subsequent behavior.
One way of thinking about how behavior
emerges from nervous systems using this metaphor
is to assume that sensors provide information about
the environment, which is processed by the nervous
system and then fed to the motor systems. This is
a feed-forward view of the nervous system, and
invites thinking in information-processing terms.
In neuroscience and cognitive science, this per-
spective has been very helpful in characterizing the
organization of the nervous system and in explor-
ing how that organization is reflected in behavior.
For instance, influential concepts like “neural cod-
ing” emerged from this way of thinking.
In Figure I.1, we have illustrated the
feed-forward view. Here, the physical intensity of a
stimulus is picked up by a sensor and transformed
into an activation value using a particular type of
neural coding called “rate coding.” The idea is that
there is a one-to-one mapping from the physical
intensity value in the world to the activation value in
the nervous system, that is, to the firing rate induced
by stimulation of the sensory cell. Similarly, motor
systems can be characterized using a rate code pic-
ture where the activation value in the nervous sys-
tem is mapped to the force generated by a motor.
Critically, Braitenberg took his metaphor one
step farther by situating the vehicle in a structured
environment. Figure I.2 shows one of his vehicles
situated in an environment that has a stimulus off
to the left such that stimulation hits the two sensors
differentially. In particular, the left sensor receives
a higher intensity than the right sensor. If we
assume that this critter is wired up such that strong
stimulus intensity leads to low activation levels, this
situation will generate an orienting behavior, what
biologists have called “taxis”—the critter will turn
toward the input. Why does this happen? In this
vehicle, the nervous system is organized ipsilater-
ally, so the right motor receives input from acti-
vation associated with the right sensor. Because
strong stimulation leads to a lower firing rate, the
left motor will receive less activation than the right
motor. Consequently, the left motor will turn more
slowly than the right motor and the vehicle will
turn toward the source. As it approaches the source,
the intensities get stronger and the firing rates drop
perhaps to zero—the critter approaches the stimu-
lus and stops.
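A minimal simulation makes this narrative concrete. The sketch below, in Python, assumes a specific decreasing sensor characteristic, a linear motor characteristic, and illustrative gains; Braitenberg's argument only requires that the sensor characteristic be monotonically negative and the wiring ipsilateral.

```python
import numpy as np

# Sketch of Braitenberg's taxis vehicle; all numerical values are illustrative.
source = np.array([0.0, 0.0])      # location of the intensity source
pos = np.array([2.0, 1.5])         # vehicle position
heading = 0.0                      # heading direction in radians

def intensity(p):                  # physical intensity falls off with distance
    return 10.0 / (1.0 + np.sum((p - source) ** 2))

def sensor(i):                     # stronger stimulation -> lower firing rate
    return np.exp(-0.5 * i)

print("initial distance:", round(np.linalg.norm(pos - source), 2))
for step in range(400):
    # the two sensors sit to the left and right of the body axis
    lateral = 0.1 * np.array([-np.sin(heading), np.cos(heading)])
    a_left = sensor(intensity(pos + lateral))
    a_right = sensor(intensity(pos - lateral))
    # ipsilateral wiring: the wheel-speed difference turns the vehicle,
    # the mean wheel speed drives it forward
    heading += 4.0 * (a_right - a_left)
    speed = 0.5 * (a_left + a_right)
    pos = pos + 0.05 * speed * np.array([np.cos(heading), np.sin(heading)])
print("final distance:", round(np.linalg.norm(pos - source), 2))
```

Running the loop, the vehicle turns toward the source, approaches it, and slows to a near-stop as the activation levels drop toward zero, just as in the narrative above.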
The lesson from this narrative is that mean-
ingful behavior is not generated solely from a
feed-forward view of the nervous system; rather,
meaningful behavior emerges when an organism
is situated in an appropriately structured envi-
ronment. All four components of the vehicle are
important. Indeed, we should really think of the
structured environment as the fifth component of
the vehicle—without it, no meaningful behavior
will arise, as James J. Gibson has forcefully argued.
[FIGURE I.1: A Braitenberg vehicle consists of sensory systems, motor systems, a nervous system, and a body. The sensory characteristic shown at the top right describes the activation output by a sensor system as a function of the physical intensity to which the sensor is sensitive. The motor characteristic shown at the bottom right describes the movement generated by a motor system as a function of the activation received as input.]

[FIGURE I.2: The taxis vehicle of Braitenberg in an environment with a single source of intensity. The sensor characteristic is a monotonic negative function, the motor characteristic a monotonic positive function. This leads to taxis behavior in which the vehicle turns toward the source (curved arrow).]

When we put all five components together,
the resultant “vehicle–environment system” forms
something called a dynamical system. To see this,
the graph on the top of Figure I.3 collapses the
sensor and motor characteristics down into one direct
mapping from physical intensity to a motor param-
eter. The difference in intensity sensed between
the two sensors (the x-axis) determines the dif-
ference in movement generated by the two wheels
(the y-axis). If there is a larger intensity on the left
than on the right (i.e., a positive value along the
x-axis), this will lead to a smaller motor command
on the left than on the right. The vehicle will turn
to the left. Conversely, if there is a larger intensity
on the right than on the left (a negative value along
the x-axis), this will cause the vehicle to turn to the
right. These effects balance where the straight line
crosses zero: Here, there is zero difference in inten-
sity and no change in heading direction.
The differences in sensed intensity come
from how the vehicle is oriented relative to the
source: A positive difference left versus right cor-
responds to the vehicle heading to the right of
the source, a negative difference corresponds
to the vehicle heading to the left of the source.
The difference in movement generated by the two
wheels corresponds to different turning rates of
the vehicle—positive for turning right, negative for
turning left. Thus, the sensory-motor characteris-
tic shown on top in Figure I.3 can be transformed
into the functional dependence of the vehicle’s
turning rate on the vehicle’s heading shown at the
bottom of Figure I.3. Because the vehicle’s turning
rate is the rate of change of the vehicle’s heading
direction, this is a dynamical system that predicts
the vehicle’s future heading directions from its cur-
rent heading direction. If you do not know yet what
a dynamical system is and do not recognize this as
a dynamical system, don’t worry. We will provide
a gentle introduction to these notions in the chap-
ters that follow. In dynamical systems terms, the
zero crossing of this dynamics has special mean-
ing:  This point is called an attractor because the
vehicle’s heading direction converges to this value
over time from just about any initial heading. If the
vehicle heads toward the right of that zero cross-
ing, its turning rate is negative, so it will change
heading toward the left. Analogously, if the vehicle
heads toward the left of the zero crossing, its turn-
ing rate is positive, so it will change heading toward
the right.
Why do we care about this dynamical system?
Because it fully describes the laws of behavior for
this simple vehicle—behavior emerges from this
dynamical system as the vehicle moves around in
a given environment. In a different environment,
a different dynamical system arises. For instance,
the environment of Figure I.4 with two sources
leads to the dynamical system with two attractors
shown on the left that enables the vehicle to make
a selection decision, orienting to one source, ignor-
ing the other. The dynamical system captures the
closed loop in which the vehicle’s sensation drives
its action that, in turn, determines the vehicle’s sen-
sation. If we know the dynamical system, we can
fully characterize—and predict—how the vehicle
will behave. We build on this sense of understand-
ing behavior throughout the book.
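The sketch below illustrates this sense of prediction in Python. The Gaussian form of the turning-rate function is an assumption on our part, chosen only because it has the qualitative shape shown in Figures I.3 and I.4 (a zero-crossing with negative slope at each source direction); it is not derived from the vehicle.

```python
import numpy as np

# Heading-direction dynamics: turning rate as a function of heading, with a
# stable zero-crossing (attractor) at the direction of each source. The
# Gaussian form is an illustrative assumption with the right qualitative shape.

def turning_rate(phi, sources, strength=1.0, width=0.6):
    rate = 0.0
    for psi in sources:
        d = phi - psi
        rate += -strength * d * np.exp(-d ** 2 / (2.0 * width ** 2))
    return rate

def relax(phi, sources, dt=0.05, steps=400):
    for _ in range(steps):             # Euler integration of d(phi)/dt
        phi += dt * turning_rate(phi, sources)
    return round(phi, 2)

# One source at heading 0: nearby initial headings converge to the attractor
print(relax(0.8, [0.0]), relax(-0.5, [0.0]))
# Two sources at -1 and +1: the initial heading selects one of two attractors
print(relax(-0.2, [-1.0, 1.0]), relax(0.3, [-1.0, 1.0]))
```

With two sources, an unstable zero-crossing lies between the two attractors; initial headings on either side of it converge to different sources, which is exactly the selection decision illustrated in Figure I.4.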
Concretely, our goal is to create a theoretical
language that allows us to characterize the dynami-
cal system that underlies human cognition and
behavior. This dynamical system will specify the
processes from which behavior emerges. And this
dynamical system will be specified using neural
dynamics that can be coupled to sensory and motor
systems on a body that acts within a structured
environment.
[FIGURE I.3: Concatenating the two sensor and motor characteristics of the taxis vehicle of Figure I.2, and taking their difference, leads to the function shown on top. With a generic model of how intensity falls off as the heading direction deviates from the direction to the source (marked by the vertical line), this sensory-motor characteristic translates into the functional dependence of the vehicle's turning rate on its heading direction shown on bottom. This is a dynamical system of heading direction that has an attractor at the zero-crossing. Initial headings to the left or the right of this zero-crossing converge in time to the heading direction that points to the source (arrows).]

Chapter 1 begins building this dynamical sys-
tems view with an overview of neural dynamics.
We will see that to describe real nervous systems,
we must move beyond the simple feed-forward pic-
ture captured by Braitenberg’s vehicle. Instead, we
will use closed loops that take place entirely within
the nervous system to create internal attractor
states—neural patterns that make decisions, select
one input over another, and keep those decisions
active even when the input is removed (see right
side of Figure I.4).
In Chapter 2, we ask how such neural activation
variables come about. The Braitenberg picture sug-
gests that “neurons” must be intricately connected
to the sensory surface and the motor surface. In
simple vehicles, those surfaces are sampled by a
small number of sensor or motor cells, but in real
organisms, the sampling is so dense that we can
describe these “surfaces” in terms of continuous
spaces that are continuously coupled to the nervous
system. Dynamic fields are the result—dynamical
systems that reflect distributions of activation
over appropriate feature spaces, including physi-
cal space. This enables the nervous system to know
where a stimulus is located in space and to identify
its particular features (e.g., color, shape, and so on).
In Chapter 3, we review the neural foundations
of dynamic fields. We show that populations of neu-
rons in cortex and many subcortical structures can
be thought of using the concept of neural activation
fields. In fact, it will turn out that real neurons in
the brain operate as if they are smeared out over
activation fields.
Finally, in Chapter 4, we come back to behav-
ioral dynamics. We show how behavioral and neu-
ral dynamics can be combined within dynamic field
theory, linking perception, action, and cognition.
We demonstrate how this link enables embodied
cognition by implementing a behavioral and neural
dynamics on a robotic vehicle that orients toward
targets, which it detects, selects, and keeps in work-
ing memory.
[FIGURE I.4: Left: With two sources of intensity in the environment, the dynamical system from which orientation behavior emerges has two attractors (two zero-crossings toward which heading direction converges, as indicated by the arrows). The vehicle selects one of the two sources depending on its initial heading. Right: Nervous systems with internal loops have neural dynamics in which activation evolves toward neural attractors. The activation field shown on top is in a neural attractor in which a peak of activation is positioned over the heading direction of one source, while input from the other source is suppressed. The first three chapters of the book provide the concepts to understand this form of internal neural processing.]

1
Neural Dynamics
GREGOR SCHÖNER, HENDRIK REIMANN, AND JONAS LINS
As you are reading these lines, your nervous
system is engaged in three aspects of behavior:
perception, action, and cognition. Whenever your
gaze falls onto a particular part of the visual array,
your brain processes sensory information. Your
brain controls motor actions that actively shift your
eyes from fixation to fixation. And your brain makes
sense of the visual patterns, recognizing letters, link-
ing the recognition across multiple fixations, and
bringing about complex thoughts. Understanding
how the brain, together with the sensory and motor
periphery, brings about perception, action, and cog-
nition requires a theoretical language that reaches
across these different domains. A central theme
of this book is that the neural processes from which
behavior emerges evolve continuously in time and
are continuously linked to each other and to online
sensory information. These processes generate
graded signals that steer motor behavior. Continuity
in state and in time invites the language of dynami-
cal systems. This chapter will introduce the core ele-
ments of that language.
Within the language of dynamical systems,
stability is a critical concept. Stability is the capac-
ity to resist change in the face of variable inputs,
such as variation in sensory inputs or variation in
the signals received from other neural processes.
For instance, if you are looking at a picture in this
book, you may be able to focus on only that picture
and ignore distractions—the music you have run-
ning in the background, the cars passing by the
window next to you, the other pictures in the book.
The rich environments in which we are immersed
always provide alternatives to what we are currently
processing. Our rich behavioral repertoire always
provides alternatives to the motor action we are
currently engaged in. And inside our nervous sys-
tem, neural processes are richly interconnected and
inherently noisy. So for any particular neural pro-
cess to be effective and have an impact on behavior,
it needs to be stabilized against the influence of all
the other competing processes and against noisy
inputs. In this chapter, we will discuss how the
concept of stability can be formalized in the lan-
guage of dynamical systems, and how the theoreti-
cal models must be formulated so that stability is
assured within them.
Stability means resistance to change.
Cognition, however, requires change. Detecting
a stimulus, initiating an action, or selecting one of
multiple possible actions—all of these are decisions
that imply change: The neural state before the deci-
sion differs from the neural state after the decision
has been made. To understand how stable neural
processes allow for change, we need to understand
how neural states are released from stability, what
we will call a dynamic instability. This chapter will
discuss stability and the basic types of dynamic
instabilities that are central to dynamic field theory
(DFT) and recur throughout the book.
We begin with the concept of neural activation
to capture the inner state of the central nervous
system (CNS). First, we will talk about how activa-
tion can be linked to states of the world outside the
nervous system, that is, to sensory stimuli or motor
actions. Next, we will introduce the core notions
of neural dynamics. The premise that neural states
have stability properties narrows down the range
of dynamical models. We will look at the linear
dynamical model of a single activation variable
to introduce the basic notions of dynamical sys-
tems: fixed points and their stability. Even a single
activation variable may interact with itself. We will
introduce the notion of a sigmoid nonlinearity, and
find that self-excitation of an activation variable
may give rise to a first instability, the detection
instability that occurs in response to input. We will
then consider two activation variables that interact
inhibitorily, leading to competition. This simple
system may already make selection decisions.
When one of two inputs becomes dominant, a
second instability, the selection instability, occurs.
Excitatory and inhibitory interaction and the two
instabilities they produce constitute dynamic
fields, as we shall see in Chapter  2 and address
throughout this book.
ACTIVATION
How do neural processes supported within the
CNS generate behavior? To begin addressing this
question, we clearly need some way to characterize
different inner states of the CNS that lead to differ-
ent kinds of behavior. In choosing such a character-
ization we are selecting a level of description of the
CNS. In dynamic field theory, we hypothesize that
it is the activity of populations of neurons within
circumscribed brain areas that is tightly related to
behavioral patterns. Chapter 3 will operationalize
this hypothesis by constructing activation fields
from the firing rates of a population of neurons. In
Chapter 2 we will show how neural activation fields
and their dynamics may form neural representa-
tions of behavior. In this chapter, we will use the
concept of neural activation variables and look at the
simplest cases in which behavior is linked to only
one or two such activation variables. In Chapter 2
we shall find out that these activation variables
are best viewed as measures of the neural activity
within circumscribed subpopulations of neurons.
Localized hills or peaks of activation in neural acti-
vation fields represent these subpopulations.
A neural activation variable, the way we will use
it, is a real number that may be positive or negative.
One may think of an activation variable as akin to
the membrane potential of a neuron, so that the
probability of eliciting an action potential is larger
the higher the activation level is. The biophysics of
neurons are briefly reviewed in Box 1.1, but DFT is
BOX 1.1  BIOPHYSICS OF NEURONS
Here we provide a brief review of the main biophysical features of neurons to establish the
terminology used in this book. For textbook treatment of the biophysics of neurons see, for
instance, Kandel, Schwartz, and Jessell (2013) and, especially, Trappenberg (2010), where the
link between the biophysical and the population level is addressed in some depth.
Neurons are electrically active cells that maintain an electrical potential across their mem-
branes through ion pumps. Neurons have four functionally relevant components: (1) the axon,
which is the output structure of the neuron and carries traveling excitations of membrane
potential called spikes; (2) the soma, which is the core of the neural cell at which summation of
inputs may lead to spike generation; (3) the dendritic tree, which collects inputs in the form of
membrane potential changes that happen at synapses and transports these to the soma; and
(4) synapses, electrochemical connections between the axons of presynaptic cells and the den-
dritic tree of the postsynaptic cell.
Across the membrane of neurons, a difference in ion concentration between the intracellu-
lar and the extracellular space gives rise to an electrical potential, called the membrane potential.
The most relevant ions in this process are sodium and potassium, which are both positively
charged. Membrane channels are proteins in the membrane that are specifically permeable to
a particular type of ion, for instance, sodium or potassium. Membrane channels can be controlled electrochemically to change configuration such that they are either open or closed. Ion
pumps are another type of membrane protein that use chemical energy to actively transport
ions across the membrane against their electrochemical gradient.
When there is no input to the membrane, the membrane potential is typically around −70
millivolts (intracellular versus extracellular space), the so-called resting potential. In this state,
the sodium concentration is much higher on the outside of the axon than on its inside, while
the potassium concentration is much higher on the inside. The excess negative charge on the
inside stems from largely immobile negative ions and from a slight constant efflux of potas-
sium ions through a few open potassium channels, openings in the membrane through which
potassium ions can pass when an electrochemical control system configures them appropriately. However, this efflux is largely counterbalanced by active sodium-potassium pumps such
that the resting potential is maintained at −70 millivolts. Importantly, the great majority of
both sodium and potassium channels are closed while the membrane is at resting potential.
In most neurons in the higher nervous system of mammals, neural processing is based on
spikes. Spikes, also called action potentials, are brief, active changes of the membrane poten-
tial that travel along a neuron’s axon. A  spike is triggered when the potential at a patch of
axon membrane is increased above resting level (depolarized) to a certain threshold. This spike
threshold typically lies about 15 to 20 millivolts above the resting potential. The initial depolar-
ization is caused by a flow of ions from a neighboring area of the axon where an action potential
is already in progress. When the threshold is reached, voltage-gated sodium channels open.
This initiates an all-or-none cascade of events. First, a sodium influx occurs, depolarizing the
membrane further, which in turn leads to the opening of even more sodium channels. The
result of this positive feedback loop is a very quick depolarization far into the positive range,
typically peaking at around +40 millivolts. However, the sodium channels become inactivated
and thus impermeable shortly after this, preventing further depolarization. Concurrently,
voltage-gated potassium channels are opened, allowing potassium ions to flow out of the axon.
This potassium efflux repolarizes the membrane to slightly below the resting potential. This
causes the potassium channels to close again, and the original distribution of ions is then
restored by active ion pumps.
The total duration of a spike often amounts to little more than 1 millisecond. However, the
sodium channels cannot be activated for an additional time span of 1 or 2 milliseconds, the
so-called refractory period, which limits the maximally possible spike frequency to around 500
Hz (less in many neurons). Importantly, because the absolute height of the initial depolariza-
tion does not affect the course of events once the threshold has been reached, spikes are vir-
tually identical to each other in amplitude and duration, especially within the same neuron.
Finally, the propagation of spikes is based on currents along the length of the axon fiber,
between an already depolarized patch of membrane and a neighboring membrane patch still
at resting potential. These currents serve to depolarize the next axon patch to spike thresh-
old. Most axons are wrapped into so-called myelin sheaths, however, which consist of multiple layers of cell membrane, thus insulating the axon from extracellular space. The myelin
sheath is interrupted by gaps every millimeter or so, called nodes of Ranvier. Only at the nodes
of Ranvier can spikes form, while the current triggering the spike at the next node is con-
ducted within the axon. This so-called saltatory conduction (from Latin saltare, “to leap”) greatly
increases nerve conduction velocity.
The conditions at the cell body (soma) and at the dendrites of a neuron are similar to those
at axonal membranes. That is, the distribution of ions between the intracellular and extra-
cellular space determines the membrane potential, with sodium and potassium being most
relevant, and a resting potential of around −70 millivolts. There is an important difference,
though: Potentials at somatic and dendritic membranes are graded, which means that voltage can vary across a wide range without triggering an all-or-none chain of events like spikes
(although some neurons are capable of developing spikes at these membranes as well).
Changes in somatic or dendritic membrane potential are induced by synaptic activity.
Synapses are contact points between the axon of one neuron (the presynaptic neuron) and the
dendritic tree of another neuron (the postsynaptic neuron). When a spike in a presynaptic neu-
ron reaches the synaptic knob at the end of an axonal branch, neurotransmitters are released
into the synaptic cleft. The transmitter molecules diffuse toward the membrane of the post-
synaptic neuron, where they bind to receptors embedded in the membrane, triggering the
opening of ion channels. The binding works according to the key-lock principle, so that a given
type of neurotransmitter specifically activates a particular type of channel. Thus, synaptic
action can have different effects on the postsynaptic membrane potential, depending on which
transmitter is released by a synapse. Excitatory transmitters cause sodium channels to open.
The ensuing sodium influx depolarizes the postsynaptic membrane, inducing an excitatory
postsynaptic potential (EPSP). Inhibitory transmitters, by contrast, cause potassium channels to
open. The resulting potassium efflux hyperpolarizes the membrane; that is, it makes membrane potential more negative. This is known as the inhibitory postsynaptic potential (IPSP). Some
inhibitory transmitters cause the opening of chloride channels, allowing an influx of chloride
ions. As chloride ions are negatively charged, this likewise induces an IPSP. The size of the
postsynaptic potential depends on the firing rate of the presynaptic neuron in the form of a
sigmoidal function (although for many cortical neurons, the sigmoid saturates only for quite
high presynaptic firing rates that are outside the normal physiological range).
Once a postsynaptic potential has been induced, it spreads across the dendritic tree to the
cell soma, eventually reaching the axon hillock, the starting point of the axon where spikes are
generated. As is the case on the axon itself, a spike is generated if the membrane potential at the
axon hillock reaches a threshold some 20 millivolts above the resting potential (note, however,
that many neurons have spontaneous base firing rates). Hence, EPSPs increase the probability
of spiking, whereas IPSPs reduce it.
Temporal summation of synaptic input occurs when multiple spikes arrive at synapses in
quick succession, so that the postsynaptic potentials induced by the individual spikes overlap
in time and may thus add up to a larger change in membrane potential or, if they have differ-
ent signs, to cancel each other out. Through temporal summation, a postsynaptic cell may be
driven to a spiking threshold when an EPSP induced by an individual spike alone would not be sufficient to do this. Conversely, summation of IPSPs lowers spiking probability more than a single IPSP. Spatial summation refers to the same principle of summation at the point of spike genera-
tion when the EPSPs and IPSPs originate from different synapses across the dendritic tree. The
arrangement of synapses on the dendritic tree may bring about nontrivial computation, such
as shutting off the connections from a particular branch of the dendritic tree by an IPSP down-
stream from that branch (also called shunting inhibition; see Koch, 1999).
For postsynaptic potentials to be summed up, spikes need to arrive at an axon hillock within
a certain time window. The width of this time window depends on the time constant of the
postsynaptic membrane (which in turn depends on properties of the membrane itself as well
as on the state of the ion channels, determining membrane resistance and capacitance). The
membrane potential evolves according to a dynamics much like that postulated in DFT, with
a −u term determining an exponential decay toward the resting level. This can be observed in
the laboratory when an electrode is inserted through the membrane into the cell and a current
is injected. The timescale of this exponential is slower for cortical neurons than for neurons
on the periphery of the nervous system, making temporal summation more likely. Although
spikes last only a millisecond, the integration timescale of cortical neurons is sufficiently slow
to enable summation of incoming spikes that are separated by less than 10 milliseconds.
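To make this kind of leaky temporal integration concrete, here is a minimal numerical sketch in Python. All parameters (time constant, input drive, pulse duration, spike times) are illustrative assumptions rather than measured values; the point is only that inputs arriving within the integration window sum to a larger depolarization than the same inputs spread out in time.

```python
# Minimal sketch of temporal summation in a leaky membrane. Parameters are
# illustrative assumptions: tau * du/dt = -(u - u_rest) + drive.
tau, u_rest, dt = 10.0, -70.0, 0.05   # time constant (ms), resting level (mV), Euler step (ms)

def peak_depolarization(spike_times_ms, drive=40.0, pulse_ms=1.0, t_end=60.0):
    """Integrate the membrane with brief input pulses; return the peak above rest."""
    u, peak = u_rest, 0.0
    for i in range(int(t_end / dt)):
        t = i * dt
        on = any(0.0 <= t - s < pulse_ms for s in spike_times_ms)
        u += (dt / tau) * (-(u - u_rest) + (drive if on else 0.0))
        peak = max(peak, u - u_rest)
    return peak

# Inputs arriving in quick succession summate to a larger potential...
print("close spikes :", round(peak_depolarization([5.0, 8.0, 11.0]), 2), "mV above rest")
# ...than the same number of inputs spread far apart in time.
print("spread spikes:", round(peak_depolarization([5.0, 25.0, 45.0]), 2), "mV above rest")
```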
The neural dynamics at the population level that we model in DFT is characterized by this
slower timescale of summation (see Trappenberg, 2010, for a more detailed discussion of this
link). This neural dynamics of populations of neurons can be derived mathematically from the
biophysical dynamics of neurons under certain restrictive conditions in the so-called mean-field
approximation, in which the evolution of a population level “activation” is determined by the
summed asynchronous spiking activity of neurons in the population (Faugeras, Touboul, &
Cessac, 2009). In that derivation, the basic form of the neural dynamics on which DFT is based,
including the −u term, the resting level, and input as an additive contribution to the rate of
change, is inherited from the biophysical level of description but acquires a slower timescale
when the averaging across the population happens. Similarly, the sigmoidal threshold function
used at the population level is functionally analogous to the sigmoidal transfer function that
describes the postsynaptic potential as a function of the presynaptic firing rate. Making that
analogy concrete is not so easy, however, as these sigmoids link very different kinds of vari-
ables (spike rates to membrane potentials for the biophysical sigmoid, population activation to
its rate of change for population-level neural dynamics).
not intended to be biophysically detailed and the
analogy to neural firing is not critical to under-
standing DFT. In fact, we do not use actual units
of electrical potential to describe activation, nor do
we take into account the mechanisms of spike gen-
eration and of synaptic transmission. We will, how-
ever, capture the basic idea of synaptic transmission
by assuming that there is a threshold, which we
set to be zero, so that only activation values above
that threshold—that is, only positive levels of
activation—are transmitted to other activation
variables. This assumption is formalized through
the sigmoidal function, illustrated in Figure 1.1,
which increases monotonically from zero for very
negative levels of activation to one for large positive
activation levels.
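As a minimal sketch, this sigmoidal function (formalized in the caption of Figure 1.1 as g(u) = 1/(1 + exp(−βu))) can be written in a few lines of Python; the particular β values below are arbitrary choices for illustration.

```python
import numpy as np

def g(u, beta=1.0):
    """Sigmoidal threshold function, g(u) = 1 / (1 + exp(-beta * u))."""
    return 1.0 / (1.0 + np.exp(-beta * u))

u = np.array([-5.0, -1.0, 0.0, 1.0, 5.0])
for beta in (0.5, 1.0, 4.0):   # larger beta gives a steeper, more nonlinear sigmoid
    print("beta =", beta, "->", np.round(g(u, beta), 3))
# Every curve passes through 0.5 at u = 0 (the convention that fixes the
# activation scale); very negative u maps near 0, very positive u near 1.
```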
Connectionism uses a similar concept of activa-
tion to describe the inner state of each unit of paral-
lel processing, the abstract connectionist “neuron.”
Most connectionist models use graded activation
variables. Connectionist neurons may then be “on”
or “off” (Thomas & McClelland, 2008), character-
ized again by a sigmoidal threshold function applied
to the activation level. Some connectionist models
use binary activation variables to begin with, so they
do not require a separate sigmoidal threshold func-
tion. In Chapter  3 we will see that the activation
variables of DFT are measures of activity in small
subpopulations of neurons. These variables thus do
not directly reflect the state of individual neurons.
In typical connectionist models, the model neurons
are similarly meant to encompass activity of more
than one real neuron. Thus, overall, the concept of
activation is used more variably in connectionism,
but is not qualitatively different from the dynamic
concept of activation used in this book.
A concept of activation is also invoked in some
versions of classical cognitive architectures, models
FIGURE 1.1: A sigmoidal threshold function, g(u), is plot-
ted as a function of activation level, u. The sigmoid maps
low levels of activation onto zero and large levels of acti-
vation onto 1 and links these two regimes smoothly as a
monotonically increasing function. By convention, we
position the half-point of the sigmoid at the activation
level of zero. That convention effectively defines the acti-
vation scale. In DFT models we typically use the mathematical formalization g(u) = 1/(1 + exp(−βu)), where β is the slope of the sigmoid at zero activation. Larger values of β create steeper (more nonlinear) sigmoids.
The mathematical derivation of the mean-field approximation is complex; as a result, it is not
easy even to state how the population “activation” variable is computed from the spiking activi-
ties of all the neurons that contribute. At this point, there is no derivation of the neural dynamics
at the population level that is general enough to cover the conditions under which we use the
population description. We will show in Chapter 3 that the activity of populations of neurons pro-
vides the best correlate of neural measures with measures of behavior. The neural dynamics on
which DFT is based is a good phenomenological description of how the activity in populations of
cortical neurons evolves over time under physiological conditions in which the brain is involved
in perception and generates behavior. Although this phenomenological description has not been
rigorously derived as an approximate description from biophysical neural dynamics under these
physiological conditions, it has not been ruled out that this could be achieved in the future.
What properties of biophysical neurons are we leaving out from the population-level neural
dynamics of DFT? Clearly, we are not including discrete spiking events and spike times. The
mean-field picture assumes that, within a neural population, spikes are generated frequently
enough and asynchronously enough to sample continuous time. It is possible, however, that
for some neural mechanisms, such as the detection of time differences (e.g., in the auditory
system), or for learning (e.g., in spike time–dependent plasticity), the timing of spikes plays a
special role. Those would be cases where the approximation on which DFT is based begins to
break down. At this point, there is no clear empirical evidence for a functional role of the spik-
ing mechanism that would not be captured by population activation, but the possibility of such
a functional role remains.
of cognition that are based on the computer meta-
phor and on concepts of information processing.
In ACT-R, activation is an attribute of items of
memory that determines how accessible the memo-
rized information is (Anderson, 1983). Multiple
factors like the salience of an item, the strength of
its association with other items, or the strength of
a memory trace of the item may contribute to the
level of activation. The probability of retrieval of an
item is an increasing (sigmoidal) function of activa-
tion, and the latency of retrieval is an exponentially
decaying function of its activation level. These two
relationships link activation to observable response
rates and response times. In a broad sense, there
is some analogy between this notion of activation
and our dynamic concept of activation, in that high
levels of activation have more impact on behavior
(responses) than low levels. The theoretical setting
of ACT-R is so different, however, from that of neu-
ral dynamics that this analogy is not useful; thus for
the remainder of this book we will ignore that alter-
nate notion of activation.
If activation characterizes the inner state of
a part of the CNS, how might that inner state be
related to what is outside the CNS? Ultimately, the
CNS is connected to the outside world through the
sensory surfaces, the retina, the cochlea, the skin,
the distributed proprioceptive sensors, and other
sensory systems. Moreover, neural activity drives
motor systems, activating muscles and bring-
ing about mechanical change in the world. The
connections that sensor cells make to a portion of
the CNS can be characterized as input to relevant
activation variables that influences activation lev-
els. This will be quite easy to conceptualize within
DFT, as we shall see shortly. Conversely, activation
variables may have an impact on motor systems,
driving muscle activation and changing the physi-
cal state of an effector. That is actually trickier to
conceptualize than one might think. In terms of the
metaphor of the Braitenberg vehicles that was used
in the introduction to this part of the book, motor
action always brings with it the potential of closed
sensory-motor loops, as any motor action has sen-
sory consequences. We will address this problem in
depth in Chapter 4.
Much of functional neurophysiology is dedi-
cated to looking for systematic relationships
between stimulus or motor parameters and
the activity of neurons. This is often based on
information-theoretical notions, in particular, cod-
ing and prediction. In this book, we try to stay away
from such notions. Coding principles and their
relationship to feed-forward neural networks are
briefly reviewed in Box 1.2, where we also discuss
how the language of neural dynamics is necessary
to make sense of recurrent neural networks.
For now, let us say then that in DFT the inner
state of the CNS is related to the world outside
through two directions of influence:  The state of
the world influences the levels of activation, and
those levels of activation influence the state of
BOX 1.2  NEURAL CODING, FEED-FORWARD NETWORKS, AND
RECURRENCE
The classical conception of feed-forward neural networks is illustrated in Figure 1.2. The con-
nectivity among nodes u_i (i = 1, 2, …, 6) is ordered so that each neuron receives input only from neurons closer (in connections) to the sensory surface (described by input levels s_1, s_2, s_3) or
directly from the sensory surface itself. In such a forward network, the output neurons are
those furthest removed from the sensory surface. Their output can be described as a function
of the sensory inputs, subsuming all intermediate (hidden) neurons. In the illustration,

g(u_6) = function(s_1, s_2, s_3).  (B1.1)
The function may be nonlinear due to the sigmoidal threshold function for each neuron’s
output but maps each input onto a unique output. If the function were invertible, the network would implement a code, a one-to-one mapping between inputs and outputs. Close to the sensory periphery, where the networks are not deep, such invertible mappings are sometimes
observed or postulated, leading to the notion of rate code: Each level of stimulus intensity is
uniquely represented by a particular rate of neural firing. In general, however, the map is not
invertible, so that a many-to-one mapping may result. This is the case, for instance, when dif-
ferent patterns of input are mapped onto the same “response.” Still, information-theoretical
terms are sometimes used to characterize such networks by saying that the output neurons
“encode” particular patterns of input, perhaps with a certain degree of invariance, so that a set
of changes in the input pattern does not affect the output. A whole field of connectionism or
neural network theory is devoted to finding ways of how to learn these forward mappings from
examples. An important part of that theory is the proof that certain classes of learning meth-
ods make such networks universal approximators; that is, they are capable of instantiating any
reasonably behaved mapping from one space to another (Haykin, 2008). In this characterization
of a feed-forward neural network, time does not matter. Any time course of the input pattern
will be reflected in a corresponding time course in the output pattern. The output depends only
on the current input, not on past inputs or on past levels of the output or the hidden neurons.
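As a sketch of this input–output picture, the layered network of Figure 1.2 can be written as a plain function of its inputs. The weights below are made-up numbers chosen only for illustration; what matters is the absence of loops, so that the output is a fixed function of the current input.

```python
import numpy as np

def g(u, beta=1.0):
    return 1.0 / (1.0 + np.exp(-beta * u))

# Made-up weights for a layered structure like Figure 1.2: three sensory
# inputs feed three intermediate nodes, which feed one output node.
W_hidden = np.array([[0.8, -0.2, 0.1],
                     [0.0,  0.5, 0.5],
                     [-0.3, 0.9, 0.2]])
w_out = np.array([1.0, -0.5, 0.7])

def network_output(s):
    """g(u6) as a fixed function of the sensory inputs: no state, no history."""
    return g(w_out @ g(W_hidden @ s))

s = np.array([1.0, 0.0, 0.5])
print(network_output(s))   # the same input pattern...
print(network_output(s))   # ...always yields the same output
```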
A recurrent network such as the one illustrated in Figure 1.3 cannot be characterized by
such an input–output mapping. In a recurrent network, loops of connectivity can be found so
that one particular neuron (e.g., u_4 in the figure) may provide input to other neurons (e.g., u_6), but also conversely receive input from those other neurons, either directly (u_6) or through some other intermediate steps (e.g., through u_6 and u_5, or through the chain from u_6 to u_5 to u_2 to u_4).
The output cannot be computed from the input value because it depends on itself! Recurrence
of this kind is common in the central nervous system, as shown empirically through methods
of quantitative neuroanatomy (Braitenberg and Schüz, 1991).
To make sense of recurrent neural networks, the notion of time is needed, at least in some
rudimentary form. For instance, neural processing in such a network may be thought of as
FIGURE 1.2: In this sketch of a feed-forward neural network, activation variables, u_1 to u_6, are symbolized by the circles. Inputs from the sensory surface, s_1 to s_3, are represented by arrows. Arrows also represent connections where the output of one activation variable is input to another. Connections are ordered such that there are no closed loops in the network.
FIGURE 1.3: Same sketch as in Figure 1.2, but now with additional connections that create loops of connectivity,
making this a recurrent neural network.
the world through motor actions. In fact, it is ulti-
mately only through those links to the sensory and
motor systems that the inner states of the CNS have
meaning. In the end, this may be the concrete man-
ifestation of the embodiment stance to cognition
(Riegler, 2002). We shall come back to this point
multiple times throughout the book.
NEURAL DYNAMICS
The inner state of the CNS typically varies contin-
uously over time. Unlike digital computers, organ-
isms do not have a clock that updates the state of
the CNS in a computational cycle. Nor is there any
behavioral evidence that processing occurs from
time step to time step. On the contrary, there is
behavioral evidence for online updating of CNS
states that occurs in continuous time. For instance,
if the target to which a pointing movement is
directed is shifted at any time during the processes
of movement preparation or initiation, the move-
ment begins to reflect that shift after a delay of
about 100 ms. That delay is invariant as the timing
of the target shift is varied (Prablanc & Martin,
1992). We should think, therefore, of activation
variables as functions of continuous time, denoted
mathematically by u(t), where u stands for activa-
tion and t, for continuous time.
Does this time dependence itself have to be con-
tinuous? In other words, does u(t) change smoothly
over time, or may u(t) jump abruptly from one
value to another? At the level of the biophysics of
neurons, the forming of an action potential would
seem to be an abrupt event, although it is actually
continuous on a finer timescale (see Box 1.1). There
is really no evidence that behavior is driven by such
microscopic events. To the contrary, there is behav-
ioral evidence for inertia, for a gradual change of
activation states. A classic example is visual inertia
in motion perception (Anstis & Ramachandran,
1987), in which a percept of visual motion is set
up by a first stimulus of apparent motion, followed
by an ambiguous stimulus that offers two possible
paths of motion, one path in the same direction as
the first motion, the other at an angle to the initial
path. Observers prefer the motion path in the same
direction. (The exact mapping of such perceptual
continuity to our activation variables requires
some work, which we will do in a formal way in
Chapter 2).
The postulate that activation variables u(t) are
continuous functions of continuous time has impor-
tant consequences. It rules out, for instance, the idea
that the values of activation variables originate from
simple input–output computations (see Box 1.2),
an iteration process through time. From an initial level of activation, the activation level of
all neurons is iteratively updated. At each time step, the output levels that provide input to
a neuron are taken from the previous iteration step. In a sense, this iteration rule for the
activation levels of all neurons represents a dynamical system, although in discrete time
(Scheinerman, 1996). On the other hand, the synchronous updating of all neurons by some
kind of clock cycle is not neurally realistic. There is no evidence for such updating across an
entire network. Instead, as briefly reviewed in Box 1.1, neurons fire asynchronously, effec-
tively sampling continuous time. The mathematical description of how activation evolves in
recurrent neural networks in continuous time is exactly the neural dynamics discussed in
the main text of this chapter.
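A minimal sketch of such a synchronous discrete-time iteration, for an assumed two-node recurrent network with made-up weights, might look as follows. It is exactly this clock-like updating scheme that the continuous-time neural dynamics replaces.

```python
import numpy as np

def g(u, beta=2.0):
    return 1.0 / (1.0 + np.exp(-beta * u))

# Two mutually coupled nodes: u1 drives u2 and u2 feeds back onto u1.
W = np.array([[0.0, 1.2],
              [1.2, 0.0]])
s = np.array([0.5, 0.0])        # external input to the first node only
u = np.array([-1.0, -1.0])      # initial activation levels

for step in range(6):
    u = W @ g(u) + s            # all nodes updated at once from the previous state
    print(step, np.round(u, 3))
# Each new state depends on the network's own previous state, not on the
# input alone; this is why a simple input-output description fails here.
```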
Recurrence and the neural dynamics it implies are not conceptually compatible with the
information-theoretical notions of encoding. In recurrent networks, there is no one-to-one
or even many-to-one mapping from the stimulus space. The output of any neuron depends
not only on the inputs to the network but also on the current state of activation in the net-
work, which reflects the recent history of activation and stimulation. Different histories of
stimulation leading up to the same instantaneous stimulus lead to different activation pat-
terns. Information-theoretical measures are still sometimes used to characterize recurrent
neural networks as an approximate description (e.g., looking for how deep in time we need
to go to extract how much information about a stimulus). In dynamic field theory we aban -
don this language, however, and emphasize instead the neural processes captured by neural
dynamics.
because in such input–output systems any abrupt
change of input induces a matching abrupt change
in output. Neural dynamics formalizes this postu-
late of continuous evolution of activation in con-
tinuous time. Neural dynamics means that the time
course of an activation variable, u(t), is the solution
of a differential equation

τ u̇ = f(u),  (1.1)
where u̇(t) is the rate of change of u, and τ is a positive
constant that serves to define the units of time (e.g.,
seconds or milliseconds). Here, f(u) is a smooth
function of activation, u, and we need to figure out
which function, f, produces the right time course of
activation.
Before we do that, let’s unpack Equation 1.1. The
rate of change of an activation variable is formally
its derivative with respect to time, u̇. If we were to
plot the time course of activation, u(t), against time,
t, the rate of change would be the slope of that func-
tion. To make that intuitive, think of activation
as the position of a particle. The rate of change of
the position of a particle is its velocity—simple as
that! The differential equation above, Equation 1.1,
forms a dynamical system for the activation vari-
able, u(t) (see Box 1.3 for a tutorial on dynamical
systems). The solutions of the differential equation
BOX 1.3  DYNAMICAL SYSTEMS
The word dynamics has a variety of meanings. In music, for instance, dynamics refers to the
varying levels of sound within a piece. A dynamic scenario in computer vision or robotics is
simply a time-varying scenario. The word comes from the Greek dynamis, for “power” or “force.”
In classical mechanics, dynamics refers to the core idea that movement can be explained and
predicted from underlying causes, the forces that act on bodies. In modern mathematics, the
theory of dynamical systems is a well-developed field with deep connections to other branches
of analysis (see Perko [2001] for an advanced but pertinent treatment). This theory is the basis
of most mathematically formalized models in the sciences—not only in physics and engineering but also in chemistry, biology, economics, sociology, and many other areas. Braun (1993)
provides a highly accessible introduction to dynamical systems with an emphasis on such
applications, giving a large number of examples.
The core idea of the theory of dynamical systems is that “the present predicts the future”
given a “law of motion,” a dynamical law formalized as a dynamical system. To make that idea
concrete, we first need to talk about variables and time courses. Think of a single variable, u,
that characterizes the state of a system (we will say something about multiple variables at the
very end of this box). In the main text of this book, u is an activation level. In mechanics, u
could be the position of a point mass along a line, for example, along a vertical line when study-
ing free fall. The variable is assumed to capture the evolution in time of a system by its time
dependency, u(t). Figure 1.4 illustrates such a time course, here in the form of an exponential
function. The derivative of u, denoted by u̇ or du/dt, is the rate of change of u, also illustrated in
Figure 1.4. If u were the vertical position of a point mass, its rate of change would be the verti-
cal velocity of the point mass. In the figure, as u decreases in time, its rate of change is nega-
tive. The decrease slows down over time, and thus the rate of change approaches zero from
below. The time courses of the variable u(t) and of its rate of change, u̇(t), are correlated. Figure 1.4 shows this correlation by plotting u̇(t) against u. This reveals the functional relationship between the two quantities, u̇(t) = −u.
More generally, any functional relationship
u̇(t) = f(u)  (B1.2)
sets up a dynamical system through a differential equation. Figure 1.5 illustrates a gen-
eral dynamical system characterized by a nonlinear function, f(u). The core idea of dynamical
systems theory is captured by the existence and uniqueness theorem, which says that for any
sufficiently smooth function, f(u), and any initial value of u, a unique solution, u(t), of the dif-
ferential equation exists for an interval of time, t. Thus, given the dynamics captured by the
function, f(u), “the present predicts the future.” In Figure 1.5, this is made plausible by mark-
ing an initial condition for u and highlighting the rate of change for that initial value. In this
case, a negative rate of change, predicting an imminent decrease of the activation variable, is
indicated by the arrow pointing to the left. Thus, in a mental “iteration,” we expect the vari-
able to have a somewhat smaller value to the left of the initial value a moment of time later.
The dynamics will then supply a new rate of change, which predicts the next value and so on.
In the main text of this chapter we use this form of iterative mental simulation to intui-
tively understand attractors, the convergence in time to a fixed point of the dynamical system.
A fixed point, u_0, is formally defined as a solution of

f(u_0) = 0  (B1.3)
as illustrated in Figure 1.6. Because the function f does not depend on time, the fixed point, u_0, is constant over time as well, so that u̇_0 = 0, and thus u̇ = f(u_0) = 0. In other words, the fixed point, u_0, is a constant solution of the differential equation.
A fixed point is “asymptotically stable” if the solutions of the dynamical system that start
from initial conditions nearby converge over time to the fixed point. When the dynamics, f, has
a negative slope at the fixed point, df/du(u = u_0) < 0, then the fixed point is stable. The arrows in
FIGURE 1.4: Top: The time course of a dynamic variable, u. Middle: The time course of its rate of change, u̇. Bottom: The functional relationship between u̇ and u obtained by correlating the two. The symbols in the three panels mark corresponding values of u̇ and u at three points in time. The time courses on top were obtained from solutions of the linear dynamical system shown at the bottom.
FIGURE 1.5: A nonlinear dynamical system u̇ = f(u) with a particular value of u chosen as initial condition (open
circle). The dynamics assigns a rate of change to that initial condition, which predicts the direction of change
(arrow).
Figure 1.6 remind us of the argument made in the chapter’s main text: To the left of the fixed
point, the positive rate of change leads to an increase toward the fixed point, and to the right
of the fixed point, the negative rate of change leads to a decrease toward the fixed point. An
asymptotically stable fixed point is also called a fixed point attractor and sometimes just an
attractor (there are more complex limit sets that carry that name, but we will not concern our-
selves with those in this book).
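These definitions are easy to probe numerically. The sketch below scans an assumed one-dimensional dynamics for zero-crossings of f and classifies each fixed point by the sign of the slope there; the particular f, with an attractor to the left of a repellor, loosely mirrors the bottom panel of Figure 1.6.

```python
import numpy as np

def f(u):
    # Illustrative dynamics (not a DFT model): zeros at u = 1 and u = 3, with
    # negative slope at u = 1 (attractor) and positive slope at u = 3 (repellor).
    return (u - 1.0) * (u - 3.0)

u_grid = np.linspace(-1.0, 5.0, 6001)
vals = f(u_grid)
eps = 1e-6
for i in range(len(u_grid) - 1):
    if vals[i] * vals[i + 1] < 0.0:                      # sign change: a fixed point
        u0 = u_grid[i]
        slope = (f(u0 + eps) - f(u0 - eps)) / (2 * eps)  # df/du at the fixed point
        kind = "attractor (stable)" if slope < 0.0 else "repellor (unstable)"
        print("fixed point near u = %.3f: slope %+.2f -> %s" % (u0, slope, kind))
```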
This mathematical concept of asymptotic stability is sometimes loosely referred to as
stability by modelers, even though strictly speaking stability is a slightly different concept.
Mathematically, a fixed point is “stable” when solutions that start nearby stay nearby (but
do not necessarily converge). Asymptotic stability implies stability, but not vice versa. This is
important because instability is the opposite of stability, not of asymptotic stability. A  fixed
point is “unstable” if there are solutions that start arbitrarily close to the fixed point but move
away from the fixed point. The lower plot in Figure 1.6 shows an unstable fixed point on the
right. In fact, this is a “repellor,” a fixed point that all solutions starting nearby move away from.
This plot also brings home the important message that stability is a property of a fixed
point, not of the entire dynamical system! There are two fixed points here, one stable, the other
unstable. Sometimes, researchers talk about “stable” systems. This is a loose way to talk about
a system that has a single fixed point, which is stable. Linear systems, in particular, can have
only a single fixed point (because a straight line can only go through zero once). Because a lot
of systems encountered in modeling are linear or are approximated as linear, it happens quite
often that there is a single fixed point, hence this loose talk about the “stability of the system.”
In nonlinear dynamical systems, the fixed points and their stability organize the ensemble
of all solutions of the dynamical system. This ensemble is called the flow and can be thought of
as a mapping from all initial conditions to the states the solutions lead those initial conditions
to at a given time, t, later. For the dynamical system at the bottom of Figure 1.6, for instance,
all initial conditions to the left of the repellor will be mapped onto values increasingly (with
increasing time) close to the attractor on the left. All initial conditions to the right of the repel-
lor will be mapped onto increasingly large values of u (which will go to infinity when time goes
to infinity). The qualitative theory of dynamical systems is aimed at characterizing the flow of
dynamical systems rather than analytically solving specific equations. Most textbooks on dif-
ferential equations focus on solving equations, but the books cited earlier in this box address
the qualitative theory of dynamical systems (as does Scheinerman [1996], a good elementary
text provided freely online by the author). In the qualitative theory of dynamical systems,
flows that are merely slight deformations of each other are all considered to be equivalent (the
FIGURE 1.6: Top: The same nonlinear dynamical system as in Figure 1.5, with the fixed point, u_0, marked by a filled circle. Arrows indicate the attraction to this asymptotically stable fixed point. The thin line illustrates the (negative) slope of the function, f(u), at the fixed point. Bottom: The dynamics is changed (shifted upwards) and now has two fixed points, an attractor, u_0, on the left and a repellor, u_1, on the right.
FIGURE 1.7: Top: A part cut out of the dynamics at the bottom of Figure 1.6 is further changed by adding a constant
parameter to the dynamics. The dynamics at three values of this additive parameter is shown (see text). Bottom:
A bifurcation diagram of the dynamics shown at the top plots the fixed points of the dynamics as a function of the
parameter that changes the dynamics. The two fixed points collide and then disappear as the additive constant
parameter increases.
technical term is topologically equivalent). For instance, if we deform the function, f, at the bottom
of Figure 1.6 a bit, but not enough to remove the two fixed points or to change the signs of the
slope of f around each fixed point, then the precise time courses of solutions would change,
but that change would be minor. Solutions of the original and of the deformed dynamical sys-
tems could be mapped onto each other such that neighboring solutions in the original system
remain neighbors in the deformed system and vice versa (this is topological equivalence). In
contrast, the dynamical system at the top of Figure 1.6 is not topologically equivalent to the
one at the bottom. One can see this by looking at solutions for the system at the top that start
just to the left and just to the right of the location where the repellor is in the bottom system.
Those solutions stay close to each other over time for the top system, while they diverge from
each other for the bottom system.
When we model neural processes, we essentially form hypotheses about categories of solu-
tions, different stable states, and how they are connected. This amounts to making assump -
tions about the flow, the ensemble of all solutions. That is why the qualitative theory of
dynamical systems is of interest to us. Qualitatively different flows are often separated by
instabilities, which we will look at next. Instabilities thus demarcate regimes with qualitatively
different solutions, and that is why instabilities are of so much interest to us in dynamic field
theory (DFT).
Instabilities are changes in the number or stability of fixed points. The changes come from
some parametric change of the dynamics, that is, of the function, f. We think of such changes
as being smooth, that is, the function, f, changes continuously as a continuous parameter is
changed. In the main text of this chapter, input strength is such a parameter, for instance. Even
though the function, f, changes smoothly, the solutions may change abruptly, and that hap-
pens exactly at instabilities. Figure 1.7 illustrates how this may happen. Here we have taken
the portion of the dynamics depicted at the bottom of Figure 1.6 that contains the two fixed
points and applied a continuous parameter that shifts the dynamics, f, upward (f is shown only
for three values of that parameter). As this happens, the attractor on the left and the repellor
on the right move toward each other, until they collide, forming a single fixed point that is now
unstable. At slightly larger values of the parameter, the fixed point is gone! So the stability of a
fixed point has changed (attractor to unstable fixed point) and the number of fixed points has
changed (from two to zero). This is the “tangent bifurcation” that we also discussed in the main
text. The word bifurcation is a mathematical term for the looser term instability more commonly
used by physicists and modelers. Why instability is a good term is intuitive from Figure 1.7: Just
are time-continuous (in fact, differentiable) trajec-
tories of activation, u(t), for which Equation 1.1 is
true—that is, whose rate of change, u̇, is the pre-
scribed function, f(u), of its current activation, u.
But what function, f(u), would be appropriate?
We need another postulate to narrow down the class of
admissible dynamical systems defined by f(u). That
additional postulate is stability. Intuitively, stabil-
ity means something like resilience, the capacity
to recover from perturbations. In the CNS, neural
noise is a common form of perturbation. Neural
processes vary stochastically (see Box 1.4 for a dis-
cussion of noise and fluctuations). Neural variability
acts as stochastic perturbations on any activation
variable that receives neural input. Stability enables
the activation level to resist such perturbations.
Other forms of perturbation are distractors, that is,
activation states that are not compatible with the
current activation pattern in the CNS. For instance,
when gaze is fixed on a visual target, neural activa-
tion from a visual stimulus outside the fovea would
tend to attract attention and to redirect gaze to that
new location. Stability is the capacity to resist such
distractor activation (even if resistance is limited in
time and strength, see Kopecz and Schöner, 1995, for
an early neural dynamic account of such resistance).
Because the CNS is highly interconnected, an
activation variable is exposed to influences from
many other activation variables or directly from sen-
sory stimulation. Most of the time, many of these
influences are not consistent with the current state
of the activation variable; that is, they would tend to
drive activation away from the current state. Without
stability, the CNS would not be able to shield a par-
ticular state from all other possible influences that
would disrupt neural function quite generally. We
will examine the postulate of stability in more detail
later in the chapter and again in Chapters 2 and 4. For
now, we shall use the stability postulate to constrain
the class of neural dynamics, f(u), that generates
behaviorally meaningful time courses of activation.
How stability constrains the function f(u) can
be understood by first looking at the trivial case
in which f(u) = 0, illustrated in Figure 1.8. In this
case, the rate of change of activation is constant at
zero, independent of the current level of activation.
as the bifurcation occurs, and the two fixed points collide, the slope of the function, f, at the
remaining single fixed point becomes zero! So the stability criterion starts to fail at this point.
A theorem by Hopf has classified instabilities in dynamical systems using concepts that
we will not discuss here. In that classification, the tangent bifurcation is the simplest and
most generic instability, and most of the instabilities encountered in DFT are tangent bifurca-
tions (the only exceptions arise from special symmetries in a dynamics). What the theory of
bifurcations and, more generally, the qualitative theory of dynamical systems helps us model-
ers with is solving the problem of inverse dynamics. Forward dynamics means solving a given
differential equation, and that is what most textbooks focus on (we do this mostly by numeri-
cal methods; see Box 1.4). Inverse dynamics is finding the right differential equation given some
assumptions about its solutions. We typically make assumptions about attractors and how
their number and stability change as conditions are changed. We can then use bifurcation
theory to decide whether a particular class of dynamical systems correctly captures that lay-
out of the solutions.
This tutorial box only provides the most basic ideas. In particular, we have been referring
to a single variable, u, and its dynamics, u̇ = f(u). Most of the time in DFT we have many vari-
ables; in fact, conceptually we have infinitely many variables described by entire functions,
u(x). The ideas sketched out here do carry over into higher dimensions, but the level of mathe-
matics required is more advanced. Fixed points are still fixed points in higher dimensions. The
slope of f is replaced by the real parts of the eigenvalues of the matrix that linearizes f around
the fixed point. Attractors are separated not by simple repellors but by lines or surfaces that are
invariant solutions, unstable manifolds. But these changes are primarily technical in nature.
The only thing that is qualitatively new when we move beyond a single dimension is the occur-
rence of more complex attractors such as periodic solutions (limit cycle attractors) and more
complex bifurcations. In this book we manage to stay away from those, although they do play
a role in understanding coordination (Kelso, 1995; Schöner & Kelso, 1988).
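The tangent bifurcation itself can be traced numerically. In the sketch below, an assumed dynamics is shifted upward by a parameter p and its fixed points are tracked as p grows; the attractor and the repellor approach each other and then annihilate, just as in Figure 1.7.

```python
import numpy as np

def f(u, p):
    # Illustrative dynamics shifted upward by p (cf. Figure 1.7). Its fixed
    # points sit at u = 2 -/+ sqrt(1 - p); they collide at p = 1 and then vanish.
    return (u - 1.0) * (u - 3.0) + p

u_grid = np.linspace(0.0, 4.0, 4001)
for p in (0.0, 0.5, 0.9, 0.99, 1.1):
    vals = f(u_grid, p)
    roots = [round(float(u_grid[i]), 2) for i in range(len(u_grid) - 1)
             if vals[i] * vals[i + 1] < 0.0]
    print("p =", p, "-> fixed points near", roots)
# As p increases, the two fixed points move together and finally disappear:
# an attractor and a repellor annihilate in a tangent bifurcation.
```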
BOX 1.4  STOCHASTIC DYNAMICAL SYSTEMS AND THEIR
NUMERICAL SOLUTION
Noise is important in neural dynamics. First of all, one of the salient features of real neu-
ral networks is that neural activity is noisy, whatever the cause. Behavioral data are also
noisy: Performance varies from trial to trial. Such behavioral variance is an important diagnos-
tic of the underlying dynamics. Models that account for the variability of behavior are stronger
than models that predict only the average performance. More specifically, the neural dynamics
we use in dynamic field theory (DFT) goes through instabilities. Near instabilities, the neural
dynamics is sensitive to noise:  A  small random perturbation may kick the system out of an
attractor that is close to becoming unstable and thus induce a transition to another stable state.
Thus, in our modeling, we must address noise explicitly.
Mathematically, variability is a topic of probability theory. Combining probability theory
with dynamics requires the relatively advanced mathematical techniques of stochastic differ-
ential equations (Gardiner, 2009), but fortunately we really only need the simplest case, which
can be grasped quite intuitively. The idea is that noise acts as a contribution to the dynamics
that is additive, white, and Gaussian. Formally,
u̇ = f(u) + q ξ(t)  (B1.4)
where f(u) is the deterministic portion of the differential equation, q is the noise strength,
and ξ(t) is a Gaussian white noise process. First, the noise is additive, which really means that
its influence is independent of the current level of activation, u. That is a reasonable first
approximation. Even if the source of noise were sensitive to the level of activation (e.g., more
noise at higher levels of activation as in a Weber law), there would not be any level of activa-
tion at which noise is zero. So we are modeling that base-level noise that is common across
activation levels. Second, the noise is white. That means that the noise, ξ(t), at one particular moment in time, t, is statistically independent of the noise, ξ(t′), at any other time, t′. This
expresses that the contributions of noise to the dynamics are truly random. If there was
any dependency across different times, then that would be a deterministic contribution to
the dynamics that should have been included in the deterministic portion, f, of the dynam-
ics. Third, noise is Gaussian. This means that the distribution of the noise at any moment in
time is a Gaussian distribution with zero mean, ⟨ξ(t)⟩ = 0. The joint probability distribution of the noise at different moments in time factorizes into Gaussian distributions that are all generated from the two-point correlation function, ⟨ξ(t)ξ(t′)⟩ = δ(t − t′), for two times, t and t′.
The delta function is zero whenever the two times differ (consistent with the independence
at different moments in time; for Gaussian processes, statistical independence is the same
as being uncorrelated). The delta function at the point when both times coincide is infinite,
but its integral over time is 1. Obviously, this third property of noise is a bit more technical.
It comes, ultimately, from the central limit theorem of probability theory. The idea is that
the noise comes from many sources of randomness, all independent of each other but hav -
ing the same distribution. The theorem says, intuitively speaking, that the superposition of
such noise sources is Gaussian distributed. In the nervous system, it is easy to imagine that
noise comes from many different sources, for example, variations in membrane potential and
synaptic activity across the many neurons—about 10,000 on average—that project onto any
given cortical neuron.
The upshot is, thus, that noise adds a random component to the rate of change that gives
the activation variable a kick that is uncorrelated at every moment in time. The activation
variable itself evolves by integrating over time across these random kicks. We illustrated this
in Figure 1.8 for the case that f(u) = 0, that is, for a purely stochastic dynamics. The simulation
shown in that figure is the result of integration across the Gaussian white noise process. This
leads to a time-continuous process, called the Wiener process, that is still very random because
its increments are independent of each other. That is, at any moment in time, the direction of
change is independent of the current level of activation. We used this insight in Figure 1.8 to
argue for a deterministic portion, f(u), of the dynamics that limits variance by introducing sta-
bility. This was done in Figure 1.9, in which f(u) = −u + h.
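The contrast between these two cases can be reproduced in a few lines. The sketch below integrates the purely stochastic case, f(u) = 0, alongside the stabilized dynamics, f(u) = −u + h, for many independent trials; the noise strength and resting level are arbitrary illustrative values, and the √∆t scaling of the noise term is explained below (Equation B1.8).

```python
import numpy as np

rng = np.random.default_rng(0)
dt, q, h, steps, trials = 0.01, 1.0, -2.0, 2000, 500

# Gaussian white noise enters each Euler step scaled by sqrt(dt); see Equation B1.8.
noise = q * np.sqrt(dt) * rng.standard_normal((trials, steps))

u_free = np.zeros(trials)     # f(u) = 0: pure integration of noise (Wiener process)
u_stab = np.full(trials, h)   # f(u) = -u + h: fluctuations around the attractor u = h
for i in range(steps):
    u_free += noise[:, i]
    u_stab += dt * (-u_stab + h) + noise[:, i]

print("variance with f = 0     :", round(float(np.var(u_free)), 2))  # keeps growing
print("variance with f = -u + h:", round(float(np.var(u_stab)), 2))  # saturates near q**2 / 2
```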
Conventionally, the source of randomness, the stochastic perturbation on the right-hand
side of the dynamics, is referred to as noise. The consequence of randomness is variability
of the solutions of the stochastic dynamics. That variability is referred to as fluctuations. Not
all authors strictly adhere to that convention, however. Essentially all the models we use in
DFT have a noise component and are thus stochastic differential equations. In many cases we
compare the fluctuations of the time courses obtained from the stochastic dynamics to vari-
ability across time or trials observed in experiment. In some instances, those comparisons
lead to a quantitative match and predictive power (e.g., Schöner, Haken, & Kelso, 1986; Schutte & Spencer, 2009).
The numerical solution of stochastic differential equations differs a bit from the numerics
of deterministic differential equations. Before we review that, however, we will first discuss
numerics in greater detail. Numerics is an issue for the modeler, of course, not for the ner-
vous system. The nervous system is essentially an analogue computer that implements neural dynamics directly (although that implementation is not trivial either, using spikes, as we
briefly discussed in Box 1.1). But as modelers we solve the dynamical equations numerically
on digital computers when we run simulations to account for neural or behavioral data. When
we use neural dynamics to drive robots that behave autonomously based on their own sensory
information (as in Chapters 4, 9, 12, and 14), we do the same: The robots have on-board comput-
ers, on which we solve the equations in real time, taking input from the sensors and sending
the computed solutions to the actuators. On computers, time is discrete. The computer goes
through computational steps, paced by its clock. The time step available to us at the macro-
scopic level at which we write our code is much, much larger than the clock cycle on the hard-
ware (e.g., somewhere around 10 to 50 milliseconds for our computational cycles compared to
1 millionth of a millisecond for the hardware clock cycle on a 1 GHz processor).
How to approximate the continuous time dynamics in discrete time is the topic of numer-
ics, a well-established field of applied mathematics. For numerical solutions of deterministic
differential equations, consult Braun (1993); for numerical solutions of stochastic differential
equations, consult Kloeden and Platen (1999). Here we outline only the main ideas.
Let’s say we want to numerically solve this differential equation, the deterministic version
of Equation B1.4:
u̇ = f(u) (B1.5)
We assume that we have a computational cycle that allows us to provide estimated values, u(t_i), of the time course of u(t) at the discrete times, t_i = i·∆t. Here, ∆t is the time step and we have used an index, i = 0, 1, 2, 3, …, to count the discrete time events. The classical and simplest approach is called the Euler method and is based on approximating the derivative, u̇, around one of the sample times, t_i, by the differential quotient:

u̇(t_i) ≈ (u(t_i) − u(t_{i-1})) / ∆t (B1.6)
If you don’t remember this from high school, look it up, even on Wikipedia. It is easy to fig-
ure out. If you insert this into Equation B1.5, multiply by ∆t, and add u(t_{i-1}), you obtain the Euler formula:

u(t_i) = u(t_{i-1}) + ∆t · f(u(t_{i-1})). (B1.7)
In this derivation, you will first find that the function f(u(t_i)) on the right-hand side should be taken at the current time step, t_i. That leads to the “implicit Euler” method. When the time step is sufficiently small, we may approximate this value of the function by its value at the previous time step, f(u(t_{i-1})), as in Equation B1.7. This is easy to implement in a numerical program: Initialize the time series by setting u(t_0) to the initial condition. Then loop through the discrete times, computing at each iteration step the next value of u(t_i) based on Equation B1.7, which makes use only of the previous value, u(t_{i-1}). The time step, ∆t, must be small enough that it can sample the time courses of activation. Near an attractor, the timescale of u(t) is given by the relaxation time, τ, illustrated in Figure 1.11. The time step needs to be smaller than the relaxation time: ∆t ≪ τ. In practice, our neural dynamics is usually close to an attractor, whose stability helps keep the numerics stable. We often get away with a Euler step that is only about 10 times smaller than the relaxation time.
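As a concrete sketch of this recipe (Python; the function and parameter names are illustrative assumptions, not from the text), the loop below iterates Equation B1.7 for f(u) = (−u + h)/τ, with an Euler step about 10 times smaller than the relaxation time:

import numpy as np

def euler(f, u0, dt, steps):
    """Iterate the Euler formula u(t_i) = u(t_{i-1}) + dt * f(u(t_{i-1})) (Equation B1.7)."""
    u = np.empty(steps + 1)
    u[0] = u0                              # initial condition u(t_0)
    for i in range(1, steps + 1):
        u[i] = u[i - 1] + dt * f(u[i - 1])
    return u

tau, h = 0.1, -2.0                         # relaxation time 100 ms, resting level (illustrative)
traj = euler(lambda u: (-u + h) / tau, u0=1.0, dt=tau / 10, steps=100)
print(traj[0], "->", traj[-1])             # activation relaxes from 1.0 toward h = -2.0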
When noise comes into the picture, things are a bit different, a fact sometimes overlooked
by modelers. The Euler formula for the stochastic differential equation B1.4 reads:
u(t_i) = u(t_{i-1}) + ∆t · f(u(t_{i-1})) + √∆t · q · ξ(t_{i-1}). (B1.8)
Note that the noise term scales differently than the deterministic term with the Euler
step, ∆t.
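In code, the difference is a one-line matter. A minimal sketch of a single update of Equation B1.8 (Python; all names and values are illustrative assumptions):

import numpy as np

rng = np.random.default_rng()
dt, q, tau, h = 0.01, 0.5, 0.1, -2.0   # illustrative values
u = 0.0                                # current activation

xi = rng.normal()                      # one sample of Gaussian white noise
f = (-u + h) / tau                     # deterministic portion of the rate of change
# Equation B1.8: the deterministic term scales with dt, the noise term with sqrt(dt)
u = u + dt * f + np.sqrt(dt) * q * xi

Forgetting the square root is the error alluded to above: it makes the effective noise strength depend on the chosen time step.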
There are much better numerical procedures for solving deterministic differential equa-
tions. These get away with a larger Euler step to achieve the same precision. In fact, MATLAB
considers the Euler method so outdated that it doesn’t include the Euler algorithm any longer
in its library (it is easily programmed by hand, of course). In practice, we still use this simplest
and worst (from the point of view of numerics experts) algorithm. First, it is good enough.
Second, it lends itself to implementation on robots, on which we also take sensor readings
at every time step. The more advanced algorithms take into account multiple samples of the
dynamical variable at multiple time steps, and many also vary the time step, ∆t, depending
on how strongly the solution varies. Neither is well suited to updating the sensor data. For
sensor data, we want to go as fast as we can to track any changes in the input. So we are not
so interested in using the largest Euler step that delivers acceptable precision. A final issue
is that the more advanced methods for stochastic differential equations are quite complex,
requiring a considerable number of estimates and auxiliary variables to be iterated. Although
those methods scale better with the time step in principle, the amount of computation needed
at each time step can be quite large, more than offsetting the advantage gained by the larger
Euler step.
Any initial level of activation will thus remain
unchanged over time. But what happens when ran-
dom perturbations impact the activation variable?
A random perturbation can be modeled as a random
kick that generates a non-zero rate of change for a
short (infinitesimal) moment in time (see Box 1.4 for
a brief tutorial in stochastics). The random pertur-
bations may be distributed as a Gaussian, as hinted
at in the figure, so large kicks are less frequent than
small kicks, the average kick size being zero. Kicks
at different times are assumed to be independent
of each other. Such random influences are called
Gaussian white noise, ξ(t), and form a good model of
sources of stochasticity, based on fundamental laws
of probability (Arnold, 1974). Formally, the neural
dynamics with noise can be written as
τu̇ = ξ(t). (1.2)
Any time a positive kick is applied, activation
increases. Every time a negative kick is applied, acti-
vation decreases. Over time, activation performs a random walk, as illustrated in Figure 1.8, in which
multiple time courses obtained by different sam-
ples of the noise process are shown. As is apparent
from those simulations, the variance of the random
walk increases boundlessly! This is essentially the
law of Brownian motion, first modeled mathemati-
cally by Einstein (1905). Intuitively, this increase of
variance comes from the fact that there is no sys-
tematic restoring force that pushes activation back
to the starting value. If perturbations have driven
activation to a certain level, say, a positive level,
future kicks are just as likely to further drive activa-
tion away from the starting level as they are to drive
levels of activation back to the starting level.
Clearly, this model is missing something to
become functionally meaningful:  It is missing a
restoring force that keeps activation within bounds.
Such a restoring force would have to ensure that
when large positive activation levels have been
reached, the probability of negative rates of change
becomes much larger than the probability of posi-
tive rates of change so that kicks back toward lower
activation levels become prevalent. Analogously,
when very negative activation levels have been
reached, the probability of positive rates of change
must become larger than the probability of nega-
tive rates of change. Figure 1.9 illustrates such
probability distributions. They are centered on a
line with a negative slope, so that, in fact, the mean
rate of change is negative far out on the positive
activation axis and positive far out on the negative
activation axis.
Mathematically, this model can be written as
τu̇ = −u + h + ξ(t). (1.3)
Its deterministic portion is illustrated in Figure
1.10. Here, –u makes the straight line with the nega-
tive slope. By adding a negative constant, h < 0, we
have shifted the straight line downward, so that it
intersects the activation axis at u_0 = h. That intersection point is called the resting level of activation. It is formally the solution of

τu̇ = 0. (1.4)
This solution is a fixed point, a constant solution,
u(t) = h, of the dynamics (see Box 1.3). This fixed
point is also an attractor, defined by the fact that
activation converges to the fixed point over time
from any initial activation level in the vicinity of the
fixed point. Our earlier reasoning that activation
levels remain bounded explains this convergence
as well: If activation starts at levels higher than that
of the fixed point, then the neural dynamics has
FIGURE 1.8: Top: A neural dynamics is illustrated by plotting the rate of change of activation, u̇, against activation, u. In this case, the mean rate of change is zero across all levels of activation, but random rates of change are drawn
independently at each moment in time from a Gaussian distribution (which is illustrated for the level of zero activa-
tion; this distribution is meant to extend from the page, the same distribution would exist for every level of activation).
Bottom: Different time courses of activation, u(t), that are generated by this stochastic neural dynamics are shown as
functions of time, t. All trajectories start at the same level of activation, labeled “resting level,” but evolve differently
because different samples are drawn from the probability distributions.

negative rates of change, which implies that acti-
vation will decrease and, thus, approach the fixed
point from above. If activation starts at levels lower
than that of the fixed point, positive rates of change
imply that activation will grow, approaching the
fixed point from below. It is thus the negative
slope of the rate of change around the fixed point that brings about stability: the level of activation at the fixed point is the stable activation state.
A more formal way of seeing the convergence to
the fixed point is to solve the differential equation.
Box 1.3 shows how to do this analytically. More
commonly, in DFT we solve differential equations
numerically on a digital computer (see Box 1.4 for
a review of numerics). Such numerical simulations
formally instantiate the iterative account we have
been using intuitively. Time is sampled at discrete
times separated by a small time step ∆t. The time
course of activation, u(t), is approximated by a
discrete time sequence, u(t_i), where t_i = i·∆t and i counts discrete time, i = 0, 1, 2, …. In the simplest numerical procedure (called the Euler formula), the time sequence may be obtained from the approximation of the rate of change

u̇ ≈ (u(t_i) − u(t_{i-1})) / ∆t. (1.5)
Inserting this into the dynamics (still neglect-
ing noise), we obtain after some rearranging
of terms:
FIGURE 1.10: Dynamics of a single neural activation variable of the form τu̇ = −u + h, illustrated by plotting the rate of change of activation, u̇, against activation, u, itself. The
intersection with the activation axis at the resting level
is an attractor, a stable fixed point. Along the activation
axis, arrows show the direction of change. The length of
the arrows indicates the rate of change, which approaches
zero near the attractor.
FIGURE 1.9: This figure is analogous to Figure 1.8, but now the mean rate of change is a function of the activation level
illustrated at the top by the straight line with negative slope. Two examples of probability distributions are illustrated.
The one on the right is centered on a negative rate of change; the one on the left is centered on a positive rate of change.
Their means lie on the straight line. The different samples of the activation trajectories shown at the bottom now remain
bounded and are centered on the resting level.

u(t_i) = u(t_{i-1}) + (∆t/τ) · (−u(t_{i-1}) + h). (1.6)
On the right-hand side, only values of u at the ear-
lier time t
i−1 are needed. They determine the value
of activation at the next time step, t
i, on the left-hand
side. In other words, this is an iterative equa-
tion: Starting with some initial value of activation,
future values can be obtained by iterating the equa-
tion in discrete time into the future. (Numerical
solutions of the stochastic version, Equation 1.3, of
the dynamics are discussed in Box 1.4).
Figure 1.11 illustrates time courses obtained
this way. Different solutions were obtained by
setting different initial conditions, u(0), so that
activation starts out at different levels. Clearly,
independently of the different initial levels, activa-
tion converges in all cases to the fixed point at the
resting level. This convergence, often called relax-
ation, takes the form of an exponential decay of the
difference from the fixed point, a characteristic of
the solutions of linear equations. The time constant
of the exponential decay is the parameter τ. That
is why we said earlier that τ fixes the units of time.
This time constant is also called the characteristic
time or relaxation time of the neural dynamics.
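A short sketch (Python; the parameter values are illustrative assumptions) reproduces this behavior: trajectories started at different levels all relax toward h, and after one relaxation time τ the remaining distance to the attractor is about 1/e ≈ 36.8% of the initial distance, independently of where activation started:

import numpy as np

tau, h, dt = 0.1, -1.0, 0.001            # 100 ms relaxation time (illustrative values)
steps = int(tau / dt)                    # integrate for exactly one relaxation time

for u0 in (3.0, 0.5, -3.0):              # different initial conditions, as in Figure 1.11
    u = u0
    for _ in range(steps):
        u = u + (dt / tau) * (-u + h)    # Euler step for the dynamics tau*du/dt = -u + h
    remaining = (u - h) / (u0 - h)       # fraction of the initial distance still left
    print(f"u(0) = {u0:+.1f}: {remaining:.3f} of the distance remains (expect ~1/e)")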
The last step needed to make sense of neural
dynamics is to consider inputs to the dynamics,
which may originate from the sensory surfaces or
from other activation variables. In neural dynamics,
inputs are contributions to the rate of change.
Positive contributions are excitatory inputs; nega-
tive contributions are inhibitory inputs. To be spe-
cific, consider an input from a sensory system, s(t),
that varies in time. Figure 1.12 illustrates how the
neural dynamics changes as the input increases
from zero to a positive value, s_0, in an abrupt step. Because the input does not depend on the activation level itself, its increase shifts the entire dynamics, that is, the negatively sloped function of activation, upward. As a result, the zero-crossing moves to the right, from the resting level h, to a positive value, the new fixed point at h + s_0. The system was ini-
tially at resting level, but because that is no longer a
fixed point, activation begins to change. Activation
relaxes exponentially to the new fixed point, with
the same time constant with which it relaxes to
the resting level in the absence of input. Note that
what has an impact on other neurons is not acti-
vation itself but the output of the activation vari-
able, obtained by applying the sigmoidal threshold
FIGURE 1.12: Top: The neural dynamics τu̇ = −u + h + s(t) is illustrated. The gray line reminds us of the dynamics
without input, s(t), that has a fixed point at u = h, the rest-
ing level. Input shifts the rate of change upward, leading to
a new fixed point at u = h + s. Bottom: The resulting acti-
vation trajectory, u(t) (solid line), is shown together with a
sketch of the associated input, s(t) (dashed line). The dotted
line shows the output of the activation variable obtained by
applying the sigmoid threshold function to the activation
trajectory.
FIGURE 1.11: Three activation trajectories are shown as functions of time. These were obtained by numerically solving τu̇ = −u + h. Activation converges (“relaxes”) to the resting level, h, from different initial values. The time, τ (here 100 ms), that it takes for the initial distance from the attractor to fall to 36.8% (the reciprocal of the Euler number e)
is marked by the dashed vertical line. This time is indepen-
dent of the absolute level of initial activation and defines
the timescale of the dynamics.

function to the activation variables. Figure 1.12
shows the time course of this thresholded output
level. While activation responds to a step change of
input with an exponential time course, the output
level has a more abrupt time course.
One can see here that the attractor structures
the time courses of activation. The attractor itself
may move, even jump. Activation changes smoothly,
tracking and at all times moving toward the attrac-
tor. In this simple case of a single activation variable
driven by a single input, the neural dynamics acts as
a low-pass filter, smoothing the time course of input
on the characteristic timescale, τ.
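This low-pass behavior is easy to check in simulation. In the sketch below (Python; the step time, input strength, and the particular sigmoid and its steepness β are illustrative assumptions), the input jumps abruptly from 0 to s_0: activation relaxes exponentially to the new fixed point h + s_0, while the sigmoided output changes much more abruptly:

import numpy as np

tau, h, beta = 0.1, -2.0, 4.0                # illustrative parameters
dt, steps, s0, step_at = 0.001, 3000, 5.0, 1000

def g(u):
    return 1.0 / (1.0 + np.exp(-beta * u))   # sigmoidal threshold function

u = np.empty(steps)
u[0] = h                                     # start at the resting level
for i in range(1, steps):
    s = s0 if i >= step_at else 0.0          # input steps abruptly from 0 to s0
    u[i] = u[i - 1] + (dt / tau) * (-u[i - 1] + h + s)

print("new fixed point:", u[-1], "~ h + s0 =", h + s0)
print("output just before / long after the step:", g(u[step_at - 1]), g(u[-1]))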
Exercise 1.1 gives you the opportunity to explore
through an interactive simulator how the neural
dynamics generates continuous time courses out of
potentially discontinuous inputs. Next we will look
at how more complex neural dynamics may do the
opposite—transform continuous inputs into dis-
continuous activation time courses that represent
the simplest form of decision making, the decision
that an input has been detected.
SELF-EXCITATION AND THE
DETECTION INSTABILITY
All of this discussion has been about a single acti-
vation variable receiving external input. Now we
will look at neural interaction. Neural interaction
refers to the dependence of the rate of change of an
activation variable on input from other activation
variables. Neural interaction includes, therefore,
the forward neural connectivity that character-
izes many connectionist networks. More typically,
however, neural interaction refers to patterns of
coupling that include recurrent loops of connectiv-
ity. A limit case that we will use as a starting point
here is the neural dynamics of a single activation
variable that receives excitatory input from itself.
That is the simplest form of recurrent neural con-
nectivity, a network consisting of only one neu-
ron that connects back onto itself, as illustrated
in Figure 1.13. Such circuits exist in the CNS, but
we will see in Chapter 2 that this limit case really
stands for the neural dynamics of small populations
of neurons that are mutually coupled through excit-
atory connections. Mathematically, self-excited
neural dynamics can be formulated by adding a sin-
gle term to the rate of change considered thus far:
τu̇ = −u + h + s(t) + c·g(u), (1.7)
where the parameter c > 0 represents the strength
of the self-excitatory contribution. The sig-
moid threshold function, g(u), was illustrated
earlier (Figure 1.1) and can be formalized
mathematically as
g(u) = 1 / (1 + exp(−βu)). (1.8)
Consistent with the concept of activation, only suf-
ficiently positive levels of activation have an impact
on other activation variables, which is assured by
passing activation through the sigmoidal function,
g(u). This mathematical formulation highlights
how input is dependent on the activation level, u,
which is the signature of neural interaction. Note
that the dependence of the rate of change of activa-
tion on the activation variable itself through the –u
term is not part of neural interaction, as this term
does not represent input but establishes the intrin-
sic neural dynamics that generates stability.
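To make the shape of this nonlinear dynamics concrete, the sketch below (Python; the parameter values and the scan grid are illustrative assumptions, not from the text) evaluates the right-hand side of Equation 1.7 on a grid of activation levels and locates fixed points as sign changes of the rate of change; a negative slope at a crossing marks an attractor, a positive slope a repellor:

import numpy as np

h, c, beta = -4.0, 4.0, 4.0                  # illustrative parameters

def g(u):
    return 1.0 / (1.0 + np.exp(-beta * u))   # sigmoid threshold function (Equation 1.8)

u = np.linspace(-8.0, 8.0, 10000)
for s in (0.0, 2.0, 4.0):                    # no, intermediate, and strong input
    rate = -u + h + s + c * g(u)             # right-hand side of Equation 1.7
    crossings = np.where(np.diff(np.sign(rate)) != 0)[0]
    kinds = ["attractor" if rate[i + 1] < rate[i] else "repellor" for i in crossings]
    fixed = [f"{u[i]:+.2f} ({k})" for i, k in zip(crossings, kinds)]
    print(f"s = {s}: fixed points near", ", ".join(fixed))

With these (arbitrary) values the dynamics is monostable around the resting state at s = 0, bistable at the intermediate input, and monostable in the “on” state at the strong input, matching the sequence described in the following paragraphs.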
Figure 1.14 illustrates this neural dynamics
with self-excitation. For very negative activation
levels, the sigmoid yields zero and we have the lin-
ear dynamics from before. For very positive activa-
tion levels, the sigmoid yields a constant (here 1) so
that the linear dynamics is shifted upward by c.
The sigmoid connects these two regimes, leading
overall to a nonlinear dynamical system. A dynami-
cal system is nonlinear whenever the dependence of
the rate of change on the current level of the acti-
vation variables is not a straight line. Figure 1.14
shows that without external input, s(t) (and for
sufficiently negative h and sufficiently small c), the
dynamics does not change qualitatively over the
linear dynamics. There is still a single attractor at
the resting level and the rate of change is negative
everywhere to the right of that attractor. The sys-
tem is “monostable” around the resting state, mean-
ing there is only a single attractor along the entire
FIGURE  1.13: The dynamics of a single activation vari-
able, illustrated by a circle filled in gray, is represented in
the manner of neural networks. Excitatory external input,
s, is indicated by an incoming arrow. Self-excitatory neural
interaction is illustrated by a single recurrent loop ending
in an arrow. The strength of that input is modulated by the
parameter, c.

activation axis. That is the attractor in which acti-
vation would settle.
If excitatory input of increasing strength is
applied, the dynamics is shifted upward, as shown
in Figure 1.15. At some point, the nonlinear
dynamics touches the activation axis at positive
activation levels and, with just a little more input,
two new fixed points arise. The one at a higher,
positive level of activation is an attractor, as can be
recognized by the negative slope of the dynamics
at that fixed point. The sigmoid threshold function
applied to this attractor level of activation yields
values above zero, so that this attractor represents
an “on” state of the activation variable. The fixed
point at a somewhat lower level of activation (close
to zero) is a repellor, which can be inferred from the
positive slope of the dynamics at that fixed point.
Small deviations from the repellor are amplified by
the dynamics: Deviations to the right are linked to
positive rates of change, so activation grows fur-
ther away from the repellor; deviations to the left
are linked to negative rates of change, so activa-
tion decreases away from the repellor. The repel-
lor therefore divides the activation axis into two
regimes that are called basins of attraction. One
leads to the new “on” attractor, the other to the old
“off” attractor at negative levels of activation. This
is illustrated in Figure 1.16, where the dynamics at
this point is solved numerically, starting with dif-
ferent initial conditions. Starting at larger activa-
tion levels than the repellor leads to convergence
to the new on-attractor; starting at lower activation
levels than the repellor leads to convergence to the
old attractor, at negative activation levels.
Although the new fixed points appear as input is
applied, activation is not yet affected by them. Before
input arrived, the system was sitting in the “off” attrac-
tor. When input arrived, that attractor (left-most
attractor in Figure 1.15) shifted somewhat, but the
FIGURE 1.16: Top: Dynamics of a single activation variable with self-excitation, τu̇ = −u + h + s(t) + c·g(u), in the
presence of external input, s(t) = constant of intermediate
strength. The dynamics has an attractor at a negative acti-
vation level (circle filled in solid black) and another one at
a positive activation level (circle filled in light gray), sepa-
rated by a repellor at zero activation level (circle filled in
dark gray). Bottom: Simulated activation trajectories con-
verge to the positive attractor (dashed line in light gray)
when started at positive initial activation levels (light
gray curves) and to the negative attractor (dashed line in
solid black) when started at negative initial activation lev-
els (solid black curves). This shows that the dynamics is
bistable.
FIGURE 1.14: The neural dynamics of a single activation
variable with self-excitatory neural interaction is shown in
the absence of external input, s.
FIGURE  1.15: The neural dynamics of a single activa-
tion variable with self-excitatory neural interaction in
the presence of increasing amounts of external input, s, is
illustrated by graphs going from light gray to solid black.
Circles filled by matching gray levels mark the fixed
points. Note that the three inner fixed points are unstable.

activation variable tracks that shift. A  qualitative
change is brought about only when the “off” attractor
becomes unstable, as input is further increased. The
repellor moves toward the “off” attractor at negative
activation and ultimately collides with it, annihilat-
ing the attractor. The dynamics lifts off the activation
axis and no attractor remains at negative levels of acti-
vation. At this point, activation can no longer remain
around the off-state. Activation will vigorously grow,
converging to the “on” attractor on the right.
This phenomenon, the disappearance of an
attractor, is associated with a loss of stability (see
Box 1.3). The slope of the dynamics at the attractor
becomes flat just before the attractor disappears.
This means that the restoring force that drives
activation back to the attractor after a perturbation
becomes weaker. This is why such a change of the
dynamics around an attractor is called an instabil-
ity. Mathematicians prefer the term bifurcation,
as such instabilities always involve multiple fixed
points colliding or splitting.
The instability is a significant event, even
though it happens only at one particular level of
input and even though the system quickly moves
away from the now unstable and then vanished
attractor. This is because instabilities separate
different dynamic regimes. Before this instability
the system is bistable: it has two attractors at its
disposal—the “on” state at positive levels of activa-
tion and the “off” state at negative levels of activa-
tion. After the instability only the “on” state is left; the system is now monostable.
We call this instability the detection instability
because when it is run through in the order narrated
here, with increasing input from bistable to mono-
stable, it generates the “decision” that significant
input has been detected. That decision is reflected
by the fact that the activation level goes through
zero, so that the sigmoidal threshold function goes
from zero to a positive value. Note that this decision
mechanism differs from the classical notion of sig-
nal detection theory, in which a detection decision
is made when a criterion level is exceeded. In the
detection instability, the detection decision is stabi-
lized. That is, once the decision has been made, it is
maintained even if in the next moment in time the
input strength falls back below the critical level due
to sensory noise, for instance. The system remains in
the “on” attractor because the system is bistable. In
classical threshold thinking, by contrast, a decision
is a momentary event that is not stabilized per se. If
we were to use such threshold thinking in the con-
text of dynamical systems thinking we would run
into problems. This is because in dynamic thinking,
the decision variable activation is updated continu-
ously in time based on time-varying sensory inputs.
A  threshold mechanism would perform poorly
then: A “yes” decision would often be followed by a
switch to a “no” and perhaps a switch back to “yes”
as sensory inputs fluctuate. Thus, continuous time
decision-making in noisy sensory environments
really requires that decisions are stabilized. We will
return to this point in Chapters 2 and 4.
When input strength drops to sufficiently low
values, however, the detection decision is undone
as the bistable regime merges into the monostable
“off” regime. This is the first instability we dis-
cussed earlier that happens at lower levels of input,
on the right side of the graph in Figure 1.15. We refer
to that instability as the reverse detection instability,
which marks the point at which the loss of a detec-
tion is signaled. Again, this decision is stabilized. If
input strength rises again above this lower critical level (while remaining below the detection threshold), then the non-detection decision is maintained.
Together, these two instabilities form the
basis of a phenomenon called decision hyster-
esis: The critical level at which a detection is sig-
naled depends on the direction of change of input
strength. This is illustrated in Figure 1.17, which
traces the attractor state realized when stimulus
strength is increased or decreased. Empirically,
hysteresis is ubiquitous in perceptual psychophys-
ics. In fact, hysteresis has been known from the
earliest days of psychophysics as the dependence
FIGURE 1.17: The dynamics of a single activation variable with self-excitation, τu̇ = −u + h + s(t) + c·g(u), is simulated
when external input, s(t), is first increased, then decreased,
in both cases linearly over time. The resulting activation
level is shown as a solid line for increasing input and as a
dashed line for decreasing input (plotted against a reversed
time axis). The dependence of the realized activation
level on the direction of change of input is a signature of
hysteresis.
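A simulation sketch of this hysteresis (Python; the ramp and all parameter values are illustrative assumptions, not from the text) ramps the input up and then back down, letting activation settle at each level, and records where the thresholded output switches. The switch to “on” occurs at a markedly higher input level on the way up than the switch back to “off” on the way down:

import numpy as np

tau, h, c, beta = 0.1, -4.0, 4.0, 4.0            # illustrative parameters
dt = 0.001

def g(u):
    return 1.0 / (1.0 + np.exp(-beta * u))       # sigmoid threshold function

def sweep(s_values):
    """Ramp the input slowly and return the input level at which the output switches."""
    u, prev_on = h, None
    for s in s_values:
        for _ in range(1000):                    # let activation settle at this input level
            u = u + (dt / tau) * (-u + h + s + c * g(u))
        on = u > 0.0                             # crude read-out of the thresholded output
        if prev_on is not None and on != prev_on:
            return s
        prev_on = on
    return None

ramp = np.linspace(0.0, 4.0, 81)
print("detection on the way up at s ~", sweep(ramp))
print("reverse detection on the way down at s ~", sweep(ramp[::-1]))

The gap between the two printed critical levels is the bistable range; within it, the state that is realized depends on the history of the input, which is the signature of hysteresis.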

Other documents randomly have
different content

eens neemt de graaf, die zeker wilde toonen, hoe weinig bang hij voor het
groote water was, en wel te galant zal geweest zijn, om zijn adellijke
beminde den zeedoop te doen ondergaan, de gravin op en draagt haar in
zee. Door haar tegenspartelen bezeert zij zich aan zijn degen—en deze
wond, waarin het koud vuur kwam, kostte haar het leven.”
Aan het dorp gekomen, stapte men weer in het rijtuig.
„Die kerk,” zeide mevrouw Van Gent, „stond vroeger midden in het dorp.”
„Is zij dan verzet?” vroeg Frits.
„Wel neen, maar bij de verschillende hooge vloeden, door welke
Scheveningen geteisterd is, zijn al de huizen, die aan den zeekant stonden,
weggespoeld.”
„Vreeselijk!” riep Jacob uit.
Men reed nu den schoonen, met boomen beplanten weg langs, naar het plan
van Constantijn Huijgens aangelegd, en gedacht bij het voorbijrijden van
„Zorgvliet” aan onzen volksdichter Jacob Cats, die deze plaats heeft
aangelegd, wiens gedichten bij onze voorouders in huis- en pronkvertrek
een plaats hadden naast den Staten-Bijbel en van wiens zinrijke spreuken er
nog ten huidigen dage in den mond van het volk leven. Daarna reden zij het
schoone Willemspark met zijn prachtige villa’s door, bewonderden de
Alexanderstraat en de Mauritskade, en lieten zich brengen tot aan het oude
paleis in het Noordeinde, waar zij uit het rijtuig stapten, dat mevrouw Van
Gent naar huis zou brengen, nadat deze haar man wèl op ’t hart gedrukt had,
om toch tegen het koffiedrinken thuis te zijn, daar de jongens anders flauw
zouden vallen van den honger.
„Hier staan wij nu tusschen twee paleizen,” zeide mijnheer Van Gent, nadat
het rijtuig was weggereden. „Dat aan uw linkerhand is het oude huis van
Van Wassenaar Obdam en heeft zijn front op den Kneuterdijk.”
„Is dat van den admiraal Van Wassenaar Obdam, die in den tweeden
Engelschen oorlog in de lucht vloog?” vroeg Peter.

„Van denzelfden. Een graftombe is voor hem opgericht in het koor der
Groote Kerk.”
„Daar hij wel toch zelf niet ligt onder,” zeide Ben.
„Natuurlijk niet. Het paleis aan onze rechterhand is dat van onze
tegenwoordige koningin. Jammer, dat H. M. thans in Den Haag is; anders
zou ik het u laten zien. Het is prachtig en vorstelijk gemeubileerd, dat kan
ik u verzekeren. Dat ruiterstandbeeld is van Willem den Eersten, den
grondlegger onzer vrijheid. Het werd hier geplaatst door Koning Willem II
en munt uit door zijn schoone vormen en stoute conceptie.”
Toen zij het Heulstraatje doorgewandeld waren, bleef mijnheer Van Gent
staan.
„Ziet nu aan uw linkerhand, daar in den hoek staat het voormalig paleis van
Willem II; een paar huizen verder ziet gij het huis, waarin Oldenbarneveld
gewoond heeft; die kerk op den hoek van dat straatje is de Kloosterkerk,
waarin Prins Maurits ging om de voorkeur te doen zien, welke hij den
contra-remonstranten wilde betoonen, en verder op is een schoon
hardsteenen gebouw met breede trap, waarin eens een beruchte prefect van
het Departement der Zuiderzee, baron De Stassart, woonde en dat
tegenwoordig is ingericht tot koninklijke bibliotheek en bewaarplaats van
een aanzienlijke verzameling gouden, zilveren, bronzen en koperen munten.
Als gij langer bleeft, zou ik èn de bibliotheek èn het penningkabinet eens
met u bezoeken; nu echter gaan wij den Kneuterdijk op.”
„Is hier niet het huis van den Raadpensionaris Jan de Witt?” vroeg Peter.
„Ik meen ten minste te hebben gelezen, dat dit op den Kneuterdijk stond”1.
„En ik herinner mij, dat Gijsbert Karel van Hogendorp ook op den
Kneuterdijk gewoond heeft,” voegde Frits er bij.
„Dan moeten we ook kort bij de Gevangenpoort en ’t Groene Zoodje zijn,”
zeide Karel.

„Wacht maar, ik zal je alles wijzen. En misschien nog meer dan je wel
weet,” antwoordde de heer Van Gent, die er recht schik in had, dat de
jongens zooveel historische kennis en zooveel lust tot onderzoeken hadden.
„Hier aan onze rechterhand heb je het huis van Van Hogendorp, en
daarnaast is de woning van onzen onsterfelijken Jan de Witt, waarin ook
zijn zwager Van Swijndrecht woonde.”
Met aandoening beschouwden onze knapen het huis, waarin eens zulk een
groot man geleefd, gedacht en gewerkt had. Zoo ging men voort tot op de
Plaats.
„Hier bij dezen lantaarnpaal,” vervolgde mijnheer Van Gent, „is ’t Groene
Zoodje. Hier stond het schavot, waarop Reinier van Groeneveld, Buat en
Van der Graaff zijn onthoofd en de gebroeders de Witt zijn opgehangen en
mishandeld. En daar, die groote keisteen met zeven strepen is er ter
gedachtenis gelegd van den vreeselijken moord, aan Aleida van Poelgeest
gepleegd, omdat zij graaf Albrecht tot de partij der Kabeljauwen had
overgehaald.”
„Maar kijkt nu eens recht uit,” vervolgde hij na een poos. „Deze poort is de
Gevangenpoort, vroeger Voorpoort van den Hove, en dit venster dat van
den kerker van Cornelis de Witt.”
„Zouden wij dien niet kunnen zien?” vroeg Peter.
„We zullen ’t vragen. Zeggen ze neen, dan zijn we nog even ver.”
Men ging de Gevangenpoort door en schelde aan. Het verzoek, om de
vroegere gevangenis te zien, werd volgaarne ingewilligd. Met aandoening
klommen zij de trap op, welke de gebroeders De Witt door het opgeruide
gemeen waren afgesleept; met niet minder aandoening aanschouwden zij de
kamer, waar beiden de laatste en vreeselijkste oogenblikken huns levens
doorbrachten. En toen zij daarna in den kelder afdaalden en hun de pijnbank
gewezen werd, op welke de Ruwaard van Putten werd gepijnigd, toen stond
er in het oog van Lodewijk een traan, die hem waarlijk niet tot schande was.

Nadat mijnheer Van Gent de vriendelijke dienstmaagd, die hun een en ander
had laten zien, met een ruime fooi beloond had, wandelde men naar het
Buitenhof.
„Kijk nu eens recht voor u,” zeide mijnheer Van Gent. „Uit deze ramen
hield eens de snoode Tichelaar zijn redevoering tot het volk. En nu linksom.
Dit standbeeld is dat van den ridderlijksten onzer vorsten, van den edelen
Koning Willem II, die bij Quatre-Bras voor onze onafhankelijkheid streed
en bij Waterloo zijn bloed voor ons veil had.”
„’t Staat daar al heel mooi,” zeide Frits. „En hoe sierlijk zijn die beelden
aan den voet!”
„Dat zijn ze,” hernam mijnheer Van Gent. „En nu slaan we linksom en gaan
naar het Binnenhof, het oudste gedeelte van Den Haag en dat door drie
poorten kan worden gesloten. Vroeger hingen hier de vaandels, in
verschillende veldslagen op de vijanden des lands behaald. Doch die zijn
tijdens Koning Lodewijk weggenomen en naar Amsterdam gezonden. Die,
welke wij nu doorgaan en boven welke de appartementen der vroegere
Prinsen van Oranje zich uitstrekten, heet de Stadhouderspoort.”
„Die is, in het laatst der achttiende eeuw, tegen alle bepalingen aan
doorgereden door Cornelis de Gijzelaar,” zeide Peter. „En daar vandaan
hebben de tegenstanders van het Huis van Oranje in dien tijd den naam van
Keezen gekregen.”
„Juist. En hier vlak over ons hebben wij het oudste gebouw van Den Haag:
de Loterijzaal of liever de groote ridderzaal, door Willem II, graaf van
Holland en Zeeland, in 1270 gesticht, en den oorsprong van Den Haag.”
Zij bezichtigden nu de groote ridderzaal, toen nog niet herbouwd of liever
gedeconstrueerd; verder de vergaderzaal van de Eerste Kamer der Staten-
Generaal, waar het Twaalfjarig Bestand werd gesloten en die daarom
vroeger den naam van Trèves-kamer droeg. Zij is vooral bezienswaardig om
haar schoone schilderijen, voornamelijk het schoorsteenstuk, hetwelk Prins
Willem III ten voeten uit in koninklijk gewaad voorstelt;—en het gebouw,
dat tot vergaderplaats dient van de Tweede Kamer, vroeger gebruikt tot

danszaal voor de Prinsen van Oranje, maar onder Prins Willem V van
hardsteen herbouwd en tot vergaderzaal voor de Staten-Generaal ingericht.
Hier werd in 1796 de eerste Nationale Vergadering gehouden.
„Dat torentje aan de linkerzijde der groote ridderzaal,” vervolgde mijnheer
Van Gent, toen zij uit de troonzaal kwamen, „is ook nog merkwaardig. Hier
stond het schavot, waarop de grijze Oldenbarneveld het leven verloor onder
beulshanden. En daar vlak over ons is een merkwaardige kapel, de oudste
kerk van Den Haag, tegenwoordig in gebruik bij de Roomsen-Katholieken,
onder den naam van Hofkerk. Zij heette vroeger de „kapel van Maria ten
Hove” en is waarschijnlijk door Graaf Willem II gebouwd en door diens
zoon Floris V voltrokken. In deze kapel woonden de vroegere graven van
Holland en Zeeland de godsdienstoefeningen bij. Na de Hervorming werd
zij tot een Gereformeerde kerk ingericht, waar, op last der Staten van
Holland, in de landtaal en, sedert 1592 vooral ten genoegen van Louise de
Coligny, ook in het Fransch werd gepredikt. Toen men bij het verbouwen in
1769, de fondamenten van den muur aan de zijde van het Binnenhof
opbrak, vond men daar verscheidene houten en looden grafkisten, waarin
zich het gebeente der Oudhollandsche graven bevond.”
„En heeft men die beenderen bewaard?” vroeg Frits.
„Men kon ’t niet. Zoodra zij met de buitenlucht in aanraking kwamen,
vielen zij in elkander. Een der lijken echter, dat in een looden kist lag, was
door een sterk vocht vrij wel bewaard. Uit de wonden, welke het aan den
hals en in het gezicht had, veronderstelde men, dat dit het lijk van Willem
IV moet zijn geweest, die in 1345 in den slag bij Warns tegen de Friezen is
gesneuveld. In de kist van Jacoba van Beieren was het hoofdhaar nog
ongeschonden bewaard; men heeft dat naar het museum gebracht en daar
zullen wij het straks zien.”
Daar het te koud was om lang stil te staan, waren zij tot genoemd huis
doorgewandeld en beschouwden hier eerst het museum van Japansche,
Chineesche en andere curiositeiten en eindelijk, in de laatste zaal, de
historische overblijfselen. ’t Meest werd de aandacht onzer knapen geboeid
door het gewaad, dat Prins Willem I had aangehad, toen hij te Delft door

Balthazar Gerards vermoord werd. Duidelijk kon men de plaats zien, waar
de kogel was doorgegaan. Daar lag ook het hemd van den grooten man, nog
gekleurd van het edel bloed, dat hij voor ons land veil had gehad, de
uitgesneden kogel met een paar beentjes, die door het vuurwapen
verbrijzeld waren, de pistolen van den moordenaar met zijn sententie,
zooals die binnen Delft is uitgevoerd. Verder zagen zij er geuzennappen,
geuzenpenningen, zilveren schotels, aan onze grootste zeehelden ten
geschenke gegeven, groote haakbussen, oude pieken; ook uit later tijd, den
stoel, waarop Chassé in de citadel heeft gezeten, en een geweer, afkomstig
van het in de lucht gesprongen schip van Van Speyk.
Maar wat vooral Ben het meest belang inboezemde, was het
Oudhollandsche huis in schildpadden kast, eens voor Czaar Peter van
Rusland vervaardigd, en dat zulk een duidelijke voorstelling bevat van het
ameublement onzer voorouders.
Daarna begaf men zich de trap op naar het schoone museum van
schilderijen door oude meesters. Als de tijd niet gedrongen had, zouden de
knapen gaarne langer hebben vertoefd voor de ontleedkundige les van
Rembrandt, voor den stier van Potter, den veldslag van Wouwerman, en zoo
menig stuk dat niet alleen groote kunstwaarde bezit, maar ook zelfs den
oppervlakkigen beschouwer door zijn meesterlijk navolgen van de natuur
boeit.
„Wij moeten naar huis, jongens,” zeide mijnheer Van Gent, „anders krijgen
wij knorren van mijn vrouw en—wat erger is—koude koffie.”
„Is dit huis gebouwd door Prins Maurits, die vocht in het slag at
Newpoort?” vroeg Ben, toen zij de trappen van het museum afgingen.
„Neen, Ben. Het is gesticht door Johan Maurits van Nassau, den held van
Brazilië, en gebouwd door den beroemden Jacob van Kampen, den
bouwmeester van het paleis van Amsterdam, en Daniël Stalpert. Maar zie
nu eens hier. Dit is het standbeeld van Willem den Zwijger, denzelfden,
wiens ruiterstandbeeld gij in het Noordeinde hebt gezien.”
„En wiens kleeren op het Prins-Maurits-huis waren,” zeide Jacob.

Toen zij bij mevrouw Van Gent kwamen, zat deze hen reeds met de koffie te
wachten, of liever, ter eere van Benjamin en ten genoegen van den eetlust
der vijf andere jongens, met een soort van luncheon of Engelsch
ochtenddiner. Onder het vertellen van wat men gezien had, werden verdere
plannen voor dien dag besproken. De jongens, zeide mijnheer Van Gent,
moesten hun schaatsen medenemen, dan zou men, na eerst de kanongieterij
te hebben bezien, een wandeling door het Bosch doen en vervolgens, te
midden van de beaumonde van Den Haag, op de vijvers schaatsen rijden.
Daarna zou men een bezoek brengen aan het Huis ten Bosch en vervolgens
naar huis rijden om te dineeren, terwijl mevrouw Van Gent als voorwaarde
stelde, dat zij het verdere van den avond zouden uitrusten en in den
huiselijken kring slijten, als wanneer zij ze op een glas warmen wijn met
bisschop en wentelteefjes zou trakteeren.
„En dan gaan jelui morgen per spoortrein naar Amsterdam terug,” eindigde
zij.
„Per spoortrein, Marie?” vroeg Peter. „Dan zal men ons in Broek
uitlachen.”
„Laat men lachen,” zeide Jacob, die alweer meer trek had om rust te nemen,
dan zich in te spannen. „Ik vind het voorstel van mevrouw Van Gent
lumineus.”
„’t Zou een schandelijk eind van onzen tocht zijn,” zeide Frits.
„Mevrouw Van Gent is in ’t gelijk,” zeide Ben. „Wij moeten gaan per
railway. Otherwise wij zullen niet zijn in staat om overmorgen te rijden.”
„’t Best is, dat wij er ons op beslapen,” hernam Lodewijk.
„Ik ben verzekerd, dat mijn voorstel wel zal worden aangenomen, als de
jongens van middag gedineerd hebben,” hernam mevrouw Van Gent, die
berekende, dat ons zestal na den tocht naar het Bosch wel van idee
veranderen zou.

„Als mevrouw Van Gent het mij veroorlooft, dan zou ik haar gaarne
gezelschap houden, in plaats van mede naar het Bosch te gaan,” zeide
Jacob, die tamelijk vermoeid was van de morgenwandeling.
„Geneer je niet, Jacob,” antwoordde mevrouw Van Gent, die zeer goed
begreep wat de reden van Jacobs wellevendheid was. „Ik mag je niet van je
fortuin afhouden. Ga gerust mee. Kanongieten heb je nog nooit gezien en
een partijtje op de vijvers in het Bosch is ook niet te verwerpen.”
„Maar dan zit Mevrouw den geheelen namiddag alleen,” hervatte Jacob.
„Inderdaad, geneer je niet,” hernam mevrouw Van Gent. „Ik ben wel
gewoon aan de eenzaamheid. Mijn man is een groot deel van den dag uit.”
Jacob zat er geducht in, toen zijn gewaande beleefdheid zoo werd
gerefuseerd. Gelukkig dat Ben hem uit den nood hielp. „Mijn neef is zoo
vermoeid,” zeide hij. „En daarom hij wenscht te profiteeren van het
gezelschap van Mevrouw, because het hem behaagt veel.”
„Je slaat den spijker op den kop,” hervatte Jacob, die nu maar ruiterlijk voor
de waarheid uitkwam.
„’t Mocht je anders weer eens zoo gaan als eergisteren,” zeide Karel. „En
dan zou je een mal figuur maken op de vijvers in het Bosch.”
Toen de knapen het luncheon gebruikt hadden en wat uitgerust waren,
gingen zij, behalve Jacob die thuis bleef, met hun schaatsen in de hand naar
de kanongieterij, waar mijnheer Van Gent toegang had gekregen en waar
men juist aan het gieten was. Daarna wandelden zij het Bosch in, dat,
ofschoon van zijn groen beroofd en dus vrij wat minder schoon dan in den
zomer, er toch statig genoeg uitzag, om hun bewondering te wekken.
Nadat zij langen tijd in dat heerlijke gedenkstuk van den ouden tijd
gewandeld hadden, welks westelijk gedeelte nog eenig denkbeeld geeft, hoe
’t er in den tijd van de Batavieren en Kaninefaten uitzag, bonden zij de
schaatsen aan en vermaakten zich te midden van een talrijk en uitgezocht
publiek van schaatsenrijders, waarbij zij hun oogen uitkeken naar de bonte

1
rij van wandelaars uit de eerste standen des lands, die zich langs de vijvers
bewogen.
Daarop bezichtigden zij het Huis ten Bosch, door Amalia van Solms, de
weduwe van Prins Frederik Hendrik, ter eere van haar gemaal gesticht: een
mausoleum uit den nieuweren tijd. Vooral de Oranjezaal boeide hen lang.
Het is een achthoekige zaal met een rond koepeltje in het dak, hetwelk haar
een eigenaardig licht schenkt. Terstond bij het binnentreden wordt men
getroffen door de heerlijke voorstelling van Frederik Hendrik op zijn
triomfwagen met vier witte paarden door Pallas en Mercurius gemend;
terwijl de overwinning zijn hoofd met een lauwerkrans kroont en de faam
de pijlen afweert, waarmede de dood den held bedreigt. Niet minder trof
hun het beeld van den grijzen tijd en de afbeelding van de stichtster zelf met
haar dochters, levensgroot en ten voeten uit. Al had men geen andere
overblijfselen der Oudnederlandsche schilderschool dan die heerlijke
schilderijen uit de Oranjezaal, dan nog zouden deze genoegzaam zijn om
den naam onzer oude kunstenaars te vereeuwigen.
Maar ’t wordt tijd, dat wij met de jongens naar huis gaan. Ik laat aan de
verbeelding mijner lezeressen en lezers over, hoe het diner hun smaakte,
hoe genoeglijk zij den avond bij de familie Van Gent doorbrachten, hoe zij
naar Amsterdam spoorden en hoe zij toch op schaatsen van de hoofdstad
naar Broek reden. Ook wij keeren derwaarts terug.
Zie mijn „Huisgezin van den Raadpensionaris.” ↑

TIENDE HOOFDSTUK.

De gevaarlijke oéeratie.
’t Wordt tijd, dat we weder eens een kijkje nemen in de hut van Rolf
Brinker, van wien we ’t laatst hebben gehoord, toen Hans op weg naar
Leiden was.
Wij vinden er dokter Broekman, die, toen hij het briefje van Peter
ontvangen had, nog denzelfden dag naar Amsterdam was vertrokken om
hulp toe te brengen, waar hij die zoo hartelijk beloofd had. Wij zien hem in
een hoek van het vertrek zacht spreken met een jongmensch, student in de
medicijnen en zijn assistent. Hans is ook in het vertrek, eerbiedig
wachtende, totdat hij zou worden aangesproken. Van hun gesprek verstond
hij niets, daar het eensdeels fluisterend werd gevoerd, anderdeels zoo met
Latijnsche woorden doorspekt was, dat het toch voor hem geheel
onverstaanbaar zou zijn geweest, al hadden zij ook luide gesproken. Maar
zooveel begreep hij wel aan hun ernstig gelaat, dat er van iets zeer
gewichtigs sprake was, en daarin werd hij versterkt door de woorden van
den student:
„Indien er iemand in Holland den armen man kan redden, dokter, dan zijt
gij het.”
De dokter keek min of meer knorrig over dien lof; want hij wist maar al te
wel en had het in zijn langdurige praktijk slechts al te dikwijls
ondervonden, dat de kunst wel kan te gemoet komen, wel kan helpen, maar
dat er slechts één is die kan redden, en dat is God. Hij wenkte dus Hans, om
nader te komen.
„Hoor eens, mannetje,” zeide hij op denzelfden vriendelijken toon, als hij
vroeger te Buiksloot tegen hem had aangeslagen. „Daar is maar één middel
om je vader te helpen, maar ik moet je vooraf zeggen, dat hij onder de
handen kan dood blijven; ’t is een operatie.”
Hans keek den dokter angstig aan.

„En u zegt, dat vader onder uw handen kan sterven,” zeide hij met
sidderende stem.
„Ja, ’t is er op of er onder. Maar ik heb alle hoop, dat de operatie zal
gelukken. Intusschen, jij en je moeder moeten decideeren. De operatie is te
gevaarlijk, dan dat ik die zonder uw beslissing zou willen ondernemen.
Vraag jij dus aan je moeder, hoe zij er over denkt. Want er moet spoedig een
besluit genomen worden, daar mijn tijd kostbaar is.”
Hans ging naar zijn moeder, vertelde haar wat de dokter hem gezegd had,
en eindigde:
„En nu moeder, hoe wilt gij? De dokter wacht op antwoord.”
„Ach, Hans, ik weet het niet,” zeide zij met bewogen stem. „Beslis jij voor
mij en voor jou.”
„Maar moeder, hoe kan ik dat?”
„Ach kind! Wat zal ik antwoorden? Je zegt, dat vader onder de handen kan
dood blijven.”
„Dat kan hij. Maar hij kan ook beter worden.”
„Ik weet het niet, Hans, ik weet het waarlijk niet.”
„Welnu, antwoord dan, zooals God u dat in ’t hart geeft, moeder.”
Vrouw Brinker sloeg het betraande oog naar boven, als vroeg zij God om
raad. Uit het binnenste van haar ziel steeg een gebed naar den troon des
Almachtigen. Een oogenblik later wendde zij zich tot den dokter.
„Gods wil geschiede, mijnheer!” zeide zij. „Ga uw gang!”
Met kalme bedaardheid deed nu dokter Broekman een lederen étui open,
waaruit hij verschillende scherpe, blinkende instrumenten haalde, terwijl hij
Hans beval, een kom met frisch koud water en eenige doeken te brengen.

Griete had al wat er gebeurde met angstig stilzwijgen gadegeslagen. Toen
zij nu den dokter die scherpe instrumenten voor den dag zag halen, vloog
zij naar haar moeder toe, sloeg haar armen om den hals der reeds zoo
geschokte vrouw en riep uit:
„Ach moeder! ze zullen vader gaan vermoorden—dat zullen ze.”
„Ik weet het niet, kind!” schreide vrouw Brinker. „’t Is wel mogelijk!”
„Hoor eens, vrouwtje,” zeide dokter Broekman ernstig, terwijl hij tevens
een doordringenden blik op Hans wierp, „dat kan zoo niet gaan. Jij en het
meisje moeten het huis uit. De jongen kan blijven.”
’t Was, of er in vrouw Brinker eensklaps een andere geest voer. Zij droogde
haar tranen, hief het hoofd fier op en zeide op vasten toon:
„Neen, mijnheer, ik verlaat mijn man niet. Ik blijf bij hem in de ure des
gevaars.”
Dokter Broekman keek vreemd op. Hij was niet gewoon, dat zijn bevelen in
den wind werden geslagen. Maar toen hij de vrouw aanzag en haar vasten,
beslissenden blik opmerkte, toen zeide hij kalm:
„Je kunt blijven, vrouw Brinker.”
Griete was al verdwenen. Verborgen achter een kist, die in een donkeren
hoek van het vertrek stond, bevend over al haar leden, bespiedde zij al wat
er in de hut voorviel.
Dokter Broekman en zijn assistent trokken hun overjassen uit. Hans bracht
een kom vol water, welke hij op bevel van den geneesheer naast het bed
plaatste, en vrouw Brinker kreeg een paar beddelakens uit de kast,
overblijfsels van vroegere tijden en braaf versleten, doch voor het gebruik,
dat er van gemaakt moest worden, des te beter geschikt.
„Nu Hans, kan ik op je rekenen?” vroeg de dokter.

„Dat kunt u, dokter.”
„Zeer goed. Ga jij nu daar staan, dan kan je moeder naast je zitten.”
„Hoor eens, vrouwtje,” ging hij tot vrouw Brinker voort. „Ik moet je
verzoeken, geen kik te geven en niet flauw te vallen.”
Vrouw Brinker antwoordde hem slechts met een blik. Hij was tevreden.
Hij wenkte den student. Deze nam de vreeselijke instrumenten van de tafel
af en ging er mede naar het bed van den zieke.
Nu kon Griete ’t niet langer uithouden. Zij kwam uit haar schuilplaats te
voorschijn en snelde de hut uit.
’t Was vol
op het ijs
van de
vaart.
Waarom
ook
zouden de
kinderen
van Broek
hun
vacantietij
d laten
voorbijgaa
n, zonder
ruimschoot
s hun
geliefd
winterver
maak te
genieten?
Daar
waren er een aantal, al waren onze zes jongens er ook niet bij, en onder

deze ook Frans van Bree, de dappere aap zonder staart, zooals Jacob hem
genoemd had.
„Wat is dat daar ginds?” riep Frans eensklaps uit, terwijl hij stilstond.
„Wat? Waar? Wat bedoel je?” riepen een dozijn stemmen te gelijk.
„Wel, dat zwarte ding daar bij de hut van den gekken Rolf,” hernam Frans.
„Ik zie niets,” zeide een der kinderen.
„Ik wel,” antwoordde een andere, „’t Lijkt wel een hond.”
„Ben je mal? Een hond? ’t Is niets dan een hoop oude lorren,” hernam
Frans.
„Een hoop oude lorren?” herhaalde een ander.
„Je hebt warempel gelijk, Frans, en als ik mij niet bedrieg, is ’t die meid uit
de hut.”
„Ze is het,” bevestigde Frans. „Heb ik dus geen gelijk gehad, dat het maar
een hoop oude lorren is?”
„’t Is goed, dat haar broer Hans er niet bij is,” meende een ander lachende,
„anders zou je zoo niet spreken, Fransje.”
„’k Ben nog al bang voor hem!” riep Frans dapper uit, daar hij Hans in geen
velden of wegen zag. „Zoo’n voddenraper! Hij moest me eens durven
aanraken, ’k Ben nog niet bang voor een dozijn zooals hij en voor jou ook
niet.”
„’k Hou je aan je woord!” riep de andere en reed op Frans toe; maar deze,
die zich in zijn bluf wat vergaloppeerd had, koos, tot groot pleizier van de
anderen, het hazenpad gevolgd door den vroolijken troep, die de
harddraverij wel eens wilde zien.

Eén echter van deze gelukkige kinderen dacht aan die zwarte kleine
gedaante daar bij de hut van Rolf Brinker, aan de arme, kleine Griete. De
arme Griete! Zij dacht niet aan hen, ofschoon hun vroolijk gelach haar in de
ooren drong en haar door de ziel moest snijden: ach! zij hoorde die
schaterende tonen slechts als in een droom. Zij hoorde slechts het gekerm
daar achter het donkere venster. Hoe! als die vreemde mannen haar vader
eens doodden!
Die gedachte deed haar van afgrijzen opstaan.
„Neen, neen,” riep zij snikkend uit. „Moeder is er immers, en Hans ook. Zij
zullen er wel op passen! Maar wat zagen ze allebei verschrikkelijk bleek!
Zelfs Hans stonden de tranen in de oogen.”
Een oogenblik later vervolgde zij, terwijl zij schuw naar de hut keek:
„Waarom heeft die oude, knorrige dokter hem laten blijven en mij
weggezonden! Ik zou moeder hebben kunnen kussen en haar troosten. Zij
houdt zooveel van mij, meer.… Maar wat is ’t nu stil in huis. O, als vader
sterft, dan gaat moeder ook dood en Hans misschien ook, en wat moet er
dan van mij worden!”
En zij verborg haar schreiend gelaat in haar handjes.
Toen kwamen er nieuwe gedachten in haar op.
Waarom had Hans ’t alleen aan moeder gezegd, wat de dokter ging doen en
niet aan haar? ’t Was toch haar vader, net zoo goed als de zijne. Zij was
geen klein kind meer. Zij had haar vader eens een scherp mes afgenomen,
waarmee hij zich een ongeluk zou hebben toegebracht, als zij het niet belet
had. En op dien akeligen avond, toen Hans, zoo groot als hij was, daar
bewusteloos in een hoek van het vertrek lag, toen had zij vader van het vuur
gelokt en ’t was door haar toedoen, dat moeder niet in brand gevlogen
was. Waarom moest zij nu behandeld worden, alsof zij er niet bij hoorde?—
Ach! wat was het koud! hoe bitter koud! Haar voeten waren als steenen!

Toen ging Griete weer zitten op de plaats, van welke zij was opgestaan, en
keek rondom zich en verwonderde zich, dat de lucht zoo helder blauw was
en dat het zoo stil in de hut bleef en.…
„Wat heeft die dokter een rare lip!” zeide zij eensklaps. „’t Lijkt net een
schaats! En wat blonken die messen, welke hij uit dien leeren zak haalde.
Misschien nog mooier dan de zilveren schaatsen. Had ik mijn nieuw
jacketje maar aangedaan, dan zou ik ’t zoo koud niet hebben! Dat nieuwe
jacketje is zoo mooi, ’k heb nog nooit zoo iets moois gehad!—God heeft
zoo lang voor mijn vader gezorgd; Hij zal ’t nu ook wel doen, als die twee
mannen maar weg waren.—Kijk, daar staan ze allebei op het dak van ons
huis!—Neen, ’t zijn moeder en Hans. O, neen! ’t zijn maar een paar
vogels.”
And again Griete held both her little hands before her eyes and wept so
loudly that they might well have heard it inside the hut.
Suddenly she felt a strange hand laid upon her shoulder.
„Get up, Griete,” said an unfamiliar voice. „Get up, child! Otherwise you may
yet freeze.”
Griete looked up, startled. It was the kind Hilda de Bruyn.
„Get up, Griete, and go into the hut,” the dear girl continued. „What a
notion, to sit outside on the stones!”
„Oh no, miss,” said the child, rising and leaning against Hilda, „I am not
going into the hut, for the doctor is there and he has sent me away.”
„Well then, you must walk about a little, Griete, for you are numb with cold.
I saw you sitting there a while ago, but I thought you were playing. Why
didn’t you put on your jacket?”
„There was no time for that, miss; I ran out of the hut as fast as I could.”
„Come here, put on my jacket until you can go back into the hut,” Hilda went
on, already making attempts to strip off her own winter garment. „If the
doctor knew how cold you are out here, he would surely let you back in.”
„Oh, miss,” Griete cried imploringly, „please do not take off your jacket. I
am cold, it is true, but I shall grow warm again if I only move about a
little.”
„Very well, Griete. Fold your arms, then. But tell me, is there a doctor in
your house? Is your father worse, then?”
„Ah, miss, I believe he is dying!” Griete cried, weeping. „There are two
doctors with him at this moment, and they are going to murder him. Can’t you
hear him moaning from here? I cannot hear it for the whistling of the wind.”
Hilda listened, but heard nothing.
„Let us look through the window and see how your father is,” she resumed.
So saying, she went with Griete to the window of the hut. But suddenly she
checked herself.
„I must not look through another’s window,” she said to herself. „You look
through it, Griete,” she continued, „and tell me what you see.”
Griete stood on tiptoe and looked.
„Child, you are ill yourself,” said Hilda, who was supporting her and felt
how the poor girl trembled in every limb.
„No, miss, I am not ill,” Griete answered, „but my heart weeps, though my
eyes are as dry as yours. Why, miss, you are weeping too! Are you weeping for
us? Oh, that is good, and when our dear Lord sees it, He will surely make
father well.”
„What do you see, Griete?” asked Hilda. „Or can you see nothing?”
„Father lies quite still, miss, with a cloth about his head, and they are all
looking at him. I must go inside, to mother. Will you come too, miss?”
„Not now, but later I shall come and hear how your father is.”
And Griete did not hear the last words, for she ran quickly round the corner
and stepped, as softly as she could, into the hut.
In the room all was still. It seemed as if she could hear the old doctor
breathing, nay, as if she heard the ashes falling on the hearthplate. Her
mother’s hand was icy cold, but her cheeks glowed and her eyes shone glassily
bright.

At last there was a movement on the bed, very slight indeed, but enough to
make them all turn their eyes that way; Doctor Broekman bent forward
attentively. Brinker drew his great hand, so pale and so feeble for that of
such a sturdy man, from under the cover and felt with it for his forehead. He
seemed to feel the bandage there, yet not in that restless, unconscious way,
but as though consciously examining what had been bound about his head. Even
Doctor Broekman held his breath. Then the patient slowly opened his eyes.

„Quick there, lads,” he said in a voice that sounded very strange in Griete’s
ears. „Raise those cribs higher and throw earth on them. The water is rising
so fast; there is no time…”
Dame Brinker flew to the bed, seized both her husband’s hands and said:
„Rolf, Rolf, old man! Speak to me!”
„Is that you, Mietje?” he asked in a feeble voice. „I have slept so long, and
I even believe I have hurt myself. Where is little Hans?”
„Here I am, father!” cried Hans, half wild with joy. But the doctor held him
back.
„He knows us!” cried Dame Brinker. „He knows us! Griete, Griete, come to your
father!”
In vain Doctor Broekman commanded silence and tried to keep them from the bed
by force. He could do nothing against it. Hans and his mother laughed and
wept at once. Griete made no sound, but stood gazing at them with glad yet
frightened eyes. Her father asked in a weak voice:
„Is the little one asleep, Mietje?”
„The little one!” repeated Dame Brinker. „Oh Griete, that is you! And he
calls our Hans „little Hans”! Ten years he has slept! Oh sir, you have saved
us all! Of all those ten years he knows nothing. Children, do thank the good
doctor.”
The poor woman was beside herself with joy. Doctor Broekman said nothing, but
when he fixed his moist eyes upon hers, he turned them upward. She understood
what he meant. Hans and Griete understood it too. As if by agreement, all
three knelt down in the hut, though Dame Brinker never let go her husband’s
hand. Doctor Broekman stood by them and reverently bowed his head.

„Why are you praying?” murmured the father. „Is it Sunday today?”
Dame Brinker nodded, but could not speak.
„Then read a chapter from the Bible,” Rolf went on, speaking slowly and with
effort. „I do not know what is the matter with me. I am so weak. Perhaps the
minister will read it to us.”
Griete fetched the heavy Bible from the carved shelf. Doctor Broekman,
smiling wryly at Rolf’s taking him for a minister, handed the book to his
assistant.
„Read,” he muttered. „These people must be quieted, or the man may yet die.”
When the chapter was ended, Dame Brinker signed mysteriously that all must be
still, for her husband was asleep.
„Listen, good woman,” said the doctor in a subdued voice as he put on his
overcoat. „There must be the utmost quiet here, do you understand? Tomorrow I
shall come back. Give the patient no food today.” And without another word he
left the hut, crossed the frozen canal, and went to the carriage, which, all
the while the doctor had been in the hut, had driven slowly up and down the
road to keep the horses moving.
Hans, too, went out of the door.
„May God bless you, sir,” he said, blushing, in a voice that trembled with
emotion. „I can never repay you. But if…”
„Yes, you can,” the doctor answered rather gruffly. „You can use your wits
when the patient wakes again. If you want your father to get well, you must
all of you keep quiet.”
Hilda had stayed by the hut until she heard Hans say, „Here I am, father!”
Then she had gone away, murmuring to herself: „Oh, how glad I am! How glad I
am!”

It was not long before the news that the mad Brinker had come to his senses
again had spread, with the needful embellishments, through the whole of
Broek. That very evening it was told that Doctor Broekman had given him a
great quantity of medicine, and that six men had been needed to hold the
patient down while the doctor poured it down his throat. Immediately
afterwards the madman had sprung from his bed and, in full possession of his
faculties, had thrown himself upon the doctor and given him a thrashing; then
he had sat down and addressed all the bystanders as though he were a lawyer.
After that he had turned round and talked very kindly with his wife and
children. Dame Brinker had thereupon fallen into a fit of nerves, and Hans
had said: „Here I am, father!” And Griete had said: „Here I am, father!” And
the doctor, as pale as a corpse, had crept into his carriage and driven back
to Amsterdam.

ELEVENTH CHAPTER.

The hidden treasure.
When Doctor Broekman came to Rolf Brinker’s hut the next day, it was as if an
atmosphere of happiness breathed toward him. Dame Brinker sat with a
contented face, knitting by the bed of her husband, who slept peacefully,
while Griete in a corner of the room was busy kneading rye bread.
The doctor did not stay long. He asked a few questions, seemed satisfied with
the answers, felt his patient’s pulse and said:
„Your husband is dreadfully weak, Dame Brinker! He must have something
strengthening. You may safely begin with it, but not too much at once, and
whatever you give him must be of the most nourishing and the best.”
„We have nothing but rye bread and potatoes, sir,” answered Dame Brinker,
„and those have always agreed with him very well.”
„Ho, ho, good woman,” the doctor rejoined, knitting his brows. „That will not
do for him at all. He must have strong broth, with stale white bread that you
can toast, then good Malaga, and… the man suffers from cold… you must give
him better covering. Where is your son?”
„He has gone into the village to see whether he can find work, sir! But he
will soon be back. Won’t you sit down?”
Whether the hard chair that Dame Brinker offered him did not look
particularly inviting, or whether the doctor was in a hurry, he did not
accept her offer, but put on his hat and departed.
Doctor Broekman’s visit this time had left no pleasant impression. Griete
kneaded her rye bread with nervous haste, and Dame Brinker went to her
husband’s bed and burst into bitter tears.

At this moment Hans came in.
„What is the matter, mother?” he asked when he saw her weeping. „Is father
worse?”
She turned her face toward Hans, and without any attempt to hide from him the
cause of her grief, she answered:
„Yes, Hans, your poor father is suffering hunger and cold; the doctor has
said so.”
Hans turned pale.
„Then you must give him something to eat, mother, and cover him more
warmly,” he said.
„Food? Rye bread and potatoes? We should murder him with that. Our food is
too heavy for him. Ah, your poor father will die if we give him that. He must
have meat, wine, and a soft, warm bed. What are we to do? What shall we do?
There is not a stiver in the house!”
„Did the doctor say, then, that he must have all those things?” asked Hans.
„Yes, that is what he said.”
„Well then, mother, dry your tears. He shall have them. Before evening I will
bring him meat and wine. And as for covering, take it from my bed. I am young
and strong and can sleep well enough without!”
„And then, to crown our misfortunes, Hans, we have no more fuel. Your father
has gone through it rather freely when I was not watching.”
„Do not trouble about that, mother. If the worst comes to the worst, I can
cut down the willow tree; but this evening I shall bring some fuel home. Even
if there is no work to be found in Broek, there must be some in Amsterdam. Do
not be uneasy. The worst of all is over. Now that father has his reason
again, we can face anything.”
„You are right, Hans,” Dame Brinker answered, drying her tears.
„Look at him now, mother, how peacefully he sleeps. Would God let him perish
for want of food, now that He has given him back to us? So do not grieve.”
So saying, Hans took his skates under his arm, gave his mother a kiss, and
went out of the door. The poor boy, disheartened by the failure of all his
attempts to find work in Broek and in bitter sorrow over his mother’s
tidings, kept up a brave front, braver than he felt at heart, and even tried
to whistle as he made for the canal.
Never had want pressed so hard upon the Brinkers. Their store of firewood was
as good as gone, and Griete was kneading the last of the meal into bread. And
money was the kind of thing of which there had always been little in the hut
by the canal, and now none at all.
Hans was sorry he had not asked the coachman to stop when the doctor drove
past him just now. Perhaps mother had misunderstood. Surely the doctor could
see that it was beyond their power to provide father with broth and wine. And
yet he was quite certain the poor man needed it, for he looked so weak.
„If I could only find work, perhaps they would give me something in advance.
I must have work. If only mijnheer Van den Helm had not gone off to Rotterdam
just now, he would surely find me some. But young master Peter told me to go
to his mother if I needed help. I know what, I will do it: if it does no
good, it can do no harm. Oh, if only it were summer!”
During this soliloquy Hans had bound on his skates and was skating toward the
house of mijnheer Van den Helm.
„Father must have wine and meat,” he murmured. „But how am I to get the money
today to buy them for him? There is no other way than to fulfil the promise I
silently made to young master Peter. A little wine and meat means nothing to
the Van den Helm family. Once father has food, I will skate to Amsterdam and
try to earn something there.”
Then other thoughts came to him, thoughts that made his heart beat faster and
drove the flush of shame to his cheeks.
„That would be begging,” he said. „Begging, to put it mildly. Never has one
of the Brinkers begged! Shall I be the first, then? And would my poor father
have waked from his ten years’ death-sleep only to hear that his family had
asked for alms, he who was always so proud in his work? No, it would be a
thousand times better to sell the watch.”
He stood still, pondering.
„Sell it?” he went on, as though answering himself. „Why, there is no need. I
can pawn it in Amsterdam. That is no disgrace, surely. And when I find work,
I can redeem it again, and father will be helped.”
That last thought made him leap for joy.
„Nor need I do it in secret, either,” he continued as he skated homeward.
„No, not at all. I can even ask father about it. He is in his right mind
again now. Perhaps he is already awake. Then he can tell us what that watch
means. Perhaps he will say it is of not the least importance.”
He skated back faster than he had come, and met his mother at the very door.
„Oh, Hans!” she cried, her face beaming with joy. „The young lady has been
here with her maid. She brought everything we need: meat, soup, wine, and
bread, a whole basket of bread. And the doctor has sent a hamper with a
couple of bottles of wine, a soft bed, and warm blankets for father. Oh, now
he will surely get well again. God bless that noble Miss Hilda and the good
doctor!”
„God bless them!” Hans repeated, and the tears came to his eyes.
That evening Rolf Brinker felt so much better that he insisted on sitting up
for a while in his rough high-backed chair. It caused a little flurry in the
hut. As for Hans, though his father was a heavy man, he could support him
well enough on one side; but Dame Brinker, though by no means weak, trembled
so, above all at the thought of doing something the doctor had not ordered,
that she very nearly sank under the burden.
„Hold steady, wife, hold steady,” said Rolf. „Have I grown so old and so
weak, then, that I can no longer stand on my own legs? Or is it the fever?”
„Just listen to the man!” cried Dame Brinker with a nervous laugh. „Doesn’t
he talk like any healthy Christian soul! It is the fever that makes you so
weak, Rolf. Here is your chair. Now sit down!”
With these words Dame Brinker let her husband sink gently into his chair, on
which she had laid a downy cushion. Hans did the same. Meanwhile Griete
brought up everything that could serve her father’s comfort, and stirred the
fire into a good blaze.
At last Rolf Brinker sat at his ease. No wonder he looked about him
strangely. „Little Hans” had done little less than carry him. The „little
one” was more than four feet tall and tended the hearth as well as her mother
could have done. Mietje, his wife, was as pretty as ever, but had grown a
good deal stouter, and all this, as it seemed to him, within a few hours. The
only familiar things he saw about him were the deal table he had made
himself, the Bible on the shelf, and the cupboard in the corner.
What wonder that Rolf Brinker’s eyes filled with tears, even at the sight of
the happy faces that surrounded him. Ten years of a human life is no small
loss! Ten years of manhood, of domestic happiness and domestic care, ten
years of honest labor, of enjoyment of the sunshine, of a life spent in
thankfulness. And those ten years gone by like a single night! Was it any
wonder that bitter tears flowed down his cheeks when he understood what had
happened to him?
Those tears seemed to pierce Griete’s heart and melt the crust of ice that
covered that young heart. Now she loved her father; she ran to him and threw
her arms about his neck.
„Father, dear father!” she whispered, pressing her little cheek close against
his. „Oh, dear father, do not weep so! We are all here.”
„God bless you, child!” sobbed Rolf, kissing her again and again. „I had
truly forgotten that!”
Hans and Dame Brinker had watched Griete in silence, deeply moved. They were
so glad that the child, who had really never known her father, now behaved so
lovingly toward him. Rolf Brinker took his little daughter’s head between his
two hands, looked kindly into her face, then turned to his wife and said:
„I believe I know her, Mietje. The same little blue eyes, the same lips; it
is the dear child who could sing before she could walk. But that is long ago,
very long,” he added, looking up with a dreamy face, „and all that time is
now past.”
„Not at all, Rolf!” his wife cried hastily. „Do you think I did not take care
that she should not forget you? Griete, child, sing the old song you have
known so long.”
Rolf Brinker let his hands hang heavily at his sides and closed his eyes, but
a smile played about his lips as Griete sang that old, well-known song in her
clear voice.
It was a simple tune; the words she had never known.
And as if by instinct she sang the notes so softly that Rolf could almost
fancy his two-year-old child was sitting beside him again.
As soon as the song was ended, Hans climbed on a stool and began rummaging in
the top of the cupboard.
„Be careful, Hans,” said Dame Brinker, who, poor as she was, always remained
a careful housewife. „Be careful not to upset the wine, and mind the bread
standing beside it.”
„Never fear, mother,” answered Hans, reaching far above the highest shelf. „I
shall upset nothing.”
Then he jumped down from the stool and went to his father, before whom he set
an oblong piece of pine on the table. One end was rounded off at a slant, and
the upper part was hollowed out.
„Do you know what that is, father?” he asked.
Rolf Brinker’s face brightened.
„Do I know it! Why yes, my boy, that is the boat I was busy with yester… no,
not yesterday, but years ago.”
„I have kept it all along, father. When your hands are stronger again, you
can finish it.”
„That is well, my boy. But not for you, mind. I must wait now until I have
grandchildren. Why, lad, you are almost a man. Have you helped your mother
faithfully all these years?”
„Yes, that he has,” said Dame Brinker.
„Let me think,” the father muttered. „How long is it since that night when
the water was so high? That is the last thing I remember.”
„We have told you the truth, Rolf. It is more than ten years now.”
„Ten years. And then I fell, did I not? And have I lain in the fever all that
time since?”
Dame Brinker did not know what to answer. Should she tell him everything?
Doctor Broekman had strictly forbidden her to let him know that he had been
mad, an idiot. Hans and Griete stood looking on in astonishment as their
mother answered:
„Something like that, Rolf. You understand, when a heavy man like you falls
on his head, it does not pass off so lightly. But now you are better again,
thank God!”
Rolf let his head sink.
„Very well, wife,” he resumed after a moment’s silence. „At times it feels as
if my brains were turning round in my head. That will hardly mend before I go
to work on the dike. When do you think I can go back to work?”
„Just listen to the man!” cried Dame Brinker, pleased and yet alarmed. „We
must put him to bed, Hans! Wanting to go to work already!”
They now tried to raise him from his chair, but he was not yet minded to go
to bed.
„Do leave off,” he said with his old smile, a smile Griete had never yet seen
on his face. „Must you lift a man like a log of wood? Within three days I
shall be at work on the dike again. There I shall find my good old comrades!
How pleased they will be when they see me turn up again! There are Jan
Kamphuijzen and young Hoogvliet. They were faithful comrades, Hans, you may
depend on it!”
Hans looked at his mother. Young Hoogvliet had died five years before, and
Jan Kamphuijzen was in the penitentiary at Amsterdam.
„They are doing well enough, I dare say,” said Dame Brinker evasively. „But
you understand, Rolf, we have had no time to concern ourselves with them.
Hans was too busy learning and working to go looking for comrades.”
„Learning and working!” Rolf repeated musingly. „Can the boy read and write,
then, Mietje?”
„I should think so,” she answered proudly. „You shall hear, Rolf. In the time
it takes me to do the floor, the boy reads a whole book through. He is as
happy with a page of print as a rabbit with a cabbage stump. And how he can
do sums…”
„Hans, lend me a hand,” Rolf interrupted his wife. „I want to go back to
bed.”
