Perspectives and Problems in Nonlinear Science: A Celebratory Volume in Honor of Lawrence Sirovich, 1st Edition, Henry D. I. Abarbanel

Perspectives and Problems
in Nonlinear Science
A Celebratory Volume in Honor of Lawrence Sirovich

Springer
New York
Berlin
Heidelberg
Hong Kong
London
Milan
Paris
Tokyo

Ehud Kaplan
Jerrold E. Marsden
Katepalli R. Sreenivasan
Editors

Perspectives and Problems
in Nonlinear Science
A Celebratory Volume in Honor of Lawrence Sirovich

Springer

Ehud Kaplan
Department of Ophthalmology
Mount Sinai School of Medicine
New York, NY 10029-6574
USA
[email protected]

Katepalli R. Sreenivasan
Department of Mechanical Engineering
Mason Laboratory
Yale University
New Haven, CT 06520-8286
USA
[email protected]

Jerrold E. Marsden
Control and Dynamical Systems 107-81
Caltech
Pasadena, CA 91125
USA
[email protected]
Photograph of Lawrence Sirovich on page v: Sarah Merians Photography & Company
Mathematics Subject Classification (2001): 76-02, 92C20
Library of Congress Cataloging-in-Publication Data
Perspectives and problems in nonlinear science: a celebratory volume in honor
of Lawrence Sirovich / editors, Ehud Kaplan, Jerrold E. Marsden, Katepalli R. Sreenivasan.
p. cm.
Includes bibliographical references and index.
ISBN 978-1-4684-9566-9
1. Nonlinear theories. I. Sirovich, L., 1933- . II. Kaplan, Ehud. III. Marsden, Jerrold E.
IV. Sreenivasan, Katepalli R.
QA427 .P47 2003
511'.8-dc21 2002044509
Printed on acid-free paper.
ISBN 978-1-4684-9566-9 ISBN 978-0-387-21789-5 (eBook)
DOI 10.1007/978-0-387-21789-5
© 2003 Springer-Verlag New York, Inc.
Softcover reprint of the hardcover 1st edition 2003
All rights reserved. This work may not be translated or copied in whole or in part without the written permission of the publisher (Springer-Verlag New York, Inc., 175 Fifth Avenue, New York, NY 10010, USA), except for brief excerpts in connection with reviews or scholarly analysis. Use in connection with any form of information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed is forbidden.
The use in this publication of trade names, trademarks, service marks, and similar terms, even if they are not identified as such, is not to be taken as an expression of opinion as to whether or not they are subject to proprietary rights.
9 8 7 6 5 4 3 2 1 SPIN 10906200
Typesetting: Pages were created from author-prepared LaTeX manuscripts by the technical editors, Wendy McKay and Ross Moore, using modifications of a Springer LaTeX macro package and other packages for the integration of graphics and consistent stylistic features within articles from diverse sources.
www.springer-ny.com
Springer-Verlag New York Berlin Heidelberg
A member of BertelsmannSpringer Science+Business Media GmbH

To Larry Sirovich
On the occasion of his 70th birthday,
with much admiration and warmth
from his friends and colleagues worldwide.
Photo by Sarah Merians Photography & Co., New York.

Contents

Preface ix
Contributors xi
1 Reading Neural Encodings using Phase Space Methods,
H. D. I. Abarbanel and E. Tumer 1
2 Boolean Dynamics with Random Couplings,
M. Aldana, S. Coppersmith and L. P. Kadanoff 23
3 Oscillatory Binary Fluid Convection in Finite Containers,
O. Batiste and E. Knobloch 91
4 Solid Flame Waves,
A. Bayliss, B. J. Matkowsky and A. P. Aldushin 145
5 Globally Coupled Oscillator Networks,
E. Brown, P. Holmes and J. Moehlis 183
6 Recent Results in the Kinetic Theory of Granular Materials,
C. Cercignani 217
7 Variational Multisymplectic Formulations of Nonsmooth Continuum Mechanics,
R. C. Fetecau, J. E. Marsden and M. West 229
8 Geometric Analysis for the Characterization of Nonstationary Time Series,
M. Kirby and C. Anderson 263
9 High Conductance Dynamics of the Primary Visual Cortex,
D. McLaughlin, R. Shapley, M. Shelley and J. Jin 293
10 Power Law Asymptotics for Nonlinear Eigenvalue Problems,
P. K. Newton and V. G. Papanicolaou 319
11 A KdV Model for Multi-Modal Internal Wave Propagation in Confined Basins,
L. G. Redekopp 343
12 A Memory Model for Seasonal Variations of Temperature in Mid-Latitudes,
K. R. Sreenivasan and D. D. Joseph 361
13 Simultaneously Band and Space Limited Functions in Two Dimensions and Receptive Fields of Visual Neurons,
B. W. Knight and J. D. Victor 375
14 Pseudochaos,
G. M. Zaslavsky and M. Edelman 421
vii

Preface

Lawrence Sirovich will turn seventy on March 1, 2003. Larry's academic life of over 45 years at the Courant Institute, Brown University, Rockefeller University and the Mount Sinai School of Medicine has touched many people and several disciplines, from fluid dynamics to brain theory. His contributions to the kinetic theory of gases, methods of applied mathematics, theoretical fluid dynamics, hydrodynamic turbulence, the biophysics of vision and the dynamics of neuronal populations represent the creative work of an outstanding scholar who was stimulated mostly by insatiable curiosity. As a scientist, Larry has consistently offered fresh outlooks on classical and difficult subjects, and moved into new fields effortlessly. He delights in what he knows and does, and sets no artificial boundaries to the range of his inquiry. Among the more than fifty Ph.D. students and postdocs that he has mentored, many continue to make first-rate contributions themselves and hold academic positions in the US and elsewhere. Larry's scientific collaborators are numerous and distinguished. Those of us who have known him well will agree that Larry's charm, above all, is his taste, wit, and grace under fire.

Larry has contributed immensely to mathematics publishing. He began his career with Springer by founding the Applied Mathematical Sciences series together with Fritz John and Joe LaSalle some 30 years ago. Later he co-founded the Texts in Applied Mathematics series and more recently the Interdisciplinary Applied Mathematics series. He has overseen with imagination the cross-fertilization of a broad range of ideas in applied mathematics, including his favorite subjects, fluid dynamics and problems in biology. His good taste and judgement as an editor have touched many scientists worldwide, and continue to do so.

This volume represents a token of the affection with which we and the contributors hold him, and marks Larry's influence as well as the continuing commitment to his work.

Ehud Kaplan, Mount Sinai School of Medicine
Jerry Marsden, California Institute of Technology
Katepalli Sreenivasan, University of Maryland

ix

List of Contributors
Editors:
* EHUD KAPLAN
Depts. of Ophthalmology, Physiology
and Biophysics
The Mount Sinai School of Medicine
New York, NY, 10029
[email protected]
* JERROLD E. MARSDEN
Control and Dynamical Systems 107-81
California Institute of Technology
Pasadena, CA 91125-8100
[email protected]
* KATEPALLI R. SREENIVASAN
Institute for Physical Science and
Technology
University of Maryland
College Park, MD 20742
[email protected]
Authors:
* HENRY D. I. ABARBANEL
Department of Physics and Marine
Physical Laboratory
Scripps Institution of Oceanography
9500 Gilman Drive
La Jolla, CA 92093-0402
[email protected]
* MAXIMINO ALDANA
James Franck Institute
The University of Chicago
5640 S. Ellis Avenue
Chicago, Illinois 60637
[email protected]
xi
* ANATOLY P. ALDUSHIN
Institute of Structural Macrokinetics &
Materials Science
Russian Academy of Sciences
142432 Chernogolovka, Russia
[email protected]
* CHARLES W. ANDERSON
Department of Computer Science
Colorado State University
Fort Collins, CO 80523
[email protected]
* ORIOL BATISTE
Departament de Física Aplicada
Universitat Politecnica de Catalunya
c/Jordi Girona 1-3, Campus Nord
[email protected]
* ALVIN BAYLISS
Department of Engineering Sciences
and Applied Mathematics
Northwestern University
2145 Sheridan Road
Evanston, IL 60208-3125
[email protected]
* ERIC BROWN
Program in Applied and
Computational Mathematics
Princeton University
Princeton, New Jersey 08544
[email protected]
* CARLO CERCIGNANI
Dipartimento di Matematica
Politecnico di Milano
Piazza Leonardo da Vinci 32
20133 Milano, Italy
[email protected]

* SUSAN N. COPPERSMITH
Department of Physics
University of Wisconsin-Madison
1150 University Avenue
Madison, WI 53706
[email protected]
* MARK EDELMAN
Courant Institute of Mathematical Sciences
251 Mercer St.
New York, NY 10012
[email protected]
* RAZVAN C. FETECAU
Applied and Computational Mathematics 217-50
California Institute of Technology
Pasadena, CA 91125
[email protected]
* PHILIP HOLMES
Department of Mechanical and Aerospace Engineering
and Program in Applied and Computational Mathematics
Princeton University
Princeton, NJ 08544
[email protected]
* DANIEL D. JOSEPH
Department of Aerospace Engineering
University of Minnesota
Minneapolis, MN 55455
[email protected]
* LEO P. KADANOFF
The James Franck Institute
The University of Chicago
5640 S. Ellis Avenue
Chicago, IL 60637
[email protected]
* EHUD KAPLAN
Depts. of Ophthalmology, Physiology and Biophysics
The Mount Sinai School of Medicine
New York, NY, 10029
[email protected]
* MICHAEL J. KIRBY
Department of Mathematics
Colorado State University
Fort Collins, CO 80523
[email protected]
* BRUCE KNIGHT
Professor, Knight Laboratory of Biophysics
The Rockefeller University
1230 York Avenue
New York, NY 10021
[email protected]
* EDGAR KNOBLOCH
Department of Physics
University of California, Berkeley
Berkeley, CA 94720
[email protected]
and
Department of Applied Mathematics
University of Leeds
Leeds LS2 9JT, UK
[email protected]
* JERROLD E. MARSDEN
Control and Dynamical Systems 107-81
California Institute of Technology
Pasadena, CA 91125-8100
[email protected]
* BERNARD J. MATKOWSKY
Department of Engineering Sciences and Applied Mathematics
Northwestern University
2145 Sheridan Road
Evanston, IL 60208-3125
[email protected]
* DAVID W. MCLAUGHLIN
Director, Courant Institute of Mathematical Sciences
Professor of Mathematics and Neural Science
251 Mercer St.
New York, NY 10012
[email protected]
* JEFF MOEHLIS
Program in Applied and Computational Mathematics
Princeton University
Princeton, NJ 08544
[email protected]
* PAUL K. NEWTON
Department of Aerospace and Mechanical Engineering
University of Southern California
Los Angeles, CA 90089-1191
[email protected]

* VASSILIS PAPANICOLAOU
Department of Mathematics
National Technical University of Athens
Zografou Campus
GR-15780 Athens, Greece
[email protected]
* LARRY G. REDEKOPP
Department of Aerospace and Mechanical Engineering
University of Southern California
Los Angeles, CA 90089-1191
[email protected]
* ROBERT SHAPLEY
Professor, Center for Neural Science
New York University
4 Washington Place
New York, NY 10003
[email protected]
* MICHAEL SHELLEY
Professor of Mathematics and Neural Science
Co-Director, Applied Mathematics Laboratory
251 Mercer St.
New York, NY 10012
[email protected]
* KATEPALLI R. SREENIVASAN
Institute for Physical Science and Technology
University of Maryland
College Park, MD 20742
[email protected]
* EVREN TUMER
UCSD Dept of Physics and UCSD Institute for Nonlinear Science
CMRR Bldg Room 112
9500 Gilman Dr.
La Jolla, CA 92093-0402
[email protected]
* JONATHAN VICTOR
Professor, Neurology and Neuroscience
Weill Medical College of Cornell University
1300 York Avenue
New York City, NY 10021
[email protected]
* MATTHEW WEST
Control and Dynamical Systems 107-81
California Institute of Technology
Pasadena, CA 91125-8100
[email protected]
* JIM JIN
Courant Institute of Mathematical Sciences
NYU, 251 Mercer St.
New York, NY 10012
[email protected]
* GEORGE ZASLAVSKY
Department of Physics and Courant Institute of Mathematical Sciences
251 Mercer St.
New York, NY 10012
[email protected]
TeXnical Editors:
* SHANG-LIN EILEEN CHEN
California Institute of Technology
1200 E. California Blvd.
Pasadena, CA 91125-0001
[email protected]
* WENDY G. MCKAY
Control and Dynamical Systems 107-81
California Institute of Technology
1200 E. California Blvd.
Pasadena, CA 91125-8100
[email protected]
* ROSS R. MOORE
Mathematics Department
Macquarie University
Sydney, NSW 2109, Australia
[email protected]

1
Reading Neural Encodings
using Phase Space Methods
Henry D. I. Abarbanel
Evren Tumer
To Larry Sirovich, on the occasion of his 70th birthday.
ABSTRACT
Environmental signals sensed by nervous systems are often represented in spike trains carried from sensory neurons to higher neural functions where decisions and functional actions occur. Information about the environmental stimulus is contained (encoded) in the train of spikes. We show how to "read" the encoding using state space methods of nonlinear dynamics. We create a mapping from spike signals which are output from the neural processing system back to an estimate of the analog input signal. This mapping is realized locally in a reconstructed state space embodying both the dynamics of the source of the sensory signal and the dynamics of the neural circuit doing the processing. We explore this idea using a Hodgkin-Huxley conductance based neuron model and input from a low dimensional dynamical system, the Lorenz system. We show that one may accurately learn the dynamical input/output connection and estimate with high precision the details of the input signals from spike timing output alone. This form of "reading the neural code" has a focus on the neural circuitry as a dynamical system and emphasizes how one interprets the dynamical degrees of freedom in the neural circuit as they transform analog environmental information into spike trains.
Contents
1 Introduction 2
2 Input Estimation from State Space Reconstruction 3
3 R15 Neuron Model 6
3.1 Input Signals to Model Neuron 10
3.2 Numerical Results 11
4 Discussion 18
References 20
E. Kaplan et al. (eds.), Perspectives and Problems in Nonlinear Science
© Springer-Verlag New York, Inc. 2003

1 Introduction

A primary task of nervous systems is the collection at its periphery of information from the environment and the distribution of that stimulus input to central nervous system functions. This is often accomplished through the production and transmission of action potentials or spike trains Rieke, Warland, de Ruyter van Steveninck, and Bialek [1997].

The book Rieke, Warland, de Ruyter van Steveninck, and Bialek [1997] and subsequent papers by its authors and their collaborators Brenner, Strong, Koberle, Bialek, and de Ruyter van Steveninck [2000] carefully lay out a program for interpreting the analog stimulus of a nervous system using ideas from probability theory and information theory, as well as a representation of the input/output or stimulus/response relation in terms of Volterra kernel functions. In Rieke, Warland, de Ruyter van Steveninck, and Bialek [1997] the authors note that when presenting a stimulus to a neuron, it is common "that the response spike train is not identical on each trial." They also observe that "Since there is no unique response, the most we can say is that there is some probability of observing each of the different possible responses." This viewpoint then underlies the wide use of probabilistic ideas in describing how one can "read the neural code" through interpreting the response spike trains to infer the stimulus.

In this paper we take a different point of view and recognize that the neuron into which one sends a stimulus is itself a dynamical system with a time dependent state which will typically be different upon receipt of different realizations of identical stimulus inputs. Viewing the transformation of the stimulus waveform into the observed response sequence as a result of deterministic dynamical action of the neuron, one can attribute the variation in the response to identical stimuli to differing neuron states when the stimulus arrives. This allows us to view the entire transduction process of analog input (stimulus) to spike train output (response) as a deterministic process which can be addressed by methods developed in nonlinear dynamics for dealing with input/output systems Rhodes and Morari [1998].

Previous research on information encoding in spike trains has concentrated on nonlinear filters that convert analog input signals into spike trains. It has been shown that these models can be used to reconstruct the dynamical phase space of chaotic inputs to the filters using the spike timing information Sauer [1994]; Castro and Sauer [1997]; Pavlov, Sosnovtseva, Mosekilde, and Anishcenko [2000, 2001]. Using simple dynamical neuron models, Castro and Sauer [1999] have shown that aspects of a dynamical system can be reconstructed using interspike interval (ISI) properties. Experimental work has demonstrated the ability to discriminate between chaotic and stochastic inputs to a neuron Richardson, Imhoff, Grigg, and Collins [1998], as well as showing that decoding sensory information from a spike train through linear filtering Volterra series techniques can allow for large amounts of information to be carried by

the precise timing of the spikes Rieke, Warland, de Ruyter van Steveninck, and Bialek [1997].

We discuss here the formulation of input/output systems from a dynamical system point of view, primarily summarizing earlier work Rhodes and Morari [1998]; Abarbanel [1996], but with a focus on recognizing that we may treat the response signals as trains of identical spikes. Since the modulation of the spike train must be carrying the information in the analog input presented to the neuron, if the spike pulse shapes are identical, all information must be encoded in the ISIs. We shall show that this is, indeed, the case.

What is the role of information theory in a deterministic chain of actions from stimulus to spiking response? The ideas of information theory, though often couched in terms of random variables, apply directly to distributed variation in dynamical variables such as the output from nonlinear systems. The use of concepts such as entropy and mutual information, at the basis of information theoretic descriptions of systems, applies easily and directly to deterministic systems. The understanding of this connection dates from the 1970s and 1980s, where the work of Fraser [1989] makes this explicit, and the connection due to Pesin [1977] between positive Lyapunov exponents of a deterministic system and the Kolmogorov-Sinai entropy quantifies the correspondence.

In the body of this paper, we first summarize the methods used to determine a connection between analog input signals and spiking output; then we apply these methods to a Hodgkin-Huxley conductance based model of the R15 neuron of Aplysia Canavier, Clark, and Byrne [1990]; Plant and Kim [1976]. Future papers will investigate the use of these methods on biological signals from the H1 visual neuron of a fly and a stretch receptor in the tail of a crayfish Tumer, Wolfe, Wood, Abarbanel, Rabinovich, and Selverston [2002].
2 Input Estimation from State Space Reconstruction

The general problem we address is the response to stimuli of a neural circuit with N dynamical variables x(t) = [x_1(t), x_2(t), ..., x_N(t)]. When there is no time varying input, x(t) satisfies the ordinary differential equations

dx_a(t)/dt = F_a(x(t)), a = 1, 2, ..., N. (2.1)

The F_a(x) are a set of nonlinear functions which determine the dynamical time course of the neural circuit. The F_a(x) could well represent a conductance based neural model of the Hodgkin-Huxley variety as in our example below.

When there is a time dependent external stimulus s(t), these equations become

dx_a(t)/dt = F_a(x(t), s(t)), (2.2)

and the time course of x(t) in this driven or non-autonomous setting can become rather more complicated than the case where s(t) = constant.

If we knew the dynamical origin of the signal s(t), then in the combined space of the stimuli and the neural state space x(t), we would again have an autonomous system, and many familiar Abarbanel [1996] methods for analyzing signals from nonlinear systems would apply. As we proceed to our "input signal from spike outputs" connection we imagine that the stimulus system is determined by some other set of state variables z(t) and that

dz(t)/dt = G(z(t)); s(t) = h(z(t)), (2.3)

where G(z) are the nonlinear functions determining the time course of the state z(t) and h(z(t)) is the nonlinear function determining the input to the neuron s(t).
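For concreteness, the stimulus system of Equation 2.3 can be sketched in code. The Lorenz system named in the abstract serves as the input source; the parameters, integration step, and the choice h(z) = x-component below are illustrative assumptions, not the authors' exact settings:

```python
import numpy as np

def lorenz_stimulus(n_steps, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Integrate dz/dt = G(z) for the Lorenz system with a simple Euler
    step, reading out s(t) = h(z(t)); here h(z) picks the x component."""
    z = np.array([1.0, 1.0, 1.0])
    s = np.empty(n_steps)
    for i in range(n_steps):
        x, y, w = z
        dz = np.array([sigma * (y - x), x * (rho - w) - y, x * y - beta * w])
        z = z + dt * dz          # Euler update of the hidden state z(t)
        s[i] = z[0]              # s(t) = h(z(t)): the analog stimulus
    return s

s = lorenz_stimulus(5000)
```

Any chaotic system would do here; the point is only that s(t) is driven by autonomous hidden dynamics z(t).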
With observations of just one component of the state vector x(t), the full dynamical structure of a system described by Equation 2.2 can be reconstructed in a proxy state space Mane [1981]; Takens [1981]. Once the dynamics of the system is reconstructed, the mapping from state variable to input can be made in the reconstructed space. Assume the measured state variable, r(t) = g(x(t)), is sampled at times t_j, where j is an integer index. According to the embedding theorem Mane [1981]; Takens [1981], the dynamics of the system can be reconstructed in an embedding space using time delayed vectors of the form

y(j) = [r(t_j), r(t_j + T τ_s), ..., r(t_j + (d_E - 1) T τ_s)]
     = [r(j), r(j + T), ..., r(j + (d_E - 1) T)], (2.4)

where d_E is the dimension of the embedding, t_j = t_0 + j τ_s, τ_s is the sampling time, t_0 is an initial time, and T is an integer time delay. If the dimension d_E is large enough these vectors can reconstruct the dynamical structure of the full system given in Equation 2.2. Each vector y(j) in the reconstructed phase space depends on the state of the input signal. Therefore a mapping should exist that associates locations in the reconstructed phase space y(j) to values of the input signal s(t_j) ≡ s(j): s(j) = H(y(j)). The map H(y) is the output-to-input relation we seek.
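The delay-vector construction of Equation 2.4 amounts to stacking lagged copies of the observable. A minimal sketch (the sinusoidal stand-in series and the values of d_E and T are illustrative):

```python
import numpy as np

def delay_embed(r, dE, T):
    """Build delay vectors y(j) = [r(j), r(j+T), ..., r(j+(dE-1)T)]
    from a scalar observable r sampled at uniform times."""
    r = np.asarray(r)
    n = len(r) - (dE - 1) * T                 # number of complete vectors
    return np.stack([r[i * T : i * T + n] for i in range(dE)], axis=1)

r = np.sin(0.1 * np.arange(100))              # stand-in for the observable r(j)
Y = delay_embed(r, dE=3, T=5)                 # Y[j] is the vector y(j)
```

Each row Y[j] is one point in the d_E-dimensional proxy state space.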
Without simultaneous measurements of the observable r(t) and the input signal s(t), this mapping could not be found without knowing the differential equations that make up Equation 2.2. But in a situation where a controlled stimulus is presented to a neuron while measuring the output, both r(t) and s(t) are available simultaneously. Such a data set with simultaneous measurements of spike time and input is split into two parts: the first part, called the training set, will be used to find the mapping H(y(j)) between y(j) and s(j). The second part, called the test set, will be used to test the accuracy of that mapping. State variable data from the training set r(j) is used to construct time delayed vectors as given by

y(j) = [r(j), r(j + T), ..., r(j + (d_E - 1) T)]. (2.5)

Each of these vectors is paired with the value of the stimulus at the mid-point time of the delay vector,

s(j + (d_E - 1) T / 2). (2.6)

We use state space values that occur before and after the input to improve the quality of the representation. The state variables and input values in the remainder of the data are organized in a similar way and used to test the mapping.
The phase space dynamics near a test data vector are reconstructed using vectors in the training set that are close to the test vector, where we use Euclidean distance between vectors. These vectors lie close in the reconstructed phase space, so they will define the dynamics of the system in that region and will define a local map from that region to an input signal value. In other words, we seek a form for H(y(j)) which is local in reconstructed phase space to y(j). The global map over all of phase space is a collection of local maps.
The local map is made using the N_B nearest neighbors y^m(j), m = 0, 1, ..., N_B of y^0(j) = y(j). These nearest neighbor vectors and their corresponding input values s^m(j) are used to find a local polynomial mapping between inputs s^m(j) and vector versions of the outputs r^m(j), namely y^m(j), of the form

s^m(j) = H(y^m(j)) = M_0(j) + M_1(j) · y^m(j) + M_2(j) · y^m(j) · y^m(j) + ..., (2.7)

which assumes that the function H(y) is locally smooth in phase space. The scalar M_0(j), the d_E-dimensional vector M_1(j), the tensor M_2(j) in d_E dimensions, etc., are determined by minimizing the mean squared error

Σ_{m=0}^{N_B} |s^m(j) - M_0(j) - M_1(j) · y^m(j) - M_2(j) · y^m(j) · y^m(j) - ...|^2. (2.8)

We determine M_0(j), M_1(j), M_2(j), ... for all j = 1, 2, ..., and this provides a local representation of H(y) in all parts of phase space sampled by the training set y(j), j = 1, 2, ..., N_train.
Once the least squares fit values of M_0(j), M_1(j), M_2(j), ... are determined for our training set, we can use the resulting local map to determine estimates of the input associated with an observed output. This proceeds as follows: select a new output r_new(l) and form the new output vector y_new(l) as above. Find the nearest neighbor in the training set to y_new(l). Suppose it is the vector y(q). Now evaluate an estimated input s_est(l) as

s_est(l) = M_0(q) + M_1(q) · y_new(l) + M_2(q) · y_new(l) · y_new(l) + ....

This procedure is applied for all new outputs to produce the corresponding estimated inputs.
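A local-linear version of this fit-and-estimate procedure (truncating Equation 2.7 at M_0 and M_1) can be sketched as follows; the neighborhood size N_B and the synthetic linear data used in the check are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def predict_input(Y_train, s_train, y_new, NB=12):
    """Estimate the input paired with y_new: find the NB nearest training
    vectors (Euclidean distance), fit s ~ M0 + M1·y by least squares over
    that neighborhood, and evaluate the local map at y_new."""
    d = np.linalg.norm(Y_train - y_new, axis=1)
    idx = np.argsort(d)[:NB]                          # neighbors y^m(j)
    A = np.hstack([np.ones((NB, 1)), Y_train[idx]])   # columns: 1, y
    coef, *_ = np.linalg.lstsq(A, s_train[idx], rcond=None)
    M0, M1 = coef[0], coef[1:]
    return M0 + M1 @ y_new

# Toy check: if s depends linearly on y, the local map recovers it.
rng = np.random.default_rng(0)
Y_train = rng.normal(size=(200, 3))
s_train = 2.0 * Y_train[:, 0] - Y_train[:, 2] + 0.5
s_est = predict_input(Y_train, s_train, np.array([0.1, 0.0, -0.2]))
```

Higher-order terms M_2, M_3, ... would enter as extra columns of products of the components of y^m(j).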
3 R15 Neuron Model
To investigate our ability to reconstruct stimuli of analog form presented to a realistic neuron from the spike train output of that neuron, we examined a detailed model of the R15 neuron in Aplysia Canavier, Clark, and Byrne [1990]; Plant and Kim [1976], and presented this model neuron with nonperiodic input from a low dimensional dynamical system. This model has seven dynamical degrees of freedom. The differential equations for this model are

C dV_m(t)/dt = (g_I y_2(t)^3 y_3(t) + g_T)(V_I - V(t)) + g_L (V_L - V(t))
             + (g_K y_4(t)^4 + g_A y_5(t) y_6(t) + g_P y_7(t))(V_K - V(t))
             + I_0 + I_ext + I_input(t), (3.1)

where the y_n(t); n = 2, 3, ..., 7 satisfy kinetic equations of the form

dy_n(t)/dt = [Y_n(V_m(t)) - y_n(t)] / τ_n(V_m(t)), (3.2)

which is the usual form of Hodgkin-Huxley models. The g_X, X = I, T, K, A, P, L are maximal conductances, and the V_X, X = I, L, K are reversal potentials. V_m(t) is the membrane potential, C is the membrane capacitance, I_0 is a fixed DC current, and I_ext is a DC current we vary to change the state of oscillation of the model. The functions Y_n(V) and τ_n(V) and values for the various constants are given in Canavier, Clark, and Byrne [1990]; Plant and Kim [1976]. These are phenomenological forms of membrane voltage dependent gating variables, activation and inactivation of membrane ionic channels, and time constants for these gates. I_input(t) is a time varying current input to the neural dynamics. Our goal will be to reconstruct I_input(t) from observations of the spike timing in V_m(t).
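The kinetic form of Equation 3.2 simply relaxes each gating variable toward a voltage-dependent steady state Y_n(V) with time constant τ_n(V). A schematic integration at clamped membrane voltage (the steady-state value, time constant, and step size are placeholders, not the published R15 parameter forms):

```python
import numpy as np

def relax_gate(y0, Y_inf, tau, dt, n_steps):
    """Euler-integrate dy/dt = (Y_inf - y)/tau; with the voltage clamped,
    Y_inf = Y_n(V) and tau = tau_n(V) are constants here."""
    y = y0
    out = np.empty(n_steps)
    for i in range(n_steps):
        y += dt * (Y_inf - y) / tau
        out[i] = y
    return out

# 50 time units at tau = 5: the gate closes in on its steady state Y_inf.
trace = relax_gate(y0=0.0, Y_inf=0.8, tau=5.0, dt=0.01, n_steps=5000)
```

In the full model, V_m(t) evolves simultaneously via Equation 3.1, so Y_n and τ_n are re-evaluated at each step.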
In Figure 3.1 we plot the bifurcation diagram of our R15 model. On the vertical axis we show the values of ISIs taken in the time series for V_m(t) from the model; on the horizontal axis we plot I_ext. From this bifurcation plot we see that the output of the R15 model has regular windows

FIGURE 3.1. Bifurcation diagram for the R15 model with constant input current. This plot shows the values of ISIs which occur in the V_m(t) time series for different values of I_ext.
for I_ext < 0.07, then chaotic regions interspersed with periodic orbits until I_ext ≈ 0.19, after which nearly periodic behavior is seen. The last region represents significant depolarization of the neuron, in which tonic periodic firing associated with a stable limit cycle in phase space is typical of neural activity. Periodic firing leads to a fixed value for ISIs, which is what we see. Careful inspection of the time series reveals very small fluctuations in the phase space orbit, but the resolution in Figure 3.1 does not expose this.

Other than the characteristic spikes, there are no significant features in the membrane voltage dynamics. In addition all the spikes are essentially the same, so we expect that all the information about the membrane voltage state is captured in the times between spikes, namely the interspike intervals: ISIs. The distribution of ISIs characterizes the output signal for information theoretic purposes.
We have chosen three values of Iext at which to examine the response of this neuron model when presented with an input signal. At Iext = 0.1613 we expect chaotic waveforms expressed as nonperiodic ISIs with a broad distribution. At Iext = 0.2031 we expect nearly periodic spike trains. And at Iext = -0.15 the neuron does not spike; the membrane voltage remains at an equilibrium value.

H. D. I. Abarbanel and E. Turner
For each Vm(t) time series we evaluate the normalized distribution of ISIs, which we call PISI(Δ), and from this we compute the entropy associated with the oscillations of the neuron. Entropy is defined as

    H(Δ) = Σ_{observed Δ} −PISI(Δ) log(PISI(Δ)) ;    (3.3)

H(Δ) ≥ 0. The entropy is a quantitative measure (Shannon [1948]) of the information content of the output signal from the neural activity.
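Equation 3.3 is simple to evaluate from recorded spike times. The sketch below is a minimal illustration of our own; the bin count, the base-2 logarithm, and all names are illustrative choices, not settings taken from the paper:

```python
import math
from collections import Counter

def isi_entropy(spike_times, n_bins=15000):
    """Entropy H (Eq. 3.3) of the normalized ISI distribution.

    Bins the interspike intervals into a histogram, normalizes it to a
    probability distribution P_ISI over occupied bins, and returns
    -sum P log2 P, which is always >= 0.
    """
    isis = [t1 - t0 for t0, t1 in zip(spike_times, spike_times[1:])]
    lo, hi = min(isis), max(isis)
    width = (hi - lo) / n_bins or 1.0    # guard: all ISIs identical
    counts = Counter(min(int((x - lo) / width), n_bins - 1) for x in isis)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A perfectly periodic spike train occupies a single bin, so H = 0:
periodic = [0.5 * k for k in range(200)]
assert isi_entropy(periodic, n_bins=100) == 0.0
```

This reproduces the limiting case noted in the text: when PISI(Δ0) = 1 for a single ISI value Δ0, the entropy vanishes.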
In Figure 3.2 we display a section of the Vm(t) time series for Iext = 0.1613. The irregularity in the spiking times is clear from this figure and from the distribution PISI(Δ) shown in Figure 3.3. The PISI(Δ) was evaluated by collecting 60,000 spikes from the Vm(t) time series and creating a histogram with 15,000 bins. This distribution has an entropy H(Δ) = 12. In contrast to this we have a section of the Vm(t) time series for Iext = 0.2031 in Figure 3.4. Far more regular firing is observed, with a firing frequency much higher than for Iext = 0.1613. This increase in firing frequency as a neuron is depolarized is familiar. With Iext = 0.2031 the distribution PISI(Δ) is mainly concentrated in one bin with some small fluctuations near that bin. Such a regular distribution leads to a very low entropy H(Δ) = 0.034. If not for the slight variations in ISI, the entropy would be zero. If PISI(Δ0) = 1 for some ISI value Δ0, then H(Δ) = 0.
FIGURE 3.2. Membrane voltage of the R15 model with a constant input current Iext = 0.1613. [Plot: Vm(t) in mV against time in arbitrary units.]

FIGURE 3.3. Normalized distribution PISI(Δ) from the membrane voltage time series with Iext = 0.1613. The entropy for this distribution is H(Δ) = 12.
FIGURE 3.4. Membrane voltage of the R15 model with a constant input current Iext = 0.2031.

3.1 Input Signals to Model Neuron
In the last section the dynamics of the neuron model were examined using constant input signals. In studying how neurons encode information in their spike train, we must clarify what it means for a signal to carry information. In the context of information theory (Shannon [1948]), information lies in the unpredictability of a signal. If we do not know what a signal is going to do next, then by observing it we gain new information. Stochastic signals are commonly used as information carrying signals since their unpredictability is easily characterized and readily incorporated into the theoretical structure of information theory. But they are problematic when approaching a problem from a dynamical systems point of view, since they are systems with a high dimension. This means that the reconstruction of a stochastic signal using time delay embedding vectors of the form of Equation 2.4 would require an extremely large embedding dimension Abarbanel [1996]. If we are injecting stochastic signals into the R15 model, the dimension of the whole system would increase and cause practical problems in performing the input reconstruction. Indeed, the degrees of freedom in the stochastic input signal could well make the input/output relationship we seek to expose impossible to see.
An attractive input for testing the reconstruction method will have some unpredictability but few degrees of freedom. If there are many degrees of freedom, the dimensionality of the vector of outputs y(j) above may be prohibitively large. This leads directly to the consideration of low dimensional chaotic systems. Chaos originates from local instabilities which cause two points initially close together in phase space to diverge rapidly as the system evolves in time, thus producing completely different trajectories. This exponential divergence is quantified by the positive Lyapunov exponents and is the source of the unpredictability in chaotic systems Abarbanel [1996]. The state of any observed system is known only to some degree of accuracy, limited by measurement and systematic errors. If the state of a chaotic system were known exactly, then the future state of that system would be exactly predictable. But if the state of a chaotic system is only known to some finite accuracy, then predictions into the future based on the estimated state will diverge from the actual evolution of the system. Imperfect observations of a chaotic signal will limit the predictability of the signal. Since chaos can occur in low dimensional systems, these signals do not raise the same concerns as stochastic signals.
We use a familiar example of a chaotic system, the Lorenz attractor (Lorenz [1963]), as the input signal to drive the R15 model. This well studied system exhibits chaotic dynamics and will be used here as input to the R15 neuron model. The Lorenz attractor is defined by the differential

equations

    κ dx(t)/dt = σ (y(t) − x(t))
    κ dy(t)/dt = −x(t) z(t) + r x(t) − y(t)    (3.4)
    κ dz(t)/dt = x(t) y(t) − b z(t)
For the simulations presented in this paper the parameters were chosen as σ = 16, r = 45.92, and b = 4. The parameter κ is used to change the time scale. An example time series of the x(t) component of the Lorenz attractor is shown in Figure 3.5.
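The time-scaled system of Equations 3.4 can be integrated with any standard ODE solver (the paper uses the routines of Press, Teukolsky, Vetterling, and Flannery [1992]). Below is a minimal fourth-order Runge-Kutta sketch of our own; the step size and initial condition are illustrative assumptions, not the paper's settings:

```python
def lorenz_deriv(state, sigma=16.0, r=45.92, b=4.0, kappa=1e4):
    """Right-hand side of Equations 3.4; kappa rescales time (larger kappa = slower)."""
    x, y, z = state
    return [sigma * (y - x) / kappa,
            (-x * z + r * x - y) / kappa,
            (x * y - b * z) / kappa]

def rk4_step(state, dt, deriv=lorenz_deriv):
    """Advance the state by one fourth-order Runge-Kutta step of size dt."""
    k1 = deriv(state)
    k2 = deriv([s + 0.5 * dt * k for s, k in zip(state, k1)])
    k3 = deriv([s + 0.5 * dt * k for s, k in zip(state, k2)])
    k4 = deriv([s + dt * k for s, k in zip(state, k3)])
    return [s + dt * (a + 2 * b2 + 2 * c + d) / 6.0
            for s, a, b2, c, d in zip(state, k1, k2, k3, k4)]

# Generate the x(t) component, which is later scaled into I_input(t).
state, dt = [1.0, 1.0, 1.0], 1.0   # with kappa = 1e4 the flow is slowed accordingly
xs = []
for _ in range(20000):
    state = rk4_step(state, dt)
    xs.append(state[0])
```

Note how κ enters only as an overall factor on the right-hand side: increasing κ slows the Lorenz dynamics relative to the neuron, exactly the role it plays in the text.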
FIGURE 3.5. Small segment of the x(t) component of the Lorenz attractor described in Equations 3.4 with κ = 10^4.
3.2 Numerical Results
An input signal s(t) = Iinput(t) is now formed from the x(t) component of
the Lorenz system. Our goal is to use observations of the stimulus Iinput(t)
and of the ISIs of the output signal Vm(t) to learn the dynamics of the R15
neuron model in
the form of a local map in phase space reconstructed from
the observed ISIs. From this map we will estimate the input Iinput(t) from
new observations of
the output ISIs.

Our analog signal input is the x(t) output of the Lorenz system, scaled and offset to a proper range, and then input to the neuron as an external current

    Iinput(t) = Amp (x(t) + x0) ,    (3.5)

where Amp is the scaling constant and x0 is the offset. The R15 equations are integrated (Press, Teukolsky, Vetterling, and Flannery [1992]) with this input signal, and the spike times tj from the membrane voltage are recorded simultaneously with the value of the input current at that time, Iinput(tj).
Reconstruction of the neuron-plus-input phase space is done by creating time delay vectors from the ISIs

    y(j) = [Δ(j), Δ(j − T), ..., Δ(j − (dE − 1)T)] ,    (3.6)

where the interspike intervals are

    Δ(j) = tj − tj−1 .    (3.7)
For each of these vectors there is a corresponding value of the input current, which we chose to be at the midpoint time of the vector,

    s(j) = Iinput(tmid(j)) ,  tmid(j) = (tj + tj−(dE−1)T) / 2 .    (3.8)
In our work a total of 40,000 spikes were collected. The first 30,000 were used to create the training set vectors and the next 10,000 were used to examine our input estimation methods. For each new output vector constructed from new observed ISIs, NB nearest neighbors from the training set were used to generate a local polynomial map y(j) → Iinput^estimated(j). NB was chosen to be twice the number of free parameters in the undetermined local coefficients M0, M1, M2, ....
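The steps above can be sketched in a few lines. The paper fits local polynomial maps with coefficients M0, M1, M2, ...; for brevity this illustration of ours replaces them with the simplest local model, an average over the NB nearest neighbors, and uses toy data (all names and the sinusoidal stand-in input are our own assumptions):

```python
import math

def delay_vectors(isis, d_E, tau=1):
    """ISI time-delay vectors y(j) built from the interspike intervals."""
    start = (d_E - 1) * tau
    return [tuple(isis[j - m * tau] for m in range(d_E))
            for j in range(start, len(isis))]

def estimate_input(query, train_vecs, train_vals, n_B):
    """Local estimate of the input current for one new ISI vector.

    Finds the n_B nearest training vectors in the reconstructed space and
    averages their recorded input currents (a zeroth-order local map)."""
    dist2 = lambda v: sum((a - b) ** 2 for a, b in zip(v, query))
    nearest = sorted(range(len(train_vecs)),
                     key=lambda i: dist2(train_vecs[i]))[:n_B]
    return sum(train_vals[i] for i in nearest) / n_B

# Toy check: ISIs modulated by a slow sinusoidal "input current".
isis = [2.0 + math.sin(0.05 * j) for j in range(600)]
vecs = delay_vectors(isis, d_E=3)
currents = [sum(v) / 3.0 for v in vecs]   # stand-in for I_input at each vector
est = estimate_input(vecs[250], vecs[:200], currents[:200], n_B=8)
```

Because the local neighborhoods track the recent ISI history, the estimate recovers the modulating signal even though each individual ISI is only weakly informative.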
We used the same three values, -0.15, 0.1613, and 0.2031, for Iext as employed above in our simulations. We took Amp = 0.001, κ = 10^4, and x0 = 43.5 for all simulations unless stated otherwise. This very small amplitude of the input current is much more of a challenge for the input reconstruction than large amplitudes. When Amp is large, the neural activity is entrained by the input signal and 'recovering' the input merely requires looking at the output and scaling it by a constant. Further, the intrinsic spiking of the neuron, which is its important biological feature, goes away when Amp is large. The large value of κ assures that the spikes sample the analog signal Iinput(t) very well.
For Iext = 0.1613 we show a selection of both the input current Iinput and the output membrane voltage Vm(t) time series in Figure 3.6. The injected current substantially changes the pattern of firing seen for the autonomous neuron. Note that the size of the input current is numerically about 10^-3 of Vm(t), yet the modulation of the ISIs due to this small input is clearly visible in Figure 3.6.
Using the ISIs of this time series we evaluated PISI(Δ) as discussed above and from that the entropy H(Δ) associated with the driven neuron. The

FIGURE 3.6. A segment of the R15 neuron model output Vm(t) shown along with the scaled Lorenz system input current Iinput. Here Iext = 0.1613, Amp = 0.001, and κ = 10^4. Note the different scales for Iinput (shown on the left axis) and Vm(t) (shown on the right axis).
ISI distribution, PISI(Δ), shown in Figure 3.7, has an entropy H(Δ) = 8.16. The effect of the input current has been to substantially narrow the range of ISIs seen in Vm(t). This can be seen by comparison with Figure 3.3.
Figure 3.8 shows an example of input signal reconstruction which estimates Iinput using ISI vectors of the form described in Equation 3.6. We used a time delay T = 1, an embedding dimension dE = 7, and a local linear map. The RMS error over the 10,000 reconstructed values of the input was σ = 4.6 × 10^-4. The input signal is only reconstructed at times at which the neuron spikes, so each point on the reconstruction curve in Figure 3.8 corresponds to a spike in Vm(t). Some features of the input are missed because no spikes occur during that time, but otherwise the reconstruction is very accurate. At places where the spike rate is high, interpolation seems to fill the gaps between spikes.
Different values of embedding dimension, time delay, and map order will lead to different reconstruction errors. For example, low embedding dimension may not unfold the dynamics, and linear maps may not be able to fit some neighborhoods to the input. For the results shown here, there is little difference in the RMS reconstruction error if the embedding dimension is increased or quadratic maps are used instead of linear maps. This may not be true if lower embedding dimension is used.
The previous example probed the response of a chaotic neural oscillation
to a chaotic signal. With Iext = 0.2031 the neuron is in a periodic spiking
regime
and the input modulates the instantaneous firing rate of the neuron.

FIGURE 3.7. PISI(Δ) for R15 model neuron output when a scaled x(t) signal from the Lorenz system is presented with Iext = 0.1613. The entropy of this distribution is H(Δ) = 8.16.
FIGURE 3.8. ISI reconstruction of the input Lorenz signal to an R15 neuron. The solid line is the actual input to the neuron. The dots joined by dashed lines are the ISI reconstructions. The embedding dimension of the reconstruction dE is 7, the time delay T is 1, Iext = 0.1613, κ is 10^4, and a linear map was used. The RMS error of the estimates over 10,000 estimations is σ = 4.6 × 10^-4 and the maximum error is about 0.01.

A sample of the input current and membrane voltage is shown in Figure 3.9. The distribution of ISIs, PISI(Δ), is shown in Figure 3.10 and has an entropy H(Δ) = 9.5. The effect of the input current is to substantially broaden the range of ISIs and increase its entropy as compared to the nearly periodic firing of the autonomous neuron with Iext = 0.2031. The high spiking rate and close relationship between input current amplitude and ISI lead to very accurate reconstructions using low dimensional embeddings. A sample of the reconstruction using dE = 2 and T = 1 is shown in Figure 3.11. The RMS reconstruction error is σ = 6.1 × 10^-4 with a maximum error of 0.007.
FIGURE 3.9. A segment of the R15 neuron model output Vm(t) shown along with the scaled Lorenz system input current Iinput. Here Iext = 0.2031, Amp = 0.001, and κ = 10^4. Note the different scales for Iinput and Vm(t).
In a final example we show the reconstruction when the neuron is being driven with an input current below the threshold for spikes. With Iext = -0.15, the autonomous R15 neuron will remain at an equilibrium level and not produce spikes. A Lorenz input injected into the neuron with Amp = 0.002 and x0 = 43.5 is large enough to cause the neuron to spike. Figure 3.12 shows a sample of the membrane voltage time series along with the corresponding input current. Since the spiking rate of the neuron is much lower than before, κ is increased to 2 × 10^5. This slows down the dynamics of the Lorenz input relative to the neuron dynamics. Spikes occur during increasing portions of the input current and are absent for low values of input current. Figure 3.13 shows the distribution of ISIs, which has an entropy H(Δ) = 5.3. The low spiking rate shows up in the distribution in the form of large numbers of long ISIs. For the reconstruction of the input, larger embedding dimensions were needed. A sample of the reconstruction

FIGURE 3.10. PISI(Δ) for R15 model neuron output when a scaled x(t) signal from the Lorenz system is presented with Iext = 0.2031. The entropy of this distribution is H(Δ) = 9.5.
FIGURE 3.11. ISI reconstruction of the input Lorenz signal to an R15 neuron. The solid line is the actual input to the neuron. The dots joined by dashed lines are the ISI reconstructions. The embedding dimension of the reconstruction dE is 2, the time delay T is 1, Iext = 0.2031, κ is 10^4, and a linear map was used. The RMS error of the estimates over 10,000 estimations is σ = 6.1 × 10^-4 and the maximum error is about 0.007.

FIGURE 3.12. A segment of the R15 neuron model output Vm(t) shown along with the scaled Lorenz system input current Iinput. Here Iext = -0.15, Amp = 0.002, and κ = 2 × 10^5. Note the different scales for Iinput and Vm(t).
FIGURE 3.13. PISI(Δ) for R15 model neuron output when a scaled x(t) signal from the Lorenz system is presented with Iext = -0.15. The entropy of this distribution is H(Δ) = 5.3.
is shown in Figure 3.14, using dE = 7 and T = 1. For this fit the RMS reconstruction error is σ = 0.0094 with a maximum error of 0.03. These errors are noticeably higher than in the previous two examples.

FIGURE 3.14. ISI reconstruction of the input Lorenz signal to an R15 neuron. The solid line is the actual input to the neuron. The dots joined by dashed lines are the ISI reconstructions. The embedding dimension of the reconstruction dE is 7, the time delay T is 1, Iext = -0.15, κ is 2 × 10^5, and a linear map was used. The RMS error of the estimates over 10,000 estimations is σ = 0.0094 and the maximum error is about 0.03.
The accuracy of the reconstruction method depends on a high spiking rate in the neuron relative to the time scale of the input signal, since only one reconstructed input value is generated for each spike. If the spiking rate of the neuron is low relative to the time scales of the input signal, then the neuron will undersample the input signal and miss many of its features. This limitation can be demonstrated by decreasing the time scale parameter κ, thereby speeding up the dynamics of the input. During the longer ISIs the input current can change by large amounts. The reconstruction undersamples the input, but interpolation can fill in some of the gaps. As κ is decreased further, the reconstruction will further degrade.
4 Discussion
In previous research on the encoding of chaotic attractors in spike trains, the spike trains were produced by nonlinear transformations of chaotic input signals. Threshold crossing neuron models have been used, which generate the spike times at upward crossings of a threshold. This is equivalent to a Poincaré section of the input signal. Also integrate and fire neurons have been studied, which integrate the input signal and fire a spike when it

crosses a threshold, after which the integral is reset to zero. Both of these models have no intrinsic complex dynamics; they cannot produce entropy autonomously. All of the complex behavior is in the input signal. Even though the attractor of a chaotic input can be reconstructed from the ISIs, these models do not account for the complex behavior of real neurons. The input reconstruction method we have presented here allows for complex intrinsic dynamics of the neuron. We have shown that the local polynomial representations of input/output relations realized in reconstructed phase space can extract the chaotic input from the complex interaction between the input signal and neuron dynamics.
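For comparison, the integrate-and-fire transformation just described takes only a few lines; this minimal sketch of ours uses an illustrative time step and threshold:

```python
def integrate_and_fire(signal, dt, threshold):
    """Spike times of an integrate-and-fire unit driven by `signal`.

    The input is integrated; when the running integral crosses the
    threshold, a spike time is recorded and the integral is reset to
    zero. The unit has no intrinsic dynamics, so any entropy in the
    spike train comes entirely from the input signal.
    """
    spikes, integral = [], 0.0
    for i, s in enumerate(signal):
        integral += s * dt
        if integral >= threshold:
            spikes.append(i * dt)
            integral = 0.0
    return spikes

# A constant input yields perfectly regular firing (zero-entropy ISIs).
times = integrate_and_fire([1.0] * 100, dt=0.25, threshold=1.0)
isis = [t1 - t0 for t0, t1 in zip(times, times[1:])]
assert len(set(isis)) == 1   # all interspike intervals identical
```

The contrast with the R15 model is immediate: here a constant drive produces a single ISI value, whereas the R15 neuron generates a nontrivial ISI distribution on its own.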
Other experimental works have used linear kernels to map the spike train
into the input. They have shown that the precise timing of individual spikes
can encode a lot of information about the input Rieke, Warland, de Ruyter
van Steveninck, and Bialek [1997]. And the precise relative timing between
two spikes can carry even more information than their individual timings
combined Brenner, Strong, Koberle, Bialek, and de Ruyter van Steveninck
[2000]. These results may be pointing toward a state space representation
since the time delay embedding vectors used here take into account both
the precise spike timing and the recent history of ISIs. From a dynamical
systems perspective this is important because the state of the system at
the time of the input will affect its response. This is a factor that linear
kernels do not take into account.
The advantage of using local representations of input/output relations
in reconstructed state space lies primarily in the insight it may provide
about the underlying dynamics of the neural transformation process map­
ping analog environmental signals into spike trains. The goal of the work
presented here is not primarily to show we can accurately recover analog
input signals from the ISIs of spike output from neurons, though that is
important to demonstrate. The main goal is to provide clues on how one
can now model the neural circuitry which transforms these analog signals.
The main piece of information in the work presented here lies in the size of the reconstructed space dE, which tells us something about the required dimension of the neural circuit. Here we see that a low dimension can give excellent results, indicating that the complexity of the neural circuit is not fully utilized in the transformation to spikes. Another suggestion of this is in the entropy of the input and output signals. In the case where Iext = 0.1613 the entropy of the analog input is 11.8 while the entropy of the ISI distribution of the output is 8.16. When Iext = 0.2031 the output entropy is 9.5. This suggests, especially in the case of the larger current, that the signal into the R15 neuron model acts primarily as a modulation on the ISI distribution. This modulation may be substantial, as in the case when Iext = 0.1613, but reading the modulated signal does not require complex methods.
Our final example took Iext = -0.15, at which value the undriven neuron has Vm(t) = constant, so it is below threshold for production of action

potentials. In this case the introduction of the stimulus drove the neuron above this threshold and produced a spike train which could be accurately reconstructed. This example is relevant to the behavior of biological neurons which act as sensors for various quantities: visual stimuli, chemical stimuli (olfaction), etc. In the study of biological sensory systems (Turner, Wolfe, Wood, Abarbanel, Rabinovich, and Selverston [2002]) the neural circuitry is quiet in the absence of input signals, yet as we now see the methods are equally valid and accurate.
Acknowledgments: This work was partially supported by the U.S. Department of Energy, Office of Basic Energy Sciences, Division of Engineering and Geosciences, under Grants No. DE-FG03-90ER14138 and No. DE-FG03-96ER14592, by a grant from the National Science Foundation, NSF PHY0097134, by a grant from the Army Research Office, DAAD19-01-1-0026, by a grant from the Office of Naval Research, N00014-00-1-0181, and by a grant from the National Institutes of Health, NIH R01 NS40110-01A2. ET acknowledges support from NSF Traineeship DGE 9987614.
References
Abarbanel, H. D. I. [1996], The Analysis of Observed Chaotic Data. Springer, New York.
Brenner, N., S. P. Strong, R. Koberle, W. Bialek, and R. de Ruyter van Steveninck
[2000], Synergy in a Neural Code, Neural Computation 12, 1531-52.
Canavier, C. C., J. W. Clark, and J. H. Byrne [1990], Routes to Chaos in a Model
of a Bursting Neuron, Biophys. J. 57, 1245-51.
Castro, R. and T. Sauer [1997], Correlation Dimension of Attractors Through
Interspike Intervals, Phys. Rev. E 55, 287-90.
Castro, R. and T. Sauer [1999], Reconstructing Chaotic Dynamics through Spike
Filters, Phys. Rev. E 59, 2911-17.
Fraser, A. M. [1989], Information Theory and Strange Attractors, PhD thesis,
University of Texas, Austin.
Lorenz, E. N. [1963], Deterministic Nonperiodic Flow, J. Atmos. Sci. 20, 130-41.
Mane, R. [1981], On the Dimension of the Compact Invariant Sets of Certain
Nonlinear Maps. In Rand, D. and L. S. Young, editors, Dynamical Systems
and Turbulence, Warwick, 1980, volume 898, page 230, Berlin. Springer.
Pavlov, A. N., O. V. Sosnovtseva, E. Mosekilde, and V. S. Anishcenko [2000],
Extracting Dynamics from Threshold-crossing Interspike Intervals: Possibilities
and Limitations, Phys. Rev. E 61, 5033-44.
Pavlov, A. N., O. V. Sosnovtseva, E. Mosekilde, and V. S. Anishcenko [2001],
Chaotic Dynamics from Interspike Intervals, Phys. Rev. E 63, 036205.

1. Reading Neural Encodings 21
Pesin, Y. B. [1977], Lyapunov Characteristic Exponents and Smooth Ergodic Theory, Usp. Mat. Nauk. 32, 55. English translation in Russian Math. Surveys 32, 55 (1977).
Plant, R. E. and M. Kim [1976], Mathematical Description of a Bursting Pacemaker Neuron by a Modification of the Hodgkin-Huxley Equations, Biophys. J. 16, 227-44.
Press, W. H., S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery [1992], Numerical Recipes in FORTRAN. Cambridge University Press, Cambridge.
Rhodes, C. and M. Morari [1998], Determining the Model Order of Nonlinear
Input/Output Systems, AIChE J. 44, 151-63.
Richardson, K. A., T. T. Imhoff, P. Grigg, and J. J. Collins [1998], Encoding
Chaos in Neural Spike Trains, Phys. Rev. Lett. 80, 2485-88.
Rieke, F., D. Warland, R. de Ruyter van Steveninck, and W. Bialek [1997], Spikes: Exploring the Neural Code. The MIT Press, Cambridge, MA.
Sauer, T. [1994], Reconstruction of Dynamical Systems from Interspike Intervals, Phys. Rev. Lett. 72, 3811-14.
Shannon, C. E. [1948], A Mathematical Theory of Communication, Bell Syst. Tech. J. 27, 379-423 and 623-656.
Takens, F. [1981], Detecting Strange Attractors in Turbulence. In Rand, D.
and L. S. Young, editors, Dynamical Systems and Turbulence, Warwick, 1980,
volume 898, page 366, Berlin. Springer.
Turner, E. C., J. H. Wolfe, K. Wood, H. D. I. Abarbanel, M. I. Rabinovich, and A. I. Selverston [2002], Reading Neural Codes: The Importance of Spike Patterns. To be submitted to Nature Neuroscience, September 2002.

2
Boolean Dynamics with
Random Couplings
Maximino Aldana
Susan Coppersmith
Leo P. Kadanoff
To Larry Sirovich, on the occasion of his 70th birthday.
ABSTRACT This paper reviews a class of generic dissipative dynamical
systems called N-K models. In these models, the dynamics of N elements,
defined as Boolean variables, develop step by step, clocked by a discrete
time variable. Each of the N Boolean elements at a given time is given
a value
which depends upon K elements in the previous time step. We
review the work of many authors on the behavior of the models, looking
particularly at the structure and lengths of their cycles, the sizes of their
basins of attraction, and the flow of information through the systems. In
the limit of infinite N, there is a phase transition between a chaotic and
an ordered phase, with a critical phase in between. We argue that the
behavior of this system depends significantly on the topology of the network
connections. If the elements are placed upon a lattice with dimension d, the
system shows correlations related to the standard percolation or directed
percolation phase transition on such a lattice. On the other hand, a very
different behavior is seen in the Kauffman net in which all spins are equally
likely to be coupled to a given spin. In this situation, coupling loops are
mostly suppressed, and the behavior of the system is much more like that
of a mean field theory. We also describe possible applications of the models
to, for example, genetic networks, cell differentiation, evolution, democracy
in social systems and neural networks.
Contents
1 Introduction
1.1 Structure of Models
1.2 Coupling Functions
1.3 The Updates
1.4 Symmetry Properties
1.5 Outline of the Paper
E. Kaplan et al. (eds.), Perspectives and Problems in Nonlinear Science
© Springer-Verlag New York, Inc. 2003

24 M. Aldana and S. Coppersmith and L. Kadanoff
2 Information Flow
2.1 Response to Changes
2.2 Percolation and Phase Behavior
2.3 Lattice versus Kauffman net
2.4 Calculations of overlap and divergence
3 Cycle Behavior
3.1 Linkage Loops and Cycles
3.2 Phase Transitions for Cycles
3.3 Soluble Models
3.4 Different Phases - Different Cycles
4 Reversible Models
5 Beyond Cycles
5.1 Non-robustness of Cycles
5.2 Characterization via noise
6 Applications
6.1 Genetic networks and cell differentiation
6.2 Evolution
6.3 Social Systems
6.4 Neural Networks
7 Conclusions
References

1 Introduction
In this review, we describe the dynamics of a set of N variables, or elements, which each have two possible values (say 0 and 1). These elements interact with each other according to some given interaction rules, specified through a set of Boolean coupling functions that determine the variables at the next time-step, and thereby give the dynamics of the system. Such a discrete stepping of a set of Boolean variables, also known in general terms as a Boolean network, is of potential interest in several different fields, ranging from gene regulation and control, to modeling democracy and social organization, to understanding the behavior of glassy materials.

The models were originally studied primarily for their biological interest, specifically by Stuart Kauffman, who introduced the so-called N-K model in the context of gene expression and fitness landscapes in 1969 (Kauffman [1969, 1974, 1995, 1993, 1990, 1984]). Since Kauffman's original work, the scientific community has found a broad spectrum of applicability of these models. Specific biological problems studied include cell differentiation (Huang and Ingber [2000]), immune response (Kauffman and Weinberger [1989]), evolution (Bornholdt and Sneppen [1998]; Zawidzki [1998]; Bornholdt and Sneppen [2000]; Ito and Gunji [1994]), regulatory networks (Bornholdt and Rohlf [2000]), and neural networks (Wang, Pichler, and Ross [1990]; Derrida, Gardner, and Zippelius [1987]; Kurten [1988a]; Bornholdt and Rohlf [2000]). In the first two examples, the basic binary element might

2. Dynamics in Boolean Networks
be a chemical compound, while in the last it might be the state of firing of a neuron. A computer scientist might study a similar set of models, calling the basic elements gates, and be thinking about the logic of computer design (Atlan, Fogelman-Soulie, Salomon, and Weisbuch [1981]; Lynch [1995]) or optimization (Lee and Han [1998]; Stauffer [1994]). Earlier work in the mathematical literature (Harris [1960]; Metropolis and Ulam [1953]) studied random mapping models, which are a subset of the models introduced by Kauffman. This same kind of problem has also drawn considerable attention from physicists interested in the development of chaos (Glass and Hill [1998]; Luque and Sole [1998, 1997a]; Kurten and Beer [1997]; Mestl, Bagley, and Glass [1997]; Bagley and Glass [1996]; Bhattacharjya and Liang [1996b]; Lynch [1995]) and also in problems associated with glassy and disordered materials (Derrida and Flyvbjerg [1986]; Derrida and Pomeau [1986]; Derrida [1987b]; Derrida and Flyvbjerg [1987a]). In these examples, the Boolean element might be an atomic spin or the state of excitation of a molecule. Kauffman models have even been applied to quantum gravity problems (Baillie and Johnston [1994]).
In some sense, the type of Boolean networks introduced by Kauffman can be considered as a prototype of a generic dynamical system, as they present chaotic as well as regular behavior and many other typical structures of dynamical systems. In the thermodynamic limit N → ∞, there can be "phase transitions" characterized by a critical line dividing chaotic from regular regions of state space. The study of the behavior of the system at and near the phase transitions, which are attained by changing the model parameters, has been a major concern.
As we shall describe in more detail below, these models are often studied in a version in which the couplings among the Boolean variables are picked randomly from some sort of ensemble. In fact, they are often called N-K models because each of the N elements composing the system interacts with exactly K others (randomly chosen). In addition, their coupling functions are usually picked at random from the space of all possible functions of K Boolean variables. Clearly this is a simplification of real systems, as there is no particular problem which has such a generically chosen coupling. All real physical or biological problems have very specific couplings determined by the basic structure of the system at hand. However, in many cases the coupling structure of the system is very complex and completely unknown. In those cases the only option is to study the generic properties of generic couplings. One can then hope that the particular situation has, as its most important properties, ones which it shares with generic systems.
Another simplification is the binary nature of the variables under study. Nevertheless, many systems have important changes in behavior when "threshold" values of the dynamical variables are reached (e.g. the synaptic firing potential of a neuron, or the activation potential of a given chemical reaction in a metabolic network). In those cases, even though the variables may vary continuously, the binary approach is very suitable, representing

26 M. Aldana and S. Coppersmith and L. Kadanoff
the above-below threshold state of the variables. The Boolean case is particularly favorable for the study of generic behavior. If one were to study a continuum, one would have to average the couplings over some rather complicated function space. For the Booleans, the function space is just a list of the different possible Boolean functions of Boolean variables. Since the space is enumerable, there is a very natural measure on the space. The averages needed for analytic work or simulations are direct and easy to define.
In addition to its applications, the study of generic systems is of mathematical interest in and of itself.
1.1 Structure of Models
Any model of a Boolean net starts from N elements {σ_1, σ_2, ..., σ_N}, each of which is a binary variable σ_i ∈ {0, 1}, i = 1, 2, ..., N. In the time stepping, each of these Boolean elements is given by a function of the other elements. More precisely, the value of σ_i at time t + 1 is determined by the value of its K_i controlling elements σ_{j_1(i)}, σ_{j_2(i)}, ..., σ_{j_{K_i}(i)} at time t. In symbols,
$$\sigma_i(t+1) = f_i\bigl(\sigma_{j_1(i)}(t), \sigma_{j_2(i)}(t), \ldots, \sigma_{j_{K_i}(i)}(t)\bigr), \qquad (1.1)$$
where f_i is a Boolean function associated with the ith element that depends on K_i arguments. To establish the model completely it is necessary to specify:

• the connectivity K_i of each element, namely, how many variables will influence the value of every σ_i;

• the linkages (or couplings) of each element, which is the particular set of variables σ_{j_1(i)}, σ_{j_2(i)}, ..., σ_{j_{K_i}(i)} on which the element σ_i depends; and

• the evolution rule of each element, which is the Boolean function f_i determining the value of σ_i(t + 1) from the values of the linkages σ_{j_1(i)}(t), σ_{j_2(i)}(t), ..., σ_{j_{K_i}(i)}(t).
Once
these quantities have been specified, equation (1.1) fully determines
the dynamics of the system. In the most general case, the connectivities
Ki may vary from one element to another. However, throughout this work
we will consider only
the case in which the connectivity is the same for all
the nodes: Ki = K, i = 1,2, ... ,N. In doing so, it is possible to talk about
the connectivity K of the whole system, which is an integer parameter by
definition. It is worth mentioning though that when Ki varies from one
element
to another, the important parameter is the mean connectivity of

the system, ⟨K⟩, defined as
$$\langle K \rangle = \frac{1}{N} \sum_{i=1}^{N} K_i .$$
In this way, the mean connectivity might acquire non-integer values. Scale-free networks (Strogatz [2001]; Albert and Barabasi [2002]), which have a very broad (power-law) distribution of K_i, can also be defined and characterized.
Of fundamental importance is the way the linkages are assigned to the elements, as the dynamics of the system depend strongly, both qualitatively and quantitatively, on this assignment. Throughout this paper, we distinguish between two different kinds of assignment: In a lattice assignment all the bonds are arranged on some regular lattice. For example, the K control elements σ_{j_1(i)}, σ_{j_2(i)}, ..., σ_{j_K(i)} may be picked from among the 2d nearest neighbors on a d-dimensional hyper-cubic lattice. Alternatively, in a uniform assignment each and every element has an equal chance of appearing in this list. We shall call a Boolean system with such a uniform assignment a Kauffman net. (See Figure 1.1.)
Of course, intermediate cases are possible, for example, one may consider
systems
with some linkages to far-away elements and others to neighboring
elements.
Small-world networks (Strogatz [2001]) are of this type.
For convenience, we will denote the whole set of Boolean elements {σ_1(t), σ_2(t), ..., σ_N(t)} by the symbol Σ_t:
$$\Sigma_t = \{\sigma_1(t), \sigma_2(t), \ldots, \sigma_N(t)\}; \qquad (1.2)$$
Σ_t then represents the state of the system at time t. We can think of Σ_t as an integer number which is the base-10 representation of the binary chain
FIGURE 1.1. The different kinds of linkages in a one-dimensional system. (a) In the Kauffman net the linkages of every element σ_i are chosen at random among all the other elements σ_1, ..., σ_N. (b) In a completely ordered lattice, the linkages are chosen according to the geometry of the space. In the case illustrated in this figure, σ_i is linked to its first and second nearest neighbors.

{σ_1(t), σ_2(t), ..., σ_N(t)}. Since every variable σ_i has only two possible values, 0 and 1, the number of all the possible configurations is Ω = 2^N, so that Σ_t can be thought of as an integer satisfying 0 ≤ Σ_t < 2^N. This collection of integers is the base-10 representation of the state space of the system. Although it is not essential for the understanding of the underlying dynamics of the network, this integer representation proves to be very useful in the implementation of computational algorithms used in numerical simulations (at least for small values of N).
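To make the construction concrete, here is a minimal Python sketch (ours, not from the chapter; all names are illustrative) of one quenched realization: random linkages, random truth tables, the simultaneous update of equation (1.1), and the base-10 integer label of a configuration:

```python
import random

def make_realization(N, K, rng):
    """Pick random linkages and random coupling functions (one quenched realization)."""
    links = [rng.sample(range(N), K) for _ in range(N)]   # K distinct inputs per element
    # each f_i is a truth table: one output bit for each of the 2^K input patterns
    tables = [[rng.randint(0, 1) for _ in range(2 ** K)] for _ in range(N)]
    return links, tables

def step(state, links, tables):
    """Update every sigma_i(t+1) simultaneously from the state at time t."""
    new = []
    for inputs, table in zip(links, tables):
        idx = 0
        for j in inputs:                 # pack the K controlling bits into a table index
            idx = (idx << 1) | state[j]
        new.append(table[idx])
    return new

def as_integer(state):
    """Base-10 label of the configuration, 0 <= Sigma_t < 2^N."""
    return int("".join(map(str, state)), 2)

rng = random.Random(0)
N, K = 8, 2
links, tables = make_realization(N, K, rng)
state = [rng.randint(0, 1) for _ in range(N)]
for _ in range(5):
    state = step(state, links, tables)
print(as_integer(state))
```

With N = 8 the state space has Ω = 2^8 = 256 configurations, small enough to enumerate exhaustively.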
A note of caution is relevant at this point. We should distinguish the purely Boolean model described in this work from Kauffman's N-K landscape model, which provides a description of fitness landscapes by including a fitness function to be optimized. We are not going to review fitness landscapes, since abundant literature already exists on this topic (Wilke, Ronnenwinkel, and Martinetz [2001]; Kauffman [1995, 1993]).
1.2 Coupling Functions
The arguments of the coupling functions f_i(σ_{j_1(i)}, ..., σ_{j_K(i)}) can take on 2^K different values. One specifies the functions by giving, for each of these values of the arguments, a value to the function. Therefore there are a total of
$$2^{2^K} \qquad (1.3)$$
different possible functions. In Table 1.1 we give two examples of coupling functions for the case K = 3. There are 2^3 = 8 configurations of the arguments σ_{j_1(i)}, σ_{j_2(i)}, σ_{j_3(i)}, and for each one of these configurations the function f_i can acquire the values 1 or 0. For K = 3 there are 2^{2^3} = 256 tables similar to the one shown in Table 1.1, one for each Boolean coupling function. Different tables differ in their assignments of 0's and 1's. If we assign a probability or weight to each of these functions, one gets an ensemble of possible couplings.
Possible ensemble choices abound. One ensemble used extensively by Kauffman and others is the uniform distribution, in which all functions are weighted equally. Alternatively, a magnetization bias¹ may be applied by weighting the choice of functions with an outcome 0 with a probability p, and the outcome 1 with a probability 1 − p (see, for example, Bastolla and Parisi [1998b]). One may also give different weights to particular types of functions. For example, one can consider only forcing functions or canalizing functions (Stauffer [1987a]; Kauffman [1969, 1984]), in which the function's value is determined when just one of its arguments is given a specific value. The second function shown in Table 1.1 is a canalizing function. Another possibility is to specify the value of the function in order to simulate

¹The word magnetization comes from the possibility of identifying each element with an atomic spin, which is a very small magnet.

                              Random                         Canalizing
σ_{j_1}  σ_{j_2}  σ_{j_3}     f(σ_{j_1}, σ_{j_2}, σ_{j_3})   f(σ_{j_1}, σ_{j_2}, σ_{j_3})
   0        0        0                  0                            1
   0        0        1                  1                            1
   0        1        0                  1                            1
   0        1        1                  0                            1
   1        0        0                  1                            0
   1        0        1                  0                            1
   1        1        0                  1                            0
   1        1        1                  1                            0

TABLE 1.1. Illustration of two Boolean functions of three arguments. The first function is a particular random function, whereas the second one is a canalizing function of the first argument σ_{j_1}. When this argument is 0, the output of the function is always 1, while if σ_{j_1} = 1, the output can be either 0 or 1.
the additive properties of neurons (Bornholdt and Rohlf [2000]; Genoud
and Metraux [1999]; Cheng and Titterington [1994]; Wang, Pichler, and
Ross [1990]; Kurten [1988a]; Derrida, Gardner, and Zippelius [1987]).
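A sketch of how such ensembles can be sampled (our illustration; the function names are ours): draw a truth table whose outputs are 0 with probability p, the magnetization bias, and test whether the result is canalizing in the sense just described:

```python
import random
from itertools import product

def random_function(K, p, rng):
    """Truth table for f: {0,1}^K -> {0,1}; each output is 0 with probability p
    (the magnetization bias of the ensemble) and 1 with probability 1 - p."""
    return {args: (0 if rng.random() < p else 1) for args in product((0, 1), repeat=K)}

def is_canalizing(table, K):
    """True if fixing some single argument to some value determines the output."""
    for arg in range(K):
        for val in (0, 1):
            outs = {out for args, out in table.items() if args[arg] == val}
            if len(outs) == 1:
                return True
    return False

rng = random.Random(1)
f = random_function(3, 0.5, rng)
print(is_canalizing(f, 3))
```

With p far from 1/2 most sampled tables become constant-heavy and canalizing; at p = 1/2 all 2^{2^K} tables are equally likely.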
Here we
enumerate some of the coupling functions occurring for different
values of
the connectivity K.
• For K = 0 there are but two functions, corresponding to the two possible values of a Boolean variable: tautology f = 1 and contradiction f = 0. Together these functions form a class which we might call A.

• For K = 1, in addition to the class A, there exists another class B in which f(σ) can take on the value σ, called identity, and the value ¬σ, called negation. Thus there are a total of four functions, represented as columns in Table 1.2.
      Class A      Class B
σ     A_0   A_1    B_1   B_N
0      0     1      0     1
1      0     1      1     0

TABLE 1.2. Boolean functions for K = 1. The first two functions form the class A of constant functions, A_0(σ) = 0 and A_1(σ) = 1. The other two functions form the class B, which consists of the identity B_1(σ) = σ and the negation B_N(σ) = ¬σ.
• The situation for K = 2 has been particularly carefully studied. Here there are four classes of functions f(σ_1, σ_2) (Lynch [1993b]; Coppersmith, Kadanoff, and Zhang [2001a]). Each class is invariant under making the interchange 0 ↔ 1 in either arguments or value of f. The classes are A (two constant functions), B_1 (four canalizing functions which depend on one argument), B_2 (eight canalizing functions which depend on two arguments), and C (two non-canalizing functions). These functions are explicitly shown in Table 1.3.

σ_{j_1} σ_{j_2}   Class A   Class B_1    Class B_2          Class C
   0      0        1  0     0 1 0 1      1 0 0 0 0 1 1 1    1  0
   0      1        1  0     0 1 1 0      0 1 0 0 1 0 1 1    0  1
   1      0        1  0     1 0 0 1      0 0 1 0 1 1 0 1    0  1
   1      1        1  0     1 0 1 0      0 0 0 1 1 1 1 0    1  0

TABLE 1.3. Boolean functions for the case K = 2. The 16 functions can be arranged in four different classes which differ in their symmetry properties (see text).
Several
calculations have been done by giving different weights to the
different classes (see for example Lynch [1995]; Stauffer [1987a]) .
1.3 The Updates
Once the linkages and the f_i's are given, one is said to have defined a realization of the model. Given the realization, one can define a dynamics by using equation (1.1) to update all the elements at the same time. This is called a synchronous update. In this paper, we assume a synchronous update unless stated otherwise. Alternatively, one may have a serial model in which one updates only one element at a time. This element may be picked at random or by some predefined ordering scheme.
Additional choices must be made. One can:
1. Keep the same realization through all time. We then have a quenched model.

2. Pick an entirely new realization after each time step. The model is then said to be annealed.²

3. Employ a genetic algorithm in which the system slowly modifies its realization so as to approach a predefined goal (Bornholdt and Sneppen [2000]; Stern [1999]; Stauffer [1994]).

4. Intermediate choices are also possible (Baillie and Johnston [1994]; Bastolla and Parisi [1996]).
Almost always, we shall regard the real system as one which updates
synchronously and which is quenched so that the interactions are fixed for
all
time. The annealed and sequential models will be regarded as approxi­
mations which can provide clues to the behavior of this "real system". The
²These terms have been borrowed from the physics of alloys, in which something which is cooled quickly so that it cannot change its configuration is said to be quenched, and something which is held at a high temperature for a long time so that it can respond to its environment is described as annealed. Hence these terms are applied to situations in which one wants to distinguish between problems with fixed versus changing interactions.

quenched model has time-independent dynamics describing motion within the state space of size Ω = 2^N. One iterates the model through time by using equation (1.1) and thereby obtains a dynamics for the system. Each of the Ω different initial conditions will generate a motion, which will eventually fall into a cyclical behavior.
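Because the state space is finite and the quenched map is deterministic, iterating from any initial condition must eventually revisit a state; a small sketch (ours, with a hypothetical four-state toy map) that measures the transient length and the cycle length:

```python
def find_cycle(step_int, sigma0):
    """Iterate an integer-encoded update map until a state repeats;
    return (transient length, cycle length)."""
    seen = {}          # state -> first time it was visited
    s, t = sigma0, 0
    while s not in seen:
        seen[s] = t
        s = step_int(s)
        t += 1
    return seen[s], t - seen[s]

# Toy map on four states: 0 -> 1 -> 2 -> 1 (a cycle of length 2), and 3 -> 0.
toy = [1, 2, 1, 0]
print(find_cycle(lambda s: toy[s], 0))   # (1, 2): transient 1, cycle length 2
```

For a real realization, `step_int` would decode the integer into bits, apply equation (1.1), and re-encode; storing visited states in a dictionary limits this direct method to small N.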
1.4 Symmetry Properties
Typically, each individual realization of these models shows little or no symmetry. However, the average over realizations has quite a large symmetry group, and the symmetry is important to model behavior. For example, the random mapping model (Harris [1960]), which is the K → ∞ limit of the N-K model of the Kauffman net, has a full symmetry under the interchange of all states forming the state space. For finite values of K, the Kauffman net is symmetric under the interchange of any two basic elements. One can also have a symmetry under the interchange of the two values of each individual element if one chooses the couplings at random, or with the appropriate symmetry. One can use dynamics that have reversal symmetry (Harris [1960]; Coppersmith, Kadanoff, and Zhang [2001a,b]), and that choice will have a profound effect upon the structure of the cycles.
1.5 Outline of the Paper
To define fully the object of study, one must describe the dynamical process and the class of realizations that one wishes to study. For example, one can fix N and K, and study all of the properties of all realizations of frozen systems with those values. One might pick a realization at random among all possible linkages and functions, and develop its properties. Then one would pick another realization, and study that. Many such steps would give us the properties of the frozen system averaged over realizations with a given N and K. What properties might we wish to study?
In the next section, we describe the gross information transfer through
the system by describing how the system will respond to a change in initial
data or couplings. There are three different phases that have qualitatively
different information transfer properties. We argue that the Kauffman net,
which can transfer information from any element to any other, is qualita­
tively different from lattice systems, in which the information transfer oc­
curs
through a d-dimensional space. We argue that the information transfer
on the lattice is qualitatively, and even quantitatively, similar to the kind
of information flow studied in percolation problems.
Section 3 is
concerned with the temporal recurrence in the network as
reflected in the statistical properties of its cycles. Here we base our argu­
ments upon a discussion of two important limiting cases, K = 1, and very
large
K. The first case is dominated by situations in which there are a
few
short linkage loops. In the second, the Kauffman net shows a behavior

which
can be analyzed by comparison with a random walk through the
state space. The distribution of cycle lengths is qualitatively different from
any of the quantities that are commonly studied in percolation problems.
So
the results characterizing the cycles are substantially different from the
behaviors usually studied in phase transition problems. We argue in ad­
dition that the cycles of Kauffman nets and of networks on d-dimensional
lattices differ substantially.
Generic Kauffman models are dissipative in the sense that several dif­
ferent
states may be mapped into one. Consequently, information is lost.
Nonetheless, not all kinds of problems are generic. In Section 4 we analyze
the dynamics of time-reversible Kauffman models, in which every possi­
ble
state is in exactly one cycle. Any cycle can be traversed equally well
forward
or backward. A phase transition also occurs in the time-reversible
models,
but as we will see, time-reversible orbits have quite different prop­
erties from dissipative ones. For example, in the chaotic phase, the number
of different attractors in the reversible case grows exponentially with the
system size, whereas in the dissipative case it grows linearly. Also, the in­
terplay between the discrete symmetry and quenched randomness in the
time-symmetric models can lead to enormous fluctuations of orbit lengths.
In Section 5 we discuss the robust properties of the N-K models. We
argue that, due to the very large fluctuations present in such important
quantities as the number of different cycles or the cycle lengths, a char­
acterization of these models just via cycle properties does not reflect the
robustness of the dynamics of the system. Therefore, other types of characterization are needed. In particular, we focus our attention on a characterization via noise, initially proposed almost simultaneously by Golinelli et al. and Miranda et al. (Golinelli and Derrida [1989]; Miranda and Parga [1989]). By considering the time it takes for two trajectories in the state space to cross, it is apparent that the stochastic dynamics of the Kauffman model evolving under the influence of noise is very robust, in the sense that different Kauffman models exhibit the same kind of behavior.
As we already mentioned at the beginning of this section, the spectrum of
applications of the Kauffman model comprises a very wide range of topics
and fields. Therefore, in Section 6 we review only a few of the applications (genetic networks and cell differentiation, evolution, organization in social systems, and neural networks) which, from our point of view, stand out from this spectrum of varied and imaginative ideas.
2 Information Flow
2.1 Response to Changes
The first thing to study in an N-K model is its response to changes. This
response is important because the actual values of the elements often do

not matter at all. If, for example, we pick the functions f_i at random among the class of all Boolean functions of K variables, then the ensemble is invariant under flipping the value of the ith element. In that case, only changes matter, not values.
In computer studies, such changes can be followed quite simply. One
follows a few different time developments of systems that are identical
except for a small number of selected changes in the coupling functions or
initial data, and sees how the differences between the configurations change
in
time. One can do this for two configurations or for many, studying pair­
wise differences between
states, or things which remain identical across all
the time-developments studied.
2.1.1 Hamming Distance and Divergence of Orbits
For simplicity, imagine starting out from two different possible initial states, $\Sigma_0$ and $\widetilde{\Sigma}_0$, which differ in the values of a few elements. One can then observe the time development of these configurations under the same dynamics, finding, for example, the distance D(t) between the configurations as a function of time:
$$D(t) = \sum_{i=1}^{N} \bigl(\sigma_i(t) - \widetilde{\sigma}_i(t)\bigr)^2. \qquad (2.2)$$
If the transfer of information in the system is localized, this distance will never grow very large. If, however, the system is sufficiently chaotic that information may be transferred over the entire system, then in the limit of large N this Hamming distance can diverge for large times. Another interesting measure is the normalized overlap between configurations, a(t), defined as
$$a(t) = 1 - N^{-1} D(t). \qquad (2.3)$$
One will wish to know whether a goes to unity or a lesser value as t → ∞. If the overlap always goes to unity, independently of the starting states, then the system cannot retain a nonzero fraction of the information contained in its starting configuration. Alternatively, when a(∞) is less than unity, the system "remembers" a nonzero fraction of its input data.
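Both measures are one-liners; a sketch (ours) for configurations stored as 0/1 lists:

```python
def hamming(sigma, sigma_tilde):
    """D(t) of equation (2.2): the number of elements at which the two
    configurations differ (the squared difference of bits is 0 or 1)."""
    return sum((a - b) ** 2 for a, b in zip(sigma, sigma_tilde))

def overlap(sigma, sigma_tilde):
    """Normalized overlap a(t) of equation (2.3)."""
    return 1.0 - hamming(sigma, sigma_tilde) / len(sigma)

print(hamming([1, 0, 1, 1], [1, 1, 1, 0]))   # 2
print(overlap([1, 0, 1, 1], [1, 1, 1, 0]))   # 0.5
```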
2.1.2 Response to Damage
So far, we have considered the system's response to changes of the initial
data. One can also attack the quenched problem by considering two sys­
tems, each with the same initial data, but with a few of the linkages varied.
Then one can ask: given such "damage" to the system, how much do the

subsequent states of the system vary? Do they become more and more alike
or do they diverge? What is the likelihood of such a divergence?
These considerations of robustness, both to damage and to variation in initial data, are very important for the evaluation of the effectiveness of a network, either for computations or as part of a biological system. There
have been fewer studies of the effect of damage than that of initial data.
Usually the two types of robustness occur together (Luque and Sole [2000];
De Sales, Martins, and Stariolo [1997]). Damage has been studied for itself
(Stauffer [1994]; Corsten and Poole [1988]).
2.2 Percolation and Phase Behavior
2.2.1 Percolation of Information
In the limiting case in which N approaches infinity, the different types
of N-K models all show three different kinds of phases, depending upon
the form of information transfer in the system. If the time development
transfers information to a number of elements that grows exponentially in
time, the system is said to be in a chaotic phase. Typically, this behavior
occurs for larger values of K, up to and including K = N. If, on the
other hand, a change in the initial data typically propagates to only a
finite
number of other elements, the system is said to be in a frozen phase.
This behavior will arise for smaller values of K, most especially K = 0,
and usually K = 1. There is an intermediate situation in which information
typically flows to more and more elements as time goes on, but this number
increases only algebraically. This situation is described as a critical phase.
When the linkages and the hopping among configurations are sufficiently random, one can easily perform a quite precise calculation of the boundary which separates these phases. Imagine starting with a state $\Sigma_0$, containing a very large number, N, of Boolean elements, picked at random. Imagine further another configuration $\widetilde{\Sigma}_0$ in which the vast majority of the elements have the same value as in $\Sigma_0$, but nevertheless there are a large number of elements, picked at random, which are different. The Hamming distance at time zero, D(0), is the number of changed elements. Now take the simplest N-K system in which all the linkages and the couplings are picked at random. On average, a change in a single element will change the argument of K functions, so there will be K D(0) functions affected. Each of these will have a probability one half of changing their value. (The functions, after all, are quite random.) Thus the Hamming distance after the first time step will be
$$D(1) = 0.5\,K\,D(0).$$
If the couplings and connections are sufficiently random, then at the start of
the next step, the newly changed elements and their couplings will remain
quite random. Then the same sort of equation will apply in the next time
step, and the next. Just so long as the fraction of changed elements remains

2. Dynamics in Boolean Networks 35
small, and the randomness continues, the Hamming distance will continue
to change by a factor of K /2 so that
$$D(t+1) = 0.5\,K\,D(t),$$
which then has the solution
$$D(t) = D(0)\exp[\,t \ln(0.5K)\,]. \qquad (2.4)$$
For K > 2 the number of changed elements will grow exponentially, for K < 2 it will decay exponentially, and for K = 2 there will be neither exponential growth nor decay, and the behavior will be substantially influenced by fluctuations. Thus, by varying the value of the connectivity, the system settles down into one of the three following phases:

• Chaotic (K > 2): the Hamming distance grows exponentially with time.

• Frozen (K < 2): the Hamming distance decays exponentially with time.

• Critical (K_c = 2): the temporal evolution of the Hamming distance is determined mainly by fluctuations.
In deriving equation (2.4) we have assumed that the coupling functions f_i of the system acquire the values 0 and 1 with the same probability p = 1/2. Nonetheless, as we will see below, the chaotic, frozen, and critical phases are also present in the more general case in which the coupling functions f_i evaluate to 0 and 1 with probabilities p and 1 − p respectively. For a given value of p, there is a critical value K_c(p) of the connectivity below which the system is in the frozen phase and above which the chaotic phase is attained. Conversely, for a given connectivity K ≥ 2, a critical value p_c(K) of the probability bias separates the chaotic and the frozen phases.
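These thresholds follow from the biased version of the argument above: a changed element flips each of the K affected functions with probability 2p(1 − p) rather than 1/2, so the mean-field growth factor becomes 2p(1 − p)K and criticality sits at 2p(1 − p)K = 1. The sketch below is ours and assumes this standard annealed condition:

```python
import math

def growth_factor(K, p):
    """Mean-field expansion rate of the Hamming distance:
    D(t+1) = 2 p (1-p) K D(t).  >1 chaotic, <1 frozen, =1 critical."""
    return 2.0 * p * (1.0 - p) * K

def p_c(K):
    """Critical bias for K >= 2, solving 2 p (1-p) K = 1."""
    return 0.5 * (1.0 - math.sqrt(1.0 - 2.0 / K))

def K_c(p):
    """Critical connectivity for a given bias p."""
    return 1.0 / (2.0 * p * (1.0 - p))

print(round(p_c(4), 4))   # 0.1464, the value quoted below for the K = 4 Kauffman net
print(K_c(0.5))           # 2.0: the unbiased case recovers K_c = 2
```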
The behavior of the normalized Hamming distance, D(t)/N, can be seen in Figures 2.1 and 2.2, which respectively are for the Kauffman net and a two-dimensional lattice system. In both cases the system has N = 10^4 elements and the connectivity is K = 4. The linkages of every element of the Kauffman net are chosen randomly, whereas in the two-dimensional lattice each element receives inputs from its four nearest neighbors. Both figures contain three curves, with the parameter p picked to put the systems into the three different phases. For the Kauffman net $p_c = \frac{1}{2}\bigl(1 - \sqrt{1 - 2/K}\,\bigr)$ (see equation (2.8) below). The value of p_c is not very well known for the two-dimensional lattice, but the best numerical estimates indicate that p_c ≈ 0.29 for an infinite lattice (Stauffer [1988]; Stölzle [1988]; Weisbuch and Stauffer [1987]; Stauffer [1987b]; Derrida and Stauffer [1986]). For finite lattices the value of p_c has been defined as the average over realizations of the value of p at which a first cluster spanning the whole net appears (Lam [1988]). For a 100 × 100 lattice this value is p_c ≈ 0.27.

FIGURE 2.1. Hamming distance as a function of time for a Kauffman net composed of N = 10000 elements and connectivity K = 4. (a) Log-log graph showing the Hamming distance for the three different regimes of the system: frozen (p = 0.05), critical (p_c = ½(1 − √(1/2)) ≈ 0.1464), and chaotic (p = 0.4). In all the cases the initial Hamming distance was D(0) = 100. (b) Hamming distance for the critical phase (p = p_c) on a linear graph. Note that the Hamming distance initially decreases, and then it rises again to saturate at a constant value that depends weakly on system size.
In the frozen phase the distance has an initial transient but then quite quickly approaches an asymptotic value. In the chaotic phase the distance shows an exponential rise followed by a similar saturation. These behaviors are almost the same for both the Kauffman net and the two-dimensional lattice. On the other hand, in the critical phase the behavior of the Hamming distance is very different in these two systems. In the Kauffman net the distance initially decreases and then increases again, asymptotically approaching a constant value that depends weakly on system size. In contrast, for the lattice the Hamming distance initially grows and then saturates. We will see later that, within the framework of the annealed approximation, the normalized Hamming distance for an infinite Kauffman net approaches zero monotonically in both the frozen and the critical phases (exponentially in the frozen phase, and algebraically in the critical phase). As far as we can tell, the non-monotonic behavior of the Hamming distance in finite systems at K_c shown in Figure 2.1b has not yet been explained.

FIGURE 2.2. Hamming distance in a two-dimensional lattice composed of
N = 100 x 100 elements. Every node in the lattice receives inputs from its four
first
nearest neighbors (K = 4). The three curves (in log-log scale) show the
behavior of the Hamming distance in the three regimes: frozen (p = 0.1), critical
(Pc = 0.27) and chaotic (p = 0.4). Note that in the critical phase the Hamming
distance initially increases algebraically and then saturates at a constant value
that depends on system size.
2.2.2 Limitations on the mean field calculation
Let us look back at the argument which led to equation (2.4). Calculations like this, in which actual system properties are replaced by average system properties, are in general called "mean field" calculations.
Naturally, the results derived as equation (2.4) depend crucially upon the assumptions made. The derivation follows from the assumption that the f_i's in each step are effectively random. (See also the derivations of equations (2.6) and (2.7) below, which also depend upon the randomness assumption.)
The randomness will certainly be correct in the annealed situation, in which the couplings are reshuffled in each step. It will also be true in the Kauffman net in the limiting situation in which K = ∞. In that case, information is spread out over the entire system and thus has a very small chance of producing correlation among the f_i's. The Kauffman net has a likely configuration that permits the replacement of the actual values of the f_i's by their statistical distribution (Hilhorst and Nijmeijer [1987]). However, the approximations used here will not always work. Specifically, they fail in all kinds of finite N situations, or in situations in which the linkages are arranged in a finite-dimensional lattice. In that case, the assumed randomness of the f_i does not hold, because their arguments are not random, and the derived equations will not work. To see this in the simplest example, choose K = N = 1 with
quenched couplings. A brief calculation shows that for any one of the four possible f's, after a one-step initial transient, a(t + 2) = a(t). That does not agree with equation (2.6) derived below. In fact, for any finite dimension and linkages which involve short-range couplings, the overlap is not unity at long times, even in the frozen phase.
More generally, if
the system is put onto a finite dimensional lattice, or
if
the functions are not picked at random, or if the initial elements are not
random, couplings initially used can be correlated with couplings used later
on. Then the information transfer will be different and equation (2.4) will
fail.
However,
the principle that there can be a phase transition in the kind
of information
flow remains quite true for a d-dimensional lattice, and for
other ensembles of coupling functions.
2.2.3 Connections to percolation problems
The transfer of information just described is quite similar to the transfer which occurs in a typical phase transition problem. Generically, these problems have three phases: ordered, critical, and disordered (Ma [1976]; Kadanoff [2000]). The bare bones of such an information transfer problem is described as a percolation problem (Stauffer [1985]).
In one sort of percolation problem one considers a lattice. Each bond or link of the lattice is picked to be either connected or disconnected. The choice is made at random and the probability of connecting a given link is picked to be q. Now one asks about the size of the structure obtained by considering sets of links all connected with one another. For small values of q, these sets tend to be small and isolated. As q is increased, the clusters tend to get larger. At sufficiently large values of q, one or more connected clusters may span the entire lattice. There is a critical value of the probability, denoted as qc, at which such a spanning cluster just begins to form. Properties of the large clusters formed near that critical point have been studied extensively (Stauffer [1985]). The resulting behavior is "universal" in that for an isotropic situation, the critical properties depend only upon the dimensionality d of the lattice, at least when d is less than four. For d > 4, the percolating system obeys mean field equations. When the symmetry properties change, the critical behavior does change. For example, a system with directed percolation (Stauffer [1985]; Owczarek, Rechnitzer, and Guttmann [1997]) has the information flow move preferentially in a particular direction. The critical behavior of directed percolation is different from that of ordinary percolation. It is attractive to explore the connection between the phase transition for percolation, and the one for the N-K model.
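The spanning transition described above can be illustrated with a small Monte Carlo sketch, assuming bond percolation on a square lattice (the lattice size and trial counts below are illustrative; for the 2D square lattice the known bond threshold is qc = 1/2):

```python
import random

def spans(L, q, rng):
    """Open each bond with probability q; return True if an open cluster
    connects the top row to the bottom row of an L x L site lattice."""
    # h[i][j]: bond between sites (i, j) and (i, j+1); v[i][j]: between (i, j) and (i+1, j)
    h = [[rng.random() < q for _ in range(L - 1)] for _ in range(L)]
    v = [[rng.random() < q for _ in range(L)] for _ in range(L - 1)]
    seen = [[False] * L for _ in range(L)]
    stack = [(0, j) for j in range(L)]          # start flood fill from the top row
    for _, j in stack:
        seen[0][j] = True
    while stack:
        i, j = stack.pop()
        if i == L - 1:                           # reached the bottom row: spanning cluster
            return True
        neighbors = (
            (i, j - 1, j > 0 and h[i][j - 1]),
            (i, j + 1, j < L - 1 and h[i][j]),
            (i - 1, j, i > 0 and v[i - 1][j]),
            (i + 1, j, i < L - 1 and v[i][j]),
        )
        for ni, nj, open_bond in neighbors:
            if open_bond and not seen[ni][nj]:
                seen[ni][nj] = True
                stack.append((ni, nj))
    return False

rng = random.Random(0)
L, trials = 20, 200
for q in (0.3, 0.5, 0.7):
    frac = sum(spans(L, q, rng) for _ in range(trials)) / trials
    print(f"q = {q}: spanning fraction ~ {frac:.2f}")
```

Below qc the spanning fraction is near zero and above it near one; the crossover sharpens as L grows, which is the finite-size signature of the critical point.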
Several authors have constructed this connection in detail. For example, Hansen [1988a] looked at the N-K model on lattices for 2, 3 and 4 dimensions and demonstrated numerically that the phase transition occurred when the information transition probability reached the critical value for percolation on the corresponding lattice.

2. Dynamics in Boolean Networks 39

Stölzle [1988] showed the connection to percolation for both sequential and parallel updating for a two-dimensional lattice. However, Hansen [1988b] took very special forms of the connections, using only rules with connectivities of the form of a logical "or". This did not result in behavior like simple percolation but instead quite a different phase transition problem, related to the behavior of diodes and resistors. At roughly the same time, Stauffer [1988] indicated a close numerical correspondence to the two-dimensional percolation problem both in pc and also in the fractal dimension of the spanning cluster. (For pc see also Lam [1988].) He also defined an order parameter, essentially a Hamming distance, that, when plotted as a function of (p − pc), looked just like a typical critical phenomena result. He argued that the N-K model is equivalent to directed percolation. More specifically, Obukhov and Stauffer [1989] argued that the quenched problem has a critical point which is in the same "universality class" (Kadanoff [2000]) as directed percolation. This would imply that the critical point is essentially the same as that of the directed percolation problem. The qualitative properties of both the ordered and the frozen phases would also be essentially similar in the percolation case and the N-K lattice. In the latter situation, the preferred direction would be that of the "time" axis. The structure of both would vary with dimensionality and become like that of mean field theory above four dimensions. This is the same mean field theory which describes the Kauffman net. Thus, the behavior of information transfer in N-K problems was mostly understood in terms of percolation.
2.3 Lattice versus Kauffman net
We can now point to an important difference between systems in which all elements are coupled to all others, as in the Kauffman net, and lattice systems in which the elements which are "close" to one another are likely to be mutually coupled. "Closeness" is a reciprocal relation. If a is close to b, then b is also close to a. Therefore, close elements are likely to be coupled to one another and thereby form a closed linkage loop. Any large-N lattice system might be expected to form many such loops. When K is small, different spatial regions tend to be unconnected and so many different modules will form.³ The dynamics of the elements in different modules are independent. In contrast, in a Kauffman net, influence is not a reciprocal relation. If element σj appears in the coupling function fi associated with element σi, there is only a small chance, proportional to (K/N), that σi will

³A module is a loop or more complex topology of dependencies in which all the functions are non-constant, plus all other elements that are influenced by that structure. See Section 3.1.

appear in fj. For large N and small K, the probability that a given element will participate in a linkage loop will be quite small, so there will then be a small number of modules. When K is small, the number of modules in uniformly coupled systems grows more slowly than the system size, while in lattice systems the number of modules is proportional to the size of the system. This distinction will make a large difference in the cycle structure.⁴ For the flow of information the difference between the two kinds of nets is more quantitative than qualitative. One can see the similarity between them by comparing the curves shown in Figures 2.1 and 2.2.
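The (K/N) estimate for reciprocal influence can be checked by direct sampling. The sketch below is illustrative (sizes and the wiring convention, with no self-inputs, are assumptions): it wires a random Kauffman net and measures how often a link j → i is reciprocated by i → j.

```python
import random

def reciprocal_fraction(N, K, rng):
    """Wire a Kauffman net: each element i draws K distinct inputs at random
    (excluding itself). Return the fraction of links j -> i that are
    reciprocated by a link i -> j; for large N this is close to K/N."""
    inputs = []
    for i in range(N):
        others = [j for j in range(N) if j != i]
        inputs.append(set(rng.sample(others, K)))
    links = recip = 0
    for i in range(N):
        for j in inputs[i]:
            links += 1
            if i in inputs[j]:
                recip += 1
    return recip / links

rng = random.Random(0)
N, K = 2000, 3
f = reciprocal_fraction(N, K, rng)
print(f"reciprocated fraction ~ {f:.4f}  (compare K/N = {K / N:.4f})")
```

With N large the measured fraction is of order K/N, i.e. vanishingly small, so closed linkage loops through any given element are rare in the Kauffman net, in contrast to the lattice case.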
2.4 Calculations of overlap and divergence
Before coming to a careful description of the phases, we should describe
more fully the kinds of calculation of overlap that can be performed. Equa­
tion (2.4) is just the beginning of what can be done with the trajectories
of states in this problem. In fact, exactly the same logic which leads to
that equation can give a much more general result. If the overlap between
two states at time t is a(t), and if the elements which are different arise at
random, then the probability that the arguments of the function Ii will be
the same for the two configurations is
p = [a(t)]K . (2.5)
If all arguments are the same, then the contribution to the overlap at time
t + 1 is liN. (The N arises from the normalization of the overlap.) If
one or more arguments of the coupling function are different in the two
configurations,
and the functions Ii are picked at random, then the chance
of having the same functional output is 1/2 and the contribution to the
overlap is 1/(2N). Since there are N of such contributions, weighted by
p and 1 -P respectively, the equation for the overlap defined by equation
(2.3) is
a(t + 1) = ! [1 + [a(t)]K] . (2.6)
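The behavior of equation (2.6) can be seen by simply iterating the map; the sketch below is illustrative code, not from the text. For unbiased functions the critical connectivity (equation (2.8) with p = 1/2) is Kc = 2, so K = 1 flows to complete overlap while K = 3 settles at the stable root of a = (1 + a³)/2, namely a* = (√5 − 1)/2 ≈ 0.618.

```python
def overlap_map(a, K):
    """One step of equation (2.6): a(t+1) = (1 + a(t)**K) / 2."""
    return 0.5 * (1.0 + a ** K)

def long_time_overlap(K, a0=0.5, steps=500):
    """Iterate the overlap map from a(0) = a0 until (approximate) convergence."""
    a = a0
    for _ in range(steps):
        a = overlap_map(a, K)
    return a

# K = 1 (frozen phase): the overlap is driven to 1.
# K = 3 (chaotic phase): the overlap settles at the root of a = (1 + a**3)/2
# other than a = 1, i.e. a* = (5**0.5 - 1)/2 ~ 0.618.
for K in (1, 3):
    print(f"K = {K}: a* ~ {long_time_overlap(K):.4f}")
```

The stability check is the derivative of the map at a = 1, which is K/2: less than one for K = 1 (so a = 1 attracts) and greater than one for K = 3 (so the iteration is pushed to the interior fixed point).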
There are several possible variants of this equation. For example, if the different outcomes 0 and 1 of the function fi are weighted with probabilities p and 1 − p respectively, to produce a sort of magnetization bias, then equation (2.6) is replaced by (Derrida and Pomeau [1986]; Derrida and Stauffer [1986])

a(t + 1) = 1 − [1 − [a(t)]^K]/Kc, (2.7)
⁴The interested reader will recall that in quantum field theory and statistical mechanics, mean field theory appears in a situation in which fluctuations are relatively unimportant. This will arise when the analog of linkage loops makes a small contribution. Then, the physicist puts the effect of loops back into the problem by doing what is called a loop expansion. Since they both expand in loops, the percolation mean field theory and the mean field theory of the Kauffman net are essentially the same.

where Kc is given in terms of p as

Kc = 1/[2p(1 − p)]. (2.8)

In the limit t → ∞, a(t) asymptotically approaches the fixed point a*,
FIGURE 2.3. (a) The mapping F(a) = 1 − [1 − a^K]/Kc (see Eq. (2.7)) for Kc = 3 and three different values of K (solid curves), corresponding to the three different phases of the system. The dotted line is the identity mapping. (b) Bifurcation diagram of equation (2.9). For K ≤ Kc the only fixed point is a* = 1. For K > Kc the previous fixed point becomes unstable and another stable fixed point a* < 1 appears.
which obeys, from equation (2.7),

a* = 1 − [1 − (a*)^K]/Kc. (2.9)

We might expect equation (2.9) to reflect the three-phase structure of the problem, and indeed it does. Figure 2.3a shows the graph of the mapping F(a) = 1 − [1 − a^K]/Kc for different values of K, and Figure 2.3b shows the bifurcation diagram of equation (2.9). Both graphs were calculated with p chosen so that Kc = 3. As can be seen, if K ≤ Kc there is only one fixed point a* = 1, whereas for K > Kc the fixed point a* = 1 becomes unstable as another stable fixed point, a* < 1, appears. The value of the infinite time overlap a* describes the fraction of elements whose value is sensitive to the cycle entered.
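The bifurcation diagram of Figure 2.3b can be reproduced by brute-force iteration of F. A minimal sketch, assuming the same choice Kc = 3 as in the figure (the step counts are illustrative):

```python
def F(a, K, Kc=3.0):
    """The mapping of equation (2.7): F(a) = 1 - (1 - a**K)/Kc."""
    return 1.0 - (1.0 - a ** K) / Kc

def stable_overlap(K, Kc=3.0, a0=0.5, steps=5000):
    """Iterate F from a0; the iteration settles on the stable fixed point."""
    a = a0
    for _ in range(steps):
        a = F(a, K, Kc)
    return a

# For K <= Kc = 3 the only fixed point is a* = 1; for K > Kc the point
# a* = 1 is unstable (F'(1) = K/Kc > 1) and the iteration settles below 1.
for K in (2, 4, 6, 10):
    print(f"K = {K:2d}: a* ~ {stable_overlap(K):.4f}")
```

Sweeping K in this way traces out the two branches of the bifurcation diagram: a* pinned at 1 below Kc, and a stable branch peeling away below 1 above Kc.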

Another Random Document on
Scribd Without Any Related Topics

— Ainakin yksi lästi voidaan maksaa nyt heti, jos se on aivan
välttämätöntä, mutta loput vasta syksyllä.
— Kaksi lästiä voita, lästi lohta, kolme kippuntaa haukia, — luetteli
kirjuri rästilistasta.
Kun hän oli merkinnyt paperiinsa mitä taloudenhoitajan ilmotuksen
mukaan saatiin tulla heti perimään, lähti hän tiehensä ja sisään astui
samasta oven avauksesta tuomiorovasti Johannes Pietarinpoika. Hän
oli kookas ja hillitysti esiintyvä mies, jonka olemuksessa oli jotakin
synkkää ja painostavaa. Hän oli Flemingien sukulainen sekä
uskonpuhdistuksen kiivas vastustaja, ja tultuaan piispannimityksessä
sivuutetuksi osotti hän iäkkäälle esimiehelleen mielellään katkeraa
mieltä milloin vain sopi. Tervehdittyään piispaa niukalla
arvokkuudella sanoi hän:
— Tulin kuulemaan teidän isällisyytenne mielipidettä uuteen
kaniikinvaaliin nähden. Se kai olisi toimitettava viimeistäänkin
Valpurinmessuna?
— Minä luulen, että meidän on tällä kertaa jätettävä kaniikinvaali,
sillä kuningas on minulle nimenomaan kirjottanut, että kapitulin
jäsenten lukumäärää on vähennettävä, — vastasi piispa hieman
arastellen.
Tuomiorovastin suun ympärillä näkyi ivallinen piirre, kun hän
vastasi:
— Sen jälkeen jätetään tietysti arkkiteinin, tuomiorovastin ja
lopuksi kai piispankin virka täyttämättä, että kuningas voi korjata
heidän palkkatulonsa.

— Kuinkas monta kaniikkia oli Hemmingpiispan ja vielä Bero II:sen
aikanakin? — kysyi piispa.
— Se oli silloin, ja sadassa vuodessa muuttuu maailma paljon.
— Mutta jos silloin tultiin toimeen kuudella, tai tuomiorovasti ja
piispa lukuunotettuna, kahdeksalla kapitulin jäsenellä, niin eivät
olomme sentään ole niin muuttuneet, että ihan välttämättä
tarvitsisimme nykyään viisitoista jäsentä kapitulissa. Minun
ymmärtääkseni me, katsoen valtakunnan hädänalaiseen tilaan,
voimme jättää ainakin kolme kaniikin virkaa täyttämättä ja luovuttaa
niiden tulot valtakunnan velkojen maksamiseen.
— Ja tehdä se pelkästään kuninkaan käskyllä ilman pyhän isän
suostumusta?
— Lähimpänä miehenäni kysyn teiltä vilpittömästi, mitä te minun
sijassani tekisitte, kun olisi valittava kuninkaan ja pyhän isän välillä?
— kysyi piispa vuorostaan.
— Ainoan autuaaksi tekevän kirkon jäsenenä ja katolilaisena
pappina minä ehdottomasti noudattaisin pyhän isän määräyksiä, —
vastasi tuomiorovasti varmasti.
— Seurauksiinko katsomatta?
— Ne minä jättäisin Jumalan huomaan.
— Hm, mutta jospa me, niin monet esimerkit silmäimme edessä,
emme enää niin ehdottomasti voisikaan luottaa pyhään isään?
— Enemmänkö sitten kerettiläismieliseen kuninkaaseen ja
kirkonryöstäjään?

Piispa kohotti varottaen sormeaan ja sanoi vakavasti:
— Muistakaamme, että Jumala on valinnut hänet pelastamaan
valtakunnan muukalaisten sorrosta, ja jos hän yhdessä tai toisessa
asiassa onkin erehtynyt ja mennyt liian pitkälle, niin siitä tuomitkoon
Jumala. Että hän kipeästi tarvitsee rahoja valtakunnan tarpeisiin, sen
me kyllä hyvin tiedämme ja jos me vointimme mukaan häntä
autamme, niin tulevat silloin rahat käytetyiksi paremmin Jumalan
mielen mukaan kuin jos Roomaan lähettäisimme kaikenlaisia
lahjuksia.
Tuomiorovasti koetti hillitä itseään ja päästä toiselta suunnalta
esimiehensä kimppuun.
— Muutoin oli myöskin asianani, — sanoi hän, — saattaa teidän
isällisyytenne tietoon eräs uusi kirkonomaisuuden ryöstö. Pari päivää
sitten käydessäni Naantalin luostarissa valitti minulle abbedissa, että
tuo kerettiläisoppeja levittävä maisteri Särkilahti on anastanut erään
luostarille kuuluvan talon Taivassalon Tammistossa.
— Se on oikeastaan hänelle kuuluvaa perintöä ja on hän sen
peruuttanut itselleen kuninkaan luvalla.
— Aina vain kuningas! Mutta saanko tiedustella, onko teidän
isällisyytenne tehnyt mitään sitä estääkseen?
— Maisteri Särkilahti on köyhä perheellinen mies ja Naantalin
luostarilla on tiloja yllin kyllin, — vastasi piispa vältellen.
— Se ei puolusta ollenkaan kirkon omaisuuteen kajoamista,
kaikista vähimmän silloin, kun on kysymyksessä luvattoman perheen
elättäminen.

— Mitkä ovat luvattomia perheitä?
— Teidän isällisyytenne tietänee sen yhtä hyvin kuin minäkin, että
papiksi vihityt eivät saa perheitä perustaa. Ja muutoin oli
tarkotukseni valittaa sitä, että tuon Särkilahden toiminta käy yhä
julkeammaksi. Juuri kouluhuoneen sivu kulkiessani tuli kansaa
kuulemasta hänen saarnaansa ja kauhistuen kuulin minä niitä
sanoja, joita he käyttivät katolilaisesta kirkosta ja sen päämiehestä.
Kuinka kauan häntä on kärsittävä koulun rehtorina?
— Minä olen muutaman kerran ollut kuulemassa Särkilahden
saarnaa, mutta ainakaan silloin en huomannut hänen puhuvan
raamatusta poikkeavasti.
Tuomiorovastin kasvoilla näkyi terävä ja pilkallinen piirre. Piispa
jatkoi:
— Katsoen hänen tuliseen luonteeseensa olen häntä kyllä
kehottanut esiintymään maltillisesti, eikähän hän olekaan mitään
häiriöitä matkaan saattanut. Hän hylkää kyllä monet kirkon
hyväksymät käsitykset, mutta hänen oppinsa ydin on
raamatunhengen mukainen, ja se se sittenkin on tärkeintä.
Antamatta tuomiorovastille tilaisuutta vastata nousi piispa ja ojensi
hänelle kätensä. Kylmästi tervehtien lähti prelaatti huoneesta.
— Nyt menemme köyhien luo, — sanoi piispa taloudenhoitajalleen
ja yhdessä lähtivät he pihalle. Kaikki sinne kokoontuneet saivat
osansa joko vaatteita, joita viikon varrella oli sitä varten valmistettu,
tai ruokatavaroita ja pieniä hopearahoja. Varsinkin lapsia kohteli
piispa vanhus hellien ja useita orpoja oli hän toimittanut
kasvatettaviksi. Lopuksi hän luki siunauksen ja palasi sitten

työhuoneeseensa, jossa hän kirjotti alle sihteerinsä ja kirjurin
kokoonpanemat kirjeet.
Aurinko oli juuri laskemallansa. Pienten pyöreiden ruutujen läpi
tullen loivat sen viimeiset säteet huoneen panelatulle seinälle
haikeamielisen punerruksen. Vanhus näytti väsyneeltä ja raukealta.
— Päivätyömme on vihdoin lopussa. Huomenna alamme taas
Jumalan nimeen uusin voimin.
Kun sihteeri ja kirjuri olivat lähteneet, pukeusi piispa
päällysnuttuun ja lähti ulos. Hänelle oli tullut tavaksi joka ilta
pistäytyä yksinään rukoilemassa rakkaaksi käyneessä
tuomiokirkossa, jossa hän kolmenakymmenenä miehuusvuotenaan
oli alttaripalveluksia toimittanut.
Alkoi jo hämärtää kun piispa astui ulos. Mielihyvin hengitti hän
raikasta kevätillan ilmaa. Kohmettunut hiekka ritisi hänen hitaasti
astellessaan kirkkoa kohti. Vanha kirkonvartija seisoi sakariston
rappusilla kuten muinakin iltoina ja avasi hänelle oven. Hiljaa kuin
omien askeltensa kaikua peläten kulki piispa avaran sakariston läpi
suoraa kirkkoon. Alttarit, jykevät pilarit, ristiinnaulitun ja pyhimysten
kuvat olivat kätkeytyneet puolipimeään. Päivällä näytti kirkko
entiseen verraten kuin alastomaksi riisutulta, sillä Otto Rudin ja
Söyrinki Norbyn ryöstöjen sekä Kustaa kuninkaan anastusten jäleltä
olivat poissa hopea- ja kultakalleudet alttareilta ja kuorien seiniltä.
Mutta näin hämyssä saattoi vanhus kuvitella kaiken olevan ennallaan
ja siksi nämä lyhyet hetket iltaisin tuomiokirkossa olivat hänelle kuin
käyntejä entisyydessä.
Pyhän Pietarin kuorissa paloi vahakynttilä. Se loi himmeän
valojuovan keskilaivan lattian yli vastakkaiselle seinälle, jossa näkyi

varjo miehen hartioista ja kumartuneesta päästä. Kuorista kuului
matalalla, laulavalla äänellä:
— Pie Jesu, Domine, dona ei requiem… [Pyhä Herra Jeesus, anna
hälle rauhas…] ja kun kuorissa messuavaa pappia ei näkynyt
keskilaivaan, tuntui kuin hänen seinällä näkyvä varjonsa olisi nuo
sanat lausunut.
Yksityismessut olivat kielletyt, mutta joku aika sitten oli piispa
erään hurskaan porvarinlesken rukouksista heltyneenä suostunut
siihen että tämän miesvainajan autuudeksi luettaisiin sielumessuja.
Toimituksen suoritti joka päivä iltamessun jälkeen eräs lähes piispan
ikäinen pappi ja tapahtui se puolittain salassa. Tämän messun aikana
saapui piispa tavallisesti kirkkoon ja se oli omiaan tuntuvasti
täydentämään hänen entisyystunnelmaansa.
Hetken pimennossa seisten kuunneltuaan messua astui piispakin
kuoriin, jonka lattian alla lepäsi hänen entinen esimiehensä Konrad
Bitz. Ristinmerkin tehden polvistui hän alttarin ääreen. Kauan ja
hartaasti rukoiltuaan nousi hän ja papin jatkaessa messua lähti
hitaasti kirkosta. Mutta sakaristoon tultuaan tunsi hän suurta
väsymystä ja istahti yhteen pitkin seiniä olevista nojatuoleista.
Huoneessa oli jo hyvin hämärä ja seinällä häntä vastapäätä oleva
ristiinnaulitun kuva häämötti enää mustana, epäselvänä kuviona. Ovi
kirkkoon oli raollaan ja hiljaisena hyminänä kuului sieltä vanhan
papin messu.
Piispa painoi päänsä käsiin ja huokasi. Tässä samassa huoneessa
olivat liikkuneet ja näillä samoilla tuoleilla istuneet hänen edeltäjänsä
aina Maunu I:n, ensimäisen suomalaissyntyisen piispan ajoista
saakka. Ja siitä oli jo kolmatta sataa vuotta. Tuolla kirkon lattian alla

he kaikki nyt lepäsivät ja sinne oli hänenkin hartain halunsa jo
päästä….
Mutta miten olikaan? Hämärä tuolien yllä ikäänkuin liikkui ja tiheni
ja sen keskeltä selkeentyi yksi toisensa jälkeen päitä, harteita ja
vähitellen koko vartaloita. Ja niin istui puoliympyrässä pitkin
seinustaa liikkumattomia vanhuksia ummistetuin silmin. Kaikilla oli
yllään piispallinen viitta ja viitanpoimut ympäröivät heidän yhteen
puristettuja polviaan. Vaikka heidän piirteensä sakenevassa
hämärässä näkyivät epäselvinä ja vaikka he istuivat pää kumarassa
ja ummistetuin silmin, tunsi piispa heidät järjestään kuin olisi eilen
viimeksi ollut heidän parissaan, eikä hän tuntenut mitään ihmetystä,
että he siinä istuivat.
Perimpänä ja enimmän pimennossa istui Suomen kirkon perustaja,
Pyhä Henrik, ja hänen rinnallaan molemmat venäläisten
vankeudessa kuolleet käännytystyön jatkajat, Rodulf ja Folkvino.
Muista vähän erillään istui vaskenkarvaisin kasvoin Tuomas ja häntä
seurasivat kolme Rantamäen piispaa sekä niiden rinnalla kuusitoista
muuta Suomen piispanhiipan kantajaa. Lähinnä Skytteä istui viisi
hänen lähintä edeltäjäänsä, joiden kaikkien aikana hän oli
kaniikinvirkaa hoitanut. Tuimin piirtein ja muita ryhdikkäämpänä istui
viimeksi mainittujen joukossa unioniajan sotainen piispa Konrad Bitz,
ja hänen piispallisen pukunsa päällä häämötti rintahaarniska sekä
kupeellaan miekanponsi.
Mutta Tuomas piispa kohotti hiukan päätään, avasi hitaasti
silmänsä ja kysyi:
— Ovatko Häme ja Karjala saatetut jo kirkon kuuliaisuuteen?

— Ne ovat, mutta onko minun huolehtimani tuomiokirkon
rakentaminen päätetty ja onko kaniikkien lukumäärä pysynyt
neljänä, joksi minä sen järjestin? — vastasi piispa Katillus.
— Tuomiokirkko on valmistunut ja minä olen vihkinyt sen
toimeensa, — ilmotti Maunu I.
— Minun piispauteni aikana hävittivät venäläiset Turun ja polttivat
tuomiokirkon. Onko se korjattu? — kysyi Ragvald II.
— Minä olen korjauttanut tuomiokirkon ennalleen ja laajentanut
kapitulia kahdella uudella kaniikinviralla, — vastasi tarmokas
kirkkovaltias Benediktus. — Minä järjestin kymmenysverot ja pidin
ensimäisenä piispankäräjiä sekä pappeinkokouksia. Ovatko laitokseni
pysyneet voimassa?
Sananvuoron otti hänen lähin seuraajansa, Hemming piispa, jonka
pään ympärillä näkyi pyhimyskehä. Hän lausui:
— Ne ovat sekä pysyneet voimassa että lisääntyneet ja kasvaneet.
Huolimatta siitä että minun piispauteni aikana riehui Mustasurma
maassa, olen minä syventänyt ja vahvistanut kirkollista järjestystä.
Minä olen kaunistanut tuomiokirkkoa ja lisännyt siihen uusia kuoreja
ja alttareita, minä olen laajentanut kapitulia ja perustanut
tuomiorovastin viran, toimittanut tuomiokirkolle maatiloja sekä
kallisarvoisia kirjoja ja lujentanut kirkon valtaa. Papiston
velvollisuudet olen minä määritellyt statuteissani sekä saattanut
käytäntöön pappien naimattomuuden. Piispanistuimelle minä olen
toimittanut jalokivillä kaunistetun hiipan ja sauvan, puolustanut
maani etuja kuninkaita vastaan sekä masentanut vastahakoisia
pannakirouksella. Onko nämä kaikki säilytetty ja pidetty voimassaan?

— Olen koettanut vointini mukaan pitää yllä kirkon valtaa ja
käyttänyt pannaa niskottelijoita vastaan. Tuomiokirkossa panin alulle
pääkuorin rakennuksen, mutta minun sallittiin hoitaa korkeata
virkaani ainoastaan kaksi vuotta, — ilmotti oppinut Johannes II, joka
nuoruudessaan oli ollut Parisin yliopiston rehtorina.
— Minä olen sen työn onnellisesti päättänyt. Hiippakuntani rajoista
olin riidassa Upsalan arkkipiispan kanssa, mutta suoriusin siitä
voitolla — täydensi hänen seuraajansa Johannes III Westfali.
Bero II Balk, joka oli tuomiokapitulin kiiruusti kokoon haalimalla
rahasummalla saanut käydä pyhää isää Avignonissa
suostuttelemassa sekaantumasta piispanvaaliin, lausui hiljaisella
äänellä:
— Minun oli suotu onnettomana aikana kantaa Suomen hiippaa.
Merirosvot ja venäläiset haaskasivat kilvan seurakuntaani, hävittivät
Turun ja ryöstivät sekä tärvelivät tuomiokirkkoa. Voitavani olen
tehnyt, mutta kaikkea en ole ehtinyt ennalleen saada.
Hänen lähin seuraajansa, Maunu II Tavast, jota Suomen aatelisto
oli »palvellut kuin kuninkaallista majesteettia», lausui nyt lempeällä
äänellä:
— Kaikki on jälleen saatettu siihen kuntoon kuin autuaan
Hemming piispan aikana sekä runsaasti lisätty. Minun suotiin viipyä
piispanistuimella lähes neljäkymmentä ajastaikaa. Minä vahvistin
Kuusiston linnaa piispojen suojaksi, taistelin Ruotsin rauhattomia
ylimyksiä vastaan ja tuin Kaarlo kuningasta hänen horjuvalla
valtaistuimellaan. Minä kiertelin ahkerasti tarkastusretkillä ympäri
maatani, suojelin talonpoikia aatelisten sorrolta ja valvoin pappieni
elämää. Jerusalemin matkaltani toin minä runsaasti kalleuksia

tuomiokirkolle, jota minun toimestani laajennettiin uusilla kappeleilla.
Minä järjestin ja täydensin jumalanpalveluksen, niin että
tuomiokirkossa kaikui messu yli päivän aamuvarhaisesta
iltamyöhään. Kirkon asemaa minä vahvistin ja lisäsin sen tuloja,
opetuksesta sekä köyhien ja sairaiden hoidosta pidin minä huolta ja
hurskaiden naisten olinpaikaksi perustin Armonlaaksoon luostarin.
Ovatko nämä kaikki pysyneet voimassaan?
Konrad Bitz, joka sotajoukkonsa etunenässä oli taistellut Kaarlo
kuningasta vastaan, lausui lyhyesti ja karskisti:
— Minä olen säilyttänyt edeltäjäini perinnön ja unionikuningasten
avulla lisännyt tuntuvasti Suomen kirkon etuja. Vastahakoiset ja
niskottelijat olen pitänyt terveellisessä kurissa.
Hänen rinnallaan istui Maunu III Särkilahti, joka tuomiorovastina
ollessaan oli Saksan keisarilta saanut Turun tuomiorovasteilla
perintönä kulkevan palatsikreivin arvonimen. Hän lausui:
— Sotia, nälänhätää ja ruttotauteja on Suomi saanut läpi aikojen
kestää ja mitä edeltäjät ovat rakentaneet, sitä seuraajat ovat
saaneet jo nähdä raastettavan. Vointini mukaan puolustin minä
kuitenkin maani etuja Ruotsin valtionhoitajaa vastaan, menetin
varani ja sotajoukkoni taistelussa vanhaa vihollistamme venäläistä
vastaan, jonka käsistä minä ystäväni Knuutti Possen kanssa pelastin
Viipurin. Järjestystä ja kirkon valtaa olen koettanut ylläpitää ja
syventää kristillistä valistusta määräämällä jumalanpalveluksissa
käytettäväksi myöskin kansan omaa kieltä.
— Sodan ja sekasorron riehuessa olemme mekin lyhyen
piispautemme aikana vointimme mukaan koettaneet säilyttää

edeltäjäimme työtä, — ilmottivat Lauri Mikaelinpoika Suurpää ja
Johannes Olavinpoika.
Viimeisenä lähinnä Skytteä istui Arvid Kurki. Hiljaa ja tasaisesti
sanoi hän;
— Minä kannoin Suomen hiippaa kahtenatoista onnettomana
vuotena. Turku ja tuomiokirkko olivat piispaksi tullessani raa'an
vihollisen hävittämät ja uudet vielä raskaammat koettelemukset
koittivat maalle. Uutta ei minun oltu suotu rakentaa, mutta entistä
koetin minä voimaini mukaan säilyttää. Varani ja sotavoimani annoin
maan puolustukseen, mutta ennenkuin vihollinen oli karkotettu,
kutsui Jumala minut luokseen. Toivon kuitenkin, että uutta on
ruvettu hävitetyn tilalle rakentamaan.
Nyt kääntyivät kaikkien katseet Skytte vanhukseen. Kolmekolmatta
Suomen kirkon entistä päämiestä, joista useimmat olivat kirjottaneet
nimensä: »Jumalan armosta Turun piispa», katsoi häneen, ja totisina
ja kysyvinä näkyivät hämyn keskeltä heidän silmänsä. Vanhus kohotti
kättään ja liikutti huuliaan, mutta samalla havahtui hän ja näki
seinustoilla häämöttävän tyhjien tuolien. Kirkosta kuului vielä
messun hyminä ja hän kertasi hiljaa:
— Pie Jesu, Domine, dona mihi requiem!
Hän nousi vaivaloisesti seisoalleen ja lähti sakaristosta. Portaille
tullessaan näki hän edessään kirkkopihalla kaksi mustiin kaapuihin
puettua miestä, joista vasta ylennyt kuu loi pitkät varjot. Toinen
miehistä oli pitkä ja hoikka ja piispa tunsi hänet sekä ryhdistä että
äänestä, kun hän hiljaa puheli toverinsa kanssa. Hän viivähti
hetkisen pimennossa portailla ja tunsi kuin vastenmielisyyttä lähetä
miehiä, joiden hän arvasi vartovan häntä. »Mutta sitten lepään

kyllikseni, kun pääsen tuonne kirkon alle», ajatteli hän ja astui alas
portailta.
Miehet tervehtivät häntä kunnioittavasti ja pitempi sanoi:
— Vaikka onkin jo myöhäinen, halusimme teitä, isä, tavata vielä
erään asian takia.
Hän oli maisteri Pietari Särkilahti ja hänen toverinsa oli
dominikaaniluostarin priiori, nuori Mikael Karpalainen.
—- Käykäämme sisälle luokseni, niin saamme hetkisen puhella, —
sanoi piispa ystävällisesti ja ääneti lähtivät he piispantaloa kohti.
Kun he olivat tulleet huoneeseen ja piispa istahtanut tuoliinsa,
sanoi hän Karpalaiseen kääntyen:
— No, poikani, oletko jo tehnyt lopullisen päätöksen?
— Minä olen nyt päättänyt jättää luostarin.
Piispan kasvoilla kuvastui pieni pettymys, kun hän virkkoi:
— Et siis voi luopua Magdalenastasi?
— En, — vastasi Karpalainen alas katsoen.
— Siitä huolimatta että kanoninen laki kieltää hengelliseen säätyyn
kuuluvaa menemästä naimisiin?
— Mutta tiedättehän, isä, että paljon vanhempi ja pätevämpi laki
taasen sallii sen, nimittäin pyhässä raamatussa ilmotettu jumalallinen
laki, — sanoi nyt Särkilahti toverinsa puolesta.

— Niin, — sanoi piispa ja loi katseensa alas, — niin, niin, olette
oikeassa, mutta minä, ukko polo, elän vielä entisissä.
— Lopullista askelta en kuitenkaan ole tahtonut tehdä, ennenkuin
te, isä, annatte suostumuksenne, — sanoi Karpalainen. — Myöskään
en tahdo ottaa sitä Taivassalon kirkkoherran virkaa yksistään
kuninkaan suostumuksella, vaan…
— Ota se, poikani, ota minunkin suostumuksellani, — kiirehti
piispa sanomaan, — ja Jumala askeleesi siunatkoon. Minun vain
tulee surku nähdessäni, että rakkaiden dominikaanieni joukko yhä
harvenee, mutta Jumalan tahdon täytyy tapahtua ja se vaatii uusia
teitä kulettavaksi. Minä näen sen, mutta itse olen juurillani kiinni
entisessä. Te voitte kulkea ehyin sydämin uutta uraa, mutta minun
on päivätyöni suoritettava jaetulla sydämellä. Minun on itsessäni
koettava ja tunnettava vanhan mureneminen ja hajoaminen enkä
pääse siitä, että se välistä koskee kipeästi. Tuolla kirkon sakaristossa
äsken hetkisen viivähtäessäni ja kaikkea mennyttä muistellessani olin
näkevinäni kaikki edeltäjäni piispanvirassa ja minusta tuntui niin
vaikealta ajatellessani, etten ole kyennyt heidän työtään
säilyttämään. Mutta teitä nuoria nähdessäni kirkastuu minulle
tulevaisuus ja silloin tuntuu minusta aina, että vaikken olekaan
mitään näkyväistä saanut aikaan, niin sentäänkin on Jumala minua
laupiaasti tuomitseva, kun siirryn sinne edeltäjäini joukkoon.
Nuoret miehet olivat liikutettuja ja entistä kunnioittavammin
tervehtivät he pois lähtiessään vanhaa esimiestään. Yksin jäätyään
otti piispa kynttilän käteensä ja siirtyi makuuhuoneeseensa. Se oli
pieni kammio valkeaksi kalkituin seinin. Nurkassa oli matala ja kapea
sänky kovine olkipatjoineen ja karkeine villapeitteineen. Lähellä sitä
oli rukoustuoli ja sen päällä seinällä ristiinnaulitunkuva. Rukoiltuaan

sen edessä polvillaan laskeusi piispa levolle, risti kätensä rinnalleen
ja silmänsä ummistaen huokasi:
— Ah, Herra, päästä jo palvelijasi rauhaan!
* * * * *
Kaksikymmentä vuotta sen jälkeen, kun Mikael Karpalainen erosi
luostarin esimiehyydestä, sai Skytte vanhus vielä kantaa päivän
kuormaa ja hellettä ja vasta yhdeksänkymmenvuotiaana pääsi hän
lepäämään edeltäjäinsä seuraan tuomiokirkon alle. Pietari Särkilahti
oli jo paljon ennen iäkästä esimiestään mennyt lepoon. Mutta hänen
työnsä jatkajaksi ja Martti Skytten seuraajaksi oli sillä välin kypsynyt
Mikael Agricola.

AAMUN MIEHIÄ.
Ilma matalassa luentosalissa, jossa oli tungokseen saakka
tarkkaavasti kuuntelevia teologian ylioppilaita, kävi yhä
raskaammaksi. Luennoitsijan muutoin kirkasta ääntä oli yhä
vaikeampi erottaa, se tuntui kuin sammuvan ummehtuneeseen
ilmaan. Syksyisen iltapäiväauringon säteet olivat onnistuneet
pujahtamaan eräästä sivuseinän akkunasta sisään, erottaen näkyviin
yli salin ylettyvän sakean pölyjuovan ja muodostaen sivuseinälle
himmeän kuvion nelikulmaisesta, pyöreäruutuisesta ikkunasta.
Suomalainen ylioppilas Agricola oli hiukan myöhästynyt ja saanut
sen vuoksi paikan ihan oven suussa. Melanchton oli lokakuun alusta
selittänyt Roomalaisepistolaa ja Agricolalla oli edessään Erasmuksen
baselilainen Uusi Testamentti — hänen arvokkain kirja-aarteensa,
jonka hän edellisenä vuonna Wittenbergiin tultuaan oli ensi töikseen
itselleen hankkinut. Mutta hän oli alakuloisena eikä voinut
tarkkaavasti seurata oppinutta esitystä. Puolisen tuntia turhaan
kamppailtuaan antoi hän kirjan painua polvilleen, sallien ajatustensa
harhailla omia teitään sekä tuiottaen väsyneesti luennoitsijaan, joka
pölyisen valojuovan läpi näytti korkealla katederillaan niin avuttoman
pieneltä, kalpealta ja rasittuneelta. Lähes kaksikymmentä vuotta oli
tuo kuuluisa jumaluusoppinut ja filologi työskennellyt Lutherin

rinnalla Wittenbergissä ja nähnyt noiden vuosien kuluessa
auditorionsa aina yhtä täynnä ympäri Keski- ja Pohjois-Europan
kokoontuneita innostuneita kuulijoita. Kun Agricola antoi silmäinsä
painua puoli umpeen, näytti pitkän ja kapean luentosalin perällä
istuva Melanchton tavallistakin pienemmältä ja hennommalta. Hänen
matala, sointuva äänensä, jolla hän ikäänkuin hyväillen lausui
rakasta kreikkaansa, tuntui tulevan kuin jostakin hyvin kaukaa ja se
vaikutti Agricolan väsyneihin hermoihin omituisen uinuttavasti. Hän
ei seurannut enää ollenkaan luennon sisällystä, tuiotti vain
pölyjuovan läpi Melanchtoniin ja antoi korviensa omia aikojaan
suhtautua hänen ääneensä, joka ilman saentuessa tuntui
etääntymistään etääntyvän, kunnes se kokonaan sammui.
Melanchton oli keskeyttänyt luentonsa, hän ummisti silmänsä ja
siveli kämmenellä otsaansa. Silloin alkoi Augustiinikirkosta kuulua
kellonsoittoa. Melanchton avasi jälleen silmänsä, painoi kirjan kiinni
ja sanoi hymähtäen:
— Johan me voimmekin lopettaa. Kovin se onkin tänään
rasittavaa.
Hän nousi seisomaan ja auditoriossa syntyi liikkeen kohinaa. Mutta
sitten taas kaikki vaikeni ja Melanchton piti lyhyen rukouksen, sillä
tämä oli viimeinen luento sinä päivänä, Pyhäinmiesten aattona 1537.
Kun rukous oli vaiennut ja sen jälkeen pieni hetki oltu
kumarruksissa, laskeusi Melanchton katederilta alas ja pieni mies
katosi kokonaan kuulijainsa sekaan. Kaikki alkoivat tunkea ovea
kohti. Työntäen kirjan kainaloonsa pyörähti Agricola ensimäisenä
ulos jääden portin pieleen vartomaan suomalaisia tovereitaan. Niitä
olivat Martti Teitti, joka viime vuonna oli yhdessä hänen kanssaan

Saksaan tullut, sekä Viipurin Simo eli Simon Viburgensis niinkuin
hänen nimensä virallisesti kuului.
Ennenkuin hän ohitseen rientävästä ylioppilasvirrasta sai
näkyviinsä tovereita, tarttui joku hänen käsipuoleensa. Se oli eräs
schwabilainen ylioppilas, joka oli viime kevännä jonkun aikaa asunut
hänen ja Teitin kanssa yhdessä.
— Domine reverendissime, frater Michael finlandensis, ambulemus
paulum? [Kunnioitettavin herra, suomalainen veli Mikael, kävelläänkö
vähän?] — sanoi reippaalla äänekkyydellä tuo aina iloinen
saksalainen.
Agricola smiled in reply, and arm in arm they set off up the narrow street, which echoed with the footsteps of the student crowd scattering in every direction. Servant girls and housewives carrying provisions for the holy day threaded their way through the merrily chattering young folk; up in the gable windows burghers could be seen talking with one another across the street, the glow of the coming Sunday rest on their faces; and over it all, the hum of bells filling the whole little town completed the Saturday mood.
As they passed the church, the Swabian suddenly stopped and said:
— Ah, if only we could have been here exactly twenty years ago!
Agricola looked at him questioningly.
— What? On the eve of All Saints' Day back then, anno domini 1517? — the Swabian went on.

When Agricola still did not seem to be on the right track, the Swabian, who had a lively imagination and a vivid way of telling things, began to illustrate the matter as follows:
— Well, just picture it: across the marketplace, with quick and resolute steps, strides a thin and pale man clad in the habit of an Augustinian monk. He stops at the church door, glances around, draws a roll of paper from under his habit, spreads it out on the door and takes out some small nails…
— Aha, the theses! — Agricola broke in eagerly. But the Swabian had got into his stride and continued:
— …and the hammer blows boom, boom, boom — he repeated in rhythm — and their echo carries, terrible, over the Alps to Rome.
At that moment the Swabian remembered the Elector's dream, which the Elector had had shortly before the posting of the theses and which had in its day been told all over Germany, and he continued:
— Yes, there stood the fearless monk, writing on the church door, and the pen in his hand grew and grew until it reached Rome and pierced the ears of the crowned lion sitting there. And the lion began to roar so that the mountains trembled, and all the kings and princes rushed to wrench the pen from the monk's hand, but they did not succeed.
The Swabian drew breath and went on:
— Those were the days when the elements were in motion, and that is when we too should have been here. We could have followed Doctors Martti and Fiilippi to Leipzig as halberdiers, to give Eck's men a drubbing. And Worms? There, if anywhere, one ought to have been present. To think of the moment when the lone, pale monk steps into the great hall filled with the emperor, kings, princes, bishops and prelates…
The Swabian's father had served in Yrjö von Freundsberg's famous troop of pikemen and had kept order at the Diet of Worms. Luther's appearance there had made a deep impression on that simple soldier, as it had on his chivalrous commander. The son had often heard his father tell of it, and since he in his turn had described that tremendous scene to his companions countless times, embellishing and coloring it, it had taken such root in his mind that at times he could almost believe he had witnessed it all himself. Agricola too had heard it often, but he listened just as attentively now, partly out of tact towards his cheerful companion, partly out of ever-renewed interest, for the Swabian presented his favorite subject with such liveliness and color. And Agricola repaid him in the same coin, beginning to describe, as he had often done before, his dinner at Luther's house, recounting even the smallest things the Reformer had said at table and the words he had exchanged with him as well. And now the Swabian in his turn listened just as attentively. Thus warmed to the subject, they then began by turns to recall to each other the legends, familiar to them both and to the whole student body of Wittenberg, about the Reformer, who in the eyes of his students had risen into an unattainable hero-ideal.
After they had been silent for a moment, the Swabian concluded:
— That man's word cannot be withstood, even if one attacked him with all the weapons of philosophy, of the sophists, the Scotists, the Albertists, the Thomists, and of all hell itself!
As he uttered the last words he made a great sweeping gesture with his hand, then pressed Agricola's hand in farewell and said:
— I must be off now to keep company with old Reuchlin. I have set my mind on fighting my way through the thirteenth chapter of Isaiah before this evening is out. Pax tecum, carissime!
When Agricola was left alone and the Swabian's hearty voice had died away in his ears, his former dejection took hold of him again, and he seemed to shrink smaller. What was he to do, and where was he to turn, for at this moment he and his companions were entirely without bread and without any known prospect of obtaining it? For the last few weeks they had lived almost solely on bread and water, but this morning they had had to set out for lectures on empty stomachs. Even if making a living in this foreign university town had been a wretched business for them from the start, until now they had at least not had to keep a single day of fasting. But now outright hunger threatened. The funds with which old Skytte had provisioned him and Teitti for the journey had run out the previous winter, and the bishop had so far been unable to send more. Since then they had earned a little by teaching, and lately they had kept themselves going on small loans from better-off companions. Moreover, the rent on the modest room where the three of them lodged was two months in arrears. At the beginning of August Agricola had written to King Kustaa in Sweden, asking for some prebend of Turku cathedral, or other assistance, for the upkeep of the Finnish students, but no reply had come. Only today he had once more inquired in vain at the university for a letter. And besides the sense of insecurity born of want, it weighed on his mind that the king had evidently not considered his letter and petition worthy of notice.
Dusk was already beginning to fall, and everywhere could be heard the closing of window shutters. The burghers of the little town were withdrawing into their friendly homes to spend the eve of the feast. All the lonelier did Agricola feel as he set off, slowly and with his head down, towards the Elbe side of the town, where he and his companions lodged with a poor glazier. In his imagination he saw his companions sitting dejected in their room, secretly awaiting him as the bringer of rescue, for on parting that morning they had agreed that each would do what he could, in his own quarter, to secure at least the next few days. But he was certain his companions had fared no better than he, for very few of the students they knew were able to help others, and even to those few they were already in debt.
It felt repugnant to him to return to their lodgings so empty-handed and with no possibility of help in sight. He continued his walk without a goal, drifted outside the town to the riverbank and sat down wearily on a pile of logs near the water's edge. It was already fairly dark; a white veil of mist had spread from the river over the fields on the opposite bank, and black and heavy the Elbe rolled its autumn waters past him. From somewhere beyond the mist, from a country village, came the barking of a solitary dog, and in the west, beyond the Harz mountains, the full moon rose, a great blood-red disc.
