rsos.royalsocietypublishing.org
Review
Cite this article: Calder M et al. 2018
Computational modelling for decision-making: where, why, what, who and how. R. Soc. open sci. 5: 172096.
http://dx.doi.org/10.1098/rsos.172096
Received: 6 December 2017
Accepted: 10 May 2018
Subject Category:
Computer science
Subject Areas:
computer modelling and
simulation/mathematical modelling
Keywords:
modelling, decision-making, data,
uncertainty, complexity, communication
Author for correspondence:
Muffy Calder
e-mail: [email protected]
Computational modelling
for decision-making: where,
why, what, who and how
Muffy Calder1, Claire Craig2, Dave Culley3, Richard de
Cani4, Christl A. Donnelly5, Rowan Douglas6, Bruce
Edmonds7, Jonathon Gascoigne6, Nigel Gilbert8,
Caroline Hargrove9, Derwen Hinds10, David C. Lane11,
Dervilla Mitchell4, Giles Pavey12, David Robertson13,
Bridget Rosewell14, Spencer Sherwin15, Mark
Walport16 and Alan Wilson17
1School of Computing Science, University of Glasgow,
Glasgow, UK
2The Royal Society, London, UK
3Improbable, London, UK
4Arup, London, UK
5MRC Centre for Global Infectious Disease Analysis,
Department of Infectious Disease
Epidemiology, Imperial College London, London, UK
6Willis Towers Watson, London, UK
7Centre for Policy Modelling, Manchester Metropolitan
University, Manchester, UK
8Centre for Research in Social Simulation, University of
Surrey, Guildford, UK
9McLaren Applied Technologies, Woking, UK
10National Cyber Security Centre, UK
11Henley Business School, University of Reading, Reading, UK
12Consultant, UK
13School of Informatics, University of Edinburgh, Edinburgh,
UK
14Volterra Partners, London, UK
15Department of Aeronautics, Imperial College London,
London, UK
16UK Research and Innovation, London, UK
17The Alan Turing Institute, London, UK
BE, 0000-0002-3903-2507
In order to deal with an increasingly complex world, we need ever more sophisticated computational models that can help us make decisions wisely and understand the potential consequences of choices. But creating a model requires far more than just raw data and technical skills: it requires a close collaboration between model commissioners, developers, users and reviewers. Good modelling requires its users and commissioners to understand more about the whole process, including the different kinds of purpose a model can have and the different technical bases. This paper offers a guide to the process of commissioning, developing and deploying models across a wide range of domains from public policy to science and engineering. It provides two checklists to help potential modellers, commissioners and users ensure they have considered the most significant factors that will determine success. We conclude there is a need to reinforce modelling as a discipline, so that misconstruction is less likely; to increase understanding of modelling in all domains, so that the misuse of models is reduced; and to bring commissioners closer to modelling, so that the results are more useful.

© 2018 The Authors. Published by the Royal Society under the terms of the Creative Commons Attribution License http://creativecommons.org/licenses/by/4.0/, which permits unrestricted use, provided the original author and source are credited.
1. Introduction
Computational models can help us translate observations into an
anticipation of future events, act as
a testbed for ideas, extract value from data and ask questions
about behaviours. The answers are then
used to understand, design, manage and predict the workings of
complex systems and processes, from
public policy to autonomous systems. Models have spread far
beyond the domains of engineering and
science and are used widely in diverse areas from finance and
economics, to business management,
public policy and urban planning. Increasing computing power
and greater availability of data have
enabled the development of new kinds of computational model
that represent more of the details of the
target systems. These allow us to do virtual ‘what if?’ experiments—even changing the rules of how this detail operates—before we try things out for real.
Analysis and explanation are just the starting point for the
utility of models. They can help us to
visualize, predict, optimize, regulate and control complex
systems. In the built and engineered world,
manufactured products can be simulated as part of the design
process before they are physically created,
saving time, money and resources. Buildings, their
infrastructure and their inhabitants can be modelled,
and those models can be used not only to maximize the
efficiency and effectiveness of the design and
build processes, but also to analyse and manage buildings and
their associated infrastructure throughout
their whole working lifespan. In the public sector, policies can
be explored before they are implemented,
exposing potential unanticipated consequences and suggesting
ways to prevent their occurrence.
It takes time and effort to develop good models, but once
achieved they can repay this investment
many times over. Just as physical tools and machines extend our
physical abilities, models extend our
mental abilities, enabling us to understand and control systems
beyond our direct intellectual reach. This
is why they will have such a radical impact: not just improving
efficiency and planning, but extending
to completely new areas of our lives. Computational models will
change the ways we can interact with
our world, perhaps allowing completely new ways of living and
working to emerge.
Computational modelling is like any other technology: it is
neither intrinsically good nor bad. Models
can inform or mislead. Modelling can be applied well or
misapplied. It is for this reason that a better
understanding of the processes of computational modelling and
a greater awareness of how and when
models can be reliably used are important. This cannot just be left to the modellers: some of the understanding is also needed by commissioners and users of these models. Making the right decisions when commissioning a model, and deciding when and how to use one, is as important as the more technical
aspects of model development. A hammer may be perfectly
designed by its engineers and fit its
specification exactly, but be worse than useless for driving in
screws.
The contribution of this paper is to bring together current
thinking about, and experiences with,
computational modelling. It does not reveal new research or
results, but rather aims to serve as a
guide for all those involved in modelling. It is of direct interest
to a range of potential stakeholders
for modelling: commissioners, owners, developers and users,
but it is also important for those who
may be affected by the insights that come from these models in
the public, private, academic and
not-for-profit sectors.
Computational models are reaching into domains beyond those
where they have been traditionally
applied (the physical and life sciences and engineering); they
are being used for new purposes; and their
complexity means that they have different properties from
simpler models (such as those which can be
completely checked using analytic methods). This extension has
the potential for new application and
utility across many aspects of our collective life, but it also
means there is a greater potential for their
misuse: misleading as to the current state of what is modelled
and informing decisions where they are
not suited. Hopefully this paper will help educate all relevant
stakeholders as to these opportunities
and dangers, and thus help make these tools a positive force in
the new areas in which they are
being applied.
This paper distinguishes some of the different purposes for a
model: this has a significant impact on
how the model should be developed, checked and used. It gives
an overview of some of the different
technical bases, to provide some understanding of their nature
and properties. It also looks at some of
the future directions in which modelling is developing. It
includes two checklists, aimed at the full range
of stakeholders: to help people ask the right questions of models
and modellers and hence improve the
whole modelling process.
This paper is a condensation of the recent Blackett review Computational Modelling: Technological Futures [1], which was initiated by the Government Office for Science and the Prime Minister’s Council for Science and Technology. It is organized into five sections covering: where models are used; why model; making and using models; types of model and analysis; and future directions. Appendix A contains two checklists: making and using models, and what users should ask about a model.
2. Where models are used
This paper aims to bring together knowledge about
computational modelling across a wide range
of domains, from public and economic policy to physical
systems. A few examples and observations
illustrate the current breadth and scope of modelling.
In public policy, models can enhance the quality of decision-
making and policy design. They can
offer cost–benefit analyses of various policy and delivery
options, help manage risk and uncertainty or
predict how economic and social factors might change in the
future. There is still considerable untapped
potential in this area but also obvious dangers.
The science of urban modelling is rapidly developing, and
modelling is routinely used in the retail
and transport sectors. However, substantial research challenges
and opportunities remain, particularly
in dynamics and in deploying new data sources. Greater
research coordination, and policies that make
high-quality urban models available to local authorities, could
help to realize the tremendous potential
of ‘urban analytics’.
Models play crucial roles in finance and economics, from
identifying and managing risk to forecasting
how economies will evolve. Yet major changes are afoot in
economic modelling, triggered by the global
economic crisis, the availability of huge datasets, and new
abilities to model people’s behaviour that
overturn old certainties.
In business and manufacturing, models underpin a wide variety
of activities, enabling innovative
high-quality design and manufacturing, more efficient supply
chains and greater productivity. Modelling
can also improve businesses’ organizational efficiency,
commercial productivity and profitability. In
manufacturing, models tend to fall into three broad categories:
complex models aimed at modelling
physical reality with a high degree of accuracy; reduced physical models that capture behaviour at a specific scale; and representative (so-called ‘black box’) models that fit data and trends.
Finally, environmental modelling, including climate change,
plays an important role in guiding
government policy as well as business decisions, in situations
ranging from noise reduction to flood risk
assessment and wherever there is an opportunity to enhance
social resilience to severe natural hazards.
Open-access datasets are particularly useful in this domain.
3. Why model
Given the effort it takes to make and check a good model, how
might one decide whether this effort is
worthwhile? For a given system, there are a number of answers
to this question:
— The complexity of the system means that the risks and
consequences of any choice cannot be
anticipated on the basis of common sense or experience.
— There may be too many detailed interactions to keep track of,
or the outcomes may be too
complicated and interwoven to calculate easily.
— It is infeasible or unethical to do experiments with the
system.
— One needs to integrate reliable knowledge from different
sources into a more complex whole to
understand the interactions between them.
— There is a variety of views from stakeholders or experts
about a complex system they are part of,
which needs bridging in order to come to a coherent decision or
find a compromise.
— One needs to be prepared for possible future outcomes in a
complex situation.
The variety of answers is indicative of the different purposes a model may have.
3.1. Purposes
The purpose of a model affects how it should be developed and
checked and, crucially, it informs
potential users as to how they should judge a model and from
that what it can be reliably used for. Thus
identifying the different uses for a model is very important.
Here, we distinguish five broad categories
of model purpose—there are many others (e.g. those listed in
[2]), but the following five cover many of
the main scientific purposes (the first two empirical, the last
three theoretical).
3.1.1. Prediction or forecasting
Almost all computational models ‘predict’ in the weak sense of
being able to calculate an anticipated
result from a given set of variables. A stronger form of
prediction goes further than this, anticipating
unknown (usually future) outcomes in the observed world (some
describe this as ‘forecasting’). This sort
of prediction is notoriously difficult for complex systems, such
as biological or social systems, and thus
claiming to be able to forecast for these systems may be
misleading. If we truly do not know what is
going to happen, it is better to be honest about that, rather than
be under a false impression that we have
a workable prediction. Fitting known data (e.g. ‘out-of-sample’ data) is not prediction in this sense.
3.1.2. Explanation or exploration of future scenarios
Particularly when considering very complex phenomena, one
needs to understand why something
occurs—in other words, we need to explain it. In this context,
explanation means establishing a possible
causal chain, from a set-up to its consequences, in terms of the
mechanisms in a model. This degree of
understanding is important for managing complex systems as
well as understanding when predictive
models might work. With many phenomena, explanation is
generally much easier than prediction—
models that explain why things happen can be very useful, even
if they cannot predict reliably the
outcomes of particular choices. For example, a social network
model may help explain the survival of
diverse political attitudes but not predict this [3].
3.1.3. Understanding theory or designs
This usually involves extensive testing and analysis to check
behaviours and assumptions in a theory or
design, especially which outcomes are produced under what
conditions. Outcomes can be used to help
formulate a hypothesis; but they can also be used to refute a
hypothesis, by exhibiting concrete counter-
examples. It is important to note that although a model has to
have some meaning for it to be a model,
this does not necessarily imply the outcomes tell us anything
about real systems. For example, many
(but not all) economic models are theoretical. They might
include assumptions that people behave in a
perfectly rational way, for example, or that everybody has
perfect access to all information. Such models
might be later developed into explanatory or predictive models
but currently be only about theory.
3.1.4. Illustration or visualization
Sometimes one wants an illustration or visualization to
communicate ideas and a model is a good way
of doing this. Such a model usually relates to a specific idea or
situation, and clarity of the illustration
is of over-riding importance—to help people see (possibly
complex) interactions at work. Crucially,
an illustration cannot be relied upon for predicting or
explaining. If an idea or situation is already
represented as a model (designed for another purpose) then the
illustrative model might well be a
simplified version of this. For example, the DICE model
(dynamic integrated model of climate and
the economy) is a ‘simplified analytical and empirical model
that represents the economics, policy, and
scientific aspects of climate change’ [4]. This is a simpler version of the RICE model [5] and is used to teach about the links between the economy and climate change.
3.1.5. Analogy
Playing with models in a creative but informal manner can
provide new insights. Here, the model is
used as an aid to thinking, and can be very powerful in this
regard. However, the danger is that people
confuse a useful way of thinking about things with something
that is true.
If the purpose of a model is unclear or confused, this can lead to
misunderstandings or errors. To give
two examples, a theoretical model might be assumed to be a
good way of thinking about a system, even
though this might be crucially misleading; or a model that helps establish a good explanation might be relied upon as a predictive model. Making clear the purpose of a
model is good practice and helps others know
how to judge it and what it might be reliable for.
4. Making and using models
Models have many technical aspects, such as data, mathematical
expressions and equations, and
algorithms, yet these are not sufficient for a model to be useful.
To get the best out of a model, model users
and commissioners must work closely with model developers
throughout its creation and subsequent
application.
4.1. Asking the right question
It is important to make sure that a model is dealing with the
right issue and helping to ask the right
question. Even a high-quality model will not be helpful if it
relates to an issue that is not the main
concern of the user. Conversely, asking a model to answer more
and more detailed questions can be
counterproductive, because it would require ever more features
of the real system to be included in the
model. In other words, models need to be ‘requisite’—they must
have an identified context and purpose,
with a well-understood knowledge base, users and audience, and
possibly developed within a particular
time constraint [6].
4.2. Who does what?
Although a very simple model might be the work of one person,
usually a team of people will be
involved, and it is important to be clear about the individuals’
roles. There will be at least an owner, or
commissioner: the person whose responsibility it is to specify
what the model is expected to do, provide
the resources needed to get the model built, and sometimes
monitor how the model is used. There will
be model developers, whose job is to design, build and validate
the model; and analysts who will generate
results from the model. Developers and analysts are often, but
not always, the same people. There will
also be the model’s users: those who have the problem or
question that the model is designed to answer.
And it is good practice to have a reviewer or quality assurer,
someone independent from the team whose
task is to audit the model and the way it has been developed to
ensure that it meets appropriate quality
standards and is fit for purpose—standards will vary according
to the importance and risk of the area.
Each of these roles may be carried out by several people—a
large model might need a team of developers,
and the review might be carried out by a group of peer
reviewers, for example. In all but the most modest
models, however, there should be at least one person for each
role, because the skills required for each
are different.
4.3. Specifying a model
Sometimes it is possible to be precise about what a model
should contain, before the model is created.
One can then write a specification and hand it over to a group of
professional model developers.
This situation can arise when dealing with a logistical or
operational question, where there is a
great deal of certainty about the system and clarity about what
the model should output. Much
more often, however, the situation to be modelled is complex;
the processes to be modelled are
uncertain; and the questions to be answered are vague. In such
cases, model commissioners need to
stay very close to the modelling process, getting involved in an
iterative process of deciding what
should be included and how it is represented. Such models will
often produce a range of results
and may identify possible tipping points. This is usually the
best approach if one is concerned with
strategic or policymaking questions; dealing with one-off
issues; addressing uncertainty about the
consequences of actions; or is unclear about appropriate ways of
judging what a system does. In
these cases, those involved in the process need to exercise their
collective judgement when interpreting
the results.
4.4. Finding the data and assessing quality
All too frequently, one does not discover exactly what data one
needs until the model has been built,
so it often becomes an iterative process of finding data and
developing the model. However, there are
a few helpful distinctions to be made that will enable a model
commissioner to ask model developers
the right questions. The first distinction is between the data
needed to specify and build the model; the
data that will be used to check the model’s output; and the data
needed for day-to-day use of the model.
The second distinction concerns the levels at which the model
operates: the micro-level, describing how
the smallest components of the model behave (for example, the
cars in a traffic model); the meso-level,
describing how the components are linked together (for
example, the road layouts); and the macro-level,
covering the properties of the system as a whole (for example,
the funding for new road infrastructure).
The micro-level may be determined by the science behind the
model, by qualitative evidence, or by
‘big data’ analyses. The meso-level might reflect the structure
of the system. And the macro-level may
include data such as aggregate statistics over a long period of
time. Sometimes it is acceptable to use
closely related proxies for these data.
For models that are intended to explain or predict the outcomes
of processes that take place over
time, we usually need data that have been collected over a
period (referred to as time-series data, or
longitudinal data). However, such data are often difficult to
obtain, not least because of the time it takes
to gather the dataset, but also because definitions may have
changed in the intervening period, making
data points measured at different times not strictly comparable.
Also, if one is using data collected at two
points in time from the same individual or organization, one
must consider the effects of those who stop
participating during the data collection period, which may lead
to a biased sample.
4.5. Building a model
Designing and building a model has some of the characteristics
of software development and many of the
same techniques and tools can be used. There are two basic
approaches: either one can attempt to specify
in detail what the model should do and then construct it to
match that specification; or one can build the
model in a much more iterative fashion, starting with a basic
model at an early stage and incrementally
improving it, meanwhile checking that it matches the users’
requirements. These requirements may
themselves change as the users improve their understanding of
the problem.
Model building is often out-sourced to consultancies or is the
responsibility of specialized teams of in-
house developers. The downside of out-sourcing is that barriers
to communication may arise, especially
when the commissioner and the developer are in different
organizations with different cultures and
different priorities. Regardless of the development approach and
the location of the developers, it is
essential that design decisions are logged and the development
process is documented (not just the final
modelling outcomes). This documentation will be an important
input into the model’s quality assurance
review. It is important to establish, at the start, to whom the
resulting model code belongs.
4.6. Documenting a model
A model will be all but useless if it lacks appropriate
documentation. Several different kinds of
documentation are needed:
— Documentation of the model code, sufficient to explain in
detail what it does and how it does it.
Some of this will be integrated into the code as comments, but
there will also need to be separate
documents intended for developers.
— Documentation aimed at analysts, who may want to change
model parameters but not the model
code. Such documentation will need to explain how to run the
model, the computing system it
needs, supporting software if any, and the various files that the
model requires as inputs and
generates as outputs.
— Documentation for users. This may include presentations,
tutorials and user guides aimed at
people who want to use the model but do not need to know
about its mechanics. While the
documentation should be comprehensible to non-experts, it
should include an explanation of
the assumptions on which the model is based, as well as its
objectives and limitations.
Documentation takes time to prepare, often more time than
building the model itself. But it is essential,
because the original developers, reviewers, users and even the
commissioner may move on to other
roles, taking their knowledge and expertise with them.
Moreover, if a decision that relies on the model
is challenged, internally or externally, by public opinion or
judicial review, the documentation may have
legal significance.
4.7. Quality assurance
Validation asks the question: have we built the right model, i.e.
is the model a suitable representation
of what is being modelled? This often involves testing the
model against known data or behaviours, to
demonstrate that the model is faithful and gives the expected
outcomes. Verification asks the question:
have we built the model right? This means checking the model
itself, for example, checking we have the
correct formulae in all the spreadsheet cells, or checking how
errors and uncertainties propagate and for
which inputs the results are undefined.
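To make these two questions concrete, here is a minimal Python sketch; the growth model, the ‘observed’ value and the tolerance are all invented for illustration. Verification compares the implementation against a known closed form; validation compares the model’s output against observed data.

    import math

    def simulate_growth(p0, rate, years):
        """Toy compound-growth model (illustrative only)."""
        pop = p0
        for _ in range(years):
            pop *= (1 + rate)
        return pop

    # Verification: have we built the model right?
    # The simulation should match the closed form p0 * (1 + rate)**years.
    assert math.isclose(simulate_growth(1000, 0.02, 10),
                        1000 * 1.02 ** 10, rel_tol=1e-9)

    # Validation: have we built the right model?
    # Compare model output against a (hypothetical) observation.
    observed = 1215.0                     # invented data point
    predicted = simulate_growth(1000, 0.02, 10)
    error = abs(predicted - observed) / observed
    print(f"predicted={predicted:.1f}, relative error={error:.1%}")
    if error > 0.05:                      # tolerance is a modelling judgement
        print("The model may not represent the system well enough.")

Passing verification says nothing about validity: a correctly implemented model can still be the wrong model for the system.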
4.8. Uncertainty
There are many ways in which uncertainty can arise. These
include: errors in measuring or estimating;
inherent chance events in the system being modelled; an
underappreciation of the diversity of events
in a system; ignorance about a key process, such as how people
make decisions; chaotic interactions in
the system such that even a small change can switch behaviours
into another mode; and the complexity
of the model’s behaviour itself, which model developers may
not fully understand. It is important to
consider the uncertainties in the data that underpin a model, and
the level of uncertainty that might be
acceptable in the model’s answers. In addition, there may be
considerable uncertainty about the basic
mechanisms that are being represented in the model and about
whether alternative models using quite
different mechanisms might be better. Moreover, a complex
model can sometimes act as an ‘uncertainty
amplifier’, so that the uncertainty in the results is much greater
than the uncertainty in the setup of the
model and the data it uses. Just as there are different kinds of
uncertainty that affect a model, there are
different kinds of uncertainty in model outcomes. The answers a
model gives might be basically correct, but prone to a degree of error. In other cases, the
outcomes might suddenly vary sharply
when the inputs change, or shift from a smoothly changing
continuum to an ‘on/off’ outcome. The
kinds of uncertainty in model outcomes affect how the model can be used reliably. Consequently, it is vital that
the uncertainty in a model’s results is communicated together
with the main results.
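One common way to examine these effects is Monte Carlo sampling: run the model many times with inputs drawn from their assumed distributions, and inspect the spread of the outputs. In the sketch below (toy model and input distribution invented for illustration), a modest 5% input uncertainty produces a much larger output uncertainty, the ‘uncertainty amplifier’ effect described above.

    import numpy as np

    rng = np.random.default_rng(seed=1)

    def toy_model(x):
        # Nonlinear response: small input changes can produce
        # disproportionately large output changes.
        return np.exp(3.0 * x)

    # Assume the input is known only to within about 5% (illustrative).
    inputs = rng.normal(loc=1.0, scale=0.05, size=100_000)
    outputs = toy_model(inputs)

    print(f"input spread  ~{inputs.std() / inputs.mean():.1%}")
    print(f"output spread ~{outputs.std() / outputs.mean():.1%}")
    print("95% interval:", np.percentile(outputs, [2.5, 97.5]))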
4.9. Communicating a model
While the process of modelling can greatly increase one’s
understanding of a problem, the true value
of a model only becomes apparent when it is communicated.
The communication of model results is an
important part of the modelling process: the user interface or
visualization is the only contact those not
directly working on it will have with a model. A visualization
should encapsulate all that is important to
know about the underlying model. It must somehow
communicate the model’s results and (ideally) its
assumptions to the intended audience, who may base important
decisions on their understanding of the
visualization. Consequently, even at the scoping stage it is
crucial to consider who the user of a model
will be, and how they will want to interact with it.
Making educated simplifications and assumptions is an inherent
part of the modelling process, as
is the presence of some uncertainty in model results. Given the
compelling nature of well-designed
visualizations and user interfaces, it is vital that they do not
misrepresent the reliability of the results they
communicate, just as an executive summary should be
representative of the conclusions and caveats of
the underlying report.
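One simple way to respect this in practice is to plot the central results and their uncertainty together, so that the interface itself carries the caveats. A minimal sketch, with invented results and an invented 90% interval:

    import numpy as np
    import matplotlib.pyplot as plt

    t = np.linspace(0, 10, 100)
    central = np.exp(0.1 * t)                      # invented central estimate
    lower, upper = central * 0.8, central * 1.25   # invented 90% interval

    fig, ax = plt.subplots()
    ax.plot(t, central, label="central estimate")
    # Shading the interval keeps the visualization honest about uncertainty.
    ax.fill_between(t, lower, upper, alpha=0.3, label="90% interval")
    ax.set_xlabel("time")
    ax.set_ylabel("modelled quantity")
    ax.legend()
    plt.show()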
4.10. Maintenance
As the Review of quality assurance of government models
(commonly known as the Macpherson review)
found in 2013 [7], once a model exists, it may be used for
purposes beyond that for which it was originally
designed, and it may continue to be used long after the time
when it should have been replaced. There
are at least three reasons for this:
— Users are reluctant to abandon the model. Unless appropriate maintenance activities have been put in place, the model’s results may become less and less
accurate because the system
being modelled has changed. The fact that the model has been
successful in the past can bolster
confidence in its credibility, without anyone realizing that the
model no longer fits what it is
modelling.
— The model’s use has changed. While the model would have
been tested to give good results for
its original purpose, the quality assurance may not guarantee its
validity following ‘creep’ in the
way it is being used. In addition, as staff involved in the model
move on to other projects, the
original understanding of the model’s assumptions, scope and
limitations may get lost.
— Model accretion. If extra parameters or routines are added to
the model to deal with new
demands or new data, the model may eventually become so
complicated that it is difficult for
anyone to understand it and use it correctly.
These dangers can be avoided, or at least ameliorated, by
scheduling regular reviews of the model to
check that it remains fit for purpose, and to ensure that the
documentation remains relevant. The review
may conclude that the model should be retired or re-written. To
ensure that such reviews do take place,
models should have long-term owners with responsibility for
their continued maintenance.
4.11. Preserving a model
An important aspect of documenting and maintaining a model is
to ensure that it is properly preserved
for later access, regardless of institutional and personnel
changes and the evolution of computing
infrastructure. One increasingly popular solution is to make the
model and documentation open source
and lodged on a platform such as GitHub (https://github.com/)
or CoMSES (https://comses.net/).
‘Open source’ means that the model code is freely available and
publicly accessible, under an open
licence. Open sourcing a model also means that others can
modify and use the model for their own
purposes (including, depending on the licence conditions, for
commercial purposes). The advantages of
open source include that what the model does and how it does it
is freely accessible and ‘transparent’;
other users and modellers can assist in the development and
maintenance of the model, and that the
platform takes over responsibility for the model’s long-term
preservation. On the other hand, opening
up a model in this way can raise issues of commercial
confidentiality and individual privacy and data
protection. The latter can be especially tricky if the model
depends on data provided by individuals for
its calibration.
5. Types of model and analysis
One might not need to know anything about the mechanisms
inside a very well established and
understood model. However, for other models (especially newly
developed models) it is useful to have
some understanding of the basis of their construction. In this
section, we give a brief summary of the
main aspects and approaches used.
Stakeholders often have very different perspectives on the key
abstractions and assumptions about the
system being modelled. Frames of reference [8] are one way of
articulating the variety of perspectives,
and their context. Clarity on frames allows different levels and types of concern to be balanced within
model development and analysis, driving the selection of model
type and techniques. Some common
frames are the following.
— Geographic: spatial and topological relationships, such as (static) locations of adjacent underground stations and the positions of emergency exits, or (dynamic) flows in a pipeline and networks of sensors on people, animals and objects.
— Temporal: how the expected certainty of the model varies over time. For example, weather forecasting becomes less certain the further we look into the future, and navigation models become less precise as we move away from the position where we last verified our location.
— Physical: underlying natural science, ecosystems and their governing laws, such as those that govern water flow, heat transfer or atmospheric physics.
— Security: threats and their mitigations, such as access controls, which prevent unauthorized persons or systems from physically entering or digitally accessing a system, and encryption methods that encode data so they can only be accessed via keys.
— Privacy: anonymity, identity, authentication of personally identifiable information, and controls on intended and unintended disclosures.
— Legal: obligations, permissions and responsibilities for different components within the system and for human users of the system.
— Social: communication and interaction relationships between humans involved in the system, and between humans and the physical/natural world and the underlying technologies.
— Economic: quantitative aspects of resource consumption, production and discovery; typical resources are energy, money and communication bandwidth.
— Uncertainty: what the acceptable bounds of uncertainty are for various aspects of the system, and how bounds are qualified, quantified and related to each other.
— Failures: relationships between components that can fail or operate incorrectly, including fail-safe mechanisms and redundancies.
Each frame (or frames) may require a different type of model
and analysis, and all kinds of framing
demand judgements about the scales to be adopted, from the
coarse to the fine-grained. A model
developed to address one frame of reference may not be suitable
for another frame and can be positively
misleading if this is attempted. For example, using a costing
model for rail ticket sales to assess the order
in which to upgrade signals or the impact of lengthening trains
by adding carriages could give very
misleading results. This is because the costing model would not
include details of how signals depend
on each other, or the loads that rails are designed to withstand.
It is thus helpful to make these frames of
reference explicit when developing or commissioning models.
5.1. Types of model
There is a wide range of computational modelling techniques, but they differ principally along
a few dimensions. Selecting particular points along these
dimensions implies a set of abstractions
and assumptions about the system being modelled, which in turn
determines how observations are
represented.
— Non-deterministic models can deliver several possible outputs from a given set of inputs. If you run a non-deterministic model today, and then run it again tomorrow with the same inputs, you may obtain different answers.
— Deterministic models always produce one specific output from a particular set of inputs or initial conditions. Determinism in models is often highly valued, because it allows one to make absolute assertions. However, many aspects of the physical world and human behaviours are fundamentally non-deterministic, and it may not be useful to try to model them in a deterministic way.
— Static models have no inherent concept of time and so outputs do not change over time. For instance, spreadsheets are static models, unless they explicitly encode time as an input.
— Dynamic models have outputs that change over time. Ordinary [9] and partial differential equations [10] are common mathematical dynamic models for representing the rate of change over time; they are widely used in engineering and environmental monitoring, and also in finance and economics. System dynamics [11] is a technique based on ordinary differential equations that is used widely in business and government when considering new policies. It is used to explore the possible effects of different policies and unanticipated consequences, as well as to develop understandings of the structural source of general patterns of behaviour.
— Discrete models represent objects or events by values that go up in steps—a series of integers or characters, for example. Common discrete models are based on sets of discrete states; for instance, transition systems [12] consist of discrete states with transitions between them.
— Continuous models involve representations that are ‘smooth’ and ‘dense’, using real numbers, for example. Differential equations are common continuous models. It is possible to combine both discrete and continuous aspects into a single model. For instance, a model may consist of a finite number of discrete states with the rates of transition between the states being continuous.
— Stochastic (also called probabilistic or statistical) models [13] have an inherent element of random, or uncertain, behaviour, and the events are assigned probabilities. This can be viewed as a special case of a non-deterministic model in which the probabilities are known (a short sketch contrasting deterministic and stochastic versions of the same process appears at the end of this subsection).
— Individual-based models represent each individual explicitly. These models are useful when one needs to track each individual through a system, or individuals vary significantly in their behaviour, or together the individuals form a complex and emergent system whose behaviour cannot be derived from simple aggregation. Typical examples include social insects, extremely large telecommunications networks (including the Internet), transportation networks, and stock markets. These systems are often tackled using agent-based models [14], typically containing a large set of autonomous agents, each representing an individual, that interact with each other based on their individual attributes and behaviours.
— Population models collectively represent large groups of individuals and are useful when individuals do not vary and an individual-based model is not tractable. When individuals do vary, but according to a small number of attributes, a population model based on counter-abstraction [15], which records the number of individuals with each trait (or combinations thereof), may be suitable.
— Logic models are statements in a formal logic, which may range from classical predicate logic [16], to temporal logics [17] for future behaviours, and probabilistic temporal logics [18] for future certainties/uncertainties.
— Automata and process algebraic models [19,20] allow simple and elegant representations of events occurring in multiple processes that send messages to each other. The underlying languages are algebraic, which means there are laws that define how the different operators (a sequence or choice between events, for example) relate to each other.
— Black-box models fit data and trends without revealing internal workings. Machine learning [21] is a common technique based on algorithms that, in effect, learn directly from past examples, data and experience. Machine learning is most valuable where there is little prior knowledge or intuition about how a system works, but where there is considerable available data. This opens up the possibility of making predictions about the future by extrapolating patterns in the data, in domains where that has not previously been possible. At present, the results may be difficult to interpret or explain; and the models may be robust only within relatively narrow contexts.
Common example combinations of techniques include stochastic
partial differential equations and
hybrid automata [22]. The latter have discrete states and
transitions between them, and each state is
a set of differential equations that describes the continuous
behaviour that applies during that state.
A drawback of some combinations is that analysis can be
difficult and may be poorly supported by
automated tools.
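To make the deterministic/stochastic and population/individual distinctions above concrete, here is a minimal Python sketch of the same decay process in two styles; the equations and parameters are invented for illustration. The deterministic version treats the population in aggregate; the stochastic version treats each unit individually.

    import random

    def deterministic_decay(x0, rate, steps):
        """Dynamic and deterministic: the same inputs always
        give exactly the same trajectory."""
        xs = [float(x0)]
        for _ in range(steps):
            xs.append(xs[-1] * (1 - rate))
        return xs

    def stochastic_decay(x0, rate, steps, rng):
        """Stochastic and individual-based: each unit decays with
        probability `rate` per step, so repeated runs differ."""
        xs = [x0]
        for _ in range(steps):
            xs.append(sum(1 for _ in range(xs[-1]) if rng.random() > rate))
        return xs

    rng = random.Random(42)
    print(deterministic_decay(1000, 0.1, 5))    # identical on every run
    print(stochastic_decay(1000, 0.1, 5, rng))  # varies from run to run

Averaged over many runs, the stochastic trajectories approach the deterministic one; the information of interest is often in their spread.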
5.2. Ensemble modelling
Ensemble modelling is an important approach to model
combination that involves running two or
more related (but different) models, and then combining their
results into a single result or comparing
them. When results within the ensemble disagree, this can
contribute to an understanding of whether
uncertainty is present as a result of the type of model (and so
the choice of model is crucial), or exists
within the system. As an example, ensembles are widely used in
weather forecasting, to show the
different ways in which a weather system can develop.
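A minimal sketch of the idea, assuming three invented, structurally different models of the same quantity: run each member, then report both a combined estimate and the disagreement between members.

    import statistics

    # Three structurally different (toy) models of the same quantity.
    models = {
        "linear":     lambda t: 2.0 * t + 1.0,
        "quadratic":  lambda t: 0.3 * t ** 2 + 1.0,
        "saturating": lambda t: 10.0 * t / (t + 3.0),
    }

    horizon = 5.0
    forecasts = {name: m(horizon) for name, m in models.items()}
    values = list(forecasts.values())

    print(forecasts)
    print("ensemble mean:", round(statistics.mean(values), 2))
    # A large spread signals structural (model-choice) uncertainty.
    print("ensemble spread:", round(statistics.stdev(values), 2))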
5.3. Analysis
Just as there are many types and techniques, there are also
different ways to ask questions and obtain
answers from models. Often the questions one can ask are
fundamentally linked to the modelling
technique. One of the most common types of analysis is
simulation, usually over a time period, often
called ‘running’ the model. If the model is deterministic, there
is only one simulation result; the output
of a static model depends entirely on the values assumed for any
input parameters. But if the model is
non-deterministic (i.e. has a random element) then there are
many possible answers—each time you run
it you will get a different answer that reflects random elements in the choices or in the environment. If you have such a model, it will require many runs to achieve a
representative picture of what happens.
Another type of analysis uses logic to formulate questions and
reasoning techniques to answer them.
For instance, questions about the performance of a modelled
telecommunications service, such as ‘after a request for a service, is there at least a 98% probability that the service will be delivered within 2 s?’, can be
expressed in a probabilistic temporal logic. Automated
reasoning tools such as theorem provers and
model checkers can be used to derive the answer.
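Where such reasoning tools are not available, a probability of this kind can also be estimated, more weakly, by repeated simulation. A toy sketch, with an invented delivery-time distribution:

    import random

    rng = random.Random(7)

    def simulate_delivery_time():
        """Toy stochastic model of service delivery time (seconds)."""
        return rng.expovariate(1 / 0.5)   # invented: mean 0.5 s

    runs = 100_000
    within = sum(simulate_delivery_time() <= 2.0 for _ in range(runs))
    estimate = within / runs
    print(f"P(delivered within 2 s) ~ {estimate:.3f}")
    print("meets the 0.98 requirement:", estimate >= 0.98)

Unlike a model checker, which can establish the property exhaustively for the model, repeated simulation yields only a statistical estimate subject to sampling error.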
5.4. Role of data
Data are observations that can provide evidence for a model.
The exact role of data depends on how
they were obtained, and the purpose of the model. For example,
if the model aims to offer rigorous
explanations or predict future outcomes of an existing system,
then data are necessary to validate the
model. If, on the other hand, the purpose of the model is to
specify a system design, or define how an
intended system is required to behave, then data are used to
validate the system against the model. In
other words, after the system has been implemented, one checks
it behaves as it should.
There is a further role for data when we are confident about the
essential structure of the model,
but do not know the bounds of some parameters. In this case,
data are used to fine-tune parameters
such as the duration or speed of an event. In all cases, care and expert judgement in interpreting validation results are required, especially when the model has
been determined mainly by data with few
structural assumptions imposed by the modeller, or if the data
are sparse, or when it is not possible
to experiment with the deployed system. For example, air traffic
systems are so crucial to modern life
that one cannot experiment with various parameters—such as
frequency of landings or proximity of
aircraft—to comprehensively check the system against the
model.
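The fine-tuning role can be as simple as choosing the parameter value that minimizes the mismatch between model output and observations. A minimal sketch, with the model structure, data and search grid all invented for illustration:

    import numpy as np

    t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
    observed = np.array([1.0, 0.62, 0.38, 0.22, 0.14])   # invented data

    def model(rate):
        """Assumed structure: exponential decay; only `rate` is unknown."""
        return np.exp(-rate * t)

    # Grid search: pick the rate with the smallest squared error.
    rates = np.linspace(0.01, 2.0, 500)
    errors = [np.sum((model(r) - observed) ** 2) for r in rates]
    best = rates[int(np.argmin(errors))]
    print(f"calibrated rate ~ {best:.3f}")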
6. Future of modelling
Modelling is changing fast, due to rapid growth in computing
power, an explosion in available data,
and greater ability of models to tackle extremely complex
systems. In the future, there will be a
greater need for reliable, predictive models that are relevant to
the large-scale, complex systems we
want to understand or wish to construct. While larger and more
sophisticated models will add to
predictive capability, they will also allow us to get a better
grasp on the limits to prediction, fundamental
uncertainties, and the capacity for tipping points and lock-in.
Some models will work closely with
(perhaps be embedded in) operational systems and derive data
from them, potentially in real time. These
data may come from the many sensors and actuators that are
now being added to systems, and we will
see new forms of modelling emerge as a consequence. The
following offers a glimpse of the changes,
challenges and potential rewards over the coming decade.
— Large-scale availability of data about individuals will
transform modelling. When we model a
population of individuals today, we often attempt to make
predictions using aggregate models
based on assumptions about hypothetical, ‘average’ members of
the population. In future, it may
be easier to eliminate these assumptions by modelling the
individuals directly.
— Models will require more extensively linked data. Some data
may be derived not from measurement
but from other models, requiring additional links to derived
data.
— Modelling will span many scales, and many levels of detail.
As various modelling communities come
together, bringing expertise from different disciplines and
sharing approaches to model design,
we will see more sophisticated ways to link models in ways that
describe systems at multiple
levels of detail.
— More models will be built by computers. Models may be
constructed from data by automated or
semi-automated inference. These models will have the capacity
to reveal unexpected results, but
it may be hard to guarantee that their mechanisms continue to
operate reliably in the face of new
evidence.
— Models will help to train computers. When computers learn
from real-world data, they need to be
exposed to both positive and negative examples. The latter can
be difficult to find: models may
be able to generate verisimilar data representing failures.
— More systems will become part of models and more models
will become part of systems. More components
of engineered systems will be software: that software may be
incorporated into models used to
predict the behaviour of the aggregate system built from components, and embedded models may drive aspects of system behaviour. This will change the
dynamic between modelling and
deployment of systems.
— New technologies will change modelling paradigms.
Specialist quantum simulators will soon become
available. They may allow us to develop models that predict
properties of materials or
pharmaceuticals, or make scenario planning for finance, defence
and medical diagnosis more
tractable.
— Ubiquitous sensors will require new forms of modelling.
Sensors, actuators and processors are
becoming more ubiquitous and more intelligent, yet sensors
decalibrate and degrade over time
both individually and as networks. The unreliability of data
from sensors will require more
spatial, dynamic and probabilistic styles of modelling.
— Modelling will be used more often for strategic and policy-
level issues. Modelling will increasingly
be used for high-level organizational planning and systems
thinking, adding more detail to
potential future scenarios, and allowing analysis of possible
outcomes of policy interventions.
— Senior decision-makers will increasingly become involved in
modelling. Senior decision-makers will
participate more often in building and using models. A
willingness to engage directly in
modelling, for example, by bringing modelling into the
boardroom, will increasingly be seen
as a sound approach to managing complexity.
— Some models will be oriented more towards humans and their
personal characteristics. We will have a
greater opportunity, as individuals, to supply (personal) data
that could be used to stimulate
modelling. However, there remain deep, unresolved social and
ethical issues around the
ownership of data and the use of models derived from personal
data.
— Models will help to train humans. Simulators are already
used to train jet pilots, Formula One
drivers and veterinary surgeons. High-fidelity models will soon
be used more widely, in
conjunction with virtual reality and ‘gamification’ in training
for doctors, military personnel,
police forces and school pupils, to name just a few.
— Models will become an important way to understand
properties of many complex systems. We
increasingly build systems so complex that their behaviours
cannot be explored in any depth.
The Internet itself is an example of a complex, engineered
system on which much of our
developed world now depends, and which is continuously
modelled and monitored in order
to explore its behaviours and monitor its performance.
7. Summary and conclusion
In order to deal with an increasingly complex world, we will
need ever more sophisticated models.
Computational models have the potential to help us make
decisions more wisely and to understand
the complicated and often counter-intuitive potential
consequences of our choices. However, as with all
tools, they can be applied in wrong or misleading ways. Thus a
degree of understanding of their uses
and properties is desirable. This paper brings together some of
that knowledge in order to promote the
better understanding of models. This is summarized by four
points.
First, it is important to be aware that models have different
kinds of uses. Effective deployment
requires both the user and the modeller to be aware of their
capabilities and limits. We have outlined
some broad categories of model purpose and the key role that
framing plays in balancing perspectives
and getting the best out of a model. Confusing or conflating
model purpose can result in the
inappropriate use of models, or a mistaken assessment of their
reliability.
Second, creating and using models well involves far more than
raw data and technical skills. A close
collaboration between model commissioners, developers, users
and reviewers provides an essential
framework for developing and using an effective model. We
have offered a guide to that process, which
is vital for building confidence in any model; the checklists in
appendix A suggest some questions to aid
those developing models and to aid communication between the
different actors.
Third, a little knowledge of the different technical bases on which models are built can be helpful. The
multitude of different modelling techniques can often appear
overwhelming; we have offered a simple
introduction to some of these, explaining the various questions
they can answer, and outlining their
strengths and weaknesses.
Last, modelling is changing fast. This presents a range of future
opportunities, which could transform
policymaking and business operations. We have outlined some
of those opportunities and also the fresh
challenges they provoke. There is a consequent increasing need
for the new skills and collaborations that
will underpin the future of modelling.
As the power and use of modelling grows, there is increased
risk that models could be poorly
constructed, misused or misunderstood. We need to reinforce
modelling as a discipline, so that
misconstruction and misuse are less likely; we need to increase
understanding of modelling across
a wide range of domains, from social policy to life sciences and
engineering, as well as encourage
sharing of insights and best-practice across these domains; and
we need to bring commissioners closer
to modelling, so that results are more useful. As computational
modelling develops and extends to
new application areas, there is enormous potential for
interdisciplinary and intersectoral developments.
The cross-fertilization of ideas between industries and academia, along with a mutual appreciation of
different sectors’ needs in modelling skills, represents an
exciting future for computational modelling.
Computational modelling already has an increasing impact on
how science is done, but this will now
extend into other areas of our lives. Thus it is imperative that
this tool is used appropriately and carefully.
We hope this paper will prompt all those involved to think
about how models are used and when they
can be relied upon.
Data accessibility. This article has no data.
Authors’ contributions. M.W. commissioned the review on
which this paper draws. The paper was written by M.C., B.E.
and N.G. with final comments from C.C. All authors contributed
to the review and have approved the publication of
this paper.
Competing interests. We declare we have no competing
interests.
Funding. No funding has been received for this article.
Acknowledgements. The authors would like to acknowledge the
support of Amanda Charles at the Government Office
for Science.
Appendix A
A.1. Making and using models: a checklist
This checklist is inspired by the UK government’s Scope
development checklist [23], and includes some of
the questions that need to be answered before and during the
creation and use of a model. They could
form the basis for an initial discussion between model
commissioners and model developers, to clarify
their understanding of what will be involved, and during model
building and use. In addition, they can
serve as a point of departure for model reviewers.
Purpose
— What is the issue or issues under consideration?
— If there is more than one issue, how are they related?
— What is the context of the issue?
— What are the specific questions that need to be answered and
can modelling address them?
Scope
— What must the model cover?
— What can be excluded from the model?
— What is the minimum viable scope that can be used as a
starting point for the model?
Output and follow up
— What kind of outputs or results might answer the questions
raised?
— What format should be used to present the results?
— What controls are in place to make sure the model is not used
incorrectly?
Design and building
— What level of detail is needed for the model in each of its
frames of reference?
— What accuracy is required in the output?
— What should the trade-off between accuracy, simplicity and
robustness be?
— What modelling techniques will be used, and why those?
Which alternatives were considered?
— How do the chosen modelling techniques have an impact on
the accountability of decisions?
Data and assumptions
— What data are available and how robust are they?
— Are there judgements about the quality of the data that will
need to be made?
— How accurate are the available data, and how does that match
with the required accuracy of the
outputs?
— How will each of the assumptions be justified?
— What alternative assumptions could be made?
Quality assurance
— What verification procedures will be used to check that the
model works as expected?
— How will the model be validated, and what data will be used
for doing so?
— Is there a schedule of reviews to ensure that the model
remains up to date?
Who
— Who will be the users of the model?
— Who will have overall responsibility for the model, its
development and its use?
— Who will provide the data and the knowledge required to
build the model?
— Who will develop the model?
— Who are the stakeholders (in other words, who is interested
in the issue, who could contribute,
who can influence and who will be impacted)?
— How will stakeholders be involved, and at what stage can they be most useful?
— Do the stakeholders all have the same concerns and questions
about the issue? If not, what are
their perspectives, and which frames of reference are to be
considered?
— Who will provide quality assurance?
— Who will determine when the model is no longer useful?
Communication
— What methods will be used to communicate with users?
— What are their needs and abilities to appreciate the model
and what it provides?
— Are visualizations, dynamic graphs and movies appropriate
to convey the messages of the model
and, if so, have resources been set aside to create these?
Resources
— Has anything similar been done before? If so, what can be
learned from it?
— Are sufficient skills and expertise available and, if not, how
can this be managed?
— What is the timescale for the work?
— What resources (time and money, for example) are available?
— Is it necessary and affordable to build a model, or could
some other approach be used that
requires fewer resources?
— What would be the consequences if the work is not carried
out at all, or the start is delayed?
A.2. What users should ask about a model: a checklist
These questions are ones that those who are contemplating the
use of an existing model should ask
themselves. The checklist is based on the authors’ experience
and sources such as [23].
— Does the model offer answers to the problems that I have?
— Are the assumptions it makes ones that I agree with?
— If the model offers an explanation or prediction, has the
model been validated sufficiently against
empirical data (or in any way at all)?
— If the model has no or weak empirical basis, is this adequate
to my needs?
— Is the model documented so that I can understand how it
works?
— Is the model output clear and comprehensible?
— Does the model output seem plausible when compared with
other sources of information?
— Has the degree of uncertainty in the model output been
properly recorded and its implications
recognized?
— Is the model being used for its original intended purpose or,
if not, is the new purpose compatible
with the design of the model?
— Have other stakeholders or users been involved in the model
design and use and, if so, do they
agree that the model is useful?
References
1. Government Office for Science. 2018 Computational modelling: technological futures. See https://www.gov.uk/government/publications/computational-modelling-blackett-review.
2. Epstein JM. 2008 Why model? J. Artif. Soc. Soc. Simul. 11, 12. See http://jasss.soc.surrey.ac.uk/11/4/12.html.
3. Huckfeldt R, Johnson PE, Sprague J. 2004 Political disagreement: the survival of diverse opinions within communication networks. Cambridge, UK: Cambridge University Press.
4. Nordhaus W, Sztorc P. 2013 DICE 2013R: introduction and user's manual, 2nd edn. See http://www.econ.yale.edu/nordhaus/homepage/homepage/documents/DICE_Manual_100413r1.pdf.
5. Nordhaus WD. 2010 Economic aspects of global warming in a post-Copenhagen environment. Proc. Natl Acad. Sci. USA 107, 11 721–11 726. (doi:10.1073/pnas.1005985107)
6. Phillips LD. 1984 A theory of requisite decision models. Acta Psychol. 56, 29–48. (doi:10.1016/0001-6918(84)90005-2)
7. Macpherson N. 2013 Review of quality assurance of Government analytical models: final report. HM Treasury. See https://www.gov.uk/government/publications/review-of-quality-assurance-of-government-models.
8. Calder M, Dobson S, Fisher M, McCann J. 2018 Making sense of the world: models for reliable sensor-driven systems. (http://arxiv.org/abs/1803.10478)
10. Ockendon J, Howison S, Lacey A, Movchan A. 1999 Applied partial differential equations. Oxford, UK: Oxford University Press.
11. Forrester JW. 1968 Principles of systems. Cambridge, MA: MIT Press.
12. Keller R. 1976 Formal verification of parallel programs. Commun. ACM 19, 371–384. (doi:10.1145/360248.360251)
13. Taylor H, Karlin S. 1998 An introduction to stochastic modeling. New York, NY: Academic Press.
14. Gilbert N. 2008 Agent-based models. Beverly Hills, CA: SAGE.
15. Pnueli A, Xu J, Zuck LD. 2002 Liveness with (0, 1, ∞)-counter abstraction. In Computer aided verification (eds E Brinksma, KG Larsen). Lecture Notes in Computer Science, vol. 2404, pp. 107–122. Berlin, Germany: Springer. (doi:10.1007/3-540-45657-0_9)
16. Kleene SC. 1967 Mathematical logic. New York, NY: Wiley.
17. Fisher M. 2011 An introduction to practical formal methods using temporal logic. New York, NY: Wiley.
18. Hansson H, Jonsson B. 1994 A logic for reasoning about time and reliability. Formal Aspects Comput. 6, 102–111. (doi:10.1007/BF01211866)
19. Baeten JCM. 2005 A brief history of process algebra. Theor. Comput. Sci. 335, 131–146. (doi:10.1016/j.tcs.2004.07.036)
20. Milner R. 2009 The space and motion of communicating agents. Cambridge, UK: Cambridge University Press.
22. Henzinger T. 1996 The theory of hybrid automata. In Proc. 11th Annual IEEE Symp. on Logic in Computer Science, New Brunswick, NJ, USA, 27–30 July 1996, pp. 278–292. (doi:10.1109/LICS.1996.561342)
23. Department for Business, Energy & Industrial Strategy. 2015 Turning a policy question into an analytical framework: scope development checklist. See https://www.gov.uk/government/publications/scope-development-checklist.
Cognitive Social Simulation for Policy Making
Ron Sun
Rensselaer Polytechnic Institute, Troy, NY, USA
Corresponding Author: Ron Sun, Cognitive Science Department, Rensselaer Polytechnic Institute, 110 8th Street, Troy, NY 12180, USA. Email: dr.ron.s[email protected]
Abstract
Cognitive social simulation is at the intersection of cognitive modeling and social simulation, two forms of computer-based, quantitative modeling and understanding. Cognitive modeling centers on producing precise computational or mathematical models of mental processes (such as human reasoning or decision making), while social simulation focuses on precise models of social processes (such as group discussion or collective decision making). By combining cognitive and social models, cognitive social simulation is poised to address issues concerning both individual and social processes. Because detailed simulation enables precise analysis of possible scenarios and outcomes, it can help to anticipate the implications of policies. Thus, cognitive social simulation will have practical applications in relation to policy making in many areas that require understanding at both the individual and the aggregate level.
Keywords
cognition, modeling, simulation, social simulation, cognitive architecture, computation
Blending cognitive and social models leads to tools for more precisely understanding policy implications at both individual and social levels.
Key Points
• Cognitive modeling and social simulation together capture both individual mental processes and interpersonal social processes, for better understanding the implications of public policies.
• Detailed simulation enables precise understanding of possible scenarios and outcomes, which can guide better policies.
• Cognitive social simulation should be part of the curriculum of public policy studies.
Introduction
Predicting the effects of policies can benefit from detailed computational analyses. As a simple comparison, weather forecasting has improved tremendously with the use of computer models that run various scenarios and produce outcomes (e.g., the paths of a hurricane) with specified probabilities. Similarly, when policy makers consider a certain social or economic policy, they would want to know its full implications. They would like to know the implications in terms of quantifiable and measurable outcomes, such as the total increase in revenue or the total cost to taxpayers.
But, beyond these rather direct outcomes, they may also
want to consider more indirect implications, such as how it
affects different individuals’ perception, emotion, and moti-
vation; how changed perception, emotion, and motivation
lead to cultural and societal changes; and how all of these
changes lead to altering quantifiable and not-so-quantifiable
outcomes. Rather than relying on speculations, one would
definitely want more reliable means for understanding them.
Thus, policy makers (and others) may need to look into
not just social sciences but also cognitive sciences (broadly
defined), to better understand such issues in relation to poli-
cies. Moreover, they may also want to look into combining
social and cognitive sciences somehow, to connect analyses
at different levels for the sake of a more comprehensive
understanding (Sun, 2012).
Furthermore, rather than relying purely on verbal–conceptual theories regarding complex social and cognitive matters, a more exact, more quantitative approach is desirable. For example, given the complexity of the
human mind, it has proven difficult to infer fine-grained cog-
nitive-psychological details from behavior alone. Although
experimentalists may come up with an informal (verbal–conceptual) theory to aid inquiry, the full consequences of such a theory may not be obvious, its details may be underspecified, and its ambiguities and inconsistencies may be hard to discover or avoid (Sun, Coward, & Zenzen, 2005).
Computational modeling, unlike verbal–conceptual theories,
is precise and also expressive (capable of precisely describ-
ing many details). It is a suitable ground upon which detailed
theories may be constructed and tested (Sun et al., 2005;
Vernon, 2014). Sun (2001, 2006) argued for the role of com-
putational modeling in understanding social-cognitive issues,
especially computational social simulation with realistic
cognitive models (i.e., cognitive social simulation), utilizing
“cognitive architectures” in particular. The present article
aims to explore such possibilities.
Specifically, cognitive modeling, an approach developed
in cognitive sciences, centers on producing precise computa-
tional (or mathematical) models of individual mental pro-
cesses (such as detailed models of human memory, reasoning,
or decision making). Social simulation, as developed in
social sciences, focuses on precise computational models of
social processes (such as models of interaction between two
individuals, group discussions and decision making, or other
collective processes). Cognitive social simulation combines
methodologies of both cognitive modeling and social simu-
lation (see examples in subsequent sections). By combining
cognitive and social computational models, cognitive social
simulation is poised to address issues concerning both indi-
vidual and social processes and their interaction. Thus, cog-
nitive social simulation will have practical applications in
relation to policy making in many areas that require under-
standing at both the individual and the aggregate level.
Note that the present article does not aim at providing spe-
cific policy recommendations. Rather, it aims at describing
how some lines of research may lead, in the near future, to
providing specific policy recommendations in many areas
(e.g., through quantitatively exploring deep policy implica-
tions). If one is looking for any concrete recommendation
from this article, it is that the further development of these
lines of research will benefit future society and future policy
making, and thus should be closely watched or adopted by
policy makers.
Combining Cognitive Modeling and
Social Simulation
Some basic concepts, explained below, show how this combination works. The notion of an “agent” (i.e., an autonomous entity) has occupied a major role in defining social and cognitive research. Below, I briefly examine this notion in both the social sciences and the cognitive sciences, which points toward their integration.
Computational models of agents often take the form of a
“cognitive architecture” as developed in the cognitive sciences,
that is, a broadly scoped, domain-generic computational model
describing the essential structures and processes of cognition
(psychology). They are often used for broad analysis of cogni-
tion and behavior (Anderson & Lebiere, 1998; Carley &
Newell, 1994; Sun, 2002, 2016; Vernon, 2014). In particular,
cognitive architectures provide a means for specifying a wide
range of cognitive-psychological processes together, in tan-
gible (computational) forms, although traditionally the focus
of research in the cognitive sciences has been on specific
aspects of cognition. For example, a cognitive architecture
may include memory, reasoning, decision making, and other
cognitive functionalities.
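To make this concrete, the following minimal Python sketch shows the kind of structure a cognitive architecture bundles together: domain-generic stores for declarative and procedural knowledge, plus a generic decision cycle that task-specific models reuse. It is purely illustrative (the class, rules, and names are invented here) and is not ACT-R, Clarion, or any other published architecture.

```python
# Schematic sketch of what a cognitive architecture bundles together:
# domain-generic stores for declarative and procedural knowledge plus a
# generic decision cycle. Illustrative only -- not ACT-R or Clarion.

class MiniCognitiveAgent:
    def __init__(self):
        self.memory = {}      # declarative memory: facts the agent holds
        self.rules = []       # procedural memory: (condition, action) pairs

    def remember(self, key, value):
        self.memory[key] = value

    def add_rule(self, condition, action):
        self.rules.append((condition, action))

    def decide(self, observation):
        # Decision cycle: match rules against the observation combined
        # with memory contents; fire the first rule that applies.
        context = {**self.memory, **observation}
        for condition, action in self.rules:
            if condition(context):
                return action
        return "do_nothing"   # default when no rule matches

# The same architecture hosts different task models via memory and rules.
agent = MiniCognitiveAgent()
agent.remember("risk_averse", True)
agent.add_rule(lambda c: c.get("threat") and c.get("risk_averse"), "withdraw")
agent.add_rule(lambda c: c.get("threat"), "confront")
print(agent.decide({"threat": True}))   # -> withdraw
```

The point of the architecture is the reusable scaffolding: swapping in different memories and rules yields different task models without changing the decision cycle.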
A cognitive architecture provides a concrete framework
for more detailed modeling and simulation of cognitive-psy-
chological phenomena, through specifying important struc-
tures and a variety of other essential aspects. It thus helps to
narrow down possibilities, provide scaffolding, and embody
foundational theoretical assumptions. The usefulness of cog-
nitive architectures has been demonstrated and argued before
(see, for example, Anderson & Lebiere, 1998; Sun, 2002,
2016; Vernon, 2014).
Computational cognitive modeling, especially with cog-
nitive architectures, has become an essential area of research
in the cognitive sciences. Cognitive architectures specify,
often in considerable computational detail, the mechanisms
and processes underlying cognition. Cognitive architectures
unify various subfields of the cognitive sciences by provid-
ing unified computational accounts of specialized findings in
an integrated model. Some of them have accounted for hun-
dreds of phenomena from cognitive psychology, social psy-
chology, personality psychology, industrial/organizational
psychology, and more (e.g., Sun, 2016). Such developments,
however, need to be extended to issues of multiagent social
interaction.
In contrast, most models of agents in the social sciences
have been simple, although there have been some promising
developments recently (Conte, Andrighetto, & Campennl,
2013; Edmonds, 2014; Sun, 2006). Generally speaking, two
approaches dominate the social sciences. The first approach
may be termed the “deductive” approach (Axelrod, 1997;
Moss, 1999), exemplified by much research in classical eco-
nomics. It centers on the construction of mathematical mod-
els, usually as a set of equations. Deduction may be used to
find consequences of assumptions. The second approach
may be termed the “inductive” approach, exemplified by
many traditional approaches to sociology. Insights are
obtained by generalizations from observations; these insights
are often qualitative, and phenomena are described in terms
of general categories.
However, a relatively new approach involves computa-
tional modeling and simulation of social phenomena, which
starts with a set of assumptions in the forms of rules, mecha-
nisms, or processes. Simulations then lead to data that can be
analyzed. Both inductive and deductive methods may be
applied on simulation data: Induction can be used to find pat-
terns in data, and deduction can be used to find consequences
of assumptions (i.e., rules, mechanisms, and processes speci-
fied). Thus, simulations are useful in multiple ways (Axelrod,
1997; Moss, 1999).
This third approach centers on agent-based social simula-
tions, that is, simulations based on autonomous individual
entities. Such simulations explore the interaction among
agents whereby complex patterns may emerge. Thus, they
provide explanations for corresponding social phenomena
(Gilbert & Conte, 1995). Agent-based social simulation
often tests theoretical models in the social sciences or inves-
tigates their properties (when analytical solutions are diffi-
cult). A simulation may even serve as a theory by itself.
Researchers have turned to agents for studying a wide range
of issues (Conte, Hegselmann, & Terna, 1997; Gilbert &
Conte, 1995; Gilbert & Doran, 1994; Moss & Davidsson,
2001).
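As a concrete illustration of this third approach, here is a minimal agent-based sketch (all rules and parameter values are invented for the example): agents follow simple local rules, the simulation generates data, and induction or deduction can then be applied to the results.

```python
import random

# Minimal agent-based social simulation: each agent holds a binary
# opinion and, when updated, adopts the majority opinion of a few
# randomly chosen peers, with a small chance of deviating at random.
# Aggregate patterns (here, consensus) emerge from these local rules.

random.seed(42)
N_AGENTS, N_UPDATES, SAMPLE, NOISE = 200, 10_000, 5, 0.02
opinions = [random.choice([0, 1]) for _ in range(N_AGENTS)]

for _ in range(N_UPDATES):
    i = random.randrange(N_AGENTS)                  # agent to update
    peers = random.sample(range(N_AGENTS), SAMPLE)  # peers consulted
    majority = 1 if sum(opinions[j] for j in peers) * 2 > SAMPLE else 0
    # With probability NOISE the agent deviates from the local majority.
    opinions[i] = 1 - majority if random.random() < NOISE else majority

# The simulation output is data to be analyzed, e.g. the final consensus.
share = sum(opinions) / N_AGENTS
print(f"share of agents holding opinion 1: {share:.2f}")
```

Agents in this sketch are deliberately minimal rule-followers; the discussion below turns to what is gained by making them cognitively realistic.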
Some work in social simulation assumes rudimentary cognition-psychology on the part of agents: agent models have often been custom-tailored to the task at hand, frequently amounting to a restricted set of highly domain-specific rules not comparable to cognitive architectures in sophistication. Although
this approach may be adequate for achieving some limited
objectives, it is overall unsatisfactory: It not only limits the
realism, and hence applicability, of social simulation but also
precludes the possibility of tackling the theoretical question
of the micro–macro link (Alexander, Giesen, Munch, &
Smelser, 1987; Sawyer, 2003).
Investigation, modeling, and simulation of social phe-
nomena need cognitive sciences, because such endeavors
need a better understanding, and better models, of individual
cognition-psychology; only on this basis can better models
of aggregate processes be developed (Castelfranchi, 2001;
Sun, 2001). Cognitive models may provide better grounding
for understanding multiagent interaction, by incorporating
realistic constraints, capabilities, and tendencies of individ-
ual agents in their interaction with their environments (e.g.,
as argued at length in Sun, 2001). Some researchers have
already started to explore the cognitive basis of social, politi-
cal, religious, and cultural processes (e.g., Atran &
Norenzayan, 2004; Boyer & Ramble, 2001; Castelfranchi,
2001; Jager, 2017; Kim, Taber, & Lodge, 2010; Mithen,
1996; Turner, 2000). Although some cognitive details may
ultimately prove to be irrelevant, this cannot be determined a
priori, and thus modeling may be useful in determining
which aspects of cognition can be safely abstracted away.
Although, generally speaking, computational modeling is
often limited to a particular level at a time (e.g., the social,
the cognitive-psychological, etc.), this need not be the case:
Cross-level analysis and modeling, such as combining cogni-
tive modeling and social simulation, could be enlightening,
and might even be essential (Sun, 2012; Sun et al., 2005).
These levels do interact with each other (e.g., by
constraining each other) and may not be easily isolated and
tackled alone. Moreover, their respective territories are often
intermingled, without clear-cut boundaries. One may start
with purely social descriptions but then substitute cognitive-
psychological principles and details for simpler descriptions
of agents. Thus, the differences and the separations among
levels can be rather fluid. (Note that Sun et al., 2005, and
Sun, 2012, provided detailed arguments for crossing and
mixing the levels of the social, the cognitive-psychological,
etc.; Sun, 2006, provided more technical discussions of inte-
grating social simulation and cognitive modeling.)
The remainder of this article discusses examples of cogni-
tive social simulation. Note that simulations may differ in
terms of their cognitive-psychological realism. Social simula-
tion models can be rather noncognitive, by using, for exam-
ple, a simple set of rules for an individual agent (Axelrod,
1984). Social simulation models can also be much more cog-
nitive, by using well-developed cognitive architectures (e.g.,
Sun & Naveh, 2004). In between, there can be models of vari-
ous cognitive complexity (Carley & Newell, 1994; Goldspink,
2000; Jager, 2017). In terms of noncognitive details, one may
include in a model only highly abstract social scenarios, for
example, as described by game theory (Von Neumann &
Morgenstern, 1944), or one may include a lot more details of
the scenarios as captured in ethnographical studies (e.g.,
Clancey, Sierhuis, Damer, & Brodsky, 2006).
Examples of Cognitive Social
Simulation
Below we look into a few examples of cognitive social
simulation.
A Cognitive Simulation of Games
Some work in cognitive social simulation extends existing
formal frameworks of agent interaction, taking into consider-
ation cognitive processes more realistically. For instance,
various modifications of, and extensions to, game theory
(Von Neumann & Morgenstern, 1944) move in the direction
of enhanced cognitive realism. Although policy makers may
use game theory to find mathematically optimal strategies
for various situations, humans often do not adopt optimal
game theoretical strategies (Axelrod, 1984).
For instance, a cognitive social simulation (by West,
Lebiere, & Bothell, 2006) found that human players did not
use a fixed way of responding. Instead, they attempted to
adjust their responses to exploit perceived weaknesses in
their opponents’ play. The researchers argued that humans have evolved to play this way, and that the human cognitive system evolved to support a superior ability to do so.
These researchers (West et al., 2006) created a cognitive
model of how people play games by applying the ACT-R
cognitive architecture (Anderson & Lebiere, 1998), and then
compared it with the behavior of actual human players. For
instance, standard game theory requires that players be
able to select moves randomly in accordance with preset
probabilities, but research has repeatedly shown that people
are very poor at doing this, suggesting that the evolutionary
success of humans is not based on this ability. People try to
detect the opponent’s sequential patterns of contingent
choices (such as tit-for-tat) and use this information to make
the next move. This is consistent with psychological research
showing that, when sequential dependencies exist, people
can detect and exploit them (e.g., Estes, 1972).
Using this model, they found the following results: (a) the interaction between two agents of this type produced seemingly random play; (b) the sequential patterns produced by this process were temporary and short-lived; and (c) human subjects played similarly to a lag-2 network that was punished for ties: that is, people predicted their opponents’ moves by using information from the previous two moves, and they treated ties as losses.
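The core mechanism behind these results can be sketched compactly. The code below is not the ACT-R model of West et al. (2006); it only illustrates the lag-2 idea, tabulating the opponent's moves conditioned on the opponent's previous two moves and playing the counter to the most frequent continuation (the game, the opponent's pattern, and all names are invented for the example).

```python
import random
from collections import defaultdict

MOVES = ["rock", "paper", "scissors"]
BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}  # value beats key

class Lag2Player:
    """Predicts the opponent's next move from the opponent's previous two
    moves (a lag-2 dependency) and plays the counter-move."""

    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))  # context -> move counts
        self.history = []                                    # opponent's past moves

    def choose(self):
        if len(self.history) < 2:
            return random.choice(MOVES)
        table = self.counts[tuple(self.history[-2:])]
        if not table:
            return random.choice(MOVES)        # unseen context: guess
        predicted = max(table, key=table.get)  # most frequent continuation
        return BEATS[predicted]                # counter the predicted move

    def observe(self, opponent_move):
        if len(self.history) >= 2:
            self.counts[tuple(self.history[-2:])][opponent_move] += 1
        self.history.append(opponent_move)

# Against a (hypothetical) patterned opponent, the predictor learns fast.
random.seed(0)
player, pattern, wins = Lag2Player(), ["rock", "rock", "paper"], 0
for t in range(300):
    opp = pattern[t % 3]
    wins += player.choose() == BEATS[opp]  # we win if we play what beats opp
    player.observe(opp)
print(f"win rate against the patterned opponent: {wins / 300:.2f}")
```

Against a patterned opponent the predictor quickly wins well above chance, which is the sense in which detecting sequential dependencies can be a superior strategy to playing randomly.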
For other work that attempts to make game theory more
cognitively–psychologically realistic, see, for example,
Axelrod (1984), Camerer (1997), and Juvina, Lebiere, and
Gonzalez (2015), among others.
A Cognitive Simulation of Organizations
Another example is a simulation of organizations based on the Clarion cognitive architecture (Sun, 2002, 2016), which helped to shed light on the role of cognition in organizations (Sun & Naveh, 2004).
The simulation focused on a typical task faced by organi-
zations—classification (Carley, Prietula, & Lin, 1998). In this
simulation, no one single agent has access to all the informa-
tion relevant to making a decision, and separate decisions
made by different agents are integrated. Organizational struc-
tures include two types: (a) teams, which treat individual
decisions as votes, and the organization decision is the major-
ity decision; and (b) hierarchies, in which the decision of a
superior is based solely on the recommendations of subordi-
nates. Information is accessible to each agent in two different
ways: (a) distributed access, in which each agent sees a differ-
ent subset of attributes, and (b) blocked access, in which sev-
eral agents see exactly the same subset of attributes.
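To make the setup concrete, here is a minimal rule-based sketch of this classification task under each combination of structure and access. Everything in it is illustrative (nine binary attributes, majority-rule agents, invented numbers); it includes none of Clarion's learning mechanisms or cognitive parameters.

```python
import random

# Minimal sketch of the organizational classification task: an object has
# nine binary attributes and its true class is the majority value. No one
# agent sees every attribute; the organization integrates partial votes.

random.seed(1)
N_ATTR, TRIALS = 9, 2000

def majority(bits):
    return 1 if sum(bits) * 2 > len(bits) else 0

def views(obj, access):
    if access == "blocked":  # trios of agents see exactly the same attributes
        return [obj[3 * (i // 3): 3 * (i // 3) + 3] for i in range(9)]
    # "distributed": each agent sees a different (overlapping) subset
    return [[obj[i], obj[(i + 1) % 9], obj[(i + 2) % 9]] for i in range(9)]

def decide(votes, structure):
    if structure == "team":  # individual decisions are treated as votes
        return majority(votes)
    # "hierarchy": each superior decides solely from subordinates' advice
    managers = [majority(votes[3 * k: 3 * k + 3]) for k in range(3)]
    return majority(managers)

for access in ("distributed", "blocked"):
    for structure in ("team", "hierarchy"):
        correct = 0
        for _ in range(TRIALS):
            obj = [random.randint(0, 1) for _ in range(N_ATTR)]
            votes = [majority(v) for v in views(obj, access)]
            correct += decide(votes, structure) == majority(obj)
        print(f"{access:11} {structure:9} accuracy = {correct / TRIALS:.3f}")
```

Replacing the fixed majority rules with learning agents, as the Clarion-based simulation does, is what makes it possible to vary cognitive parameters and observe their organizational effects.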
Because the Clarion cognitive architecture is intended for
capturing all the essential cognitive-psychological processes
(Sun, 2002, 2016), its cognitive parameters include, for
example, learning rate, generalization threshold, probability
of using implicit versus explicit processing, and so on. With
these parameters, the results of the simulation closely accord with the patterns of the human data (e.g., with teams outperforming hierarchies, and distributed access being superior to blocked access; cf. Carley et al., 1998), far more closely than previous simulations did, which demonstrates the advantage of cognitive social simulation.
But what happens when cognitive parameters are varied? The statistical results show the superiority of teams and distributed information access early on, and the disappearance or reversal of these advantages later. The analysis showed that this trend did not depend on any one setting of the parameters.
In sum, the cognitive social simulation with the Clarion
cognitive architecture more accurately captured organiza-
tional performance data and led to deeper explanations for
the results (see Sun & Naveh, 2004, for details). Furthermore,
with Clarion, one can vary parameters that correspond to
cognitive processes and test their effects on collective perfor-
mance; thus, this approach may be used to predict human
performance in organizational settings and to prescribe opti-
mal or near-optimal cognitive abilities for individuals for
specific tasks and organizational structures.
For other possibilities of cognitive social simulation of organizations and groups, see, for example, Carley et al.
(1998); Clancey, Sierhuis, Damer, and Brodsky (2006);
Clancey, Linde, Seah, and Shafto (2013); Helmhout (2006);
Prietula, Carley, and Gasser (1998); and so on.
Some Other Cognitive Social Simulations
In addition, some cognitive social simulations may include
evolutionary processes, for example, evolutionary simula-
tion of social survival strategies (Cecconi & Parisi, 1998;
Sun & Naveh, 2007), evolution of individual motivational
processes (Sun & Fleischer, 2012), and simulating other
issues relevant to evolution of cognitive processes in social
settings (Kenrick, Li, & Butner, 2003; Kluver, Malecki,
Schmidt, & Stoica, 2003).
Other cognitive social simulations include models of indi-
vidual and collective motivation (e.g., Clancey et al., 2006;
Wilson, Sun, & Mathews, 2009), personality and personality
interaction (e.g., Quek & Moskowitz, 2007; Sun & Wilson,
2014), emotion and emotion contagion (e.g., Allen & Sun,
2016; Thagard & Kroon, 2006), and human morality (Bretz
& Sun, 2018). For example, unified models of motivation,
emotion, personality, moral judgment, and so on have been
developed within the Clarion cognitive architecture (Sun,
2016), for the sake of in-depth understanding of these aspects
together. For other models of emotions in social settings, see
also Erisen, Lodge, and Taber (2014); Gratch, Mao, and
Marsella (2006); and so on. These models further strengthen
cognitive social simulation and its abilities to tackle deeper
psychological issues involved in social processes. They help not only with the better understanding of motivation, emotion, personality, and so on, but also with the better understanding of their roles in social interaction.
Furthermore, simulations of political behavior have been
undertaken. For example, the Clarion cognitive architecture
was applied to studying voter decisions in an election cam-
paign (Schreiber, 2004). The ACT-R cognitive architecture
was applied to produce a computational model of political
attitudes, incorporating psychological theories with findings
from electoral behavior (Kim et al., 2010).
An analysis of social issues based on some existing models of cognitive agents was proposed by White (in press), who suggested that these models can help in understanding weighty social and political issues (such as those involved in international geopolitics) and may lead to reasonable solutions to them. Using cognitive social simulation to tackle such consequential problems is an important suggestion.
Issues addressed by social simulation, especially cogni-
tive social simulation, have been diverse. They include, for
example, emotional interaction, crowd behavior, tribal cus-
toms, belief systems, academic publishing and citation, game
playing, stock market dynamics, social cooperation, group
interaction, organizational decision making, political behav-
ior, evolution of language, formation of social norms, and
countless others (see, for example, Sun, 2006).
Applications and Further
Developments
By incorporating detailed cognitive models, one can take
into consideration human cognition-psychology when pre-
dicting or explaining collective social outcomes (Sun, 2012;
Sun & Naveh, 2004). Conversely, one can also take into consideration sociocultural processes in understanding the individual mind (Nisbett, Peng, Choi, & Norenzayan, 2001;
Vygotsky, 1962; Zerubavel, 1997). The result is better, more
detailed, and more accurate models and simulations.
Cognitive social simulation is still at an early stage of
development, given the relatively recent emergence of the
two fields on which it is based (social simulation and cogni-
tive modeling, including cognitive architectures). Many
research issues and challenges remain to be addressed to bet-
ter serve policy makers.
First, whether or not to use detailed cognitive models in
social simulation is a decision that has to be made on a case-by-
case basis. There are many reasons for using or not using
detailed cognitive models. The reasons for using detailed cogni-
tive models include the following: (a) cognitive realism may
lead to more accurately capturing human data in social simula-
tion; (b) with cognitive realism, one will be able to formulate
deeper explanations for results observed, by basing explana-
tions on cognitive factors rather than arbitrary assumptions; and
(c) with detailed cognitive models, one can vary parameters that
correspond to cognitive processes and test their effects on out-
comes, and in this way, simulations may be used to predict out-
comes based on cognitive factors or to improve performance by
prescribing optimal cognitive abilities for specific tasks.
The reasons for not using detailed cognitive models in
social simulation include the following: (a) it is sometimes
possible to describe causal relationships at higher levels
without referring to relationships at lower levels (Goldstone
& Janssen, 2005); (b) complexity may make it difficult to
interpret results in terms of their precise contributing factors;
and (c) complexity also leads to longer running times and
hence raises issues of scalability.
Another issue facing cognitive social simulation is valida-
tion of simulation results, including validation of cognitive
models as part of social simulation. Validation of complex
simulation models is always difficult (Axtell, Axelrod, &
Cohen, 1996; Moss, 2006; Pew & Mavor, 1998). However,
in this regard, adopting existing cognitive models as part of a
cognitive social simulation may be beneficial: If one adopts
a well-established cognitive model (a cognitive architecture
in particular), the prior validation of that cognitive model
may be leveraged in validating the overall simulation results.
Therefore, there is a significant advantage in adopting an
existing cognitive model.
This area of research will come to fruition in relation to better understanding cognition and sociality, as well as their interaction (Sun, 2006, 2012). Consequently, in terms of its relevance to policy making, we may briefly examine a few example cases below.
For instance, computational models of organizational structures and dynamics built on cognitive models (as discussed earlier) can be useful for understanding, or even designing, organizational structures and makeups to improve organizational performance in various situations.
Cognitive architectures have been applied to the simulation
of organizational decision making (as described earlier;
Carley et al., 1998; Sun & Naveh, 2004). Relatedly, there
have also been cognitively based models of group dynamics
(Clancey et al., 2006). These models can lead to significant
applications in organizations of various types.
Industrial/organizational psychology needs to understand
not only how goal setting, feedback, self-efficacy, and other
parameters affect individual performance (Locke & Latham,
2013) but also how these parameters interact with social
environments (e.g., team goals, supportive colleagues, emo-
tion contagion, etc.) in affecting overall performance. High-
fidelity cognitive social simulation can provide valuable
information concerning interactions of these parameters and
thus is useful in understanding implications of organizational
practices and policies.
Ongoing work on computationally modeling emotion,
motivation, personality, and other socially relevant psycho-
logical aspects may be useful to cognitive social simulation in
terms of leading to applications. These models are useful not
only for understanding motivation, emotion, and personality
per se but also for designing relevant social mechanisms for
channeling them for public good. For example, emotion con-
tagion is prevalent in social settings; it may be useful for law
enforcement to be able to anticipate crowd behavior in volatile
situations (Parunak, Brooks, Brueckner, Gupta, & Li, 2014) in
part based on modeling emotion contagion among a crowd.
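As a toy illustration of such contagion dynamics (the update rule and all parameter values are invented here, not taken from Parunak et al., 2014), each agent's arousal can drift toward the mean arousal of a few randomly encountered neighbours while decaying toward calm; whether agitation spreads or dies out then depends on the balance between susceptibility and decay.

```python
import random

# Toy emotion-contagion model: each agent holds an arousal level in
# [0, 1]; at every step it moves toward the mean arousal of a few
# randomly encountered neighbours (contagion) and decays toward calm.

random.seed(7)
N, STEPS, CONTACTS = 100, 50, 4
SUSCEPTIBILITY, DECAY = 0.3, 0.02

arousal = [0.05] * N
for i in random.sample(range(N), 5):  # a few initially agitated agents
    arousal[i] = 1.0

for _ in range(STEPS):
    updated = []
    for i in range(N):
        peers = random.sample(range(N), CONTACTS)
        local_mean = sum(arousal[j] for j in peers) / CONTACTS
        a = arousal[i] + SUSCEPTIBILITY * (local_mean - arousal[i]) - DECAY
        updated.append(min(1.0, max(0.0, a)))  # keep arousal within [0, 1]
    arousal = updated

print(f"mean crowd arousal after {STEPS} steps: {sum(arousal) / N:.2f}")
```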
Computational models of politics on the basis of individ-
ual cognition (as mentioned earlier) lead to detailed simula-
tions of voter behavior, political opinion formation, emotional
response, and emotionally colored political reasoning. These
models can be useful tools for political mechanism design
and for deciding on political strategies and coalition
formation.
Other research directions involving cognitive modeling
and social simulation are currently being actively pursued,
including, for example, robot teaming (to generate useful
social behavior among a group of robots) and battlefield
simulation (with detailed cognitive models of agents; Pew
& Mavor, 1998). Some of these research directions may lead to significant applications, as well as to the making of relevant policies.
Overall, many directions of research pursued in cognitive
social simulation have significant implications for under-
standing and making public and other relevant policies.
These directions may lead to better, more cognitively and
socially realistic simulations that address both fundamental
theoretical issues facing social and cognitive scientists and
practical policy matters facing policy makers.
Summary
This article surveys cognitive social simulation, which is at
the intersection of cognitive modeling and social simulation.
By integrating cognitive and social models, cognitive social
simulation can address issues concerning both cognition-
psychology and sociality. More importantly, cognitive social simulation can find significant practical applications in relation to public and other policies in many areas. The present
article may be considered an appeal to better utilize (and to
further develop) cognitive social simulation for improved
policy making.
Author’s Note
The cognitive models mentioned, including Clarion and ACT-R,
are academic research programs, not commercial products.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with
respect
to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial
support
for the research, authorship, and/or publication of this article:
This
article was written while the author was supported (in part) by
ARI
Grant W911NF-17-1-0236.
References
Alexander, J., Giesen, B., Munch, R., & Smelser, N. (Eds.). (1987). The micro-macro link. Berkeley: University of California Press.
Allen, J., & Sun, R. (2016, December 6-9). Emotion contagion in a cognitive architecture. In Y. Jin & S. Kollias (Eds.), Proceedings of IEEE Symposium Series in Computational Intelligence (pp. 1-8). Athens, Greece: IEEE Press.
Anderson, J., & Lebiere, C. (1998). The atomic components of thought. Mahwah, NJ: Lawrence Erlbaum.
Atran, S., & Norenzayan, A. (2004). Religion's evolutionary landscape: Counterintuition, commitment, compassion, and communion. Behavioral and Brain Sciences, 27, 713-770.
Axelrod, R. (1984). The evolution of cooperation. New York, NY: Basic Books.
Axelrod, R. (1997). Advancing the art of simulation in the social sciences. In R. Conte, R. Hegselmann, & P. Terna (Eds.), Simulating social phenomena (pp. 21-40). Berlin, Germany: Springer.
Axtell, R., Axelrod, R., & Cohen, M. (1996). Aligning simulation models: A case study and results. Computational & Mathematical Organization Theory, 1, 123-141.
Boyer, P., & Ramble, C. (2001). Cognitive templates for religious concepts: Cross-cultural evidence for recall of counter-intuitive representations. Cognitive Science, 25, 535-564.
Bretz, S., & Sun, R. (2018). Two models of moral judgment. Cognitive Science, 42, 4-37.
Camerer, C. (1997). Progress in behavioral game theory. Journal of Economic Perspectives, 11, 167-188.
Carley, K., & Newell, A. (1994). The nature of the social agent. Journal of Mathematical Sociology, 19, 221-262.
Carley, K., Prietula, M. J., & Lin, Z. (1998). Design versus cognition: The interaction of agent cognition and organizational design on organizational performance. Journal of Artificial Societies and Social Simulation, 1(3), 1-19.
Castelfranchi, C. (2001). The theory of social functions: Challenges for computational social science and multi-agent learning. Cognitive Systems Research, 2, 5-38.
Cecconi, F., & Parisi, D. (1998). Individual versus social survival strategies. Journal of Artificial Societies and Social Simulation, 1(2), 1-17.
Clancey, W. J., Linde, C., Seah, C., & Shafto, M. (2013). Work practice simulation of complex human-automation systems in safety critical situations: The Brahms Generalized Überlingen Model (NASA Technical Publication 2013-216508). Washington, DC: NASA.
Clancey, W. J., Sierhuis, M., Damer, B., & Brodsky, B. (2006). Cognitive modeling of social behaviors. In R. Sun (Ed.), Cognition and multi-agent interaction: From cognitive modeling to social simulation. New York, NY: Cambridge University Press.
Conte, R., Andrighetto, G., & Campennì, M. (2013). Minding norms: Mechanisms and dynamics of social order in agent societies. New York, NY: Oxford University Press.
Conte, R., Hegselmann, R., & Terna, P. (Eds.). (1997). Simulating social phenomena. Berlin, Germany: Springer.
Edmonds, B. (2014). Contextual cognition in social simulation. In P. Brézillon & A. Gonzalez (Eds.), Context in computing (pp. 273-290). New York, NY: Springer.
Erisen, C., Lodge, M., & Taber, C. S. (2014). Affective contagion in effortful political thinking. Political Psychology, 35, 187-206.
Estes, W. (1972). Research and theory on the learning of probabilities. Journal of the American Statistical Association, 67, 81-102.
Gilbert, N., & Doran, J. (1994). Simulating societies: The computer simulation of social phenomena. London, England: UCL Press.
Goldspink, C. (2000). Modelling social systems as complex: Towards a social simulation meta-model. Journal of Artificial Societies and Social Simulation, 3(2), 1-23.
Goldstone, R. L., & Janssen, M. A. (2005). Computational models of collective behavior. Trends in Cognitive Sciences, 9(9), 424-430.
Gratch, J., Mao, W., & Marsella, S. (2006). Modeling social emotions and social attributions. In R. Sun (Ed.), Cognition and multi-agent interaction: From cognitive modeling to social simulation (pp. 219-251). New York, NY: Cambridge University Press.
Helmhout, M. (2006). The social cognitive actor: A multi-actor simulation of organisations (Master's thesis). University of Groningen, The Netherlands.
Jager, W. (2017). Enhancing the Realism of Simulation (EROS): On implementing and developing psychological theory in social simulation. Journal of Artificial Societies and Social Simulation, 20(3). Retrieved from http://jasss.soc.surrey.ac.uk/20/3/14.html
Juvina, I., Lebiere, C., & Gonzalez, C. (2015). Modeling trust dynamics in strategic interaction. Journal of Applied Research in Memory and Cognition, 4, 197-211.
Kenrick, D., Li, N., & Butner, J. (2003). Dynamical evolutionary psychology: Individual decision rules and emergent social norms. Psychological Review, 110, 3-28.
Kim, S., Taber, C. S., & Lodge, M. (2010). A computational model of the citizen as motivated reasoner: Modeling the dynamics of the 2000 presidential election. Political Behavior, 32, 1-28.
Kluver, J., Malecki, R., Schmidt, J., & Stoica, C. (2003). Sociocultural evolution and cognitive ontogenesis: A sociocultural-cognitive algorithm. Computational & Mathematical Organization Theory, 9, 255-273.
Locke, E. A., & Latham, G. P. (2013). New developments in goal setting and task performance. New York, NY: Routledge.
Mithen, S. (1996). The prehistory of the mind: The cognitive origins of art, religion, and science. London, England: Thames & Hudson.
Moss, S. (1999). Relevance, realism and rigour: A third way for social and economic research (CPM Report No. 99-56). Manchester, UK: Center for Policy Analysis, Manchester Metropolitan University.
Moss, S. (2006). Cognitive science and good social science. In R. Sun (Ed.), Cognition and multi-agent interaction: From cognitive modeling to social simulation (p. 393). New York, NY: Cambridge University Press.
Nisbett, R., Peng, K., Choi, I., & Norenzayan, A. (2001). Culture and systems of thought: Holistic versus analytic cognition. Psychological Review, 108, 291-310.
Parunak, H. V. D., Brooks, S. H., Brueckner, S. A., Gupta, R., & Li, L. (2014). Dynamically tracking the real world in an agent-based model. In S. J. Alam & H. V. D. Parunak (Eds.), Multi-agent-based simulation XIV (pp. 3-16). Berlin, Germany: Springer.
Pew, R., & Mavor, A. (Eds.). (1998). Modeling human and organizational behavior: Application to military simulations. Washington, DC: National Academy Press.
Prietula, M., Carley, K., & Gasser, L. (Eds.). (1998). Simulating organizations: Computational models of institutions and groups. Cambridge, MA: MIT Press.
Quek, M., & Moskowitz, D. S. (2007). Testing neural network models of personality. Journal of Research in Personality, 41, 700-706.
Sawyer, R. (2003). Multiagent systems and the micro-macro link in sociological theory. Sociological Methods & Research, 31, 325-363.
Schreiber, D. (2004, March). A hybrid model of political cognition. Paper presented at the Midwestern Political Science Association Annual Meeting, Chicago, IL.
Sun, R. (2001). Cognitive science meets multi-agent systems: A prolegomenon. Philosophical Psychology, 14, 5-28.
Sun, R. (2002). Duality of the mind. Mahwah, NJ: Lawrence Erlbaum.
Sun, R. (Ed.). (2006). Cognition and multi-agent interaction: From cognitive modeling to social simulation. New York, NY: Cambridge University Press.
Sun, R. (Ed.). (2012). Grounding social sciences in cognitive sciences. Cambridge, MA: MIT Press.
Sun, R. (2016). Anatomy of the mind. New York, NY: Oxford University Press.
Sun, R., Coward, A., & Zenzen, M. (2005). On levels of cognitive modeling. Philosophical Psychology, 18, 613-637.
Sun, R., & Fleischer, P. (2012). A cognitive social simulation of tribal survival strategies: The importance of cognitive and motivational factors. Journal of Cognition and Culture, 12, 287-321.
Sun, R., & Naveh, I. (2004). Simulating organizational decision making with a cognitive architecture Clarion. Journal of Artificial Societies and Social Simulation, 7(3). Retrieved from http://jasss.soc.surrey.ac.uk/7/3/5.html
Sun, R., & Naveh, I. (2007). Social institution, cognition, and survival: A cognitive-social simulation. Mind & Society, 6, 115-142.
Sun, R., & Wilson, N. (2014). A model of personality should be a cognitive architecture itself. Cognitive Systems Research, 29-30, 1-30.
Thagard, P., & Kroon, F. W. (2006). Emotional consensus in group decision making. Mind & Society, 5, 85-104.
Turner, M. (2000). Cognitive dimensions of social science. New York, NY: Oxford University Press.
Vernon, D. (2014). Artificial cognitive systems: A primer. Cambridge, MA: MIT Press.
Von Neumann, J., & Morgenstern, O. (1944). Theory of games and economic behavior. Princeton, NJ: Princeton University Press.
Vygotsky, L. (1962). Thought and language. Cambridge, MA: MIT Press.
West, R., Lebiere, C., & Bothell, D. (2006). Cognitive architectures, game playing, and human evolution. In R. Sun (Ed.), Cognition and multi-agent interaction: From cognitive modeling to social simulation (pp. 103-123). New York, NY: Cambridge University Press.
White, J. (in press). The role of robotics and AI in technologically mediated human evolution. Frontiers in Robotics and AI.
Wilson, N., Sun, R., & Mathews, R. (2009). A motivationally based simulation of performance degradation under pressure. Neural Networks, 22, 502-508.
Zerubavel, E. (1997). Social mindscapes: An invitation to cognitive sociology. Cambridge, MA: Harvard University Press.