Can we understand agent-based models?

[Updated. See below.]

[I wrote this because we were starting to discuss the question in comments, and I thought it was interesting, so tried to collect my thoughts. No great original insights here. Nor even any conclusions, right or wrong. Read at your own risk. This post is really just a place for people who want to discuss this topic to do so.]

I'm bad at math and computers, which means I find it hard to understand agent-based modelling. It also means I'm a bit biased, because I wouldn't be any good at doing agent-based modelling myself. But that doesn't mean I don't want others to do agent-based modelling. It's probably a good thing that some economists work in this area.

This is what I think I understand about agent-based modelling, and the reason for doing it:

The economy is very complicated. Real people can't hope to understand it, solve for the equilibrium, and figure out their optimal Rational Expectations/Bayesian strategy. Hell, we economists sometimes can't even solve models ourselves, so how could we possibly assume real people can? So it makes sense for people to follow rules of thumb to cope with a complicated world. And those rules of thumb, though they simplify the individual's problem, may make the economist's problem even more complicated.

So we model agents as following rules of thumb, put them all together in a computer, and then the computer tells us what happens.

Is that basically right? If so, it makes sense to me. I don't want to do it myself, but I'm glad someone (else) is doing it. It sounds like a much better use of our time, at the margin, than building more existence proofs of Arrow-Debreu General Equilibrium.

But it reminds me of John Searle's Chinese Room.

I want to understand recessions. A few months ago I read a paper putting forward what looked like an interesting new theory of recessions. I did not understand the theory, even though the math did not seem especially complex and the writing seemed reasonably clear. I put the paper aside, with the (tentative and provisional) conclusion that the theory made no sense. I couldn't figure out how they could get the conclusions from the premises.

Perhaps if I had spent more time with that paper, I would have understood it, and decided that the theory did make sense after all. Or perhaps I would have found a mistake in the paper, and decided it didn't make sense. But if I can't understand it, I'm not going to accept it as a possible explanation of recessions.

Now I don't understand the Theory of Relativity either. But physicists say that they do understand it, and I believe them, so I accept it as an explanation of whatever it is they say it explains. Maybe I could understand it too, if I really studied it hard.

What happens if the explanation is a black box that we can never interrogate? Suppose I had lost my copy of the paper, and so had everyone else. And the only thing anyone could find was the page of assumptions and the page of conclusions. And we had lost and forgotten everything in between, and couldn't figure out what it must have been. And the authors had disappeared. Would we still want to say that the paper explains recessions? Even if we thought that the authors of the paper were very reliable economists who never made mistakes? Aren't the missing pages important?

Agent-based models, or any computer simulations, strike me as being a bit like that black box. A paper written by a very reliable economist where all the middle pages are missing and we've only got the assumptions and conclusions. I can see why computer simulations could be useful. If that's the only way to figure out if a bridge will fall down, then please go ahead and run them. But if we put agents in one end of the computer, and recessions get printed out the other end, and that's all we know, does that mean we understand recessions?

Even if we do lots of different simulations, with different assumptions, and list all the cases where we do get recessions and list all the other cases where we don't get recessions, do we understand recessions?

Even though it could be very useful for policy advice, I would find it deeply unsatisfactory if we had to stop there. "But why do we get recessions in that list of cases and no recessions in that other list of cases?"

[Update: I really liked this comment by (astrophysicist) Jonathan Dursi (very minor edits):

""agent-based models and understanding are complements" — yes.

We in the physical sciences have wrestled with this for a long time – centuries, even, because the issue isn't just with simulation, the issue is with experiment vs theory. You prepare some inputs, push the button, and out comes an output. Great. So now what?

Unfortunately, simulation is comparatively new, and so "best practices" for computational work are just starting to come together.

Right now, the level of quality for a lot of computational work – in all fields – is uneven. For instance, and probably prompting your question, in all disciplines one often sees a simulation paper where the author runs one simulation, says "See! When you put in A, you get out B!", plots some pretty pictures, and concludes.

These papers make me wish academics had a demerit system. They should, of course, never be published.

A better paper would go like this: Using well understood simulation software X, we wish to see what happens when we put A in. So we put a range of things like A in, over a suite of different simulations, and out comes a range of outcomes centred around B. Ok, interesting. But if we put A' in, we don't get B; we get B'. We have a hypothesis about this; we think that A causes B but A' doesn't, because of complex intermediate step C. So let's modify our simulation to not have C (by modifying the inputs, or the behaviour of the agents, or what have you). Oh, look: when we have A, but not C, we don't get B anymore! Thus we have evidence that A gives you B through C.

The upside to doing simulations is that they're controllable theoretical experiments, and that way you get to twiddle and tweak and use the changes to improve your understanding (given your underlying theory). They're ways to understand how the *theory* plays out in complicated situations, the same way physical experiments give you the ability to understand how the physical world plays out in complicated situations.

I'm involved with software carpentry, which teaches basic computational skills to researchers (mainly graduate students) in many different disciplines; we do the same at different levels for Compute Canada. Email me if you think a workshop would be useful in your department."

End Update.]
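To make that protocol concrete, here is a minimal toy sketch of the A/B/C experiment. Everything in it is invented for illustration (the pessimism rule, the contagion step, all the numbers): input A is a confidence shock, outcome B is a fall in aggregate spending, and mechanism C is contagious pessimism. Running the same shock with C switched off tests whether A produces B through C.

```python
import random

def run(shock, contagion, periods=50, n=100, seed=0):
    """One toy run: agents spend 1.0 unless pessimistic (then 0.2).
    At t = 10, a shock (input A) turns a fraction of agents pessimistic.
    With contagion on (mechanism C), pessimism spreads by imitation."""
    random.seed(seed)
    pessimist = [False] * n
    output = []
    for t in range(periods):
        if t == 10:
            for i in random.sample(range(n), int(shock * n)):
                pessimist[i] = True
        if contagion:
            for i in range(n):
                j = random.randrange(n)
                if pessimist[j] and random.random() < 0.5:
                    pessimist[i] = True
        output.append(sum(0.2 if p else 1.0 for p in pessimist))
    return output

# The suite: the same shock A, with and without mechanism C.
with_C = run(shock=0.1, contagion=True)
without_C = run(shock=0.1, contagion=False)
print(min(with_C), min(without_C))  # deep slump with C; a small dip without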

72 comments

  1. Evan

    I don’t have any personal experience with ABM either, but I think that things are a little more nuanced than you think, Nick. Rather than having a black box, where we don’t understand what is happening in the model, think of it as a grey box. We have some idea what is going on, but might not know everything.
    I guess it is a little bit like the difference between intuition and formal proof in standard economic models. If we understand the intuition of your lost paper well enough, then it can still be useful even if we cannot recreate the formal proofs.

  2. Brito

    Is ABM really just a set of rules of thumb? That sounds pretty lousy to me. Look at evolutionary economics, for instance: people’s heuristics and expectations adapt as the economy changes. At least with equilibrium models, people’s actions respond to changes in their incentives and expectations, which in turn respond to changes in the economy or policy, even if they do so with unrealistic immediacy or accuracy.

  3. Martin

    Nick,
    As I read you, you’re essentially asking what’s an explanation, or, more fundamentally: what’s knowledge? Let me therefore offer you my answer. To me, an explanation is to derive something we did not know from what we do know or from what is given. What is known and/or given is unique to each discipline, as that is what defines and limits each discipline, together with a set of phenomena.
    To make a long story short I’d say: no, if you cannot reduce a recession to the result of human behavior then you have not provided an economic explanation. Worse yet, if your explanation is a black box, then you have not provided an explanation in any discipline.
    You’re quite right therefore to find it unsatisfactory.

  4. Paul Gustafson

    I think the fact that we don’t understand exactly how an agent-based simulation generates a recession is actually indicative of the usefulness of the simulation. If we could understand all recessions from first principles, agent-based simulations wouldn’t be necessary.
    To draw an analogy to math, given a differential equation we would ideally like to find an analytic solution. However, in many cases, we have to make do with numerical methods. We can still analyze the behavior of the system using numerical approximations, and gain macroscopic knowledge (attractors, periodic behavior, asymptotic behavior, etc.).
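    [A toy illustration of that analogy, for concreteness; the van der Pol oscillator below is a standard textbook example, and the code is only a sketch. NR]

```python
# The van der Pol oscillator x'' - mu*(1 - x^2)*x' + x = 0 has no known
# closed-form solution, but crude numerical integration already reveals
# its qualitative behaviour: a limit cycle, i.e. an attractor.

def deriv(x, v, mu=2.0):
    return v, mu * (1.0 - x * x) * v - x

x, v = 0.1, 0.0              # start near the unstable rest point
dt = 0.001
for _ in range(200_000):     # simple Euler steps, 200 time units in all
    dx, dv = deriv(x, v)
    x, v = x + dt * dx, v + dt * dv

# Wherever you start, the trajectory settles onto a cycle of amplitude
# near 2: macroscopic knowledge obtained without an analytic solution.
print(x, v)
```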

  5. Nick Rowe

    Brito: I think those rules of thumb can be fairly sophisticated. Like least squares learning, which is like the agents running regressions to forecast. And I think some maybe include evolution in terms of survivorship.
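    Here is roughly what I mean, as a toy sketch (the AR(1) setup and all the numbers are invented purely for illustration): the agent never solves the model; each period it just re-runs a regression on the data seen so far and forecasts with the latest estimate.

```python
import random

random.seed(0)
alpha_true = 0.8             # true AR(1) coefficient, unknown to the agent
history = [1.0]
belief = None

for t in range(1, 500):
    # The world: x_t = 0.8 * x_{t-1} + noise.
    history.append(alpha_true * history[-1] + random.gauss(0, 0.1))
    # The agent's rule of thumb: OLS slope of x_t on x_{t-1}, no intercept.
    num = sum(history[i] * history[i - 1] for i in range(1, t + 1))
    den = sum(history[i - 1] ** 2 for i in range(1, t + 1))
    belief = num / den

# The estimate drifts toward 0.8; the agent's forecast for next period is
# simply belief * history[-1], with no model-solving anywhere.
print(belief)
```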

  6. Brito

    Nick: fair enough. I think models like this would be much more useful to banks, pension funds, insurance companies, environmental economists and organisations like the CBO. On the other hand, I do not think they would be of much use to central bankers or macroeconomists focusing on more short run problems, where it’s not the predictions that really matter, but the assumptions and the diagnostics which are more mundane (e.g. is there deficient demand? Is monetary policy loose or tight? Has the natural rate of unemployment increased? etc…)

  7. Nick Rowe

    Martin: “To make a long story short I’d say: no, if you cannot reduce a recession to the result of human behavior then you have not provided an economic explanation.”
    Well, they could reply that they have reduced a recession to the result of human behaviour. (Actually, and more so than models that start with aggregate relationships.)

  8. YM

    I do not deal in agent-based models, but I will guess that, as in most things, there can be good and bad ABMing.
    Just as a traditional (good) economic model helps us build intuition through results and comparative statics exercises, a good ABM should help us build intuition by tweaking simulations and observing the results. ABMs could be the great macroeconomic experiment lab you have been dreaming about for so long, Nick!

  9. Ritwik

    Nick
    This is an interesting post, and I’m glad you’ve written it. I see the point of agent-based modelling as being able to answer that great Deirdre McCloskey critique – for too long, economists have focused on why and whether, without asking that other fundamental question: ‘by how much’?
    Why should we just focus on two categories, recession and non-recession? How does the interaction of economic actors sometimes produce 2 years of below-trend growth, sometimes a one-year collapse followed by a resumption of trend, and sometimes a collapse to the tune of a halving of nominal spending?
    AB models don’t really help you deduce the chain of recession logic. Chances are, your own theory of recessions has already been built into the model, as part of the simple but sophisticated rules/heuristics that your agents follow. The aim is to see whether your model is able to produce the kind of results that economies produce, in a wide variety of circumstances. And hence to update the next iteration of your model from the corrections that the first simulation(s) suggest. And so on. You’re updating your theory of recessions as you go along, but you’re doing so through simulations, not deduction, or worse, as many economists are wont to do, from simplistic charts.

  10. rsj

    I am of the camp that says you need theorems. Simulations, in and of themselves, are useful for building intuition, calibration, and policy analysis. But if you don’t have a theorem, then I don’t think you have a theory.
    It’s possible to come up with models which have no closed-form solution, and yet prove theorems about the qualitative properties of the solutions, and authors of agent-based models need to deliver these types of theorems.

  11. Martin

    Nick,
    definitely true, and I agree with that: ABM is just a very messy and complicated explanation. However, if all you have to go by is:
    “Even if we do lots of different simulations, with different assumptions, and list all the cases where we do get recessions and list all the other cases where we don’t get recessions, do we understand recessions?”
    and the simulation itself is a black box, then you do not have an explanation. For we already know that lots of agents interacting can result in a recession: that’s what prompted the question in the first place.
    It seems to me, however, that if you have to program it all, then you’re in essence not doing much more than comparative statics? The outcome of a program seems to me to be no different than a version of the artsy non-linearity you discussed earlier.
    We will know, therefore, what ‘caused’ a recession when we figure out what to change about the program to avoid the recession. This is no different than stating that an excess demand for money ‘caused’ a recession.

  12. Eric Pedersen

    There are quite a few strategies to effectively learn about what causes a phenomenon in an ABM; they all, I think, stem from understanding what an ABM really is: an experimental system where you control all inputs, and are free to change things as needed. That means that learning from a given ABM means making hypotheses about what assumptions, rules, etc. are necessary to get a given outcome, then testing those hypotheses via controlled manipulation of the actions and assumptions of your agents. It also means using ‘scaffolding’ (HT to Andrew Gelman for the term); that is, creating a set of models with different but related assumptions, as well as varying degrees of complexity. By using this sort of strategy, we can learn how robust an observed relationship is to various perturbations. These scaffolding models can easily include simpler analytical models, or verbal descriptions of mechanisms.
    Also, regarding the “black box” issue: this depends pretty heavily on code-sharing practices. If the researcher has done the work to provide code (and make sure it’s reproducible!), make sure the interface to the code is easy to understand, and that you can observe the internal state of the model easily, I don’t think it’d be any more inaccessible than a mathematical model. It just requires slightly different standards and infrastructure.

  13. Nick Rowe

    Eric: “If the researcher has done the work to provide code (and make sure it’s reproducible!), make sure the interface to the code is easy to understand, and that you can observe the internal state of the model easily, I don’t think it’d be any more inaccessible than a mathematical model.”
    OK. I expect I was (implicitly) assuming the code was reproducible. So anyone else can repeat the “experiment”. (Sometimes I find mathematical models inaccessible too. I don’t trust them, unless I can “get” the intuition as well.)

  14. Lord

    I think you have to accept a black box not as an explanation but as a path to one. It involves a lot of probing of it, exploration of its state space, exhaustive enumeration, until one can formulate a model of the model. No one considers a weather simulation an explanation, but it is useful as a prediction, and its errors are useful as probes of explanations.

  15. Dan Kervick

    I suppose one point of the modeling is to use computation to generate interesting patterns of macrophenomena which can themselves then be empirically tested. Even if nobody can hold the deep structure of the explanation in their heads, it might suggest interesting phenomenological patterns that would not otherwise have been dreamed up from pure cogitation alone, unassisted by computer power.

  16. Jeremy Fox

    I see where you’re coming from here Nick. But I’d suggest that there are some cases–hopefully rare–in which it’s impossible to open the black box, to gain a macro-level understanding, to write down a “higher level” description, to “see the forest for the trees”. There are some problems that we can solve by brute computational force, and only by brute computational force, and know that we’ve solved, but yet not comprehend in the slightest why the solution is correct. I have an old post on this, based on what I think is a pretty clear-cut example: chess endgames.
    [link here NR]

  17. Peter N

    Problems as seen by a practitioner:
    [link here pdf NR]
    Challenge 1: Fragile Parameter Estimates.
    The fragility of parameter estimates potentially translates into other objects of interest, such as inference about the sources of business cycle fluctuations, forecasts, as well as policy prescriptions. Thus, accounting for model uncertainty, as well as for different approaches of relating model variables to observables, is of first-order importance.
    Challenge 2: Aggregate Uncertainty versus Misspecified Endogenous Propagation.
    The phenomenon that the variation in certain time series is to a large extent explained by shocks that are inserted into intertemporal or intratemporal optimality conditions is fairly widespread and has led to criticisms of existing DSGE models…
    Challenge 3: Trends.
    Most DSGE models impose strict balanced growth path restrictions implying, for instance, that consumption-output, investment-output, government spending-output, and real-wage output ratios should exhibit stationary fluctuations around a constant mean. In the data, however, many of these ratios exhibit trends. As a consequence, counterfactual low frequency implications of DSGE models manifest themselves in exogenous shock processes that are estimated to be highly persistent. To the extent that inference about the sources of business cycles and the design of optimal economic policies is sensitive to the persistence of shocks, misspecified trends are a reason for concern.
    Challenge 4: Statistical Fit.
    Macroeconometrics is plagued by a trade-off between theoretical coherence and empirical fit. Theoretically coherent DSGE models impose tight restrictions on the autocovariance sequence of a vector time series, which often limit its ability to track macroeconomic time series as well as, say, a less restrictive vector autoregression (VAR).
    Challenge 5: Reliability of Policy Predictions.
    More generally, to the extent that no (or very few) observations on the behavior of households and firms under a counterfactual policy exist, the DSGE model is used to derive the agents’ decision rules by solving intertemporal optimization problems assuming that the preferences and production technologies are unaffected by the policy change. In most cases, the policy invariance is simply an assumption, and there is always concern that the assumption is unreliable. This concern is typically exacerbated by evidence of model misspecification.

  18. nemi

    Let’s say you had some real microfoundations and the aggregate behavior of the individuals in the model happened to look exactly like (your favorite version of) the representative agent, but that this was derived through the “black box” – does this mean that models which previously were seen as providing an explanation now no longer do?
    If this is the case, is it reasonable to assume that current DSGE models are providing any explanation?

  19. BT London

    In hard sciences, agent-based models are very popular. The solution to your problem with agent based models is obvious: the paper includes an online link to the code for the model. That way anyone can download the code and re-run the simulation (or variants of it).
    Yes, you have to learn how to program a computer to run these simulations. But someone like you can easily employ a PhD student to do it for you.
    The key to a good agent-based model is to have a full and accurate understanding of the mechanisms you are trying to model. That’s why it is important to know that loans create deposits, rather than the other way around as described in the textbooks.

  20. BT London

    The other point to make about complex systems is that many patterns emerge from the complexity – they are not intuitive. So you actually require a computer model ‘black box’ to understand how certain patterns can arise from simple rule-based interactions between agents.

  21. J.V. Dubois

    This example about having lost all knowledge is interesting, and believe me, it actually happens in the hard sciences too. Actually, one of the most interesting problems in mathematics is based on exactly that – the Riemann Hypothesis: [link here NR]
    He formulated the hypothesis and said the following: “…it is very probable that all roots are real. Of course one would wish for a rigorous proof here; I have for the time being, after some fleeting vain attempts, provisionally put aside the search for this, as it appears dispensable for the next objective of my investigation.”
    And the proof for this claim eluded mathematicians for 153 years, and it eludes them still today. Nobody knows if the hypothesis holds. Powerful computers are using brute force to compute ever more solutions of the equation just to check that it holds. This process could be viewed as something akin to what you suggest. We have a program in a computer that is designed to give us an important piece of knowledge. The program does not have to be intelligent; it just has to be good enough to provide us with insight into a problem that is too complex for us to understand in a simpler way. And it could have a large impact on the real world too. Random energies observed in quantum mechanics show behaviour that could be described by the Riemann function. Having a proof that the Riemann hypothesis does not hold – even an artificially constructed one – could be a very important thing.
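    [A small illustration of the brute-force checking described above, using the mpmath Python library; it verifies instances, and proves nothing. NR]

```python
# Locate the first few nontrivial zeros of the Riemann zeta function and
# check that each lies on the critical line Re(s) = 1/2, as the hypothesis
# asserts. Requires the mpmath library (pip install mpmath).

from mpmath import mp, zeta, zetazero

mp.dps = 25                      # decimal digits of working precision

for n in range(1, 6):
    s = zetazero(n)              # nth nontrivial zero, found numerically
    print(n, s.real, s.imag, abs(zeta(s)))   # real part 0.5, |zeta| ~ 0

# Billions of zeros have been checked this way; it builds confidence in
# the hypothesis, but, as with any simulation, it is not a proof.
```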

  22. acarraro

    I would like to echo what J.V. Dubois said above.
    I think it’s a mistake to believe that all problems are solvable or understandable by intuition… The economy is too complex and we have too little information about it to actually have any hope of being able to understand it.
    I think agent-based modelling gives us an experimental tool. We might not be able to analytically solve a problem, but we can simulate it and see what the output looks like. I am not an expert, but Monte Carlo modelling is pretty common in physics, where some problems are unsolvable analytically (even trivial-sounding ones, like calculating orbits in the presence of many bodies with similar masses). If you can get your simulated model to be similar enough to reality, you start to get some confidence that the micro-foundations you have are actually correct. You can then use that knowledge to evaluate the impact of policy changes.
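    [A throwaway example of the Monte Carlo idea; the random-walk question below is invented purely for illustration. NR]

```python
import random

def far(steps=10):
    """One trial: does a 10-step random walk end more than 3 units away?"""
    pos = 0
    for _ in range(steps):
        pos += random.choice((-1, 1))
    return abs(pos) > 3

trials = 100_000
print(sum(far() for _ in range(trials)) / trials)   # close to 0.344

# The exact answer is a binomial sum (352/1024 = 0.34375), but the point
# is that the simulation needs no derivation at all, only sampling.
```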
    You still have the problem of deciding whether a simulated output is similar enough to reality. And I think you need much more detailed measurement of reality before we can really settle that issue. I am not sure there is enough richness in official statistics to get confidence in the ability of simulations to replicate reality.
    I actually think that online-gaming economies might offer some great datasets to test such modelling ability, and might give some insight into the real world, but I am not sure; we don’t really get the option of stopping play in the real world.

  23. Luis Enrique

    Nick,
    here’s an agent-based paper that I hope might seem like less of a black box
    [link here pdf it’s by Gintis so it is almost certainly good NR]
    I’m taking a bit of a risk of getting egg on my face because that’s not actually the paper I had in mind and I’ve only skimmed it, but I think it basically says that if you set up little agents with easy to understand objectives and decision rules and let them potter about deciding whether to trade and at what price, we see a Walrasian-like outcome. [the half-remembered paper I had in mind but cannot find did not have the evolutionary element this paper has].
    I don’t think this completely escapes the black box problem, but I think if what the agents are doing is easy enough to understand and based on commonly understood economic concepts, we can really start to learn things about what phenomena emerge under what conditions.
    [I think you agree with my supervisor’s view I related in those comments?]

  24. Luis Enrique

    here is an agent-based post-doc, which sadly I don’t think I’d get if I applied, but might suit somebody on this thread.

  25. Zorblog

    As long as the maths is run correctly, it does not matter that the model is a black box. Maths is just like a pan where you put oil and corn, turn the heat, and you get pop-corn. It’s not important that you don’t see the corn pop… What really matters is that you put corn in the pan, not rice or coffee beans…
    What I’m trying to say is that it is all in the assumptions. Once they are laid, it’s just a matter of turning the heat. You’re right that papers go from introduction to conclusion, without offering a proper understanding of the mechanisms at play. But the mechanisms are not in the maths, they’re all in the assumptions. A tiny variation in the defined behavior of the agent can radically change the conclusions.
    An economic paper should have a 20-page introduction and a half-page conclusion, with all the maths in the appendix.
    Take the example of rational expectations. Well, if the agent is rational, you can’t fool him, right? (at least not twice) And all the conclusions of all the models based on this assumption are straightforward; it’s not even worth doing the maths. But is it ok to consider the agent not rational? Not really, since he’s not a complete fool either…
    This means that rational expectations is a good idea, but you can’t use it just like that. You need to model a near-rational agent (which is technically more complicated), and that should lead you to more relevant conclusions.
    The problem of the black box is not that you don’t see inside it; it’s that as long as you’re looking into it, you don’t notice what goes into it.

  26. Nick Rowe

    Jeremy: “But I’d suggest that there are some cases–hopefully rare–in which it’s impossible to open the black box, to gain a macro-level understanding, to write down a “higher level” description, to “see the forest for the trees”.”
    “Seeing the forest for the trees” is a good metaphor for what concerns me.
    Take a complicated math model, for example. It’s not enough just to wade through it, equation by equation. We also need to stand back and try to get the intuition for the “big picture”, and why it gets the results it does.
    Luis Enrique: “[I think you agree with my supervisor’s view I related in those comments?]”
    Yes and no. Let’s say I sympathise with his view, but don’t fully agree with it.

  27. david

    It’s always assumptions in ABM, too, and robustness testing is rarely carried out well in ABM.
    There are basically two kinds of macroeconomic ABM. One uses highly stylized assumptions which try to minimally explain a host of phenomena – it is helpfully possible to tie in, e.g., distributional patterns of business size, lifespan, etc. The other uses highly detailed assumptions built off micro data and still extracts persuasive aggregate behavior.
    The methodological trade-offs of tractability vs. adherence to reality, or Occam’s Razor acting to stylize input assumptions or asserted aggregate behavior, plague ABM just as they do completely-solved closed-form macro. ABM just moves one step away from universality in the parameter space in exchange for being able to pursue non-completely-solvable assumptions. That is valuable, but the claims by ABM proponents that parameter fragility or stylized assumptions are fatal for mainstream macro often apply just as well to their own models.

  28. Nick Rowe

    BT London: ” That’s why it is important to know that loans create deposits, rather than the other way around as described in the textbooks.”
    That’s why it’s important to read a first year textbook, so that you learn that what you said there isn’t true. Agree or disagree with the money multiplier model, the first year textbook version of that model does say that an increase in bank loans creates an increase in bank deposits.
    (And that is the end of the discussion of that topic on this thread.)

  29. Nick Rowe

    Somebody in the “Read a First year Text” comments posted a link to a very good (redundancy) Peter Howitt paper describing the agent based modelling he did with Robert Clower, and why he did it. Now I can’t find it. Help. Thanks.

  30. Tnotstar

    So, so, sorry! Just a last [maybe-not-related but just-important] link, about the role of computers in the [hard and not-so-hard] sciences: http://www.math.pitt.edu/~thales/papers/turing.pdf
    Is there some kind of digital divide between economics and the rest of sciences?

  31. Nick Rowe

    david: “That is valuable, but claims by ABM proponents that parameter fragility or stylized assumptions are fatal for mainstream macro are often likewise the case in their own models.”
    I think the point about “fragility” (Peter Howitt calls it “brittleness”) is an important one. And I think that’s why we also need some sort of “intuitive” or “let’s see the forest too” understanding of any model.
    “Fragility” means that a tiny change in the assumptions causes a massive change in the conclusions of a model. I don’t like fragile models. (But maybe that fragility is telling us something important.)

  32. Luis Enrique

    it wasn’t this Howitt paper I linked to, was it? It does discuss work with Clower, but I’m not sure what you mean by (redundancy), so maybe it ain’t.

  33. Geoff Willis

    The closest analytical scientific field to agent-based modelling is statistical mechanics, which, like economics, deals with systems with millions of interacting sub-units. The interesting thing about statistical mechanics is that you can get ‘emergent’ behaviour out of the models, behaviour that is not obvious from the modelling inputs. Sometimes the modelling output can give simple output relationships that can then be derived analytically, so you don’t actually need the model after all.
    One of the most interesting agent based models is that of Ian Wright, which from a very simple set of rules builds a model that gives outputs that match real economies well:
    [link here NR]
    My own work uses very simply specified models, but appears to give good explanations for income and wealth distributions and company size distributions, as well as explanations of boom/bust capital cycles. Somewhat to my surprise, a very simple formula emerged from the model that explains Kaldor’s fact of the constancy of the returns to capital and labour. Although the formula came from analysing the model, it proved trivial to derive the formula in half a dozen lines from basic economic identities – the formula is not dependent on the model. Interestingly a variant of the formula suggests a direct link from increasing consumer debt to increasing inequality. More information at:
    [link here pdf NR]
    or google ‘why money trickles up’.

  34. Nick Rowe

    Luis Enrique; Yes, that’s the one! Thanks.
    (The “redundancy” thing was a little joke. Peter Howitt is a very good economist (he taught me advanced macro, ages ago). So to say a paper is very good is redundant, if you have already said it’s by Peter Howitt, because you are repeating yourself.)

  35. Nick Rowe

    Geoff: “Somewhat to my surprise, a very simple formula emerged from the model that explains Kaldor’s fact of the constancy of the returns to capital and labour.”
    That’s a great story, with a very happy ending. Because in that case we do get the intuitive understanding as well. In that case, agent-based models and understanding are complements, not substitutes.

  36. Jonathan Dursi

    “agent-based models and understanding are complements” — yes.
    We in the physical sciences have wrestled with this for a long time – centuries, even, because the issue isn’t just with simulation, the issue is with experiment vs theory. You prepare some inputs, push the button, and out comes an output. Great. So now what?
    Unfortunately, simulation is comparatively new, and so “best practices” for computational work are just starting to come together.
    Right now, the level of quality for a lot of computational work – in all fields – is uneven. For instance, and probably prompting your question, in all disciplines one often sees a simulation paper where the author runs one simulation, says “See! When you put in A, you get out B!”, plots some pretty pictures, and concludes.
    These papers make me wish academics had a demerit system. They should, of course, never be published.
    A better paper would go like this: Using well understood simulation software X, we wish to see what happens when we put A in. So we put a range of things like A in, over a suite of different simulations, and out comes a range of outcomes centred around B. Ok, interesting. But if we put A’ in, we don’t get B; we get B’. We have a hypothesis about this; we think that A causes B but A’ doesn’t, because of complex intermediate step C. So let’s modify our simulation to not have C (by modifying the inputs, or the behaviour of the agents, or what have you). Oh, look: when we have A, but not C, we don’t get B anymore! Thus we have evidence that A gives you B through C.
    The upside to doing simulations is that they’re controllable theoretical experiments, and that way you get to twiddle and tweak and use the changes to improve your understanding (given your underlying theory). They’re ways to understand how the theory plays out in complicated situations, the same way physical experiments give you the ability to understand how the physical world plays out in complicated situations.
    I’m involved with software carpentry ( http://software-carpentry.org ), which teaches basic computational skills to researchers (mainly graduate students) in many different disciplines; we do the same at different levels for Compute Canada. Email me if you think a workshop would be useful in your department.

  37. Nick Rowe

    Jonathan: best comment of the post! Makes a helluva lot of sense to me.

  38. Jaaqt

    Interesting thread. Perhaps you could summarize some of the more interesting comments/discussions that come out of it?
    As someone with a background in engineering, it seems totally crazy to me that any company worth its salt will tend to build a fairly detailed software simulation of anything it’s going to build (e.g., buildings, wireless networks, circuits and so on), but we don’t do this for the economy as a whole.
    How is it that very important debates about Fed policy, fiscal policy, the ECB, and so on have basically no simulation analysis to see if the differing points make any sense? Aren’t these topics worth a few million dollars of funding to build a reasonable size simulation?
    Sure, the simulation won’t be perfect, parameters are hard to estimate, and so on. But having differing camps each put forward their own simulation advocating austerity or fiscal expansion or whatever would seem to be a big step forward compared to the current arguments. As a previous poster mentioned, if you could at least say such-and-such a model produces such-and-such an effect, that would seem much more useful than each economist saying that his or her oversimplified equilibrium model predicts such-and-such, when it is clear we are nowhere near equilibrium at times of crisis.
    It seems to me that there must be a good reason why such agent based simulations haven’t taken off. Is it the difficulty in building the simulations, the fact that “simulations aren’t real research”, or something else?
    Regarding the point of the article that simulations are black boxes, I think you have a reasonable point there. But I think a fundamental issue with the actual economy is that it is an inherently complicated collection of interacting pieces. If we can’t build a simplified simulation and adapt existing models to understand that simulation, then what makes us think that our existing models actually apply to the even more complicated case of the real world?
    Thanks,
    -J

  39. Jon

    The worst papers are the ones showing that agents following rules of thumb can optimize production functions and reach equilibrium. The lack of self-awareness exhibited by the authors is stunning…
    There is a large class of algorithms that converge an optimization problem to equilibrium. In normal fields we care about efficient algorithms…
    I once had to suffer through a long talk by a tenured professor using genetic algorithms in agent-based models. The paper upon which the talk was based showed that evolving agents solved for the optimal equilibrium. Why I suffered through this I do not know. The first genetic algorithm was invented in the 1950s, and it was then shown that genetic algorithms solved optimization problems; since then, lots of work has been done on convergence rates.
    Okay, okay, I listened to this talk waiting for the angle that made the work worthwhile… Nothing. She had the agents mating by taking their production functions and swapping digits of their consumption decisions, e.g. one agent did 49 and the other 53, so the offspring do 43 and 59… (granted, among some other things)
    There was a long discussion about the convergence of this process… Not one citation of the real literature on this subject, where you can find very good bounds on the probability of converging after n iterations, etc. (this field had matured by the 80s), and without the cloak of economic gibberish…
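    [A guess at the crossover rule Jon describes, reconstructed from his numbers; certainly not the actual paper's code. NR]

```python
def digit_crossover(a, b):
    """'Mate' two agents by swapping the decimal digits of their two-digit
    consumption decisions, as in Jon's example."""
    da, db = f"{a:02d}", f"{b:02d}"
    return int(da[0] + db[1]), int(db[0] + da[1])

print(digit_crossover(49, 53))   # (43, 59), matching the numbers above
```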

  40. Nemi

    The A, A’ thing is equally problematic with analytical and intuitive solutions.
    Assume two companies.
    If they choose quantities, the price ends up between the PC and monopoly price.
    If they choose prices, we end up with the PC price.
    If we introduce one second of search time, we end up with the monopoly price.
    Etc. Etc.
    If simulations have to show that they are robust to alternative assumptions, why don’t we demand the same thing from analytical and intuitive solutions?
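    [Nemi's cases, worked through in the standard textbook duopoly setup with linear inverse demand P = a − Q and zero marginal cost; the numbers are illustrative only. NR]

```python
# Two firms, the same linear inverse demand P = a - Q, zero marginal cost;
# only the assumed strategic variable changes, and the prediction jumps.

a = 12.0

p_monopoly = a / 2    # one seller maximizes (a - Q) * Q, so Q = a/2
p_cournot = a / 3     # quantities chosen: each q_i = a/3, so P = a/3
p_bertrand = 0.0      # prices chosen: undercutting drives P down to cost

print(p_monopoly, p_cournot, p_bertrand)   # 6.0 4.0 0.0
# Add a tiny search cost and the Diamond paradox pushes the price back up
# to the monopoly level: the same fragility, in analytic models.
```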

  41. JR Hulls

    I think that it’s a pretty safe bet that building understanding and simulations at the agent/transaction level, along with modern computational power, will eventually come up with the economic equivalent of Computational Fluid Dynamics in aerodynamics (computational fiscal dynamics?), which will itself undergo evolutionary development. J. Doyne Farmer provides a good example of this path with his INET grant to start by modeling the crash in the housing market: http://ineteconomics.org/video/30-ways-be-economist/doyne-farmer-macroeconomics-bottom But it’s a massive, long-term task.
    It appears to me that one can draw some intellectual parallels between the current state of economics and the development of aerodynamics prior to 1900. Aristotle started it all in 350 BC when he described a model for a continuum and posited that a body moving through that medium encounters a resistance. Things have progressed somewhat, but until very recently, if one flew, it was courtesy of calculations of lift and drag that depended largely on Lanchester’s 1907 analog of vortices off a wing twisting into a trunk at the wingtip, coupled with some very elegant math from Kutta, followed by Prandtl’s brilliant 1904 visualization of the boundary layer.
    While the Euler equations for inviscid flow and the Navier-Stokes equations for viscous flow were well known by the middle of the 1800s, there were no known analytical solutions for these systems of nonlinear partial differential equations, and thus they were of no use to the early pioneers of flight. Thanks to Prandtl’s brilliant analogy of flows, it became possible to derive sufficiently accurate engineering solutions to give us the modern aircraft, albeit at the cost of ignoring some of the finer details of the physics.
    Thanks to modern high-speed computers, we now have computational fluid dynamics and can obtain engineering solutions for practically any aircraft configuration, which can then be ‘built’ on a computer and flown in a simulator. Referring to the equations for momentum, continuity and energy, CFD can be defined as “…the art of replacing the…partial derivatives…in these equations with discretized algebraic forms which in turn are solved to provide numbers for the flow field values at discrete points in time and/or space” (quote from “A History of Aerodynamics” by Anderson, http://tinyurl.com/d6beby9, an excellent book for anyone interested in the development of science and technology).
    I don’t think that economics has yet had its “Prandtl moment” which, in aeronautics, enabled physics and math and engineering to converge to develop useful solutions. What is needed is the development of sufficiently powerful analogs, such as Prandtl’s boundary layer, that will encompass sufficiently accurate simulation and analysis of the flow field of transactions to develop ever more refined ‘engineering’ solutions.
    It’s going to take a lot more work and cooperation across disciplines to develop understanding of the flow field of transactions, to maximize social benefit rather than just extracting maximum individual profit from the flow. Modern aeronautics didn’t leap directly from the Wright Brothers to the 747, but followed an evolutionary path of theory, experimentation, simulation (wind tunnels) and practical experience. My work in economics has been to look at the development of a very basic economic ‘flight simulator’ for environmental policy. Here’s my brief piece on economic simulation: http://oecdinsights.org/2012/06/27/going-with-the-flow-can-analog-simulations-make-economics-an-experimental-science/
    JRHulls

  42. Patrick

    QCD is another example. My limited understanding is that physicists can’t solve the equations, so they simulate them. It’s apparently good enough to sort out the Higgs from the giant mess that spews from the proton-proton collisions at CERN.

  43. Alex Godofsky

    I think analogies to computational physics are misguided. Economic systems have several huge disadvantages versus physical systems, including:
    1) It is difficult or impossible to discover the laws governing the behavior of individual pieces of the simulated universe.
    2) It is difficult or impossible to estimate the parameters of those laws.
    They have another, even larger disadvantage:
    3) Those parameters evolve over time. For many of them, it is impossible to predict their evolution, because knowledge of where they will go co-implies that they’ve already gone there. For example, we can’t know when or if we will discover workable nuclear fusion power until we actually discover it.
    All of those pale in comparison with the biggest difference:
    4) The components of an economic system are intelligent agents who are necessarily capable of predicting the outcome of the simulation. They are actively engaged in the same project you are – discovering how to predict the behavior of the system.
    The last one means that you can’t just specify the state of the world at t=0, specify rules for evolution of that state, and let go. Ever. It doesn’t work. You have to include some ratex-y mechanism in there, which is something the engineers never have to do.
    This isn’t to say that you can’t use toy models to study specific qualitative phenomena, but the idea that we’ll do for the economy what we do for airfoils is absurd.

  44. Nick Rowe

    Alex: “4) The components of an economic system are intelligent agents who are necessarily capable of predicting the outcome of the simulation. They are actively engaged in the same project you are – discovering how to predict the behavior of the system.”
    Yep. That is indeed the biggest difference. (And I especially like your phrase “They are actively engaged in the same project you are –“) RE “solves” this problem by assuming they have solved it (at least the reduced form, if not the structural equations). But rejecting RE doesn’t mean that we can ignore this. There’s a self-referential element in economies.

  45. Kaleberg

    Physicists say that you can tell how good a theory of gravity is by seeing at what N it cannot solve the N-body problem in closed form. So Newton could get an analytic solution for N=2, the two body problem. Einstein could get an analytic solution for N=1, but more modern theories usually break down when N=0, the vacuum state.
    In the real world there aren’t any analytic solutions. The real world doesn’t have theories. Things just happen, and things can observe things. The real world is a black box. There are only a handful of closed form solutions, and “equilibrium” is usually a matter of contingency, not necessity. (The biologists got over this. Economists haven’t even faced it yet.) This made a lot of people uncomfortable, mainly scientists. Engineers never got serious about the whole solution thing in the first place.

    It’s actually high time economists moved into the 20th century. Monte Carlo simulation has been around for nearly 70 years now. Agent-based modeling might get us a few fewer economic theories that strongly violate the basic principles of accounting.

    There’s a problem with the phrasing of:
    “4) The components of an economic system are intelligent agents who are necessarily capable of predicting the outcome of the simulation. They are actively engaged in the same project you are – discovering how to predict the behavior of the system.”
    It’s not that they are capable of predicting the outcome of the simulation. That would get you into the Halting Problem, and you could then prove such entities could not exist. They do TRY to predict the outcome of the simulation, but they usually do a crummy job. (Why isn’t there a Rational Expectations Fund?) When John von Neumann, a pioneer in computers and simulation, looked at the financial system, he wound up developing game theory. The much-vaunted fundamentals of the market mattered so much less than the market strategies. No one makes money on the fundamentals.

  46. Alex Godofsky

    Kaleberg: it’s worded correctly but could have been clearer. The economic system is the real economy. The simulation is a computer program containing a proposed model of the system.
    (Aside: the halting problem isn’t the issue. It’s entirely possible to have a system that can make proofs about itself; it just depends on what kind of proofs you are looking for and the complexity of the system. The issue you’re getting at is a more general information-theoretic constraint that a system cannot precisely forecast its own behavior; but a system CAN accurately forecast its own macroscopic behavior, which is what we care about.)

  47. Peter N

    I was impressed by the Geoff Willis link, so I’m reposting it

    Click to access bullets.pdf

    To whet people’s appetites, this derivation of the Bowley ratio is from the paper:
    the Bowley ratio = total earnings / Y
    the profit ratio = total profits / Y
    By definition, the Bowley ratio + the profit ratio = 1:
    β + ρ = 1
    The profit ratio = the profit rate / the total return rate:
    ρ = r / Γ
    Y = earnings + profits by definition, so: earnings = Y − profits
    e = Y − π
    e / Y = 1 − π/Y, so: Bowley = 1 − π/Y
    but:
    Consumption = Income ⇒ C = Y, so:
    β = 1 − π/C
    or:
    β = 1 − (π/W)/(C/W)
    so:
    Bowley = 1 − profit rate / consumption rate
    β = 1 − r/ω
    Results:
    • the consumption rate ω defines Γ, the ratio of total income to capital.
    • r is smaller than ω, which gives a Bowley ratio between 0.5 and 1.0, matching real values.

  48. Greg

    Can we understand agent-based models? Start with Schelling’s “Models of Segregation”. The model was computed with coins on a piece of paper; it’s generally thought to be understandable.
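    [For concreteness, a one-dimensional sketch of a Schelling-style model; the neighbourhood size, threshold and move rule are my simplifications. NR]

```python
import random

random.seed(1)
N = 40
line = [random.choice("XO") for _ in range(N)]

def content(i):
    """True if at least half of the up-to-4 nearest neighbours match."""
    nbrs = line[max(0, i - 2):i] + line[i + 1:i + 3]
    return sum(n == line[i] for n in nbrs) >= len(nbrs) / 2

print("".join(line))              # a well-mixed starting line
for _ in range(10_000):
    i = random.randrange(N)
    if not content(i):            # a discontented agent relocates at random
        j = random.randrange(N)
        line[i], line[j] = line[j], line[i]
print("".join(line))              # typically long single-type runs emerge
```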
    Agent-based models are tools for abduction, not deduction. Complaining that they don’t lead to theorems or astronomical-type predictions is complaining that your spice rack is not a good can opener.
    Following from that, the phrase “black box agent-based model” is oxymoronic. If you don’t know and find plausible (for reasons of behavioural psychology, microeconomics, etc., external to the model) the rules that the agents and their interactions are following, you’re not doing agent-based modelling. Instead, you’re performing a rite in a cargo cult.
    In ABM, you always know exactly what settings are different between the model runs that produce recessions and the model runs that don’t. But that is where the science starts, not where it finishes: it tells you where to start looking for the answer to the “why?” question.
    Alex Godofsky’s point 3 badly overstates its case. In reality, no invention or event of any realistically possible kind (barring global thermonuclear war) can make a difference of more than a few percentage points in any direction over the next few years. This is easily handled within the ensemble of model runs.
    Godofsky’s particular example of workable fusion power, if demonstrated tomorrow, would have no noticeable effect on the macroeconomy for at least 20 years, while pilot plants are built, safety regulations and permitting procedures developed, IP battles fought, challenges from the coal industry fought off, investors found, staff trained, factories built, etc. After 20 years, optimistically, 0.001% of global electricity would come from fusion reactors. This particular possibility can be ignored in any medium-term model of the macroeconomy.
    Likewise, the evidence from behavioural econ (as I understand it) is that Godofsky’s point 4 overstates the difficulty too. Economic agents are interested in their own situations, not the global outcome; they use simple decision rules and are predictably fallible and variable. Points 1 and 2 are moot for the same reasons.
    Finally, the idea that agent-based modelling is unusable because there is no mathematical infrastructure for it is simply pathetic. Do what physicists do whenever they lack the mathematical theory for a promising new line of attack: invent the mathematics!

  49. C.H

    I like the parallel with Searle’s Chinese Room (really one of the most thoughtful thought experiments I know), but I’m not sure your argument applies only to ABM. The Chinese Room thought experiment shows that we cannot reduce semantics to computation (a syntactic system). Now, by the Church-Turing thesis, any computation is a deduction, and of course a classic analytical, equation-based model is a deduction. I think that there is no qualitative difference: the model itself, its syntax, has no meaning. We ascribe a meaning to a mathematical model 1) because the logical rules are transparent and 2) because we are able to define relations of similarity between the model and the target system. The same is true for an ABM, with the difference that things are more complicated: there are more variables, and the logical operations are done by the machine. My point is that the difference is epistemic, but is not related to the “ontology” of an ABM.
