[Updated. See below.]
[I wrote this because we were starting to discuss the question in comments, and I thought it was interesting, so tried to collect my thoughts. No great original insights here. Nor even any conclusions, right or wrong. Read at your own risk. This post is really just a place for people who want to discuss this topic to do so. ]
I'm bad at math and computers, which means I find it hard to understand agent-based modelling. It also means I'm a bit biased, because I wouldn't be any good at doing agent-based modelling myself. But that doesn't mean I don't want others to do agent-based modelling. It's probably a good thing that some economists work in this area.
This is what I think I understand about agent-based modelling, and the reason for doing it:
The economy is very complicated. Real people can't hope to understand it, solve for the equilibrium, and figure out their optimal Rational Expectations/Bayesian strategy. Hell, we economists sometimes can't even solve models ourselves, so how could we possibly assume real people can? So it makes sense for people to follow rules of thumb to cope with a complicated world. And those rules of thumb, though they simplify the individual's problem, may make the economist's problem even more complicated.
So we model agents as following rules of thumb, put them all together in a computer, and then the computer tells us what happens.
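Something like this toy sketch, maybe (entirely made up by me, just to show the shape of the exercise: agents follow a crude spending rule of thumb, we put them together, and the computer prints out what aggregate income does):

```python
# A made-up toy, not anyone's actual model: rule-of-thumb consumers whose
# spending depends on income and a crude "confidence" mood, simulated forward.
import random

random.seed(42)
N, T = 100, 50
income = [1.0] * N                      # each agent's current income

def spending_rule(my_income, confidence):
    # Rule of thumb: some autonomous spending plus a fraction of income,
    # scaled by confidence. No optimization, no expectations.
    return 0.2 + 0.7 * my_income * confidence

for t in range(T):
    confidence = random.uniform(0.8, 1.1)           # an aggregate mood shock
    spending = [spending_rule(y, confidence) for y in income]
    total_demand = sum(spending)
    income = [total_demand / N] * N                 # everyone's sales are someone's income
    print(t, round(total_demand / N, 3))            # watch average income rise and fall
```

Nothing hangs on the numbers; the point is just that the aggregate path is something the computer prints out, not something anyone solved for.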
Is that basically right? If so, it makes sense to me. I don't want to do it myself, but I'm glad someone (else) is doing it. It sounds like a much better use of our time, at the margin, than building more existence proofs of Arrow-Debreu General Equilibrium.
But it reminds me of John Searle's Chinese Room.
I want to understand recessions. A few months ago I read a paper putting forward what looked like an interesting new theory of recessions. I did not understand the theory, even though the math did not seem especially complex and the writing seemed reasonably clear. I put the paper aside, with the (tentative and provisional) conclusion that the theory made no sense. I couldn't figure out how they could get the conclusions from the premises.
Perhaps if I had spent more time with that paper, I would have understood it, and decided that the theory did make sense after all. Or perhaps I would have found a mistake in the paper, and decided it didn't make sense. But if I can't understand it, I'm not going to accept it as a possible explanation of recessions.
Now I don't understand the Theory of Relativity either. But physicists say that they do understand it, and I believe them, so I accept it as an explanation of whatever it is they say it explains. Maybe I could understand it too, if I really studied it hard.
What happens if the explanation is a black box that we can never interrogate? Suppose I had lost my copy of the paper, and so had everyone else. And the only thing anyone could find was the page of assumptions and the page of conclusions. And we had lost and forgotten everything in between, and couldn't figure out what it must have been. And the authors had disappeared. Would we still want to say that the paper explains recessions? Even if we thought that the authors of the paper were very reliable economists who never made mistakes? Aren't the missing pages important?
Agent-based models, or any computer simulations, strike me as being a bit like that black box. A paper written by a very reliable economist where all the middle pages are missing and we've only got the assumptions and conclusions. I can see why computer simulations could be useful. If that's the only way to figure out if a bridge will fall down, then please go ahead and run them. But if we put agents in one end of the computer, and recessions get printed out the other end, and that's all we know, does that mean we understand recessions?
Even if we do lots of different simulations, with different assumptions, and list all the cases where we do get recessions and list all the other cases where we don't get recessions, do we understand recessions?
Even though it could be very useful for policy advice, I would find it deeply unsatisfactory if we had to stop there. "But why do we get recessions in that list of cases and no recessions in that other list of cases?"
[Update: I really liked this comment by (astrophysicist) Jonathan Dursi (very minor edits):
""agent-based models and understanding are complements" — yes.
We in the physical sciences have wrestled with this for a long time – centuries, even, because the issue isn't just with simulation, the issue is with experiment vs theory. You prepare some inputs, push the button, and out comes an output. Great. So now what?
Unfortunately, simulation is comparatively new, and so "best practices" for computational work are just starting to come together.
Right now, the level of quality for a lot of computational work – in all fields – is uneven. For instance, and probably prompting your question, in all disciplines one often sees a simulation paper where the author runs one simulation, says "See! When you put in A, you get out B!", plots some pretty pictures, and concludes.
These papers make me wish academics had a demerit system. They should, of course, never be published.
A better paper would go like this: Using well understood simulation software X, we wish to see what happens when we put A in. So we put a range of things like A in, over a suite of different simulations, and out comes a range of outcomes centred around B. Ok, interesting. But if we put A' in, we don't get B; we get B'. We have a hypothesis about this; we think that A causes B but A' doesn't because of complex intermediate step C. So let's modify our simulation to not have C (by modifying the inputs, or the behaviour of the agents, or what have you). Oh, look: when we have A, but not C, we don't get B anymore! Thus we have evidence that A gives you B through C.
The upside to doing simulations is that they're controllable theoretical experiments, and that way you get to twiddle and tweak and use the changes to improve your understanding (given your underlying theory). They're ways to understand how the *theory* plays out in complicated situations, the same way physical experiments give you the ability to understand how the physical world plays out in complicated situations.
I'm involved with software carpentry, which teaches basic computational skills to researchers (mainly graduate students) in many different disciplines; we do the same at different levels for Compute Canada. Email me if you think a workshop would be useful in your department."
End Update.]
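To make Dursi's A / A' / knock-out-C procedure concrete, here is a toy sketch of it in code (mine, not his; simulate() is a made-up stand-in for real simulation software):

```python
# A toy illustration (not Dursi's code): simulate() stands in for "well
# understood simulation software X", and the outcome B only emerges when
# input A is strong *and* the intermediate channel C is switched on.
import random

def simulate(input_a, channel_c_on=True, seed=0):
    random.seed(seed)
    effect = input_a * (1.0 if channel_c_on else 0.1)   # C carries A's effect
    return effect + random.gauss(0, 0.05)               # plus a little noise

def suite(input_a, channel_c_on):
    # A suite of runs over different seeds, as Dursi suggests, not a single run.
    return [simulate(input_a, channel_c_on, seed=s) for s in range(20)]

mean = lambda xs: sum(xs) / len(xs)
print("A  with C:", round(mean(suite(1.0, True)), 2))      # we get B
print("A' with C:", round(mean(suite(0.2, True)), 2))      # we get B', not B
print("A  without C:", round(mean(suite(1.0, False)), 2))  # B disappears: evidence A -> B via C
```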
C.H.: You are the first commenter to pick up on the Chinese Room. Finally. I am so pleased!
There seems to be a lot of confusion on these posts about simulations vs. math models. Reiss' paper 'A Plea for (Good) Simulations: Nudging Economics Towards a More Experimental Science' [link pdf NR] provides a useful access point and general discussion. Some of the earlier commenters have put forward the position that if you cannot predict future changes, the agent-based model is somehow not useful, or that the agents are building their own simulations, which invalidates the simulation. (Old science-fiction/Matrix-like plot here: when we run our simulations, are we really running a simulation, or are we ourselves just modeled agents running simulations in someone else's simulation, trapped in a form of Chinese Room, incapable of knowing the reality of where or what we really are?) These objections may be somewhat true in a math, or 'pen and paper', model, as Reiss puts it, which is why he makes the point that a simulation need be far less constrained to produce useable results, and that many economic models are overconstrained just to make the math work. Reiss' paper does an excellent job of defining the advantages of simulations as well as some of the limitations.
Analog simulations are a special case, and here I would refer to Einstein and Infeld's 1938 classic, 'The Evolution of Physics: From Early Concept to Relativity and Quanta'. It is firmly grounded in observable real world examples and analogy, as is wonderfully demonstrated by the chapter on quanta, which takes us from flopping rubber tubes to violin strings to probability waves. "It is easy to find a superficial analogy which really expresses nothing. But to discover some essential common features, hidden beneath a surface of external differences, to form, on this basis, a new successful theory, is important creative work. The development of the so-called wave mechanics, begun by de Broglie and Schrodinger, less than fifteen years ago, is a typical example of the achievement of a successful theory by means of deep and fortunate analogy".
Which brings me back to Phillips and his Moniac. Here is a paper on the use of Phillips's hydromechanical simulator to teach system dynamics. [link pdf NR] (These folks are always using bathtubs and hot and cold taps to explain things to people, as mentioned in the abstract.) The integrative function of the machine, and its ability to literally 'turn off' portions of the economy (including many omitted in DSGE models) so that you can literally see the result, are well described and illustrated in the paper. The paper also contains links to a demonstration of the Phillips machine by Dr. Allan McRobie, which I have referred to previously. The use of fluid-dynamics analogs merely adds more capabilities to Phillips's basic concepts, such as flow rates, density and shear forces, and the ability to continuously exchange potential and kinetic energy, all useful analogs in an economic model.
If, as appears to be the case with our gracious blog host, you are not one of those enamored with digital computers and mathematics for their own sakes, analog simulators such as Phillips's can be a wonderfully visible relief. I'd be very cautious in placing limits on our rapidly increasing ability to simulate, as opposed to model, very complex structures. When the CFD people first started to get really serious, only a few decades ago, there were lots of jokes about how many hours on a Cray it took to model the formation of a single vortex, let alone an airfoil, yet now, with lattice methods, we can model an entire airliner directly from Boltzmann's gas laws. [link pdf NR] The developmental methodology is interesting here. Aerodynamicists have constructed an actual (analog) wind tunnel model based on a very typical jetliner of well-understood real-world configuration, and then made available a computer model of the aircraft which is used as the basis for evaluating CFD models and making modifications. These changes can then be validated in the wind tunnel before application to actual aircraft (see the fairing-to-control-separation example in the referenced paper).
A pure math model encompassing even the range of the Phillips machine would be constrained even beyond the agent/market limitations of DSGE models, so I'm still betting on Farmer's approach leading to useful new insights, as it closely follows the aerodynamic model: agent behavior (molecules/housebuyers) interacting with a structure (aircraft or transaction), all continually refined and checked against, hopefully, an agreed-upon 'wind tunnel' model of the economy, and then against real-world cases.
Not everyone agrees that the Chinese Room experiment actually shows that you cannot reduce semantics to syntax…
I mean, that may or may not be true. It’s just that it’s not at all clear that Searle’s experiment so conclusively demonstrates what some people think it does.
Greg:
You can quibble about the specific fusion power example if you want, but it's easily observable that the world today looks different from the worlds of 10, 20, 30, etc. years ago in ways that couldn't have been reasonably anticipated, and we should expect this to continue to be true. And you don't even know for sure that what you said about fusion power is right; maybe it's possible to make something that could power a home on tap water for $100, if only we knew how. We don't know all the rules of the game, and it's been a multi-century project to discover the ones we do know.
Regarding point four, the entire point of all this is to end up informing public policy. If we came up with a model like this that actually worked it would cause radical changes in public policy. A successful model cannot exist in a world like the one we live in because the very fact of the model’s existence changes the world dramatically.
And point 4 holds very well in more limited cases too – look at how the stock market reacts dramatically to Fed announcements, for instance. If the Fed came up with a model like this and made it public, that would cause asset prices to behave differently from before (I believe I'm basically restating the Lucas critique here).
I rather like that Mehrling paper on Fischer Black. Economics looks at value as certain past costs, and finance looks at it as uncertain future flows; however, the only basis for estimating those flows is current and past experience, so while ex post they may be future flows, ex ante they are projections from past ones. It brings to mind that we do not understand expectations, or really anything about the future; they are just terms to hide our ignorance. We are in our own Chinese box.
Mandos:
Well, that’s what the Chinese room shows according to Searle. Of course, not everyone agrees, and there is a huge literature about it (see for instance the entry in the Stanford Encyclopedia of Philosophy:
[link here NR]
Still, I must say that despite all the counterexamples and attacks made on Searle’s argument, I think that it holds up well. Anyway, my point is independent of the validity of the Chinese room argument. In my opinion, there isn’t a fundamental difference between ABM and mathematical models. There is an interesting paper by Julian Reiss and Roman Frigg that provides a convincing defense of this claim:
[link here pdf NR]
Hey Nick, I’m kind of showing my hand. But what I’ve been pursuing is process algebras; there is a form of equivalence known as bisimulation. And there is the ability to prove stuff about systems. They’ve transferred it over into other parts of math too. You need to change one of the axioms of standard set theory. I don’t have the papers on me ’cause I’m still in the bush. It goes by different names as well: coinduction as a proof technique. [link here NR]
Edeast: I found your comment totally incomprehensible. Which probably says more about my math than it says about your comment 😉
Ya, the godfathers are Robin Milner and Tony Hoare. However, one of Milner’s students, Luca Cardelli, has created some calculi and done the semantics for object-oriented programming. Bisimulation translated into set theory is known as non-well-founded set theory. When translated to category theory it’s known as a coalgebra. Milner’s pi calculus has encoded the lambda calculus, which is the traditional calculus for describing computational problems. The process algebras were created to deal with parallelism and many-core computing, where new problems arise that are not totally deterministic. However, since the field has been established, these formalisms have been used in other disciplines. Luca Cardelli has been working on using it with biology; he uses process algebras to encode ODEs. Also cryptography uses the spi calculus. http://www.lucacardelli.name
I’ll try to explain. They are used to model non-deterministic sequential processes. From the book I’m reading: if two infinite objects, or the black boxes in your example, exhibit the same behavior, they are bisimilar, and to prove they are equal is known as a proof by coinduction.
http://en.wikipedia.org/wiki/Non-well-founded_set_theory
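For anyone curious what bisimulation looks like in practice, here is a minimal sketch (my own toy, not from any of the sources Edeast mentions): two finite "black boxes" count as bisimilar if every move of one can be matched by the other, and the check computes the greatest such relation by starting from "everything is related" and whittling it down, which is the finite shadow of a proof by coinduction.

```python
def bisimilar(states1, trans1, states2, trans2, start1, start2):
    """trans1/trans2 map (state, label) -> set of successor states."""
    labels = {lbl for (_, lbl) in list(trans1) + list(trans2)}
    # Coinductive starting point: assume every pair of states is related.
    rel = {(p, q) for p in states1 for q in states2}

    def matched(p, q, rel):
        # Every move of p must be matched by some equally labelled move of q
        # landing in a related pair, and vice versa.
        for a in labels:
            for p2 in trans1.get((p, a), set()):
                if not any((p2, q2) in rel for q2 in trans2.get((q, a), set())):
                    return False
            for q2 in trans2.get((q, a), set()):
                if not any((p2, q2) in rel for p2 in trans1.get((p, a), set())):
                    return False
        return True

    changed = True
    while changed:                       # remove distinguishable pairs until stable
        changed = False
        for (p, q) in list(rel):
            if not matched(p, q, rel):
                rel.discard((p, q))
                changed = True
    return (start1, start2) in rel

# Two black boxes with different internals but the same observable behaviour:
# both just "tick" forever, so they are bisimilar.
s1, t1 = {"a0", "a1"}, {("a0", "tick"): {"a1"}, ("a1", "tick"): {"a0"}}
s2, t2 = {"b0"}, {("b0", "tick"): {"b0"}}
print(bisimilar(s1, t1, s2, t2, "a0", "b0"))   # True
```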
Here is a video of the ambient calculus, one of Luca Cardelli’s: http://m.youtube.com/watch?v=j6bZCSw-rVA
I think the software used in the video has a default example modelling taxis in NY. I thought it would be good for trying to model the shadow bank runs, but I have spent the last couple of years just trying to figure out the theoretical basis. Don’t ask me, I don’t have it figured out yet.
I was also hoping to use this stuff to prove your theory that the medium of exchange is unique and different. I’m going to argue that the list of commodities forms a poset ranked by how many markets they trade in, i.e. liquidity. So money would be the least upper bound, or the supremum, of the poset. Argue that it forms a chain and indeed a domain. Anyway, that’s where my comment on non-tradeable zoblooms being semantic bottom, or nonsense, came from a couple of months ago. So the lub of each domain has an undo effect on the aggregate, but then I got stuck, or that is as far as I’ve gotten. Coming from a comp sci approach.
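A toy sketch of the poset idea as I read it (my guess at the construction, not Edeast's actual one): order commodities by the set of markets that accept them, so money, which trades in every market, sits at the top as an upper bound, and a non-tradeable "zobloom" sits at the bottom.

```python
# A guessed-at toy, not Edeast's construction: commodities ordered by the set
# of markets they trade in; x <= y when every market accepting x accepts y.
markets_for = {
    "money":    {"apples", "haircuts", "bonds", "houses"},
    "bonds":    {"bonds", "houses"},
    "apples":   {"apples"},
    "zoblooms": set(),                   # non-tradeable: the bottom element
}

def leq(x, y):
    return markets_for[x] <= markets_for[y]      # subset inclusion = partial order

def upper_bounds(goods):
    return [g for g in markets_for if all(leq(x, g) for x in goods)]

print(upper_bounds(list(markets_for)))   # ['money']: the lub, i.e. the most liquid good
```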
Anyway my writing is unclear, partly to obfuscate my own confusion. I’ll let you know if I figure anything out.
Edeast: well, I don’t understand it (I thought “poset” was a typo, until I saw you repeat it), but you might be onto something. Sounds vaguely like Perry Mehrling on money?? Good luck with it!
You should be able to use partially ordered sets with the math you know. It’s not necessary to use these other weird process calculi, in case you want to take a crack at it. I was following it into domain theory, which might not be the best place, and where I was getting hung up.
At first glance, I found the title of this post surprising. Isn’t economics mainly about agent based models? (Not that they are all that easy to understand. ;))
In these models a typical agent is a rational utility maximizer. That description is ambiguous and in many economic situations is not enough to predict the agent’s behavior. Furthermore, such an agent is complex. True, humans are complex. However, in science we prefer parsimonious explanations. In the service of both parsimony and precision, we may assume agents whose behavior is specified by certain simple rules. While we may think of such rules as rules of thumb that actual humans follow, their main scientific value lies in precision and parsimony. (To say that such assumptions do not reflect how humans actually behave, well, the same may be said of homo economicus. :)) Now, if agent based models use assumptions that are different from standard assumptions, that may make them unfamiliar and less easy to understand.
Computer simulations with agent based models are thought experiments. We use computers because they are thought experiments that humans cannot carry out, or cannot carry out easily. As thought experiments, they are the domain of the theoretician. They tell us what the consequences are of our assumptions. Some of these consequences may be surprising, as they do not obviously follow from those assumptions. In fact, the ability to generate such surprises is one of the values of such simulations. Simple assumptions can lead to complex, interesting behavior, which humans alone could not deduce beforehand. The fact that computer thought experiments may produce surprising results means that humans may have to treat them like regular experiments and come up with human understandable explanations for the results.
Let me illustrate from my own experience. A while back I set myself a small project to evolve computer programs to play a game. I started with an established software package. The starting programs knew nothing, not even the rules of the game. I started the project running and went to the movies.
When I got back, the programs had evolved an unexpected strategy, which I called the Good Trick. To give you the flavor of the good trick, suppose that you have a game in which players compete by moving about to try to reach a goal. The players, however, have poor eyesight. Here is a good trick: dig holes for the other players to fall into. True, some players fall into their own holes, but they are familiar with where they dug the holes, so they are less likely to do so than other players. What may make this strategy surprising is that it has nothing to do with goal seeking, which is the object of the game.
In a way, I had an explanation for the Good Trick in my actual project. It set traps for ignorant, unperceptive players (which is the population that I started out with). But the players did not have a plan to trap other players; they just evolved to set the traps. As it turns out, the Good Trick is fairly robust. I have seen it emerge with other software that I have written, with software that others have written, even when the players have been fairly sophisticated. Really good programs do not use the Good Trick, because their comparably good opponents will not fall for it. The Good Trick is emergent behavior which is a function not only of the game and the players who use it, but of their opponents.
Now, the human explanation for the Good Trick is not difficult to find. But it did not come through understanding the workings of the player programs or the evolution software. Hundreds, even thousands of programs use the Good Trick. The Good Trick serves a purpose, but one that no program has. It is as though an Invisible Hand — sorry. 😉 Computer simulations may produce explanations, since the results follow logically from the assumptions. But they may not produce human understandable explanations, nor are they meant to do so. What they do is to show us the consequences of our assumptions, and they require us to specify those assumptions unambiguously.
Secondary question: Is the Good Trick rational?
Well, the players are hardly what we would call rational. They are dumber than amoebas. Furthermore, the players do not have a plan for the Good Trick. In a way, they are lucky that other players fall for the trap. But the social environment in which they find themselves is one in which the other players will fall for it. In short, in the environment in which they find themselves, the Good Trick helps them to win the game. It is therefore hard to call it irrational. “Rational” is an ambiguous term. 🙂
BTW, Keynes argued that probabilities are only partially ordered. That means that they are not numbers. Even if they all lie between 0 and 1.
IMO, human preferences are only partially ordered, as well. Among other things, that means that they are not transitive. Which is what research indicates. 🙂
Small comment on Searle’s Chinese Room argument:
Consider the brain of a human who understands Chinese. It contains areas that are central to language understanding and production. Do any of the neurons in that brain understand Chinese? Don’t be ridiculous. Yet a system within that brain, or possibly the brain as a whole, understands Chinese. That understanding is distributed throughout the system.
In Searle’s Chinese Room no component of the system understands Chinese. Nor is it capable of doing what the brain of a person who understands Chinese does. Still, the mistake is to say that, because no component of the system understands Chinese, the system as a whole does not understand Chinese. (It’s the homunculus problem. Searle gives it a twist by positing a homunculus who does not understand Chinese instead of positing one who does. ;))
Min, Scott Aaronson’s Waterloo lectures describe quantum mechanics as probability with the complex numbers. Closest I’ve ever come to understanding it.
Thanks, edeast. 🙂
I expect that you are familiar with Feynman’s book, “QED”. Physical probability is a different beast from Bayesian probability.
No, I wasn’t familiar, thanks.
Ok, I don’t think economists are missing anything. They’ve translated it over to game theory, and game semantics solved an outstanding problem in domain theory, so you guys are fine.
This might fit in with your recent post on the very short run, but I think Glynn Winskel’s http://www.cl.cam.ac.uk/~gw104/ recent research program has the most potential. He’s trying to generalize over games. His lecture notes for the 2012 course on concurrent games are good. At least the intro is good; it gives the motivation. They describe his work on games as event structures, but in a recent talk he mentions games as factorization systems, and because there are pictures, I like it. http://events.inf.ed.ac.uk/Milner2012/slides/Winskel/RobinMilner.pdf slide 12 and 22 till the end.