
Sunday, September 26, 2010

Third Cooked Breakfast of the Year...

Had breakfast this morning at Gourmet Grub on Rose Street...I ordered the Ultimate Breakfast, which consisted of venison sausage, bacon, scrambled eggs, toast, and haggis.  It was very good, but a bit pricey...

Wednesday, September 22, 2010

Totally Remiss on my Blogging...

OK...I am back.  Have been preparing for the start of the teaching term and as such have been a bit absent-minded when it comes to blogging.  I have another meeting with my PhD adviser, Ed Hopkins, tomorrow afternoon, and assuming he concurs with my research proposal for the next year I will move out smartly.

I have finally obtained access to the CRSP data (which was used in the Billio et al. paper on econometric measures of systemic risk) and I plan to start my research by duplicating part of their work and then moving on to analyze a similar data-set for the UK.  I also have plans to ground some of the systemic risk measures a bit more in economic network theory, and then develop a couple of measures of my own...

On the theory side of things, I am working on refining a set of research questions to tackle the complexity of large economic networks.  What I am going to try and tackle first is the following: I want to develop an economic model that captures...
  1. Densification power laws: networks are becoming denser over time, with the average degree increasing (and hence with the number of edges growing super-linearly in the number of nodes). Moreover, the densification follows a power-law pattern.
  2. Shrinking diameter: The effective diameter is, in many cases, actually decreasing as the network grows.
This idea is motivated by empirical findings and other work from this paper by Leskovec et al.  I have no firm modeling strategy yet, but strategic complements and some type of localized information structure will almost certainly be involved.
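For reference, the densification power law says E(t) is proportional to N(t)^a with an exponent a strictly between 1 and 2, and it is easy to check on network snapshots by fitting log E against log N.  A minimal sketch with made-up snapshot data (NumPy assumed; real data would come from successive observations of a growing network):

```python
import numpy as np

# Hypothetical snapshots of a growing network: nodes N(t) and edges E(t).
N = np.array([100, 200, 400, 800, 1600])
E = np.array([300, 780, 2100, 5500, 14800])

# Densification power law: E(t) ~ N(t)**a, i.e. log E = a*log N + const.
a, _ = np.polyfit(np.log(N), np.log(E), 1)
print(f"densification exponent a = {a:.2f}")  # 1 < a < 2: super-linear edge growth
```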

Sunday, September 19, 2010

Second Cooked Breakfast of the Year...

This morning I had breakfast at Ryan's Bar.  Menu included: 2 rashers of bacon, 1 sausage, scrambled eggs, blood pudding, 2 hash browns, 2 potato scones, beans and toast, grilled mushrooms and tomato.

The beans and toast were an excellent addition, and although the blood pudding was good...I missed my haggis!  Overall, I would say the breakfast was a step up from last week's breakfast at Always Sunday on High St...

However, the coffee was terrible...although I noticed that the bartender who made my coffee had just started, so maybe that had something to do with it... 

Saturday, September 18, 2010

Datastream Tutorial...

So it looks like tomorrow is going to be spent in the library learning how to use Thomson Reuters Datastream.  I have a nice idea for an empirical study of financial networks that will help me get started on my PhD...but first I will need to collect data on stock prices, market capitalization, and sector codes for all stocks on the FTSE 100.  I think (hope?) that I can complete the vast majority of the work by Winter holiday....

Today's Afternoon Run...

Didn't run outside...although I wish I had, as it is really nice in Edinburgh today!  The shower at the flat is broken (for the 4th straight day)...so I went to the gym and ran on the treadmill...

Introductory Maths and Stats: Intertemporal Optimization...

This lecture should be cut.  The material covered is not really used enough in the core curriculum to justify spending any time on it...some version of the material could be included as a separate handout over winter holiday.

Introductory Maths and Stats: Discrete-time Intertemporal Optimization...

This should be the culminating lecture of QM0.  Students should be able to understand the difference between static and dynamic optimization.  The intuition for this can be built by focusing time on explaining how one derives the life-time budget constraint from the budget constraints for each time period (basically just add/aggregate or integrate, depending on whether time is discrete or continuous).  Really invest time in going through the details of exactly what the budget constraint is, how it is derived, etc.  This is important as it comes up again and again...
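For instance, in the two-period case the per-period budget constraints chain together into the lifetime constraint by simple substitution (a sketch in standard notation, not necessarily the notes' own):

```latex
% Per-period constraints with saving s at interest rate r:
%   c_1 + s = y_1            (period 1)
%   c_2 = y_2 + (1 + r)s     (period 2)
% Solve the period-2 constraint for s and substitute into period 1:
c_1 + \frac{c_2}{1+r} = y_1 + \frac{y_2}{1+r}
% The present value of lifetime consumption equals
% the present value of lifetime income.
```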

Intertemporal Choice: Two-Period Example: I wouldn't really change much in this section; it is nicely written and hits all the high points...

Intertemporal Choice: T-Period Example: Here the focus is on the permanent income hypothesis, which is a pretty good example that demonstrates the techniques involved.

More Complicated T-period Example: This section should be cut from the lecture and covered in tutorials...this would allow the lecturer to move through the above material at a more measured pace.  I would jump from the permanent income hypothesis material straight to the simple discussion of dynamic programming.

Simple Discussion of Dynamic Programming: The section is good, although the maths needs to be simplified a bit so as to coincide with the permanent income hypothesis section that would precede it.  Perhaps tutors could extend the dynamic programming case in the tutorials...

Introductory Maths and Stats: Kuhn-Tucker Theorem...

Intuition: Since most economic constraints are inequality constraints, not equality constraints, it makes sense for students to learn a bit about Kuhn-Tucker theory...to build intuition I like to draw pictures that demonstrate the different sets of complementary slackness conditions for a single-variable function, in both the maximization case and the minimization case.  There would be six diagrams that clearly emphasize corner solutions versus the interior optimum, and the idea of a binding constraint versus a slack constraint.  Remember, if one of the constraints is slack, the other MUST be binding!

Kuhn-Tucker Theorem: After building intuition with diagrams in the single variable case, I would jump straight to the Kuhn-Tucker theorem and the corresponding algorithm used to solve inequality constrained optimization problems.
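For reference, in the single-variable maximization case with one inequality constraint, the conditions the algorithm works through look like this (a standard textbook sketch, not necessarily the notes' exact formulation):

```latex
% max f(x) subject to g(x) <= c, with Lagrangian
L(x, \lambda) = f(x) + \lambda \,[\, c - g(x) \,]
% First-order condition and complementary slackness:
\frac{\partial L}{\partial x} = f'(x) - \lambda g'(x) = 0, \qquad
\lambda \ge 0, \qquad c - g(x) \ge 0, \qquad \lambda \,[\, c - g(x) \,] = 0
% Either the constraint binds (g(x) = c) or its multiplier is zero.
```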

General Case: The notes on the general case are confusing and I am not sure that they add to the student's understanding of how to apply Kuhn-Tucker.  I would recommend cutting the notes on the general case and spending more time working problems and making sure that the students understand the difference between slack constraints and binding constraints...

Friday, September 17, 2010

Morning Swim...

A little late on the post, but I finally have my gym card and this morning I went for a short swim.  Did about 750 m total of freestyle, breast stroke, and kick board.  Plan on going for a run tomorrow morning, and I will start lifting on Monday...

Introductory Maths and Stats: Static Unconstrained Optimization of N-Variable Function...

The title is a mouthful, but the lecture itself is fairly straightforward (aside from the notation being a bit complex)...

Rules for Single Variable Optimization:
  • If df/dx=0 and d^2f/dx^2<0 at any point x0, then x0 is a local max
  • If df/dx=0 and d^2f/dx^2>0 at any point x0, then x0 is a local min
  • If df/dx=0 and d^2f/dx^2=0 at any point x0, then the test is inconclusive: these conditions are necessary but not sufficient for x0 to be an inflexion point...
N-variable case is similar, but the notation becomes more complex (uses vectors instead of single variables etc.) 
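These rules are mechanical enough to check symbolically; a minimal sketch using SymPy (assumed available):

```python
import sympy as sp

x = sp.symbols('x')
f = x**3 - 3*x  # example with one local max and one local min

df, d2f = sp.diff(f, x), sp.diff(f, x, 2)
for x0 in sp.solve(df, x):  # critical points solve df/dx = 0
    curvature = d2f.subs(x, x0)
    kind = "local max" if curvature < 0 else "local min" if curvature > 0 else "inconclusive"
    print(x0, kind)  # -1 is a local max, 1 is a local min
```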

The Two Variable Case: The discussion in the lecture notes of unconstrained optimization with two variables is good.  I particularly like how emphasis is placed on using Taylor expansions in the argument.  I would only recommend that more pictures be included.  Anytime a Taylor expansion is used, it just screams DRAW A PICTURE!!!

Quadratic Form, Definite Matrices and Hessians: I would like to see this discussion moved up a bit.  Hessians should be introduced in lecture 2 on multi-variable calculus.  The maths notes should link more closely with the stats notes (particularly the linear algebra parts).   Definite matrices should be emphasized in both the maths and stats, and a solid amount of lecture and tutorial time should be spent on the concept.  Definite matrices provide the coat-hanger on which much of the linear algebra that is used in microeconomics and QM hangs...

Concavity and Convexity: Again, draw pictures.  Emphasize that the definitions are almost identical to the single-variable case.  The only difference is that we are dealing with vectors now and not scalars in the argument of the function.

Chain Rule and the Envelope Theorem:  Material on the Envelope Theorem is scattered across three lectures.  I think the best thing to do is to devote an entire lecture to the envelope theorem after all of the necessary maths have been developed.  This would serve as a useful mid-course refresher for the students, and I think would make the theorem more understandable.  It is important, and thus I think it should get its own lecture...

Economic Applications: The Solow Efficiency Wage model should be cut out of the lecture and covered in a tutorial.  This would open up more lecture time for other more important topics...

The entire section that covers the derivations of the OLS equations using maximum likelihood should be cut from the lecture and converted into a handout for the students to study over winter holiday; it is very long and too complicated to ask about on the QM0 exam.  Lecture time and tutorials would be better spent elsewhere...

Introductory Maths and Stats: Multi-Variable Calculus...

This is a continuation of my notes for my intro maths and stats tutorials.  This is my summary of Lecture Two: Multi-Variable Calculus...

Partial Differentiation: Easy to extend differentiation from the single-variable to the multi-variable case.  Say you have f(x,y); then to take the partial derivative with respect to x, simply treat y as a constant and take the derivative of f with respect to x as in the single-variable case!  That's it...also, higher-order derivatives are calculated by successive application of differentiation.  Demonstrate that cross-partial derivatives are equal (if f is well-behaved)!

I think it would be worthwhile to also mention the Hessian matrix (matrix of second derivatives).  Talk about the special cases when the matrix is positive (semi-)definite or negative (semi-)definite.  Can also use it as an excuse to talk about eigenvalues, eigenvectors, determinants, etc. from linear algebra.  Example: f(x,y)=x^2 + y^2...
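A quick sketch of that suggested example using SymPy's built-in hessian (assumed available); both eigenvalues are positive, so the Hessian is positive definite and f is convex:

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x**2 + y**2

H = sp.hessian(f, [x, y])  # matrix of second partial derivatives
print(H)                   # Matrix([[2, 0], [0, 2]])
print(H.eigenvals())       # {2: 2}: eigenvalue 2 with multiplicity 2, both positive
```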

Total Differentiation and Chain Rules: I totally agree with Yu Fu...one should not try to memorize all of the chain rules related to partial differentiation; there are just too many combinations and cases.  Better to focus on understanding the concept of total differentiation and then the difference between independent and intermediate variables.  For example: suppose we have the usual case in economics where f(x(t), y(t)) and t=time.  In this case the independent variable is t, and the dependent variable is f (x and y are only intermediate variables that "filter" the effect of t on f).
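A sketch of the f(x(t), y(t)) case in SymPy, which applies the total-derivative logic df/dt = (df/dx)x'(t) + (df/dy)y'(t) automatically:

```python
import sympy as sp

t = sp.symbols('t')
x, y = sp.Function('x')(t), sp.Function('y')(t)  # intermediate variables
f = x**2 * y  # any function built from x(t) and y(t)

# Total derivative with respect to the independent variable t:
print(sp.diff(f, t))  # 2*x(t)*y(t)*Derivative(x(t), t) + x(t)**2*Derivative(y(t), t)
```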

Implicit Functions and Differentiation: Just another application of partial differentiation and chain rules...

The Envelope Theorem: Understanding the envelope theorem is key in microeconomic price theory.  Mathematically, the envelope theorem is simply an application of chain rules, total differentiation, and partial differentiation!  No sweat...

Systems of Implicit Functions and Jacobian Determinants: BLAH!  OK, first I think the lecture notes need to be re-ordered so that the lecturer reviews determinants, Cramer's rule, etc. BEFORE tackling this section.  Note that Cramer's rule is a REALLY inefficient way to solve a system of linear equations!  For the QM0 exam the students may have to compute a 3x3 determinant, so they need to know a formula for it...I would go with the cofactor expansion...
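A minimal sketch of the cofactor expansion along the first row, checked against NumPy's determinant:

```python
import numpy as np

def det3(A):
    """3x3 determinant via cofactor expansion along the first row."""
    a, b, c = A[0]
    return (a * (A[1][1]*A[2][2] - A[1][2]*A[2][1])
            - b * (A[1][0]*A[2][2] - A[1][2]*A[2][0])
            + c * (A[1][0]*A[2][1] - A[1][1]*A[2][0]))

A = [[2, 1, 0], [1, 3, 1], [0, 1, 2]]
print(det3(A), np.linalg.det(np.array(A)))  # both give 8 (up to floating point)
```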

Leibniz's Rule: This is a cut I think...it should be covered in detail by the lecturer on the Ramsey model in Macroeconomics I...

Integration with Several Variables: Move towards the beginning...this is very straightforward and should probably be talked about right after partial differentiation...

Homogeneous and Homothetic Functions: This is a cut.  Not because it isn't important...it is very important (implications of CRTS and such), but I think that the lecturer should cover these topics in class during term.  There is already too much material in the QM0 lectures and this would allow for more detailed coverage of other topics...

Linear Dynamic Systems: If we want to keep this material in the course, then we need to do a much better job of teaching eigenvalues, eigenvectors, and matrix diagonalization techniques.  I would recommend moving the Appendix on eigenvalues and eigenvectors into the lecture notes and teaching students how to reach the general solution of a linear dynamic system properly...

Introductory Maths and Stats: Single Variable Calculus...

As a first-year PhD student I will be teaching introductory maths and stats to the MSc students this year. I am going through the lecture notes and making little notes for myself about things that I think should be emphasized (or de-emphasized) in the tutorials, as well as some little tricks that I have picked up along the way that should be helpful for the incoming MSc students.  The following are my notes to myself on single variable calculus...

Rules of Differentiation: The derivative is a linear operator.  Mathematically this means that d/dx(f(x) + g(x)) = d/dx(f(x)) + d/dx(g(x)) and d/dx(t*f(x)) = t*d/dx(f(x)).  In words this means that the derivative of any linear combination of well-behaved functions is equal to the same linear combination of the derivatives of the individual functions.  Note that if you remember that the derivative is a linear operator then you automatically know how to take derivatives of sums and differences of functions.

Other Rules I remember:
  1. Constant: The derivative of a constant is always zero.
  2. Powers: If f(x)=x^k, then df(x)/dx=kx^(k-1)
  3. The Chain Rule: NEVER forget the chain rule! d/dx(f(g(x)))=df/dg*dg/dx.  Most simple mistakes in taking a derivative come from forgetting about the chain rule.
  4. Derivative of the exponential and the natural logarithm functions: Easy...d/dx(e^x)=e^x (this result is one of the reasons that exponential functions turn up so often in the general solutions to differential equations), and d/dx(ln(x))=1/x.  Maybe review some basic properties of logarithms and exponentials...
  5. Product Rule: d/dx(f(x)*g(x))=d/dx(f(x))*g(x) + f(x)*d/dx(g(x))
Rules I never remember:
  1. Quotient Rule: Why? Because the quotient rule is simply an application of the product rule and the chain rule.
  2. Rule for d/dx(a^x): Why? Because it is better to just take natural logarithms and then differentiate.  For example, if f(x)=a^x then ln(f(x))=ln(a^x), and because I know my properties of logarithms, the rule for taking d/dx(ln(x)), and the chain rule, this becomes d/dx(ln(f(x)))=[1/f(x)]*d/dx(f(x))=d/dx(ln(a^x))=ln(a), and finally d/dx(f(x))=ln(a)*a^x.  (A quick symbolic check of both rules follows below.)
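Both of these claims are easy to verify symbolically; a minimal sketch using SymPy (assumed available):

```python
import sympy as sp

x = sp.symbols('x')
a = sp.symbols('a', positive=True)
f, g = sp.Function('f')(x), sp.Function('g')(x)

# The quotient rule is just the product rule plus the chain rule on f * g**(-1):
print(sp.simplify(sp.diff(f / g, x) - sp.diff(f * g**(-1), x)))  # 0

# d/dx(a**x) = ln(a) * a**x, as derived above via logarithms:
print(sp.diff(a**x, x))  # a**x*log(a)
```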
l'Hopital's Rule: Unbelievably useful for taking limits.  Basically, if you are ever in the case where the limit of a ratio of two functions turns out to be 0/0 or inf/inf, then take derivatives of the top and bottom and take limits again...

At this point the lecture notes have a discussion of 1st order differential equations that is out of place.  These equations have not been covered yet in the notes, and even though this discussion is brief it detracts from more important material.  Lecture notes also have a long digression on stock returns, capital gains, and dividends.  This is an important economic application of the material being taught, but should be covered by tutors in the QM0 tutorials where it can be gone over at a slower pace...

Optimization: Recall the geometric interpretation of the derivative: the value of the derivative at a given point tells you whether the function is increasing or decreasing at that point:
  • If df/dx>0, then the function is increasing
  • If df/dx<0, then the function is decreasing
  • If df/dx=0, then f has a critical point
At this point I like to draw pictures to help me remember that d^2f(x)/dx^2>0 (<0) implies that the function is convex (concave), which leads to the corresponding definitions of maximums and minimums of functions.  I like to point out the main ideas both graphically and in terms of the FOC and SOC on derivatives.

Taylor Expansions: An important topic; the lecturer should spend more time laying out the details.  Re-emphasis should be placed on the Taylor expansion in the tutorials.

Concavity/Convexity and Quasi-concavity/Quasi-convexity of Functions: I never remember the derivative or algebraic definitions for these terms.  Best to draw pictures!  Three functions to remember: f=x^2 (convex), f=ln(x) (concave), and f=x^3 (quasi-convex and quasi-concave).

Rules of Integration: Emphasize the area-under-the-curve interpretation of an integral.  The rules for integration are easy IF you know your rules for differentiation.  The two processes work in reverse.  When taking an integral of f(x), think: what function would I need to take the derivative of to get the function f(x)?  Don't forget about the arbitrary constant!

Wednesday, September 15, 2010

PhD Induction...

Had a nice meet and greet with the rest of the first year PhD students here at Edinburgh.  Also had my first meeting with my supervisor Ed Hopkins today...I found out that he was supervised by Alan Kirman!  Fantastic...that makes Alan Kirman my intellectual grandfather (so to speak)...

Didn't get a chance to talk much about my research agenda, but we will meet again in a few days time to discuss how I plan to move ahead over the next year.  I am now thinking that I will go heavy on the empirical work over the next year (with maybe a little of my own theory sprinkled throughout)...

A Nice Summary of Micro-foundations...

I link to a discussion paper on Micro-foundations from the Tinbergen Institute.  It nicely summarizes the debate on what constitutes "proper" microfoundations for macroeconomics.

Tuesday, September 14, 2010

Today's Morning Run...

Monday, September 13, 2010

Network Position and Interest Rates...

The title says it all: Systemically important banks get better terms for their overnight borrowing

Basically, they get a hold of a really nice data set from the Central Bank of Norway that covers a three-year period including the recent financial crisis (i.e., 2006-2009), and using a panel data econometric model they find that banks that occupy key positions within the interbank network in Norway were able to use this position to get better deals on interest rates for their overnight borrowing/lending.  They also found that interest rates depended not only on the total amount of market liquidity, but also on the distribution of that liquidity amongst the market participants.  This suggests that banks with surplus liquidity are able to exploit this market power in order to get beneficial rates.  Finally, they find that the aforementioned effects on interest rates were stronger during the recent financial crisis than in the period leading up to it.

Excellent Research Resource on Financial Networks...

Anyone interested in financial network analysis should check out Kimmo Soramaki's blog...

Quote of the Day...

"Once the paper is written I put it aside for a couple of weeks. Papers need to age like fine cheese - it's true that mold might develop, but the flavor is often enhanced."
-Hal Varian

Breaking: Kronecker Product of Bipartite Graphs is Disconnected...

The Kronecker product of bipartite graphs is disconnected.  This means that the Kronecker product of N-star graphs, which typically arise in the economic networks literature as the equilibrium network in situations where there are strategic substitutes, is disconnected.  Is this relevant? I think so, but haven't quite figured out why yet...

I want to make some type of statement like: strategic substitutes make the network structure not scalable (in some yet-to-be-defined sense), while strategic complements allow the network structure to be scalable.
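The claim itself is easy to check numerically (a sketch, assuming NumPy and SciPy; star graphs are bipartite, so by the classic result on Kronecker/tensor products of graphs their product splits into components):

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def star(n):
    """Adjacency matrix of an n-node star: node 0 linked to nodes 1..n-1."""
    A = np.zeros((n, n), dtype=int)
    A[0, 1:] = 1
    A[1:, 0] = 1
    return A

K = np.kron(star(4), star(5))  # Kronecker product of two star graphs
n_components, _ = connected_components(csr_matrix(K), directed=False)
print(n_components)  # 2: the product of two connected bipartite graphs is disconnected
```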

Relationship Between Kronecker Graphs and Economic Theory...

As I am reading through material on Kronecker graphs I am thinking more deeply about the empirical properties of large graphs that Kronecker graphs are able to capture, particularly the densification power law and the shrinking/stabilizing diameter.  It occurs to me that the densification property is actually implied by a number of micro-founded models of endogenous network formation.  In the economics literature, the key to achieving this densification process is for there to exist strategic complements in the link formation game (i.e., I want to form links if other people are also forming links).  In such games, the complete network (which is as dense as a network can get) is typically an equilibrium network.  In sum, in the presence of strategic complements one should expect the network to become more dense over time.

In these economic network games with strategic complements it is generally assumed that agents have complete information on strategies, the number of other players, etc.  The ability to achieve complete network connectivity, and thus maximum density, depends on complete information.  However, in the real world, agents make link formation decisions based on local information, and as such complete network connectivity, and thus maximum density, will not be very likely in practice (even if it were desirable in theory).  Here is an interesting question: is there a link between local information/asymmetric information and the power law densification process that shows up in the empirical data?  Do networks formed by self-interested agents acting on the basis of local information about link options densify according to a power law? A related question: how, if at all, are Kronecker graphs related to the behavior of self-interested agents?  To be continued...

Local Info v. Asymmetric Info...

For those of you with too much time on your hands...

On this very windy and very Scottish morning here in Edinburgh, I am trying to work out what the difference is between "local information" and "asymmetric information".  Is there a difference? Or is local information simply a case of asymmetric information? Do agents acting solely on local information create a situation of asymmetric information when they interact?  I think the answer to this last question is clearly yes!

A classic case of asymmetric information is Akerlof's lemons in the used car market.  It is typically assumed that the dealer has more/better/more accurate information about the used car than does the buyer, and thus we get an asymmetry of information.  Is it fair to say that the buyer's information set is a subset of the dealer's information set?  I will have to go back and review my Akerlof adverse selection notes on this point.

Local information seems to me to be a different beast.  With local information, two agents have different information sets.  There might be some overlap of information sets or there might not.  I think for local information to create a case of asymmetric information, the two information sets must overlap in some non-trivial way.

The image I have in my head is of two hill-walkers wandering around the highlands at night wearing the same brand of head torches (so that the amount of terrain that each can see at any given time is the same...no hill-walker has a technological comparative advantage in info gathering).  It is pitch black so the only information they can gather about the terrain is from what is illuminated by the head torch.  As they each wander around they are collecting information about the terrain locally because of their limited vision.  If they happen to wander over some of the same terrain, then they will have this information in common (that is their information sets will have some overlap...think 2D Venn diagram).  However they could even have information sets that are entirely disjoint (maybe one is not really a hill-walker and prefers to faff about in the valley, while the other is running around on the crags).  If they were to encounter one another in their wanderings, would this interaction be a case of asymmetric information?  I think it depends.  If one or both hill-walkers were interested in heading in the direction that the other had already been, then this is clearly a case of asymmetric information.  Perhaps they would even decide to trade information (assuming they had a technology that would allow them to do this).  However if they were interested in heading off in different directions, then this might not be a case of asymmetric information because neither hill-walker has information about where the other is going (there would also be no reason to trade).

Am I right to think about local and asymmetric information in this way?  Am I over-complicating things by trying to make some distinction?  I think this is relevant to my research because financial institutions clearly use their local information to try and create a situation of asymmetric information that they can exploit for profit and trade (which creates financial inter-linkages or more dense financial networks).

Sunday, September 12, 2010

I Under-appreciated the Kronecker Product...

Today, I came to the realization that I massively under-appreciated the Kronecker product of two matrices.  Until today, I thought that it wasn't all that useful...boy was I wrong!  Kronecker products are wonderfully useful, particularly if you are interested in networks.  Here are links to some papers on using Kronecker products as a method for generating network graphs that have power law degree distributions and small diameter.  I will write additional posts on this topic later, as I expect that it will be extremely useful in my own research...

Online Lecture Series on Differential Equations...

MIT OCW saves the day again!  A whole course on solving ODE's from MIT's Arthur Mattuck...

Excellent PhD Thesis Lecture on Dynamics of Large Networks...

I must say that I am a little bit jealous...if my PhD thesis and defense are half as good as the one presented here then I will be happy...

A major finding of his research is that networks tend to become more dense over time and that this densification process follows a power law.  My network version of Minsky's Financial Instability Hypothesis would postulate some type of limit cycle behavior (either stable or chaotic) of network density.  Now I am not convinced that his empirical work contradicts my idea, for the following reason.  His data come from LinkedIn, MSN, and other large online data-sets.  It does not seem like there would be strong incentives to break links in these types of networks, given that the cost of continuing to maintain a link is essentially zero.  However, in my world where the links represent financial inter-dependencies of agents, there can be very strong incentives to break links, as the cost of maintaining a link could conceivably be quite high...

First Cooked Breakfast of the Year...

Today being Sunday, I took myself out to a nice full breakfast of sausage, bacon, haggis, eggs, potato pancakes, mushrooms, and crusty bread.  Sounds like a lot of food...and it is, but it will probably be the only meal I eat today (at least this is how I justify it!).

I ate my breakfast at a place called Always Sunday on High Street.  It was a bit expensive (10 GBP) for a full breakfast and OJ, and even though the food was good I am convinced that there are better breakfasts in Edinburgh for less money...

I will file another report next week.  If anyone has any suggestions on where to get a good cooked breakfast in Edinburgh please pass them along...

Modeling Financial Instability...

Most intriguing paper by Steve Keen on Finance and Economic Breakdown: Modeling Minsky's Financial Instability Hypothesis.  Using Goodwin's limit cycle model as his foundation, Keen extends the model to account for the four keys to Minsky's idea:
  1. Tendency of capitalists to incur debt on the basis of euphoric expectations
  2. The importance of long-term debt
  3. The destabilizing impact of income inequality
  4. The stabilizing role of government
Inclusion of Minsky's ideas converts Goodwin's stable but cyclical system into a chaotic one with the possibility of a divergent breakdown (i.e., depression)...

My interest in this paper is that it is the first attempt that I have come across so far that presents a mathematical model of Minsky's Financial Instability Hypothesis.  I would like to try to somehow incorporate these ideas into a network model...here is a link to a paper by Gallegati et al that uses an agent-based approach to model some of these ideas.  

Saturday, September 11, 2010

Another Brilliant Linear Algebra Lecture...

Prof. Gilbert Strang at MIT delivers a beautiful lecture on how to solve systems of first order differential equations using linear algebra...an absolute must for anyone studying graduate economics...

The America I Know...

I typically avoid writing posts on politics (I hope!) but...

As I reflect today about how the events of September 11th, 2001 touched my life, and in a much more tragic way, the lives of my friends, I find myself distracted by current events: the embarrassing debate over the so-called "Ground Zero Mosque" and the dangerously insensitive Qur'an burning event that has garnered massive news coverage...

These two events paint a picture of an America that is intolerant at best, and bigoted at worst. This is not the America that I know, and I suspect that this is not the America that those of you who have had the good fortune to travel in and around the U.S. know.

With this in mind, I hope that you will read this story on CNN about two Muslim friends who traveled around the U.S. visiting 30 mosques in 30 states in 30 days for Ramadan...the America that they experienced on their journey is the America that I know and love...

Today's Morning Run...

Friday, September 10, 2010

DeGroot Model of Social Influence and the Microfoundations of Eigenvector Centrality...

Most of what follows is culled from M. Jackson's Social and Economic Networks...

Start with the DeGroot model of imitation and social influence.  In this model individuals start with initial opinions on a subject.  These opinions are represented by an n-dimensional vector of probabilities:

p(0)=(p1(0), ..., pn(0))

Each of the pi(0) lies on [0,1] and might be thought of as the probability that a given statement about the world (such as "the economy is an open thermodynamic system") is true.  Alternatively one could interpret the vector of initial opinions as beliefs concerning the quality of a product, or the likelihood that an individual might engage in a given activity (like lending perhaps), etc...

In this model the interactions are captured through a possibly weighted and directed nxn non-negative matrix T.  The matrix T is also a row stochastic matrix, which means that the entries across each row will always sum to one.  The interpretation of Tij is that it represents the amount of weight or trust that agent i places on the current belief of agent j in forming i's belief next period.  Beliefs are updated over time so that 

p(t)=Tp(t-1)=(T^t)p(0)

Illustrative Example: Suppose that we have three banks.  Bank 1 is run by James, Bank 2 is run by Keshav, and Bank 3 is run by David. These three banks have an updating matrix T and a network diagram that looks as follows:
With this matrix T, Bank 1 (run by James) puts equal weight (1/3 each) on its own beliefs and on those of Banks 2 and 3 (run by Keshav and David) when forming its beliefs about the world.  Bank 2 weights its own beliefs slightly more, but completely discounts Bank 3 (which, given that the bank is run by Keshav, is fairly ridiculous).  Bank 3, meanwhile, puts the most weight/trust in its own beliefs (3/4) and then puts a weight of 1/4 on Bank 2.

Now suppose that the initial vector of beliefs is p(0)=(1,0,0).  So Bank 1, run by James, initially believes that some future event (like a financial crisis) will occur with probability 1.  Let's see how these beliefs evolve over time given this influence network...


Do beliefs converge in this case? The answer is yes.  Standard mathematical results from Markov chain theory show that as long as T is strongly connected (which means that there is a directed path from any node to any other node) and aperiodic (the greatest common divisor of the lengths of the network's directed cycles is one), then societal beliefs must converge.  In this case they converge to...

Thus even though we started with a situation in which only James and his bank believed that a financial crisis was going to happen with probability 1, we end up in a situation where all three banks believe that a financial crisis will occur with positive probability.  I think that this type of idea is highly relevant to my research into the relationship between financial network structure and systemic risk (assuming that social influence has something to do with bank lending practices for example). 

Now suppose we want to keep track of how each agent in the social network influences the limiting beliefs.  Let the n-vector of limiting beliefs be defined as p(*)=(p*,p*,...,p*):

To keep track of the limiting influence that each agent has, we want to find an n-vector s whose entries are all on [0,1] and whose entries sum to one, such that p* equals the inner product of s and the vector of initial beliefs (i.e., p*=s.p(0)). If we can find such a vector s, then our limiting/consensus beliefs would be a weighted average of the initial beliefs, where the relative weights would be the influence of the various agents on the consensus beliefs.  Now since starting with p(0) or with p(1)=Tp(0) ends up in the same limit, it must be the case that s.p(1)=s.p(0) and therefore s.(Tp(0))=s.p(0).

Since this equation is required to hold for any vector of initial beliefs p(0) it follows that:

sT=s

But this just says that s is a left eigenvector of the matrix T with eigenvalue 1!  Appealing to the mathematical gods, we find that as long as T is strongly connected, aperiodic, and row stochastic, there exists a unique such unit eigenvector, and this eigenvector has all non-negative values.

Finally we have arrived at our connection between a measures of social influence and eigenvector centrality.  Eigenvector centrality measures essentially rank agents based on the size of the respective component of the eigenvector corresponding to the largest eigenvalue of whatever connectivity matrix you are studying...which in this case is equivalent to finding the vector of social influence s and then ranking agents high-to-low based on their contribution to consensus beliefs.
So IF we were able to map a financial network, say a network of banks, and we were able to obtain a matrix of "social influence," say by getting lots of data on balance sheet linkages between the banks, then we would have a theoretical basis for using eigenvector centrality measures to identify which banks are relatively more important than others.  This type of idea would, I think, be useful for (amongst other things...) identifying banks for idiosyncratic microprudential regulation.
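The whole chain of reasoning is easy to check numerically.  A minimal sketch (NumPy assumed; the matrix T below is an illustrative reconstruction of the three-bank example described above, so treat the exact numbers as assumptions rather than the original figure's values):

```python
import numpy as np

# Illustrative updating matrix: Bank 1 weights everyone equally, Bank 2
# splits weight between itself and Bank 1 and ignores Bank 3, Bank 3
# mostly trusts itself. Each row sums to one (row stochastic).
T = np.array([[1/3, 1/3, 1/3],
              [1/2, 1/2, 0.0],
              [0.0, 1/4, 3/4]])

p0 = np.array([1.0, 0.0, 0.0])  # only Bank 1 expects the crisis at t=0

# Iterate beliefs: p(t) = T^t p(0); all entries converge to the consensus.
print(np.linalg.matrix_power(T, 200) @ p0)

# Influence vector s solves sT = s: a left eigenvector of T with eigenvalue 1.
vals, vecs = np.linalg.eig(T.T)
s = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
s /= s.sum()
print(s)       # each agent's weight in the consensus
print(s @ p0)  # equals the consensus belief reached above
```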

Crises, Evolution, and Growth...

Sean passes along this interesting story from British economics blogger Chris Dillow.

Thursday, September 9, 2010

Endogenous Business Cycles and Randomness...

As I was re-reading Masanao Aoki's Reconstructing Macroeconomics: A Perspective from Statistical Physics and Combinatorial Stochastic Processes, I came upon (again) an insight regarding the stochastic nature of business cycles that I think is worth sharing.  The insight was originally Eugene Slutsky's.  The basic idea is that the simple summation of random variables can produce cycles.  A concrete example will help fix ideas.  Suppose we have the following random walk model:

S_i - S_(i-1) = e_i = +/-1 for all i=1,2,... with S_0=0

with the respective probability of either outcome (i.e., -1 or 1) equal to one half.  This is simply a model of "winnings" from tossing a fair coin (where heads wins $1 and tails loses $1).  The unconditional mean of this process is zero, which might lead one to expect that the process spends "most" of the time "near" its zero unconditional mean.  But this intuition would be incorrect.  Take a look at the following diagram of a sample path of 10,000 tosses of a fair coin:
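A minimal sketch that regenerates such a sample path (assuming NumPy and matplotlib):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)
steps = rng.choice([-1, 1], size=10_000)  # fair coin: win or lose $1 each toss
path = np.cumsum(steps)                   # cumulative winnings S_i

plt.plot(path)
plt.axhline(0, color='k', lw=0.5)  # the zero unconditional mean
plt.xlabel('toss')
plt.ylabel('winnings')
plt.show()  # long excursions away from zero, despite the zero mean
```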
 
For me, Slutsky's insight should be taken as a reminder that quite sophisticated behavior, including cycles, can be generated out of simple randomness.  I actually think that this insight is more general than even Slutsky might have been willing to admit.  Simple randomness is a major force in this world, especially in economic behavior.  We as economists tend to be too quick to assume that the sophisticated/complicated macroeconomic behavior that we see in the world must be the result of fairly (or extremely) sophisticated human behavior at the micro level. 

Perhaps the macroeconomic behavior we observe is actually being driven by fairly simplistic humans operating by "rules of thumb" sprinkled with a bit of randomness...

Today's Morning Run...

Wednesday, September 8, 2010

Interesting MTG with Informatics Profs...

I just had a very interesting discussion about my research agenda and the possibility of pursuing a significant portion of my PhD work through the School of Informatics.  The profs I spoke with were very excited about my research agenda, as it overlapped significantly with some of their own work.  They also pointed me towards courses on networks and agent-based modelling that start in a few weeks that I will be able to take...

More Rambling Thoughts on Strategic Network Formation...

A continuation/elaboration on yesterday's post...basically I am trying to develop a decision calculus that will serve as a micro-foundation for my more traditional network formation model...

Question: Why do banks form credit networks?
Banks basically want to do two things:
  1. Make lots of money, and
  2. Generate liquidity
With this in mind, I am thinking of inter-bank network formation as a strategy banks use to achieve these ends.  Banks make money by lending money (i.e., by forming links with other banks, firms, etc.): banks essentially trade money now for more money later.  This is my working justification for why the number of direct links should be included in the bank's pay-off function.  Now in order to achieve its other objective, liquidity generation, a bank may want to establish lines of credit with other banks.  I am tempted to say that, all things equal, small banks should exhibit some type of preference to link with banks that have lots of links already.  This could be for one of two reasons: either because the number of links is viewed as a proxy for something useful, or because lots of links implies greater access to the rest of the network, which would affect the interest rate charged on the line of credit.  This is my working justification for including indirect links (at least those of its immediate neighbors) in the bank's pay-off function.

In an effort to try and make things more concrete, let's follow Gai and Kapadia (2010) and suppose that we have N banks B1, ..., BN, and that each Bi has two types of assets and two sources of liabilities...
  • Assets:
    • Inter-bank assets Ai,IB: these are liquid assets that flow into Bi in the form of loan payments made by other banks in the network that owe Bi money.  The number of inter-bank assets Ai,IB represents the in-degree score for Bi. 
    • Durable assets Ai,M: these are illiquid assets that Bi owns.  Gai and Kapadia (2010) think of them as mortgages.  Durable assets have a resale price of q.
  • Liabilities:
    • Inter-bank liabilities Li,IB: these are the liabilities that flow out of Bi in the form of loan payments made to the banks in the network to whom Bi owes money.  The number of these inter-bank liabilities Li,IB represents the out-degree of Bi.
    • Customer deposits Di: These are exactly what they sound like.  Gai and Kapadia (2010) treat these as exogenously given.
    • Regulatory Capital Requirement K: this is the minimum amount of cash-on-hand that banks must maintain on deposit with the regulatory authority.  It is the same for all banks.
Now the question is where to go from here, given that I am trying to develop some type of micro-founded endogenous network formation process.  There are basically two markets going on here (possibly three if you want to endogenize the customer deposits): a credit market for the liquid assets and a spot market for the illiquid asset (and possibly a market for customer deposits).  Banks want to loan as much money as they can (i.e., form links with other banks) in order to generate value (value for whom though? customers...) subject to the constraints that they remain solvent and that they maintain the regulatory capital requirement.
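To keep the moving parts straight, here is a toy balance-sheet representation in the spirit of the setup above (a sketch; the solvency condition is my paraphrase of a Gai-Kapadia style condition, not their exact equation):

```python
from dataclasses import dataclass

@dataclass
class Bank:
    """Toy balance sheet in the spirit of the setup above (illustrative)."""
    interbank_assets: float       # A_i,IB: loans other banks owe to B_i
    durable_assets: float         # A_i,M: illiquid assets with resale price q
    interbank_liabilities: float  # L_i,IB: loans B_i owes to other banks
    deposits: float               # D_i: customer deposits (exogenous here)

    def net_worth(self, q: float) -> float:
        """Assets (durables valued at resale price q) minus liabilities."""
        return (self.interbank_assets + q * self.durable_assets
                - self.interbank_liabilities - self.deposits)

    def solvent(self, q: float, K: float = 0.0) -> bool:
        """Solvency: net worth must cover the capital requirement K."""
        return self.net_worth(q) >= K

b = Bank(interbank_assets=20, durable_assets=80,
         interbank_liabilities=15, deposits=75)
print(b.solvent(q=1.0, K=4))   # True at full resale prices
print(b.solvent(q=0.85, K=4))  # False after a fire-sale drop in q
```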

I am not sure if this is an improvement over yesterday's effort, or whether I am simply wandering further down the rabbit hole...  



Tuesday, September 7, 2010

What have I done today...

Well, besides the generic new-student paperwork, I did have some time to think a bit about some of the issues that I am likely to encounter.  This is a summary of what I have come up with so far:

Question: Why do banks form credit networks?
Banks basically want to do two things:
  1. Make lots of money, and
  2. Generate liquidity
Addressing 1: Banks make money by lending money (i.e., by forming links with other banks, firms, etc.).  Banks essentially trade money now for more money later.  This is my working justification for why the number of direct links should be included in the bank's pay-off function.  Addressing 2: Forming credit networks may be a strategy to generate liquidity.  This is my justification for including indirect links in the bank's pay-off function.  I am tempted to say that, all things equal, banks should exhibit some type of preference to link with banks that have lots of links already (either because the number of links is viewed as a proxy for something, or because lots of links implies greater access to the rest of the credit network), but this may be getting ahead of myself...

Very abstractly...banks can be divided into three broad classes:
  1. Pure lenders
  2. Pure borrowers
  3. Banks that do both (i.e., lend and borrow)
In the graphic above, blue arrows represent the flow of money now, and red dashed arrows represent the flow of money later.  Not sure this diagram is all that useful, except that it helped formalize my intuition concerning how link formation might generate liquidity.  The basic idea: suppose we have three banks:
Now suppose that Bank 1 is willing to lend to Bank 2 but not to Bank 3 (for whatever reason...perhaps Bank 3 is too risky), but Bank 2 is willing to lend to Bank 3 (because Bank 2 has different risk tolerances...I suppose I have introduced my first heterogeneous parameter).  In this case the linkages generate liquidity because without them, Bank 3 would not have been able to borrow.

Question: How should returns to bank i from a link with bank j be defined?
At this point I am playing with the idea that the value of the link to the lender is something like the discounted present value of the money later minus costs (yet to be defined), while the value to the borrower is the loan amount minus costs (i.e., interest).  How to set the interest rate?  A starting point would be to take the interest rate as exogenous, but at the moment I am toying with the idea of having banks bargain locally over the interest rate.  In the bargaining process the lender would have several outside options (i.e., either lending to another bank or parking his money in "risk-free" securities), while the only outside options available to the borrower would be other banks.

All of this is highly speculative at this point...but what should you expect on day one of a PhD career!

2 Mile Loop...

In case anyone was curious...this was the loop I ran this morning:

First Post on Mountaineering...

As promised in the subtitle, this blog is going to be about more than just economics and my PhD research.  Mountaineering is a major hobby of mine, and one that I plan on pursuing with gusto while here in Scotland.  But in order to go climbing and hiking one has to be fit and strong...and alas I am best described as soft (essentially the opposite of fit and strong).  To remedy this situation I have started a fitness program consisting of running, lifting weights, and hiking as much as possible...today was day 1 of my program: I ran 2 miles (in the rain).  That may not sound like much (and it isn't)...but one has to start somewhere...

Now off to watch more linear algebra lectures (16, 17, and 18 I think), shower, and then start my PhD lit review...

Monday, September 6, 2010

Arrived in Edinburgh...

I have hit the ground in Edinburgh, unpacked my things, and set-up internet connectivity...expect posting to resume apace...