Showing posts with label Liquidity.

Wednesday, November 30, 2011

Power-laws in Gold? A journey into the world of dependent data...

Attention Conservation Notice: 
  1. I have a son!
  2. Though plausible, the power-law is probably not the best model for the positive tail of gold returns; the power-law model is firmly rejected for the negative tail.
  3. The rest of the post is a long, tired ramble about my travails in finding confidence intervals for my power-law parameter estimates with dependent data.
Apologies for the long lag between posts, but my wife gave birth to our son Callan Harry on Sept. 19th and the joys of being a father took precedence over my blog posts!  Now that things have settled down a bit, I thought I would take the time to write up some thoughts on the price of gold... 
The top plot in the chart above displays the daily nominal price of gold from 2 January 1973 through 28 November 2011, using historical data from usagold.com.  Note the sharp increase in the nominal price of gold following the creation of gold-backed ETFs in March 2003.  The bottom plot displays normalized logarithmic returns for gold over the same time period.  Note the ever-present volatility clustering.
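For reference, the normalized returns boil down to a couple of lines of R.  A minimal sketch, assuming the usagold.com prices have already been read into a numeric vector gold (the variable names are mine):

```r
# Minimal sketch: normalized log returns from a daily gold price series.
# Assumes `gold` is a numeric vector of daily nominal prices.
log.returns <- diff(log(gold))

# Normalize to zero mean and unit variance so the two tails are comparable
norm.returns <- (log.returns - mean(log.returns)) / sd(log.returns)

pos.tail <- norm.returns[norm.returns > 0]       # positive tail
neg.tail <- abs(norm.returns[norm.returns < 0])  # negative tail (absolute values)
```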

Normally when looking at an asset price over such a long time period one would prefer to look at the real price of the asset rather than the nominal price.  As I am interested in exploring the effects of credit and liquidity constraints on asset prices, I want to focus on the dynamics of the nominal price of gold.  Credit constraints typically enter the picture in the form of debt contracts, which I assume are written in nominal rather than real terms.  There are two ways to think about liquidity/re-saleability: quantity liquidity/re-saleability and value liquidity/re-saleability.  A measure of quantity liquidity might be the average daily trading volume of an asset over some time period.  A measure of value liquidity might be the average dollar value of the asset traded over some time period.  I feel like the two concepts are not identical, and I prefer to think of liquidity in terms of value re-saleability rather than quantity re-saleability.

Show me the tails!
In thinking about how the dynamics of asset prices might be impacted by credit and liquidity constraints, it seems logical (to me at least) that one should focus on the behavior of the extreme tails of the return distribution.  Below is a survival plot of both the positive and negative tails of normalized gold returns along with the best-fit power-law model found using the Clauset et al. (2009) method.
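The Clauset et al. (2009) recipe fits the scaling exponent by maximum likelihood above a threshold x_min, with x_min chosen to minimize the Kolmogorov-Smirnov distance between the data and the fitted model.  A rough base-R sketch of the idea (not the authors' reference implementation, and without any of their finite-size corrections):

```r
# Rough sketch of the Clauset et al. (2009) recipe: for each candidate
# threshold xmin, fit alpha by maximum likelihood and keep the xmin that
# minimizes the KS distance between the tail data and the fitted model.
fit.powerlaw <- function(x) {
  xmins <- sort(unique(x))
  xmins <- xmins[1:(length(xmins) - 10)]  # keep a handful of points in the tail
  ks <- alphas <- numeric(length(xmins))
  for (i in seq_along(xmins)) {
    xmin <- xmins[i]
    tail.x <- sort(x[x >= xmin])
    n <- length(tail.x)
    alpha <- 1 + n / sum(log(tail.x / xmin))      # continuous power-law MLE
    cdf.emp <- (1:n) / n                          # empirical CDF of the tail
    cdf.fit <- 1 - (tail.x / xmin)^(1 - alpha)    # fitted power-law CDF
    ks[i] <- max(abs(cdf.emp - cdf.fit))          # KS distance
    alphas[i] <- alpha
  }
  best <- which.min(ks)
  list(xmin = xmins[best], alpha = alphas[best], D = ks[best])
}

pos.fit <- fit.powerlaw(pos.tail)
neg.fit <- fit.powerlaw(neg.tail)
```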
This approach to parameter estimation assumes that the underlying data (at least above the threshold parameter) are independent.  How realistic is this assumption for gold returns (and for asset returns in general)?  The random-walk model for asset prices, which is consistent with the strong form of the EMH, would suggest that asset returns should be independent (otherwise historical asset price data could be profitably used to predict future asset prices).  Given that gold returns, and asset returns generally, display clustered volatility, one would expect the absolute returns to display significant long-range correlations.  Below are plots of various autocorrelation functions for gold...
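The ACF plots themselves are nothing fancy; a sketch using base R's acf(), with the return vectors from the earlier sketch:

```r
# Autocorrelation functions for the raw normalized returns, their absolute
# values (a crude proxy for volatility clustering), and each tail separately.
par(mfrow = c(2, 2))
acf(norm.returns, main = "Normalized returns")
acf(abs(norm.returns), main = "Absolute returns")
acf(pos.tail, main = "Positive tail")
acf(neg.tail, main = "Negative tail")
```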
Both the positive and negative tails of normalized gold returns display significant autocorrelations.  The MLE of the scaling exponent assumes that gold returns above the threshold parameter are independent.  My main concern is that the consistency of the MLE will be impacted by the dependency in the data.  Perhaps the autocorrelations will go away if I restrict the sample to only observations above a given threshold?
Dependency in the positive tail of gold returns does appear to go away as I raise the threshold (it almost disappears completely above the optimal threshold).  The ACF plot for the negative tail is similar, although given that the optimal threshold is lower for the negative tail, the dependency is a bit more significant.  Perhaps this argues for using an alternative criterion to the KS distance in selecting the optimal threshold parameter.  Perhaps Anderson-Darling?  Anderson-Darling tends to be conservative in that it chooses a higher threshold parameter, which might help deal with data dependency issues.  But a higher threshold means less data in the tail, which will make it more difficult both to reject a spurious power law and to rule out alternative hypotheses.  Life sure is full of trade-offs!

Given that the behavior of ACF is likely to vary considerably across assets, I would like to be able to make more general statements about the consistency of the MLE estimator in the presence of dependent data.  Any thoughts?

Parameter Uncertainty...
Following the Clauset et al. (2009) methodology, I derive standard errors and confidence intervals for my parameter estimates using a non-parametric bootstrap.  My first implementation of the non-parametric bootstrap simply generates synthetic data by re-sampling, with replacement, the positive and negative tails of gold returns, and then estimates the sampling distributions for the power-law model's parameters by fitting a power-law model to each synthetic data set.
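In outline (a sketch, reusing the hypothetical fit.powerlaw() helper from above):

```r
# First-pass non-parametric bootstrap: resample each tail with replacement
# and re-fit the power law to every synthetic sample.
set.seed(42)
B <- 1000
boot.pos <- replicate(B, {
  synthetic <- sample(pos.tail, size = length(pos.tail), replace = TRUE)
  fit <- fit.powerlaw(synthetic)
  c(alpha = fit$alpha, xmin = fit$xmin)
})
# boot.pos is a 2 x B matrix of bootstrap replicates for the positive tail;
# repeat with neg.tail for the negative tail.
```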

To give you an idea of what the re-sampled data look like, here is a survival plot of the negative returns for gold along with all 1000 bootstrap re-samples used to derive the confidence intervals...
...and here is a plot of the bootstrap re-samples over-laid with the best-fit power law models:
The bootstrap replicates can be used to derive standard-errors for the scaling exponent and threshold parameters for the positive and negative tails, as well as various confidence intervals (percentile, basic, normal, student, etc) for the parameter estimates.  Which of these confidence intervals is most appropriate depends, in part, on the shape of the sampling distribution.  The simplest way to get a look at the shape of the sampling distributions is to plot kernel density estimates of the bootstrap replicates for the various parameters.  Dotted black lines indicate the MLE parameter estimate.
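A sketch of that step, again using the hypothetical replicate matrix from the bootstrap sketch:

```r
# Kernel density estimates of the bootstrap replicates, with the MLE marked.
par(mfrow = c(1, 2))
plot(density(boot.pos["alpha", ]), main = "Positive tail: scaling exponent")
abline(v = pos.fit$alpha, lty = 2)   # dotted line at the MLE
plot(density(boot.pos["xmin", ]), main = "Positive tail: threshold")
abline(v = pos.fit$xmin, lty = 2)
```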

WTF! Note that all of the sampling distributions have multiple local peaks!  I suspect that the multi-peaked densities are due to the sensitivity of the optimal threshold to the resampling procedure.  Is this an example of non-parametric bootstrapping gone awry?  The fact that the bootstrap estimates of the sampling distributions have multiple peaks suggests that standard confidence intervals (i.e., percentile, basic, normal, student, etc.) are unlikely to provide accurate inferences.

The best alternative method (that I have come across so far) for calculating confidence intervals when the sampling distribution has multiple peaks is the Highest Density Region (HDR) approach from Hyndman (1996).  Again, the dotted black lines indicate the MLE parameter estimates.
The (significant!) differences between the estimated densities in the simple kernel density plots and the HDR plots are due to the use of different estimation methods.  HDR uses a more sophisticated (more accurate?) density estimation algorithm based on a quantile algorithm from Hyndman (1996).  The bandwidth for the density estimation is selected using the algorithm from Samworth and Wand (2010).  The green, red, and blue box-plots denote the 50, 95, and 99% confidence region(s), respectively.  Note that sometimes the confidence region consists of disjoint intervals.  But I still don't like that the densities are so messy.  It makes me think that there might be something wrong with my resampling scheme.  Can I come up with a better way to do the non-parametric bootstrap sampling for asset returns?  I think so...
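For what it's worth, the HDR summaries come from Rob Hyndman's hdrcde package; a sketch, with the exact argument names treated as an assumption on my part:

```r
# Highest Density Region (HDR) summaries of the bootstrap replicates,
# following Hyndman (1996), via the hdrcde package (argument names assumed).
library(hdrcde)

alpha.reps <- boot.pos["alpha", ]
hdr(alpha.reps, prob = c(50, 95, 99))      # possibly disjoint interval(s)
hdr.den(alpha.reps, prob = c(50, 95, 99))  # plot: density plus HDR box-plots
abline(v = pos.fit$alpha, lty = 2)         # dotted line at the MLE
```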

My second implementation of the non-parametric bootstrap attempts to deal with the possible dependency in the gold returns data by using a maximum entropy bootstrap to simulate the logarithm of the gold price series directly.  I generate 1000 synthetic price series, and for each series I calculate the normalized returns, split the normalized returns into positive and negative tails, and then generate the sampling distributions for the power law parameters by fitting a power-law model to each tail separately.

Here are some plots of synthetic gold price data generated using the meboot() package:
I did not expect the distribution of synthetic data to collapse as the number of bootstrap replicates increased.  Was I wrong to have this expectation? Probably! Is this a law-of-large-numbers/CLT type result? It seems like the variance at time t of the bootstrap replicates of the gold price is collapsing towards zero while the mean at time t is converging to the value of the observed gold price. Clearly I need to better understand how the maximum entropy bootstrap works! Update: this behavior is the result of my using the default setting of force.clt=TRUE within the meboot() function.  One can eliminate this behavior by setting force.clt=FALSE.  Unfortunately, I have not been able to find a whole lot of information about the costs/benefits of setting force.clt=TRUE or force.clt=FALSE!  The documentation for the meboot package is a bit sparse...
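For reference, here is roughly what the maximum entropy bootstrap call looks like (a sketch; I am assuming the replicates live in the ensemble component of the returned object):

```r
# Maximum entropy bootstrap of the log price series (Vinod's meboot package).
library(meboot)

log.price <- log(gold)
me.boot <- meboot(log.price, reps = 1000, force.clt = FALSE)  # see discussion above
synthetic.prices <- exp(me.boot$ensemble)  # one synthetic price series per column
```

Each column of synthetic.prices then gets pushed through the same pipeline as the observed series: compute normalized returns, split them into tails, and fit the power-law model to each tail.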

However, a plot of the negative tails of the normalized returns calculated from the simulated price series does look like what I expected (which is good, because this is the synthetic data to which I actually fit the power-law model!)...
The HDR confidence intervals based on the maximum entropy non-parametric bootstrap are much nicer!  Mostly, I think, because the densities of the bootstrap replicates are much better behaved (particularly for the scaling exponents!).
Clearly I have more work to do here.  I really don't feel like I have a deep understanding of either the maximum entropy bootstrap or the HDR confidence intervals.  But I feel like both are an improvement over my initial implementation...

Goodness-of-fit for the Power-Law?
Using the KS goodness-of-fit test advocated in Clauset et al. (2009), I find that while the power-law model is plausible for the positive tail (p-value: 0.81 > 0.10), the power-law model can be rejected for the negative tail of normalized gold returns (p-value: 0.00 < 0.10).
The KS goodness-of-fit test generates synthetic data similar to the empirical data below the estimated power-law threshold, but that follow a true power-law (with MLE for scaling exponent) above the estimated threshold.  This implementation destroys any underlying time dependencies in the data.  Are these underlying time dependencies important for understanding the heavy-tailed behavior of asset returns? Previous research suggests yes.  Suppose that time dependencies, in particular the slow-decay of volatility correlations, are key to understanding the heavy-tailed behavior of asset returns.  Does the current implementation of the goodness-of-fit test under-estimate the goodness-of-fit for the power-law model?
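In outline, the test looks something like the following sketch (it reuses the hypothetical fit.powerlaw() helper and, for simplicity, fixes the body/tail split sizes rather than randomizing them as in the paper):

```r
# Semi-parametric KS goodness-of-fit test (Clauset et al. 2009), sketched.
# Each synthetic data set mixes resampled body observations (below xmin) with
# draws from the fitted power law (above xmin); the p-value is the fraction of
# synthetic KS distances at least as large as the observed one.
gof.pvalue <- function(x, fit, B = 1000) {
  body <- x[x < fit$xmin]
  n.tail <- sum(x >= fit$xmin)
  ks.sim <- replicate(B, {
    # draw tail observations from the fitted Pareto via the inverse transform
    tail.sim <- fit$xmin * runif(n.tail)^(-1 / (fit$alpha - 1))
    body.sim <- sample(body, size = length(body), replace = TRUE)
    fit.powerlaw(c(body.sim, tail.sim))$D
  })
  mean(ks.sim >= fit$D)  # large p-value => power law is plausible
}
```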

Testing Alternative Hypotheses:
I end this ludicrously long blog post with a quick discussion of the results of the likelihood ratio tests used to determine if some alternative distribution fits the data better than a power-law.
The results are typical.  Even though the power-law model is plausible for the positive tail of gold returns, the power-law can be rejected at the 10% level in favor of the power-law with an exponential cut-off.  Also, other heavy-tailed alternatives, namely the log-normal and the stretched exponential, cannot be ruled out (even the thin-tailed exponential distribution cannot be ruled out!).  For the negative tail, the power-law is simply not plausible, and the power-law with cut-off is preferred based on the log-likelihood criterion.  All of these alternative distributions were fit to the data using maximum likelihood estimators derived under the assumption of independence of observations in the tail.
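For the non-nested comparisons, the likelihood ratio test is essentially Vuong's normalized log-likelihood ratio; a sketch, assuming vectors of pointwise log-likelihoods for the two candidate models evaluated on the same tail observations (the nested power-law-with-cutoff comparison uses a different, chi-squared style null):

```r
# Vuong-style likelihood ratio test between two fitted tail models, given
# their pointwise log-likelihoods on the same tail observations.
lr.test <- function(loglik1, loglik2) {
  d <- loglik1 - loglik2
  n <- length(d)
  R <- sum(d)                    # log-likelihood ratio; sign says which model is favored
  z <- R / (sqrt(n) * sd(d))     # normalized test statistic
  p <- 2 * pnorm(-abs(z))        # two-sided p-value
  c(LR = R, p.value = p)
}
```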

Code and data to reproduce the above results can be found here.

Many thanks to Aaron Clauset for his recent post on power-laws and terrorism, which was a major inspiration/motivation for my writing this post.  Also, many thanks to Cosma Shalizi for continuing to provide excellent lecture notes on all things R.

Friday, June 17, 2011

Demand slopes UP...Supply slopes DOWN!

This post is a continuation of a previous post on liquidity and leverage (both posts closely follow the analysis of Adrian and Shin (2008)).

First, a quick summary of the previous post.  There is a negative relationship between growth in balance sheets (the value of assets) and leverage for passive investors such as households.  All data are taken from the U.S. Flow of Funds accounts maintained by the Federal Reserve.
However, there is a positive relationship between growth in balance sheets (asset values) and leverage for security brokers and dealers (i.e., investment banks).  For this group, at least, leverage is pro-cyclical.
Continuing with the implications of pro-cyclical leverage, let's go back to basic balance sheet arguments.  Consider a firm that actively manages its balance sheet in order to maintain a constant leverage ratio of 10:1.  Start with the following balance sheet:

Assets          Liabilities
100             Equity: 10
                Debt: 90

Again, assume that the value of debt is constant in response to changes in asset values (which seems reasonable for small changes in asset values).  Now suppose that the value of assets increases by 1% to 101.  The new balance sheet is as follows (note that equity has increased by 10% as a result of the 1% rise in asset prices).

Assets          Liabilities
101             Equity: 11
                Debt: 90

The leverage ratio is now 101 / 11 = 9.18.  If the firm wants to maintain a leverage ratio of 10, it must take on more debt and use the cash it borrows to buy more assets.  How much debt?  The firm requires that (101 + D) / 11 = 10, which implies that D = 9.  Thus an increase in the value of assets of 1 leads to an increase in the quantity of assets demanded by the firm worth 9.  The price of the asset goes up, and the quantity of the asset demanded by the firm goes up.  The demand curve for assets slopes up!  The firm's new balance sheet again has a leverage ratio of 10.

Assets          Liabilities
110             Equity: 11
                Debt: 99

The same type of mechanism is at play for negative shocks to asset prices.  Doing the math leads to the conclusion that, for a firm actively managing its balance sheet to maintain a constant leverage ratio, the supply curve for assets slopes down.  Note that just as equity increases when asset prices increase, equity bears the burden of adjustment when asset prices are falling.
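The arithmetic generalizes to any small shock; a toy sketch with the 10:1 target hard-coded (my own helper, not from Adrian and Shin):

```r
# Rebalancing a balance sheet back to a constant target leverage of 10:1:
# debt is fixed while the asset price shock hits, equity absorbs the shock,
# and the firm then borrows (or sells) to restore the target.
rebalance <- function(assets, debt, shock, target = 10) {
  assets <- assets * (1 + shock)      # shock hits the asset side
  equity <- assets - debt             # equity absorbs the shock
  trade  <- target * equity - assets  # extra assets needed to restore leverage
  c(assets = assets + trade, debt = debt + trade, equity = equity,
    leverage = (assets + trade) / equity)
}

rebalance(100, 90, +0.01)  # 1% gain: buy 9 more in assets, leverage back to 10
rebalance(100, 90, -0.01)  # 1% loss: sell 9, leverage back to 10
```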

This mechanical adjustment process of leverage will be strengthened if either:
  1. Leverage is pro-cyclical
  2. Asset markets are not completely liquid
If asset markets are not completely liquid, then the asset price will be affected by the change in the firm's demand for assets following an asset price shock.  Consider a negative asset price shock.  The negative shock to asset prices leads the firm, through the mechanism above, to sell assets in order to maintain the desired leverage ratio.  If markets are not perfectly liquid, then the firm's decision to sell assets will further decrease the price of these assets, which then weakens the firm's balance sheet even more, which causes the firm to sell more assets, and so the cycle goes.
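To see the amplification, here is a toy loop in the same spirit; the price-impact coefficient is entirely made up:

```r
# Toy fire-sale spiral: a negative shock forces asset sales to restore the
# leverage target, the sales move the price when the market is not perfectly
# liquid, and the weaker balance sheet forces further sales.  The `impact`
# coefficient is a made-up stand-in for market illiquidity.
spiral <- function(units = 100, debt = 90, shock = -0.01,
                   target = 10, impact = 0.1, rounds = 5) {
  price <- 1 + shock
  for (i in 1:rounds) {
    value  <- units * price
    equity <- value - debt
    if (equity <= 0) { cat("equity wiped out in round", i, "\n"); break }
    trade <- target * equity - value               # < 0: value of assets to sell
    units <- units + trade / price                 # units sold at the current price
    debt  <- debt + trade                          # proceeds pay down debt
    price <- price * (1 + impact * trade / value)  # selling pushes the price down
    cat(sprintf("round %d: price %.3f, leverage %.2f\n",
                i, price, units * price / (units * price - debt)))
  }
}

spiral()  # each round of selling depresses the price and forces more selling
```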

Adrian and Shin (2008) outline empirical evidence consistent with the above amplification story.  Specifically, they demonstrate that firms' balance sheet components are able to forecast changes in asset price volatility.  I am working to replicate their results (as they are highly relevant to my own empirical work on leverage and asset price volatility) and will follow up with the R code for the replication as soon as it is written (hopefully in the near future!)...

Thursday, June 16, 2011

Leverage and Balance Sheets...

Killing time at JFK International waiting to board my flight to Paris.  I recently came across a brilliant and simple discussion of the relationship between leverage and the size of balance sheets in Adrian and Shin (2008) and had to share...

Suppose that a household owns a house that is financed via a mortgage.  Suppose that the house has a value of 100, and that the value of the mortgage is 90 (thus household net-worth, or equity, is 100 - 90 = 10).  This household has the following balance sheet:

Assets          Liabilities
100             Equity: 10
                Debt: 90

What is this household's leverage ratio? Leverage (L) is defined as the ratio of the value of total assets (A) to the value of equity (E).  In this case the household has a leverage ratio of 100:10, or 10:1.  What happens to the household's leverage ratio as the value of the house (its asset) fluctuates?  For simplicity, assume that the value of debt stays fixed for small changes in asset prices.  Then leverage is roughly...

L ≈ A / (A - 90)

So as the value of the house (i.e., assets) rises, with the value of debt fixed, household net worth increases and leverage goes down!  Leverage varies inversely with the value of total assets.  This inverse relationship is exactly what is borne out in the data.  To calculate leverage, I use the ratio of household total assets to household net worth from the quarterly U.S. Flow of Funds data for the years 1963 - 2011.
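The calculation itself is trivial; a sketch, assuming the quarterly Flow of Funds series have already been loaded into vectors assets and networth (the names are mine):

```r
# Household leverage from Flow of Funds series: total assets over net worth.
# `assets` and `networth` stand in for the quarterly 1963-2011 series.
leverage <- assets / networth
plot(diff(log(assets)), diff(log(leverage)),
     xlab = "Growth in household total assets",
     ylab = "Growth in household leverage")
```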

There is a clear negative relationship between changes in leverage and changes in household assets.

Now instead of household balance sheets, let's consider firm balance sheets.  Specifically, based on data availability, consider three classes of firms:
  1. Non-financial (non-farm) firms
  2. Commercial banks
  3. Security brokers and dealers (includes investment banks)
If, like households, firms were fairly passive in their balance sheet management, then we should see a nice negative relationship between firms' asset values and leverage...however (not surprisingly) the data indicate that firms are a bit more active in managing their balance sheets than households.  As with households, all firm data are quarterly and taken from the U.S. Flow of Funds accounts maintained by the Federal Reserve.

Non-Financial (Non-Farm) Firms: What is going on here? The clear negative relationship from the household scatter is certainly gone.  You can just begin to see some clustering around zero growth in leverage (suggestive of firms actively managing their balance sheets to maintain a fixed leverage ratio?)
Commercial Banks: The clustering phenomenon is now very apparent and consistent with the fact that commercial banks actively manage their balance sheets in order to maintain a fixed leverage ratio.
Security Brokers and Dealers: Now the relationship between leverage and assets is positive! This means that for security brokers and dealers (a group which includes all major Wall Street investment banks) leverage is pro-cyclical.

How can this be? What are the implications of pro-cyclical leverage?  I leave this as a topic for a future post...alternatively you can read Adrian and Shin (2008).

Saturday, April 16, 2011

Diamond and Dybvig (1983)...

As part of my preparatory reading for my upcoming summer school in Jerusalem, I am reading some of the seminal papers in the literature on banking, financial, and credit crises.  I have chosen to start with Diamond and Dybvig (1983): Bank Runs, Deposit Insurance, and Liquidity.  The following is a short summary of the basic ideas of the paper.  I may write up my notes more formally, and will share them if I do so.

In Diamond and Dybvig (1983) banks fulfill an explicit economic function.  Banks transform illiquid assets into liquid liabilities.   

The model demonstrates three points:
  1. Banks issuing demand deposits can improve on the competitive market outcome by providing better risk sharing among people who need to consume at different random times.
  2. Demand deposit contract that provides this improvement has a "bad" equilibrium (a bank run) in which all depositors panic and withdraw funds immediately.
  3. Bank runs have real economic consequences because even "healthy" banks can fail, causing the recall of loans and the termination of productive investment.
At its core, the model is an asymmetric information story.  In a perfectly competitive set-up where agents have private and unverifiable information about their liquidity needs, Diamond and Dybvig (1983) show that the market outcome can be inefficient relative to a world where information is perfect and liquidity needs are publicly observable.

Diamond and Dybvig (1983) argue that demand deposit contracts can help achieve the same optimal risk sharing arrangements that can be made when liquidity needs are publicly observable.  However, the ability to achieve optimal risk sharing via demand deposits comes at the cost of introducing the possibility of bank runs.   

Demand deposit contracts can be improved upon if the bank has some information about the distribution of liquidity needs amongst its depositors.  If the normal volume of withdrawals is known and not stochastic, then writing demand deposit contracts that call for the suspension of convertibility (i.e., a suspension of allowing withdrawal of deposits) in the event of a bank run can actually prevent a run from happening along the equilibrium path.  However, the situation is very different in the event that the volume of withdrawals is stochastic.  In this case, bank contracts cannot achieve optimal risk sharing (although deposit contracts with suspension are still an improvement over the basic demand deposit contracts).

Diamond and Dybvig (1983) conclude by demonstrating how government deposit insurance can provide improvements over both basic demand deposit contracts, and demand deposit contracts with suspension in the event that the volume of withdrawals is stochastic (i.e., liquidity needs are randomly distributed).  In fact, demand deposits with government deposit insurance can achieve the full-information optimum even in this most general stochastic case.

Friday, March 25, 2011

Riding in a car with a Nobel laureate...

I had the experience of a lifetime this past Tuesday when I attended a master class on mechanism design taught by 2007 Nobel Prize winner Prof. Eric Maskin at St. Andrews University. The morning lectures were on auctions and incomplete contracts, whilst the afternoon lecture was on voting theory.  I would provide a link, but the class materials have not been posted to the web (yet).

It was an absolutely amazing day all around...

Of relevance to my ongoing research project, I would like to point out the following papers, which Prof. Maskin recommended as particularly useful for understanding the recent financial crisis:
  1. Diamond and Dybvig (1983): Bank Runs, Deposit Insurance, and Liquidity
  2. Holmstrom and Tirole (1998): Private and Public Supply of Liquidity
  3. Dewatripont and Tirole (1994): The Prudential Regulation of Banks
  4. Kiyotaki and Moore (1997): Credit Cycles
  5. Fostel and Geanakoplos (2008): Leverage Cycles and the Anxious Economy

Sunday, January 16, 2011

The dangers of high frequency trading...

I am becoming increasingly convinced that high frequency trading represents a significant de-stabilizing influence on stock markets.  I do not buy the idea that high frequency trading strategies simply improve informational efficiency in the markets (by eliminating minute price differentials between buyers and sellers) and provide extra liquidity to the market when it is needed.  I happen to think that these high-frequency trading algorithms and electronic trading strategies destabilize the market.  How? I have no firm idea.  Just the intuition that such strategies increase the frequency and density of interactions between market participants, and that perhaps this is not a good thing.  It may also be the case that, by driving down buy-sell spreads, HFTs force institutional investors to pursue other, riskier strategies to make higher returns (i.e., mutual funds, pension funds, etc. may increase their leveraged positions in order to compete with HFTs, or something similar).

Here is the link to the SEC report on the "flash crash" that took place in May 2010. The report details how the interaction between high-frequency trading algorithms and electronic trading strategies implemented by structural traders (i.e., a particular mutual fund complex) was one of the main causes of the flash crash.  Here are some links to recent articles and blog posts that go into more detail on the issue.

Tuesday, September 7, 2010

What have I done today...

Well, besides the generic new-student paperwork, I did have some time to think a bit about some of the issues that I am likely to encounter.  This is a summary of what I have come up with so far:

Question: Why do banks form credit networks?
Banks basically want to do two things:
  1. Make lots of money, and
  2. Generate liquidity
Addressing 1: Banks make money by lending money (i.e., by forming links with other banks, firms, etc.).  Banks essentially trade money now for more money later.  This is my working justification for why the number of direct links should be included in the bank's pay-off function.  Addressing 2: Forming credit networks may be a strategy to generate liquidity.  This is my justification for including indirect links in the bank's pay-off function.  I am tempted to say that, all else equal, banks should exhibit some type of preference for linking with banks that already have lots of links (either because the number of links is viewed as a proxy for something, or because lots of links implies greater access to the rest of the credit network), but this may be getting ahead of myself...

Very abstractly...banks can be divided into three broad classes:
  1. Pure lenders
  2. Pure borrowers
  3. Banks that do both (i.e., lend and borrow)
In the graphic above, blue arrows represent the flow of money now, and red dashed arrows represent the flow of money later.  I am not sure this diagram is all that useful, except that it helped formalize my intuition concerning how link formation might generate liquidity.  The basic idea: suppose we have three banks.
Now suppose that Bank 1 is willing to lend to Bank 2 but not to Bank 3 (for whatever reason...perhaps Bank 3 is too risky), but Bank 2 is willing to lend to Bank 3 (because Bank 2 has different risk tolerances...I suppose I have introduced my first heterogeneous parameter).  In this case the linkages generate liquidity because, without them, Bank 3 would not have been able to borrow.

Question: How should returns to bank i from a link with bank j be defined?
At this point I am playing with the idea that the value of the link to the lender is something like the discounted present value of the money later minus costs (yet to be defined), while the value to the borrower is the loan amount minus costs (i.e., interest).  How to set the interest rate?  A starting point would be to take the interest rate as exogenous, but at the moment I am toying with the idea of having banks bargain locally over the interest rate.  In the bargaining process the lender would have several outside options (i.e., either lending to another bank or parking his money in "risk-free" securities), while the only outside options available to the borrower would be other banks.
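To pin the idea down slightly, the payoffs might look something like the following placeholder functions (nothing here is a worked-out model, just my reading of the sentence above):

```r
# Placeholder payoffs for a single loan (link) of size `loan` at gross
# interest rate R, repaid after one period and discounted by `delta`.
# `cost` stands in for whatever link-formation costs end up being.
lender.payoff <- function(loan, R, delta, cost = 0) {
  delta * R * loan - loan - cost  # discounted repayment minus funds lent and costs
}
borrower.payoff <- function(loan, R, delta, cost = 0) {
  loan - delta * R * loan - cost  # funds now minus discounted repayment and costs
}
```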

All of this is highly speculative at this point...but what should you expect on day one of a PhD career!

Tuesday, August 17, 2010

Linking Liquidity Constraints to Network Formation...

For some time now I have been struggling to develop a mechanism to link liquidity constraints with the agent's network formation decision.  I suspect that this has a lot to do with the fact that while I have read quite a lot on network theory, I have not yet got around to reading much of anything having to do with liquidity constraints.  All of my knowledge of liquidity constraints comes from the two weeks we spent talking about them during my MSc.  What follows is my first attempt to link the two concepts together.  The idea closely follows the textbook treatment of the consumption decisions of agents facing liquidity constraints.

The agent has two choice variables each period: consumption, c_t, and the number of neighbors in the network, n_t.  The agent has wealth w_t and anticipates some uncertain future income y_{t+1}.  The agent's savings are defined as s_t = w_t - c_t.  Typically, the agent then maximizes something like the sum of his current utility plus the expected discounted sum of future utility, subject to the constraint that next period's wealth is w_{t+1} = R_{t+1}(s_t + y_{t+1}), where R_{t+1} is the (gross) interest rate.  A liquidity constraint in this scenario would require that the agent's savings s_t be non-negative (i.e., agents cannot borrow).

What I want to do is allow agents to borrow funds from neighbors in the network (assuming those neighbors have excess savings to lend).  Agents would become liquidity constrained only if neither they nor any of their neighbors had funds to lend.  In this scenario, an agent's consumption decisions over time are affected by his position in the network (i.e., his access to credit from his neighbors).  Clearly there are a number of issues to work out with this framework, such as how to specify the interest rate, the income stream, the appropriate utility function, the information set, etc.  Also, this is definitely not going to be analytically tractable (but then neither are more traditional liquidity constraint problems).  Ideally it would also be nice to be able to model agent default.  Maybe this could be done by specifying some type of stochastic income stream where there is a positive probability of the agent unexpectedly ending up in the "low" income state and thus being unable to repay his loan.
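As a first pass, the network-dependent borrowing limit might look something like this sketch (the adjacency matrix and savings vector are hypothetical):

```r
# Network borrowing limit: agent i can borrow at most the excess savings held
# by its neighbors.  `adj` is a 0/1 adjacency matrix and `savings` is the
# vector of s_t = w_t - c_t across agents; negative savings means borrowing.
borrowing.limit <- function(i, adj, savings) {
  sum(pmax(savings[adj[i, ] == 1], 0))
}
# The liquidity constraint then becomes s_t >= -borrowing.limit(i, adj, savings)
# for agent i, instead of the usual s_t >= 0.
```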

This is pretty much wild speculation at this point...I just wanted to get these thoughts down on digital paper... 

Friday, August 6, 2010

The University of Essex is doing interesting things...

The Centre for Computational Finance and Economic Agents at the University of Essex is doing really interesting work applying large scale agent-based computational modeling to study contagion and systemic risk in the financial sector.  The work they have done is highly relevant to what I am hoping to accomplish as part of my PhD research...particularly in year 3 where I hope to build a computational model that ties all of my research on the diffusion of liquidity and the evolution of networks together.

Monday, July 26, 2010

Free-form Thought of the Day...

Not quite a Faulkner-esque stream of consciousness, but...

While walking the dog this morning, I began thinking about the evolution of economic networks over the business cycle.  I think there may be a way to take Hyman Minsky's financial instability hypothesis and embed it in a dynamic network context.  The idea would be to build a model that links business cycles in the real economy with network evolution in the financial sector.  Two possible features of network evolution over the business cycle that I think would map nicely into Minsky's instability hypothesis:
  1. During the growth phase of the business cycle links are added to the financial network as more lending takes place, and
  2. If one thinks of the links between any two financial institutions in the network as being weighted in proportion to the amount of credit/debt between the two firms, then if leverage levels increase with the business cycle these links will be "strengthened."
Do business cycles incentivize the development of certain network structures in financial markets that are locally robust but globally fragile?  If so, how does this robust-yet-fragile network relate to Minsky's instability hypothesis?  Alternatively, does the evolution of certain network structures in the financial markets drive business cycles?  Do incentive structures in the financial markets encourage the development of robust-yet-fragile networks that invariably break down, causing recessions?  Maybe a little bit of both?  To be continued...

Thursday, July 8, 2010

Interview with Rob Hall...

A nice interview with Rob Hall.  Pertinent excerpt:
Region: Your recent paper on gaps, or “wedges,” between the cost of and returns to borrowing and lending in business credit markets and homeowner loan markets argues that such frictions are a major force in business cycles.
Would you elaborate on what you mean by that and tell us what the policy implications might be?
Hall: There’s a picture that would help tell the story. It’s completely compelling. This graph shows what’s happened during the crisis to the interest rates faced by private decision makers: households and businesses. There’s been no systematic decline in those interest rates, especially those that control home building, purchases of cars and other consumer durables, and business investment. So although government interest rates for claims like Treasury notes fell quite a bit during the crisis, the same is not true for private interest rates.

Between those rates is some kind of friction, and what this means is that even though the Fed has driven the interest rate that it controls to zero, it hasn’t had that much effect on reducing borrowing costs to individuals and businesses. The result is it hasn’t transmitted the stimulus to where stimulus is needed, namely, private spending.

The government sector—federal, state and local—has been completely unable to crank up its own purchases of goods; the federal government has stimulated [spending] slightly but not enough to offset the decline that’s occurred at state and local governments.

Region: Yes, I’d like to ask you about that later.

Hall: So to get spending stimulated you need to provide incentive for private decision makers to reverse the adverse effects that the crisis has had by delivering lower interest rates. So far, that’s just not happened. The only interest rate that has declined by a meaningful amount is the conventional mortgage rate. But if you look at BAA bonds or auto loans or just across the board—there are half a dozen rates in this picture—they just haven’t declined. So there hasn’t been a stimulus to spending.

The mechanism we describe in our textbooks about how expansionary policy can take over by lowering interest rates and cure the recession is just not operating, and that seems to be very central to the reason that the crisis has resulted in an extended period of slack.
Can Hall's financial frictions be explained by network effects?  I don't know; I will think on it...I have in mind a model where agents are connected via some underlying network, and the network topology serves to exacerbate a contagion of negative beliefs regarding the probability that private lenders will be repaid.  This dynamic would be exacerbated further in the presence of some type of information asymmetry in which lenders were uncertain about which agents would default.  In short, by helping to spread a contagion of beliefs, the network undermines trust (which is crucial in any lending/credit network).