Let’s say that a friend of mine, Matt, is visiting and I need to order food. Matt’s favorite food is fish custard, so it makes sense that I should order him a dish of fish custard. Simple enough. Of course, if my other friend Peter were coming over, and I knew Peter didn’t like fish custard and much preferred haggis, it wouldn’t make sense for me to order him fish custard. Obviously I’d order haggis for him. To make this work mathematically, let’s say that each friend of mine likes his preferred dish about the same, so we could say that Matt gets utility of u_{M}(fish custard) = 1 from eating fish custard and u_{M}(haggis) = 0 from eating haggis, and Peter gets the reverse – u_{P}(fish custard) = 0 and u_{P}(haggis) = 1. If I want to maximize my friend’s utility, then I should do so by buying each friend his favorite dish. But what if I don’t know who’s coming over?

Intuitively, it makes sense – if Matt’s more likely to come over, I should order fish custard. If Peter’s probably on the way, haggis should go on the menu. If they’re equally likely, flip a coin. More mathematically, if the probability that Matt is coming is π_{M} and the probability that Peter is coming is π_{P} (and π_{M} + π_{P} = 1), the **expected utility** of my guest when I order fish custard is

E[u(fish custard)] = π_{M}u_{M}(fish custard) + π_{P}u_{P}(fish custard),

which reduces to

π_{M}u_{M}(fish custard) + π_{P}(0).

Since u_{M}(fish custard) = 1, then the expected utility of my guest from fish custard is just π_{M}. For haggis, of course, the same logic applies – Matt gets 0 utility, so we can ignore him, and the expected utility ends up being π_{P}. So, whichever friend of mine is more likely to show up should get his favorite dish ordered, and if I don’t know who’s coming, I might as well just draw names from a hat.

This gets a little more complicated if both of my friends like each dish. So, let's consider what happens if u_{M}(fish custard) = 1 and u_{M}(haggis) = 0, but u_{P}(fish custard) = 0.5 and u_{P}(haggis) = 1. Then our calculation for the expected u(haggis) stays the same (it's still π_{P}), but now our E[u(fish custard)] ends up being

π_{M}u_{M}(fish custard) + π_{P}u_{P}(fish custard) = π_{M}*(1) + π_{P}*(.5)

So now, in order to figure out what dish to order, we need to know what the probabilities are! If it's fifty-fifty, then E[u(haggis)] is .5, but E[u(fish custard)] is .5 + .5*.5 = .75! In that case, we should order fish custard, even though the guys are equally likely to show up, since Peter likes fish custard a little bit, too. We don’t get to flip a coin (mathematically, we don’t reach our **indifference point**) until

π_{M}u_{M}(fish custard) + π_{P}u_{P}(fish custard) = π_{M}u_{M}(haggis) + π_{P}u_{P}(haggis)

or

π_{M}(1) + π_{P}(.5) = π_{M}(0) + π_{P}(1) => π_{M} + π_{P}(.5) = π_{P} => π_{M} = .5*π_{P}

That means we don't reach the indifference point until Peter is twice as likely to arrive as Matt! Funny how these probabilities influence the choices we'll make. Sometimes things get a bit more complicated, but that's a post for another time.
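The whole comparison is easy to check numerically. Here's a minimal sketch (the function and variable names are my own, purely for illustration):

```python
# Expected utility of a dish, given each guest's arrival probability
# and each guest's utility from each dish.
def expected_utility(dish, probs, utils):
    return sum(probs[guest] * utils[guest][dish] for guest in probs)

utils = {
    "Matt":  {"fish custard": 1.0, "haggis": 0.0},
    "Peter": {"fish custard": 0.5, "haggis": 1.0},
}

# Fifty-fifty: fish custard wins, 0.75 vs 0.5.
fifty_fifty = {"Matt": 0.5, "Peter": 0.5}
print(expected_utility("fish custard", fifty_fifty, utils))  # 0.75
print(expected_utility("haggis", fifty_fifty, utils))        # 0.5

# Indifference point: Peter twice as likely to show up as Matt.
probs = {"Matt": 1/3, "Peter": 2/3}
print(expected_utility("fish custard", probs, utils))  # ~0.6667
print(expected_utility("haggis", probs, utils))        # ~0.6667
```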




During the 50th Anniversary Special episode of Doctor Who, there was an interesting scene in which a group of characters and their malevolent Zygon clones were negotiating a treaty that would determine, in part, whether the Zygons (an empire of hostile aliens) would take over the Earth. This presents some quite interesting problems, solved by some quite interesting guys – the Tenth and Eleventh incarnations of the Doctor.

Obviously each side had opposite incentives. The Zygons needed a planet to live on, and the human population did as well. Take for granted that the populations can't coexist and you can see that this is what's known in economics as a **zero-sum game**. Since they would not agree to share the planet, one side would have to come out of the negotiations the winner, with the other losing. Similarly, since each Zygon cloned a human negotiator, it inherited at least some of its human's memories, meaning that we can assume **symmetric information**. In cases like this, it's usually up to the agent with the higher valuation of the asset being negotiated to make some sort of concession to the agent with the lower valuation in exchange for giving up any claims – in other words, to pay them to go away. In other cases, the agents will agree to let an arbitrator make the decision for them, under the assumption that the arbitrator won't be clouded by personal interests and will make the “best” choice, for some value of “best” to be determined.

The Doctors (played by David Tennant and Matt Smith) opted for an entirely different (and clever) solution, albeit one that wouldn't work in the real world: they wiped the humans' and Zygons' memories so that each would forget which side they were on, allowing for efficient resolution of the problem. The theory, which is similar to John Rawls's veil of ignorance, is that people who don't know whether their side stands to benefit will reach an equitable solution, even if it isn't the one they might have argued for in the first place. As such, it's an example of **hidden information**. The same thinking underlies the idea in law and economics of efficient breach. This might produce the solution that's efficient in the economic sense, but it probably won't leave anyone as well off as they could possibly have been.


Textbooks are a special product because demand for them has historically been inelastic: if you need the book, you need it. You won’t have any choice as far as whether to buy it. That’s led price-sensitive consumers to buy used textbooks, but there’s a limited supply. (Of course, even more price-sensitive consumers download the books illegally.) There’s also a pretty big market in old editions, even though there are usually slight differences between editions.

Recursive macro is a pretty standard subject, and although the third edition of Ljungqvist and Sargent includes new material, it probably won't appreciably change the experience for a first-year grad student taking grad macro. There's some benefit to the new edition, sure, but many students will be faced with the following choice: pay $82.44 (as of today) for a new third edition, or pay $49.99 for a second edition. That's a significant savings. Second-edition sales, however, don't benefit the publisher at all, so how can the publisher create an incentive for price-sensitive buyers to hand their money to it rather than to the secondary market?

Well, a Kindle version of the third edition is $51.29. Even price-sensitive students would probably find $1.30 a fair price to pay for an up-to-date edition, assuming they already had a Kindle-equipped device like an iPad. As used copies of the third edition accumulate, the prices of those will probably collect around the $50 mark as well, and if they don’t, I’d expect the price of the Kindle version to float along with that. Make it easy on price-sensitive consumers to give you money instead of giving it to the secondary market, and some of them will.


It’s easy to imagine some reasons that people would have different levels of patience. For one, you’d expect a healthy thirty-year-old (named Jim) to be more patient than a ninety-year-old (named Methuselah). What if someone (named Peter) offered us a choice between $100 today or a larger amount of money a year from now? How much would it take for Jim and Methuselah to take the delayed payoff? Would they take $100 a year from now? A lot can change in a year:

- There could be a whole bunch of inflation, and the $100 will be worth less next year than it is now. Boom, we’ve lost.
- We could put the money in the bank and earn a few basis points of interest. Boom, we’ve lost.
- We could die and not be able to pick up the money. Boom, we’ve lost.
- Peter could die and we wouldn’t be able to collect. Boom, we’ve lost.

Based on these, we’ll want a little bit more money next year than this year in order to be willing to take the money later instead of the money now. Statistically, though, Jim is more likely than Methuselah to be there to pick up the money.

Neither would take any less than $100 next year, but that's just a lower limit. According to Bankrate.com, Discover Bank is paying 0.8% APY, which means that the $100 would be worth 0.8% more next year – just by putting the money in the bank, we can trade the risk of Peter dying for the (really unlikely) risk that the bank goes bust. That's an improvement in risk and an improvement in payoff, so there's no reason to take any less than $100.80. Again, though, this is a lower bound. Peter still has to pay for making them wait. That's where the third point comes into play.

Methuselah is probably not going to live another year. It’s much more likely that he’ll get to spend the $100 than whatever he gets in a year; in order to make it worth the wait, the payoff would have to be huge. Methuselah views money later as worth a lot less than money today. He might need $200 to make it worth the wait. Jim, on the other hand, might only need $125. He has more time, so he’s much more patient.

This level of patience is captured by a **discount factor**, usually written β. You can do this sort of experiment to figure out someone's patience level. You'd then be able to set up an equation like this, where the benefit is the $100 now and the cost is the later payment you give up:

Methuselah, then, would have

100 = β × 200

so β = 1/2.

Jim would have the following equation:

100 = β × 125

so β = 4/5.

Based on this, we can say that Methuselah values money one year from now at 50% of its current value, but Jim values money one year from now at 80% of its current value. Everyone's discount factor is going to be a little bit different, and different discount factors can lead people to make different choices. If Peter offers $100 today or $150 a year from now, Jim will wait patiently for the $150. Methuselah will jump at the $100 today. Both of them are rational even though their choices are different.
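The whole calculation fits in a few lines of Python (a sketch; the helper names are mine, not from any library):

```python
# Implied discount factor: the amount offered today divided by the
# future amount that makes the person indifferent.
def discount_factor(today, future_required):
    return today / future_required

beta_methuselah = discount_factor(100, 200)  # 0.5
beta_jim = discount_factor(100, 125)         # 0.8

# Peter's offer: $100 today vs. $150 a year from now.
# Waiting is worth it when the discounted later payoff beats today's.
def prefers_waiting(beta, now, later):
    return beta * later > now

print(prefers_waiting(beta_jim, 100, 150))         # True: Jim waits
print(prefers_waiting(beta_methuselah, 100, 150))  # False: take the $100
```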


Measuring the firm's welfare is fairly simple: most of the time, when we're thinking about firms, what we care about is their profit. A business's **profit function** is always of the form

Profit = Total Revenue – Total Costs

**Total revenue** is the total money a firm takes in. In a simple one-good market, this is just the number of goods sold (the **quantity**) times the amount charged for each good (the **price**). **Marginal revenue** represents how much extra money will be taken in for producing another unit. **Total costs** need to take into account two pieces: the **fixed cost**, which represents things the firm cannot avoid paying in the short term (like rent and bills that are already due) and the **variable cost**, which is the cost of producing each unit. If a firm has a **constant variable cost** then the cost of producing the third item is the same as the cost of producing the 1000th; in other words, constant variable costs imply a **constant marginal cost** as well. If marginal cost is falling, then there’s efficiency in producing more goods; if it’s rising, then each unit is more expensive than the last. The marginal cost is the derivative of the variable cost, but it can also be figured out by looking at the change in cost from one unit to the next.

Measuring the consumer's welfare is a bit more difficult. We need to take all of the goods sold and measure how much more people were willing to pay than they actually did. To do that we'll need a consumer **demand function**, which represents the marginal buyer's willingness to pay (that is, what the price would have to be to get one more person to buy the good). Let's say the market demand is governed by the function

Q^{D} = 250 – 2P

That is, at a price of $0, 250 people will line up to buy the good. At a price of $125, no one wants the good (Q^{D} = 0). In between, quantity demanded is positive. We’ll also need to know what price is actually charged. Let’s try it with a few different prices, but we’ll always use the following format^{1}:

Consumer Surplus = (1/2)*(p^{max} – p^{actual})*Q^{D}

where p^{max} is the price where 0 units would be sold and Q^{D} is the quantity demanded at the actual price. In our example, that’s 125.

Let’s say that we set a price of $125. Then, no goods are demanded, and anything times 0 is 0.

What about $120? At that price, the quantity demanded is (250 – 240) or 10; the price difference is (125 – 120) or 5; half of 5*10 is 25, so that’s the consumer surplus. That means that the people who bought those 10 units were willing to pay $25 more, in total, than they actually had to pay.^{2}

Finally, at a price of $50, 150 units are demanded (250 – 100); the total consumer surplus is (1/2)(75)(150), or 5,625.
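Those three cases can be verified with a short helper, a sketch written for this specific demand curve (the parameter names are my own):

```python
# Consumer surplus for the linear demand curve Q = 250 - 2P,
# where p_max = 125 is the price at which quantity demanded hits zero.
def consumer_surplus(price, p_max=125, slope=2):
    q = max(p_max * slope - slope * price, 0)  # quantity demanded
    return 0.5 * (p_max - price) * q           # area of the triangle

print(consumer_surplus(125))  # 0.0
print(consumer_surplus(120))  # 25.0
print(consumer_surplus(50))   # 5625.0
```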

Whenever the number of firms goes up, the price decreases, and quantity increases. When quantity increases or when price decreases, all else equal, consumer surplus will go up; consequently, more firms in competition are better for the consumer.



We’ll maintain the assumption of **identical goods**, so that consumers won’t be loyal to one company or the other. We’ll also assume that each company has the same costs, so we’re looking at **identical firms** as well. Finally, assume that there are a lot of buyers, so the firms face a **market demand** of, let’s say, Q^{D}(P) = 500 – 2P, so P = 250 – Q^{D}(P)/2. Since the firms are producing the same goods, then Q^{S}(P) = q_{1}(P) + q_{2}(P).

Neither firm knows what the other is doing, but each firm knows the other is identical to it, and each firm knows the other knows this. Even though neither firm can see what's going on behind the scenes, each will assume that a firm facing the same costs and revenues is rational and will optimize its own profit, so they can make good, educated guesses about what the other firm will do. Each firm will determine the other firm's likely course of action and compute its own **best response** – the action that maximizes its profit.

Now, let’s take a look at what the firms’ profit functions will look like.

Recall that Total Profit = Total Revenue – Total Cost, and that Marginal Profit = Marginal Revenue – Marginal Cost. Companies will choose quantity to optimize their profit, so they'll continue producing until their expected Marginal Profit is 0, and then produce no more. Firm 1's total revenue is P×q_{1} – revenue is always price times quantity. Keeping in mind that price is a function of quantity, we can rewrite this as (250 – Q^{D}(P)/2)×q_{1}. Since Q^{D}(P) = q_{1} + q_{2}, this is the same as writing (250 – (1/2)(q_{1} + q_{2}))q_{1}. Then, we need to come up with a **total cost** function. Let's say it's 25 + q_{1}^{2}, where 25 is a **fixed cost** (representing, say, rent for the factory) and q_{1}^{2} is the **variable cost** of producing the goods. Then, Firm 1's profit function is:

Profit_{1} = (250 – (1/2)(q_{1} + q_{2}))q_{1} – 25 – q_{1}^{2}

or

Profit_{1} = (250 – q_{1}/2 – q_{2}/2)q_{1} – 25 – q_{1}^{2}

or

Profit_{1} = 250q_{1} – q_{1}^{2}/2 – q_{1}q_{2}/2 – 25 – q_{1}^{2}

The marginal profit is the change in the total profit function if Firm 1 produces one more unit; in this case it’s easier to just use the calculus concept of taking a derivative, which yields

Marginal Profit_{1} = 250 – q_{1} – q_{2}/2 – 2q_{1} = 250 – 3q_{1} – q_{2}/2

Since the firms are identical, though, firm 1 knows that firm 2 is doing the same optimization! So, q_{1} = q_{2}, and we can substitute it in.

Marginal Profit_{1} = 250 – 3q_{1} – q_{1}/2 = 250 – 7q_{1}/2

This is 0 where 250 = 7q_{1}/2, or where q_{1} = 500/7 ≈ 71.43. Firm 2 will also produce 500/7 units. Total supplied quantity is then 1000/7 ≈ 142.86, and the price will be 250 – (1000/7)/2 ≈ 178.57. We can figure out each firm's profit simply by plugging in these numbers:

Total Revenue = P×q_{1} ≈ 178.57 × 71.43 ≈ 12,755

Total Cost = 25 + q_{1}^{2} ≈ 25 + 5,102 = 5,127

Total Profit ≈ 12,755 – 5,127 = 7,628
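One way to double-check the equilibrium is to iterate each firm's best response until the quantities stop changing. A quick sketch, with the best-response function read off the marginal-profit condition (setting 250 – 3q_{1} – q_{2}/2 to zero):

```python
# Cournot duopoly: market price P = 250 - Q/2, each firm's cost 25 + q^2.
# Firm 1's best response to a guess about firm 2's output comes from
# solving 250 - 3*q1 - q2/2 = 0 for q1.
def best_response(q_other):
    return (250 - q_other / 2) / 3

# Iterate best responses; this converges to the symmetric equilibrium.
q1 = q2 = 0.0
for _ in range(200):
    q1, q2 = best_response(q2), best_response(q1)

price = 250 - (q1 + q2) / 2
profit = price * q1 - (25 + q1 ** 2)
print(round(q1, 2), round(price, 2), round(profit, 2))  # 71.43 178.57 7628.06
```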

This was a bit heavier on the mathematics than some of the other problems we’ve talked about, but all that math is just getting to one big idea: it’s rational to produce when you expect your marginal benefit to be at least as much as your marginal cost.


- Identical goods
- Lots of sellers
- Lots of buyers

Today, we’ll talk about what happens when that second condition doesn’t hold – that is, when sellers have **market power**. When sellers don’t have market power, they have to price according to what the market will bear. If they price too high, someone will undercut them, but if they price too low, they’ll lose money. The only thing they can do is price at their break-even point, where price is equal to marginal cost. (This is sometimes called the **zero profit condition**.)

When only one seller exists, he is called a **monopolist**, and the market is called a **monopoly**. A monopoly can arise for one of two reasons: either the owner has exclusive access to some important resource, giving him a **natural monopoly**, or the owner has an **ordinary monopoly** because of laws, barriers to entry, or some other reason.

A natural monopoly is one that arises not because of anticompetitive action by the monopolist but because of exclusive access to some resource. For example, owning a waterfall means you have unbridled access to it for hydroelectric purposes, and being the first to lay cable or pipelines makes it inefficient for anyone else to duplicate them; essentially, anything with a **high fixed cost** and a **zero marginal cost** is a good candidate for natural monopoly status.

Regardless of whether a monopoly is natural or ordinary, a monopolist isn’t subject to the same zero-profit condition as he would be in a perfectly competitive market, since there’s no one to undercut him if he prices higher than his own marginal cost. He’s free to do the absolute best he can – in other words, to **maximize his profit**. The monopolist doesn’t have to take the price, as a perfectly competitive market would force him to; he’ll choose the price himself by choosing the quantity he produces.

The monopolist's profit-maximization condition is that his **marginal revenue = marginal cost**. This derives from the monopolist's profit function, Profit = Total Revenue – Total Cost. The monopolist will produce as long as each unit provides positive profit – in other words, as long as **marginal profit ≥ 0**. In non-economic terms, he'll continue producing as long as it's worth it for him – as long as each extra unit he produces gives him at least a little bit of profit. Once his marginal profit is 0, there's no point in producing any further, since every unit he produces will then cost him a little bit of profit. Because Profit = Total Revenue – Total Cost, another equation holds: Marginal Profit = Marginal Revenue – Marginal Cost. Saying that marginal profit is nonnegative means exactly that marginal revenue is at least as much as marginal cost.

Finally, note that for a price taker, marginal revenue is just the price of the last (marginal) unit; a monopolist's marginal revenue is a bit less than the price, since selling one more unit means lowering the price on every unit sold. Either way, keep in mind that the monopolist has control over the quantity that's produced. Thus, he has control over the price, and will choose quantity to get his optimal profit.
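Here's a toy numeric version of that stopping rule. The demand curve (P = 100 – Q) and the marginal cost (20) are hypothetical numbers of my own, just to make the logic concrete:

```python
# Monopoly sketch with hypothetical linear demand P = 100 - Q.
# For linear demand, marginal revenue falls twice as fast as price:
# MR = 100 - 2Q.
def marginal_revenue(q):
    return 100 - 2 * q

marginal_cost = 20

# Keep producing while the next unit's marginal profit (MR - MC) is
# nonnegative; stop as soon as it would turn negative.
q = 0
while marginal_revenue(q + 1) - marginal_cost >= 0:
    q += 1

price = 100 - q
print(q, price)  # 40 units, sold at a price of 60
```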


In a **perfectly competitive market**, three conditions hold:

- All goods are **identical**. If the seller is selling apples, then all apples are the same – there are no McIntosh apples, no Red Delicious apples, just apples.
- There are **lots of sellers**, so sellers can't price-fix, because there will always be another seller willing to undercut.
- There are **lots of buyers**, so one buyer's boycott won't make a difference.

The last two conditions together mean that no one has any **market power**. That means, essentially, that no action an individual buyer or seller takes can affect the price of the goods. If ANY of these conditions isn't true, then we're not dealing with a perfectly competitive market – it might be a monopoly or a monopsony, or it might be possible to price-discriminate, but you'll have to do a bit more work to find an equilibrium.

Speaking of that, an **equilibrium** in microeconomics happens when we find a price where buyers are willing to buy exactly as much as sellers are willing to sell. Mathematically, an equilibrium price is a price such that Q^{S}(P) = Q^{D}(P), where Q^{S} is the quantity supplied, Q^{D} is the quantity demanded, and the (P) means that the quantities depend on the price P. Since the quantity is the same, economists sometimes call an equilibrium quantity Q* and the equilibrium price P*.

**Consumers** are optimizing their **utility**, or happiness. This might be represented using something called a utility function, or it might be **aggregated** and presented as a **market demand function** where the quantity demanded by everyone in the world is decided as a function of the price of the good. A common demand function would look like this:

Q^{D}(P) = 100 – 2*P

That means if the price is $0, there are 100 people willing to buy one good each; at a price of $1, there are (100 – 2*1) = 98 people willing to buy one good each; and so on, until no one is willing to buy at a price of $50. Demand curves slope downward because as price goes up, quantity demanded goes down. Essentially, a demand function allows us to skip the consumer optimization step. The thing to remember is that at every quantity, the demand curve gives the marginal buyer's willingness to pay.

**Firms** optimize **profit**, which is defined as **total revenue minus total costs**. If we have a firm's costs, we can figure out how much it would need to charge to break even on each sale. Let's say that it costs a firm $39 to produce each good. The firm won't produce at all until it at least breaks even – that is, until its marginal benefit is at least equal to its marginal cost, at which point it is **indifferent**. Then, as the price rises above $39, charging more leads to more profit. Even if the firm's marginal cost changes as it produces more units, the price of the **marginal unit** will need to be at least as much as the **marginal cost** of that unit. Otherwise, selling it wouldn't make sense.

The first condition to remember when solving microeconomics problems is that in a perfectly competitive market, a firm will set Price equal to Marginal Cost. If you have price and a marginal cost function, you can find the equilibrium quantity. If you have supply and demand functions, set Q^{S}(P) = Q^{D}(P) and solve for the price, or simply graph the functions and figure out where they meet.
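As a concrete sketch, take the demand function from above and pair it with a hypothetical supply function Q^{S}(P) = 3P (my own assumption, since this post doesn't specify one):

```python
# Demand from above: Q_D = 100 - 2P. Supply is a hypothetical Q_S = 3P.
def q_demanded(p):
    return 100 - 2 * p

def q_supplied(p):
    return 3 * p

# Solve Q_S(P) = Q_D(P): 3P = 100 - 2P  =>  5P = 100  =>  P* = 20.
p_star = 100 / 5
q_star = q_supplied(p_star)
print(p_star, q_star)  # 20.0 60.0
```

At P* = 20, buyers want exactly the 60 units sellers are willing to sell, so the market clears.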


The fact is that talking imposes an extra cost on the filibustering party. When people are actually forced to hold the floor and talk, filibustering stops being free.

**The model:**^{1}

First, like all good economists, let’s make some simplifying assumptions. Say there are two parties, the Bears and the Bulls, and that there are 59 Bears and 41 Bulls. Assume that everyone votes strictly along party lines, so every vote comes out in favor of the Bears 59-41. That’s not enough for a 60% majority, so under the current system, the Bulls can filibuster every bill without stopping other legislation.

Parties aim to maximize their political capital, which is generated in two ways:

- **Passing bills.** The more partisan a bill is, the more capital is generated. A bill that the entire country would agree to pass has zero partisanship; a bill only Bears would vote for has very high partisanship. The minority party generates goodwill by voting for bills, but less of it when the bills are more partisan.
- **Public perception (goodwill).** Filibustering leads to a negative public perception. This is directly related to how partisan a bill is – filibustering a totally nonpartisan bill (discount bus fares for war widows) would lead to a highly negative perception, but filibustering a very contentious bill would be offset. Similarly, a filibuster stops all business, so the longer it goes on, the angrier people get.

The Bulls' capital generation would look like this, with the "talking filibuster" term last, where P = Partisanship and D = days spent filibustering:

Under the current system, days spent filibustering is 0, since nobody actually has to filibuster. That is, the **marginal cost** of filibustering a bill is 0. If a bill passes, the Bulls generate 41 political capital per unit of partisanship for voting, but lose some capital for losing the vote. If a bill has Partisanship of 20.5, then the Bulls are **indifferent** between filibustering and allowing the vote; anything more partisan will definitely be filibustered, and anything less partisan will be voted on.

If talking filibusters are required, though, the whole thing gets much more complicated. Adding a marginal cost for being on TV filibustering makes the minority party far less likely to filibuster. The marginal political capital generated for filibustering for one day is

The Bulls are indifferent between filibustering and allowing the vote when Partisanship is about 20.5012. That’s just what we’d expect – that it takes a more contentious bill to justify a talking filibuster than a silent filibuster. Then, let’s take a look at a two-day filibuster:

A slightly longer filibuster requires a slightly more controversial bill, requiring Partisanship to be 20.5048. Finally, let’s take a look at a 90-day (3-month) filibuster:

That would require a bill of partisanship 26.338. The model displays the expected features: it takes a more contentious bill to merit a filibuster at all, and longer filibusters require much more contentious bills. If we raise the cost of doing something, it gets done less often.



The category can be determined knowing two things: Is the good **rival**? Is it **excludable**?

If a good is **rival**, one person using it prevents someone else from using it. This is a bit of a weird concept, since air can only be breathed by one person at a time, but air is so abundant as to be nonrival. Air in a SCUBA tank, though, would be rival, since only one person can breathe from it at a time. If a good is **excludable**, you can prevent someone from using the good if you don’t want them to. My apartment is excludable because I have a lock on the door.

**Private goods** are rival and excludable. Just about anything you can think of going to a store and buying is a private good. My TI-36X Pro calculator is rival (if you’re using it, I can’t) and it’s excludable (if I don’t want you to use it, I’ll just put it in my pocket). Private goods have some interesting properties and merit further discussion.

**Public goods** are defined as goods that are nonrival and nonexcludable. The classic example of a public good is military defense. If the Army exists and prevents other countries from invading the United States, then there’s no way to keep me from benefiting from that defense that doesn’t also prevent someone else (e.g., my no-good brother) from benefiting (so defense is nonexcludable). Similarly, defending the United States is nonrival because the fact that I’m defended doesn’t have any effect on how defended someone else is. I don’t use up military defense, so it doesn’t (in the simplest case) cost anything to defend my neighbor if I’m already being defended.

**Club goods** are excludable but nonrival. My landlord’s wireless internet connection is a club good. It’s excludable, because there’s a password on it; it’s nonrival, though, because up to a certain point it doesn’t matter how many people are connected to the network. My enjoyment of the internet doesn’t depend on whether my wife is online or not. (It would take a whole bunch of people, enough to cause **congestion**, to make my internet too slow to use.)

**Common goods** are pretty interesting, because there’s an intuitive concept called the **tragedy of the commons**. Common goods are rival, but nonexcludable. The classic example here is a meadow where you graze your sheep. Every one of us can use the meadow, since it’s public property, but if I graze my sheep here, they eat some of the grass and there’s less for your sheep. It’s in both of our interests to conserve the meadow, but it’s also in both of our interests to cheat and consume as much as we want to. Common goods tend to get used up.
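The whole 2×2 taxonomy fits in a small lookup table. A sketch (the names are mine, and the examples in the comments come from the post):

```python
# The four categories of goods, keyed on (rival, excludable).
CATEGORIES = {
    (True, True):   "private good",
    (False, False): "public good",
    (False, True):  "club good",
    (True, False):  "common good",
}

def classify(rival, excludable):
    return CATEGORIES[(rival, excludable)]

print(classify(True, True))    # calculator -> private good
print(classify(False, False))  # military defense -> public good
print(classify(False, True))   # password-protected wifi -> club good
print(classify(True, False))   # public meadow -> common good
```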

What goods seem to straddle the line between two of these categories, and how do you think that confusion can be resolved?
