## Anecdote Alert: Do restaurant deposits depress attendance?

January 1, 2015

Posted by tomflesher in Examples.

Last night I spent New Year’s Eve at one of my favorite restaurants, Verace in Islip, New York. I actually did New Year’s Eve there last year, too, and there were three very interesting changes. The upshot is that the restaurant, though it had a fantastic menu, was significantly less full than it was last year, and the crowd skewed slightly older.

First, the price of the dinner was $65 last year and $85 this year. That corresponds to about a 30% price hike. That might deter some people, but I'm skeptical. The price elasticity of demand for restaurant meals is about 2.3, or very elastic. (That means that if the price of a restaurant meal rises by 1%, the quantity of restaurant meals sold drops by about 2.3%.) If that's the correct elasticity to apply here, it would explain roughly a 69% drop in attendance, but I'm not so sure that restaurant meals on New Year's Eve are as elastic as restaurant meals during the rest of the year. The well-known Valentine's Day effect causes the price elasticity for certain goods (cut roses) to drop on Valentine's Day, and since a meal at home isn't a close substitute for a restaurant meal on a special occasion, I'm skeptical that this price change explains the precipitous drop in attendance.
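For readers who want to check the arithmetic, here's a quick sketch that takes the constant-elasticity approximation at face value (an assumption, not a claim about how elasticities really behave over a 30% price change):

```python
# Back-of-the-envelope check of the elasticity arithmetic above.
# Assumes percent change in quantity = elasticity * percent change in price.
def predicted_drop_pct(old_price, new_price, elasticity):
    price_change_pct = (new_price - old_price) / old_price * 100
    return elasticity * price_change_pct

print(round(predicted_drop_pct(65, 85, 2.3), 1))
```

The exact figure is about 70.8%; the 69% in the text comes from rounding the price hike to 30% before multiplying.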

Second, the restaurant required a deposit this year – $50 per person, returned at the beginning of the meal as a gift card. This was my first hypothesis, but I'm not sure it's much of an explanation. For one thing, I put down my deposit on Monday, so there was no real loss of value. $50 per person to hold a spot is well within the means of most demographics you see at Verace most nights [more on this in a moment], especially since it operated as a credit on the bill. No dice here, really.

Third – and this might be the big one – Verace is part of the Bohlsen Restaurant Group, which operates several restaurants at slightly different price points. This year, BRG made a big deal of advertising a distinct experience at each of its restaurants. Specifically, Teller's had a much more expensive steakhouse offering, Verace was a meal only, but Monsoon – their lower-priced Asian fusion restaurant – had a modular menu with options of $75 for a meal (a bit cheaper than Verace, but not much) and $75 for an open bar. The open bar and Monsoon's dance floor almost surely made it more attractive to younger revelers. That also explains the shift in demographics – Verace's younger crowd may have been cannibalized by another BRG restaurant.

In the alternative, our waiter’s hypothesis: The manager did a great job seating people. “This guy,” says he, “is a magician.” He may be, but I’m more interested in seeing Monsoon’s numbers.

## Uncertainty Over Time (Lords)

November 25, 2013

Posted by tomflesher in Micro.

The previous post introduced a problem that arose in the Doctor Who 50th Anniversary special in which uncertainty was a core element. To explore that problem in depth, we’ll need some tools to work with uncertainty. Also, that problem was a little grim, so let’s take the edge off.

Let’s say that a friend of mine, Matt, is visiting and I need to order food. Matt’s favorite food is fish custard, so it makes sense that I should order him a dish of fish custard. Simple enough. Of course, if my other friend Peter were coming over, and I knew Peter didn’t like fish custard and much preferred haggis, it wouldn’t make sense for me to order him fish custard. Obviously I’d order haggis for him. To make this work mathematically, let’s say that each friend of mine likes his preferred dish about the same, so we could say that Matt gets utility of uM(fish custard) = 1 from eating fish custard and uM(haggis) = 0 from eating haggis, and Peter gets the reverse – uP(fish custard) = 0 and uP(haggis) = 1. If I want to maximize my friend’s utility, then I should do so by buying each friend his favorite dish. But what if I don’t know who’s coming over?

Intuitively, it makes sense – if Matt’s more likely to come over, I should order fish custard. If Peter’s probably on the way, haggis should go on the menu. If they’re equally likely, flip a coin. More mathematically, if the probability that Matt is coming is πM and the probability that Peter is coming is πP (and πM + πP = 1), the expected utility of my guest when I order fish custard is

E[u(fish custard)] = πMuM(fish custard) + πPuP(fish custard),

which reduces to

πMuM(fish custard) + πP(0).

Since uM(fish custard) = 1, then the expected utility of my guest from fish custard is just πM. For haggis, of course, the same logic applies – Matt gets 0 utility, so we can ignore him, and the expected utility ends up being πP. So, whichever friend of mine is more likely to show up should get his favorite dish ordered, and if I don’t know who’s coming, I might as well just draw names from a hat.
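The expected-utility calculation above is easy to mechanize. Here's a minimal sketch using the utilities from the text (the probability 0.7 is just an illustrative value):

```python
# Expected utility of a dish when the guest's identity is uncertain,
# using the utilities from the text: Matt likes fish custard, Peter haggis.
UTILS = {"Matt": {"fish custard": 1.0, "haggis": 0.0},
         "Peter": {"fish custard": 0.0, "haggis": 1.0}}

def expected_utility(dish, p_matt):
    p_peter = 1 - p_matt
    return p_matt * UTILS["Matt"][dish] + p_peter * UTILS["Peter"][dish]

# With pi_M = 0.7, fish custard beats haggis 0.7 to 0.3, so order it.
print(expected_utility("fish custard", 0.7))  # 0.7
```

With these all-or-nothing utilities, the expected utility of each dish is just the probability that its fan shows up, exactly as derived above.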

This gets a little bit more complicated if both of my friends like each dish somewhat. So, let's consider what happens if uM(fish custard) = 1 and uM(haggis) = 0, but uP(fish custard) = 0.5 and uP(haggis) = 1. Then our calculation for the expected u(haggis) stays the same (it's still πP), but now our E[u(fish custard)] ends up being

πMuM(fish custard) + πPuP(fish custard) = πM*(1) + πP*(.5)

So now, in order to figure out what dish to order, we need to know what the probabilities are! If it’s fifty-fifty, then E[u(haggis)] is .5, but E[u(fish custard)] is .5 + .5*.5 = .75! In that case, we should order fish custard, even though the guys are equally likely to show up, since Peter likes fish custard a little bit, too. We don’t get to flip a coin (mathematically, we don’t reach our indifference point) until

πMuM(fish custard) + πPuP(fish custard) = πMuM(haggis) + πPuP(haggis)

or

πM(1) + πP(.5) = πM(0) + πP(1) => πM + πP(.5) = πP => πM = .5*πP

That means we don't reach the indifference point until Peter is twice as likely to arrive as Matt! Funny how these probabilities influence the choices we'll make. Sometimes things get a bit more complicated, but that's a post for another time.
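The indifference point can be checked numerically. A minimal sketch, using the mixed utilities from this second example:

```python
# Second example: u_M(fc) = 1, u_M(h) = 0, u_P(fc) = 0.5, u_P(h) = 1.
def eu_fish_custard(p_matt):
    return p_matt * 1.0 + (1 - p_matt) * 0.5

def eu_haggis(p_matt):
    return (1 - p_matt) * 1.0

# Both are linear in p_matt, so set them equal:
# p + 0.5*(1 - p) = 1 - p  =>  1.5*p = 0.5  =>  p = 1/3.
p_star = 1 / 3
print(eu_fish_custard(p_star), eu_haggis(p_star))
```

Both sides come out to 2/3: indifference happens exactly when Matt's probability is 1/3, i.e. when Peter is twice as likely to show up.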

… lord.

## SPOILERS: The Tenth and Eleventh Doctors' Problem

November 24, 2013

Posted by tomflesher in Uncategorized.

Major, major spoilers here. If you haven’t yet seen the Doctor Who 50th Anniversary Special, please avoid this post.

During the 50th Anniversary Special episode of Doctor Who, there was an interesting scene where a group of characters and their malevolent clones were negotiating a treaty that would determine, in part, whether the Zygons (an empire of malevolent aliens) would take over the Earth. This presents some quite interesting problems that were solved by some quite interesting guys – the Tenth and Eleventh incarnations of the Doctor.

Obviously each side had opposite incentives. The Zygons needed a planet to live on, and the human population did as well. Take for granted that the populations can't coexist, and you can see that this is what's known in economics as a zero-sum game. Since they would not agree to share the planet, one side would have to come out of the negotiations the winner, with the other losing. Similarly, since each Zygon cloned a human negotiator, it inherited at least some of its human's memories, meaning that we can assume symmetric information. In cases like this, it's usually up to the agent with the higher valuation of the asset being negotiated to make some sort of concession to the agent with the lower valuation in exchange for that agent giving up any claims – in other words, to pay them to go away. In other cases, the agents will agree to let an arbitrator make the decision for them, under the assumption that the arbitrator won't be clouded by personal interests and will make the "best" choice, for some value of "best" to be determined.

The Doctors (played by David Tennant and Matt Smith) opted for an entirely different (and clever) solution, albeit one that wouldn't work in the real world: they wiped the humans' and Zygons' memories so that each would forget which side they were on, allowing for efficient resolution of the problem. The theory, which is similar to John Rawls's veil of ignorance, is that people who don't know whether their side stands to benefit will reach an equitable solution, if not the one they might have argued for in the first place. As such, it's an example of hidden information. The same thinking generates the idea in law and economics of efficient breach. This might result in a solution that's efficient in the economic sense, but it probably won't leave anyone as well off as they could have been.

## Shockingly, Economists Price Things Intelligently

May 27, 2013

Posted by tomflesher in Uncategorized.

I was excited to see that there's a new edition of Recursive Macroeconomic Theory by Lars Ljungqvist and the Nobel Prize-winning Tom Sargent. I was even more excited to see prices starting to fall into place with respect to the Kindle edition.

Textbooks are a special product because demand for them has historically been inelastic: if you need the book, you need it. You don't really have a choice about whether to buy it. That's led price-sensitive consumers to buy used textbooks, but there's a limited supply. (Of course, even more price-sensitive consumers download the books illegally.) There's also a pretty big market in old editions, even though there are usually slight differences between editions.

Recursive macro is a pretty standard subject, and although the third edition of Ljungqvist and Sargent includes new material, it probably won't appreciably change the experience of a first-year grad student taking grad macro. There's some benefit to the new edition, sure, but many students will be faced with the following choice: pay $82.44 (as of today) for a new third edition, or pay $49.99 for a second edition. That's a significant savings. However, second-edition sales don't benefit the publisher, so how can it create an incentive for price-sensitive buyers to give their money to the publisher rather than to the secondary market?

Well, a Kindle version of the third edition is $51.29. Even price-sensitive students would probably find $1.30 a fair price to pay for an up-to-date edition, assuming they already had a Kindle-equipped device like an iPad. As used copies of the third edition accumulate, their prices will probably collect around the $50 mark as well, and if they don't, I'd expect the price of the Kindle version to float along with them. Make it easy for price-sensitive consumers to give you money instead of giving it to the secondary market, and some of them will.

## Don't Discount the Importance of Patience

April 9, 2013

Posted by tomflesher in Micro, Teaching.

Uncertainty is one explanation for why interest rates vary. Tolerance for uncertainty is called risk aversion, and it can be pretty complicated. (We'll talk about it a little bit later on.) Another big concept is patience. Willingness to wait is also pretty complicated, but that's our topic for today.

It's easy to imagine some reasons that people would have different levels of patience. For one, you'd expect a healthy thirty-year-old (named Jim) to be more patient than a ninety-year-old (named Methuselah). What if someone (named Peter) offered us a choice between $100 today or a larger amount of money a year from now? How much would it take for Jim and Methuselah to take the delayed payoff? Would they take $100 a year from now? A lot can change in a year:

• There could be a whole bunch of inflation, and the $100 will be worth less next year than it is now. Boom, we've lost.
• We could put the money in the bank and earn a few basis points of interest. Boom, we’ve lost.
• We could die and not be able to pick up the money. Boom, we’ve lost.
• Peter could die and we wouldn’t be able to collect. Boom, we’ve lost.

Based on these, we’ll want a little bit more money next year than this year in order to be willing to take the money later instead of the money now. Statistically, though, Jim is more likely than Methuselah to be there to pick up the money.

Neither would take any less than $100 next year, but that's just a lower limit. According to Bankrate.com, Discover Bank is paying 0.8% APY, which means the $100 would be worth 0.8% more next year – just by putting the money in the bank, we can trade the risk of Peter dying for the risk that the bank goes bust (really unlikely). That's an improvement in risk and an improvement in payoff, so there's no reason to take any less than $100.80. Again, though, this is a lower bound. Peter still has to pay for making them wait. That's where the third point comes into play. Methuselah is probably not going to live another year. It's much more likely that he'll get to spend the $100 today than whatever he gets in a year; to make it worth the wait, the payoff would have to be huge. Methuselah views money later as worth a lot less than money today. He might need $200 to make it worth the wait. Jim, on the other hand, might only need $125. He has more time, so he's much more patient.
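Those required payoffs pin down each person's patience. A minimal sketch: indifference between $100 today and a payoff next year means today's value equals the discounted payoff, so the discount factor is just $100 divided by the required payoff.

```python
# Back out each person's one-year discount factor from the experiment:
# indifference means 100 = beta * payoff, so beta = 100 / payoff.
def discount_factor(required_payoff, today_value=100):
    return today_value / required_payoff

beta_methuselah = discount_factor(200)  # 0.5
beta_jim = discount_factor(125)         # 0.8
print(beta_methuselah, beta_jim)
```

The larger the payoff someone demands for waiting, the smaller their discount factor, i.e. the less patient they are.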

This level of patience is called a discount rate and is usually written β. You can do this sort of experiment to figure out someone's patience level. You'd then be able to set up an equation like this, where the benefit is the $100 and the cost is what you give up by waiting:

$100 = \beta \times Payoff$

Methuselah, then, would have $100 = \beta \times 200$, so β = 1/2. Jim would have $100 = \beta \times 125$, so β = 4/5. Based on this, we can say that Methuselah values money one year from now at 50% of its current value, but Jim values money one year from now at 80% of its current value. Everyone's discount rate is going to be a little bit different, and different discount rates can lead people to make different choices. If Peter offers $100 today or $150 next year, Jim will wait patiently for the $150. Methuselah will jump at the $100 today. Both of them are rational even though their choices are different.

## Evaluating Different Market Structures

December 13, 2012

Posted by tomflesher in Micro, Teaching.

Market structures like perfect competition, monopoly, and Cournot competition have different implications for the consumer and the firm. Measuring the differences can be very informative, but first we have to understand how to do it.

Measuring the firm's welfare is fairly simple. Most of the time we're thinking about firms, what we're thinking about will be their profit. A business's profit function is always of the form

Profit = Total Revenue – Total Costs

Total revenue is the total money a firm takes in. In a simple one-good market, this is just the number of goods sold (the quantity) times the amount charged for each good (the price). Marginal revenue represents how much extra money will be taken in for producing one more unit.
Total costs need to take into account two pieces: the fixed cost, which represents things the firm cannot avoid paying in the short term (like rent and bills that are already due), and the variable cost, which is the cost of producing each unit. If a firm has a constant variable cost, then the cost of producing the third item is the same as the cost of producing the 1000th; in other words, constant variable costs imply a constant marginal cost as well. If marginal cost is falling, there's efficiency in producing more goods; if it's rising, each unit is more expensive than the last. The marginal cost is the derivative of the variable cost, but it can also be figured out by looking at the change in cost from one unit to the next.

Measuring the consumer's welfare is a bit more difficult. We need to take all of the goods sold and measure how much more people were willing to pay than they actually did. To do that we'll need a consumer demand function, which represents the marginal buyer's willingness to pay (that is, what the price would have to be to get one more person to buy the good). Let's say the market demand is governed by the function

QD = 250 – 2P

That is, at a price of $0, 250 people will line up to buy the good. At a price of $125, no one wants the good (QD = 0). In between, quantity demanded is positive. We'll also need to know what price is actually charged. Let's try it with a few different prices, but we'll always use the following format1:

Consumer Surplus = (1/2)*(pmax – pactual)*QD

where pmax is the price at which 0 units would be sold and QD is the quantity demanded at the actual price. In our example, pmax is 125. Let's say that we set a price of $125. Then no goods are demanded, and anything times 0 is 0.

What about $120? At that price, the quantity demanded is (250 – 240), or 10; the price difference is (125 – 120), or 5; half of 5*10 is 25, so that's the consumer surplus. That means that the people who bought those 10 units were willing to pay $25 more, in total, than they actually had to pay.2
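The triangle formula is easy to check numerically. A minimal sketch for the demand curve above:

```python
# Numeric check of the consumer surplus triangle for demand QD = 250 - 2P.
def consumer_surplus(price, a=250, b=2):
    p_max = a / b                  # price at which demand hits zero (125)
    qd = max(a - b * price, 0)     # quantity demanded at this price
    return 0.5 * (p_max - price) * qd

print(consumer_surplus(125))  # 0.0
print(consumer_surplus(120))  # 25.0
```

The same function works for any price between 0 and 125, so it's a handy way to see surplus shrink as price rises.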

Finally, at a price of $50, 150 units are demanded; the total consumer surplus is (1/2)(75)(150), or 5,625.

Whenever the number of firms goes up, the price decreases and the quantity increases. When quantity increases or price decreases, all else equal, consumer surplus goes up; consequently, more firms in competition are better for the consumer.

Notes:
1 Does this remind you of the formula for the area of a triangle? Yes. Yes it does.
2 If you add up each person's willingness to pay and subtract 120 from each, you'll underestimate this slightly. That's because it ignores the slope between points, meaning that there's a bit of in-between willingness to pay necessary to make the curve a bit smoother. Breaking this up into 100 buyers instead of 10 would lead to a closer approximation, and 1,000 instead of 100 even closer. This is known mathematically as taking limits.

## Duopoly and Cournot Equilibrium

December 12, 2012

Posted by tomflesher in Micro, Teaching.

A few days ago, we discussed perfectly competitive markets; yesterday, we talked about monopolistic markets. Now let's expand into a case in between – a duopolistic, or two-seller, market. This is usually called a Cournot problem, after the economist who invented it.

We'll maintain the assumption of identical goods, so that consumers won't be loyal to one company or the other. We'll also assume that each company has the same costs, so we're looking at identical firms as well. Finally, assume that there are a lot of buyers, so the firms face a market demand of, let's say, QD(P) = 500 – 2P, so P = 250 – QD(P)/2. Since the firms are producing the same goods, QS(P) = q1(P) + q2(P). Neither firm knows what the other is doing, but each firm knows the other is identical to it, and each firm knows the other knows this.
Even though neither firm knows what's going on behind the scenes, each will assume that a firm facing the same costs and revenues is rational and will optimize its own profit, so they can make good, educated guesses about what the other firm will do. Each firm will determine the other firm's likely course of action and compute its own best response. (That's the one that maximizes its profit.)

Now, let's take a look at what the firms' profit functions will look like. Recall that Total Profit = Total Revenue – Total Cost, and that Marginal Profit = Marginal Revenue – Marginal Cost. Companies will choose quantity to optimize their profit, so they'll continue producing until their expected Marginal Profit is 0, and then produce no more.

Firm 1's total revenue is P×q1 – revenue is always price times quantity. Keeping in mind that price is a function of quantity, we can rewrite this as (250 – QD(P)/2)×q1. Since QD(P) = q1 + q2, this is the same as writing (250 – (1/2)(q1 + q2))q1. Then, we need to come up with a total cost function. Let's say it's 25 + q1²/2, where 25 is a fixed cost (representing, say, rent for the factory) and q1²/2 is the variable cost of producing the goods. Then, Firm 1's profit function is:

Profit1 = (250 – (1/2)(q1 + q2))q1 – 25 – q1²/2

or

Profit1 = 250q1 – q1²/2 – q1q2/2 – 25 – q1²/2

The marginal profit is the change in the total profit function if Firm 1 produces one more unit; in this case it's easier to just use the calculus concept of taking a derivative, which yields

Marginal Profit1 = 250 – q1 – q2/2 – q1 = 250 – 2q1 – q2/2

Since the firms are identical, though, firm 1 knows that firm 2 is doing the same optimization! So q1 = q2, and we can substitute it in:

Marginal Profit1 = 250 – 2q1 – q1/2 = 250 – 5q1/2

This is 0 where 250 = 5q1/2, or where q1 = 100. Firm 2 will also produce 100 units. Total quantity supplied is then 200, and the price will be 250 – 200/2 = 150.
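The best-response logic can be checked numerically. A minimal sketch, assuming inverse demand P = 250 − Q/2 and a per-firm cost of 25 + q²/2 (so marginal cost is q): from firm 1's first-order condition, 250 − 2q1 − q2/2 = 0, the best response to any guess about the rival is q1 = 125 − q2/4, and iterating both firms' best responses converges to the equilibrium.

```python
# Best-response iteration for a symmetric Cournot duopoly.
# Assumed setup: inverse demand P = 250 - Q/2, Q = q1 + q2, and each
# firm's cost 25 + q**2 / 2 (marginal cost q), giving BR(q) = 125 - q/4.
def best_response(q_other):
    return 125 - q_other / 4

q1 = q2 = 0.0
for _ in range(100):  # iterate until the guesses settle down
    q1, q2 = best_response(q2), best_response(q1)

price = 250 - (q1 + q2) / 2
print(round(q1, 4), round(q2, 4), round(price, 4))
```

With these numbers the iteration converges to q1 = q2 = 100 and a market price of 150: neither firm wants to deviate, which is exactly the Cournot equilibrium condition.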
We can figure out each firm's profit simply by plugging in these numbers:

Total Revenue = P×q1 = 150×100 = 15,000
Total Cost = 25 + q1²/2 = 25 + 5,000 = 5,025
Total Profit = 9,975

This was a bit heavier on the mathematics than some of the other problems we've talked about, but all that math is just getting at one big idea: it's rational to produce when you expect your marginal benefit to be at least as much as your marginal cost.

## Monopolistic Markets

December 11, 2012

Posted by tomflesher in Micro, Teaching.

Continuing our whistle-stop tour through market types, today's topic is monopolies. Yesterday's discussion was of perfectly competitive markets, where three conditions held:

• Identical goods
• Lots of sellers
• Lots of buyers

Today, we'll talk about what happens when that second condition doesn't hold – that is, when sellers have market power. When sellers don't have market power, they have to price according to what the market will bear. If they price too high, someone will undercut them, but if they price too low, they'll lose money. The only thing they can do is price at their break-even point, where price is equal to marginal cost. (This is sometimes called the zero-profit condition.)

When only one seller exists, he is called a monopolist, and the market is called a monopoly. A monopoly can arise for one of two reasons: either the owner has exclusive access to some important resource, which leads to a natural monopoly, or the owner has an ordinary monopoly because of laws, barriers to entry, or some other reason. A natural monopoly is one that arises not because of anticompetitive action by the monopolist but because of exclusive access to some resource.
For example, owning a waterfall means you have unbridled access to it for hydroelectric purposes; being the first to lay cable or pipelines makes it inefficient for anyone else to access those resources. Essentially, anything with a high fixed cost and a very low marginal cost is a good candidate for natural monopoly status.

Regardless of whether a monopoly is natural or ordinary, a monopolist isn't subject to the same zero-profit condition as he would be in a perfectly competitive market, since there's no one to undercut him if he prices higher than his own marginal cost. He's free to do the absolute best he can – in other words, to maximize his profit. The monopolist doesn't have to take the price, as a perfectly competitive market would force him to; he'll choose the price himself by choosing the quantity he produces.

The monopolist's profit-maximization condition is that marginal revenue = marginal cost. This derives from the monopolist's profit function, Profit = Total Revenue – Total Cost. The monopolist will produce as long as each unit provides positive profit – in other words, as long as marginal profit ≥ 0. In non-economic terms, he'll continue producing as long as it's worth it for him – as long as each extra unit he produces gives him at least a little bit of profit. Once his marginal profit is 0, there's no point in producing any further, since every unit past that point will cost him a little bit of profit. Because Profit = Total Revenue – Total Cost, another equation holds: Marginal Profit = Marginal Revenue – Marginal Cost. Saying that marginal profit is nonnegative means exactly that marginal revenue is at least as much as marginal cost.

Finally, note that marginal revenue is the revenue from the last (marginal) unit, but keep in mind that the monopolist has control over the quantity that's produced. Thus, he has control over the price, and will choose quantity to get his optimal profit.
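The MR = MC condition can be seen directly by searching over quantities. A minimal sketch with hypothetical numbers (inverse demand P = 125 − Q/2 and a constant marginal cost of 25, both chosen purely for illustration):

```python
# Profit maximization for a monopolist via brute-force search over
# integer quantities. Demand P = 125 - Q/2 and marginal cost 25 are
# hypothetical numbers chosen for illustration.
def profit(q, mc=25):
    price = 125 - q / 2
    return price * q - mc * q

best_q = max(range(0, 251), key=profit)
best_price = 125 - best_q / 2
print(best_q, best_price)  # 100 75.0
```

At the optimum, marginal revenue (125 − Q) equals marginal cost (25) at Q = 100 – the search and the first-order condition agree, and the monopoly price of 75 sits well above marginal cost.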
## Perfectly Competitive Markets

December 10, 2012

Posted by tomflesher in Micro, Teaching.

When solving economic problems, the type of firm you're dealing with can lead you to use different techniques to figure out the firm's rational course of action. This week, I'll set up a thumbnail sketch of how to solve different types of firms' problems, since a common exam question in intermediate microeconomics is to set up a firm's production function and ask a series of different questions. The important thing to remember about all types of markets is that every economic agent is optimizing something.

In a perfectly competitive market, three conditions hold:

• All goods are identical. If the seller is selling apples, then all apples are the same – there are no MacIntosh apples, no Red Delicious apples, just apples.
• There are lots of sellers, so sellers can't price-fix, because there will always be another seller who will undercut them.
• There are lots of buyers, so a buyer boycotting won't make a difference.

The last two conditions together mean that no one has any market power. That means, essentially, that no action an individual buyer or seller takes can affect the price of the goods. If ANY of these conditions isn't true, then we're not dealing with a perfectly competitive market – it might be a monopoly or a monopsony, or it might be possible to price-discriminate, but you'll have to do a bit more to find an equilibrium.

Speaking of that, an equilibrium in microeconomics happens when we find a price where buyers are willing to buy exactly as much as sellers are willing to sell. Mathematically, an equilibrium price is a price such that QS(P) = QD(P), where QS is the quantity supplied, QD is the quantity demanded, and the (P) means that the quantities depend on the price P. Since the quantity is the same, economists sometimes call the equilibrium quantity Q* and the equilibrium price P*.
Consumers are optimizing their utility, or happiness. This might be represented using something called a utility function, or it might be aggregated and presented as a market demand function, where the quantity demanded by everyone in the world is determined as a function of the price of the good. A common demand function would look like this:

QD(P) = 100 – 2*P

That means if the price is $0, there are 100 people willing to buy one good each; at a price of $1, there are (100 – 2*1) = 98 people willing to buy one good each; and so on, until no one is willing to buy if the price is $50. Demand curves slope downward because as price goes up, demand goes down. Essentially, a demand function allows us to ignore the consumer optimization step. Demand represents the marginal buyer's willingness to pay; remember that at every point on the demand curve, price equals willingness to pay.

Firms optimize profit, which is defined as total revenue minus total costs. If we have a firm's costs, we can figure out how much it would need to charge to break even on each sale. Let's say that it costs a firm $39 to produce each good. It won't produce at all until it at least breaks even – that is, until its marginal benefit is at least equal to its marginal cost, at which point it's indifferent. Then, as the price rises above $39, each sale brings in more profit. Even if the firm's marginal cost changes as it produces more units, the price of the marginal unit will need to be at least as much as the marginal cost for that unit. Otherwise, selling it wouldn't make sense.

The first condition to remember when solving microeconomics problems is that in a perfectly competitive market, a firm will set Price equal to Marginal Cost. If you have price and a marginal cost function, you can find the equilibrium quantity. If you have supply and demand functions, set QS(P) = QD(P) and solve for the price, or simply graph the functions and figure out where they meet.
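Setting QS(P) = QD(P) is easy to sketch in code. Here the demand function is the one from above; the supply function QS = 3P is a hypothetical example added for illustration:

```python
# Equilibrium: find the price where quantity supplied equals quantity
# demanded. Demand is from the text; supply QS = 3P is hypothetical.
def qd(p):
    return 100 - 2 * p

def qs(p):
    return 3 * p

# Linear case solved by hand: 3P = 100 - 2P  =>  P* = 20, Q* = 60.
p_star = 100 / 5
q_star = qd(p_star)
print(p_star, q_star)  # 20.0 60.0
```

At any price above 20, sellers want to sell more than buyers will take; below 20, buyers want more than sellers offer. Only at P* = 20 do the two sides agree, which is exactly the equilibrium condition.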

## When is a filibuster not a filibuster?

December 7, 2012

Posted by tomflesher in Micro, Models, Teaching.

A filibuster is a legislative technique in which a lawmaker in the minority blocks passage of a bill. Historically, that required talking continuously on the legislature floor, since doing so prevented anyone else from doing anything. In the US Senate, a bill can currently be filibustered simply by declaring it so – the filibustering Senator doesn't actually need to talk. The Senate is considering a rule change to move back to the historical, "talking" filibuster. In either case, a filibuster can be broken by a 60-vote supermajority (called cloture), but a talking filibuster can also be broken by the filibustering Senator getting tired and quitting. What's the economic difference between these two rules?

The fact is that talking imposes an extra cost on the filibustering party.

The model:1

First, like all good economists, let’s make some simplifying assumptions. Say there are two parties, the Bears and the Bulls, and that there are 59 Bears and 41 Bulls. Assume that everyone votes strictly along party lines, so every vote comes out in favor of the Bears 59-41. That’s not enough for a 60% majority, so under the current system, the Bulls can filibuster every bill without stopping other legislation.

Parties aim to maximize their political capital, which is generated in two ways:

• Passing bills. The more partisan a bill is, the more capital is generated. A bill that the entire country would agree to pass has zero partisanship; a bill only Bears would vote for has a very high partisanship. The minority party generates goodwill based on voting for bills, but it decreases when the bills are more partisan.
• Public perception (goodwill). Filibustering leads to a negative public perception. This is directly related to how partisan a bill is – filibustering a totally nonpartisan bill (discount bus fares for war widows) would lead to a highly negative perception, but filibustering a very contentious bill would be offset. Similarly, a filibuster stops all business, so the longer it goes on, the angrier people get.

The Bulls' capital generation would look like this, with the "talking filibuster" term last, where P = partisanship and D = days spent filibustering:

$C = -P^2 + 41P - \frac{D^2}{P}$

Under the current system, days spent filibustering is 0, since nobody actually has to filibuster. That is, the marginal cost of filibustering a bill is 0. If a bill passes, the Bulls generate 41 political capital per unit of partisanship for voting, but lose some capital for losing the vote. If a bill has Partisanship of 20.5, then the Bulls are indifferent between filibustering and allowing the vote; anything more partisan will definitely be filibustered, and anything less partisan will be voted on.

If talking filibusters are required, though, the whole thing gets much more complicated. Adding a marginal cost for being on TV filibustering makes the minority party far less likely to filibuster. The marginal political capital generated for filibustering for one day is

$MCapital = -2*P + 41 + \frac{1}{P^2}$

The Bulls are indifferent between filibustering and allowing the vote when Partisanship is about 20.5012. That’s just what we’d expect – that it takes a more contentious bill to justify a talking filibuster than a silent filibuster. Then, let’s take a look at a two-day filibuster:

$MCapital = -2*P + 41 + \frac{4}{P^2}$

A slightly longer filibuster requires a slightly more controversial bill, requiring Partisanship to be 20.5048. Finally, let’s take a look at a 90-day (3-month) filibuster:

$MCapital = -2*P + 41 + \frac{8100}{P^2}$

That would require a bill of partisanship of about 26.338. The model displays the expected features: it takes a more contentious bill to merit a filibuster at all, and longer filibusters require much more contentious bills. If we raise the cost of doing something, it gets done less often.
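The indifference thresholds above can be recovered numerically by finding where the marginal capital hits zero. A minimal sketch using bisection (the bracket [20, 40] is an assumption that covers all the cases in the post):

```python
# Find the partisanship level P at which marginal capital
# -2P + 41 + D^2 / P^2 equals zero, for a D-day talking filibuster.
def indifference_partisanship(days, lo=20.0, hi=40.0):
    f = lambda p: -2 * p + 41 + days ** 2 / p ** 2
    for _ in range(200):          # bisection; f is decreasing on [lo, hi]
        mid = (lo + hi) / 2
        if f(mid) > 0:
            lo = mid              # root lies above mid
        else:
            hi = mid              # root lies below mid
    return (lo + hi) / 2

for d in (0, 1, 2, 90):
    print(d, round(indifference_partisanship(d), 4))
```

The outputs match the thresholds in the post: 20.5 for the silent filibuster (D = 0), about 20.5012 and 20.5048 for one- and two-day talking filibusters, and about 26.338 for ninety days.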

Note:
1 As far as I know, this isn’t stolen from anyone, but if it’s similar to one currently in the literature please let me know so I can do some reading and properly credit the inventor.