Showing posts with label behavioral economics. Show all posts

Tuesday, June 30, 2020

30/6/20: Long-Term Behavioral Implications of COVID19 Pandemic


My article on the behavioural economics and finance implications of the COVID19 pandemic is now available on @TheCurrency website: https://www.thecurrency.news/articles/19675/debt-distress-and-behavioural-finance-the-post-pandemic-world-be-marked-by-deep-and-long-lasting-scars.


Hint: dealing with the COVID19 impact will be an uphill battle for many individuals, and for society and the economy at large.

This is a long-read piece, covering the general behavioural fallout from the pandemic as well as Ireland-specific data.

Tuesday, January 21, 2020

21/1/20: Investor Fear and Uncertainty in Cryptocurrencies


Our paper on behavioral biases in cryptocurrency trading is now published in the Journal of Behavioral and Experimental Finance, Volume 25, 2020:



We cover investor sentiment effects on the pricing processes of the 10 largest (by market capitalization) cryptocurrencies, showing a direct but non-linear impact of herding and anchoring biases in investor behavior. We also show that these biases are themselves anchored to the specific trends/direction of price movements. Our results provide direct links between investors' sentiment toward:

  1. Overall risky assets investment markets,
  2. Cryptocurrencies investment markets, and
  3. Macroeconomic conditions,
and market price dynamics for crypto-assets. We also show direct evidence that both market uncertainty and investor fear drive the price processes for crypto-assets.

Wednesday, July 3, 2019

3/7/19: Record Recovery: Duration and Perceptions


Last month, the ongoing 'recovery' clocked the longest duration of all recoveries in U.S. history (see chart 1 below), yet there is a continued and sustained perception of this recovery as being somehow weak.

And, in fairness, based on real GDP growth during the modern business cycles (next chart), the current expansion is hardly impressive:

However, public perceptions should really track personal disposable income dynamics more closely than aggregate economic output growth. So here is a chart plotting the evolution of real disposable income per capita through business cycles:


By disposable income metrics, here is what matters:

  1. The Great Recession was horrific in terms of duration and depth of declines in personal disposable income.
  2. The recovery has been extremely volatile over the first 7 years.
  3. It took 22 quarters for personal disposable income to recover to the levels seen in the third quarter of the recovery.
So what matters to the public perception of the recovery in the current cycle is the long-lasting memory of the collapse, laced with the negative perceptions lingering from the early years of the recovery.

To confirm this, look at the average rate of recovery in the real disposable income per quarter of the recovery cycle. The next two charts plot this metric, relative to the (a) full business cycle - from the start of the recession to the end of the recovery (next chart) and (b) recovery cycle alone - from the trough of the recession to the end of the recovery (second chart below):




So looking at the trough-to-peak part of the cycle (the expansion part of the cycle) alone implies we are experiencing the best recovery on modern record. But looking at the start-of-recession-to-end-of-recovery cycle, the current recovery period has been less than spectacular, ranking fourth in strength overall.
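The two averaging conventions just described can be sketched in a few lines. The quarterly index values below are made up for illustration; the post's charts use actual U.S. real disposable income per capita:

```python
# Average recovery rate in real disposable income per quarter, measured
# two ways: over the full cycle (recession start -> recovery end) and
# over the expansion alone (trough -> recovery end). The quarterly
# index values below are illustrative, not the actual U.S. data.

income = [100, 98, 95, 94, 95, 97, 100, 104, 108]  # per-capita index

peak = 0                           # recession starts at the prior peak
trough = income.index(min(income)) # trough of the recession
end = len(income) - 1              # end of the recovery

def avg_rate(series, start, stop):
    """Average per-quarter growth rate between two points in the series."""
    return (series[stop] / series[start]) ** (1 / (stop - start)) - 1

full_cycle = avg_rate(income, peak, end)    # metric (a)
expansion = avg_rate(income, trough, end)   # metric (b)

print(f"full cycle: {full_cycle:.4%} per quarter")
print(f"expansion : {expansion:.4%} per quarter")
```

With a deep recession at the front of the cycle, the full-cycle average is dragged well below the expansion-only average, which is exactly the gap the two charts display.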

Which is, of course, to say that our negative perceptions of the recovery are anchored to our experience of the crisis. We are, after all, behavioral animals, rather than rational agents.

Tuesday, April 30, 2019

30/4/19: Journal of Financial Transformation paper on cryptocurrencies pricing


Our paper with Daniel O’Loughlin and Bartosz Chlebowski, titled "Behavioral Basis of Cryptocurrencies Markets: Examining Effects of Public Sentiment, Fear and Uncertainty on Price Formation", is out in the new edition of the Journal of Financial Transformation, Volume 49, April 2019. Available at SSRN: https://ssrn.com/abstract=3328205 or https://www.capco.com/Capco-Institute/Journal-49-Alternative-Capital-Markets.



Saturday, January 12, 2019

11/1/19: Herding: the steady state of the uncertain markets


Markets are herds. Whether you care to believe in behavioral economics or not, safety is in liquidity and in benchmarking. Both mean that once large investors start rotating out of one asset class and into another, the herd follows, because what everyone is buying is liquid, and when everyone is buying, they are setting benchmark expected returns. If you, as a manager, perform in line with the market, you are safe at times of uncertainty and ambiguity. In other words, it is better to bet on losing or underperforming alongside the crowd of others than to bet on more volatile expected returns, even though these might offer a higher upside.

How does this work? Here:


Everyone loves Corporate debt, until everyone runs out of it and into Government debt. Everyone hates Government debt, until everyone hates corporate debt. It's ugly. But it is real. Herding is what drives markets, even though everyone is keen on paying analysts top dollar not to herd.

Friday, January 11, 2019

11/1/19: A Behavioral Experiment: Irish License Plates and Household Demand for Cars


While a relatively well known and understood fact in Ireland, this is an interesting snapshot of data for our students in Behavioral Finance and Economics course at MIIS.


In 2013, Ireland introduced a new set of car license plates that created a de facto natural experiment in behavioural economics. Prior to 2013, Irish license plates contained, as the first two digits, the year of car production (see lower two images). Since 2013, prompted by the fear of the number '13', the license plates contain, as the first three digits, the year and the half-year of the make.


Prior to the 2013 change in licenses, Irish car buyers were heavily concentrated in the first two months of each year - a ‘vanity effect’ of license plates that provided additional utility to earlier months’ car purchasers from having a vehicle with the current-year identifier for a longer period of time. The post-2013 changes, therefore, can be expected to yield two effects:
1) The ‘vanity effect’ should be split between the first two months of 1H of the year and the first two months of 2H of the year; and
2) Overall, the ‘vanity effect’ across the two segments of the year should be higher than the same for the period pre-2013 change.


As the chart above illustrates, both of these effects are confirmed in the data. Irish buyers are now (post-2013) more concentrated in the January, February, July and August months than prior to 2013. In 2009-2012, the average share of annual sales that fell in these four months stood at 44.8 percent. This rose to 55.75 percent for the period starting in 2014. This difference is statistically significant at the 5 percent level.

The share of annual sales falling in January-February remained statistically unchanged, nominally rising from a 31.77 percent average for 2009-2012 to 32.56 percent since 2014. This difference is not statistically significant even at the 10 percent level. However, the share of sales falling in the July-August period rose from 13.04 percent in 2009-2012 to 23.19 percent since the start of 2014. This increase is statistically significantly greater than zero at the 1 percent level.

Qualitatively and statistically similar results obtain when comparing against the 2002-2008 average. Moving out to the pre-2002 average, the only difference is that the increases in concentration of sales in the January-February period become statistically significant.
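For readers who want to replicate this kind of comparison, here is a minimal sketch of a two-sample (Welch's) t-test on annual sales shares. The yearly shares below are hypothetical values chosen to average near the 44.8 and 55.75 percent figures quoted above; the actual test would use the underlying registration data:

```python
# Welch's two-sample t-test on the annual share of car sales falling in
# Jan-Feb + Jul-Aug, before vs after the 2013 plate change.
# The yearly shares are hypothetical, chosen to average near the
# 44.8% (2009-2012) and 55.75% (post-2014) figures cited in the post.
from math import sqrt
from statistics import mean, stdev

pre = [43.9, 45.2, 44.6, 45.5]    # 2009-2012, percent (hypothetical)
post = [54.8, 56.1, 55.9, 56.2]   # 2014-2017, percent (hypothetical)

def welch_t(a, b):
    """Welch's t-statistic for two independent samples."""
    va, vb = stdev(a) ** 2 / len(a), stdev(b) ** 2 / len(b)
    return (mean(b) - mean(a)) / sqrt(va + vb)

t = welch_t(pre, post)
print(f"t = {t:.2f}")  # a large |t| means the shift is unlikely to be chance
```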

In simple terms, what is interesting about the Irish data is that the license plate format - in particular the identification of the year of the car's make - strongly induces a ‘vanity effect’ in purchaser behaviour, and that this effect is sensitive to the granularity of the signal contained in the license plate format. What would be interesting at this point is to look at the seasonal variation in pricing data, including that for used vehicles, controlling for hedonic characteristics of the cars being sold and accounting for variable promotions and discounts applied by brokers.

Sunday, April 29, 2018

28/4/18: Unintended Consequence of Tax Audits


The law of unintended consequences applies to all policies and all state-systems design, including tax policies, tax laws and tax enforcement. This is a statement of truism. And it works both ways. A well-designed policy to promote income supports and align incentives to work, for example, can have the unintended impact of increasing fraud. Conversely, a measure to enforce the policy and prevent fraud can undo some of the positive impacts the policy was designed to deliver. These statements are also a form of truism.

However, rarely do we see research into the unintended consequences of core tax policies delivering a negative view of the perceived wisdom of regulators and enforcers. Instead, we tend to think of tax laws enforcement as an unquestionable good. Fraud and tax evasion prevention are seen as intrinsically important to the society, and the severity of penalties and punishments imposed on non-compliance (whether by error or design) is seen as being not only just, but pivotal to the sustainability of the entire tax system. Put differently, there is an inherent asymmetry in the relationship between tax payers and tax enforcers: the former face potentially devastating penalties for even minor infringements, while the latter face zero cost for wrongfully accusing the former of such infringements. Tax audits are free of consequences to enforcers, and tax audits are of grave consequences to those being audited.

In this environment, tax audits can lead to severe distortions in the balance of intended and unintended consequences of the tax law. Yet such distortions are rarely considered in the academic literature. The prevalent wisdom that the tax authorities are always right to audit and severely punish lax practices is, well, prevalent.

One recent exception to this rule is a very interesting paper titled “Tax Enforcement and Tax Policy: Evidence on Taxpayer Responses to EITC Correspondence Audits” by John Guyton, Kara Leibel, Dayanand S. Manoli, Ankur Patel, Mark Payne, and Brenda Schafer (NBER Working Paper No. 24465, March 2018). Five of the six authors work for Uncle Sam in either the IRS or the Treasury.

The paper starts by explaining how EITC audits work. "Each year, the United States Internal Revenue Service (IRS) sends notices to selected taxpayers who claim Earned Income Tax credit (EITC) benefits to request additional documentation to verify those claims." Worth noting here that the IRS' EITC audits are the lowest-cost audits from the point of view of the taxpayers who face them: they are conducted via written exchanges between the IRS and the audited taxpayer and request fairly limited information. In this, EITC audits should create fewer unintended consequences, in the form of altered taxpayer behavior, than, say, traditional audits, which require the costly engagement of specialist accountants and lawyers by the taxpayers being audited.

So, keep in mind, fact 1: EITC audits are lower cost audits from taxpayer's perspective.

The study then proceeds to examine "the impacts of these correspondence audits on taxpayer behavior." The study specifically focuses on labor market changes in response to audits. Now, in spirit, the EITC was created in the first place to incentivise greater labor force participation and work effort among lower-income individuals. The authors describe the EITC as "the United States’ largest wage subsidy antipoverty program."

Thus, keep in mind, fact 2: EITC was created to improve labor supply choices by lower income individuals.

As noted by the authors, "because these correspondence audits often lead to the disallowance of EITC benefits for many individuals, we are able to examine how the disallowance of EITC benefits affects individuals’ labor supply decisions." The authors use audits data for 2010-2012 and have accompanying administrative data for 2001-2016, so the "data allow for analysis of short-term changes in behaviors one year after the audit, as well as persistent or longer-term changes in behaviors up to six years after the audit".

The study "results indicate significant changes in taxpayer behavior following an EITC correspondence audit. In the year after being audited, we estimate a decline in the likelihood of claiming EITC of roughly 0.30, or 30 percentage points. The decrease in the likelihood of claiming EITC benefits persists for multiple years after the EITC correspondence audits, although the size of the effect is reduced over time." In year four, the likelihood of audited EITC filers still filing EITC claims is 1/4 of that for non-audited higher risk EITC filers.

Now, the logical question is: was the decrease down to audits weeding out fraudulent claims? The answer is: not exactly. "Much of the decline in claiming EITC benefits following an EITC correspondence audit appears driven by decreases in the likelihood of filing a tax return." The authors suggest that two thirds of the decline in EITC filings post-audit is down to taxpayers ceasing to file any tax returns at all. Which means that even some of the taxpayers who do continue to file returns after an EITC audit drop out of the EITC system.


Audits seem to trigger reductions in tax liabilities post-audit for self-employed taxpayers (ca $300 in the year following the audit) and no changes in tax liabilities post-audit for wage earners. This suggests that post-audit reported incomes either fall (for the self-employed) or remain static (for those in employment). This, in turn, suggests that EITC audits do not lead to improvements in income status for those audited by the IRS. In other words, audits do not reinforce or improve the stated objectives of the EITC (see fact 2 above).

So, fact 3: EITC audits do not improve the post-audit income position of those audited.

"For the Self-Employed, we estimate an increase in labor force participation (where labor force participation is defined in terms of having positive W-2 wage earnings), possibly indicating some reallocation of labor supply from self-employment to wage employment. In contrast, for Wage Earners, we estimate a decrease in labor force participation following the EITC correspondence audits."

Thus, we have fact 4: self-employed are likely to switch their income from self-employment to wages post-audit, while wage earners tend to drop their labor force participation post-audit.

The former part of fact 4 can be reflective of fraudulent behavior by some self-employed taxpayers, who might overstate their self-employment income prior to audit in order to draw EITC tax credits. The latter effect, however, clearly contravenes the stated objective of the EITC system. On the first point, a quick clarification via the authors of the study: "Intuitively, some lower-income individuals may increase reported self-employment (non-third-party verified) income, possibly by choosing to disclose more income, invent income, or not disclose expenses, to claim the EITC, but if they are detected by audit, they may become averse to inventing self-employment income for purposes of claiming EITC and without this income they may not file a tax return. These taxpayers may perceive the payoff from not filing as better than the payoff from filing and correctly reporting income."

Now, one can think of the effect on self-employment to be a relatively positive one. "Following the disallowance of EITC benefits due to an EITC correspondence audit, taxpayers with self-employment income on their audited returns appear more likely to have wage earnings in the next year, perhaps to offset the loss of EITC as a financial resource." But that is only true if we consider self-employment as a substitute for employment. In contrast, if self-employment is viewed as potentially entrepreneurial activity, such substitution harms the likelihood of entrepreneurship amongst lower earners. The study does not cover this aspect of the enforcement outcomes.

In measured terms, if EITC audits were successful in reinforcing the EITC's intended objectives, then post-audit we should see increases in wages and earnings for EITC-audited individuals. Thus, we should see migration of EITC recipients from lower earnings to higher earnings. Put differently, the share of higher earners within the EITC-eligible population should rise, while the share of lower earners should fall.

This is not what appears to be happening. Instead, we see an increase in the density (share) of lower earnings and slight decreases in the densities of higher earnings:


Unambiguously, however, the study shows the damaging effects of audits: they tend to reduce labor force participation, offsetting the intended positive effects of the EITC program, and they tend to increase income tax non-filing, effectively pushing taxpayers into a much graver offence of income tax non-compliance.

Yet, still, we continue to insist that punitive, aggressive audit practices designed to impose maximal damage on tax-code-violating taxpayers are a good thing. There has to be a more effective way to enforce the tax codes than throwing the pain of audits around at random.

Sunday, April 8, 2018

8/4/18: Talent vs Luck: Differentiating Success from Failure


In their paper, "Talent vs Luck: the role of randomness in success and failure", A. Pluchino, A. E. Biondo, and A. Rapisarda (25 Feb 2018: https://arxiv.org/pdf/1802.07068.pdf) tackle the mythology of the "dominant meritocratic paradigm of highly competitive Western cultures... rooted on the belief that success is due mainly, if not exclusively, to personal qualities such as talent, intelligence, skills, efforts or risk taking".

The authors note that, although "sometimes, we are willing to admit that a certain degree of luck could also play a role in achieving significant material success, ...it is rather common to underestimate the importance of external forces in individual successful stories".

Some priors first: "intelligence or talent exhibit a Gaussian distribution among the population, whereas the distribution of wealth - considered a proxy of success - follows typically a power law (Pareto law). Such a discrepancy between a Normal distribution of inputs [and a power-law distribution of outcomes] suggests that some hidden ingredient is at work behind the scenes."

The authors present evidence suggesting that "such an [missing] ingredient is just randomness". Or, put differently, chance.

The authors "show that, if it is true that some degree of talent is necessary to be successful in life, almost never the most talented people reach the highest peaks of success, being overtaken by mediocre but sensibly luckier individuals."
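The paper's simulation can be sketched roughly as follows. This is a minimal re-implementation under assumed parameter values (talent drawn from a clipped N(0.6, 0.1), capital doubling on an exploited lucky event and halving on an unlucky one), not the authors' exact code:

```python
# Minimal sketch of the Pluchino et al. talent-vs-luck model:
# talent is Gaussian across agents; capital doubles on a lucky event
# (exploited with probability equal to talent) and halves on an
# unlucky one. Parameter values here are illustrative assumptions.
import random

random.seed(42)
N, STEPS, P_EVENT = 1000, 80, 0.1  # agents, 6-month steps, event prob.

talent = [min(max(random.gauss(0.6, 0.1), 0.0), 1.0) for _ in range(N)]
capital = [10.0] * N               # everyone starts with equal capital

for _ in range(STEPS):
    for i in range(N):
        if random.random() < P_EVENT:        # an event hits agent i
            if random.random() < 0.5:        # lucky event...
                if random.random() < talent[i]:  # ...exploited via talent
                    capital[i] *= 2
            else:                            # unlucky event
                capital[i] /= 2

richest = max(range(N), key=lambda i: capital[i])
print(f"richest agent's talent: {talent[richest]:.2f}")
print(f"max talent in population: {max(talent):.2f}")
```

Even in this stripped-down version, final capital is heavily right-skewed despite Gaussian talent, and the richest agent is typically a moderately talented one who happened to draw a long run of lucky events.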

Two pictures are worth a thousand words each:

Figure 5 taken from the paper shows:

  • In panel (a): Total number of lucky events and
  • In panel (b): Total number of unlucky events 

Both are shown as a "function of the capital/success of the agents".


Overall, "the plot shows the existence of a strong correlation between success and luck: the most successful individuals are also the luckiest ones, while the less successful are also the unluckiest ones."

Figure 7 shows:
In panel (a): Distribution of the final capital/success for a population with different random initial conditions, that follows a power law.
In panel (b): The final capital of the most successful individuals is "reported as function of their talent".

Overall, "people with a medium-high talent result to be, on average, more successful than people with low or medium-low talent, but very often the most successful individual is a moderately gifted agent and only rarely the most talented one."


The main conclusions of the paper are:

  • "The model shows the importance, very frequently underestimated, of lucky events in determining the final level of individual success." 
  • "Since rewards and resources are usually given to those that have already reached a high level of success, mistakenly considered as a measure of competence/talent, this result is even a more harmful disincentive, causing a lack of opportunities for the most talented ones."

The results are "a warning against the risks of what we call the ”naive meritocracy” which, underestimating the role of randomness among the determinants of success, often fail to give honors and rewards to the most competent people."

Sunday, November 19, 2017

19/11/17: Mainstream Media & Fake News: Twin Forces Behind Voter Behavior Biases


Behavioral biases come in all shapes and forms. Many of these, however, relate to the issue of imperfect information (e.g. asymmetric information, instances of costly information gathering and processing that can distort decision-making, incomplete information, etc).

A recent Quartz article weighs the balance of threats/risks arising from the 'fake news' phenomenon (the distortion of facts presented, sometimes, by alternative and mainstream media alike) against another informational asymmetry, namely selectivity biases (which apply to our propensity to select information either due to its proximity to us - e.g. referencing bias - or due to its ideological value to us - e.g. confirmation bias). Note: the Quartz article is available here: https://qz.com/1130094/todays-biggest-threat-to-democracy-isnt-fake-news-its-selective-facts/.

According to the article: "News sources aim to cover—in the words of the editor in chief of Reuters—the “facts [we] need to make good decisions.”" But, "As readers, we also suffer from what’s called confirmation bias: We tend to seek out news organizations and social media posts that confirm our views. Selective facts occur precisely for this reason." In other words, confirmation bias is a part of our use and understanding of information. The author concludes that "Selective facts are worse than outright fake news because they’re pervasive and harder to question than clearly false statements."

So far, so good. Except for one thing. The article does not go into detail on why selective facts are, all of a sudden, prevalent in today's world. Why does confirmation bias (and, unmentioned by the author, the proximity heuristic) matter more today than it mattered yesterday?

The answer to this, at least in part, has to be the continued polarization of the mainstream media (and, following it, non-traditional media).

Here is a PewResearch study from 2014 on ideological polarization in the mainstream media and social media: http://www.journalism.org/2014/10/21/political-polarization-media-habits/.  Two charts from this:


Not enough to drive home the point? Ok, here is a Forbes article covering the topic (source: https://www.forbes.com/sites/brettedkins/2017/06/27/u-s-media-among-most-polarized-in-the-world-study-finds/#1ee9a3242546):
"The Reuters Institute recently released its 2017 Digital News Report, analyzing surveys from 70,000 people across 36 countries and providing a comprehensive comparative analysis of modern news consumption. The report reveals several important media trends, including rising polarization in the United States. While 51% of left-leaning Americans trust the news, only 20% of conservatives say the same. Right-leaning Americans are far more likely to say they avoid the news because “I can’t rely on news to be true.""

The trend is not new. In the 1990s, plenty of research showed that print and cable media had started drifting (polarizing) away from 'centre-focused' news reporting as local monopolies of newspapers and TV stations began to face challenges from competitors. You can read about this here:

  • Tuning Out or Tuning Elsewhere? Partisanship, Polarization, and Media Migration from 1998 to 2006 by Barry A. Hollander (2008), Journalism & Mass Communication Quarterly, Volume 85, Issue 1, which posits a view that polarization of the mass media has been driving moderate voters away from news and toward entertainment. Which, of course, effectively hollows out the 'centre' of media ideological spectrum. 
  • "This article examines if the emergence of more partisan media has contributed to political polarization and led Americans to support more partisan policies and candidates," according to "Media and Political Polarization" published in Annual Review of Political Science Vol. 16:101-127 (May 2013) by Markus Prior.
  • And economics of media polarization in "Political polarization and the electoral effects of media bias" by Dan Bernhardt, Stefan Krasa, and Mattias Polborn, published in Journal of Public Economics, Volume 92, Issues 5–6, June 2008, Pages 1092-1104
These are just three examples, but there are plenty more (hundreds, in fact) of research papers looking into twin, causally interlinked, effects of media polarization and the rise of the polarized voter preferences.

Which brings us to the Quartz's observation: "While social media and partisan news has allowed more voices to be heard, it also means we are now surrounded by more people manipulating what facts make it to our newsfeeds. We’d draw a different conclusion—or even just a more nuanced picture—if we were given all the information on an issue, not just the parts that best benefit a particular viewpoint."

It may be true, indeed, that current markets for the supply of alt-news are enabling a greater prevalence of confirmation bias in voter attitudes. But that is at best just a fraction of the complete diagnosis. In fact, the polarized - or, put differently, biased - nature of the mainstream news is at least as responsible for the evolution of these biases as it is for the growth in alt-news. That is correct: fake information is finding more accepting audiences today, in part, because CNN and FoxNews decided in the past to cultivate ideologically polarized market differentiation for their platforms.


Sunday, October 22, 2017

22/10/17: Framing Effects and S&P500 Performance


A great post highlighting the impact of framing on our perception of reality: https://fat-pitch.blogspot.com/2017/10/using-time-scaling-and-inflation-to.html.

Take two charts of the stock market performance over 85 odd years:


The chart on the left shows the nominal index reading for the S&P500. The one on the right shows the same, adjusted for inflation and using a log scale to control for the long duration of the time series. In other words, both charts effectively contain the same information, presented in a different format (frame).

Spot the vast difference in the way we react to these two charts...
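The two frames are just transforms of one underlying series. A minimal sketch with made-up index and price-level values (not actual S&P500 or CPI data):

```python
# Same data, two frames: nominal index level vs inflation-adjusted
# level on a log scale. The index and price-level values below are
# made-up round numbers, not actual S&P500 or CPI observations.
from math import log10

nominal = [17.0, 55.0, 90.0, 330.0, 1320.0, 2550.0]  # index level
prices  = [1.0, 1.7, 2.9, 7.4, 12.2, 17.5]           # price level

real = [n / p for n, p in zip(nominal, prices)]      # deflate
log_real = [log10(r) for r in real]                  # compress the scale

# The nominal frame shows explosive late-period growth; the log-real
# frame spreads the same information far more evenly across time.
print([round(x, 2) for x in log_real])
```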

Tuesday, October 3, 2017

3/10/17: Ambiguity Fun: Perceptions of Rationality?



Here is a very insightful and worth studying set of plots showing the perceived range of probabilities under subjective measure scenarios. Source: https://github.com/zonination/perceptions




The charts above speak volumes about both our (human) behavioural biases in assessing probabilities of events and the nature of subjective distributions.

First, on the former. As our students (in all of my courses, from Introductory Statistics, to Business Economics, to the advanced courses in Behavioural Finance and Economics, Investment Analysis, and Risk & Resilience) will have learned (to varying degrees of insight and complexity), the world of Rational Expectations relies (amongst other assumptions) on the assumption that we, as decision-makers, are capable of perfectly assessing the true probabilities of uncertain outcomes. And as we all have learned in these classes, we are not capable of doing this, in part due to informational asymmetries, in part due to behavioural biases, and so on.

The charts above clearly show this. There is a general trend of people assigning increasingly lower probabilities to less likely events, and increasingly higher probabilities to more likely ones. So far, good news for rationality. The range (spread) of assignments also becomes narrower as we move to the tails (the lowest and highest assigned probabilities), so the degree of confidence in assessment increases. This, too, is good news for rationality.

But there, the evidence for rationality ends.

Firstly, note the S-shaped nature of the distributions from higher assigned probabilities to lower. Clearly, our perceptions of probability are non-linear, with the decline in assigned likelihoods being steeper in the middle of the range of perceptions than at the extremes. This is inconsistent with rationality, which implies a linear trend.

Secondly, there is a notable kick-back in the assigned-probability distribution for the 'Highly Unlikely' and 'Chances Are Slight' types of perceptions. This can be due to ambiguity in the wording of these perceptions (the order can be viewed differently, with 'Highly Unlikely' preceding 'Almost No Chance', and 'Chances Are Slight' preceding 'Highly Unlikely'). Still, there are a lot of oscillations in other ordering pairs (e.g. 'Unlikely' -> 'Probably Not' -> 'Little Chance'; and 'We Believe' -> 'Probably'). This is also consistent with ambiguity - which is a violation of rationality.

Thirdly, not a single distribution of assigned probabilities by perception follows a bell-shaped 'normal' curve. Not for a single category of perceptions. All distributions are skewed, almost all have extreme-value 'bubbles', and the majority have multiple local modes. This is yet another piece of evidence against rational expectations.

There are severe outliers in all perception categories. Some (e.g. in the case of the 'Probably Not' category) appear to be largely due to errors that can be induced by the ambiguous ranking of the category, or due to judgement errors. Others (e.g. in the case of the 'We Doubt' category) appear to be systemic and influential. The dispersion of assignments seems to follow the ambiguity pattern, with higher-ambiguity (tail) categories inducing greater dispersion. But, interestingly, there also appears to be stronger ambiguity in the lower range of perceptions (from 'We Doubt' to 'Highly Unlikely') than in the upper range. This can be 'natural' or 'rational' if we think that a signifier of a less likely event is more ambiguous. But the same holds for more likely events too (see the range from 'We Believe' to 'Likely' and 'Highly Likely').

There are many more points worth discussing in the context of this exercise. But on the net, the data suggest that the rational-expectations view of our ability to assess true probabilities of uncertain outcomes is faulty not only at the level of tail events that are patently identifiable as 'unlikely', but also in the range of tail events that should be 'nearly certain'. In other words, ambiguity is tangible in our decision making.
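The kind of per-phrase summary underlying these plots is easy to reproduce. A minimal sketch using a small hypothetical sample of responses (the real exercise would use the survey data from the linked repository):

```python
# Per-phrase summary of assigned probabilities (median and IQR),
# the kind of statistics behind the plots above. The responses here
# are a small hypothetical sample, not the actual zonination data.
from statistics import median, quantiles

responses = {                      # phrase -> assigned probabilities (%)
    "Almost Certainly": [92, 95, 97, 90, 98, 85],
    "Probably":         [70, 75, 60, 80, 65, 72],
    "We Doubt":         [20, 30, 10, 40, 25, 15],
    "Highly Unlikely":  [5, 2, 10, 20, 3, 8],
}

for phrase, vals in responses.items():
    q1, _, q3 = quantiles(vals, n=4)   # quartile cut points
    print(f"{phrase:16s} median={median(vals):5.1f}  IQR={q3 - q1:5.1f}")
```

Ranking phrases by median and comparing the IQRs is exactly how one would check, on the full dataset, whether dispersion really widens in the ambiguous middle and narrows at the tails.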



Note: it is also worth observing that the above evidence suggests we treat certainty (the tails) and uncertainty (the centre of the perception and assignment choices) inversely to what would be expected under rational expectations:
In a rational setting, perceptions that carry indeterminate outcomes should have a greater dispersion of values for assigned probabilities: if something is "almost evenly" distributed, it should be harder for us to form a consistent judgement as to how probable such an outcome is - especially compared to something that is either "highly unlikely" (aka quite certain not to occur) or "highly likely" (aka quite certain to occur). The data above suggest the opposite.

Friday, January 13, 2017

12/1/17: Betrayal Aversion, Populism and Donald Trump Election


In their 2003 paper, Koehler and Gershoff provide a definition of a specific behavioural phenomenon known as betrayal aversion. Specifically, the authors state that “A form of betrayal occurs when agents of protection cause the very harm that they are entrusted to guard against. Examples include the military leader who commits treason and the exploding automobile air bag.” The duo showed - across five studies - that people respond differently “to criminal betrayals, safety product betrayals, and the risk of future betrayal by safety products” depending on who acts as an agent of betrayal. Specifically, the authors “found that people reacted more strongly (in terms of punishment assigned and negative emotions felt) to acts of betrayal than to identical bad acts that do not violate a duty or promise to protect. We also found that, when faced with a choice among pairs of safety devices (air bags, smoke alarms, and vaccines), most people preferred inferior options (in terms of risk exposure) to options that included a slim (0.01%) risk of betrayal. However, when the betrayal risk was replaced by an equivalent non-betrayal risk, the choice pattern was reversed. Apparently, people are willing to incur greater risks of the very harm they seek protection from to avoid the mere possibility of betrayal.”

Put into different context, we opt for suboptimal degree of protection against harm in order to avoid being betrayed.
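The suboptimality of that choice can be shown with a toy expected-harm calculation. The numbers below are hypothetical, chosen only to mirror the 0.01% betrayal-risk setup in the experiments; they are not taken from Koehler and Gershoff's studies.

```python
def expected_death_risk(base_risk, betrayal_risk=0.0):
    """Total probability of harm = the risk the device fails to protect
    plus the risk the device itself causes the harm ('betrayal')."""
    return base_risk + betrayal_risk

# Hypothetical air bags: the 'inferior' one carries no betrayal risk,
# the 'superior' one is safer overall but can itself kill with p=0.01%.
inferior_bag = expected_death_risk(base_risk=0.0200)
superior_bag = expected_death_risk(base_risk=0.0100,
                                   betrayal_risk=0.0001)

print(f"inferior (betrayal-free) bag: {inferior_bag:.4%} total risk")
print(f"superior bag with betrayal:   {superior_bag:.4%} total risk")
```

The betrayal-tainted device is objectively far safer, yet the experimental subjects mostly chose the inferior one: a roughly twofold increase in risk exposure accepted to avoid a 1-in-10,000 chance of betrayal.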

Now, consider the case of political betrayal. Suppose voters vest their trust in a candidate for office on the basis of the candidate's claims (call this a policy platform, for example) to deliver protection of the voters' interests. One, the relationship between the voters and the candidate is emotionally framed (this is important). Two, the relationship of trust induces an acute feeling of betrayal if the candidate does not deliver on his/her promises. Three, past experience of betrayal, quite rationally, induces betrayal aversion: in the next round of voting, voters will prefer a candidate who offers less in terms of his/her platform's feasibility (aka: the candidate less equipped or qualified to hold the office).

In other words, betrayal aversion will drive voters to prefer a poorer quality candidate.

Sounds plausible? Ok. Sounds like something we’ve seen recently? You bet. Let’s go over the above steps in the context of the recent U.S. presidential contest.


One: emotional basis for selection (vesting trust). The U.S. voters had eight years of ‘hope’ from President Obama. Hope based on emotional context of his campaigns, not on hard delivery of his policies. In fact, the entire U.S. electoral space has become nothing more than a battlefield of carefully orchestrated emotional contests.

Two: an acute feeling of betrayal is clearly afoot in the case of the U.S. electorate. Whether the voters today blame Mr. Obama for their feeling of betrayal, or blame the proverbial Washington 'swamp' that includes the entire lot of elected politicians (including Mrs. Clinton and others), is immaterial. What is material is that many voters do feel betrayed by the elites (both the Bernie Sanders 'Bern' effect and the Trump campaign were built on capturing this sentiment).

Three: the two candidates who did capture the minds of swing voters and marginalised voters (the types of voters who matter to the election outturn in the end) both campaigned on razor-thin policy proposals and, more so, on general sentiment. Whether you consider these platforms feasible or not, they were not articulated with the same degree of precision and competency as, say, Mrs Clinton's highly elaborate platform.

Which means the election of Mr Trump fits (from pre-conditions through to outcome) the pattern of the betrayal aversion phenomenon: fleeing the chance of being betrayed by an agent they trust, American voters opted for a populist, less competent (in the traditional Washington sense) choice.

Now, enter two brainiacs from Harvard. Rafael Di Tella and Julio Rotemberg were quick to recognise the above emergence of betrayal avoidance, or aversion, in voting decisions. In their December 2016 NBER paper, linked below, the authors argue that voters' preference for populism is a form of "rejection of "disloyal" leaders." To do this, the authors add an "assumption that people are worse off when they experience low income as a result of leader betrayal" than when such a loss of income "is the result of bad luck". In other words, they explicitly assume betrayal aversion in their model of a simple voter choice. The end result is that their model "yields a [voter] preference for incompetent leaders. These deliver worse material outcomes in general, but they reduce the feelings of betrayal during bad times."
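The mechanics of that result can be sketched with a toy expected-utility calculation. This is my own illustration of the intuition, not the model in the paper, and all the numbers are hypothetical assumptions.

```python
def expected_utility(p_good, y_good, y_bad, betrayal_cost):
    """Expected voter utility: with probability p_good the leader delivers
    income y_good; otherwise the voter gets y_bad and, if the loss is read
    as betrayal, suffers an extra psychological cost on top of it."""
    return p_good * y_good + (1 - p_good) * (y_bad - betrayal_cost)

# Competent leader: better odds, but a bad outcome feels like betrayal.
competent = expected_utility(p_good=0.7, y_good=100, y_bad=40,
                             betrayal_cost=60)
# Incompetent leader: worse odds, but a bad outcome is just 'bad luck'.
incompetent = expected_utility(p_good=0.5, y_good=100, y_bad=40,
                               betrayal_cost=0)

print(f"competent leader:   EU = {competent:.1f}")
print(f"incompetent leader: EU = {incompetent:.1f}")
```

With a large enough betrayal cost, the incompetent leader wins the voter's calculus even though the competent one delivers strictly better material outcomes on average: worse odds of a good outcome are traded for insurance against the feeling of betrayal.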

More to the point, just as I narrated the logical empirical hypothesis (steps one through three) above, Di Tella and Rotemberg “find some evidence consistent with our model in a survey carried out on the eve of the recent U.S. presidential election. Priming survey participants with questions about the importance of competence in policymaking usually reduced their support for the candidate who was perceived as less competent; this effect was reversed for rural, and less educated white, survey participants.”

Here you have it: the classical behavioural bias of betrayal aversion explains why Mrs Clinton simply could not connect with the swing or marginalised voters. It wasn't hope that they sought, but avoidance of putting hope/trust in someone like her. Done. Not the 'deplorables', but those betrayed in the past, swung the vote in favour of a populist: not because he emotionally won their trust, but because he was the less competent of the two standing candidates.



Jonathan J. Koehler and Andrew D. Gershoff, "Betrayal aversion: When agents of protection become agents of harm", Organizational Behavior and Human Decision Processes 90 (2003) 244–261: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.11.1841&rep=rep1&type=pdf

Di Tella, Rafael and Rotemberg, Julio J., Populism and the Return of the 'Paranoid Style': Some Evidence and a Simple Model of Demand for Incompetence as Insurance Against Elite Betrayal (December 2016). NBER Working Paper No. w22975: https://ssrn.com/abstract=2890079

Friday, May 11, 2012

11/5/2012: Ignoring that which almost happened?

In recent years, I find myself migrating more firmly toward behavioralist views on finance and economics. Not that this view, in my mind, contradicts the classes of models and logic I am accustomed to; rather, it enriches them, adding toward completeness.

With this in mind - here's a fascinating new study.

"How Near-Miss Events Amplify or Attenuate Risky Decision Making", written by Catherine Tinsley, Robin Dillon and Matthew Cronin and published in the April 2012 issue of Management Science, studies the way people change their risk attitudes "in the aftermath of many natural and man-made disasters".

More specifically, "people often wonder why those affected were underprepared, especially when the disaster was the result of known or regularly occurring hazards (e.g., hurricanes). We study one contributing factor: prior near-miss experiences. Near misses are events that have some nontrivial expectation of ending in disaster but, by chance, do not."

The study shows that "when near misses are interpreted as disasters that did not occur, people illegitimately underestimate the danger of subsequent hazardous situations and make riskier decisions (e.g., choosing not to engage in mitigation activities for the potential hazard). On the other hand, if near misses can be recognized and interpreted as disasters that almost happened, this will counter the basic “near-miss” effect and encourage more mitigation. We illustrate the robustness of this pattern across populations with varying levels of real expertise with hazards and different hazard contexts (household evacuation for a hurricane, Caribbean cruises during hurricane season, and deep-water oil drilling). We conclude with ideas to help people manage and communicate about risk."
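The core mechanism can be sketched as a biased-updating problem: does an observer code a near-miss as further evidence of safety, or as a disaster that almost happened? The Beta-Bernoulli setup below is my own illustration of this coding effect, not the authors' model, and the history counts are hypothetical.

```python
def posterior_mean_hazard(disasters, safe_periods, near_misses,
                          count_near_miss_as_hit):
    """Beta(1, 1) prior over the per-period disaster probability, updated
    on an observed history. Near-misses are coded either as disasters that
    almost happened ('hits') or as yet more evidence of safety."""
    hits = disasters + (near_misses if count_near_miss_as_hit else 0)
    misses = safe_periods + (0 if count_near_miss_as_hit else near_misses)
    return (1 + hits) / (2 + hits + misses)

# Hypothetical history: 1 disaster, 30 uneventful periods, 9 near-misses.
history = dict(disasters=1, safe_periods=30, near_misses=9)
complacent = posterior_mean_hazard(**history, count_near_miss_as_hit=False)
vigilant = posterior_mean_hazard(**history, count_near_miss_as_hit=True)

print(f"near-misses read as 'did not occur':     hazard = {complacent:.3f}")
print(f"near-misses read as 'almost happened':   hazard = {vigilant:.3f}")
```

The same history yields a several-fold difference in the estimated hazard, which is the study's point: reading near-misses as non-events makes subsequent risky choices (skipping evacuation, forgoing mitigation) look rational to the decision maker.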

An interesting potential corollary of the study is that the analytical conclusions formed after near-misses (or in the wake of significant increases in risk) matter to future responses. Moreover, the above suggests that a preference for 'glass half-full' analysis over a 'glass half-empty' position might lead to the conclusion that an event 'did not occur' rather than that it 'almost happened'.

Fooling yourself into safety by promoting 'optimism' in interpreting reality might be a costly venture...