
Issues with Rank Dependent Utility: Valuation Differences in Forward vs Backward Induction

[This article was first published on R – Jacob Smith Economics, and kindly contributed to R-bloggers].

I have made several blog posts on the code I use to compute rank dependent utility. In this blog post, however, I want to illustrate (by means of an example in R) how the choice of forward vs backward induction can affect the valuation of the same prospect.

The Problem

To start, let's direct our attention to the picture below, which shows a decision tree for the choice between two prospects. The upper branch from the square decision node involves some uncertainty, while the lower branch involves none. Under forward induction we reduce the compound probabilities and preserve the outcome values; under backward induction we compute certainty equivalents back to the first stage of uncertainty. (This simple example is based on Appendix C of Peter Wakker's Prospect Theory for Risk and Ambiguity.)

Choice based on forward vs backward induction
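Before running anything, it helps to write out the two reductions explicitly. Here is a rough sketch (the branch probabilities are the ones implied by the prospects used in the code below: 0.7 and 0.3 at the first stage, then 0.8 and 0.2 inside the risky branch), showing the two ways of reducing the upper branch:

#Sketch of the two reductions of the upper branch
#Forward induction: multiply out the compound probabilities, keep the outcomes
forward_outcomes<-c(100,0,49)
forward_probs<-c(0.7*0.8,0.7*0.2,0.3)    #0.56, 0.14, 0.30

#Backward induction: replace the risky branch with a single value at the
#first stage (the code below uses its expected value 0.8*100+0.2*0=80)
backward_outcomes<-c(0.8*100+0.2*0,49)   #80, 49
backward_probs<-c(0.7,0.3)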

Under expected utility, the valuation of the upper branch of this decision tree is the same whether we use forward or backward induction. Under rank dependent utility, however, we get very different valuations. Let's illustrate this with the RDU_Data() function in the code below (you can copy and paste it into your R console to reproduce the results).

#RDU Algorithm Code
RDU_Data<-function(outcomes,p_vec,pw){
  
  if(length(outcomes)==length(p_vec)){  
    #Step 0 organize outcomes and probabilities into a single vector
    df<-data.frame(outcomes,p_vec)
    
    #Step 1 Organize Probabilities by outcome.
    df1<-df[order(-outcomes),]
    zerorow<-c(0,0)
    df2<-rbind(zerorow,df1)
    
    #Step 2  Define vector of ranks
    rank<-cumsum(df2[,2])
    df3<-data.frame(df2,rank)
    
    #Step 3 compute pweights
    pw_vec<-pw(df3$rank)
    df4<-data.frame(df3,pw_vec)
    
    #Step 4 Take difference between pw_vec to compute decision weights
    d_weights<-diff(df4$pw_vec)
    #Check that the decision weights sum to 1 (requires pw(0)=0 and pw(1)=1)
    stopifnot(isTRUE(all.equal(sum(d_weights),1)))
    #Add to dataframe
    d_weights1<-c(0,d_weights)
    df5<-data.frame(df4,d_weights1)
    
    #Step 5 drop the first "helper" row
    df6<-df5[-1,]
    colnames(df6)<-c("Outcomes","Probabilities","Rank",
                     "Weighted Ranks", "Decision Weights")
    return(df6)} else{
      stop("Outcomes and vector of probabilities must be the same length")
    }
}

#Prospect 1- Forward Induction
payoffs_1<-c(100,0,49)
probs_1<-c(0.56,0.14,0.3)

#Prospect 2- Backward induction
payoffs_2<-c(80,49)
probs_2<-c(0.7,0.3)

#Prob weighting function
pwf<-function(p){p^2}

#Run RDU_Data for each case
RDU_1<-RDU_Data(payoffs_1,probs_1,pwf)
RDU_2<-RDU_Data(payoffs_2,probs_2,pwf)

#Valuations
finduct_payoff_EU<-payoffs_1%*%probs_1
binduct_payoff_EU<-payoffs_2%*%probs_2
#Note: take the outcomes from the RDU output so that each outcome is paired
#with its own decision weight (the weights are in rank order)
finduct_payoff_RU<-RDU_1$Outcomes%*%RDU_1$`Decision Weights`
binduct_payoff_RU<-RDU_2$Outcomes%*%RDU_2$`Decision Weights`

#Table
data.frame(finduct_payoff_EU,
           binduct_payoff_EU,
           finduct_payoff_RU,
           binduct_payoff_RU)

Running all of this code gives us a table with the valuation of each prospect under both expected utility and rank dependent utility:

  finduct_payoff_EU binduct_payoff_EU finduct_payoff_RU binduct_payoff_RU
1              70.7              70.7            52.234             64.19

We see that the expected utility valuation is the same in both cases, but the forward induction and backward induction valuations under rank dependent utility differ.
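The agreement under expected utility is no accident: expected utility is linear in probabilities, so multiplying the branch probabilities through (forward) or folding the risky branch back to its expected value (backward) gives the same sum. A quick check, using the branch probabilities from the sketch above:

#Expected utility is linear in probabilities, so both reductions agree
0.7*0.8*100+0.7*0.2*0+0.3*49    #forward:  70.7
0.7*(0.8*100+0.2*0)+0.3*49      #backward: 70.7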

Why is there a difference in valuations?

To see why the valuations differ, remember that the rank dependent algorithm uses cumulative probabilities as ranks, applies the probability weighting function to them, and takes the differences between these weighted ranks to get the decision weights. Forward and backward induction transform the prospect differently, so they change the ranks and, in turn, the decision weights used to compute rank dependent utility.

This is illustrated by the output of RDU_1 and RDU_2 below.

# RDU_1 Output
   Outcomes Probabilities Rank Weighted Ranks Decision Weights
11      100          0.56 0.56         0.3136           0.3136
3        49          0.30 0.86         0.7396           0.4260
2         0          0.14 1.00         1.0000           0.2604

# RDU_2 Output
  Outcomes Probabilities Rank Weighted Ranks Decision Weights
2       80           0.7  0.7           0.49             0.49
3       49           0.3  1.0           1.00             0.51
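As a sanity check, the decision weights and the two rank dependent valuations can be reproduced by hand in a few lines (w, dw_forward and dw_backward are just illustrative names; the weighting function is the same p^2 as above):

#Reproduce the decision weights and RDU valuations by hand
w<-function(p){p^2}

#Forward prospect, ranked best to worst: 100, 49, 0 with probs 0.56, 0.30, 0.14
dw_forward<-diff(c(0,w(cumsum(c(0.56,0.30,0.14)))))   #0.3136 0.4260 0.2604
sum(c(100,49,0)*dw_forward)                           #52.234

#Backward prospect, ranked best to worst: 80, 49 with probs 0.70, 0.30
dw_backward<-diff(c(0,w(cumsum(c(0.70,0.30)))))       #0.49 0.51
sum(c(80,49)*dw_backward)                             #64.19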

So do we compute RDU with forward or backward induction?

Whether to use forward or backward induction appears to be an empirical question about how decision makers actually approach prospects modelled by a decision tree. Some guidance is provided by Rakesh Sarin and Peter Wakker in their 1998 paper “Dynamic Choice and Non-Expected Utility”, which invokes a condition of “sequential consistency”: decision makers are assumed to commit themselves to either forward or backward induction (or, more broadly, to a “family of models”). It’s a tough paper to read, but my impression is that, much like everything in life, you are forced to make a choice about how to compute your model.

Conclusion

Rank dependent utility theory is famous for correcting problems with the original 1979 version of prospect theory (namely violations of first order stochastic dominance) and helped create the new and improved cumulative prospect theory in 1992. But it appears that as one problem was solved, a new one was opened up: in this case, the difference in valuations between forward and backward induction.
