Stochastic Growth Models With No Discounting

In this note we consider, in discrete time, the Ramsey growth model without discounting under stochastic uncertainty modelled by Markov processes. To make the model computationally tractable we consider finite-state approximations of the original model. Properties of policies maximizing the mean value of the global utility of the consumers over an infinite time horizon, along with algorithmic procedures for finding optimal and suboptimal policies, are reported.


Karel Sladký *

Introduction and Notation
Optimal growth economics, originating in the work of Frank P. Ramsey in the late 1920s, is a relatively self-contained part of traditional macroeconomics. At the heart of the seminal paper of F. P. Ramsey (1928) on the mathematical theory of saving is an economy producing output from labour and capital, and the task is to decide how to divide production between consumption and capital accumulation so as to maximize the global utility of the consumers. Ramsey's results were revisited and significantly extended only after almost thirty years by Cass (1965), Koopmans (1965) and Samuelson (1965), and at present the Ramsey model can be considered, along with the Solow model and the overlapping generations model (see e.g. Solow, 1998), as one of the three most significant tools of dynamic general equilibrium modelling in modern macroeconomics. Besides the Solow book (1998), this problem is also touched upon, for example, in (Blanchard – Fischer, 1989; Sardar, 2001).
The respective mathematical model of the original Ramsey problem in a discrete-time setting (see also Dana – Le Van, 2003; Heer – Maußner, 2005; Stokey – Lucas, 1989) can be formulated as follows. We consider at discrete time points t = 0, 1, 2, ... an economy with capital stock K_t, labour force L_t, per-capita capital k_t = K_t / L_t and per-capita consumption c_t. In each period the demand for (total) consumption c_t L_t and for gross investment cannot be greater than production, i.e.

    c_t L_t + K_{t+1} − (1 − δ) K_t ≤ F(K_t, L_t),                         (3)

where δ ∈ (0, 1) is the rate of depreciation of capital. Dividing (3) by L_t we get

    c_t + k_{t+1} − (1 − δ) k_t ≤ F(k_t, 1),                               (4)

and if we define the function f(k) := F(k, 1) + (1 − δ) k, then (4) can be written as

    c_t + k_{t+1} ≤ f(k_t),   t = 0, 1, 2, ...,                            (5)

where k_0 ≥ 0 is given. The global utility of the consumers over the horizon T equals

    U_0(k_0, T) = Σ_{t=0}^{T−1} β^t u(c_t),                                (6)

where u(·) is an instantaneous utility function and β ≤ 1 (close to unity) is a given discount factor.
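The budget identity and the discounted utility sum are easy to experiment with numerically. A minimal sketch, assuming an illustrative technology f(k) = k^α + (1 − δ)k (Cobb–Douglas output plus undepreciated capital) and logarithmic utility; the concrete parameter values are assumptions for illustration, not values from the text:

```python
import math

# Illustrative parameters (assumed, not from the text).
ALPHA, DELTA, BETA = 0.3, 0.1, 0.95

def f(k):
    """Per-capita output plus undepreciated capital: f(k) = k**alpha + (1-delta)*k."""
    return k ** ALPHA + (1.0 - DELTA) * k

def u(c):
    """Instantaneous (log) utility of consumption."""
    return math.log(c)

def global_utility(k0, consumption, beta=BETA):
    """Discounted global utility sum_t beta**t * u(c_t) along the feasible path
    k_{t+1} = f(k_t) - c_t; raises if a consumption choice violates (5)."""
    k, total = k0, 0.0
    for t, c in enumerate(consumption):
        if c <= 0 or c > f(k):
            raise ValueError(f"infeasible consumption at t={t}")
        total += beta ** t * u(c)
        k = f(k) - c
    return total, k

total, k_final = global_utility(1.0, [0.5] * 10)
```

With a modest constant consumption plan the capital stock accumulates period by period, while the utility total is simply the discounted sum along the path.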
The problem is to find a rule for splitting production between consumption and capital accumulation that maximizes the global utility of the consumers over a finite or infinite time horizon T.
Throughout this note we make the following general assumption.
Assumption AS 0. The discount factor β equals unity.

AS 0 can be justified since β close to one does not significantly prefer values obtained in the "near future", and its precise value depends on the choice of the decision maker. As we shall see later, the specific choice of the discount factor β = 1 considerably simplifies our further analysis.
Under AS 0, from (6) we get

    U_0(k_0, T) = Σ_{t=0}^{T−1} u(c_t),   T = 1, 2, ...,

the global utility per consumer obtained up to time T. Observe that for T → ∞ the value U_0(k_0, T) is typically infinite. To this end we introduce the mean global utility

    g = lim_{T→∞} (1/T) U_0(k_0, T)

(we show that under our further assumptions g is independent of the initial capital k_0).
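The mean global utility can be approximated numerically by averaging utilities along a long trajectory. A minimal sketch under a fixed savings-rate rule, again with assumed log utility and an illustrative technology (none of these choices come from the text); it also illustrates that the average is numerically insensitive to the initial capital:

```python
import math

# Illustrative parameters (assumed): technology, depreciation, savings rate.
ALPHA, DELTA, SAVING = 0.3, 0.1, 0.2

def f(k):
    return k ** ALPHA + (1.0 - DELTA) * k

def mean_global_utility(k0, T):
    """Approximate g = (1/T) * sum_{t<T} u(c_t) under the stationary rule
    c_t = (1 - SAVING) * f(k_t), with u = log."""
    k, acc = k0, 0.0
    for _ in range(T):
        c = (1.0 - SAVING) * f(k)
        acc += math.log(c)
        k = f(k) - c  # equals SAVING * f(k)
    return acc / T

# The transient from the initial capital is averaged out for large T,
# so both estimates approach the same limit g.
g_small_k = mean_global_utility(0.5, 20000)
g_large_k = mean_global_utility(5.0, 20000)
```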
In the above formulation we assume that the production function f(k) and the utility function u(c) fulfil some standard assumptions, in particular:

Assumption AS 1. u(c) is twice continuously differentiable and satisfies u(0) = 0. Moreover, u(c) is strictly increasing and concave (i.e., its derivatives satisfy u′(c) > 0, u″(c) < 0).

Assumption AS 2. f(k) is twice continuously differentiable and satisfies f(0) = 0. Moreover, f(k) is strictly increasing and concave (i.e., its derivatives satisfy f′(k) > 0, f″(k) < 0).

The problem is then finding a sequence c_0, c_1, ... attaining

    Û_0(k_0, T) = max Σ_{t=0}^{T−1} u(c_t)

for a given (possibly infinite) time horizon under the constraints (5). Note that since u(·), f(·) are increasing (cf. assumptions AS 1 and AS 2) it is possible to replace the constraints (5) by

    k_{t+1} = f(k_t) − c_t,   t = 0, 1, 2, ...,                            (11)

with c_t ≥ 0, k_t ≥ 0 and k_0 given. Hence by (11) the expression for the maximum global utility per consumer can be written as

    Û_0(k_0, T) = max Σ_{t=0}^{T−1} u(f(k_t) − k_{t+1})

for a finite or infinite time horizon, the maximum being taken over feasible capital sequences k_1, k_2, ... . Observe that in virtue of assumption AS 2 and (11) it holds:

    Û_0(k′_0, T) ≥ Û_0(k_0, T)   whenever k′_0 ≥ k_0.

This can be easily verified: if we start with initial capital k′_0 ≥ k_0 and select consumption at time 0 such that c′_0 = f(k′_0) − k_1 ≥ c_0 (recall that f(·) and u(·) are increasing), and then follow for every t > 0 the decisions given by (11) along the optimal trajectory from k_0, we obtain at least the utility Û_0(k_0, T).
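The finite-horizon maximization of Σ u(f(k_t) − k_{t+1}) can be sketched by backward dynamic programming on a capital grid — a first taste of the finite-state approximation used later. The grid, the log utility, and the technology are illustrative assumptions, not choices made in the text:

```python
import math

# Illustrative technology and capital grid (assumed).
ALPHA, DELTA = 0.3, 0.1
GRID = [0.1 * i for i in range(1, 31)]  # capital levels 0.1 .. 3.0

def f(k):
    return k ** ALPHA + (1.0 - DELTA) * k

def max_global_utility(k0_index, T):
    """Backward DP for U_hat(k0, T) = max sum_{t<T} u(f(k_t) - k_{t+1}),
    with next-period capital restricted to the grid and u = log."""
    n = len(GRID)
    V = [0.0] * n  # terminal values U_hat(k, 0) = 0
    for _ in range(T):
        V_new = []
        for i in range(n):
            best = -math.inf
            for j in range(n):
                c = f(GRID[i]) - GRID[j]  # consumption implied by (11)
                if c > 0:
                    best = max(best, math.log(c) + V[j])
            V_new.append(best)
        V = V_new
    return V[k0_index]

u5 = max_global_utility(9, 5)  # k0 = 1.0, horizon T = 5
```

Monotonicity of Û_0 in the initial capital, noted above, shows up directly in the computed values.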

The Growth Model under Random Shocks
Up to now we have assumed that for a given k_t the total output is given by f(k_t); in what follows we allow the output to be influenced by random shocks. To this end we shall assume that in (5), (11) the production function is replaced by f(k_t, z_t), where Z = {z_t, t = 0, 1, ...} is a random process; in the literature Z is usually taken to be Markovian (cf. Majumdar – Mitra – Nishimura, 2000). Unfortunately, if Z is a Markov process with compact state space contained in R, a rigorous treatment of the model given by (11)–(13) requires very sophisticated mathematics (see Blackwell, 1965, or Stokey – Lucas, 1989) and is not suitable for numerical computation. To make the model computationally tractable, similarly as in Sladký (2006), we shall approximate the time development of our system governed by (11), (12) by a Markov chain with a finite number of states (the symbol E is reserved for expectation).
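A minimal sketch of such a finite-state approximation: the shock process is replaced by a two-state Markov chain and long-run average output is estimated by simulation under a fixed consumption rule. The shock values, transition matrix, technology, and the 20 % investment rule are all illustrative assumptions:

```python
import random

# Two-state Markov shock approximating the random process {z_t} (assumed data).
Z_STATES = [0.9, 1.1]                   # "bad" and "good" productivity levels
P_Z = [[0.8, 0.2], [0.3, 0.7]]          # P(z_{t+1} = j | z_t = i)
ALPHA, DELTA = 0.3, 0.1

def f(k, z):
    """Shock-dependent output: f(k, z) = z * k**alpha + (1-delta) * k."""
    return z * k ** ALPHA + (1.0 - DELTA) * k

def simulate_mean_output(k0, T, seed=0):
    """Monte-Carlo estimate of long-run mean output under the fixed rule
    c_t = 0.8 * f(k_t, z_t), i.e. 20 % of output is invested."""
    rng = random.Random(seed)
    k, z_idx, acc = k0, 0, 0.0
    for _ in range(T):
        y = f(k, Z_STATES[z_idx])
        acc += y
        k = 0.2 * y  # capital accumulation under the fixed rule
        z_idx = 0 if rng.random() < P_Z[z_idx][0] else 1
    return acc / T

mean_y = simulate_mean_output(1.0, 5000)
```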

Extension of the Stochastic Growth Model
Up to now we have assumed that the probability vector and similarly for every

Formulation in Terms of Stochastic Dynamic Programming
The above model can be treated as a highly structured Markov decision chain with finite state space I = {1, 2, ..., N}. For t (or T) tending to infinity it can be shown (see e.g. Puterman, 1994; Ross, 1970) that under AS 3 the growth of the total expected utility is linear in time, and for a stationary policy d it is possible to conclude from (16) the existence of w_i's such that

    g + w_i = r_i(d(i)) + Σ_{j∈I} p_ij(d(i)) w_j,   i ∈ I,

where g is unique and the w_i's (for i ∈ I) are unique up to an additive constant (for details see e.g. Puterman, 1994; Ross, 1970).
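For a fixed stationary policy the pair (g, w) solves a linear system once a normalization such as w_0 = 0 is imposed. For a two-state chain the solution is closed-form; the reward vector and transition matrix below are illustrative assumptions, not data from the text:

```python
def evaluate_policy_2state(r, P):
    """Solve g + w_i = r_i + sum_j P[i][j] * w_j for a 2-state unichain,
    with the normalization w_0 = 0.  Returns (g, [0, w_1])."""
    p01, p10 = P[0][1], P[1][0]
    w1 = (r[1] - r[0]) / (p01 + p10)
    g = r[0] + p01 * w1
    return g, [0.0, w1]

# Illustrative one-stage expected rewards and transition matrix (assumed).
r = [1.0, 3.0]
P = [[0.5, 0.5], [0.25, 0.75]]
g, w = evaluate_policy_2state(r, P)
```

As a sanity check, g coincides with the stationary-distribution average of the rewards: here π = (p10, p01)/(p01 + p10) = (1/3, 2/3), so g = 1/3·1 + 2/3·3 = 7/3.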

Computation of Optimal Policies
In case the time horizon T is finite, it is necessary to calculate (backwards) the dynamic programming recursion according to (16). Considering the infinite time horizon (i.e. if T → ∞), finding a solution of (17), i.e. of

    ĝ + ŵ_i = max_a [ r_i(a) + Σ_{j∈I} p_ij(a) ŵ_j ],   i ∈ I,            (17)

is in some aspects much easier (an optimal policy can be found in the class of stationary policies, i.e. policies selecting actions only with respect to the current state of the Markov chain) and can be performed e.g. by policy iteration, by linear programming, or by solving a discounted version of the problem whose discount factor is very close to one. Here ĝ is the (unique) maximal mean reward and the ŵ_i's (for i ∈ I) are unique up to an additive constant (for details, see e.g. Puterman, 1994; Ross, 1970).
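The maximal mean reward and the relative values can also be approximated by relative value iteration, a standard undiscounted analogue of value iteration. A minimal sketch on an assumed two-state, two-action example (rewards, probabilities, and the "invest/consume" story are all illustrative):

```python
def relative_value_iteration(r, P, iters=500):
    """Relative value iteration for g + w_i = max_a [ r[i][a] + sum_j P[i][a][j]*w[j] ].
    Returns estimates (g, w) normalized so that w[0] = 0.
    r[i][a] is a one-stage reward, P[i][a][j] a transition probability."""
    n = len(r)
    w = [0.0] * n
    g = 0.0
    for _ in range(iters):
        v = [max(r[i][a] + sum(P[i][a][j] * w[j] for j in range(n))
                 for a in range(len(r[i])))
             for i in range(n)]
        g = v[0]                       # since w[0] = 0, v[0] estimates g
        w = [v[i] - v[0] for i in range(n)]
    return g, w

# Assumed data: in state 0 one can "invest" (no reward, likely move to state 1)
# or "consume" (reward 1, stay put); state 1 pays 2 and sometimes falls back.
R = [[0.0, 1.0], [2.0]]
P = [[[0.1, 0.9], [1.0, 0.0]], [[0.2, 0.8]]]
g_hat, w_hat = relative_value_iteration(R, P)
```

In this example investing is optimal: the resulting chain has stationary distribution (2/11, 9/11), so the maximal mean reward is 18/11.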
Here y_t denotes the total output at time t. This approach is relatively simple, but it ignores a lot of information and yields only very rough bounds on the optimal values. Obviously, significantly better results can be obtained if we replace the rough estimates of y_t generated by the overall upper and lower bounds: since u(·) is increasing, on replacing the production function f(k_t) by f^max(k_t) and f^min(k_t) we obtain upper and lower bounds on the total output at time t and, for fixed values of k_t, also upper and lower bounds on the maximal global utility of the consumers, respectively.
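The bounding idea can be sketched directly: propagating the deterministic dynamics with the extreme shock realizations brackets every stochastic output path, because f is increasing in k. The shock range, technology, and savings rate are illustrative assumptions:

```python
# Illustrative technology and extreme shock realizations (assumed).
ALPHA, DELTA = 0.3, 0.1
Z_MIN, Z_MAX = 0.9, 1.1

def f(k, z):
    return z * k ** ALPHA + (1.0 - DELTA) * k

def output_bounds(k0, T, saving=0.2):
    """Run k_{t+1} = saving * f(k_t, z) for z = Z_MIN and z = Z_MAX.
    Since f is increasing in k, the two paths give per-period lower and
    upper bounds on total output for every shock realization in between."""
    k_lo, k_hi = k0, k0
    lows, highs = [], []
    for _ in range(T):
        y_lo, y_hi = f(k_lo, Z_MIN), f(k_hi, Z_MAX)
        lows.append(y_lo)
        highs.append(y_hi)
        k_lo, k_hi = saving * y_lo, saving * y_hi
    return lows, highs

lows, highs = output_bounds(1.0, 20)
```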
In the literature (cf. Heer – Maußner, 2005; Majumdar – Mitra – Nishimura, 2000, or the monograph Stokey – Lucas, 1989) it is usually assumed that Z is a Markov process (in general with state space in R) or an autoregressive process. Moreover, we assume that the decision maker can observe the current values of the total output y_t in each period of the considered economy model, and that the one-stage reward is accrued only in even transitions. In what follows we assume that for an arbitrary policy the considered Markov chain contains a single class of recurrent states. Here r(d) is a (column) vector of one-stage expected rewards, i.e. the i-th element of r(d) is equal to r_i(d(i)).
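Assembling the transition matrix P(d) and the reward vector r(d) for a stationary policy d is mechanical once the MDP data are tabulated. A sketch on assumed two-state data; the rewards, probabilities, and the helper name policy_matrices are all illustrative:

```python
# Underlying MDP data (assumed): R[i][a] is a one-stage expected reward,
# P[i][a][j] a transition probability.
R = [[0.0, 1.0], [2.0]]
P = [[[0.1, 0.9], [1.0, 0.0]], [[0.2, 0.8]]]

def policy_matrices(d):
    """For a stationary policy d (state -> action), return (P_d, r_d),
    where the i-th row of P_d is P[i][d[i]] and the i-th element of
    r_d is R[i][d[i]]."""
    P_d = [P[i][d[i]] for i in range(len(P))]
    r_d = [R[i][d[i]] for i in range(len(R))]
    return P_d, r_d

P_d, r_d = policy_matrices([0, 0])  # choose action 0 in both states
```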