### One-step transition probabilities

If, for every time *t*, the conditional probability of a future event given any prior events and the current state does not depend on the prior events, the process has the Markov property. When, in addition, these transition probabilities do not change over time, the one-step transition probabilities are said to be stationary. A stochastic process *X*_{t} with the Markov property and stationary (one-step) transition probabilities is called a Markov chain. This property is expressed as:

$$P\left( X_{t+1} = j/X_{0} = k_{0}, X_{1} = k_{1}, \ldots, X_{t-1} = k_{t-1}, X_{t} = i \right) = P\left( X_{t+1} = j/X_{t} = i \right)$$

for *t* = 0, 1, … and every sequence *i*, *j*, *k*_{0}, *k*_{1}, …, *k*_{t-1}.

$$P\left( X_{t+n} = j/X_{t} = i \right) = P\left( X_{n} = j/X_{0} = i \right);\;{\text{for all }} t = 0, 1, \ldots$$

(3)

To simplify notation, assume that

$$P_{ij} = P\left( X_{t+1} = j/X_{t} = i \right)$$

(4)

$$P_{ij}^{\left( n \right)} = P\left( X_{t+n} = j/X_{t} = i \right)$$

(5)

The term "*n*-step" describes the transition probability \({p}_{i,j}^{(n)}\) of an individual's progression from one state to another in exactly *n* steps. The conditional probability \({p}_{i,j}^{(n)}\) is the probability that, starting in state *i*, the system will be in state *j* after exactly *n* steps (time units). Because these are conditional probabilities, they must be non-negative, and because the process must make a transition into some state, they must satisfy the following requirements.

$$P_{ij}^{\left( n \right)} \ge 0;\;{\text{for all }} i \;{\text{and}}\; j;\quad n = 0, 1, 2, \ldots$$

(6)

$$\sum_{j=0}^{M} P_{ij}^{\left( n \right)} = 1;\;{\text{for all }} i;\quad n = 0, 1, 2, \ldots$$

(7)
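The two requirements above can be checked numerically. The sketch below uses a small hypothetical three-state transition matrix (the values are illustrative, not from the study) and verifies conditions (6) and (7):

```python
import numpy as np

# Hypothetical one-step transition matrix for a 3-state chain;
# the entries are illustrative, any stochastic matrix works.
P = np.array([
    [0.5, 0.3, 0.2],
    [0.1, 0.6, 0.3],
    [0.2, 0.2, 0.6],
])

# Condition (6): every transition probability is non-negative.
assert (P >= 0).all()

# Condition (7): each row sums to 1, since from state i the
# process must make a transition into some state j.
assert np.allclose(P.sum(axis=1), 1.0)
```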

A convenient way of showing all the *n*-step transition probabilities for the states \(0, 1, 2, \dots, M\) is the matrix form. Equivalently, the *n*-step transition matrix is

$$\begin{array}{c} \mathrm{State} \\ 0 \\ 1 \\ \vdots \\ M \end{array}\left[\begin{array}{cccc} P_{00}^{(n)} & P_{01}^{(n)} & \cdots & P_{0M}^{(n)} \\ P_{10}^{(n)} & P_{11}^{(n)} & \cdots & P_{1M}^{(n)} \\ \vdots & \vdots & \ddots & \vdots \\ P_{M0}^{(n)} & P_{M1}^{(n)} & \cdots & P_{MM}^{(n)} \end{array}\right]$$

(8)

Note that the transition probability in a particular row and column is for the transition from the row state to the column state. When \(n=1\), we drop the superscript *n* and simply refer to this as the transition matrix. For \(n=0\), \({P}_{i,j}^{(0)}\) is just \(P({X}_{0}=j/{X}_{0}= i)\) and hence is 1 when \(i=j\) and 0 when \(i\ne j\). The Markov chains considered in this section have the following properties: (a) a finite number of states and (b) stationary transition probabilities.

Given the initial probabilities \(P(X_{0}=i)\) for all *i*, the (one-step) transition probabilities, i.e. the elements of the (one-step) transition matrix, are:

$$\begin{array}{c} \mathrm{State} \\ 0 \\ 1 \\ 2 \\ 3 \end{array}\left[\begin{array}{cccc} P_{00} & P_{01} & P_{02} & P_{03} \\ P_{10} & P_{11} & P_{12} & P_{13} \\ P_{20} & P_{21} & P_{22} & P_{23} \\ P_{30} & P_{31} & P_{32} & P_{33} \end{array}\right]$$

(9)

### Chapman-Kolmogorov equations

The following Chapman–Kolmogorov equations provide a method for computing these *n*-step transition probabilities:

$${P}_{i,j}^{(n)}= \sum_{k=0}^{M}{P}_{i,k}^{(m)}{P}_{k,j}^{(n-m)}$$

(10)

for \(i, j = 0, 1, \dots, M\) and any \(m = 1, 2, \dots, n-1\).

These equations state that in going from state *i* to state *j* in *n* steps, the process will be in some state *k* after exactly *m* (less than *n*) steps. Thus, \({P}_{i,k}^{(m)}{P}_{k,j}^{(n-m)}\) is just the conditional probability that, starting in state *i*, the process goes to state *k* after *m* steps and then to state *j* in the remaining *n* − *m* steps. Summing these conditional probabilities over all possible *k* must therefore yield \({P}_{i,j}^{(n)}\). The special cases of *m* = 1 and *m* = *n* − 1 lead to the expressions:

$${P}_{i,j}^{(n)}= \sum_{k=0}^{M}{P}_{i,k}{P}_{k,j}^{(n-1)} \quad \text{and} \quad {P}_{i,j}^{(n)}= \sum_{k=0}^{M}{P}_{i,k}^{(n-1)}{P}_{k,j}$$

(11)

where *i* and *j* represent all states. These expressions enable the *n*-step transition probabilities to be obtained recursively from the one-step transition probabilities. This recursive relationship is best explained in matrix notation. For *n* = 2, these expressions become:

$$P_{i,j}^{\left( 2 \right)} = \sum_{k = 0}^{M} P_{i,k} P_{k,j},\;{\text{for all states}}\;i\;{\text{and}}\;j,$$

(12)

These equations also hold in a trivial sense when *m* = 0 or *m* = *n*, but *m* = 1, 2, …, *n* − 1 are the only interesting cases. The \({P}_{i,j}^{(2)}\) are the elements of a matrix \({P}^{(2)}\). Note that these elements are obtained by multiplying the matrix of one-step transition probabilities by itself; that is, \({P}^{(2)}=P \cdot P= {P}^{2}\).

In the same manner, the above expressions for \({P}_{i,j}^{(n)}\) with *m* = 1 and *m* = *n* − 1 indicate that the matrix of *n*-step transition probabilities is

\({P}^{\left(n\right)}=P{P}^{\left(n-1\right)}= {P}^{\left(n-1\right)}P\), so that \(P{P}^{n-1}= {P}^{n-1}P= {P}^{n}\).

Thus, the *n*-step transition probability matrix \({P}^{(n)}\) can be obtained by computing the *n*th power of the one-step transition matrix *P*.
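As a minimal sketch of this result, the snippet below (with a hypothetical three-state matrix, not data from the study) computes \(P^{(n)}\) as the *n*th matrix power and confirms that it agrees with the Chapman–Kolmogorov split \(P^{(m)}P^{(n-m)}\) and with the recursion \(P \cdot P^{(n-1)}\):

```python
import numpy as np

# Hypothetical one-step transition matrix (illustrative values).
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])

n, m = 5, 2
P_n = np.linalg.matrix_power(P, n)   # P^(n) as the n-th power of P

# Chapman-Kolmogorov: splitting the n steps after any m steps
# yields the same n-step transition probabilities.
assert np.allclose(
    P_n,
    np.linalg.matrix_power(P, m) @ np.linalg.matrix_power(P, n - m),
)

# The recursive forms P^(n) = P * P^(n-1) = P^(n-1) * P also agree.
assert np.allclose(P_n, P @ np.linalg.matrix_power(P, n - 1))
assert np.allclose(P_n, np.linalg.matrix_power(P, n - 1) @ P)
```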

### Long-run properties of Markov Chains: steady-state probabilities

For any irreducible ergodic Markov chain, \(\lim_{n\to \infty }{P}_{i,j}^{(n)}\) exists and is independent of *i*. Furthermore, \(\lim_{n\to \infty }{P}_{i,j}^{(n)}= {\pi }_{j}>0\), where the \({\pi }_{j}\) uniquely satisfy the following steady-state equations.

$$\pi_{j} = \sum_{i = 0}^{M} \pi_{i} P_{i,j}, \;{\text{for}}\; j = 0,1, \ldots ,M\; {\text{and}}\; \sum_{j = 0}^{M} \pi_{j} = 1$$

(13)

The \({\pi }_{j}\) are the steady-state probabilities of the Markov chain. Steady-state means that the probability of finding the process in a particular state, say *j*, after a large number of transitions tends to \({\pi }_{j}\), regardless of the probability distribution of the initial state. It is vital to emphasise that steady-state does not mean the process settles down into one state. The process continues to make transitions from state to state, and at any step *n* the transition probability from state *i* to state *j* is still \({P}_{i,j}\). The \({\pi }_{j}\) can also be read as stationary probabilities (not to be confused with stationary transition probabilities) in the following sense: if the initial probability of being in state *j* is \({\pi }_{j}\) (that is, \(P\left\{{X}_{0}=j\right\} = {\pi }_{j}\) for all *j*), then the probability of finding the process in state *j* at any time *n* = 1, 2, … is also given by \(P\left\{{X}_{n}=j\right\} = {\pi }_{j}\).

The steady-state equations consist of *M* + 2 equations in *M* + 1 unknowns, yet they have a unique solution because one equation is redundant and can be eliminated. The redundant equation cannot be \({\sum }_{j=0}^{M}{\pi }_{j}=1\), since \({\pi }_{j}=0\) for all *j* would satisfy the other *M* + 1 equations. Those *M* + 1 equations determine the solution only up to a multiplicative constant, and it is the normalisation equation that forces the solution to be a probability distribution. For a two-state chain, the steady-state equations are:

$${\pi }_{0}= {\pi }_{0}{P}_{\mathrm{0,0}}+ {\pi }_{1}{P}_{\mathrm{1,0}}$$

$${\pi }_{1}= {\pi }_{0}{P}_{\mathrm{0,1}}+ {\pi }_{1}{P}_{\mathrm{1,1}}$$

$$1= {\pi }_{0}+ {\pi }_{1}$$
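For a concrete two-state illustration (the matrix entries below are assumptions chosen for illustration, not estimates from the study), the steady-state probabilities can be found by replacing one redundant balance equation with the normalisation condition, and one can check that every row of \(P^{n}\) converges to \(\pi\):

```python
import numpy as np

# Hypothetical two-state one-step transition matrix.
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])

# Balance equations pi = pi P, written as (P^T - I) pi = 0;
# drop one redundant row and append the normalisation pi_0 + pi_1 = 1.
A = np.vstack([(P.T - np.eye(2))[:-1], np.ones(2)])
b = np.array([0.0, 1.0])
pi = np.linalg.solve(A, b)   # steady-state distribution

# Every row of P^n approaches pi as n grows (here pi = [4/7, 3/7]).
assert np.allclose(np.linalg.matrix_power(P, 50), np.vstack([pi, pi]))
```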

The Markov property means that the future probabilistic behaviour of the process is uniquely determined by the current state of the system. *X*_{n} is Markovian if

$$P\left\{{X}_{t+1}=j/{X}_{0}= {k}_{0}, {X}_{1}= {k}_{1}, \dots , {X}_{t-1}= {k}_{t-1}, {X}_{t}=i\right\} = P\left\{{X}_{t+1}=j/{X}_{t}= i\right\}, \;\text{for}\; i=0, 1, \ldots$$

(14)

### Generalised linear regression model (GLM)

The generalised linear model is a versatile extension of linear regression that accommodates response variables with non-normal error distributions. A GLM generalises linear regression by relating the linear predictor to the response variable through a link function, and by allowing the variance of each measurement to be a function of its predicted value. The model analyses the simultaneous impacts of several explanatory variables, including mixes of categorical and continuous predictors. The model structure describes relationships and associations, the parameters measure the strength of association, and these parameters are estimated from the data. The goal is to model the expected value of *Y* as a linear function of *X*, such that \(E\left({Y}_{i}\right)={\beta }_{0}+{\beta }_{1}{X}_{i}\). The structural form of the model is:

$${Y}_{i}={\beta }_{0}+{\beta }_{1}{X}_{i}+{e}_{i}$$

(15)

where the \({Y}_{i}\) are independent and normally distributed, the errors are normally distributed, \(e_{i}\sim N(0, \sigma^{2})\), *X* is fixed, and \(\sigma^{2}\) is the constant variance. In this study, *Y*_{t} is the vector of economic growth measures and *X*_{t} is the vector of stock market measures. To integrate these measures into the model, and to enable us to establish the impact of share price movement on economic growth, Eq. (15) is transformed into Eq. (16) as follows:

$${\mathrm{EGR}}_{t}={\beta }_{0}+{\beta }_{1}{\mathrm{SPM}}_{t}+{\beta }_{2}{\mathrm{CV}}_{t}+{\varepsilon }_{t}$$

(16)

where EGR_{t} and SPM_{t} are vectors of economic growth measures and share price movement, respectively, while CV_{t} denotes the other control variables in the study. EGR_{t} is measured with the gross domestic product growth rate (GDP*gr*_{t}). SPM_{t} is proxied with the all-share index, while CV_{t} controls for other variables such as market capitalisation (MC_{t}) as a percentage of GDP, the budget deficit ratio (BDR_{t}) and the exchange rate (EXR_{t}). \({\beta }_{0}\) and \({\varepsilon }_{t}\) are the intercept and the error term, respectively, while \({\beta }_{1}\) and \({\beta }_{2}\) are the parameters that measure the coefficients of share price movement and market capitalisation of the second-tier market of the Nigerian Exchange. The data on GDP*gr*_{t} and the control variables were sourced from the National Bureau of Statistics (NBS).
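A minimal sketch of estimating a model of the form of Eq. (16) is given below. The data are synthetic and the coefficient values are assumptions chosen purely for illustration, not the study's estimates; ordinary least squares is used, which is the Gaussian identity-link special case of a GLM:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic illustrative data standing in for the study's series;
# the series and the coefficients below are assumptions, not the
# paper's actual data or estimates.
T = 200
SPM = rng.normal(size=T)            # share price movement proxy
CV = rng.normal(size=(T, 3))        # controls standing in for MC, BDR, EXR
eps = rng.normal(scale=0.1, size=T)
EGR = 1.5 + 0.8 * SPM + CV @ np.array([0.3, -0.2, 0.1]) + eps

# Stack an intercept column and estimate the coefficients by
# ordinary least squares.
X = np.column_stack([np.ones(T), SPM, CV])
beta, *_ = np.linalg.lstsq(X, EGR, rcond=None)

# beta should be close to the assumed values [1.5, 0.8, 0.3, -0.2, 0.1].
print(beta.round(2))
```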