The following operations can be performed on Markov Chain models.
- Transient States – prompts for a number of steps to analyze and determines the sequence of state distributions over that number of steps;
- Transient Reward – prompts for a number of steps to analyze and determines the sequence of expected rewards over that number of steps;
- Transient Matrix – prompts for a number of steps and determines the transition matrix for that number of steps;
- Communicating Classes – determines the equivalence classes of communicating states;
- Classify Transient Recurrent – classifies all states as either transient or recurrent;
- Determine Periodicity – determines the (a)periodicity of all the recurrent states;
- Determine MC Type – determines the type of the Markov Chain: whether it is ergodic and whether it is a unichain;
- Hitting Probability – prompts for a state and determines the probability of hitting that state from each state of the chain;
- Reward until Hit – prompts for a state and determines the expected reward gained until hitting that state, from each state of the chain;
- Hitting Probability Set – prompts for a set of states and determines the probability of hitting any state in that set from each state of the chain;
- Reward until Hit Set – prompts for a set of states and determines the expected reward gained until hitting any state in that set, from each state of the chain;
- Limiting Matrix – computes the (Cesàro) limiting matrix of the Markov Chain;
- Limiting Distribution – computes the (Cesàro) limiting distribution of the Markov Chain;
- Long-run Reward – computes the long-run expected reward for an ergodic Markov Chain or the long-run expected average reward for a non-ergodic Markov Chain.
- View Transition Diagram – opens a separate view on the transition diagram.
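To make the Transient States operation concrete: it amounts to repeatedly applying the update pi_{k+1} = pi_k · P. Below is a minimal sketch, assuming the chain is given as a row-stochastic matrix (a list of lists) and an initial distribution; the function name and the two-state example matrix are illustrative, not part of the tool:

```python
def transient_distributions(P, pi0, steps):
    """Return the sequence pi_0, pi_1, ..., pi_steps, where
    pi_{k+1}[j] = sum_i pi_k[i] * P[i][j]."""
    n = len(P)
    seq = [list(pi0)]
    pi = list(pi0)
    for _ in range(steps):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
        seq.append(pi)
    return seq

# Two-state example: stay in state 0 with prob. 0.9, in state 1 with 0.8.
P = [[0.9, 0.1],
     [0.2, 0.8]]
dists = transient_distributions(P, [1.0, 0.0], 3)
```

Each element of `dists` is a probability distribution over the states, so its entries always sum to one.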
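The hitting probabilities behind the Hitting Probability operation satisfy h[target] = 1 and h[i] = sum_j P[i][j] · h[j] for the other states; one simple way to approximate the solution is fixed-point iteration. A sketch under the same list-of-lists assumption; the gambler's-ruin matrix is only an example, and the tool may well solve the linear system directly instead:

```python
def hitting_probabilities(P, target, iters=2000):
    """Approximate the probability of ever hitting `target` from each
    state by iterating h <- P h while pinning h[target] = 1."""
    n = len(P)
    h = [0.0] * n
    h[target] = 1.0
    for _ in range(iters):
        h = [sum(P[i][j] * h[j] for j in range(n)) for i in range(n)]
        h[target] = 1.0
    return h

# Gambler's ruin on {0, 1, 2, 3}: 0 and 3 absorbing, fair coin in between.
P = [[1.0, 0.0, 0.0, 0.0],
     [0.5, 0.0, 0.5, 0.0],
     [0.0, 0.5, 0.0, 0.5],
     [0.0, 0.0, 0.0, 1.0]]
h = hitting_probabilities(P, 3)  # h[1] ~ 1/3, h[2] ~ 2/3, h[0] = 0
```

From state 0 the chain is absorbed and never reaches state 3, so its hitting probability stays 0, while the interior states converge to the classic 1/3 and 2/3 values.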
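The Cesàro limit used by the Limiting Distribution operation exists even for periodic chains, where the plain limit of the step distributions does not; averaging the transient distributions makes this concrete. A sketch, again assuming a list-of-lists transition matrix; the period-2 chain below is purely an illustration:

```python
def cesaro_distribution(P, pi0, terms=2000):
    """Cesàro average (1/N) * sum_{k<N} pi_k of the transient
    distributions; it converges even when pi_k itself oscillates."""
    n = len(P)
    pi = list(pi0)
    acc = list(pi0)
    for _ in range(terms - 1):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
        acc = [a + p for a, p in zip(acc, pi)]
    return [a / terms for a in acc]

# Period-2 chain: the distribution alternates between [1, 0] and [0, 1],
# but the Cesàro average settles at [0.5, 0.5].
P = [[0.0, 1.0],
     [1.0, 0.0]]
d = cesaro_distribution(P, [1.0, 0.0])
```

For an aperiodic chain the Cesàro average coincides with the ordinary limiting distribution, so the averaged form is the more general of the two.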