# Purpose

In statistical estimation or forecasting, when the probability distribution of the target variable is already known, there is no need to run a Monte-Carlo simulation: the analytical (“true”) answer can be computed directly. However, if the distribution is unknown, or known but too hard to solve analytically, Monte-Carlo simulation is often the only practical way to obtain an answer. This is the more realistic case.

Pseudo-random numbers (PRNs) are generated by various algorithms, some of which are introduced in this blog. However, as is well known, a computer program can neither generate truly random numbers nor make them follow the expected probability distribution exactly. Because PRNs are not true random numbers, Monte-Carlo simulation always carries some error: a difference between the simulation results and the theoretical expected values (i.e., the analytical answer). Methods that reduce this simulation error are therefore welcome.

This simulation error comes from the difference between the analytical and the numerical variance: PRNGs sometimes generate outliers that drive the variance of the PRNs away from its theoretical value, and these outliers can distort the simulation results, often badly.
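As a tiny illustration of this error (a hypothetical example, not from the post), the sketch below estimates E[U] for U ~ Uniform(0,1), whose analytical answer is 0.5; the absolute gap between the estimate and 0.5 is exactly the simulation error discussed above, and it typically shrinks as the sample size grows.

```python
import random

def mc_mean(n, seed=0):
    # Monte-Carlo estimate of E[U] for U ~ Uniform(0,1); the analytical
    # answer is 0.5, so the absolute difference is the simulation error
    rng = random.Random(seed)
    return sum(rng.random() for _ in range(n)) / n

err_small = abs(mc_mean(100) - 0.5)       # error with few samples
err_large = abs(mc_mean(100_000) - 0.5)   # error with many samples
```

With 100,000 draws the error is typically on the order of 10⁻³, while 100 draws leave an error an order of magnitude larger.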

Many researchers have studied how to overcome this problem, and there have been primarily two approaches for this purpose:

• PRNG modifications: improve the PRNs themselves by bringing their numerical variance (and mean) closer to the analytical values.

• Variance reduction methods: reduce the variance of the simulation results directly, rather than that of the PRNs.

• The antithetic variate technique is the simplest variance reduction algorithm.
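As a concrete sketch of antithetic variates (an illustrative example, not the post's own code), the snippet below estimates E[e^U] for U ~ Uniform(0,1), whose analytical answer is e − 1. Each uniform draw u is paired with its antithetic mate 1 − u; because e^u and e^(1−u) are negatively correlated, averaging the pair cancels much of the sampling noise.

```python
import math
import random

def plain_mc(f, n, rng):
    # ordinary Monte-Carlo estimate of E[f(U)], U ~ Uniform(0,1)
    return sum(f(rng.random()) for _ in range(n)) / n

def antithetic_mc(f, n, rng):
    # each draw u is paired with its antithetic mate 1-u and the pair
    # is averaged; for monotone f the two are negatively correlated,
    # which reduces the variance of the estimate
    pairs = n // 2
    total = 0.0
    for _ in range(pairs):
        u = rng.random()
        total += 0.5 * (f(u) + f(1.0 - u))
    return total / pairs

f = math.exp                                   # E[e^U] = e - 1 analytically
est_plain = plain_mc(f, 10_000, random.Random(1))
est_anti = antithetic_mc(f, 10_000, random.Random(1))
```

Both estimators use 10,000 function evaluations, but the antithetic estimate usually lands noticeably closer to e − 1.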

Posted 15 May 2011 in Pseudo-Random Number Generator

# Purpose

Following the previous two posts about pseudo-random number generator algorithms, this post introduces another one: complementary multiply-with-carry (CMWC), proposed by George Marsaglia (2002). For its extremely long period and good statistical properties, it is considered among the best PRNGs available today, along with the Mersenne Twister (MT19937) by Makoto Matsumoto and Takuji Nishimura (1998). CMWC, however, is simpler and faster than MT. It also often shows better statistical performance, passing all of the standard PRN tests, while MT19937 sometimes fails some of them.
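A minimal Python sketch of a CMWC generator, following the structure of Marsaglia's published CMWC4096 (lag r = 4096, multiplier a = 18782, base b = 2³²). The seeding below, via Python's own `random` module, is an illustrative assumption and not part of Marsaglia's recipe; note the carry must start below the multiplier.

```python
import random

class CMWC:
    """Complementary multiply-with-carry, after Marsaglia's CMWC4096."""

    def __init__(self, seed, r=4096, a=18782):
        init = random.Random(seed)                 # illustrative seeding only
        self.Q = [init.getrandbits(32) for _ in range(r)]
        self.c = init.randrange(a)                 # carry must start below a
        self.i = r - 1
        self.r, self.a = r, a

    def next32(self):
        self.i = (self.i + 1) % self.r
        t = self.a * self.Q[self.i] + self.c       # 64-bit product plus carry
        self.c = t >> 32                           # new carry = high 32 bits
        x = ((t & 0xFFFFFFFF) + self.c) & 0xFFFFFFFF
        if x < self.c:                             # low-word addition overflowed
            x = (x + 1) & 0xFFFFFFFF
            self.c += 1
        self.Q[self.i] = (0xFFFFFFFE - x) & 0xFFFFFFFF   # complement step ("C" in CMWC)
        return self.Q[self.i]

gen = CMWC(1)
sample = [gen.next32() for _ in range(2000)]
```

Each call yields a 32-bit word; the complement step `(2³² − 2) − x` is what distinguishes CMWC from plain multiply-with-carry.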

Posted 15 May 2011 in Pseudo-Random Number Generator

# Purpose

Among the many kinds of algorithms, the linear congruential generator (LCG) is the most common one thanks to its speed. However, as mentioned in the post about the LCG, it is not suitable for Monte-Carlo or any other serious study that uses random numbers. Researchers therefore need to learn other kinds of algorithms.

The generalized feedback shift register (GFSR) has long been a powerful alternative to the LCG for its long period and statistical robustness.
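For illustration, here is a minimal GFSR sketch in Python using the well-known R(250,103) recurrence x_n = x_{n−250} XOR x_{n−103}. The seeding of the lag table (via Python's own `random` module here) is an assumption for the sketch; a production GFSR must seed the table so that its bit columns are linearly independent, otherwise the full period is not attained.

```python
import random

class GFSR:
    """Minimal GFSR sketch: x_n = x_{n-250} XOR x_{n-103} (R(250,103))."""

    def __init__(self, seed, p=250, q=103, bits=32):
        # illustrative seeding only; real initialization must guarantee
        # linearly independent bit columns for the full period
        init = random.Random(seed)
        self.state = [init.getrandbits(bits) for _ in range(p)]
        self.p, self.q, self.i = p, q, 0

    def next(self):
        j = (self.i + self.p - self.q) % self.p   # position of x_{n-q}
        x = self.state[self.i] ^ self.state[j]    # x_{n-p} XOR x_{n-q}
        self.state[self.i] = x                    # overwrite the oldest word
        self.i = (self.i + 1) % self.p
        return x

gen = GFSR(7)
stream = [gen.next() for _ in range(2000)]
```

The state is kept in a circular buffer of the last p words, so each output costs only one XOR and two index updates — the source of the GFSR's speed.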

Posted 10 May 2011 in Pseudo-Random Number Generator

# Purpose

The random number generator is the most important part of a Monte-Carlo simulation. The quality of the random numbers — randomness, absence of serial correlation, and a long period — is essential for a good simulation. For these reasons, there have been many studies of random number generator algorithms.

Strictly speaking, computer algorithms cannot generate truly “random” numbers, because any program runs exactly as its algorithm defines. In other words, a computer cannot generate “unpredictable” numbers.

Despite this limitation, it is certainly possible to generate numbers by computer program that look statistically random, are uncorrelated, and have a sufficiently long period. Such algorithms are referred to as pseudo-random number generators (PRNGs), and they are widely accepted and used.

Aside from Monte-Carlo, there are also methods to generate numbers suitable for cryptography (i.e., encryption).

Posted 7 May 2011 in Pseudo-Random Number Generator

# Purpose

The conjugate gradient method is, along with Newton’s method, one of the most popular algorithms for solving optimization problems. Compared with Newton’s method, conjugate gradient is more robust: it is guaranteed to converge to the true answer (for an n-dimensional quadratic problem, in at most n iterations in exact arithmetic), while Newton’s method converges faster when given an initial input close enough to the true answer.

Since it uses first-order derivatives, the conjugate gradient method is usually described as requiring the gradient (or, for vector-valued problems, the Jacobian matrix) to be computed. When analytical derivatives are unavailable, numerical differentiation is a convenient remedy.
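As a sketch of this idea (an assumed example, not the post's own code), here is a plain-Python nonlinear conjugate gradient in the Fletcher–Reeves variant, with the gradient obtained by central-difference numerical differentiation. The line search is a simple ternary search over an assumed bracket [0, 4], which is adequate for the toy quadratic at the bottom.

```python
def num_grad(f, x, h=1e-6):
    # central-difference approximation of the gradient of f at x
    g = []
    for k in range(len(x)):
        xp, xm = list(x), list(x)
        xp[k] += h
        xm[k] -= h
        g.append((f(xp) - f(xm)) / (2.0 * h))
    return g

def line_min(f, x, d, lo=0.0, hi=4.0, iters=80):
    # ternary search for the step length along direction d;
    # the bracket [0, 4] is an assumption that covers this toy problem
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        f1 = f([xi + m1 * di for xi, di in zip(x, d)])
        f2 = f([xi + m2 * di for xi, di in zip(x, d)])
        if f1 < f2:
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2.0

def cg_minimize(f, x0, iters=20):
    # Fletcher-Reeves nonlinear conjugate gradient, gradients by num_grad
    x = list(x0)
    g = num_grad(f, x)
    d = [-gi for gi in g]
    for _ in range(iters):
        t = line_min(f, x, d)
        x = [xi + t * di for xi, di in zip(x, d)]
        g_new = num_grad(f, x)
        beta = sum(gi * gi for gi in g_new) / max(sum(gi * gi for gi in g), 1e-30)
        d = [-gn + beta * di for gn, di in zip(g_new, d)]
        g = g_new
    return x

# toy quadratic with known minimum at (1, -2)
f_example = lambda v: (v[0] - 1.0) ** 2 + 2.0 * (v[1] + 2.0) ** 2
x_opt = cg_minimize(f_example, [0.0, 0.0])
```

On a 2-dimensional quadratic with an (effectively) exact line search, the method converges in about two iterations, matching the theory mentioned above.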

Posted 5 May 2011 in Optimization

# Purpose

Numerical solutions provide an easy and fast way to obtain the answers to linear and nonlinear equation problems.

# Concepts

There are several algorithms for this purpose, but this post introduces only two: the bisection method and Newton’s method. Both are very common and popular, each for its own strengths: the bisection method is slow but always converges to the true answer (given a bracketing interval), while Newton’s method converges extremely quickly but sometimes fails.

The bisection method requires two initial values forming an interval that contains the true answer (i.e., the function values at the endpoints have opposite signs). It then takes the midpoint of the interval to form a new, halved interval, and repeats.
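A minimal Python sketch of the bisection loop just described (illustrative, assuming f(a) and f(b) bracket a root):

```python
def bisection(f, a, b, tol=1e-10):
    # assumes f(a) and f(b) have opposite signs, so a root lies in [a, b]
    fa = f(a)
    if fa * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while b - a > tol:
        m = (a + b) / 2.0            # midpoint halves the interval
        if fa * f(m) <= 0:
            b = m                    # root lies in the left half
        else:
            a, fa = m, f(m)          # root lies in the right half
    return (a + b) / 2.0

root = bisection(lambda x: x * x - 2.0, 0.0, 2.0)   # approaches sqrt(2)
```

Each pass halves the interval, so the error shrinks by a guaranteed factor of 2 per iteration — slow but certain, as noted above.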

Newton’s method requires one initial value, which is updated at each iteration to approach the true answer by $x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}$. Geometrically, the step length is $f(x_n)\cot{\theta} = \frac{f(x_n)}{\tan{\theta}}$, where $\theta$ is the angle of the tangent line at $x_n$, so $\tan{\theta} = f'(x_n)$.
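A minimal sketch of Newton’s iteration in Python, with the derivative $f'(x_n)$ approximated by a central difference (an assumption for this sketch; with an analytical derivative the update is the same):

```python
def newton(f, x0, tol=1e-12, max_iter=50, h=1e-7):
    # x_{n+1} = x_n - f(x_n)/f'(x_n), derivative by central difference
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        d = (f(x + h) - f(x - h)) / (2.0 * h)
        if d == 0:
            raise ZeroDivisionError("zero derivative: Newton's method fails")
        x -= fx / d
    return x

root = newton(lambda x: x * x - 2.0, 1.0)   # approaches sqrt(2)
```

Near a simple root the iteration converges quadratically, which is why only a handful of iterations are needed when the initial value is good; the zero-derivative check guards one of the failure modes mentioned above.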

Posted 1 May 2011 in Solving Equations

# Purpose

Numerical integration is used to compute the integral of functions that are too hard or impossible to integrate analytically.
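As an illustration (the post does not name a specific rule), here is a minimal composite Simpson's rule sketch in Python:

```python
def simpson(f, a, b, n=1000):
    # composite Simpson's rule over [a, b] with n (even) subintervals
    if n % 2:
        n += 1                       # Simpson's rule needs an even count
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(a + k * h)   # 4,2,4,... weight pattern
    return s * h / 3.0

area = simpson(lambda x: x * x, 0.0, 1.0)   # analytical answer is 1/3
```

Simpson's rule is exact for polynomials up to degree three, so the test function above is recovered essentially to machine precision.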

Posted 30 April 2011 in Solving Equations