Consider the random variable \[X=X_0+\theta\;,\] with some random variable \(X_0\) and (unknown) central tendency or position parameter \(\theta\in\mathbb R\). The distribution function \(F_0\) of \(X_0\) is assumed to be symmetric about 0, so that the distribution function \(F_\theta\) of \(X\), given by \(F_\theta(t)=F_0(t-\theta)\), is symmetric with respect to \(\theta\).

Let \(X_1,\dots,X_n\) be an i.i.d. sample from the distribution \(F_\theta\). We consider three frequently used estimators of the central tendency parameter \(\theta\), namely

the sample mean \(\hat\theta_1=\frac1n\sum_{i=1}^nX_i\),

the sample median \(\hat\theta_2=X_{(\lceil n/2\rceil)}\), where \(X_{(i)}\) denotes the \(i\)-th order statistic in the sample and \(\lceil m\rceil\) denotes the smallest integer greater than or equal to \(m\), and

the truncated sample mean \(\hat\theta_3=\bar X_{\text{tr($\gamma$)}}\) with truncation parameter \(\gamma \in\,]0,1/2[\), defined as the sample mean after deleting the top \(100\gamma\,\%\) and the bottom \(100\gamma\,\%\) of the observations in the sample. More precisely, \[\bar X_{\text{tr($\gamma$)}}=\frac1{n-2\lceil \gamma n\rceil}\sum_{i=\lceil \gamma n\rceil+1}^{n-\lceil \gamma n\rceil}X_{(i)}\;.\] For \(\gamma\) close to 0, the truncated mean behaves almost like the sample mean. For \(\gamma\) close to \(1/2\), it gives results similar to the sample median. In the following simulations, \(\gamma\) is set to \(0.05\).
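As a concrete illustration, the three estimators can be sketched in Python with NumPy. This is a sketch following the definitions above, not the program behind the interface; the function name is ours.

```python
import numpy as np

def estimators(x, gamma=0.05):
    """Return (sample mean, sample median, truncated sample mean)
    for a sample x: the median is the order statistic X_(ceil(n/2)),
    and the truncated mean drops the lowest and highest
    ceil(gamma * n) observations before averaging."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    k = int(np.ceil(gamma * n))          # observations cut at each end
    theta1 = x.mean()                    # sample mean
    theta2 = x[int(np.ceil(n / 2)) - 1]  # sample median (1-based order statistic)
    theta3 = x[k:n - k].mean()           # truncated sample mean
    return theta1, theta2, theta3
```

For instance, `estimators([1, 2, 3, 4, 5])` returns `(3.0, 3.0, 3.0)`, since this sample is symmetric around 3.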

Having at hand three estimators of the same parameter \(\theta\), the natural question arises: which one is the best? One can show that (under some regularity assumptions on the distribution \(F_0\)) all three estimators are consistent and asymptotically normal. However, asymptotic properties of the estimators matter little in practice, where mainly the behavior on *finite* samples is of interest. Unfortunately, the theoretical study of the estimators on *finite* samples is rather involved. Hence, we choose to conduct Monte-Carlo simulations to compare the estimators.

Our Monte-Carlo simulations consist of

fixing a model (i.e. choosing a distribution \(F_0\) and a parameter value \(\theta\)) and a sample size \(n\),

generating large samples (of size \(R\)) of realizations of all estimators \(\hat\theta_1,\hat\theta_2\) and \(\hat\theta_3\) and

using these samples to approximate the estimators' characteristics, such as the bias, the standard error and the mean squared error (MSE).

In what follows, \(F_0\) is set to Student's \(t\)-distribution \(t_q\). The interface below allows the user to choose the number of degrees of freedom \(q\) of the \(t\)-distribution as well as the parameter value \(\theta\), the sample size \(n\) and the number \(R\) of realizations of each estimator.

The program generates i.i.d. samples \((x_1^{(r)},\dots,x_n^{(r)})\) for \(r=1,\dots,R\) from the distribution \(F_\theta\) and evaluates all estimators \(\hat\theta_1,\hat\theta_2\) and \(\hat\theta_3\) on every sample \((x_1^{(r)},\dots,x_n^{(r)})\). Thus, we obtain an i.i.d. sample \((\hat\theta_k^{(1)},\dots,\hat\theta_k^{(R)})\) of realizations of estimator \(\hat\theta_k\) for \(k=1,2,3\). In the figure these samples are represented by boxplots. Moreover, the (sample) bias, standard error and mean squared error associated with the different estimators are computed and illustrated in the figure.
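The simulation loop just described can be sketched as follows. This is a vectorized sketch with our own naming, not the code behind the interface; we use NumPy's `standard_t` generator for \(F_0=t_q\).

```python
import numpy as np

def monte_carlo(theta=0.0, q=3, n=100, R=1000, gamma=0.05, seed=0):
    """Draw R i.i.d. samples of size n from F_theta (Student t_q shifted
    by theta), evaluate the three estimators on each sample, and return
    the estimated bias, standard error and MSE of each estimator."""
    rng = np.random.default_rng(seed)
    samples = theta + rng.standard_t(q, size=(R, n))  # one sample per row
    s = np.sort(samples, axis=1)
    k = int(np.ceil(gamma * n))
    realizations = {
        "mean": samples.mean(axis=1),
        "median": s[:, int(np.ceil(n / 2)) - 1],      # X_(ceil(n/2))
        "truncated mean": s[:, k:n - k].mean(axis=1),
    }
    results = {}
    for name, th in realizations.items():
        results[name] = {
            "bias": th.mean() - theta,
            "standard error": th.std(ddof=1),
            "MSE": ((th - theta) ** 2).mean(),
        }
    return results
```

For example, `monte_carlo(theta=10.0, q=5, n=100, R=2000)` returns small biases and MSEs of the order of a few hundredths for all three estimators.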

To see how estimation results depend on the different parameters of the set-up, try different values of one of them while keeping the others fixed.

**Influence of the number \(R\) of realizations of each estimator.** Increasing \(R\) implies longer computing times but lower variance of the results. Choosing \(R=1000\) seems to be a good compromise for the following simulations.

**Influence of the parameter value \(\theta\).** When fixing \(n\) and \(q\) and varying \(\theta\) (e.g. \(\theta=0, 10, 444, -1200\)), we observe that the bias, standard error and mean squared error remain essentially unchanged for all three estimators. That means that the estimation quality of all three estimators is invariant with respect to the value of \(\theta\). Note that this invariance is characteristic of estimators of central tendency.

**Influence of the sample size \(n\).** When fixing \(\theta\) and \(q\) and varying \(n\) (e.g. \(n=10, 100, 1000, 10000\)), we observe that all estimators improve as the sample size increases. Furthermore, for large \(n\) the differences between the three estimators vanish. Consequently, on large samples the choice of estimator matters little, whereas on small samples it does matter.
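This improvement with \(n\) can be checked numerically; a minimal sketch for the sample mean, with \(F_0=t_5\) and \(\theta=0\) as illustrative choices of our own:

```python
import numpy as np

rng = np.random.default_rng(0)

# Estimated MSE of the sample mean for theta = 0, F_0 = t_5,
# with R = 2000 replications per sample size n.
mse = {}
for n in (10, 100, 1000):
    theta_hat = rng.standard_t(5, size=(2000, n)).mean(axis=1)
    mse[n] = (theta_hat ** 2).mean()  # true theta is 0
    print(n, mse[n])
```

Since \(\operatorname{Var}(t_5)=5/3\), the printed MSE values decrease roughly like \(\tfrac{5/3}{n}\), i.e. by a factor of about 10 per line.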

**Influence of the number of degrees of freedom \(q\).** Finally, we fix \(\theta\) and \(n\) (e.g. \(n=100\)) and vary the number of degrees of freedom \(q\) (e.g. \(q=1,2,3,5,10,50\)). Interestingly, which estimator is best (in terms of the mean squared error) depends on \(q\). How can this phenomenon be explained? Recall that the \(t_1\)-distribution is the Cauchy distribution, a heavy-tailed distribution that has no finite mean. As a consequence, the law of large numbers does not apply to the sample mean, which is therefore not consistent in this case. This explains why the sample mean fails completely in this setting. When the number of degrees of freedom \(q\) increases, the tails of Student's \(t\)-distribution become lighter; indeed, the \(t_q\)-distribution tends to the standard normal distribution as \(q\) goes to infinity. In other words, these simulations illustrate the impact of the tails of the distribution of the observations on the sample mean, the sample median and the truncated sample mean. We see that the sample median outperforms the other estimators for heavy-tailed distributions, while the sample mean does slightly better than the others for light tails. Furthermore, we observe that the truncated mean gives very satisfactory results in every setting. Indeed, the idea of the truncated mean is to delete outliers, which have a strong (adverse) impact on the sample mean. In this sense, the truncated sample mean can be viewed as a robust version of the sample mean.
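The Cauchy case can be illustrated directly. A sketch with our own choices of seed and sizes; note that `np.median` averages the two central order statistics for even \(n\), a harmless difference from \(X_{(\lceil n/2\rceil)}\) here:

```python
import numpy as np

rng = np.random.default_rng(0)

# theta = 0, F_0 = t_1 (Cauchy): R = 2000 samples of size n = 100
samples = rng.standard_t(1, size=(2000, 100))
mse_mean = (samples.mean(axis=1) ** 2).mean()
mse_median = (np.median(samples, axis=1) ** 2).mean()
# The sample mean of n i.i.d. Cauchy variables is again standard Cauchy,
# so its MSE estimate does not shrink with n and is wildly unstable,
# while the median's MSE stays small.
print(mse_mean, mse_median)
```

The estimated MSE of the median is of the order \((\pi/2)^2/n\approx 0.025\), whereas the one of the mean is larger by several orders of magnitude and changes drastically from seed to seed.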

Realized by Tabea Rebafka

Developed by Daphné Giorgi and Altaïr Pelissier