Abstract

For a sample $$X_1, X_2, \ldots, X_N$$ of independent, identically distributed copies of a log-logistically distributed random variable X, the maximum likelihood estimation is analysed in detail when a left-truncation point $$x_L > 0$$ is introduced. Due to scaling properties it is sufficient to investigate the case $$x_L = 1$$. Here the corresponding maximum likelihood equations for a normalised sample (i.e. a sample divided by $$x_L$$) do not always possess a solution. A simple criterion guarantees the existence of a solution: let $$\mathbb{E}(\cdot)$$ denote the expectation induced by the normalised sample, and let $$\beta_0 = \mathbb{E}(\ln X)^{-1}$$ be the inverse of the expectation of the logarithm of the sampled random variable X (which is greater than $$x_L = 1$$). If this value $$\beta_0$$ is greater than a certain positive number $$\beta_C$$, then a solution of the maximum likelihood equations exists. Here the number $$\beta_C$$ is the unique solution of the moment equation $$\mathbb{E}(X^{-\beta_C}) = \tfrac{1}{2}$$. In the case of existence, a profile likelihood function can be constructed and the optimisation problem is reduced to one dimension, leading to a robust numerical algorithm. For data samples for which the maximum likelihood equations do not admit a solution, it is shown that the Pareto distribution is the $$L^1$$-limit of the degenerate left-truncated log-logistic distribution, where $$L^1(\mathbb{R}^+)$$ is the usual Banach space of functions whose absolute value is Lebesgue-integrable. A large-sample analysis showing consistency and asymptotic normality complements these results. Finally, two applications to real-world data are presented.
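The following is a minimal numerical sketch, not the authors' algorithm, of the two ingredients named in the abstract: checking the existence criterion $$\beta_0 > \beta_C$$ on a normalised sample and fitting the left-truncated log-logistic distribution by a one-dimensional profile over the shape parameter. The parametrisation $$F(x) = 1/\bigl(1 + (x/\alpha)^{-\beta}\bigr)$$, the use of SciPy, the purely numerical (rather than closed-form) profiling step, and all function names are assumptions made for illustration only.

```python
# Hedged illustration only; the paper's closed-form profile likelihood is not reproduced here.
import numpy as np
from scipy.optimize import brentq, minimize_scalar

def existence_criterion(sample, x_L):
    """Return (beta_0, beta_C) for the sample normalised by the truncation point x_L."""
    z = np.asarray(sample, dtype=float) / x_L        # normalised sample, all entries > 1
    beta_0 = 1.0 / np.mean(np.log(z))                # beta_0 = 1 / E(ln X)
    g = lambda b: np.mean(z ** (-b)) - 0.5           # empirical moment equation E(X^{-b}) = 1/2
    beta_C = brentq(g, 1e-9, 1e3)                    # g is strictly decreasing; bracket assumed wide enough
    return beta_0, beta_C

def _neg_loglik(log_alpha, beta, z):
    """Negative log-likelihood of the log-logistic law left-truncated at 1 (normalised scale)."""
    t = beta * (np.log(z) - log_alpha)               # log((z/alpha)^beta)
    # log f(z) = log(beta/alpha) + (beta-1)*log(z/alpha) - 2*log(1 + (z/alpha)^beta)
    log_f = (np.log(beta) - log_alpha
             + (beta - 1.0) * (np.log(z) - log_alpha)
             - 2.0 * np.logaddexp(0.0, t))
    # survival at the normalised truncation point 1: S(1) = 1/(1 + alpha^(-beta))
    log_S1 = -np.logaddexp(0.0, -beta * log_alpha)
    return -(np.sum(log_f) - z.size * log_S1)

def fit_truncated_loglogistic(sample, x_L):
    """Profile out the scale numerically for each beta, then maximise over beta."""
    z = np.asarray(sample, dtype=float) / x_L
    def profile(beta):
        inner = minimize_scalar(_neg_loglik, args=(beta, z),
                                bounds=(-10.0, 10.0), method="bounded")
        return inner.fun
    outer = minimize_scalar(profile, bounds=(1e-3, 50.0), method="bounded")
    beta_hat = outer.x
    inner = minimize_scalar(_neg_loglik, args=(beta_hat, z),
                            bounds=(-10.0, 10.0), method="bounded")
    alpha_hat = np.exp(inner.x) * x_L                # undo the normalisation of the scale
    return alpha_hat, beta_hat
```

In line with the abstract, a fit of this kind is only meaningful when the criterion holds (beta_0 > beta_C); otherwise the truncated log-logistic model degenerates and the Pareto distribution arises as its limit.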