Notations
\(\mathcal{S}^D = \left\{\left(x_1, \dots, x_D\right)^\top \in {\mathbb{R}_+^*}^D, \, \sum_{1\leq i \leq D} x_i = 1\right\}\) is the simplex in dimension \(D\), and its elements are called compositions.
If \(\mathbf{z}\in {\mathbb{R}_+^*}^D\), \(\mathcal{C}\left(\mathbf{z}\right) = \left(\frac{z_1}{\sum_{1\leq i \leq D} z_i}, \dots, \frac{z_D}{\sum_{1\leq i \leq D} z_i}\right)^\top\) is the unique composition collinear with \(\mathbf{z}\); the operation \(\mathcal{C}\) is called the closure.
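For concreteness, a minimal NumPy sketch of the closure operation (the function name `closure` is ours, chosen for illustration):

```python
import numpy as np

def closure(z):
    """Closure C(z): rescale a positive vector so that its parts sum to 1."""
    z = np.asarray(z, dtype=float)
    return z / z.sum()

z = np.array([2.0, 1.0, 1.0])
x = closure(z)                                      # array([0.5, 0.25, 0.25])
assert np.isclose(x.sum(), 1.0) and np.all(x > 0)   # x lies in S^3
```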
\(\mathbf{x}=\left(x_1, \dots, x_D\right)^\top\) and \(\mathbf{y}=\left(y_1, \dots, y_D\right)^\top\) are two compositions.
\(\mathbf{x}\oplus \mathbf{y}= \mathcal{C}\left(\left(x_1 y_1, \dots, x_D y_D\right)^\top\right)\) is the Aitchison addition (or perturbation).
If \(\alpha \in \mathbb{R}\), \(\alpha \odot \mathbf{x}= \mathcal{C}\left(\left({x_1}^\alpha, \dots, {x_D}^\alpha\right)^\top\right)\) is the Aitchison scalar multiplication (or powering).
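Both Aitchison operations translate directly into code. The sketch below (function names ours) also checks two standard facts: the uniform composition \(\mathcal{C}\left(\mathbf{1}_D\right)\) is the neutral element of \(\oplus\), and \(\left(-1\right) \odot \mathbf{x}\) is the inverse of \(\mathbf{x}\):

```python
import numpy as np

def closure(z):
    return np.asarray(z, dtype=float) / np.sum(z)

def perturb(x, y):
    """Aitchison addition: x (+) y = C(x_1 y_1, ..., x_D y_D)."""
    return closure(np.asarray(x) * np.asarray(y))

def power(alpha, x):
    """Aitchison scalar multiplication: alpha (.) x = C(x_1^alpha, ..., x_D^alpha)."""
    return closure(np.asarray(x, dtype=float) ** alpha)

x = closure([1.0, 2.0, 4.0])
n = closure([1.0, 1.0, 1.0])                        # uniform composition
assert np.allclose(perturb(x, n), x)                # n is neutral for (+)
assert np.allclose(perturb(x, power(-1.0, x)), n)   # (-1) (.) x inverts x
```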
\(\langle \mathbf{x},\mathbf{y} \rangle_a = \frac{1}{D} \sum_{1\leq i<j\leq D} \ln\left(\frac{x_i}{x_j}\right) \, \ln\left(\frac{y_i}{y_j}\right)\) is the Aitchison inner product.
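The pairwise log-ratio form above coincides with the Euclidean inner product of the centered log-ratio vectors, \(\langle \mathbf{x},\mathbf{y} \rangle_a = \sum_{1\leq i \leq D} \ln\left(\frac{x_i}{g\left(\mathbf{x}\right)}\right) \ln\left(\frac{y_i}{g\left(\mathbf{y}\right)}\right)\) with \(g\) the geometric mean, a standard identity. A sketch checking this numerically (the clr transform is not defined above; it appears here only for the illustration):

```python
import numpy as np

def clr(x):
    """Centered log-ratio: ln(x) minus the mean of ln(x)."""
    lx = np.log(x)
    return lx - lx.mean()

def aitchison_inner(x, y):
    """Pairwise log-ratio form: (1/D) * sum over i < j of ln(xi/xj) ln(yi/yj)."""
    D = len(x)
    lx, ly = np.log(x), np.log(y)
    return sum((lx[i] - lx[j]) * (ly[i] - ly[j])
               for i in range(D) for j in range(i + 1, D)) / D

rng = np.random.default_rng(0)
x, y = rng.dirichlet(np.ones(4)), rng.dirichlet(np.ones(4))
assert np.isclose(aitchison_inner(x, y), clr(x) @ clr(y))
```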
\(\mathcal{B}= \left(\mathbf{e}_1, \dots, \mathbf{e}_{D-1}\right)\) is an orthonormal basis of \(\mathcal{S}^D\); \(\mathop{\mathrm{ilr}}\) is its associated isometric log-ratio transformation and \(\mathop{\mathrm{V}}\) its contrast matrix.
\(\mathbf{1}_D = \left(1, \dots, 1\right)^\top \in \mathbb{R}^D\) is the vector whose entries are all ones, \(\mathop{\mathrm{I}}_{D}\) denotes the \(D \times D\) identity matrix, and \(\mathop{\mathrm{G}}_D\) denotes the \(D \times D\)-matrix \(\mathop{\mathrm{I}}_{D} - \frac{1}{D} \mathbf{1}_{D} {\mathbf{1}_{D}}^\top\).
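These last two items are tied together by standard identities: the contrast matrix of any orthonormal basis satisfies \({\mathop{\mathrm{V}}}^\top \mathop{\mathrm{V}} = \mathop{\mathrm{I}}_{D-1}\) and \(\mathop{\mathrm{V}} {\mathop{\mathrm{V}}}^\top = \mathop{\mathrm{G}}_D\), and \(\mathop{\mathrm{ilr}}\left(\mathbf{x}\right) = {\mathop{\mathrm{V}}}^\top \mathop{\mathrm{clr}}\left(\mathbf{x}\right)\). The sketch below builds one arbitrary contrast matrix via QR (it is not the specific basis \(\mathcal{B}\) of the text) and verifies these identities:

```python
import numpy as np

def contrast_matrix(D):
    """One arbitrary D x (D-1) contrast matrix: orthonormal columns orthogonal to 1_D."""
    A = np.eye(D)[:, 1:] - 1.0 / D   # columns e_j - (1/D) 1_D, j = 2, ..., D
    Q, _ = np.linalg.qr(A)           # orthonormalize the columns
    return Q

def clr(x):
    lx = np.log(x)
    return lx - lx.mean()

def ilr(x, V):
    """ilr coordinates of x in the basis with contrast matrix V."""
    return V.T @ clr(x)

D = 4
V = contrast_matrix(D)
G = np.eye(D) - np.ones((D, D)) / D           # centering matrix G_D
assert np.allclose(V.T @ V, np.eye(D - 1))    # V^T V = I_{D-1}
assert np.allclose(V @ V.T, G)                # V V^T = G_D

x, y = np.array([0.1, 0.2, 0.3, 0.4]), np.array([0.4, 0.3, 0.2, 0.1])
assert np.isclose(ilr(x, V) @ ilr(y, V), clr(x) @ clr(y))   # ilr is an isometry
```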
\(\mathbf{X}\) is a random variable on \(\mathbb{R}^D\) whose distribution is \(F_{\,\mathbf{X}}: \mathcal{B}\left(\mathbb{R}^D\right) \rightarrow \left[0,1\right]\); if \(F_{\,\mathbf{X}}\) is absolutely continuous with respect to the Lebesgue measure, its probability density function is \(f_{\,\mathbf{X}}: \mathbb{R}^D \rightarrow \mathbb{R}_+\).
\(\left(\mathbf{X}_1, \dots, \mathbf{X}_n\right)\) are independent random variables, each with distribution \(F_{\,\mathbf{X}}\); they represent the statistical observations (an i.i.d. sample from \(F_{\,\mathbf{X}}\)).
If \(\theta\) is a parameter of the distribution of \(\mathbf{X}\), we write \(\hat{\theta}\) for an estimator of \(\theta\) whenever possible, with two exceptions: the expectation \(\mathbb{E}\left[\mathbf{X}\right]\) is estimated by the sample mean \(\overline{\mathbf{X}}\), and the variance \(\mathbb{V}\left[\mathbf{X}\right]\) by the sample variance matrix \(\hat{\Sigma} = \hat{\Sigma}\left(\mathbf{X}_1, \dots, \mathbf{X}_n\right)\). We omit the dependence on \(\mathbf{X}_1, \dots, \mathbf{X}_n\) whenever possible.
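For completeness, a minimal NumPy sketch of the two exceptional estimators (note that `np.cov` uses the unbiased \(\frac{1}{n-1}\) normalization by default, a choice the text leaves open):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))         # n = 100 observations X_1, ..., X_n in R^3

X_bar = X.mean(axis=0)                # sample mean, estimating E[X]
Sigma_hat = np.cov(X, rowvar=False)   # sample variance matrix, estimating V[X]
```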