The Mathematics of Compositional Analysis

The term compositional data analysis is historically associated with the approach based on the logratio transformations introduced in the eighties. Two main principles of this methodology are scale invariance and subcompositional coherence. New developments and concepts that emerged in the last decade have revealed the need to clarify the notions of composition, compositional sample space and subcomposition. In this work the mathematics of compositional analysis based on an equivalence relation is presented. A logarithmic isomorphism between quotient spaces induces a metric space structure for compositions. Logratio compositional analysis is the statistical analysis of compositions based on this structure, and consists of analysing logratio coordinates.


Introduction
The term compositional data (CoDa) was first introduced by Aitchison (1982) and later developed in Aitchison (1986). In these publications CoDa are identified with vectors of strictly positive components whose sum is always equal to one; that is, vectors of the unit simplex S^D = {(w_1, ..., w_D) : w_1 > 0, ..., w_D > 0; w_1 + ... + w_D = 1}.
The term compositional data analysis (CoDA) has been implicitly associated with the methodology proposed by Aitchison (1986), which is based on applying the logratio transformations to the CoDa and describing, analysing and modelling them statistically from the logratios of their components. The main aim of this methodology is to free the CoDa from the constraint of the constant sum in order to be able to use the standard distributions in real space to model the CoDa, e.g., the multivariate normal distribution. This strategy rests on two fundamental concepts, the so-called principles of CoDA (Aitchison 1986), namely, 'scale invariance' and 'subcompositional coherence'. From Aitchison (1986), "scale invariance merely reinforces the intuitive idea that a composition provides information only about relative values not about absolute values and therefore ratios of components are the relevant entities to study"; and "subcompositional coherence demands that two scientists, one using full compositions and the other using subcompositions of these full compositions, should make the same inference about relations within the common parts". Later it was seen that the methodology initiated by Aitchison is more than a simple transformation of the CoDa, because it is in fact a way to provide the simplex with a Euclidean space structure. The interested reader can refer to Egozcue, Barceló-Vidal, Martín-Fernández, Jarauta-Bragulat, Díaz-Barrero, and Mateu-Figueras (2011) for further information.
The identification of the term CoDA with the methodology based on the logratio transformations developed by Aitchison has meant that other possible methods for analysing CoDa have made little impact. Watson and Philip (1989), Wang, Liu, Mok, Fu, and Tse (2007) or Scealy and Welsh (2011), for example, prefer to apply the techniques characteristic of directional data, given that they take the positive orthant of the unit hypersphere centred at the origin as the sample space of the CoDa. At the time, this alternative method for analysing CoDa was the cause of intense epistolary exchanges between D. F. Watson (and G. M. Philip) and J. Aitchison (see Aitchison 1990; Watson 1990; Aitchison 1991; Watson 1991). Recently, Scealy and Welsh (2014) have returned to the controversial questioning of the principles of CoDa, which they consider to have been formulated specifically to exclude any methodology other than the one developed by Aitchison. As Scealy and Welsh (2014) recognise, the crux of the controversy lies in the definitions of composition and sample space in CoDA, both of which were introduced by Aitchison (1986) and based on constant-sum vectors. The lack of clarity in the presentation of the properties of scale invariance and subcompositional coherence is also a matter for discussion.
The main aim of this paper is to provide a precise and unequivocal definition of the concepts of composition, CoDa sample space and subcomposition, on which compositional analysis (CoAn) is based. Contrary to Scealy and Welsh (2014), we turn to mathematics to introduce these concepts with maximum precision. Thus, in Section 2 we define the quotient space of the compositions and we provide a precise definition of the concept of subcomposition. We also define what we understand by CoAn, distinguishing it from the traditional concept of CoDA. In Section 3 we show how the logarithmic and exponential functions allow us to structure the sample space as a Euclidean space and to operate with the logratio coordinates of the data as if we were doing so in real space. In the last section we compile the advantages and limitations of CoAn based on logratio coordinates and of the analysis based on transformations that take the positive orthant of the unit hypersphere as the sample space. Finally, we present the main conclusions.

A composition is an equivalence class
We assume that our data and observations materialise in vectors w = (w_1, ..., w_D) with strictly positive components, that is, vectors from the space IR^D_+, the positive orthant of IR^D. Note that we are setting aside the case of zero values in the data. We consider the zero as a special value that deserves a particular analysis according to its nature (Palarea-Albaladejo and Martín-Fernández 2015); that is, the reason why a zero value is present in a CoDa set is informative and determines the approach to be applied. The interested reader is referred to Martín-Fernández, Palarea-Albaladejo, and Olea (2011) for further information. In the discussion we outline some of the approaches and discuss some kinds of zero.
Sometimes the observational vectors w are constant-sum vectors. Typical examples are the data from time-use surveys, where the sum equals 24 in hours, 1440 in minutes or 100 in percentages. This case of CoDa is known as 'closed data'. In other situations, the components of the observational vectors are themselves meaningful, that is, they represent absolute magnitudes. However, in spite of that, we can decide to take only the relative information into account for our analysis. For example, in the analysis of household expenditure on D commodity groups, we can decide to analyse the distribution of the expenditure regardless of the total. In both scenarios we are implicitly assuming that the vectors w and kw, with k ∈ IR_+, provide us with the same compositional information, that is, the information given by the ratios between the components. For example, the vectors (0.3, 0.5, 0.2), (30, 50, 20), (7.2, 12, 4.8), and (3/2, 5/2, 1) provide the same compositional information. In both cases we are assuming that our data are CoDa and our analysis will be a CoAn. Moreover, from a strictly mathematical point of view this implies that in a CoAn the sample space is not IR^D_+.
Definition 2.1. Two D-observational vectors w and w* are compositionally equivalent, written w ∼ w*, if there is a positive constant k such that w = kw*. This equivalence relation on IR^D_+ splits the space into equivalence classes, called D-compositions or, simply, compositions. The composition generated by an observational vector w, i.e. the equivalence class of w, is symbolized by w, that is, the class {kw : k ∈ IR_+}. Following Aitchison (1986), it is clear that a D-part composition can be geometrically interpreted as a ray from the origin in the positive orthant of IR^D (Figure 1). Therefore, from a strictly mathematical point of view, the term CoAn is equivalent to assuming that the sample space is the set of all D-compositions.
Definition 2.2. The set of all D-compositions, that is, the quotient space IR^D_+/∼, is called the D-compositional space or, in brief, compositional space, and is symbolized by C^D. We symbolize by ccl (from compositional class) the mapping from IR^D_+ to C^D which assigns to each D-observational vector w the composition w, i.e., ccl(w) = w. Property 2.1. Two D-observational vectors w = (w_1, ..., w_D) and w* = (w*_1, ..., w*_D) are compositionally equivalent when the information provided by their ratios is the same, that is, w_i/w_j = w*_i/w*_j for each i, j = 1, ..., D.
Any D-composition w is completely determined by the ratios w_i/w_j of its components. Therefore, in a CoAn the relevant information provided by the observational vector w is found not in its components w_i, but rather in its ratios w_i/w_j. This is what we mean when we say that a composition only contains 'relative information' about its components. Note that all the observational vectors on the same ray (Figure 1) have the same ratios, providing the same relative information. That is, any point on the ray can be selected as a representative of the equivalence class, and any statistical analysis has to provide the same information regardless of the representative selected. Importantly, if one applies a statistical method that does not take this essential attribute of compositions into account, the application of different criteria to select the representatives will give different results and, likely, one will draw different conclusions.
To conclude, when we decide to do a CoAn we are assuming that the sample space of our data is the compositional space C D , which means in fact an acceptance of the 'scale invariance' principle.
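As a quick numerical illustration, compositional equivalence can be checked by comparing component-wise ratios. The following Python sketch (the function name is ours, not from the paper) verifies that the four vectors listed above generate the same composition:

```python
import numpy as np

def compositionally_equivalent(w, w_star, tol=1e-10):
    """w ~ w* iff w = k * w_star for some k > 0, i.e. iff all
    component-wise ratios w_i / w*_i take the same value."""
    w, w_star = np.asarray(w, float), np.asarray(w_star, float)
    ratios = w / w_star
    return bool(np.allclose(ratios, ratios[0], atol=tol))

# The four vectors from the text lie on the same ray through the origin:
vectors = [(0.3, 0.5, 0.2), (30, 50, 20), (7.2, 12, 4.8), (1.5, 2.5, 1.0)]
print(all(compositionally_equivalent(vectors[0], v) for v in vectors))  # True
```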

Representatives of compositions
Any composition w is determined by any observational vector w that belongs to the equivalence class. Thus, many different criteria can be used to select a representative of a composition. Each criterion gives rise to a different reference frame onto which to project the compositions of C^D. Here we present the most commonly used criteria, which facilitate interpretation and have relevant mathematical properties.

Definition 2.3. The linear criterion selects the unit-sum vector w/(w_1 + ... + w_D) to represent the composition w. We symbolize by r_l the mapping from C^D to the subset S^D of IR^D_+ which makes this assignment, i.e., r_l(w) = w/(w_1 + ... + w_D), where S^D is the well-known unit simplex, historically considered as the sample space of CoDa.
The mapping r_l corresponds to the constraining or closure operator C introduced by Aitchison (1986). Geometrically, r_l(w) is the intersection of the ray going from the origin through w and the hyperplane of IR^D defined by the equation w_1 + ... + w_D = 1 (Figure 1). This criterion can be generalized to representatives with a sum equal to 100 or any other positive value.
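A minimal sketch of the closure operator in Python (the function name and the optional `kappa` argument for sums other than one are our own):

```python
import numpy as np

def closure(w, kappa=1.0):
    """Closure operator: project a positive vector onto the representative
    with component sum kappa (kappa=1 gives the unit simplex S^D)."""
    w = np.asarray(w, float)
    return kappa * w / w.sum()

print(closure([30, 50, 20]))        # [0.3 0.5 0.2]
print(closure([30, 50, 20], 100))   # [30. 50. 20.]
```

Because closure only rescales along the ray, applying it to any representative of a composition yields the same point of S^D.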
Definition 2.4. We symbolize by r_s the mapping from C^D to the subset Sph^D_+ of IR^D_+ which assigns to the composition w the intersection of the ray going from the origin through w and the unit hypersphere of IR^D centred at the origin, i.e., r_s(w) = w/||w||, where Sph^D_+ is the strictly positive orthant of the unit hypersphere of IR^D centred at the origin. We call this selection criterion the spherical criterion (Figure 1) because the representatives are unit-norm vectors using the classical Euclidean norm. This selection criterion was proposed by Watson and Philip (1989).
Definition 2.5. The hyperbolic criterion, r_h, assigns to the composition w the intersection of the ray going from the origin through w and the hyperbolic surface Hip^D_+ in IR^D_+ implicitly defined by the equation w_1 · ... · w_D = 1, i.e., r_h(w) = w/g(w), where g(w) = (w_1 · ... · w_D)^{1/D} is the geometric mean of the components of the vector w (Figure 1).
Note that the function composition log ∘ r_h is equivalent to the centred logratio transformation (clr) introduced by Aitchison (1986): clr(w) = log(w/g(w)).
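The hyperbolic representative and the clr transformation can be sketched as follows (illustrative code, not from the paper). Note that a clr vector always sums to zero, i.e., it lies in the hyperplane orthogonal to 1_D, and that clr is scale invariant:

```python
import numpy as np

def r_h(w):
    """Hyperbolic representative: w divided by its geometric mean, so that
    the product of the components equals 1."""
    w = np.asarray(w, float)
    return w / np.exp(np.mean(np.log(w)))

def clr(w):
    """Centred logratio transformation: log of the hyperbolic representative."""
    return np.log(r_h(w))

w = [0.3, 0.5, 0.2]
print(np.prod(r_h(w)))   # 1.0 up to floating point
print(clr(w).sum())      # 0.0 up to floating point
```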
The mappings r_l, r_s and r_h can also be viewed as scale-invariant functions from IR^D_+ to IR^D_+. A function f defined on IR^D_+ is said to be 'scale invariant' if for any positive constant k and for any observational vector w, the function verifies f(kw) = f(w).
These criteria for selecting a representative of a composition can be extended to any surface defined in IR^D_+ using a bijective function. Indeed, it is sufficient to associate each composition with the intersection of the corresponding ray with the surface.

Subcompositions
In a CoAn, attention is usually focused on a particular subset of the components of our observations of IR^D_+. For example, in time-use surveys we might only be interested in those activities other than the sleeping hours. If the analysis to be carried out on the selected components of our observations must also be compositional, then their sample space also needs to be interpreted as a quotient space. This brings us to the need to introduce the concept of subcomposition.
Definition 2.6. Given a composition w ∈ C^D, any composition obtained from the selection of two or more components of the D-observational vector w is termed a subcomposition of w. More precisely, let s be the number of selected components, with 2 ≤ s < D, and i_1 < ... < i_s the subindices of these components (we implicitly assume that the subindices of the D-observational vectors are 1, ..., D). Let S be the s × D matrix with ones in the positions (1, i_1), ..., (s, i_s) and zeros in the remaining positions. Forming a subcomposition can be viewed as the transformation sub_S from C^D to C^s which assigns to w the composition generated by Sw. The symbol w_S indicates the observational subvector Sw = (w_{i_1}, ..., w_{i_s}), and w_S also represents the final subcomposition, which belongs to the compositional space C^s. The transformation sub_S is compatible with the equivalence relation ∼, that is, equivalent observational vectors are transformed into equivalent subvectors. Importantly, the selected components (w_{i_1}, ..., w_{i_s}) provide the same relative information regardless of whether they belong to w or they form the subcomposition w_S. This 'subcompositional coherence' is an inherent attribute of compositions rather than a required principle. The formation of a subcomposition w_S from a D-composition w can be geometrically interpreted as the orthogonal projection of the ray associated with w onto the subspace of IR^D_+ generated by the positive coordinate axes associated with the components in the subcomposition. Figure 2 shows the subcompositions for the case D = 3 and the relationship with the corresponding representatives.
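Forming a subcomposition via the selection matrix S can be sketched as follows (illustrative code; note that we use 0-based indices, unlike the 1-based subindices of the text):

```python
import numpy as np

def selection_matrix(indices, D):
    """s x D matrix S with a one in position (r, i_r) for each selected part."""
    S = np.zeros((len(indices), D))
    for r, i in enumerate(indices):
        S[r, i] = 1.0
    return S

def sub(w, indices):
    """Subcomposition: a representative of the class generated by S w."""
    return selection_matrix(indices, len(w)) @ np.asarray(w, float)

w = [0.1, 0.2, 0.3, 0.4]
w_sub = sub(w, [0, 2])      # select parts 1 and 3 (0-based indices here)
print(w_sub)                # [0.1 0.3]
# The ratio between the selected parts is unchanged (subcompositional coherence):
print(w_sub[0] / w_sub[1] == w[0] / w[2])  # True
```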

The Euclidean compositional space
Any statistical analysis with data from the sample space C^D needs this space to have an algebraic and a metric structure. Remember that such basic concepts as the mean and the variance of a set of data are based on the algebraic and metric structure of the sample space of the data. The strategy that we develop is to define an isomorphism between C^D and another Euclidean space using the logarithmic function. Although this isomorphism may not be the only feasible one, other possibilities are unknown to us.

A quotient Euclidean space in the real space
The well-known classical Euclidean space IR^D is based on the addition and subtraction operations. Because we need to connect the relative information provided by the ratios of components with an existing Euclidean space, the logarithmic function becomes a useful option.
Figure 3 shows that the classes z of L^D can be geometrically interpreted as straight lines parallel to the vector 1_D = (1, ..., 1). A simple criterion for selecting a representative of an equivalence class z is to take the intersection point of the straight line associated with this class and the hyperplane through the origin orthogonal to 1_D, that is, the subspace V^D = {z ∈ IR^D : z_1 + ... + z_D = 0}. Definition 3.2. We denote by r_{V^D} the one-to-one mapping which assigns to each class z this representative, r_{V^D}(z) = H_D z.
With these definitions, the quotient space L^D becomes a real vector space. The class of 0_D is the neutral element and the opposite of z is the class −z. Moreover, the mapping r_{V^D} is an isomorphism between the vector space (L^D, +, ·) and the subspace V^D of IR^D (Equation 7). Since the dimension of V^D is D − 1, the dimension of the vector space L^D is also D − 1.
The vector space structure defined on L^D is coherent with subcompositional analysis because one can define subvectors in the space V^D and reproduce them in L^D using the inverse mapping r_{V^D}^{-1}. More precisely, the mapping sub_S (Equation 5) corresponds to the orthogonal projection of the hyperplane V^D (Equation 7) onto the subspace of IR^D defined implicitly by {z ∈ IR^D : z_1 + ... + z_D = 0; z_{j_1} = 0, ..., z_{j_{D−s}} = 0}, where j_1, ..., j_{D−s} are the subindices of the non-selected components.
Given that the elements of L^D can be interpreted as straight lines parallel to the vector 1_D, one can define the distance between two classes z and z* of L^D as the Euclidean distance between these two straight lines in IR^D. This distance is equal to the length of the difference vector r_{V^D}(z) − r_{V^D}(z*) (Figure 4).
Following this strategy, it is possible to reproduce on L^D the Euclidean structure defined on V^D ⊂ IR^D. Definition 3.4. For each z, z* ∈ L^D, we define the L-inner product <z, z*>_L as the usual inner product <r_{V^D}(z), r_{V^D}(z*)> in IR^D. It is then possible to define a norm and a distance in L^D from the L-inner product.
Definition 3.5. The L-norm of an equivalence class z ∈ L^D is given by ||z||_L = <z, z>_L^{1/2}, and the L-distance between two classes z and z* in L^D is given by d_L(z, z*) = ||z − z*||_L = ||r_{V^D}(z) − r_{V^D}(z*)||.
From the definitions (9) and (10), the quotient space L^D becomes a Euclidean space isometric to the subspace V^D of IR^D. Definition 3.6. We will symbolize by logc the transformation from C^D to L^D which assigns to the composition w the class of L^D generated by log w, i.e., logc(w) is the class of log w.

The logarithmic isomorphism between the quotient spaces
The inverse transformation expc from L^D to C^D assigns to each class z the composition generated by exp z, i.e., expc(z) = ccl(exp z). The representative in V^D of the equivalence class logc(w) is log(w/g(w)), where g(w) is the geometric mean of the vector w.
Importantly, the function composition r_{V^D} ∘ logc is equivalent to the transformation clr (Aitchison 1986). This one-to-one correspondence between C^D and L^D allows a real vector space structure isomorphic to that of L^D to be defined on C^D.
In correspondence with the sum in L^D, the inner operation ⊗ in C^D is defined as w ⊗ w* = ccl(w_1 w*_1, ..., w_D w*_D). Similarly, in correspondence with the product by a constant in L^D, the external operation ⊙ in C^D is defined as α ⊙ w = ccl(w_1^α, ..., w_D^α). The operations ⊗ and ⊙ are, respectively, the perturbation and power operations introduced by Aitchison (1986).
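A minimal sketch of the perturbation and power operations (illustrative code of our own; the operations act component-wise on representatives, and closure is applied only to display a unit-sum representative of the result):

```python
import numpy as np

def closure(w):
    w = np.asarray(w, float)
    return w / w.sum()

def perturb(w, w_star):
    """Perturbation: component-wise product, the image of + in L^D."""
    return np.asarray(w, float) * np.asarray(w_star, float)

def power(alpha, w):
    """Power operation: component-wise power, the image of scalar product."""
    return np.asarray(w, float) ** alpha

w = np.array([1.0, 2.0, 4.0])
p = np.array([1.0, 2.0, 2.0])
print(closure(perturb(w, p)))  # unit-sum representative of w perturbed by p
# The class of (1,...,1) is neutral, and (1/w_1,...,1/w_D) generates the inverse:
print(closure(perturb(w, 1.0 / w)))  # [1/3 1/3 1/3], the neutral class
```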
Therefore, (C^D, ⊗, ⊙) becomes a real vector space of dimension D − 1, isomorphic to the quotient space L^D and to the subspace V^D of IR^D. In the commutative group (C^D, ⊗), the composition generated by 1_D = (1, ..., 1) is the neutral element, and the inverse composition w^{-1} of w is the composition generated by (1/w_1, ..., 1/w_D). Moreover, the real vector space structure of (C^D, ⊗, ⊙) is compatible with the concept of subcomposition.
Property 3.2. The mapping sub_S defined in Equation 5 is a linear function between the vector spaces (C^D, ⊗, ⊙) and (C^s, ⊗, ⊙). Therefore, it holds that sub_S(w ⊗ w*) = w_S ⊗ w*_S and sub_S(α ⊙ w) = α ⊙ w_S, for any w, w* ∈ C^D and α ∈ IR.

The compositional space as an affine Euclidean space
Because (C^D, ⊗, ⊙) is a real vector space, it can be viewed as an affine space when the group (C^D, ⊗) operates on C^D as a group of transformations.
Definition 3.8. Given a composition p ∈ C^D, the perturbation associated with p is the transformation from C^D to C^D defined by w ↦ p ⊗ w. We then say that p ⊗ w is the composition which results when the perturbation p is applied to the composition w.
Perturbations in the compositional space play the same role as translations in real space. Like translations, the set of all perturbations in C^D is a commutative group isomorphic to (C^D, ⊗). Thus, the composition of two perturbations p_1 and p_2 is the perturbation associated with p_1 ⊗ p_2. Furthermore, the perturbation associated with 1_D is the identity perturbation, which produces no change when applied to a composition. Also, for any given perturbation p there is the inverse perturbation p^{-1}, which undoes the changes produced by p. Finally, given two compositions w and w* ∈ C^D, a unique perturbation p exists which transforms w into w*. This perturbation is the perturbation difference between w and w*. Thus, the measurement of the 'difference' between two compositions is defined from the ratios between the components of the compositions.
The one-to-one transformations logc (Equation 11) and expc (Equation 12) between C^D and L^D allow the real Euclidean structure defined on L^D to be transferred to C^D.
Definition 3.9. The compositional inner product of two compositions w and w* is <w, w*>_C = <logc(w), logc(w*)>_L. Importantly, <w, w*>_C = <clr w, clr w*>, i.e., the standard inner product of the clr-transformed vectors.
From this inner product in C D we can define a norm and a distance in the compositional space.
Definition 3.10. The compositional norm of a composition w ∈ C^D is given by ||w||_C = <w, w>_C^{1/2}, and the compositional distance between two compositions w and w* of C^D is given by d_C(w, w*) = ||w ⊗ (w*)^{-1}||_C. The distance d_C(w, w*) defined on C^D is equivalent to the Aitchison distance (Aitchison, Barceló-Vidal, Martín-Fernández, and Pawlowsky-Glahn 2000), which can be expressed as the usual Euclidean distance between the corresponding clr-transformed vectors.
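Since d_C equals the Euclidean distance between clr vectors, the Aitchison distance can be sketched as follows (illustrative code of our own); being defined on equivalence classes, it is unaffected by the representatives chosen:

```python
import numpy as np

def clr(w):
    """Centred logratio transformation of a positive vector."""
    w = np.asarray(w, float)
    return np.log(w) - np.mean(np.log(w))

def aitchison_distance(w, w_star):
    """Compositional distance d_C: Euclidean distance between clr vectors."""
    return np.linalg.norm(clr(w) - clr(w_star))

# Scale invariance: the distance does not depend on the representatives.
d1 = aitchison_distance([0.3, 0.5, 0.2], [1, 1, 1])
d2 = aitchison_distance([30, 50, 20], [7, 7, 7])
print(np.isclose(d1, d2))  # True
```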
Property 3.3. In relation to subcompositions, the distance d_C satisfies what is known as subcompositional dominance, according to which d_C(w_S, w*_S) ≤ d_C(w, w*) for any w, w* ∈ C^D and for any subcomposition defined by a selection matrix S.

Proof.
It is sufficient to show that the compositional norm of a composition w is greater than or equal to the compositional norm of a subcomposition w_S obtained by removing one of its components; without loss of generality, assume that w_S = (w_1, ..., w_{D−1}), so that it holds that ||w_S||_C ≤ ||w||_C. The subcompositional dominance property of the Euclidean space C^D parallels the familiar property of the real space IR^D according to which the distance between the orthogonal projections of two points onto any subspace is never greater than the original distance between the points. In practical terms, this property also admits the following interpretation: given two observational vectors w_S and w*_S, if one adds supplementary components to both vectors to form, respectively, the vectors w and w*, then the difference between the new vectors must be at least equal to the difference between the initial vectors.
Palarea-Albaladejo, Martín-Fernández, and Soto (2012) present examples to illustrate that other usual distances, like the ordinary Euclidean or the angular distance, do not satisfy this property. As a consequence, when one applies these distances or computes related statistics (e.g., correlation coefficients), misleading results can be obtained.
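The dominance property, and its failure for the ordinary Euclidean distance between closed vectors, can be checked numerically. The example below is a small construction of our own, not one of the examples from the cited paper:

```python
import numpy as np

def closure(w):
    w = np.asarray(w, float)
    return w / w.sum()

def clr(w):
    w = np.asarray(w, float)
    return np.log(w) - np.mean(np.log(w))

def d_aitchison(w, w_star):
    return np.linalg.norm(clr(w) - clr(w_star))

w, w_star = np.array([0.1, 0.2, 0.7]), np.array([0.2, 0.1, 0.7])
ws, ws_star = closure(w[:2]), closure(w_star[:2])  # subcomposition of parts 1, 2

# The Aitchison distance is subcompositionally dominant:
print(d_aitchison(ws, ws_star) <= d_aitchison(w, w_star) + 1e-12)          # True
# The ordinary Euclidean distance between closed vectors is not:
print(np.linalg.norm(ws - ws_star) <= np.linalg.norm(w - w_star))          # False
```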
Since C^D is a real vector space of dimension D − 1, any composition w can be identified with its D − 1 coordinates relative to a basis of C^D. In practice, we can obtain a basis of C^D from a basis v_1, ..., v_{D−1} of the subspace V^D: the set expc(r_{V^D}^{-1}(v_1)), ..., expc(r_{V^D}^{-1}(v_{D−1})) is a basis of C^D, and the coordinates of a composition w relative to this basis coincide with the coordinates of r_{V^D}(log w) relative to v_1, ..., v_{D−1}.
Let v_1, ..., v_{D−1} be a basis of V^D, and let V be the D × (D − 1) matrix [v_1 : ... : v_{D−1}]. Then the coordinates of the composition w relative to the basis expc(r_{V^D}^{-1}(v_1)), ..., expc(r_{V^D}^{-1}(v_{D−1})) are the coordinates of clr(w) relative to the columns of V. Note that the expression of the coordinates of w will depend on the matrix V selected. These coordinates are usually known as logratio coordinates because they are always expressed in terms of logarithms of ratios of components. For example, for a suitable choice of V the coordinates of w are equal to (log(w_1/w_D), ..., log(w_{D−1}/w_D)); in this case, the logratio coordinates coincide with the additive logratio transformation (alr) introduced by Aitchison (1986).
When making a statistical analysis it is advisable to select an orthonormal basis of C^D, because metric properties are then preserved under a change of basis. This fact guarantees the invariance of the results under a change of basis. To select an orthonormal basis it suffices that the matrix V verifies the two identities V⊤V = I_{D−1} and VV⊤ = H_D. In this case, the mapping that assigns to the composition w its logratio coordinates is the isometric logratio transformation ilr_V relative to the matrix V (Egozcue, Pawlowsky-Glahn, Mateu-Figueras, and Barceló-Vidal 2003), that is, ilr_V(w) = V⊤ clr(w). In practice, it is very useful to select a basis that facilitates the interpretation of the logratio coordinates. Egozcue and Pawlowsky-Glahn (2005) describe a stepwise procedure to build an orthonormal basis of C^D from sequential binary partitions of the components of the observational vectors of IR^D_+.
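A matrix V satisfying both identities can be built, for instance, from Helmert-type contrasts. The construction below is one of many possible choices, of our own, and is not the sequential-binary-partition procedure of Egozcue and Pawlowsky-Glahn (2005); it yields ilr coordinates that preserve the compositional norm:

```python
import numpy as np

def helmert_basis(D):
    """A D x (D-1) matrix V with orthonormal columns spanning V^D
    (each column sums to zero), built from Helmert contrasts."""
    V = np.zeros((D, D - 1))
    for j in range(1, D):
        V[:j, j - 1] = 1.0 / np.sqrt(j * (j + 1))
        V[j, j - 1] = -j / np.sqrt(j * (j + 1))
    return V

def clr(w):
    w = np.asarray(w, float)
    return np.log(w) - np.mean(np.log(w))

def ilr(w, V):
    """Isometric logratio coordinates relative to the basis encoded by V."""
    return V.T @ clr(w)

D = 4
V = helmert_basis(D)
print(np.allclose(V.T @ V, np.eye(D - 1)))  # V'V = I_{D-1}: True
w = [0.1, 0.2, 0.3, 0.4]
# ilr is an isometry: its norm agrees with the compositional (clr) norm.
print(np.isclose(np.linalg.norm(ilr(w, V)), np.linalg.norm(clr(w))))  # True
```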

Final remarks and conclusions
Because all Euclidean spaces of the same dimension are isometric, the sample space of CoDa, C^D, is isometric to IR^{D−1}. This fact allows all the statistical procedures that we naturally apply in the real space IR^{D−1} to be applied to CoDa. The isomorphism presented in this article is based on the logarithmic function. From a theoretical point of view, other approaches could be possible but are unknown to us. With our approach, the compositional quotient space C^D has an algebraic and a metric structure induced by the isomorphism. Consequently, it suffices to work with the logratio coordinates of the compositions with respect to an orthonormal basis of C^D (Mateu-Figueras, Pawlowsky-Glahn, and Egozcue 2011). That is, our CoAn is in essence a logratio CoAn, that is, an analysis of CoDa based on the logarithm of the information provided by the ratios.
The fact that our analysis focuses on ratios means that it can be applied directly to the original data of IR^D_+, to the simplex S^D, to the strictly positive orthant of the unit hypersphere Sph^D_+, to the hyperbolic surface Hip^D_+ or to any other set of representatives. Moreover, when working with logratio coordinates all of the statistical procedures that are defined in IR^{D−1}, both descriptive and inferential, are transferred to the space C^D. The application of CoAn leads to the assumption that the group of perturbations is the operating group on the compositional space, in the same manner as we assume that the translations are the operating group in real space. This is the keystone of the methodology introduced by Aitchison (1986). In fact, it means accepting that the 'difference' between two compositions w = (w_1, ..., w_D) and w* = (w*_1, ..., w*_D) is based on the ratios w*_j/w_j between parts instead of on the arithmetic differences w*_j − w_j, in accordance with the 'relative scale' property. Therefore, for example, the distance between the compositions (0.980, 0.010, 0.010) and (0.970, 0.002, 0.028) is more than three times greater than the distance between (0.300, 0.200, 0.500) and (0.200, 0.300, 0.500). The relative scale property of CoAn justifies the choice of the logarithmic transformation to measure the difference between two compositions.
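The numerical claim above can be verified directly with the Aitchison distance (illustrative code of our own):

```python
import numpy as np

def clr(w):
    w = np.asarray(w, float)
    return np.log(w) - np.mean(np.log(w))

def d_aitchison(w, w_star):
    """Aitchison distance: Euclidean distance between clr vectors."""
    return np.linalg.norm(clr(w) - clr(w_star))

d1 = d_aitchison([0.980, 0.010, 0.010], [0.970, 0.002, 0.028])
d2 = d_aitchison([0.300, 0.200, 0.500], [0.200, 0.300, 0.500])
print(d1 / d2 > 3)  # True: the first pair is over three times farther apart
```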

CoAn applies only in the open orthant IR^D_+; that is, the components of the observational vectors must be strictly positive. This limitation is certainly a difficulty because observations often contain zeros. However, when the zeros are rounded zeros or count zeros they can be preprocessed using techniques inspired by those for missing data (Palarea-Albaladejo and Martín-Fernández 2015), which replace them by a small value. When the zero is an essential zero, that is, a true value, it makes no sense to replace it by a small value. In this case, the analysis should take into account the presence and absence of zeros, that is, the pattern of zeros. Both descriptive and inferential analyses should be performed among the groups defined by the pattern of zeros. Some researchers, for example Watson and Philip (1989), consider that the appropriate group to operate on compositions is the rotations on the sphere and not the perturbations on which the logratio CoAn is based. Watson and Philip (1989) represent a composition w by the components of the unit vector w/||w||, that is, by the cosines of the angles that w forms with the coordinate axes. Then, the angle formed by two observational vectors w and w* is taken as the appropriate measure from which to define the distance between the two compositions. Others, for example Wang et al.
(2007) and Scealy and Welsh (2011), also apply the methodology of Watson and Philip (1989) after applying the scale-invariant transformation w ↦ (w/(w_1 + ... + w_D))^{1/2} (component-wise square root) to the observations. Thus, they work with the components of the unit vector (w/(w_1 + ... + w_D))^{1/2} rather than the components of the vector w/||w||. From these approaches, which are based on the representation of the compositions on the positive orthant of the unit hypersphere centred at the origin, the authors apply the statistical analysis characteristic of directional statistics, based on the von Mises-Fisher distribution. As stated in the final discussion of Aitchison (1982), the problems of this approach derive from the fact that the von Mises-Fisher distribution is defined on the whole unit hypersphere and not only on the positive orthant. This leads to problems when the components of w are too close to 0. Aitchison (1982) also points out the difficulties that a CoAn based on the spherical representation of the compositions encounters when dealing with problems related to independence and regression. Neither is it possible from this representation to easily relate the statistics that describe a set of compositions w_1, ..., w_n of C^D to the statistics of the subcompositions w_{S,1}, ..., w_{S,n}.
To conclude, the most relevant results shown in this article are:
• A composition is an equivalence class and its sample space is the quotient space C^D. Geometrically, compositions are rays from the origin in the positive orthant IR^D_+. We refer to any analysis of these equivalence classes as compositional analysis (CoAn). Regardless of whether the logarithmic function or another transformation is used, when analysts decide to do a CoAn they are assuming that the sample space of the data is the compositional space C^D, which means in fact an acceptance of the 'scale invariance' principle of CoDA.
• The logarithmic and exponential transformations provide the space C^D with a Euclidean space structure. We call the compositional analysis developed from this structure of C^D logratio CoAn. It agrees with the methodology introduced by Aitchison (1982), based on a logratio relative scale for measuring the difference between two compositions.
• The logratio CoAn allows us to carry out the standard statistical analyses on the logratio coordinates.
• The logratio CoAn allows us to apply the subcompositional analysis in a natural and intuitive way, giving results which are coherent with those obtained from the whole compositions.
• The logratio CoAn has the drawback of being unable to operate directly with compositions with zero values. Applying preprocessing techniques to replace rounded and count zeros is then recommended. A statistical analysis in the presence of essential zeros must take into account the groups defined by the pattern of zeros.
• When the techniques for analysing directional data are restricted to compositions, they must be considered a CoAn. Even though these analyses do not have the problem of zeros, it is still impossible to guarantee that coherent results, that is, results strictly contained within the positive orthant, will always be obtained in inferential studies (e.g., confidence regions), because the sample space of these analyses is the whole sphere. Moreover, this approach does not guarantee that a subcompositional analysis will produce results that concur with the results of the analysis of the whole composition.

Figure 2: Geometrical interpretation in IR^3_+ of a subcomposition w_{12} of a composition w ∈ C^3. Filled circles are the observational vectors; empty circles their corresponding linear representatives.
where H_D is the D × D centering matrix, that is, H_D = I_D − D^{-1} J_D (I_D is the identity matrix of order D, and J_D = 1_D 1_D⊤). Definition 3.3. The sum of two classes z and z* in L^D is defined as the class of the vector sum z + z*, and the product of an equivalence class z by a constant α ∈ IR is defined as the class of αz.

Figure 3: Selection of the representative r_{V^D}(z) (empty circle) for an equivalence class z in L^2 = IR^2/≡. The dashed line is the hyperplane orthogonal to the vector 1_D.

Figure 4: Two equivalence classes z and z* of L^D, their corresponding representatives r_{V^D}(z) and r_{V^D}(z*), and the distance between them (case D = 3).
The logarithmic and exponential transformations between IR^D_+ and IR^D are compatible with the equivalence relations ∼ and ≡ defined on IR^D_+ and IR^D, respectively, i.e., w ∼ w* in IR^D_+ ⇐⇒ log w ≡ log w* in IR^D, and z ≡ z* in IR^D ⇐⇒ exp z ∼ exp z* in IR^D_+. Therefore, these transformations can be extended to the quotient spaces C^D and L^D.