Title: Linear optimal partial transport embedding
URL Source: https://arxiv.org/html/2302.03232
License: CC BY 4.0 · arXiv:2302.03232v5 [cs.LG] 23 Apr 2024

Yikun Bai, Ivan Medri, Rocio Diaz Martin, Rana Muhammad Shahroz Khan, Soheil Kolouri

Abstract
Optimal transport (OT) has gained popularity due to its various applications in fields such as machine learning, statistics, and signal processing. However, the balanced mass requirement limits its performance in practical problems. To address these limitations, variants of the OT problem, including unbalanced OT, optimal partial transport (OPT), and Hellinger–Kantorovich (HK), have been proposed. In this paper, we propose the Linear optimal partial transport (LOPT) embedding, which extends the (local) linearization technique from OT and HK to the OPT problem. The proposed embedding allows for faster computation of the OPT distance between pairs of positive measures. Besides our theoretical contributions, we demonstrate the LOPT embedding technique in point-cloud interpolation and PCA analysis. Our code is available at https://github.com/Baio0/LinearOPT.
1 Introduction
The Optimal Transport (OT) problem has found numerous applications in machine learning (ML), computer vision, and graphics. The probability metrics and dissimilarity measures emerging from OT theory, e.g., Wasserstein distances and their variations, are used in diverse applications, including training generative models [3, 24, 33], domain adaptation [16, 15], Bayesian inference [28], regression [26], clustering [42], learning from graphs [29] and point sets [35, 36], to name a few. These metrics define a powerful geometry for comparing probability measures with numerous desirable properties, for instance, parameterized geodesics [2], barycenters [18], and a weak Riemannian structure [40].
In large-scale machine learning applications, optimal transport approaches face two main challenges. First, the OT problem is computationally expensive. This has motivated many approximations that lead to significant speedups [17, 13, 39]. Second, while OT is designed for comparing probability measures, many ML problems require comparing non-negative measures with varying total amounts of mass. This has led to the recent advances in unbalanced optimal transport [14, 12, 32] and optimal partial transport [8, 20, 21]. Such unbalanced/partial optimal transport formulations have been recently used to improve minibatch optimal transport [37] and for point-cloud registration [4].
Comparing $K$ (probability) measures requires the pairwise calculation of transport-based distances, which, despite the significant recent computational speed-ups, remains relatively expensive. To address this problem, [41] proposed the Linear Optimal Transport (LOT) framework, which linearizes the 2-Wasserstein distance utilizing its weak Riemannian structure. In short, the probability measures are embedded into the tangent space at a fixed reference measure (e.g., the measures' Wasserstein barycenter) through a logarithmic map. The Euclidean distances between the embedded measures then approximate the 2-Wasserstein distance between the probability measures. The LOT framework is computationally attractive as it only requires the computation of one optimal transport problem per input measure, reducing the otherwise quadratic cost to linear. Moreover, the framework provides theoretical guarantees on convexifying certain sets of probability measures [34, 1], which is critical in supervised and unsupervised learning from sets of probability measures. The LOT embedding has recently found diverse applications, from comparing collider events in physics [9] and comparing medical images [5, 30] to permutation-invariant pooling for comparing graphs [29] and point sets [35].
Many applications in ML involve comparing non-negative measures (often empirical measures) with varying total amounts of mass, e.g., domain adaptation [19]. Moreover, OT distances (or dissimilarity measures) are often not robust against outliers and noise, resulting in potentially high transportation costs for outliers. Many recent publications have focused on variants of the OT problem that allow for comparing non-negative measures with unequal mass. For instance, the optimal partial transport (OPT) problem [8, 20, 21], the Kantorovich–Rubinstein norm [25, 31], and the Hellinger–Kantorovich distance [11, 32]. These methods fall under the broad category of "unbalanced optimal transport" [12, 32]. The existing solvers for unbalanced optimal transport problems are generally as expensive as, or more expensive than, the OT solvers. Hence, computation time remains a main bottleneck of these approaches.
To reduce the computational burden of comparing unbalanced measures, [10] proposed a clever extension of the LOT framework to unbalanced nonnegative measures by linearizing the Hellinger–Kantorovich (HK) distance, denoted Linearized Hellinger–Kantorovich (LHK), with many desirable theoretical properties. However, an unintuitive caveat of the HK and LHK formulations is that the geodesic for the transported portion of the mass does not resemble the OT geodesic. In particular, the transported mass does not maintain a constant mass as it is transported (please see Figure 1). In contrast, OPT behaves exactly like OT for the transported mass, with the trade-off of losing the Riemannian structure of HK.
Contributions: In this paper, inspired by OT geodesics, we provide an OPT interpolation technique using its dynamic formulation and explain how to compute it for empirical distributions using barycentric projections. We use this interpolation to embed the space of measures in a Euclidean space via optimal partial transport with respect to a reference measure. This allows us to extend the LOT framework to LOPT, a linearized version of OPT. Thus, we reduce the computational burden of OPT while maintaining OPT's decoupling between noise (created and destroyed mass) and signal (transported mass). We propose a LOPT discrepancy measure and a LOPT interpolating curve and contrast them with their OPT counterparts. Finally, we demonstrate applications of the new framework in point cloud interpolation and PCA analysis, showing that the new technique is more robust to noise.
Organization: In Section 2, we review Optimal Transport theory and the Linear Optimal Transport framework to set the basis and intuitions on which we build our new techniques. In Section 3 we review Optimal Partial Transport theory and present an explicit solution to its dynamic formulation, which we use to introduce the Linear Optimal Partial Transport (LOPT) framework. We define the LOPT embedding, LOPT discrepancy, and LOPT interpolation, and give explicit ways to work with empirical data. In Section 4 we show applications of the LOPT framework to approximate OPT distances, to interpolate between point cloud datasets, and to preprocess data for PCA analysis. In the appendix, we provide proofs for all the results, new or old, for which we could not find a proof in the literature.
Figure 1: Depiction of the HK and OPT geodesics between two measures at times $t \in \{0, 0.25, 0.5, 0.75, 1\}$. The top row (blue) represents two initial deltas of mass one located at positions $-1.2$ and $-1$. The bottom row (purple) shows two final deltas of mass one located at $1$ and $1.2$. At the intermediate times $t = 0.25, 0.5, 0.75$, the transported part (the middle delta moving from $-1$ to $1$) changes mass for HK, while its mass remains constant for OPT. The outer masses (located at $-1.2$ at the initial time $t = 0$ and at $1.2$ at the final time $t = 1$) are being destroyed and created, so mass changes are expected. Notably, mass is created/destroyed at a linear rate for OPT and at a nonlinear rate for HK. See Appendix H.4 for further analysis.

2 Background: OT and LOT

2.1 Static Formulation of Optimal Transport
Let $\mathcal{P}(\Omega)$ be the set of Borel probability measures defined on a convex compact subset $\Omega$ of $\mathbb{R}^d$, and consider $\mu^0, \mu^1 \in \mathcal{P}(\Omega)$. The Optimal Transport (OT) problem between $\mu^0$ and $\mu^1$ is that of finding the cheapest way to transport all the mass distributed according to the reference measure $\mu^0$ onto a new distribution of mass determined by the target measure $\mu^1$. Mathematically, it was stated by Kantorovich as the minimization problem

$$OT(\mu^0, \mu^1) := \inf_{\gamma \in \Gamma(\mu^0, \mu^1)} C(\gamma; \mu^0, \mu^1) \tag{1}$$

for

$$C(\gamma; \mu^0, \mu^1) := \int_{\Omega^2} \|x_0 - x_1\|^2 \, d\gamma(x_0, x_1), \tag{2}$$

where $\Gamma(\mu^0, \mu^1)$ is the set of all joint probability measures on $\Omega^2$ with marginals $\mu^0$ and $\mu^1$. A measure $\gamma \in \Gamma(\mu^0, \mu^1)$ is called a transportation plan, and given measurable sets $A, B \subseteq \Omega$, the coupling $\gamma(A \times B)$ describes how much mass originally in the set $A$ is transported into the set $B$. The square of the Euclidean distance, $\|x_0 - x_1\|^2$, is interpreted as the cost of transporting a unit mass located at $x_0$ to $x_1$. Therefore, $C(\gamma; \mu^0, \mu^1)$ represents the total cost of moving $\mu^0$ to $\mu^1$ according to $\gamma$. Finally, we will denote the set of all plans that achieve the infimum in (1), which is non-empty [40], by $\Gamma^*(\mu^0, \mu^1)$.
Under certain conditions (e.g., when $\mu^0$ has a continuous density), an optimal plan $\gamma$ can be induced by a rule/map $T$ that takes all the mass at each position $x$ to a unique point $T(x)$. If that is the case, we say that $\gamma$ does not split mass and that it is induced by the map $T$. In fact, $\gamma$ is concentrated on the graph of $T$ in the sense that for all measurable sets $A, B \subseteq \Omega$, $\gamma(A \times B) = \mu^0(\{x \in A : T(x) \in B\})$, and we will write it as the pushforward $\gamma = (\mathrm{id} \times T)_\# \mu^0$. Hence, (1) reads as

$$OT(\mu^0, \mu^1) = \int_\Omega \|x - T(x)\|^2 \, d\mu^0(x). \tag{3}$$

The function $T : \Omega \to \Omega$ is called a Monge map, and when $\mu^0$ is absolutely continuous it is unique [7].
Finally, the square root of the optimal value $OT(\cdot, \cdot)$ is exactly the so-called Wasserstein distance, $W_2$, on $\mathcal{P}(\Omega)$ [40, Th. 7.3], and we will also call $OT(\cdot,\cdot)$ the OT squared distance. In addition, with this distance, $\mathcal{P}(\Omega)$ is not only a metric space but also a Riemannian manifold [40]. In particular, the tangent space at any $\mu \in \mathcal{P}(\Omega)$ is $\mathcal{T}_\mu = L^2(\Omega; \mathbb{R}^d, \mu) = \{u : \Omega \to \mathbb{R}^d : \|u\|_\mu^2 < \infty\}$, where

$$\|u\|_\mu^2 := \int_\Omega \|u(x)\|^2 \, d\mu(x). \tag{4}$$

2.2 Dynamic Formulation of Optimal Transport
To understand the framework of Linear Optimal Transport (LOT) we will use the dynamic formulation of the OT problem. Optimal plans and maps can be viewed as a static way of matching two distributions. They tell us where each mass in the initial distribution should end, but they do not tell the full story of how the system evolves from initial to final configurations.
In the dynamic formulation, we consider a curve of measures $\rho \in \mathcal{P}([0,1] \times \Omega)$ parametrized by time that describes the distribution of mass $\rho_t := \rho(t, \cdot) \in \mathcal{P}(\Omega)$ at each instant $0 \le t \le 1$. We will require the curve to be sufficiently smooth, to have boundary conditions $\rho_0 = \mu^0$, $\rho_1 = \mu^1$, and to satisfy the conservation of mass law. Then, it is well known that there exists a velocity vector field $v_t := v(t, \cdot)$ such that $\rho_t$ satisfies the continuity equation with boundary conditions

$$\partial_t \rho + \nabla \cdot (\rho v) = 0, \qquad \rho_0 = \mu^0, \quad \rho_1 = \mu^1. \tag{5}$$

The length of the curve can be stated as $\int_{[0,1] \times \Omega} \|v\|^2 \, d\rho := \int_0^1 \|v_t\|_{\rho_t}^2 \, dt$, for $\|\cdot\|_{\rho_t}$ as in (4), and $OT(\mu^0, \mu^1)$ coincides with the length of the shortest curve between $\mu^0$ and $\mu^1$ [6]. Hence, the dynamic formulation of the OT problem (1) reads as

$$OT(\mu^0, \mu^1) = \inf_{(\rho, v) \in \mathcal{CE}(\mu^0, \mu^1)} \int_{[0,1] \times \Omega} \|v\|^2 \, d\rho, \tag{6}$$

where $\mathcal{CE}(\mu^0, \mu^1)$ is the set of pairs $(\rho, v)$, with $\rho \in \mathcal{P}([0,1] \times \Omega)$ and $v : [0,1] \times \Omega \to \mathbb{R}^d$, satisfying (5).
Under the assumption of existence of an optimal Monge map $T$, an optimal solution for (6) can be given explicitly and is quite intuitive. If a particle starts at position $x$ and finishes at position $T(x)$, then for $0 < t < 1$ it will be at the point

$$T_t(x) := (1 - t)\, x + t\, T(x). \tag{7}$$

Then, varying both the time $t$ and $x \in \Omega$, the mapping (7) can be interpreted as a flow whose time velocity is

$$v_t(x) = T(x_0) - x_0, \qquad \text{for } x = T_t(x_0). \tag{8}$$

To obtain the curve of probability measures $\rho_t$, one can evolve $\mu^0$ through the flow $T_t$ using the formula $\rho_t(A) = \mu^0(T_t^{-1}(A))$ for any measurable set $A$. That is, $\rho_t$ is the push-forward of $\mu^0$ by $T_t$:

$$\rho_t = (T_t)_\# \mu^0, \qquad 0 \le t \le 1. \tag{9}$$
The pair $(\rho, v)$ defined by (9) and (8) satisfies the continuity equation (5) and solves (6). Moreover, the curve $\rho_t$ is a constant-speed geodesic in $\mathcal{P}(\Omega)$ between $\mu^0$ and $\mu^1$ [22]; i.e., since $OT$ is the squared distance, it satisfies for all $0 \le s \le t \le 1$

$$OT(\rho_s, \rho_t) = (t - s)^2 \, OT(\rho_0, \rho_1). \tag{10}$$
A confirmation of this comes from comparing the OT cost (3) with (8), obtaining

$$OT(\mu^0, \mu^1) = \int_\Omega \|v_0(x)\|^2 \, d\mu^0(x), \tag{11}$$

which tells us that we only need the speed at the initial time to compute the total length of the curve. Moreover, $OT(\mu^0, \mu^1)$ coincides with the squared norm of the tangent vector $v_0$ in the tangent space $\mathcal{T}_{\mu^0}$ of $\mathcal{P}(\Omega)$ at $\mu^0$.
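As an illustration (not from the paper; function name ours), the displacement interpolation (7)–(9) can be computed exactly for two uniform point clouds of equal size, since the Monge map then reduces to an optimal assignment:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def displacement_interpolation(X0, X1, t):
    """rho_t = (T_t)_# mu^0 for two uniform point clouds of equal size:
    solve the assignment problem for the Monge map T, then move each
    point along the straight line T_t(x) = (1 - t) * x + t * T(x)."""
    C = ((X0[:, None, :] - X1[None, :, :]) ** 2).sum(-1)  # pairwise squared costs
    rows, cols = linear_sum_assignment(C)                 # exact OT for uniform weights
    T = X1[cols]                                          # rows come back sorted, so cols aligns with X0
    return (1 - t) * X0 + t * T

X0 = np.array([[0.0, 0.0], [1.0, 0.0]])
X1 = np.array([[0.0, 2.0], [1.0, 2.0]])
mid = displacement_interpolation(X0, X1, 0.5)  # each point halfway along its line
```

Evaluating the same function at a grid of times $t$ traces out the constant-speed geodesic (9) between the two empirical measures.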
2.3 Linear Optimal Transport Embedding
Inspired by the induced Riemannian geometry of the $OT$ squared distance, [41] proposed the so-called Linear Optimal Transportation (LOT) framework. Given two target measures $\mu^i, \mu^j$, the main idea relies on considering a reference measure $\mu^0$ and embedding these target measures into the tangent space $\mathcal{T}_{\mu^0}$. This is done by identifying each measure $\mu^i$ with the curve (9) minimizing $OT(\mu^0, \mu^i)$ and computing its velocity (tangent vector) at $t = 0$ using (8).
Formally, let us fix a continuous probability reference measure $\mu^0$. Then, the LOT embedding [34] is defined as

$$\mu^i \mapsto u^i := T^i - \mathrm{id}, \qquad \mu^i \in \mathcal{P}(\Omega), \tag{12}$$

where $T^i$ is the optimal Monge map between $\mu^0$ and $\mu^i$. Notice that by (3), (4) and (11) we have

$$\|u^i\|_{\mu^0}^2 = OT(\mu^0, \mu^i). \tag{13}$$

After this embedding, one can use the distance in $\mathcal{T}_{\mu^0}$ between the projected measures to define a new distance on $\mathcal{P}(\Omega)$ that can be used to approximate $OT(\mu^i, \mu^j)$. The LOT squared distance is defined as

$$LOT_{\mu^0}(\mu^i, \mu^j) := \|u^i - u^j\|_{\mu^0}^2. \tag{14}$$

2.4 LOT in the Discrete Setting
For discrete probability measures $\mu^0, \mu^1$ of the form

$$\mu^0 = \sum_{i=1}^{N_0} p_i^0 \, \delta_{x_i^0}, \qquad \mu^1 = \sum_{j=1}^{N_1} p_j^1 \, \delta_{x_j^1}, \tag{15}$$

a Monge map $T^1$ for $OT(\mu^0, \mu^1)$ may not exist. Following [41], in this setting the target measure $\mu^1$ can be replaced by a new measure $\hat\mu^1$ for which an optimal transport Monge map exists. For that, given an optimal plan $\gamma^1 \in \Gamma^*(\mu^0, \mu^1)$, it can be viewed as an $N_0 \times N_1$ matrix whose value at position $(i, j)$ represents how much mass from $x_i^0$ should be taken to $x_j^1$. Then, we define the OT barycentric projection of $\mu^1$ with respect to $\mu^0$ as

$$\hat\mu^1 := \sum_{i=1}^{N_0} p_i^0 \, \delta_{\hat x_i^1}, \qquad \text{where} \quad \hat x_i^1 := \frac{1}{p_i^0} \sum_{j=1}^{N_1} \gamma_{i,j}^1 \, x_j^1. \tag{16}$$

The new measure $\hat\mu^1$ is regarded as an $N_0$-point representation of the target measure $\mu^1$. The following lemma guarantees the existence of a Monge map between $\mu^0$ and $\hat\mu^1$.
Lemma 2.1.
Let $\mu^0$ and $\mu^1$ be two discrete probability measures as in (15), and consider an OT barycentric projection $\hat\mu^1$ of $\mu^1$ with respect to $\mu^0$ as in (16). Then, the map $x_i^0 \mapsto \hat x_i^1$ given by (16) solves the OT problem $OT(\mu^0, \hat\mu^1)$.

It is easy to show that if the optimal transport plan $\gamma^1$ is induced by a Monge map, then $\hat\mu^1 = \mu^1$. As a consequence, the OT barycentric projection is an actual projection in the sense that it is idempotent.
Similar to the continuous case (12), given a discrete reference measure $\mu^0$, we can define the LOT embedding for a discrete measure $\mu^i$ as the rule

$$\mu^i \mapsto u^i := \left[ (\hat x_1^i - x_1^0), \dots, (\hat x_{N_0}^i - x_{N_0}^0) \right]. \tag{17}$$

The range $\mathcal{T}_{\mu^0}$ of this map is identified with $\mathbb{R}^{d \times N_0}$ endowed with the norm $\|u\|_{\mu^0}^2 := \sum_{i=1}^{N_0} \|u(i)\|^2 \, p_i^0$, where $u(i) \in \mathbb{R}^d$ denotes the $i$th entry of $u$. We call $(\mathbb{R}^{d \times N_0}, \|\cdot\|_{\mu^0})$ the embedding space.
By the discussion above, if the optimal plan $\gamma^i$ for the problem $OT(\mu^0, \mu^i)$ is induced by a Monge map, then the discrete embedding is consistent with (13) in the sense that

$$\|u^i\|_{\mu^0}^2 = OT(\mu^0, \hat\mu^i) = OT(\mu^0, \mu^i). \tag{18}$$

Hence, as in Section 2.3, we can use the distance between embedded measures in $(\mathbb{R}^{d \times N_0}, \|\cdot\|_{\mu^0})$ to define a discrepancy on the space of discrete probability measures that can be used to approximate $OT(\mu^i, \mu^j)$. The LOT discrepancy is defined as

$$LOT_{\mu^0}(\mu^i, \mu^j) := \|u^i - u^j\|_{\mu^0}^2. \tag{19}$$

We call it a discrepancy because it is not a squared metric between discrete measures: it does not necessarily satisfy $LOT(\mu^i, \mu^j) \neq 0$ for every pair of distinct $\mu^i, \mu^j$. Nevertheless, $\|u^i - u^j\|_{\mu^0}$ is a metric on the embedding space.
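Concretely, (16), (17) and (19) take only a few lines of NumPy/SciPy. The sketch below (function names ours, illustration only) computes exact plans for uniform point clouds of equal size via the assignment problem, embeds two targets against a common reference, and compares them in the embedding space; in this toy example the discrepancy happens to match $OT(\mu^a, \mu^b)$ exactly:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def ot_plan_uniform(X0, X1):
    """Exact OT plan between two uniform point clouds of equal size N
    (a permutation matrix divided by N), via the assignment problem."""
    N = len(X0)
    C = ((X0[:, None, :] - X1[None, :, :]) ** 2).sum(-1)
    r, c = linear_sum_assignment(C)
    G = np.zeros((N, N))
    G[r, c] = 1.0 / N
    return G

def lot_embed(G, X0, X1, p0):
    """LOT embedding via the barycentric projection (16): u(i) = xhat_i - x_i^0."""
    Xhat = (G @ X1) / p0[:, None]   # xhat_i = (1/p_i^0) sum_j gamma_ij x_j^1
    return Xhat - X0                # tangent vector at the reference

def lot_discrepancy(u_i, u_j, p0):
    """LOT discrepancy (19): squared p0-weighted norm of u_i - u_j."""
    return float((p0 * ((u_i - u_j) ** 2).sum(-1)).sum())

X0 = np.array([[0.0], [1.0]]); p0 = np.array([0.5, 0.5])   # reference
Xa = np.array([[2.0], [3.0]]); Xb = np.array([[0.0], [1.0]])
ua = lot_embed(ot_plan_uniform(X0, Xa), X0, Xa, p0)
ub = lot_embed(ot_plan_uniform(X0, Xb), X0, Xb, p0)
d = lot_discrepancy(ua, ub, p0)   # 0.5*4 + 0.5*4 = 4.0
```

Note the linear scaling advertised above: each target needs one plan against the reference, after which all pairwise discrepancies are plain vector arithmetic.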
2.5 OT and LOT Geodesics in Discrete Settings
Let $\mu^i, \mu^j$ be discrete probability measures as in (15) (with "$i$", "$j$" in place of $0$, $1$). If an optimal Monge map $T$ for $OT(\mu^i, \mu^j)$ exists, a constant-speed geodesic $\rho_t$ between $\mu^i$ and $\mu^j$, for the OT squared distance, can be found by mimicking (9). Explicitly, with $T_t$ as in (7),

$$\rho_t = (T_t)_\# \mu^i = \sum_{k=1}^{N_i} p_k^i \, \delta_{(1-t)\, x_k^i + t\, T(x_k^i)}. \tag{20}$$

In practice, one replaces $\mu^j$ by its OT barycentric projection with respect to $\mu^i$ (so that the existence of an optimal Monge map is guaranteed by Lemma 2.1).
Now, given a discrete reference $\mu^0$, the LOT discrepancy provides a new structure on the space of discrete probability measures. Therefore, we can provide a substitute for the OT geodesic (20) between $\mu^i$ and $\mu^j$. Assume we have the embeddings $\mu^i \mapsto u^i$, $\mu^j \mapsto u^j$ as in (17). The geodesic between $u^i$ and $u^j$ in the LOT embedding space $\mathbb{R}^{d \times N_0}$ has the simple form $u_t = (1-t)\, u^i + t\, u^j$. This corresponds to the curve $\hat\rho_t$ in $\mathcal{P}(\Omega)$ induced by the map $\hat T : \hat x_k^i \mapsto \hat x_k^j$ as

$$\hat\rho_t := (\hat T_t)_\# \hat\mu^i = \sum_{k=1}^{N_0} p_k^0 \, \delta_{x_k^0 + u_t(k)}. \tag{21}$$

By abuse of notation, we call this curve the LOT geodesic between $\mu^i$ and $\mu^j$. Nevertheless, it is a geodesic between their barycentric projections, since it satisfies the following.
Proposition 2.2.
Let $\hat\rho_t$ be defined as in (21), with $\hat\rho_0 = \hat\mu^i$ and $\hat\rho_1 = \hat\mu^j$. Then, since $LOT_{\mu^0}$ is a squared quantity, for all $0 \le s \le t \le 1$,

$$LOT_{\mu^0}(\hat\rho_s, \hat\rho_t) = (t - s)^2 \, LOT_{\mu^0}(\hat\rho_0, \hat\rho_1). \tag{22}$$

3 Linear Optimal Partial Transport Embedding

3.1 Static Formulation of Optimal Partial Transport
In addition to mass transportation, the OPT problem allows mass destruction at the source and mass creation at the target. Let $\mathcal{M}_+(\Omega)$ denote the set of all positive finite Borel measures defined on $\Omega$. For $\lambda \ge 0$, the OPT problem between $\mu^0, \mu^1 \in \mathcal{M}_+(\Omega)$ can be formulated as

$$OPT_\lambda(\mu^0, \mu^1) := \inf_{\gamma \in \Gamma_\le(\mu^0, \mu^1)} C(\gamma; \mu^0, \mu^1, \lambda) \tag{23}$$

for

$$C(\gamma; \mu^0, \mu^1, \lambda) := \int_{\Omega^2} \|x_0 - x_1\|^2 \, d\gamma(x_0, x_1) + \lambda \left( |\mu^0 - \gamma_0| + |\mu^1 - \gamma_1| \right), \tag{24}$$

where $|\mu^0 - \gamma_0|$ is the total mass of $\mu^0 - \gamma_0$ (resp. $|\mu^1 - \gamma_1|$), and $\Gamma_\le(\mu^0, \mu^1)$ denotes the set of all measures on $\Omega^2$ whose marginals $\gamma_0$ and $\gamma_1$ satisfy $\gamma_0 \le \mu^0$ (i.e., $\gamma_0(E) \le \mu^0(E)$ for every measurable set $E$) and $\gamma_1 \le \mu^1$. Here, the mass destruction and creation penalty is linear, parametrized by $\lambda$. The set of minimizers $\Gamma_\le^*(\mu^0, \mu^1)$ of (23) is non-empty [20]. One can further restrict $\Gamma_\le(\mu^0, \mu^1)$ to the set of partial transport plans $\gamma$ such that $\|x_0 - x_1\|^2 < 2\lambda$ for all $(x_0, x_1) \in \operatorname{supp}(\gamma)$ [4, Lemma 3.2]. This means that if the usual transportation cost is greater than $2\lambda$, it is better to create/destroy mass.
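For discrete measures, (23)–(24) is a finite-dimensional linear program: expanding the penalty gives $C(\gamma) = \sum_{ij} (c_{ij} - 2\lambda)\gamma_{ij} + \lambda(|\mu^0| + |\mu^1|)$, so only the linear term needs to be minimized under the marginal inequality constraints. The following is a minimal SciPy sketch (ours, for illustration; it builds dense constraint matrices and is only practical for small problems):

```python
import numpy as np
from scipy.optimize import linprog

def opt_partial(p0, X0, p1, X1, lam):
    """Solve discrete OPT (23)-(24) as a linear program: minimize
    sum_ij (c_ij - 2*lam) gamma_ij subject to gamma >= 0,
    gamma @ 1 <= p0, gamma.T @ 1 <= p1, then add back lam*(|p0| + |p1|)."""
    N0, N1 = len(p0), len(p1)
    C = ((X0[:, None, :] - X1[None, :, :]) ** 2).sum(-1)  # squared distances
    A_rows = np.kron(np.eye(N0), np.ones(N1))             # row sums of flattened gamma
    A_cols = np.kron(np.ones(N0), np.eye(N1))             # column sums
    res = linprog((C - 2 * lam).ravel(),
                  A_ub=np.vstack([A_rows, A_cols]),
                  b_ub=np.concatenate([p0, p1]),
                  bounds=(0, None))
    gamma = res.x.reshape(N0, N1)
    cost = (C * gamma).sum() + lam * (p0.sum() + p1.sum() - 2 * gamma.sum())
    return cost, gamma

# two unit deltas: mass is transported iff ||x0 - x1||^2 < 2*lam
far, _ = opt_partial(np.array([1.0]), np.array([[0.0]]),
                     np.array([1.0]), np.array([[10.0]]), lam=1.0)  # destroy/create: 2*lam
near, _ = opt_partial(np.array([1.0]), np.array([[0.0]]),
                      np.array([1.0]), np.array([[0.5]]), lam=1.0)  # transport: 0.25
```

The two calls exhibit the $2\lambda$ threshold discussed above: the distant pair is cheaper to destroy and recreate, while the nearby pair is transported.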
3.2 Dynamic Formulation of Optimal Partial Transport
Adding a forcing term $f$ to the continuity equation (5), one can take into account curves that allow creation and destruction of mass, i.e., those that break the conservation of mass law. Thus, it is natural that the minimization problem (23) can be rewritten [12, Th. 5.2] in a dynamic formulation as

$$OPT_\lambda(\mu^0, \mu^1) = \inf_{(\rho, v, f) \in \mathcal{FCE}(\mu^0, \mu^1)} \int_{[0,1] \times \Omega} \|v\|^2 \, d\rho + \lambda |f|, \tag{25}$$

where $\mathcal{FCE}(\mu^0, \mu^1)$ is the set of tuples $(\rho, v, f)$ such that $\rho \in \mathcal{M}_+([0,1] \times \Omega)$, $f \in \mathcal{M}([0,1] \times \Omega)$ (where $\mathcal{M}$ stands for signed measures), and $v : [0,1] \times \Omega \to \mathbb{R}^d$, satisfying

$$\partial_t \rho + \nabla \cdot (\rho v) = f, \qquad \rho_0 = \mu^0, \quad \rho_1 = \mu^1. \tag{26}$$
As in the case of OT, under certain conditions on the minimizers $\gamma$ of (23), one curve $\rho_t$ that minimizes the dynamic formulation (25) is quite intuitive. We show in the next proposition that it consists of three parts: $\gamma_t$, $(1-t)\nu^0$, and $t\nu^1$ (see (27), (28), and (29) below). The first is a curve that only transports mass, while the second and third destroy and create mass at constant rates $|\nu^0|$, $|\nu^1|$, respectively.
Proposition 3.1.
Let $\gamma^* \in \Gamma_\le^*(\mu^0, \mu^1)$ be of the form $\gamma^* = (\mathrm{id} \times T)_\# \gamma_0^*$ for a (measurable) map $T : \Omega \to \Omega$. Let

$$\nu^0 := \mu^0 - \gamma_0^*, \qquad \nu^1 := \mu^1 - \gamma_1^*, \tag{27}$$

$$T_t(x) := (1 - t)\, x + t\, T(x), \qquad \gamma_t := (T_t)_\# \gamma_0^*. \tag{28}$$

Then, an optimal solution $(\rho, v, f)$ for (25) is given by

$$\rho_t := \gamma_t + (1 - t)\, \nu^0 + t\, \nu^1, \tag{29}$$

$$v_t(x) := T(x_0) - x_0, \qquad \text{if } x = T_t(x_0), \tag{30}$$

$$f_t := \nu^1 - \nu^0. \tag{31}$$

Moreover, plugging $(\rho, v, f)$ into (25), it holds that

$$OPT_\lambda(\mu^0, \mu^1) = \|v_0\|_{\gamma_0^*, 2\lambda}^2 + \lambda \left( |\nu^0| + |\nu^1| \right), \tag{32}$$

where $v_0(x) = T(x) - x$ (i.e., $v_t$ at time $t = 0$), and

$$\|v\|_{\nu, 2\lambda}^2 := \int_\Omega \min\left( \|v\|^2, 2\lambda \right) d\nu, \qquad \text{for } v : \Omega \to \mathbb{R}^d.$$

In analogy to the OT squared distance, we also call the optimal partial cost (32) the OPT squared distance.
3.3 Linear Optimal Partial Transport Embedding

Definition 3.2.
Let $\mu^0, \mu^1 \in \mathcal{M}_+(\Omega)$ be such that $OPT_\lambda(\mu^0, \mu^1)$ is solved by a plan induced by a map. The LOPT embedding of $\mu^1$ with respect to $\mu^0$ is defined as

$$\mu^1 \mapsto (u^1, \bar\nu^1, \nu^1) := (v_0, \gamma_0^*, \nu^1), \tag{33}$$

where $v_0$, $\gamma_0^*$, $\nu^1$ are defined as in Proposition 3.1.

Let us compare the LOPT (33) and LOT (12) embeddings. The first component $v_0$ represents the tangent of the curve that transports mass from the reference to the target; this is exactly the same as in the LOT embedding. In contrast to LOT, the second component $\gamma_0^*$ is necessary because we need to specify which part of the reference is being transported. The third component $\nu^1$ can be thought of as the tangent vector of the part that creates mass. There is no need to store the destroyed mass, because it can be inferred from the other quantities.

Now, let $\mu^0 \wedge \mu^1$ denote the minimum measure between $\mu^0$ and $\mu^1$. By the above definition, $\mu^0 \mapsto (u^0, \bar\nu^0, \nu^0) = (0, \mu^0, 0)$. Therefore, (32) can be rewritten as

$$OPT_\lambda(\mu^0, \mu^1) = \|u^0 - u^1\|_{\bar\nu^0 \wedge \bar\nu^1, 2\lambda}^2 + \lambda \left( |\bar\nu^0 - \bar\nu^1| + |\nu^0| + |\nu^1| \right). \tag{34}$$

This motivates the definition of the LOPT discrepancy.
Definition 3.3.
Consider a reference $\mu^0 \in \mathcal{M}_+(\Omega)$ and target measures $\mu^i, \mu^j \in \mathcal{M}_+(\Omega)$ such that $OPT_\lambda(\mu^0, \mu^i)$ and $OPT_\lambda(\mu^0, \mu^j)$ can be solved by plans induced by maps, as in the hypothesis of Proposition 3.1. Let $(u^i, \bar\nu^i, \nu^i)$ and $(u^j, \bar\nu^j, \nu^j)$ be the LOPT embeddings of $\mu^i$ and $\mu^j$ with respect to $\mu^0$. The LOPT discrepancy between $\mu^i$ and $\mu^j$ with respect to $\mu^0$ is defined as

$$LOPT_{\mu^0, \lambda}(\mu^i, \mu^j) := \|u^i - u^j\|_{\bar\nu^i \wedge \bar\nu^j, 2\lambda}^2 + \lambda \left( |\bar\nu^i - \bar\nu^j| + |\nu^i| + |\nu^j| \right). \tag{35}$$
Similar to the LOT framework, by equation (34), LOPT recovers OPT when $\mu^i = \mu^0$. That is,

$$LOPT_{\mu^0, \lambda}(\mu^0, \mu^1) = OPT_\lambda(\mu^0, \mu^1).$$
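In the discrete setting used below, once two measures have been embedded against a common reference as pairs $(u, \hat p)$ (so that their third components vanish, cf. (40)), the discrepancy (35) reduces to a $2\lambda$-truncated weighted distance plus a mass-mismatch penalty. A minimal NumPy sketch (function name ours, illustration only):

```python
import numpy as np

def lopt_discrepancy(u_i, p_i, u_j, p_j, lam):
    """LOPT discrepancy (35) for discrete embeddings (u, p_hat) over a
    common reference: a 2*lam-truncated transport term weighted by the
    minimum mass p_i ^ p_j, plus lam times the mass mismatch."""
    w = np.minimum(p_i, p_j)                         # p_hat_i ^ p_hat_j
    sq = ((u_i - u_j) ** 2).sum(-1)                  # ||u_i(k) - u_j(k)||^2
    transport = (w * np.minimum(sq, 2 * lam)).sum()  # truncated transport cost
    penalty = lam * np.abs(p_i - p_j).sum()          # created/destroyed mass
    return float(transport + penalty)

u_i = np.array([[2.0], [2.0]]); p_i = np.array([0.5, 0.5])
u_j = np.array([[0.0], [0.0]]); p_j = np.array([0.5, 0.5])
d = lopt_discrepancy(u_i, p_i, u_j, p_j, lam=1.0)  # min(4, 2*lam) per point -> 2.0
```

Note how the truncation at $2\lambda$ caps the contribution of any pair of embedded points, mirroring the static rule that far-apart mass is created/destroyed rather than transported.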
3.4 LOPT in the Discrete Setting
If $\mu^0, \mu^1$ are discrete non-negative measures of sizes $N_0, N_1$ as in (15) (but not necessarily with total mass 1), the OPT problem (23) can be written as

$$\min_{\gamma \in \Gamma_\le(\mu^0, \mu^1)} \sum_{i,j} \|x_i^0 - x_j^1\|^2 \, \gamma_{i,j} + \lambda \left( |p^0| + |p^1| - 2|\gamma| \right),$$

where the set $\Gamma_\le(\mu^0, \mu^1)$ can be viewed as the subset of $N_0 \times N_1$ matrices with non-negative entries

$$\Gamma_\le(\mu^0, \mu^1) := \{ \gamma \in \mathbb{R}_+^{N_0 \times N_1} : \gamma 1_{N_1} \le p^0, \ \gamma^T 1_{N_0} \le p^1 \}.$$

Here $1_{N_0}$ denotes the $N_0 \times 1$ vector whose entries are all $1$ (resp. $1_{N_1}$), $p^0 = [p_1^0, \dots, p_{N_0}^0]$ is the vector of weights of $\mu^0$ (resp. $p^1$), the inequality $\gamma 1_{N_1} \le p^0$ holds component-wise (resp. $\gamma^T 1_{N_0} \le p^1$, where $\gamma^T$ is the transpose of $\gamma$), and $|p^0| = \sum_{i=1}^{N_0} p_i^0$ is the total mass of $\mu^0$ (resp. $|p^1|$, $|\gamma|$). The marginals are $\gamma_0 := \gamma 1_{N_1}$ and $\gamma_1 := \gamma^T 1_{N_0}$.
Similar to OT, when an optimal plan $\gamma^1$ for $OPT_\lambda(\mu^0, \mu^1)$ is not induced by a map, we can replace the target measure $\mu^1$ by an OPT barycentric projection $\hat\mu^1$ for which a map exists, therefore allowing us to apply the LOPT embedding (see (33) and (40) below).
Definition 3.4.
Let $\mu^0$ and $\mu^1$ be positive discrete measures, and let $\gamma^1 \in \Gamma_\le^*(\mu^0, \mu^1)$. The OPT barycentric projection of $\mu^1$ with respect to $\mu^0$ is defined as

$$\hat\mu^1 := \sum_{i=1}^{N_0} \hat p_i^1 \, \delta_{\hat x_i^1}, \qquad \text{where} \tag{36}$$

$$\hat p_i^1 := \sum_{j=1}^{N_1} \gamma_{i,j}^1, \qquad 1 \le i \le N_0, \tag{37}$$

$$\hat x_i^1 := \begin{cases} \dfrac{1}{\hat p_i^1} \displaystyle\sum_{j=1}^{N_1} \gamma_{i,j}^1 \, x_j^1 & \text{if } \hat p_i^1 \neq 0, \\[6pt] x_i^0 & \text{if } \hat p_i^1 = 0. \end{cases} \tag{38}$$

Theorem 3.5.
In the setting of Definition 3.4, the map $x_i^0 \mapsto \hat x_i^1$ given by (38) solves the problem $OPT_\lambda(\mu^0, \hat\mu^1)$, in the sense that it induces the optimal partial plan $\hat\gamma^1 = \operatorname{diag}(\hat p_1^1, \dots, \hat p_{N_0}^1)$.
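The projection (36)–(38) is straightforward to vectorize: the transported mass per reference point is a row sum of the plan, and points receiving no mass stay at their reference location. A minimal NumPy sketch (function name ours):

```python
import numpy as np

def opt_barycentric_projection(gamma, X0, X1):
    """OPT barycentric projection (36)-(38): keep only the transported
    mass p_hat_i = sum_j gamma_ij and average the target positions;
    points receiving no mass keep the reference position x_i^0."""
    p_hat = gamma.sum(axis=1)                           # (37)
    X_hat = X0.copy()                                   # case p_hat_i = 0 in (38)
    pos = p_hat > 0
    X_hat[pos] = (gamma[pos] @ X1) / p_hat[pos, None]   # case p_hat_i != 0 in (38)
    return p_hat, X_hat

gamma = np.array([[0.3, 0.0], [0.0, 0.0]])   # only the first reference point transports
X0 = np.array([[0.0], [1.0]])
X1 = np.array([[2.0], [5.0]])
p_hat, X_hat = opt_barycentric_projection(gamma, X0, X1)
# p_hat = [0.3, 0.0]; X_hat = [[2.0], [1.0]] (second point stays at the reference)
```

As the example shows, the untransported target mass is dropped by the projection, which is precisely the information loss quantified in Theorem 3.6 below.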
It is worth noting that when we take the barycentric projection of a measure, some information is lost; specifically, the information about the part of $\mu^1$ that is not transported from the reference $\mu^0$. This has some minor consequences.

First, unlike (18), the optimal partial transport cost $OPT_\lambda(\mu^0, \mu^1)$ changes when we replace $\mu^1$ by $\hat\mu^1$. Nevertheless, the following relation holds.
Theorem 3.6.
In the setting of Definition 3.4, if $\gamma^1$ is induced by a map, then

$$OPT_\lambda(\mu^0, \mu^1) = OPT_\lambda(\mu^0, \hat\mu^1) + \lambda \left( |\mu^1| - |\hat\mu^1| \right). \tag{39}$$
The second consequence is that the LOPT embedding of $\hat\mu^1$ always has a null third component. That is,

$$\hat\mu^1 \mapsto \left( [\hat x_1^1 - x_1^0, \dots, \hat x_{N_0}^1 - x_{N_0}^0], \ \sum_{i=1}^{N_0} \hat p_i^1 \, \delta_{x_i^0}, \ 0 \right). \tag{40}$$

Therefore, we represent this embedding as $\hat\mu^1 \mapsto (u^1, \hat p^1)$, for $u^1 = [\hat x_1^1 - x_1^0, \dots, \hat x_{N_0}^1 - x_{N_0}^0]$ and $\hat p^1 = [\hat p_1^1, \dots, \hat p_{N_0}^1]$. The last consequence is given in the next result.
Proposition 3.7.
If $\mu^0, \mu^i, \mu^j$ are discrete and satisfy the conditions of Definition 3.3, then

$$LOPT_{\mu^0, \lambda}(\mu^i, \mu^j) = LOPT_{\mu^0, \lambda}(\hat\mu^i, \hat\mu^j) + \lambda \, C_{i,j}, \tag{41}$$

where $C_{i,j} = |\mu^i| - |\hat\mu^i| + |\mu^j| - |\hat\mu^j|$.

As a byproduct, we can define the LOPT discrepancy for any pair of discrete measures $\mu^i, \mu^j$ as the right-hand side of (41). In practice, unless we aim to approximate $OPT_\lambda(\mu^i, \mu^j)$, we set $C_{i,j} = 0$ in (41). That is,

$$LOPT_{\mu^0, \lambda}(\mu^i, \mu^j) := LOPT_{\mu^0, \lambda}(\hat\mu^i, \hat\mu^j). \tag{42}$$

3.5 OPT and LOPT Interpolation
Inspired by the OT and LOT geodesics defined in Section 2.5, but lacking the Riemannian structure provided by the OT squared distance, we propose an OPT interpolation curve and its LOPT approximation.
For the OPT interpolation between two measures $\mu^i, \mu^j$ for which there exists $\gamma \in \Gamma_\le^*(\mu^i, \mu^j)$ of the form $\gamma = (\mathrm{id} \times T)_\# \gamma_0$, a natural candidate is the solution $\rho_t$ of the dynamic formulation of $OPT_\lambda(\mu^i, \mu^j)$, whose exact expression is given by Proposition 3.1. When working with general discrete measures $\mu^i, \mu^j$ (as in (15), with "$i$", "$j$" in place of $0$, $1$), such a $\gamma$ is not guaranteed to exist. Then, we replace $\mu^j$ with its OPT barycentric projection $\hat\mu^j$ with respect to $\mu^i$. By Theorem 3.5, the map $u : x_k^i \mapsto \hat x_k^j$ solves $OPT_\lambda(\mu^i, \hat\mu^j)$, and the OPT interpolating curve is

$$t \mapsto \sum_{k=1}^{N_i} \hat p_k^j \, \delta_{(1-t)\, x_k^i + t\, u(x_k^i)} + (1 - t) \sum_{k=1}^{N_i} \left( p_k^i - \hat p_k^j \right) \delta_{x_k^i}.$$
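This curve can be evaluated directly from an optimal partial plan: the transported mass travels along straight lines while the untransported source mass fades out linearly. A minimal NumPy sketch (ours; the created-mass term is absent because, as above, the target has been replaced by its projection with respect to $\mu^i$):

```python
import numpy as np

def opt_interpolation(gamma, Xi, pi, Xj, t):
    """Evaluate the OPT interpolating curve at time t from a partial plan
    gamma between mu^i (points Xi, weights pi) and mu^j (points Xj).
    Returns (positions, weights) of the moving part and of the part of
    mu^i that is destroyed at the linear rate (1 - t)."""
    p_hat = gamma.sum(axis=1)                      # transported mass per source point
    T = Xi.copy()
    pos = p_hat > 0
    T[pos] = (gamma[pos] @ Xj) / p_hat[pos, None]  # barycentric map u: x_k^i -> xhat_k^j
    moving = ((1 - t) * Xi + t * T, p_hat)         # deltas at (1-t)x + t*u(x)
    fading = (Xi, (1 - t) * (pi - p_hat))          # untransported mass fades out
    return moving, fading

gamma = np.array([[0.5]])                          # half the mass is transported
Xi = np.array([[0.0]]); pi = np.array([1.0]); Xj = np.array([[2.0]])
(mov_x, mov_p), (fad_x, fad_p) = opt_interpolation(gamma, Xi, pi, Xj, t=0.5)
# mov_x = [[1.0]], mov_p = [0.5]; fad_p = [0.25]
```

This reproduces the behavior highlighted in Figure 1: the transported delta keeps constant mass while the destroyed mass decays at a linear rate.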
When working with a multitude of measures, it is convenient to consider a reference $\mu^0$ and embed the measures into $\mathbb{R}^{(d+1) \times N_0}$ using LOPT, hence doing computations in a simpler space. Below we provide the LOPT interpolation.
Definition 3.8.
Given discrete measures $\mu^0, \mu^i, \mu^j$, with $\mu^0$ as the reference, let $(u^i, \hat p^i)$, $(u^j, \hat p^j)$ be the LOPT embeddings of $\mu^i, \mu^j$. Let $\hat p^{ij} := \hat p^i \wedge \hat p^j$ and $u_t := (1 - t)\, u^i + t\, u^j$. We define the LOPT interpolating curve between $\mu^i$ and $\mu^j$ by

$$t \mapsto \sum_{k \in D_T} \hat p_k^{ij} \, \delta_{x_k^0 + u_t(k)} + (1 - t) \sum_{k \in D_D} \left( \hat p_k^i - \hat p_k^{ij} \right) \delta_{x_k^0 + u_k^i} + t \sum_{k \in D_C} \left( \hat p_k^j - \hat p_k^{ij} \right) \delta_{x_k^0 + u_k^j},$$

where $D_T = \{k : \hat p_k^{ij} > 0\}$, $D_D = \{k : \hat p_k^i > \hat p_k^{ij}\}$, and $D_C = \{k : \hat p_k^{ij} < \hat p_k^j\}$ are, respectively, the sets where we transport, destroy, and create mass.
4 Applications
Approximation of OPT Distance: Similar to LOT [41] and Linear Hellinger–Kantorovich (LHK) [10], we test how well LOPT approximates OPT. Given $K$ empirical measures $\{\mu^i\}_{i=1}^K$, for each pair $(\mu^i, \mu^j)$ we compute $OPT_\lambda(\mu^i, \mu^j)$ and $LOPT_{\mu^0, \lambda}(\mu^i, \mu^j)$, and report the mean or median over all pairs $(\mu^i, \mu^j)$ of the relative error, defined as

$$\frac{\left| OPT_\lambda(\mu^i, \mu^j) - LOPT_{\mu^0, \lambda}(\mu^i, \mu^j) \right|}{OPT_\lambda(\mu^i, \mu^j)}.$$

As for LOT and LHK, the choice of $\mu^0$ is critical for an accurate approximation of OPT. If $\mu^0$ is far away from $\{\mu^i\}_{i=1}^K$, the linearization is a poor approximation because the mass in $\mu^i$ and $\mu^0$ would only be destroyed or created. In practice, one candidate for $\mu^0$ is the barycenter of the set of measures $\{\mu^i\}$. The OPT problem can be converted into an OT problem [8], and one can use the OT barycenter [18] to find $\mu^0$.
For our experiments, we created $K$ point sets of size $N = 500$ from $K$ different Gaussian distributions in $\mathbb{R}^2$. In particular, $\mu^i \sim \mathcal{N}(m_i, I)$, where $m_i$ is randomly selected such that $\|m_i\| = 3$ for $i = 1, \dots, K$. For the reference, we picked an $N$-point representation of $\mu^0 \sim \mathcal{N}(\bar m, I)$ with $\bar m = \sum_i m_i / K$. We repeated each experiment 10 times. To exhibit the effect of the parameter $\lambda$ on the approximation, the relative errors are shown in Figure 2. For the histograms of the relative errors for each value of $\lambda$ and each number of measures $K$, we refer to Figure 6 in Appendix H. For large $\lambda$, most mass is transported and $OT(\mu^i, \mu^j) \approx OPT_\lambda(\mu^i, \mu^j)$; the performance of LOPT is then close to that of LOT, and the relative error is small.
(a) Mean of the error. (b) Median of the error. Figure 2: Graphs of the mean and median relative errors between $OPT_\lambda$ and $LOPT_{\mu_0,\lambda}$ as a function of the parameter $\lambda$.
In Figure 3 we report wall-clock times of OPT vs. LOPT for $\lambda = 5$. We use linear programming [27] to solve each OPT problem, with a cost of $\mathcal{O}(N^3 \log(N))$ each. Thus, computing the OPT distance pairwise for $\{\mu_k\}_{k=1}^K$ requires $\mathcal{O}(K^2 N^3 \log(N))$. In contrast, to compute LOPT, we only need to solve $K$ optimal partial transport problems for the embeddings (see (33) or (40)); computing the LOPT discrepancies after the embeddings is linear in $N$. Thus, the total computational cost is $\mathcal{O}(K N^3 \log(N) + K^2 N)$. The experiment was conducted on a Linux computer with an AMD EPYC 7702P CPU with 64 cores and 256GB DDR4 RAM.
Figure 3: Wall-clock time of OPT vs. LOPT. The LP solver in PythonOT [23] is applied to each individual OPT problem, with a maximum of $100N$ iterations.
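The complexity comparison above can be made concrete with a back-of-the-envelope cost model. Constants are ignored and `math.log` stands in for the log factor, so the numbers are only indicative of the asymptotic behavior stated in the text.

```python
import math

# Abstract operation counts for all pairwise distances among K measures of
# size N, using the costs stated above.
def cost_direct(K, N):
    """O(K^2 N^3 log N): one LP solve per pair of measures."""
    return K * K * N ** 3 * math.log(N)

def cost_lopt(K, N):
    """O(K N^3 log N + K^2 N): K embedding solves, then linear comparisons."""
    return K * N ** 3 * math.log(N) + K * K * N

# the speedup is roughly K whenever K << N^2 log N
N = 500
speedups = [cost_direct(K, N) / cost_lopt(K, N) for K in (5, 50, 500)]
assert speedups[0] < speedups[1] < speedups[2]
```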
Point Cloud Interpolation: We test the OT geodesic, the LOT geodesic, OPT interpolation, and LOPT interpolation on the point cloud MNIST dataset. We compute different transport curves between point sets of the digits 0 and 9. Each digit is a weighted point set $\{x_i^j, p_i^j\}_{i=1}^{N_j}$, $j = 1, 2$, that we consider as a discrete measure of the form
$$\mu_j = \sum_{i=1}^{N_j} p_i^j \delta_{x_i^j} + \frac{1}{N_j}\sum_{i=1}^{\eta N_j} \delta_{y_i^j},$$
where the first sum corresponds to the clean data normalized to have total mass 1, and the second sum is constructed with samples from a uniform distribution acting as noise with total mass $\eta$. For HK, OPT, LHK, and LOPT, we use the distributions $\mu_j$ without re-normalization, while for OT and LOT we re-normalize them. The reference in LOT, LHK, and LOPT is taken as the OT barycenter of a sample of the digits 0, 1, and 9, not including the ones used for interpolation, normalized to have unit total mass. We test for $\eta = 0, 0.5, 0.75$ (see Figure 8 in Appendix H). The results for $\eta = 0.5$ are shown in Figure 4. We can see that OT and LOT do not eliminate noise points. HK and OPT still retain much of the noise because the interpolation is essentially between $\mu_1$ and $\hat\mu_2$ (with respect to $\mu_1$), so $\mu_1$ acts as a reference that still contains a lot of noise. In LHK and LOPT, by selecting the same reference as LOT, we see that the noise decreases significantly. In the HK and LHK cases, we notice not only how the masses vary, but also how their relative positions change, yielding a very different configuration at the end of the interpolation. OPT and LOPT instead return a more natural interpolation because of the mass preservation of the transported portion and the decoupling between transport and destruction/creation of mass.
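The noisy measures described above can be built as follows; the coordinates here are synthetic stand-ins for MNIST point clouds, so this is only a sketch of the construction, not the paper's data pipeline.

```python
import random

# Clean points get total mass 1; eta * N uniform points act as noise, each
# with mass 1/N, so the noise carries total mass eta.
def noisy_measure(points, eta, seed=0):
    rng = random.Random(seed)
    n = len(points)
    support = list(points)
    weights = [1.0 / n] * n                    # clean part, total mass 1
    for _ in range(int(eta * n)):              # noise part, total mass ~eta
        support.append([rng.uniform(0, 1), rng.uniform(0, 1)])
        weights.append(1.0 / n)
    return support, weights

pts = [[0.1 * i, 0.2 * i] for i in range(10)]
supp, w = noisy_measure(pts, eta=0.5)
assert abs(sum(w) - 1.5) < 1e-12               # total mass 1 + eta
```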
Figure 4: We demonstrate the OT geodesic, OPT interpolation, LOT geodesic, and LOPT interpolation on the point cloud MNIST dataset. The LOT geodesic and LOPT interpolation use the same reference measure. The percentage of noise $\eta$ is set to 0.5. In OPT and LOPT interpolation, we set $\lambda = 20$; in HK and LHK, we set the scaling to 2.5.
PCA analysis: We compare the results of performing PCA on the embedding spaces of LOT, LHK, and LOPT for point cloud MNIST. We take 900 digits from the dataset, corresponding to the digits 0, 1, and 3 in equal proportions. Each element is a point set $\{x_i^j\}_{i=1}^{N_j}$ that we consider as a discrete measure with added noise. The reference $\mu_0$ is set to the OT barycenter of 30 samples from the clean data. For LOT, we re-normalize each $\mu_j$ to have a total mass of 1, while we do not re-normalize for LOPT. Let $S_\eta := \{\mu_j : \text{noise level } \eta\}_{j=1}^{900}$. We embed $S_\eta$ using LOT, LHK, and LOPT and apply PCA to the embedded vectors $\{u_j\}$. In Figure 5 we show the first two principal components of the set of embedded vectors based on LOT, LHK, and LOPT for noise levels $\eta = 0, 0.75$. When there is no noise, the PCA dimension-reduction technique works well for all three embedding methods. When $\eta = 0.75$, the method fails for the LOT embedding, but the dimension-reduced data are still separable for LOPT and LHK. Regarding running time, LOT and LOPT require 60-80 seconds, while LHK requires about 300-350 seconds. The experiments were conducted on a Linux computer with an AMD EPYC 7702P CPU with 64 cores and 256GB DDR4 RAM.
Figure 5: We plot the first two principal components of each $u_j$ based on LOT and LOPT. For LOPT, we set $\lambda = 20.0$, and for LHK, we set the scaling to 2.5.
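The PCA step above (center the embedded vectors, then project onto the top two principal directions) can be sketched without any numerical library via power iteration on the covariance matrix; in practice one would use a library PCA. This is a dependency-free illustration and assumes the data has rank at least two.

```python
import random

def top_two_components(vectors, iters=200, seed=0):
    """Project centered vectors onto their top two principal directions."""
    rng = random.Random(seed)
    n, d = len(vectors), len(vectors[0])
    mean = [sum(v[k] for v in vectors) / n for k in range(d)]
    X = [[v[k] - mean[k] for k in range(d)] for v in vectors]

    def matvec(w):                       # covariance-vector product (1/n) X^T X w
        y = [sum(row[k] * w[k] for k in range(d)) for row in X]
        return [sum(X[i][k] * y[i] for i in range(n)) / n for k in range(d)]

    def power_iter(deflate=None):
        w = [rng.gauss(0, 1) for _ in range(d)]
        for _ in range(iters):
            if deflate is not None:      # keep w orthogonal to the first direction
                dot = sum(a * b for a, b in zip(w, deflate))
                w = [a - dot * b for a, b in zip(w, deflate)]
            w = matvec(w)
            norm = sum(a * a for a in w) ** 0.5
            w = [a / norm for a in w]
        return w

    w1 = power_iter()
    w2 = power_iter(deflate=w1)
    return [[sum(a * b for a, b in zip(x, w1)),
             sum(a * b for a, b in zip(x, w2))] for x in X]
```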
We refer the reader to Appendix H for further details and analysis.
5 Summary
We proposed a Linear Optimal Partial Transport (LOPT) technique that allows us to embed distributions with different total masses into a fixed-dimensional space in which several calculations are significantly simplified. We show how to implement this for real data distributions, allowing us to reduce the computational cost in applications that would benefit from the use of optimal (partial) transport. We finally provide comparisons with previous techniques and show some concrete applications. In particular, we show that LOPT is more robust and computationally efficient in the presence of noise than previous methods. For future work, we will continue to investigate the comparison between LHK and LOPT, and the potential applications of LOPT in other machine learning and data science tasks, such as barycenter problems, graph embedding, and task similarity measurement in transfer learning.
Acknowledgements
This work was partially supported by the Defense Advanced Research Projects Agency (DARPA) under Contracts No. HR00112190132 and No. HR00112190135. Any opinions, findings, conclusions, or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the United States Air Force, DARPA, or other funding agencies.
References
[1] Akram Aldroubi, Shiying Li, and Gustavo K. Rohde. Partitioning signal classes using transport transforms for data analysis and machine learning. Sampling Theory, Signal Processing, and Data Analysis, 19(1):1-25, 2021.
[2] Luigi Ambrosio, Nicola Gigli, and Giuseppe Savaré. Gradient Flows: in Metric Spaces and in the Space of Probability Measures. Springer Science & Business Media, 2005.
[3] Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein generative adversarial networks. In International Conference on Machine Learning, pages 214-223. PMLR, 2017.
[4] Yikun Bai, Bernhard Schmitzer, Matthew Thorpe, and Soheil Kolouri. Sliced optimal partial transport. arXiv preprint arXiv:2212.08049, 2022.
[5] Saurav Basu, Soheil Kolouri, and Gustavo K. Rohde. Detecting and visualizing cell phenotype differences from microscopy images using transport-based morphometry. Proceedings of the National Academy of Sciences, 111(9):3448-3453, 2014.
[6] Jean-David Benamou and Yann Brenier. A computational fluid mechanics solution to the Monge-Kantorovich mass transfer problem. Numerische Mathematik, 84(3):375-393, 2000.
[7] Yann Brenier. Polar factorization and monotone rearrangement of vector-valued functions. Communications on Pure and Applied Mathematics, 44(4):375-417, 1991.
[8] Luis A. Caffarelli and Robert J. McCann. Free boundaries in optimal transport and Monge-Ampère obstacle problems. Annals of Mathematics, pages 673-730, 2010.
[9] Tianji Cai, Junyi Cheng, Nathaniel Craig, and Katy Craig. Linearized optimal transport for collider events. Physical Review D, 102(11):116019, 2020.
[10] Tianji Cai, Junyi Cheng, Bernhard Schmitzer, and Matthew Thorpe. The linearized Hellinger-Kantorovich distance. SIAM Journal on Imaging Sciences, 15(1):45-83, 2022.
[11] Lenaic Chizat, Gabriel Peyré, Bernhard Schmitzer, and François-Xavier Vialard. An interpolating distance between optimal transport and Fisher-Rao metrics. Foundations of Computational Mathematics, 18(1):1-44, 2018.
[12] Lenaic Chizat, Gabriel Peyré, Bernhard Schmitzer, and François-Xavier Vialard. Unbalanced optimal transport: Dynamic and Kantorovich formulations. Journal of Functional Analysis, 274(11):3090-3123, 2018.
[13] Lenaic Chizat, Pierre Roussillon, Flavien Léger, François-Xavier Vialard, and Gabriel Peyré. Faster Wasserstein distance estimation with the Sinkhorn divergence. Advances in Neural Information Processing Systems, 33:2257-2269, 2020.
[14] Lenaic Chizat, Bernhard Schmitzer, Gabriel Peyré, and François-Xavier Vialard. An interpolating distance between optimal transport and Fisher-Rao. arXiv:1506.06430 [math], 2015.
[15] Nicolas Courty, Rémi Flamary, Amaury Habrard, and Alain Rakotomamonjy. Joint distribution optimal transportation for domain adaptation. Advances in Neural Information Processing Systems, 30, 2017.
[16] Nicolas Courty, Rémi Flamary, and Devis Tuia. Domain adaptation with regularized optimal transport. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pages 274-289. Springer, 2014.
[17] Marco Cuturi. Sinkhorn distances: Lightspeed computation of optimal transport. Advances in Neural Information Processing Systems, 26, 2013.
[18] Marco Cuturi and Arnaud Doucet. Fast computation of Wasserstein barycenters. In International Conference on Machine Learning, pages 685-693. PMLR, 2014.
[19] Kilian Fatras, Thibault Séjourné, Rémi Flamary, and Nicolas Courty. Unbalanced minibatch optimal transport; applications to domain adaptation. In International Conference on Machine Learning, pages 3186-3197. PMLR, 2021.
[20] Alessio Figalli. The optimal partial transport problem. Archive for Rational Mechanics and Analysis, 195(2):533-560, 2010.
[21] Alessio Figalli and Nicola Gigli. A new transportation distance between non-negative measures, with applications to gradients flows with Dirichlet boundary conditions. Journal de Mathématiques Pures et Appliquées, 94(2):107-130, 2010.
[22] Alessio Figalli and Federico Glaudo. An Invitation to Optimal Transport, Wasserstein Distances, and Gradient Flows. European Mathematical Society, 2021.
[23] Rémi Flamary, Nicolas Courty, Alexandre Gramfort, Mokhtar Z. Alaya, Aurélie Boisbunon, Stanislas Chambon, Laetitia Chapel, Adrien Corenflos, Kilian Fatras, Nemo Fournier, Léo Gautheron, Nathalie T.H. Gayraud, Hicham Janati, Alain Rakotomamonjy, Ievgen Redko, Antoine Rolet, Antony Schutz, Vivien Seguy, Danica J. Sutherland, Romain Tavenard, Alexander Tong, and Titouan Vayer. POT: Python Optimal Transport. Journal of Machine Learning Research, 22(78):1-8, 2021.
[24] Aude Genevay, Gabriel Peyré, and Marco Cuturi. GAN and VAE from an optimal transport point of view. arXiv preprint arXiv:1706.01807, 2017.
[25] Kevin Guittet. Extended Kantorovich norms: a tool for optimization. PhD thesis, INRIA, 2002.
[26] Hicham Janati, Marco Cuturi, and Alexandre Gramfort. Wasserstein regularization for sparse multi-task regression. In The 22nd International Conference on Artificial Intelligence and Statistics, pages 1407-1416. PMLR, 2019.
[27] Narendra Karmarkar. A new polynomial-time algorithm for linear programming. In Proceedings of the Sixteenth Annual ACM Symposium on Theory of Computing, pages 302-311, 1984.
[28] Sanggyun Kim, Rui Ma, Diego Mesa, and Todd P. Coleman. Efficient Bayesian inference methods via convex optimization and optimal transport. In 2013 IEEE International Symposium on Information Theory, pages 2259-2263. IEEE, 2013.
[29] Soheil Kolouri, Navid Naderializadeh, Gustavo K. Rohde, and Heiko Hoffmann. Wasserstein embedding for graph learning. In International Conference on Learning Representations, 2020.
[30] Shinjini Kundu, Soheil Kolouri, Kirk I. Erickson, Arthur F. Kramer, Edward McAuley, and Gustavo K. Rohde. Discovery and visualization of structural biomarkers from MRI using transport-based morphometry. NeuroImage, 167:256-275, 2018.
[31] Jan Lellmann, Dirk A. Lorenz, Carola Schönlieb, and Tuomo Valkonen. Imaging with Kantorovich-Rubinstein discrepancy. SIAM Journal on Imaging Sciences, 7(4):2833-2859, 2014.
[32] Matthias Liero, Alexander Mielke, and Giuseppe Savaré. Optimal entropy-transport problems and a new Hellinger-Kantorovich distance between positive measures. Inventiones Mathematicae, 211(3):969-1117, 2018.
[33] Huidong Liu, Xianfeng Gu, and Dimitris Samaras. Wasserstein GAN with quadratic transport cost. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 4832-4841, 2019.
[34] Caroline Moosmüller and Alexander Cloninger. Linear optimal transport embedding: Provable Wasserstein classification for certain rigid transformations and perturbations. Information and Inference: A Journal of the IMA, 12(1):363-389, 2023.
[35] Navid Naderializadeh, Joseph F. Comer, Reed Andrews, Heiko Hoffmann, and Soheil Kolouri. Pooling by sliced-Wasserstein embedding. Advances in Neural Information Processing Systems, 34:3389-3400, 2021.
[36] Khai Nguyen, Dang Nguyen, and Nhat Ho. Self-attention amortized distributional projection optimization for sliced Wasserstein point-cloud reconstruction. arXiv preprint arXiv:2301.04791, 2023.
[37] Khai Nguyen, Dang Nguyen, Tung Pham, Nhat Ho, et al. Improving mini-batch optimal transport via partial transportation. In International Conference on Machine Learning, pages 16656-16690. PMLR, 2022.
[38] Filippo Santambrogio. Optimal Transport for Applied Mathematicians: Calculus of Variations, PDEs and Modeling. Birkhäuser, 2015.
[39] Meyer Scetbon and Marco Cuturi. Low-rank optimal transport: Approximation, statistics and debiasing. In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho, editors, Advances in Neural Information Processing Systems, 2022.
[40] Cédric Villani. Topics in Optimal Transportation, volume 58. American Mathematical Society, 2003.
[41] Wei Wang, Dejan Slepčev, Saurav Basu, John A. Ozolek, and Gustavo K. Rohde. A linear optimal transportation framework for quantifying and visualizing variations in sets of images. International Journal of Computer Vision, 101(2):254-269, 2013.
[42] Jianbo Ye, Panruo Wu, James Z. Wang, and Jia Li. Fast discrete distribution clustering using Wasserstein barycenter with sparse support. IEEE Transactions on Signal Processing, 65(9):2317-2332, 2017.
Appendix
We refer to the main text for references.
Appendix A Notation
- $(\mathbb{R}^d, \|\cdot\|)$, $d$-dimensional Euclidean space endowed with the standard Euclidean norm $\|x\| = \sqrt{\sum_{i=1}^d |x(i)|^2}$. $d(\cdot,\cdot)$ is the associated distance, i.e., $d(x,y) = \|x - y\|$.
- $x \cdot y$, canonical inner product in $\mathbb{R}^d$.
- $1_n$, column vector of size $n \times 1$ with all entries equal to $1$.
- $\operatorname{diag}(a_1,\dots,a_n)$, diagonal matrix.
- $M^T$, transpose of a matrix $M$.
- $\mathbb{R}_+$, non-negative real numbers.
- $\Omega$, convex and compact subset of $\mathbb{R}^d$. $\Omega^2 = \Omega \times \Omega$.
- $\mathcal{P}(\Omega)$, set of Borel probability measures defined on $\Omega$.
- $\mathcal{M}_+(\Omega)$, set of positive finite Borel measures defined on $\Omega$.
- $\mathcal{M}$, set of signed measures.
- $\pi_0: \Omega^2 \to \Omega$, $\pi_0(x_0,x_1) := x_0$; $\pi_1: \Omega^2 \to \Omega$, $\pi_1(x_0,x_1) := x_1$, standard projections.
- $F_\#\mu$, push-forward of the measure $\mu$ by the function $F$: $F_\#\mu(B) = \mu(F^{-1}(B))$ for every measurable set $B$, where $F^{-1}(B) = \{x : F(x) \in B\}$.
- $\delta_x$, Dirac measure concentrated at $x$.
- $\mu = \sum_{i=1}^n p_i \delta_{x_i}$, discrete measure ($x_i \in \Omega$, $p_i \in \mathbb{R}_+$). The coefficients $p_i$ are called the weights of $\mu$.
- $\operatorname{supp}(\mu)$, support of the measure $\mu$.
- $\mu^i \wedge \mu^j$, minimum measure between $\mu^i$ and $\mu^j$; $p^i \wedge p^j$, vector whose $k$-th entry is the minimum of $p_k^i$ and $p_k^j$.
- $\mu_0$, reference measure; $\mu_i, \mu_j$, target measures. In the OT framework they are in $\mathcal{P}(\Omega)$; in the OPT framework they are in $\mathcal{M}_+(\Omega)$.
- $\Gamma(\mu_0,\mu_i) = \{\gamma \in \mathcal{P}(\Omega^2) : \pi_{0\#}\gamma = \mu_0, \ \pi_{1\#}\gamma = \mu_i\}$, set of Kantorovich transport plans.
- $C(\gamma; \mu_0,\mu_i) = \int_{\Omega^2} \|x_0 - x_i\|^2 \, d\gamma(x_0,x_i)$, Kantorovich cost of the transportation plan $\gamma$ between $\mu_0$ and $\mu_i$.
- $\Gamma^*(\mu_0,\mu_i)$, set of optimal Kantorovich transport plans.
- $T$, optimal transport Monge map.
- $\operatorname{id}$, identity map $\operatorname{id}(x) = x$.
- $T_t(x) = (1-t)x + tT(x)$ for $T: \Omega \to \Omega$. Viewed as a function of $(t,x)$, it is a flow.
- In Section 2: $\rho \in \mathcal{P}([0,1] \times \Omega)$, curve of measures; at each $0 \le t \le 1$, $\rho_t \in \mathcal{P}(\Omega)$. In Section 3: analogous, replacing $\mathcal{P}$ by $\mathcal{M}_+$.
- $v: [0,1] \times \Omega \to \mathbb{R}^d$; at each time $t$, $v_t: \Omega \to \mathbb{R}^d$ is a vector field. $v_0$, initial velocity.
- $\nabla \cdot f$, divergence of the vector field $f$ with respect to the spatial variable $x$.
- $\partial_t \rho + \nabla \cdot (\rho v) = 0$, continuity equation.
- $\mathcal{CE}(\mu_0,\mu_i)$, set of solutions of the continuity equation with boundary conditions $\mu_0$ and $\mu_i$.
- $\partial_t \rho + \nabla \cdot (\rho v) = \zeta$, continuity equation with forcing term $\zeta$.
- $\mathcal{FCE}(\mu_0,\mu_i)$, set of solutions $(\rho, v, \zeta)$ of the continuity equation with forcing term, with boundary conditions $\mu_0$ and $\mu_i$.
- $OT(\mu_0,\mu_i)$, optimal transport minimization problem. See (1) for the Kantorovich formulation and (6) for the dynamic formulation.
- $W_2(\mu_0,\mu_i) = \sqrt{OT(\mu_0,\mu_i)}$, $2$-Wasserstein distance.
- $\mathcal{T}_\mu = L^2(\Omega; \mathbb{R}^d, \mu)$, where $\|u\|_\mu^2 = \int_\Omega \|u(x)\|^2 \, d\mu(x)$ (if $\mu$ is discrete, $\mathcal{T}_\mu$ is identified with $\mathbb{R}^{n \times d}$ and $\|u\|_\mu^2 = \sum_{i=1}^n \|u(i)\|^2 p_i$).
- $LOT_{\mu_0}(\mu_i,\mu_j)$, see (14) for continuous densities and (19) for discrete measures.
- $\lambda \ge 0$, penalization parameter in OPT.
- $\mu \le \nu$ if $\mu(E) \le \nu(E)$ for every measurable set $E$; we say that $\mu$ is dominated by $\nu$.
- $\Gamma_\le(\mu_0,\mu_i) = \{\gamma \in \mathcal{M}_+(\Omega^2) : \pi_{0\#}\gamma \le \mu_0, \ \pi_{1\#}\gamma \le \mu_i\}$, set of partial transport plans.
- $\gamma_0 := \pi_{0\#}\gamma$, $\gamma_1 := \pi_{1\#}\gamma$, marginals of $\gamma \in \mathcal{M}_+(\Omega^2)$.
- $\nu_0 = \mu_0 - \gamma_0$ (for the reference $\mu_0$), $\nu_i = \mu_i - \gamma_1$ (for the target $\mu_i$).
- $C(\gamma; \mu_0,\mu_i,\lambda) = \int \|x_0 - x_i\|^2 \, d\gamma(x_0,x_i) + \lambda\big(|\mu_0 - \gamma_0| + |\mu_i - \gamma_1|\big)$, partial transport cost of the partial transportation plan $\gamma$ between $\mu_0$ and $\mu_i$ with penalization $\lambda$.
- $\Gamma_\le^*(\mu_0,\mu_i)$, set of optimal partial transport plans.
- $OPT_\lambda(\mu_0,\mu_i)$, optimal partial transport minimization problem between $\mu_0$ and $\mu_i$. See (23) for the static formulation and (25) for the dynamic formulation.
- $\|v\|_{\rho,2\lambda}^2 := \int_\Omega \min(\|v\|^2, 2\lambda) \, d\rho$.
- $LOPT_\lambda(\mu_i,\mu_j)$, see (35) and (42) and the discussion above them.
- $\mu_i \mapsto u_i$, LOT embedding (fixing first a reference $\mu_0 \in \mathcal{P}(\Omega)$). If $\mu_0$ has a continuous density, $u_i := v_0^i = T^i - \operatorname{id}$, so it is a map from measures $\mu_i \in \mathcal{P}(\Omega)$ to vector fields $u_i$ defined on $\Omega$; see (12). For discrete measures ($\mu_0, \mu_i$ discrete), the barycentric projection is needed first, and in this case the embedding is a map from discrete probability measures to $\mathbb{R}^{n_0 \times d}$; see (17).
- $\hat\mu$, OT and OPT barycentric projection of $\mu$ with respect to a reference $\mu_0$ (in $\mathcal{P}$ or $\mathcal{M}_+$, resp.); see (16) and (36), respectively. $\hat x_i$ denotes the new locations where $\hat\mu$ is concentrated; $p_i^0$ and $\hat p_i$ denote the weights of $\hat\mu$ for OT and OPT, respectively.
- $u_t = (1-t)u_i + tu_j$, geodesic in the LOT embedding space.
- $\hat\mu_t$, LOT geodesic.
- LOPT embedding, see (33) and (40).
- OPT interpolating curve and LOPT interpolating curve are defined in Section 3.5.
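The barycentric projection listed above can be illustrated for discrete measures: each reference point receives the plan-weighted average of the target points it is matched to, with weight equal to the transported mass. This is a sketch consistent with the notation above; the exact display (36) is in the main text, and the inputs below are hypothetical.

```python
# OPT barycentric projection of a discrete target onto the reference support.
# gamma: n0 x ni nonnegative matrix (a partial transport plan);
# x_target: the ni target support points.
def barycentric_projection(gamma, x_target):
    n0, ni = len(gamma), len(gamma[0])
    d = len(x_target[0])
    x_hat, p_hat = [], []
    for k in range(n0):
        mass = sum(gamma[k])              # p_hat_k: mass transported to x_k^0
        if mass > 0:
            x_hat.append([sum(gamma[k][j] * x_target[j][c] for j in range(ni)) / mass
                          for c in range(d)])
        else:
            x_hat.append([0.0] * d)       # convention: no mass arrives here
        p_hat.append(mass)
    return x_hat, p_hat

gamma = [[0.2, 0.0], [0.1, 0.3]]
xt = [[0.0, 0.0], [1.0, 1.0]]
x_hat, p_hat = barycentric_projection(gamma, xt)
assert abs(p_hat[1] - 0.4) < 1e-12
assert abs(x_hat[1][0] - 0.75) < 1e-12
```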
Appendix B Proof of Lemma 2.1
Proof.
We fix $\gamma^* \in \Gamma^*(\mu_0,\mu_i)$, and then we compute the map $x_i^0 \mapsto \hat x_i^i$ according to (16). This map induces the transportation plan $\hat\gamma^* := \operatorname{diag}(p_1^0,\dots,p_{n_0}^0) \in \mathbb{R}_+^{n_0 \times n_0}$. We will prove that $\hat\gamma^* \in \Gamma^*(\mu_0,\hat\mu_i)$. By definition, it is easy to see that its marginals are $\mu_0$ and $\hat\mu_i$; the delicate part is to prove optimality. Let $\hat\gamma \in \Gamma(\mu_0,\hat\mu_i)$ be an arbitrary transportation plan between $\mu_0$ and $\hat\mu_i$ (i.e., $\hat\gamma 1_{n_0} = \hat\gamma^T 1_{n_0} = p^0$). We need to show
$$C(\hat\gamma^*; \mu_0, \hat\mu_i) \le C(\hat\gamma; \mu_0, \hat\mu_i). \qquad (43)$$
Since $\hat\gamma^*$ is a diagonal matrix with positive diagonal, its inverse matrix is $(\hat\gamma^*)^{-1} = \operatorname{diag}(1/p_1^0,\dots,1/p_{n_0}^0)$. We define
$$\gamma := \hat\gamma\, (\hat\gamma^*)^{-1} \gamma^* \in \mathbb{R}_+^{n_0 \times n_i}.$$
We claim $\gamma \in \Gamma(\mu_0,\mu_i)$. Indeed,
$$\gamma 1_{n_i} = \hat\gamma (\hat\gamma^*)^{-1} \gamma^* 1_{n_i} = \hat\gamma (\hat\gamma^*)^{-1} p^0 = \hat\gamma 1_{n_0} = p^0, \qquad (44)$$
$$\gamma^T 1_{n_0} = (\gamma^*)^T (\hat\gamma^*)^{-1} \hat\gamma^T 1_{n_0} = (\gamma^*)^T (\hat\gamma^*)^{-1} p^0 = (\gamma^*)^T 1_{n_0} = p^i, \qquad (45)$$
where we used that $(\hat\gamma^*)^{-1} p^0 = 1_{n_0}$, that $\gamma^* \in \Gamma(\mu_0,\mu_i)$ (second equality in (44) and last equality in (45)), and that $\hat\gamma \in \Gamma(\mu_0,\hat\mu_i)$ (last equality in (44) and second equality in (45)).
Since $\gamma^*$ is optimal for $OT(\mu_0,\mu_i)$, we have $C(\gamma^*;\mu_0,\mu_i) \le C(\gamma;\mu_0,\mu_i)$, where $C$ denotes the corresponding transportation costs of $\gamma^*$ and $\gamma$. We write
$$C(\gamma^*;\mu_0,\mu_i) = \sum_{i=1}^{n_0}\sum_{j=1}^{n_i} \|x_i^0 - x_j^i\|^2 \gamma^*_{i,j} = \sum_{i=1}^{n_0}\|x_i^0\|^2 p_i^0 + \sum_{j=1}^{n_i}\|x_j^i\|^2 p_j^i - 2\sum_{i=1}^{n_0}\sum_{j=1}^{n_i} x_i^0 \cdot x_j^i \, \gamma^*_{i,j} = K_1 - 2\sum_{i=1}^{n_0}\sum_{j=1}^{n_i} x_i^0 \cdot x_j^i \, \gamma^*_{i,j},$$
where $K_1$ is a constant (which only depends on $\mu_0, \mu_i$). Similarly,
$$C(\gamma;\mu_0,\mu_i) = K_1 - 2\sum_{i=1}^{n_0}\sum_{j=1}^{n_i} x_i^0 \cdot x_j^i \, \gamma_{i,j} = K_1 - 2\sum_{i=1}^{n_0}\sum_{j=1}^{n_i}\sum_{\ell=1}^{n_0} \frac{\hat\gamma_{i,\ell}\,\gamma^*_{\ell,j}}{p_\ell^0}\, x_i^0 \cdot x_j^i.$$
Analogously, we have
$$C(\hat\gamma^*;\mu_0,\hat\mu_i) = \sum_{i=1}^{n_0}\sum_{j=1}^{n_0}\|x_i^0 - \hat x_j^i\|^2 \hat\gamma^*_{i,j} = \sum_{i=1}^{n_0}\|x_i^0\|^2 p_i^0 + \sum_{j=1}^{n_0}\|\hat x_j^i\|^2 p_j^0 - 2\sum_{i=1}^{n_0}\sum_{j=1}^{n_0} x_i^0 \cdot \hat x_j^i \, \hat\gamma^*_{i,j} = K_2 - 2\sum_{i=1}^{n_0}\sum_{j=1}^{n_i} x_i^0 \cdot x_j^i \, \gamma^*_{i,j} = K_2 - K_1 + C(\gamma^*;\mu_0,\mu_i), \qquad (46)$$
where $K_2$ is a constant (which only depends on $\mu_0$ and $\hat\mu_i$), and similarly
$$C(\hat\gamma;\mu_0,\hat\mu_i) = K_2 - 2\sum_{i=1}^{n_0}\sum_{j=1}^{n_i}\sum_{\ell=1}^{n_0} \frac{\hat\gamma_{i,\ell}\,\gamma^*_{\ell,j}}{p_\ell^0}\, x_i^0 \cdot x_j^i = K_2 - K_1 + C(\gamma;\mu_0,\mu_i). \qquad (47)$$
Therefore, by (46) and (47), we have that $C(\gamma^*;\mu_0,\mu_i) \le C(\gamma;\mu_0,\mu_i)$ if and only if $C(\hat\gamma^*;\mu_0,\hat\mu_i) \le C(\hat\gamma;\mu_0,\hat\mu_i)$. So, we conclude the proof by the fact that $\gamma^*$ is optimal for $OT(\mu_0,\mu_i)$. ∎
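The algebraic identity behind (46)-(47) can be spot-checked numerically. The sketch below uses hypothetical 2-point data (not from the paper): it builds the barycentric projection $\hat x$ and the induced plan $\gamma = \hat\gamma(\hat\gamma^*)^{-1}\gamma^*$, and verifies $C(\hat\gamma;\mu_0,\hat\mu_i) = K_2 - K_1 + C(\gamma;\mu_0,\mu_i)$.

```python
def sqdist(a, b):
    return sum((u - v) ** 2 for u, v in zip(a, b))

def cost(plan, X, Y):
    """Kantorovich cost of a discrete plan between supports X and Y."""
    return sum(plan[i][j] * sqdist(X[i], Y[j])
               for i in range(len(X)) for j in range(len(Y)))

x0 = [[0.0, 0.0], [2.0, 0.0]]; p0 = [0.5, 0.5]     # reference mu_0
xi = [[1.0, 1.0], [3.0, -1.0]]; pi = [0.5, 0.5]    # target mu_i
g_star = [[0.3, 0.2], [0.2, 0.3]]                   # a plan with marginals p0, pi
g_hat = [[0.1, 0.4], [0.4, 0.1]]                    # arbitrary plan in Gamma(mu_0, mu_hat_i)

# barycentric projection x_hat and the induced plan gamma
x_hat = [[sum(g_star[k][j] * xi[j][c] for j in range(2)) / p0[k] for c in range(2)]
         for k in range(2)]
gamma = [[sum(g_hat[i][k] * g_star[k][j] / p0[k] for k in range(2)) for j in range(2)]
         for i in range(2)]

K1 = sum(sqdist(x, [0, 0]) * p for x, p in zip(x0, p0)) + \
     sum(sqdist(x, [0, 0]) * p for x, p in zip(xi, pi))
K2 = sum(sqdist(x, [0, 0]) * p for x, p in zip(x0, p0)) + \
     sum(sqdist(x, [0, 0]) * p for x, p in zip(x_hat, p0))

assert abs(cost(g_hat, x0, x_hat) - (K2 - K1 + cost(gamma, x0, xi))) < 1e-9
```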
Appendix C Proof of Proposition 2.2
Lemma C.1.
Given discrete measures $\mu^0 = \sum_{i=1}^{n_0} p_i^0 \delta_{x_i^0}$, $\mu^1 = \sum_{i=1}^{n_0} p_i^0 \delta_{x_i^1}$, $\mu^2 = \sum_{i=1}^{n_0} p_i^0 \delta_{x_i^2}$, suppose that the maps $x_i^0 \mapsto x_i^1$ and $x_i^0 \mapsto x_i^2$ solve $OT(\mu^0,\mu^1)$ and $OT(\mu^0,\mu^2)$, respectively. For $t \in [0,1]$, define $\mu^t := \sum_{i=1}^{n_0} p_i^0 \delta_{(1-t)x_i^1 + t x_i^2}$. Then, the mapping $T^t: x_i^0 \mapsto (1-t)x_i^1 + t x_i^2$ solves the problem $OT(\mu^0,\mu^t)$.
Proof.
Let $\gamma^* = \operatorname{diag}(p_1^0,\dots,p_{n_0}^0)$ be the corresponding transportation plan induced by $T^t$. Consider an arbitrary $\gamma \in \mathbb{R}_+^{n_0 \times n_0}$ such that $\gamma \in \Gamma(\mu^0,\mu^t)$. We need to show
$$C(\gamma^*;\mu^0,\mu^t) \le C(\gamma;\mu^0,\mu^t).$$
We have
$$C(\gamma^*;\mu^0,\mu^t) = \sum_{i,i'=1}^{n_0} \|x_i^0 - (1-t)x_{i'}^1 - t x_{i'}^2\|^2 \gamma^*_{i,i'} = \sum_{i=1}^{n_0}\|x_i^0\|^2 p_i^0 + \sum_{i=1}^{n_0}\|(1-t)x_i^1 + t x_i^2\|^2 p_i^0 - 2\sum_{i,i'=1}^{n_0} x_i^0 \cdot \big((1-t)x_{i'}^1 + t x_{i'}^2\big)\, \gamma^*_{i,i'} = K - 2\Big[(1-t)\sum_{i,i'=1}^{n_0} x_i^0 \cdot x_{i'}^1\, \gamma^*_{i,i'} + t\sum_{i,i'=1}^{n_0} x_i^0 \cdot x_{i'}^2\, \gamma^*_{i,i'}\Big], \qquad (48)$$
where, in the third equality, $K$ is a constant which only depends on the marginals $\mu^0, \mu^t$. Similarly,
$$C(\gamma;\mu^0,\mu^t) = K - 2\Big[(1-t)\sum_{i,i'=1}^{n_0} x_i^0 \cdot x_{i'}^1\, \gamma_{i,i'} + t\sum_{i,i'=1}^{n_0} x_i^0 \cdot x_{i'}^2\, \gamma_{i,i'}\Big]. \qquad (49)$$
By the fact that $\gamma, \gamma^* \in \Gamma(\mu^0,\mu^t) = \Gamma(\mu^0,\mu^1) = \Gamma(\mu^0,\mu^2)$, and that $\gamma^*$ is optimal for $OT(\mu^0,\mu^1)$ and $OT(\mu^0,\mu^2)$, we have
$$\sum_{i,i'=1}^{n_0} x_i^0 \cdot x_{i'}^1\, \gamma^*_{i,i'} \ge \sum_{i,i'=1}^{n_0} x_i^0 \cdot x_{i'}^1\, \gamma_{i,i'} \quad \text{and} \quad \sum_{i,i'=1}^{n_0} x_i^0 \cdot x_{i'}^2\, \gamma^*_{i,i'} \ge \sum_{i,i'=1}^{n_0} x_i^0 \cdot x_{i'}^2\, \gamma_{i,i'}.$$
Thus, by (48) and (49), we have $C(\gamma^*;\mu^0,\mu^t) \le C(\gamma;\mu^0,\mu^t)$, and this completes the proof. ∎
Proof of Proposition 2.2.
Consider $0 \le s \le t \le 1$. By Lemma 2.1, the maps $T^i: x_k^0 \mapsto \hat x_k^i$ and $T^j: x_k^0 \mapsto \hat x_k^j$ solve $OT(\mu_0,\hat\mu_i)$ and $OT(\mu_0,\hat\mu_j)$, respectively. Moreover, by Lemma C.1 (under the appropriate renaming), the mapping
$$x_k^0 \mapsto (1-s)\hat x_k^i + s\hat x_k^j = x_k^0 + (1-s)u_k^i + s u_k^j, \quad 1 \le k \le n_0,$$
solves $OT(\mu_0,\hat\mu_s)$, and similarly
$$x_k^0 \mapsto (1-t)\hat x_k^i + t\hat x_k^j = x_k^0 + (1-t)u_k^i + t u_k^j, \quad 1 \le k \le n_0,$$
solves $OT(\mu_0,\hat\mu_t)$.
Thus,
$$LOT_{\mu_0}(\hat\mu_s,\hat\mu_t) = \sum_{k=1}^{n_0} \big\|\big((1-s)\hat x_k^i + s\hat x_k^j\big) - \big((1-t)\hat x_k^i + t\hat x_k^j\big)\big\|^2 p_k^0 = \sum_{k=1}^{n_0} (t-s)^2 \|\hat x_k^i - \hat x_k^j\|^2 p_k^0 = (t-s)^2\, LOT_{\mu_0}(\mu_i,\mu_j),$$
and this finishes the proof. ∎
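The scaling property just proved is easy to illustrate numerically. The sketch below uses hypothetical projected points $\hat x^i, \hat x^j$ and checks that the LOT discrepancy between two points of the geodesic scales as $(t-s)^2$.

```python
# LOT discrepancy between two measures supported on the reference grid:
# a weighted sum of squared displacements.
def lot_embedded(a, b, p0):
    return sum(p * sum((u - v) ** 2 for u, v in zip(x, y))
               for x, y, p in zip(a, b, p0))

p0 = [0.25, 0.75]
xh_i = [[0.0, 0.0], [1.0, 2.0]]
xh_j = [[1.0, 1.0], [3.0, 0.0]]

def interp(t):
    """Point of the LOT geodesic at time t."""
    return [[(1 - t) * u + t * v for u, v in zip(x, y)] for x, y in zip(xh_i, xh_j)]

s, t = 0.2, 0.7
lhs = lot_embedded(interp(s), interp(t), p0)
rhs = (t - s) ** 2 * lot_embedded(xh_i, xh_j, p0)
assert abs(lhs - rhs) < 1e-12
```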
Appendix D Proof of Proposition 3.1
This result relies on the definition (29) of the curve $\rho_t$, which is inspired by the fact that solutions of non-homogeneous equations are given by solutions of the associated homogeneous equation plus a particular solution of the non-homogeneous one. We choose the first term of $\rho_t$ as a solution of a (homogeneous) continuity equation ($\gamma_t$ defined in (28)), and the second term ($\bar\gamma_t := (1-t)\nu_0 + t\nu_i$) as an appropriate particular solution of equation (26) with the forcing term $\zeta$ defined in (31). Moreover, the curve $\rho_t$ will become optimal for (25) since both $\gamma_t$ and $\bar\gamma_t$ are "optimal" in different senses. On the one hand, $\gamma_t$ is optimal for a classical optimal transport problem. On the other hand, $\bar\gamma_t$ is defined as a line interpolating the new part introduced by the framework of partial transportation, namely the destruction and creation of mass.
Although this is the core idea behind the proposition, to prove it we need several lemmas to deal with the details and subtleties.
First, we mention that the push-forward measure $F_\#\mu$ satisfies the change of variables formula
$$\int_A f(x)\, dF_\#\mu(x) = \int_{F^{-1}(A)} f(F(x))\, d\mu(x)$$
for every measurable set $A$ and every measurable function $f$, where $F^{-1}(A) = \{x : F(x) \in A\}$. This is a general fact, and as an immediate consequence we have conservation of mass:
$$|F_\#\mu| = |\mu|.$$
Then, in our case, the second term in the cost function (24) can be written as
$$\lambda\big(|\mu_0 - \gamma_0| + |\mu_i - \gamma_1|\big) = \lambda\big(|\mu_0| + |\mu_i| - 2|\gamma|\big),$$
since $\gamma_0$ and $\gamma_1$ are dominated by $\mu_0$ and $\mu_i$, respectively, in the sense that $\gamma_0 \le \mu_0$ and $\gamma_1 \le \mu_i$.
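For discrete measures this identity is a one-line computation; the toy data below is hypothetical and only serves to check the equality when the marginals of $\gamma$ are dominated by $\mu_0$ and $\mu_i$.

```python
# Check: lambda(|mu_0 - gamma_0| + |mu_i - gamma_1|) = lambda(|mu_0| + |mu_i| - 2|gamma|)
lam = 3.0
p0 = [0.5, 0.7]                               # weights of mu_0
pi = [0.4, 0.6, 0.2]                          # weights of mu_i
gamma = [[0.1, 0.2, 0.0], [0.2, 0.1, 0.1]]    # partial plan, marginals <= p0, pi

g0 = [sum(row) for row in gamma]                              # gamma_0
g1 = [sum(gamma[i][j] for i in range(2)) for j in range(3)]   # gamma_1
lhs = lam * (sum(a - b for a, b in zip(p0, g0)) +
             sum(a - b for a, b in zip(pi, g1)))
mass = sum(map(sum, gamma))                                   # |gamma|
rhs = lam * (sum(p0) + sum(pi) - 2 * mass)
assert abs(lhs - rhs) < 1e-12
```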
Finally, we would like to point out that when we say that $(\rho, v, \zeta)$ is a solution of equation (26), we mean that it is a weak solution or, equivalently, a solution in the distributional sense [38, Section 4]. That is, for any continuously differentiable test function $\phi: \Omega \to \mathbb{R}$ with compact support, $(\rho, v, \zeta)$ satisfies
$$\partial_t\Big(\int_\Omega \phi\, d\rho_t\Big) - \int_\Omega \nabla\phi \cdot v_t\, d\rho_t = \int_\Omega \phi\, d\zeta_t,$$
or, equivalently, for any continuously differentiable test function $\phi: [0,1]\times\Omega \to \mathbb{R}$ with compact support, $(\rho, v, \zeta)$ satisfies
$$\int_{[0,1]\times\Omega} \partial_t\phi\, d\rho + \int_{[0,1]\times\Omega} \nabla\phi \cdot v\, d\rho + \int_{[0,1]\times\Omega} \phi\, d\zeta = \int_\Omega \phi(1,\cdot)\, d\mu_i - \int_\Omega \phi(0,\cdot)\, d\mu_0.$$
Lemma D.1.
If $\gamma^*$ is optimal for $OPT_\lambda(\mu_0,\mu_1)$ and has marginals $\gamma_0^*$ and $\gamma_1^*$, then $\gamma^*$ is optimal for $OPT_\lambda(\mu_0,\gamma_1^*)$, $OPT_\lambda(\gamma_0^*,\mu_1)$, and $OT(\gamma_0^*,\gamma_1^*)$.
Proof.
Let $C(\gamma;\mu_0,\mu_1,\lambda)$ denote the objective function of the minimization problem $OPT_\lambda(\mu_0,\mu_1)$, and similarly consider $C(\gamma;\gamma_0^*,\mu_1,\lambda)$ and $C(\gamma;\mu_0,\gamma_1^*,\lambda)$. Also, let $C(\gamma;\gamma_0^*,\gamma_1^*)$ be the objective function of $OT(\gamma_0^*,\gamma_1^*)$.
First, given an arbitrary plan $\gamma \in \Gamma(\gamma_0^*,\gamma_1^*) \subseteq \Gamma_\le(\mu_0,\mu_1)$, we have
$$C(\gamma;\mu_0,\mu_1,\lambda) = \int_{\Omega^2}\|x_0 - x_1\|^2\, d\gamma(x_0,x_1) + \lambda\big(|\mu_0 - \gamma_0^*| + |\mu_1 - \gamma_1^*|\big) = C(\gamma;\gamma_0^*,\gamma_1^*) + \lambda\big(|\mu_0 - \gamma_0^*| + |\mu_1 - \gamma_1^*|\big),$$
where the penalization term is constant over $\Gamma(\gamma_0^*,\gamma_1^*)$. Since $C(\gamma^*;\mu_0,\mu_1,\lambda) \le C(\gamma;\mu_0,\mu_1,\lambda)$, we have $C(\gamma^*;\gamma_0^*,\gamma_1^*) \le C(\gamma;\gamma_0^*,\gamma_1^*)$. Thus, $\gamma^*$ is optimal for $OT(\gamma_0^*,\gamma_1^*)$.
Similarly, for every $\gamma \in \Gamma_\le(\gamma_0^*,\mu_1)$, we have $C(\gamma;\mu_0,\mu_1,\lambda) = C(\gamma;\gamma_0^*,\mu_1,\lambda) + \lambda|\mu_0 - \gamma_0^*|$, and thus $\gamma^*$ is optimal for $OPT_\lambda(\gamma_0^*,\mu_1)$. Analogously, $\gamma^*$ is optimal for $OPT_\lambda(\mu_0,\gamma_1^*)$. ∎
Lemma D.2.
If $\gamma^* \in \Gamma_\le^*(\mu_0,\mu_1)$, then $d(\operatorname{supp}(\nu_0),\operatorname{supp}(\nu_1)) \ge \sqrt{2\lambda}$, where $\nu_0 = \mu_0 - \gamma_0^*$, $\nu_1 = \mu_1 - \gamma_1^*$, and $d$ is the Euclidean distance in $\mathbb{R}^d$.
Proof.
We will proceed by contradiction. Assume that $d(\operatorname{supp}(\nu_0),\operatorname{supp}(\nu_1)) < \sqrt{2\lambda}$. Then, there exist $\tilde x_0 \in \operatorname{supp}(\nu_0)$ and $\tilde x_1 \in \operatorname{supp}(\nu_1)$ such that $\|\tilde x_0 - \tilde x_1\| < \sqrt{2\lambda}$. Moreover, we can choose $r > 0$ such that $\nu_0(B(\tilde x_0,r)) > 0$, $\nu_1(B(\tilde x_1,r)) > 0$, and $\|x_0 - x_1\|^2 < 2\lambda$ for all $x_0 \in B(\tilde x_0,r)$, $x_1 \in B(\tilde x_1,r)$, where $B(\tilde x_i,r)$ denotes the ball in $\mathbb{R}^d$ of radius $r$ centered at $\tilde x_i$.
We will construct $\tilde\gamma \in \Gamma_\le(\mu_0,\mu_1)$ with transportation cost strictly less than $OPT_\lambda(\mu_0,\mu_1)$, leading to a contradiction. For $i = 0,1$, we denote by $\tilde\mu_i$ the probability measure given by $\frac{1}{\nu_i(B(\tilde x_i,r))}\nu_i$ restricted to $B(\tilde x_i,r)$. Let $\delta = \min\{\nu_0(B(\tilde x_0,r)), \nu_1(B(\tilde x_1,r))\}$. Now, we consider
$$\tilde\gamma := \gamma^* + \delta\,(\tilde\mu_0 \times \tilde\mu_1).$$
Then, $\tilde\gamma \in \Gamma_\le(\mu_0,\mu_1)$ because
$$\pi_{i\#}\tilde\gamma = \gamma_i^* + \delta\tilde\mu_i \le \gamma_i^* + \nu_i = \mu_i \quad \text{for } i = 0,1.$$
The partial transport cost of $\tilde\gamma$ is
$$C(\tilde\gamma;\mu_0,\mu_1,\lambda) = \int_{\Omega^2}\|x_0 - x_1\|^2\, d\tilde\gamma + \lambda\big(|\mu_0| + |\mu_1| - 2|\tilde\gamma|\big)$$
$$= \int_{\Omega^2}\|x_0 - x_1\|^2\, d\gamma^* + \delta\int_{\Omega^2}\|x_0 - x_1\|^2\, d(\tilde\mu_0 \times \tilde\mu_1) + \lambda\big(|\mu_0| + |\mu_1| - 2|\gamma^*| - 2\delta|\tilde\mu_0 \times \tilde\mu_1|\big)$$
$$= OPT_\lambda(\mu_0,\mu_1) + \delta\int_{B(\tilde x_0,r)\times B(\tilde x_1,r)}\|x_0 - x_1\|^2\, d(\tilde\mu_0 \times \tilde\mu_1) - 2\lambda\delta$$
$$< OPT_\lambda(\mu_0,\mu_1) + 2\lambda\delta - 2\lambda\delta = OPT_\lambda(\mu_0,\mu_1),$$
where the strict inequality uses that $\|x_0 - x_1\|^2 < 2\lambda$ on $B(\tilde x_0,r)\times B(\tilde x_1,r)$ and that $|\tilde\mu_0 \times \tilde\mu_1| = 1$. This contradicts the optimality of $\gamma^*$. ∎
Lemma D.3.
Using the notation and hypothesis of Proposition 3.1, for π‘ β ( 0 , 1 ) let π· π‘ := { π₯ : π£ π‘ β’ ( π₯ ) β 0 } . Then, πΎ π‘ β§ π 0 β‘ πΎ π‘ β§ π π β‘ 0 over π· π‘ .
Proof.
We will prove this for πΎ π‘ β§ π 0 . The other case is analogous. Suppose by contradiction that πΎ π‘ β§ π 0 is not the null measure. By Lemma D.1, we know that πΎ β is optimal for π β’ π β’ ( πΎ 0 β , πΎ 1 β ) . Since we are also assuming that πΎ is induced by a map π , i.e. πΎ β
( πΌ Γ π ) # β’ πΎ 0 β , then π is an optimal Monge map between πΎ 0 β and πΎ 1 β . Therefore, the map π π‘ : Ξ© β Ξ© is invertible (see for example [38, Lemma 5.29]). For ease of notation we will denote π ~ 0 := πΎ π‘ β§ π 0 as a measure in π· π‘ .
The idea of the proof is to exhibit a plan πΎ ~ with smaller π β’ π β’ π π cost than πΎ β , which will be a contradiction since πΎ β β Ξ β€ β β’ ( π 0 β’ π π ) . Let us define
πΎ ~ := πΎ β + ( πΌ , π β π π‘ β 1 ) # β’ π ~ 0 β ( πΌ , π ) # β’ ( π π‘ β 1 ) # β’ π ~ 0 .
Then $\tilde\gamma\in\Gamma_{\le}(\mu_0,\mu_1)$ since
$$p_{0\#}\tilde\gamma = p_{0\#}\gamma^* + \tilde\mu_0 - (T_t^{-1})_\#\tilde\mu_0 \le \gamma_0^* + \tilde\mu_0 \le \mu_0,$$
$$p_{1\#}\tilde\gamma = p_{1\#}\gamma^* + (T\circ T_t^{-1})_\#\tilde\mu_0 - (T\circ T_t^{-1})_\#\tilde\mu_0 = \gamma_1^* \quad (\text{and we know } \gamma_1^*\le\mu_1).$$
From the last equation we can also conclude that $|\tilde\gamma| = |\gamma_1^*| = |\gamma^*|$. Therefore, the partial transportation cost for $\tilde\gamma$ is
$$\int_{\Omega^2}\|x-y\|^2\,d\tilde\gamma(x,y) + \lambda\left(|\mu_0|+|\mu_1|-2|\tilde\gamma|\right)$$
$$= \underbrace{\int_{\Omega^2}\|x-y\|^2\,d\gamma^*(x,y) + \lambda\left(|\mu_0|+|\mu_1|-2|\gamma^*|\right)}_{=\,OPT_\lambda(\mu_0,\mu_1)} + \int_{D_t}\left(\|x - T(T_t^{-1}(x))\|^2 - \|T_t^{-1}(x) - T(T_t^{-1}(x))\|^2\right)d\tilde\mu_0(x)$$
$$< OPT_\lambda(\mu_0,\mu_1),$$
where in the last inequality we used that if $y = T_t^{-1}(x)$ we have
$$\|T_t(y) - T(y)\| = \|(1-t)y + tT(y) - T(y)\| = (1-t)\|y - T(y)\| < \|y - T(y)\| \quad \text{for } t\in(0,1). \qquad ∎$$
Lemma D.4.
Using the notation and hypothesis of Proposition 3.1, for $t\in(0,1)$ the vector-valued measures $v_t\,\mu_0$ and $v_t\,\mu_1$ are exactly zero.
Proof.
We prove this for $v_t\,\mu_0$; the other case is analogous. We recall that the measure $v_t\,\mu_0$ is determined by
$$\int_\Omega \Phi(x)\cdot v_t(x)\,d\mu_0(x)$$
for all measurable functions $\Phi:\Omega\to\mathbb{R}^n$, where $\Phi(x)\cdot v_t(x)$ denotes the usual dot product in $\mathbb{R}^n$ between the vectors $\Phi(x)$ and $v_t(x)$.
From Lemma D.3 we know that $\gamma_t\wedge\mu_0\equiv 0$ over $D_t$. Therefore, $\gamma_t$ and $\mu_0$ are mutually singular on that set. This implies that we can decompose $D_t$ into two disjoint sets $D_1, D_2$ such that $\gamma_t(D_1) = 0 = \mu_0(D_2)$. Since $v_t$ is a function defined $\gamma_t$-almost everywhere (up to sets of null measure with respect to $\gamma_t$), we can assume without loss of generality that $v_t\equiv 0$ on $D_1$.
Let $\Phi:\Omega\to\mathbb{R}^n$ be a measurable vector-valued function on $\Omega$. Using that $v_t\equiv 0$ on $\Omega\setminus D_t$ and on $D_1$, and that $\mu_0(D_2) = 0$, we obtain
$$\int_\Omega \Phi(x)\cdot v_t(x)\,d\mu_0(x) = \int_{\Omega\setminus D_t}\Phi(x)\cdot v_t(x)\,d\mu_0(x) + \int_{D_1}\Phi(x)\cdot v_t(x)\,d\mu_0(x) + \int_{D_2}\Phi(x)\cdot v_t(x)\,d\mu_0(x) = 0.$$
Since $\Phi$ was arbitrary, we conclude that $v_t\,\mu_0\equiv 0$. ∎
Lemma D.5.
Using the notation and hypothesis of Proposition 3.1, the measure $\bar\gamma_t = (1-t)\mu_0 + t\mu_1$ satisfies the equation
$$\begin{cases}\partial_t\bar\gamma + \nabla\cdot(\bar\gamma\, v) = f,\\ \bar\gamma_0 = \mu_0,\quad \bar\gamma_1 = \mu_1.\end{cases} \tag{50}$$
Proof.
From Lemma D.4 we have that $v_t\,\bar\gamma_t = (1-t)\,v_t\,\mu_0 + t\,v_t\,\mu_1 = 0$, and hence $\nabla\cdot(\bar\gamma\, v) = 0$. It is easy to see that $\bar\gamma$ satisfies the boundary conditions by replacing $t=0$ and $t=1$ in its definition. Also, $\partial_t\bar\gamma = \mu_1 - \mu_0$. Then, since $f_t = \mu_1 - \mu_0$ for every $t$, we get $\partial_t\bar\gamma + \nabla\cdot(\bar\gamma\, v) = \mu_1 - \mu_0 + 0 = f_t$. ∎
Proof of Proposition 3.1.
First, we address that $v:[0,1]\times\Omega\to\mathbb{R}^n$ is well defined. Indeed, by Lemma D.1, we have that $\gamma^* = (\mathrm{id}\times T)_\#\gamma_0^*$ is optimal for $OT(\gamma_0^*,\gamma_1^*)$. Thus, for each $(t,x)\in[0,1]\times\Omega$, by the so-called cyclical monotonicity property of the support of classical optimal transport plans, there exists at most one $x_0\in\operatorname{supp}(\gamma_0^*)$ such that $T_t(x_0) = x$; see [38, Lemma 5.29]. So, $v_t(x)$ in (30) is well defined by understanding its definition as
$$v_t(x) = \begin{cases} T(x_0) - x_0 & \text{if } x = T_t(x_0), \text{ for } x_0\in\operatorname{supp}(\gamma_0^*),\\ 0 & \text{elsewhere.}\end{cases}$$
Now, we check that $(\mu, v, f)$ defined in (29), (30), (31) is a solution for (26). In fact,
$$\mu_t = \gamma_t + \bar\gamma_t,$$
where $\gamma_t$ is given by (28), and $\bar\gamma_t$ is as in Lemma D.5. Then, from Section 2.2, $(\gamma, v)$ solves the continuity equation (5) since $\gamma^* = (I\times T)_\#\gamma_0^*\in\Gamma^*(\gamma_0^*,\gamma_1^*)$ (Lemma D.1); from its definition, $\gamma_t = (T_t)_\#\gamma_0^*$ coincides with (9) in this context, and $v_t$ coincides with (8) (the support of $v_t$ lies on the support of $\gamma_0^*$). On the other hand, from Lemma D.5, $\bar\gamma_t$ solves (50). Therefore, by linearity,
$$\partial_t\mu_t + \nabla\cdot(\mu_t v_t) = \underbrace{\partial_t\gamma_t + \nabla\cdot(\gamma_t v_t)}_{=\,0} + \underbrace{\partial_t\bar\gamma_t + \nabla\cdot(\bar\gamma_t v_t)}_{=\,f_t} = f_t.$$
Finally, by plugging $(\mu, v, f)$ into the objective function in (25), we have:
$$\int_{[0,1]\times\Omega}\|v\|^2\,d\mu + \lambda|f| = \int_{[0,1]\times\Omega}\|v_t\|^2\,d\gamma_t\,dt + \lambda|f|$$
$$= \int_\Omega \|T(x_0)-x_0\|^2\,d\gamma_0^*(x_0) + \lambda\left(|\mu_0-\gamma_0^*| + |\mu_1-\gamma_1^*|\right)$$
$$= \int_{\Omega^2}\|x_0-x_1\|^2\,d\gamma^*(x_0,x_1) + \lambda\left(|\mu_0|+|\mu_1|-2|\gamma^*|\right) = OPT_\lambda(\mu_0,\mu_1),$$
since $\gamma^*\in\Gamma^*_{\le}(\mu_0,\mu_1)$. So, this shows that $(\mu, v, f)$ is a minimum for (25).
The "moreover" part holds from the above identities, using that
$$\int_\Omega\|v_0\|^2\,d\gamma_0^*(x_0) + \lambda\left(|\mu_0|+|\mu_1|\right) = \int_\Omega\min\left(\|v_0\|^2,\,2\lambda\right)d\gamma_0^*(x_0) + \lambda\left(|\mu_0|+|\mu_1|\right) \quad \text{for } v_0(x_0) = T(x_0)-x_0,$$
since $\gamma^*$ is such that $\|x_0-x_1\|^2 < 2\lambda$ for all $(x_0,x_1)\in\operatorname{supp}(\gamma^*)$ [4, Lemma 3.2].
∎
Appendix E Proof of Theorem 3.5
The proof is similar to that of Lemma 2.1. To simplify the notation, in this proof we use $\mu^1$ (and $\hat\mu^1$) in place of $\mu^i$ (and $\hat\mu^i$).
E.1 Notation setup
We choose $\gamma^1\in\Gamma^*_{\le}(\mu_0,\mu^1)$. We will understand the second marginal distribution $p_{1\#}\gamma^1$ induced by $\gamma^1$ either as the vector $\gamma_1^1 := (\gamma^1)^T 1_{n_0}$ or as the measure $\sum_{i=1}^{n_1}(\gamma_1^1)_i\,\delta_{x_i^1}$, and, by abuse of notation, we will write $p_{1\#}\gamma^1 = (\gamma^1)^T 1_{n_0} = \gamma_1^1$ (analogously for $p_{0\#}\gamma^1$).
Let $\hat\gamma^1 := \operatorname{diag}(\hat p_1^1,\ldots,\hat p_{n_0}^1)$ denote the transportation plan induced by the mapping $x_j^0\mapsto \hat x_j^1$. Let
$$D := \left\{ j\in\{1,\ldots,n_0\} : \hat p_j^1 \neq 0 \right\}.$$
We select $\hat\gamma\in\Gamma_{\le}(\mu_0,\hat\mu^1)$. With a slight abuse of notation, we define
$$\gamma := \hat\gamma\,(\hat\gamma^1)^{-1}\,\gamma^1,$$
where $(\hat\gamma^1)^{-1}\in\mathbb{R}^{n_0\times n_0}$ is the diagonal matrix with
$$\left((\hat\gamma^1)^{-1}\right)_{jj} = \begin{cases} 1/\hat p_j^1 & \text{if } \hat p_j^1\neq 0,\\ 0 & \text{elsewhere,}\end{cases} \qquad \forall\, j\in[1:n_0].$$
The goal is to show
$$C(\hat\gamma^1;\mu_0,\hat\mu^1,\lambda) \le C(\hat\gamma;\mu_0,\hat\mu^1,\lambda). \tag{51}$$
E.2 Connecting $OPT_\lambda(\mu_0,\mu^1)$ and $OPT_\lambda(\mu_0,\hat\mu^1)$
First, we claim $\gamma\in\Gamma_{\le}(\mu_0,\gamma_1^1)\subseteq\Gamma_{\le}(\mu_0,\mu^1)$. Indeed, we have
$$\gamma\, 1_{n_1} = \hat\gamma(\hat\gamma^1)^{-1}\gamma^1 1_{n_1} = \hat\gamma(\hat\gamma^1)^{-1}\hat p^1 = \hat\gamma\, 1_D = \hat\gamma_0 \le \mu_0, \tag{52}$$
$$\gamma^T 1_{n_0} = (\gamma^1)^T(\hat\gamma^1)^{-1}\hat\gamma^T 1_{n_0} \le (\gamma^1)^T(\hat\gamma^1)^{-1}\hat p^1 = (\gamma^1)^T 1_D = \gamma_1^1, \tag{53}$$
where $1_D\in\mathbb{R}^{n_0}$ is defined by $(1_D)_j = 1$ if $j\in D$ and $(1_D)_j = 0$ elsewhere.
In (52), the equality $\hat\gamma\, 1_D = \hat\gamma_0$ follows from the fact that, since $\hat\gamma^T 1_{n_0}\le\hat p^1$, we have $\hat\gamma_{j,j'} = 0$ for all $j'\notin D$, $j\in\{1,\ldots,n_0\}$. In (53), the inequality follows from the fact that $\hat\gamma\in\Gamma_{\le}(\mu_0,\hat\mu^1)$.
Note that from (52) we also have $\gamma_0 = \hat\gamma_0$.
We compute the transportation costs induced by $\gamma^1$, $\gamma$, $\hat\gamma^1$, $\hat\gamma$:
$$C(\gamma^1;\mu_0,\mu^1,\lambda) = \sum_{i=1}^{n_1}\sum_{j\in D}\|x_j^0-x_i^1\|^2\,\gamma_{j,i}^1 + \lambda\left(|\mu_0|+|\mu^1|-2|\gamma^1|\right)$$
$$= \sum_{j\in D}\|x_j^0\|^2(\gamma_0^1)_j + \sum_{i=1}^{n_1}\|x_i^1\|^2(\gamma_1^1)_i - 2\sum_{j\in D}\sum_{i=1}^{n_1} x_j^0\cdot x_i^1\,\gamma_{j,i}^1 + \lambda\left(|\mu_0|+|\mu^1|-2|\gamma^1|\right).$$
Similarly,
$$C(\hat\gamma^1;\mu_0,\hat\mu^1,\lambda) = \sum_{j\in D}\|x_j^0\|^2(\hat\gamma_0^1)_j + \sum_{j'\in D}\|\hat x_{j'}^1\|^2(\hat\gamma_1^1)_{j'} - 2\sum_{j\in D}\sum_{j'\in D} x_j^0\cdot\hat x_{j'}^1\,\hat\gamma_{j,j'}^1 + \lambda\left(|\mu_0|+|\hat\mu^1|-2|\hat\gamma^1|\right)$$
$$= \sum_{j\in D}\|x_j^0\|^2(\hat\gamma_0^1)_j + \sum_{j'\in D}\|\hat x_{j'}^1\|^2(\hat\gamma_1^1)_{j'} - 2\sum_{j\in D}\sum_{j'\in D}\sum_{i=1}^{n_1} x_j^0\cdot x_i^1\,\frac{\gamma_{j',i}^1\,\hat\gamma_{j,j'}^1}{\hat\gamma_{j',j'}^1} + \lambda\left(|\mu_0|+|\hat\mu^1|-2|\hat\gamma^1|\right)$$
$$= \sum_{j\in D}\|x_j^0\|^2(\hat\gamma_0^1)_j + \sum_{j'\in D}\|\hat x_{j'}^1\|^2(\hat\gamma_1^1)_{j'} - 2\sum_{j\in D}\sum_{i=1}^{n_1} x_j^0\cdot x_i^1\,\gamma_{j,i}^1 + \lambda\left(|\mu_0|+|\hat\mu^1|-2|\hat\gamma^1|\right).$$
Therefore, we obtain
$$C(\hat\gamma^1;\mu_0,\hat\mu^1,\lambda) = C(\gamma^1;\mu_0,\gamma_1^1,\lambda) - \sum_{i=1}^{n_1}\|x_i^1\|^2(\gamma_1^1)_i + \sum_{j'\in D}\|\hat x_{j'}^1\|^2(\hat\gamma_1^1)_{j'} + \lambda\left(|\hat\mu^1|-|\mu^1|\right). \tag{54}$$
And also,
$$C(\gamma;\mu_0,\mu^1,\lambda) = \sum_{j=1}^{n_0}\|x_j^0\|^2(\gamma_0)_j + \sum_{i=1}^{n_1}\|x_i^1\|^2(\gamma_1)_i - 2\sum_{j=1}^{n_0}\sum_{i=1}^{n_1}\sum_{j'\in D} x_j^0\cdot x_i^1\,\frac{\hat\gamma_{j,j'}\,\gamma_{j',i}^1}{\hat\gamma_{j',j'}^1} + \lambda\left(|\mu_0|+|\mu^1|-2|\gamma|\right),$$
$$C(\hat\gamma;\mu_0,\hat\mu^1,\lambda) = \sum_{j=1}^{n_0}\|x_j^0\|^2(\hat\gamma_0)_j + \sum_{j'\in D}\|\hat x_{j'}^1\|^2(\hat\gamma_1)_{j'} - 2\sum_{j=1}^{n_0}\sum_{j'\in D}\sum_{i=1}^{n_1} x_j^0\cdot x_i^1\,\frac{\hat\gamma_{j,j'}\,\gamma_{j',i}^1}{\hat\gamma_{j',j'}^1} + \lambda\left(|\mu_0|+|\hat\mu^1|-2|\hat\gamma|\right).$$
Thus, we obtain
$$C(\hat\gamma;\mu_0,\hat\mu^1,\lambda) = C(\gamma;\mu_0,\gamma_1^1,\lambda) - \sum_{i=1}^{n_1}\|x_i^1\|^2(\gamma_1)_i + \sum_{j'\in D}\|\hat x_{j'}^1\|^2(\hat\gamma_1)_{j'} + \lambda\left(|\hat\mu^1|-|\mu^1|\right). \tag{55}$$
Combining (54) and (55), we have
$$C(\hat\gamma^1;\mu_0,\hat\mu^1,\lambda) - C(\hat\gamma;\mu_0,\hat\mu^1,\lambda) = C(\gamma^1;\mu_0,\gamma_1^1,\lambda) - C(\gamma;\mu_0,\gamma_1^1,\lambda) + \sum_{i=1}^{n_1}\|x_i^1\|^2\left((\gamma_1)_i-(\gamma_1^1)_i\right) + \sum_{j\in D}\|\hat x_j^1\|^2\left((\hat\gamma_1^1)_j-(\hat\gamma_1)_j\right)$$
$$\le \sum_{i=1}^{n_1}\|x_i^1\|^2\left((\gamma_1)_i-(\gamma_1^1)_i\right) - \sum_{j\in D}\|\hat x_j^1\|^2\left((\hat\gamma_1)_j-(\hat\gamma_1^1)_j\right), \tag{56}$$
where the inequality holds since $\gamma^1$ is optimal for $OPT_\lambda(\mu_0,\gamma_1^1)$.
E.3 Verification of the inequality
It remains to show that (56) is at most $0$. By Jensen's inequality, for each $j\in D$, we obtain
$$\|\hat x_j^1\|^2 \le \sum_{i=1}^{n_1}\frac{\gamma_{j,i}^1}{\hat p_j^1}\,\|x_i^1\|^2.$$
Combined with the fact that $\hat\gamma_1\le\hat\gamma_1^1$, we obtain:
$$(56) \le \sum_{i=1}^{n_1}\|x_i^1\|^2\left((\gamma_1)_i-(\gamma_1^1)_i\right) - \sum_{j\in D}\sum_{i=1}^{n_1}\frac{\gamma_{j,i}^1}{\hat p_j^1}\,\|x_i^1\|^2\left((\hat\gamma_1)_j-(\hat\gamma_1^1)_j\right)$$
$$= \sum_{i=1}^{n_1}\|x_i^1\|^2\left((\gamma_1)_i-(\gamma_1^1)_i-\sum_{j\in D}\frac{\gamma_{j,i}^1\left((\hat\gamma_1)_j-\hat p_j^1\right)}{\hat p_j^1}\right). \tag{57}$$
Pick $i\in[1:n_1]$; we have:
$$(\gamma_1)_i-(\gamma_1^1)_i-\sum_{j\in D}\frac{\gamma_{j,i}^1\left((\hat\gamma_1)_j-\hat p_j^1\right)}{\hat p_j^1} = (\gamma_1)_i - (\gamma_1^1)_i - \sum_{j\in D}\frac{\gamma_{j,i}^1(\hat\gamma_1)_j}{\hat p_j^1} + \sum_{j\in D}\gamma_{j,i}^1$$
$$= \sum_{j'=1}^{n_0}\gamma_{j',i} - \sum_{j\in D}\frac{\gamma_{j,i}^1(\hat\gamma_1)_j}{\hat p_j^1} = \sum_{j'=1}^{n_0}\sum_{j\in D}\frac{\hat\gamma_{j',j}\,\gamma_{j,i}^1}{\hat p_j^1} - \sum_{j\in D}\sum_{j'=1}^{n_0}\frac{\gamma_{j,i}^1\,\hat\gamma_{j',j}}{\hat p_j^1} = 0, \tag{58}$$
where (58) holds by the fact that for each $j'\in[1:n_0]$ and $i\in[1:n_1]$ we have
$$\gamma_{j',i} = \left(\hat\gamma(\hat\gamma^1)^{-1}\right)[j',:]\;\gamma^1[:,i] = \sum_{j\in D}\frac{\hat\gamma_{j',j}\,\gamma_{j,i}^1}{\hat p_j^1}.$$
Thus (57) equals $0$. Therefore, (56) is upper bounded by $0$, and we complete the proof.
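The inequality (51) lends itself to a numerical sanity check. The sketch below is our own illustration (not code from the paper's repository): it solves the discrete OPT problem as a linear program via the standard reformulation $\min_\gamma \langle C-2\lambda, \gamma\rangle + \lambda(|a|+|b|)$ over plans with marginals bounded by the mass vectors, builds the barycentric projection $\hat\mu^1$ and the diagonal plan $\hat\gamma^1$, and checks that $\hat\gamma^1$ beats randomly drawn feasible competitors $\hat\gamma\in\Gamma_{\le}(\mu_0,\hat\mu^1)$.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n0, n1, lam = 4, 5, 5.0
X0 = rng.normal(size=(n0, 2))
X1 = rng.normal(size=(n1, 2)) + 1.0
a = np.full(n0, 1.0 / n0)                        # masses of mu_0
b = np.full(n1, 1.0 / n1)                        # masses of mu^1
C = ((X0[:, None] - X1[None, :]) ** 2).sum(-1)   # squared-distance cost

# OPT_lambda(mu_0, mu^1) as an LP: minimize <C - 2*lam, gamma> subject to
# gamma >= 0, gamma @ 1 <= a, gamma.T @ 1 <= b (constant lam*(|a|+|b|) added back).
A = np.zeros((n0 + n1, n0 * n1))
for j in range(n0):
    A[j, j * n1:(j + 1) * n1] = 1.0              # row-sum constraints
for i in range(n1):
    A[n0 + i, i::n1] = 1.0                       # column-sum constraints
res = linprog((C - 2 * lam).ravel(), A_ub=A, b_ub=np.concatenate([a, b]),
              bounds=(0, None), method="highs")
gamma1 = res.x.reshape(n0, n1)                   # an optimal partial plan

def cost(g, Cm, m0, m1):
    return (Cm * g).sum() + lam * (m0 + m1 - 2 * g.sum())

# Barycentric projection mu_hat^1 and the induced diagonal plan gamma_hat^1.
p_hat = gamma1.sum(axis=1)
safe = np.where(p_hat > 0, p_hat, 1.0)[:, None]
X_hat = np.where(p_hat[:, None] > 0, gamma1 @ X1 / safe, X0)
C_hat = ((X0[:, None] - X_hat[None, :]) ** 2).sum(-1)
lhs = cost(np.diag(p_hat), C_hat, a.sum(), p_hat.sum())

# Per (51), any feasible competitor should cost at least as much.
for _ in range(50):
    G = rng.uniform(size=(n0, n0))
    s = min((a / np.maximum(G.sum(1), 1e-12)).min(),
            (p_hat / np.maximum(G.sum(0), 1e-12)).min())
    assert lhs <= cost(s * G, C_hat, a.sum(), p_hat.sum()) + 1e-8
```

Random product-like plans are of course a weak certificate of optimality; the point is only to make the statement of Theorem 3.5 concrete in the discrete matrix notation used above.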
Appendix F Proof of Theorem 3.6
Proof.
Without loss of generality, we assume that $\gamma^i\in\Gamma^*_{\le}(\mu_0,\mu^i)$ is induced by a one-to-one map. We can suppose this because we end up with an $n_0$-point representation of $\mu^i$ when we compute $\hat\mu^i$. For example, if two points $x_j^0$ and $x_{j'}^0$ are mapped to the same point $T(x_j^0) = T(x_{j'}^0)$, when performing the barycentric projection we can split this point into two copies with the corresponding labels. (The moral is that the labels matter, rather than where the mass is located.) Therefore, $\gamma^i\in\mathbb{R}^{n_0\times n_i}$ is such that each row and column contains at most one positive entry, and all other entries are zero. Let $\hat\gamma := \operatorname{diag}(\hat p_1^i,\ldots,\hat p_{n_0}^i)$. Then, by the definition of $\hat\mu^i$, we have $C(\gamma^i;\mu_0,\mu^i,\lambda) = C(\hat\gamma;\mu_0,\hat\mu^i,\lambda)$. Thus,
$$OPT_\lambda(\mu_0,\mu^i) = C(\gamma^i;\mu_0,\mu^i,\lambda) = C(\gamma^i;\mu_0,\gamma_1^i,\lambda) + \lambda\left(|\mu^i|-|\gamma_1^i|\right) = C(\hat\gamma;\mu_0,\hat\mu^i,\lambda) + \lambda\left(|\mu^i|-|\hat\mu^i|\right) = OPT_\lambda(\mu_0,\hat\mu^i) + \lambda\left(|\mu^i|-|\hat\mu^i|\right),$$
where the last equality holds from Theorem 3.5. This concludes the proof.
Moreover, from the above identities (for discrete measures) we can express
$$OPT_\lambda(\mu_0,\mu^i) = \sum_{j=1}^{n_0}\hat p_j^i\,\|x_j^0-\hat x_j^i\|^2 + \lambda\,|\mu_0-\hat p^i| + \lambda\left(|\mu^i|-|\hat\mu^i|\right),$$
where $\mu_0$ and $\hat p^i$ are understood as mass vectors on the common support $\{x_j^0\}$.
∎
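The closed-form identity above is straightforward to evaluate once the embedding is available. As a minimal sketch (our own helper with hypothetical names, treating the masses as plain vectors aligned with the reference support):

```python
import numpy as np

def opt_from_embedding(p_hat, u, mass0, mass_i, lam):
    """Evaluate OPT_lambda(mu_0, mu^i) from the LOPT embedding of mu^i:
    transported cost plus lambda times the destroyed and created mass."""
    transported = (p_hat * (u ** 2).sum(axis=1)).sum()  # sum_j p_hat_j ||u_j||^2
    destroyed = mass0 - p_hat.sum()    # mass of mu_0 left untransported
    created = mass_i - p_hat.sum()     # mass of mu^i not covered by the plan
    return transported + lam * (destroyed + created)
```

For instance, a single reference point of unit mass displaced by $u = 2$ against a target of total mass $1.5$ yields $1\cdot 4 + \lambda\cdot(0 + 0.5)$.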
Appendix G Proof of Proposition 3.7
Proof.
The result follows from (40) and (35). Indeed, we have
$$LOPT_{\lambda,\mu_0}(\hat\mu^i,\hat\mu^j) = \|u^i-u^j\|_{\hat\mu^i\wedge\hat\mu^j,\,2\lambda} + \lambda\,|\hat\mu^i-\hat\mu^j|$$
and
$$LOPT_{\lambda,\mu_0}(\mu^i,\mu^j) = \|u^i-u^j\|_{\hat\mu^i\wedge\hat\mu^j,\,2\lambda} + \lambda\,|\hat\mu^i-\hat\mu^j| + \lambda\left(|\mu^i|-|\hat\mu^i|\right) + \lambda\left(|\mu^j|-|\hat\mu^j|\right),$$
since the LOPT barycentric projection of $\mu^i$ is basically $\mu^i$ without the mass that needs to be created from the reference (analogously for $\mu^j$). ∎
Appendix HApplications
For completeness, we will expand on the experiments and discussion presented in Section 4, as well as on Figure 1 which contrasts the HK technique with OPT from the point of view of interpolation of measures.
First, we recall that similar to LOT [41] and [34], the goal of LOPT is not exclusively to approximate OPT, but to propose new transport-based metrics (or discrepancy measures) that are computationally less expensive and easier to work with than OT or OPT, specifically when many measures must be compared.
Also, one of the main advantages of the linearization step is that it allows us to embed sets of probability (resp. positive finite) measures into a linear space (the tangent space $\mathcal{T}_{\mu_0}$). Moreover, it does so in a way that allows us to use the $\mathcal{T}_{\mu_0}$-metric on that space as a proxy (or replacement) for more complicated transport metrics while preserving the natural properties of transport theory. As a consequence, data analysis can be performed with the Euclidean metric in a simple vector space.
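In the discrete case, the embedding of a measure amounts to a barycentric projection of a partial plan against the reference support. The following is our own minimal sketch (the function name `lopt_embedding` and the array layout are assumptions, not the repository's API):

```python
import numpy as np

def lopt_embedding(gamma, X0, X1):
    """Barycentric projection of a partial plan gamma (n0 x n1) from a
    reference support X0 (n0 x d) onto a target support X1 (n1 x d).
    Returns the transported mass per reference point and the tangent
    vectors u_j = x_hat_j - x_j^0 (zero where no mass is matched)."""
    p_hat = gamma.sum(axis=1)                      # row sums: transported mass
    safe = np.where(p_hat > 0, p_hat, 1.0)[:, None]
    X_hat = np.where(p_hat[:, None] > 0, gamma @ X1 / safe, X0)
    return p_hat, X_hat - X0
```

The pair `(p_hat, u)` is the vector-space representation on which Euclidean data analysis (PCA, clustering, interpolation) is then performed.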
H.1 Approximation of OPT Distance
For a better understanding of the errors plotted in Figure 2, Figure 6 shows the histograms of the relative errors for different values of $\lambda$ and each number of measures $K = 5, 10, 15$.
Figure 6: Histogram of relative errors (depicted in Figure 2) between $OPT_\lambda$ and $LOPT_{\lambda,\mu_0}$. (Number of samples $N = 500$. Number of repetitions: 10. Dimension: 2, i.e., measures on $\mathbb{R}^2$.)
As said in Section 4, we recall that for these experiments we created $K$ point sets of size $N$ from $K$ different Gaussian distributions in $\mathbb{R}^2$. In particular, $\mu_i\sim\mathcal{N}(m_i, I)$, where $m_i$ is randomly selected such that $\|m_i\| = 3$ for $i = 1,\ldots,K$. For the reference, we picked an $N$-point representation of $\mu_0\sim\mathcal{N}(\bar m, I)$ with $\bar m = \sum_i m_i/K$. For Figures 2 and 6, the sample size $N$ was set to 500.
In what follows, we include tests for $N = 200, 250, 300, 350, \ldots, 900, 950, 1000$ and $K = 2, 4$. For each $(N, K)$, we repeated each experiment 10 times. The relative errors are shown in Figure 7. For large $\lambda$, most mass is transported, $OT(\mu_i,\mu_j)\approx OPT_\lambda(\mu_i,\mu_j)$, the performance of LOPT is close to that of LOT, and the relative error is small. For small $\lambda$, almost no mass is transported, $OPT_\lambda(\mu_i,\mu_j)\approx\lambda(|\mu_i|+|\mu_j|)\approx\lambda(|\mu_i|-|\hat\mu_i|+|\mu_j|-|\hat\mu_j|)\approx LOPT_{\lambda,\mu_0}(\mu_i,\mu_j)$, and we still have a small error. In between, e.g., $\lambda = 5$, we have the largest relative error. Similar results were obtained by setting the reference as the OT barycenter.
Figure 7: The average relative error between $OPT_\lambda$ and $LOPT_{\lambda,\mu_0}$ over all pairs of discrete measures $\mu_i$ and $\mu_j$.
H.2 PCA analysis
For problems where pairwise comparisons between $K$ distributions are needed, in the classical optimal (partial) transport setting we have to solve $\binom{K}{2}$ OT (resp. OPT) problems. In the LOT (resp. LOPT) framework, however, one only needs to solve $K$ OT (resp. OPT) problems (matching each distribution with a reference measure).
One ubiquitous example is clustering or classification of measures. In this case, the number of target measures $K$ representing different samples is usually very large. For the particular case of data analysis on Point Cloud MNIST, after using PCA in the embedding space (see Figure 5), it can be observed that the LOT framework is a natural option for separating different digits, but the equal-mass requirement is too restrictive in the presence of noise. LOPT performs better since it does not have this restriction. This is one example where the number of measures, $K = 900$, is much larger than 2.
H.3 Point Cloud Interpolation
Here, we add the results of the experiments mentioned in Section 4, which complete Figure 4; in the new Figure 8, Figure 4 corresponds to subfigure 8(b). We conducted experiments using three digits (0, 1, and 9) from the Point Cloud MNIST dataset, with 300 point sets per digit. We used the LOPT embedding to calculate and visualize the interpolation between pairs of digits, choosing the barycenter of 0, 1, and 9 as the reference. To avoid redundancy, in the main paper we only demonstrated the interpolation between 0 and 9. The remaining plots for the other digit pairs, using different noise levels $\eta$, are included here for completeness. Later, in Section H.4, Figure 11 adds the plots for the HK technique and its linearized version.
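Interpolation in the LOPT framework is linear in the embedding space. A minimal sketch of this step, under our own assumptions about the embedding layout (masses `p` and tangent vectors `u` per reference point):

```python
import numpy as np

def lopt_interpolate(X0, p_i, u_i, p_j, u_j, t):
    """Sketch: displacement interpolation in the embedding space. The common
    mass p_i ∧ p_j travels along the mixed tangent vector (1-t)u^i + t u^j."""
    p_t = np.minimum(p_i, p_j)             # mass both embeddings transport
    X_t = X0 + (1.0 - t) * u_i + t * u_j   # interpolated support points
    return X_t, p_t
```

Sweeping $t\in\{0, 0.25, 0.5, 0.75, 1\}$ with this rule produces frames analogous to those in Figure 8.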
(a) $\eta = 0$ (b) $\eta = 0.5$ (c) $\eta = 0.75$ (d) $\eta = 0$ (e) $\eta = 0.5$ (f) $\eta = 0.75$. Figure 8: Interpolation between two point clouds at different times $t\in\{0, 0.25, 0.5, 0.75, 1\}$. Different values of noise $\eta$ were considered for the different interpolation approaches (OT, LOT, OPT, LOPT). For LOT and LOPT, the reference measure is the barycenter of the point clouds 0, 1, and 9 with no noise. Top row: interpolation between digits 0 and 9. Bottom row: interpolation between digits 0 and 1.
H.4 Preliminary comparisons between LHK and LOPT
The contrasts between the Linearized Hellinger-Kantorovich (LHK) [10] and LOPT approaches come from the differences between the HK (Hellinger-Kantorovich) and OPT distances. The main issue is that, for HK, the transported portion of the mass does not follow an OT-like geodesic along which mass is preserved: HK changes the mass as it transports it, while OPT preserves it.
This issue is depicted in Figure 1. In that figure, both the initial (blue) and final (purple) distributions are unit-mass delta measures at different locations. For HK, the mass decreases and then increases while being transported; for OPT, it remains constant.
To illustrate this point better, we incorporate Figure 9, which is the two-dimensional analog of Figure 1.
Figure 9: HK vs. OPT interpolation between two delta measures of unit mass located at different positions. Two scenarios are shown for each transport framework, varying the respective parameters ($s$ for HK, $\lambda$ for OPT). Blue circles stand for the cases when the geodesic transports mass. Triangular shapes represent destruction and creation of mass: triangles pointing down (orange) indicate the locations where mass will be destroyed, and triangles pointing up (green) indicate the locations where mass will be created. On top of that, shadows are added to emphasize the change of mass between the initial and final configurations. The top two plots exhibit two extreme cases of the HK geodesic: when $s$ is small, everything is created/destroyed; when $s$ is large, everything is transported without mass preservation. The bottom two plots show the two analogous cases for OPT geodesics: when $\lambda$ is small we observe creation/destruction; when $\lambda$ is large we have mass-preserving transportation. The intermediate cases are treated in the following.
In addition, in Figure 10 we compare not only HK and OPT, but also LHK and LOPT. The top subfigure 10(a) shows the measures $\mu_1$ (blue dots) and $\mu_2$ (orange crosses) to be interpolated with the different techniques in subfigures 10(b) and 10(c). In 10(a) we also plot the reference $\mu_0$ (green triangles), which is used only in experiment 10(c). The measures $\mu_1$ and $\mu_2$ are interpreted as follows: the three masses in the interior, forming a triangle, can be considered the signal, and the two masses on the corners can be considered noise. That is, we can assume they are just two noisy point cloud representations of the same shifted distribution. We want to transport the three masses in the middle without affecting their mass and relative positions too much. Subfigure 10(b) shows HK and OPT interpolation between the two measures $\mu_1$ and $\mu_2$. In the HK case, we notice not only how the masses vary, but also how their relative positions change, yielding a very different configuration at the end of the interpolation. OPT instead returns a more natural interpolation because of the mass preservation of the transported portion and the decoupling between transport and destruction/creation of mass. Finally, subfigure 10(c) shows the interpolation of $\mu_1$ and $\mu_2$ for LHK and LOPT. The reference measure $\mu_0$ is the same for both, and we can see a denoising effect due to the fact that, for the three measures, the mass is concentrated in the triangular region in the center. However, while mass- and structure-preserving transportation can be seen for LOPT, for LHK the shape of the configuration changes.
On top of that, as is often the case in quantization, point cloud representations of measures are given as samples with uniform mass. OPT/LOPT interpolation does not change the mass of each transported point. Therefore, the intermediate steps of an algorithm using OPT/LOPT transport benefit from preserving the same type of representation, that is, a uniform set of points.
(a) Plot of two measures ($\mu_1$ as blue circles and $\mu_2$ as orange crosses, where each point has mass one) and a reference measure $\mu_0$ (concentrated on the three green triangular locations, unit mass each). (b) HK and OPT interpolation between the two measures $\mu_1$ and $\mu_2$ at different times. The color code is the same as in Figure 9. (c) LHK and LOPT interpolation between the two measures $\mu_1$ and $\mu_2$ at different times, using the same reference $\mu_0$ for both. Figure 10: HK vs. OPT and LHK vs. LOPT.
For comparison on point cloud interpolation using MNIST, we include Figure 11, which illustrates OPT, LOPT, HK, and LHK interpolation for the digit pairs (0,1) and (0,9). In the visualizations, the size of each circle is proportional to the amount of mass at each location.
(a) Samples of digits 0, 1, and 9 from the MNIST dataset. Top row: clean point cloud. Bottom row: 50% noise. (b) Noise level $\eta = 0$. (c) Noise level $\eta = 0.5$. (d) Noise level $\eta = 0$. (e) Noise level $\eta = 0.5$. Figure 11: Point cloud interpolation using all the unbalanced transportation techniques: HK, OPT, LHK, and LOPT.
However, we do not claim that the presented LOPT tool is a one-size-fits-all solution. We are working on the subtle differences between LHK and LOPT and expect to have a complete and clear picture in the future. The aim of this article is to present a new tool with an intuitive introduction and motivation so that the OT community can benefit from it.
H.5 Barycenter computation
Can the linear embedding technique be used to compute the barycenter of a set of measures, e.g., by computing the barycenter of the embeddings and performing an inversion to the original space?
One can first calculate the mean of the embedded measures and recover a measure from this mean. The recovered measure is not necessarily the OPT barycenter; however, one can repeat this process and obtain better approximations of the barycenter. Similar to LOT, we have numerically observed that this process converges to a measure close to the barycenter. However, there are several technical considerations that require close attention. For instance, the choice of $\lambda$ and the choice of the initial reference measure are critical in this process.
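One iteration of the process just described can be sketched as follows. This is our own schematic (the function name and data layout are assumptions): average the embeddings of the input measures and push the reference support forward along the mean tangent field; the recovered measure then serves as the reference for the next iteration.

```python
import numpy as np

def lopt_barycenter_step(X0, embeddings):
    """One fixed-point step: average the LOPT embeddings (p_hat_k, u_k) of the
    input measures and push the reference support along the mean tangent field."""
    p_bar = np.mean([p for p, _ in embeddings], axis=0)   # mean masses
    u_bar = np.mean([u for _, u in embeddings], axis=0)   # mean tangent vectors
    return X0 + u_bar, p_bar   # support and masses of the recovered measure
```

Re-embedding the input measures against the recovered measure and repeating this step is the iteration whose convergence behavior is discussed above.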
Figure 12: Depiction of barycenters between digits 0 and 1, and between 0 and 9, using the LOPT technique. Left panel: original measures (point clouds) $\mu_1$ (digit 0), $\mu_2$ (digit 1), and $\mu_3$ (digit 9), with no noise on the top left panel ($\eta = 0$) and corrupted by noise on the bottom left panel (noise level $\eta = 0.5$). Right panel: the first row is the result when both the initial measure and the data $\mu_1, \mu_2, \mu_3$ are clean; the second row is the result for a clean initial measure and noise-corrupted data; the third row is the result for a noise-corrupted initial measure and data.