
A Convex Optimization Framework for Regularized Geodesic Distances

Michal Edelstein, Technion - Israel Institute of Technology, Haifa, Israel ([email protected])
Nestor Guillen, Texas State University, San Marcos, TX, USA ([email protected])
Justin Solomon, Massachusetts Institute of Technology (MIT), Cambridge, MA, USA ([email protected])
Mirela Ben-Chen, Technion - Israel Institute of Technology, Haifa, Israel ([email protected])

arXiv:2305.13101v1 [cs.GR] 22 May 2023

Figure 1: Geodesic distances (a) may not have desired properties such as smoothness. We present a general framework for
regularized geodesic distances. Shown here are three examples of regularizers: (b,c) smoothness, (d) alignment to a vector
field, and (e) boundary invariance.
ABSTRACT
We propose a general convex optimization problem for computing regularized geodesic distances. We show that under mild conditions on the regularizer the problem is well posed. We propose three different regularizers and provide analytical solutions in special cases, as well as corresponding efficient optimization algorithms. Additionally, we show how to generalize the approach to the all-pairs case by formulating the problem on the product manifold, which leads to symmetric distances. Our regularized distances compare favorably to existing methods in terms of robustness and ease of calibration.

ACM Reference Format:
Michal Edelstein, Nestor Guillen, Justin Solomon, and Mirela Ben-Chen. 2023. A Convex Optimization Framework for Regularized Geodesic Distances. In Special Interest Group on Computer Graphics and Interactive Techniques Conference Conference Proceedings (SIGGRAPH '23 Conference Proceedings), August 6–10, 2023, Los Angeles, CA, USA. ACM, New York, NY, USA, 11 pages. https://doi.org/10.1145/3588432.3591523

CCS CONCEPTS
• Computing methodologies → Shape analysis.

KEYWORDS
geodesic distance, convex optimization, triangle meshes

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s).
SIGGRAPH '23 Conference Proceedings, August 6–10, 2023, Los Angeles, CA, USA
© 2023 Copyright held by the owner/author(s).
ACM ISBN 979-8-4007-0159-7/23/08.
https://doi.org/10.1145/3588432.3591523

1 INTRODUCTION
Distance computation is a central task in shape analysis. Distances are required for many downstream geometry processing applications, including shape correspondence, shape descriptors, and remeshing. In many cases, however, exact geodesic distances are not required, and a distance-like function suffices. Moreover, it is often required to regularize the distance-like function to improve the performance of a downstream application.

The geometry processing community has proposed myriad methods for computing geodesic distances [Crane et al. 2020], including some regularized distances [Crane et al. 2013; Solomon et al. 2014]. However, a unified framework, including a controlled and easily calibratable approach to regularization, is still missing.

We propose a flexible convex optimization framework for computing regularized geodesic distances. We show that under relatively mild conditions, our formulation has a minimizer that converges to the geodesic distance as the regularization weight vanishes. Furthermore, we propose a variety of regularizers, demonstrate their applicability, and provide corresponding efficient optimization algorithms. Finally, we formulate the all-pairs problem as a special case of our framework on the product manifold. This formulation has the additional advantage that the resulting regularized distances are symmetric with respect to swapping the source and target points.

1.1 Related Work
The work on geodesic distances is vast, and a full review is out of scope. See the recent surveys [Crane et al. 2020; Peyré et al. 2010].

Geodesic Distances. Some approaches (e.g., MMP [Mitchell et al. 1987], its extension [Surazhsky et al. 2005], VTP [Qin et al. 2016], and others) compute the exact polyhedral geodesic distance on a triangle mesh. Other methods, e.g., Fast Marching [Kimmel and Sethian 1998], take a variational approach and compute approximate geodesic distances. More recently, convex optimization approaches have been suggested for computing approximate geodesic distances [Belyaev and Fayolle 2020, 2015] and for computing the cut locus [Générau et al. 2022a,b]. Our approach is also framed as a convex optimization problem, but incorporates an additional general regularization term.

Regularization. Exact geodesic distances have some shortcomings in applications; e.g., the geodesic distance from a point p is not smooth near the cut locus of p. The heat method [Crane et al. 2013] computes smoothed geodesic distances, where the smoothing is controlled by a time parameter. The earth mover's distance (EMD) [Solomon et al. 2014] can also be used to compute geodesic distances, optionally smoothed by projecting on a reduced spectral basis. Compared to the heat and EMD methods, our framework allows for more direct control of the smoothness parameter. For triangle meshes, another option is to compute graph-based distances on the graph of the triangulation with a Dirichlet or Hessian regularization [Cao et al. 2020]. This approach is, however, triangulation dependent, and requires the use of 2-ring neighborhoods for accurate results. Furthermore, we provide theoretical results that guarantee that our optimization problem is well posed for general regularizers under some mild conditions, providing the mathematical footing required for future work to design additional regularizers.

1.2 Contributions
Our main contributions are:
• A convex optimization problem for extracting regularized geodesic distances, with theoretical results for a general regularizer under some mild conditions.
• Examples of regularizers with corresponding theoretical results and efficient optimization algorithms.
• An all-pairs generalization, with a scalable optimization scheme.

2 BACKGROUND
2.1 Geodesic Distances by Convex Optimization
Variational characterizations of the geodesic distance function are natural from several perspectives. From probability, they relate to large deviation estimates for the heat equation, as shown by Varadhan [1967]. From a purely PDE perspective, they can be constructed as the largest viscosity subsolution to the Eikonal equation, using Ishii's extension of the Perron method to Hamilton-Jacobi equations [Ishii 1987]. Recently, it was shown [Belyaev and Fayolle 2020, 2015] how the geodesic distance u on a domain Ω from a source point x₀ can be computed by solving the convex optimization problem

$$\mathrm{Minimize}_u \; -\int_\Omega u(x)\, d\mathrm{Vol}(x) \quad \text{subject to} \quad |\nabla u(x)| \le 1 \;\; \forall x \in \Omega \setminus \{x_0\}, \qquad u(x_0) = 0. \tag{1}$$

As explained by Belyaev et al. [2020], we can thus use convex optimization methods, e.g. ADMM, to approximate geodesic distances. Intuitively, since the function u is maximized, the gradient norm reaches the maximal allowed value, which is 1. Therefore, while not constraining it directly, the solution will fulfill |∇u| = 1 at every point in the domain and thus will be a geodesic distance. The big advantage of this formulation, as opposed to directly constraining |∇u| = 1, is that this optimization problem is convex. Furthermore, the point constraint u(x₀) = 0 may be relaxed to u(x₀) ≤ 0 without changing the solutions of the problem. To see why, note that if φ : Ω → ℝ is such that |∇φ(x)| ≤ 1 for all x ∈ Ω \ {x₀} and φ(x₀) < 0, then the function φ̃ := φ − φ(x₀) satisfies the two constraints in (1) and has a strictly smaller objective value than φ, since φ < φ̃ everywhere.

3 REGULARIZED GEODESIC DISTANCES
Given a compact surface M and a closed set E ⊂ M (typically, E = {x₀}), our goal is to compute a function u : M → ℝ which is "as-geodesic-as-possible," but has some additional property. Depending on the application, one may require the function to be smooth, or to be aligned to an input direction at some points on the surface. We assume that this additional information is encoded in a regularizer functional of the form

$$\mathcal{E}(u) = \int_M F(\nabla u(x), x)\, d\mathrm{Vol}(x), \tag{2}$$

where F is convex in the first argument.

Generalizing Eq. (1), we consider the following convex optimization problem

$$\mathrm{Minimize}_u \; \alpha\, \mathcal{E}(u) - \int_M u(x)\, d\mathrm{Vol}(x) \quad \text{subject to} \quad |\nabla u(x)| \le 1 \;\; \forall x \in M \setminus E, \qquad u(x) \le 0 \;\; \forall x \in E, \tag{3}$$

with some α > 0. We discuss various options for E in Sections 3.1, 4, and the supplemental.

The problem (3) has a long history in the case of a domain Ω ⊂ ℝᵈ with F(∇u(x), x) = |∇u(x)|², where it is known as the elastic-plastic torsion problem. This is a free boundary problem, i.e., a PDE involving an interface, unknown a priori, across which the PDE's nature may change dramatically. In our case, this is reflected in two

regions for the solution u_α: one where it solves a Poisson equation, and one where it solves the Eikonal equation. Refer to [Caffarelli and Friedman 1979] and the book [Petrosyan et al. 2012] for more background. For the Riemannian setting, this problem was first studied in [Générau et al. 2022a], discussed further below.

Even under these general conditions (see the supplemental for detailed assumptions), we show that (a) the optimization problem has a minimizer for every α > 0, (b) the minimizer is unique, and (c) the minimizers converge uniformly to the exact geodesic distance as α → 0. We gather these results in the next two theorems.

Theorem 3.1. There is a unique minimizer for problem (3).

Theorem 3.2. Let u_α denote the minimizer of the optimization problem (3). Then, as α → 0,

$$\max_{x \in M} |d(x, E) - u_\alpha(x)| \to 0,$$

where d(x, E) is the geodesic distance from x to the set E.

The proofs of Theorem 3.1 and Theorem 3.2 are in Supp. 2 and 3. The unique minimizer u_α provided by Thm. 3.1 is Lipschitz continuous by construction. In addition, it has two distinct regimes in the respective regions {|∇u_α| = 1} and {|∇u_α| < 1}. For general second-order elliptic regularizers, u_α will be smooth in the interior of {|∇u_α| < 1}; there, u_α solves the unconstrained Euler-Lagrange equation corresponding to the objective functional in (3), which is a nonlinear elliptic equation. Accordingly, standard elliptic theory guarantees that u_α will be smooth in the region where the gradient constraint is not active. In the other region, {|∇u_α| = 1}, the function solves the Eikonal equation in the viscosity sense.

In the case F = |∇u|² (Sec. 3.1) it was proved in [Générau et al. 2022a] that for all α < α₀ (α₀ depending on the geometry of Ω) the minimizer u_α agrees with the geodesic distance function in {|∇u| = 1}. Therefore, u_α coincides with the distance function everywhere save for a region around the cut locus of x₀. In this region u_α solves the Poisson equation Δu_α = −1/α. As shown in [Générau et al. 2022a], as α → 0, the open set {|∇u_α| < 1} shrinks and converges to the cut locus. We expect the theorems in [Générau et al. 2022a] to hold for general elliptic functionals (such as the p-Laplace equation), but this entails pointwise estimates for nonlinear elliptic equations on manifolds beyond the scope of this work.

Our regularizing functionals (Sec. 3.1, 4) correspond to elliptic energy functionals that promote smoothness and other desirable properties (non-negativity, symmetries) in the minimizer of (3). Accordingly, the significance of Thm. 3.2 is in providing a smooth approximation to the geodesic distance function. Moreover, this approximation is in the L∞ metric, so the approximation error can be made small for all x ∈ M provided α is sufficiently small.

3.1 Dirichlet Regularizer
A natural regularizer (also considered in [Générau et al. 2022a,b]) is the Dirichlet energy

$$\mathcal{E}_{\mathrm{Dir}}(u) = \frac{1}{2}\int_M |\nabla u(x)|^2 \, d\mathrm{Vol}(x). \tag{4}$$

First, we look at a simple example where the solution to problem (3) using the Dirichlet energy (4) is given by an explicit formula. We analyze the case where M is the circle S¹. Fix α > 0. We parameterize S¹ via the map x ↦ (cos(x), sin(x)), i.e., by real numbers modulo 2π. Moreover, we will use the group structure of S¹, which is given by ℝ/2πℤ. In this case the problem amounts to looking for a 2π-periodic function u(x) that minimizes

$$\frac{\alpha}{2}\int_0^{2\pi} |u'(x)|^2 \, dx \; - \; \int_0^{2\pi} u(x) \, dx$$

among all such 2π-periodic functions satisfying the constraints u(0) ≤ 0 and |u′(x)| ≤ 1 for all x ∈ (0, 2π).

The minimizer u_α(x) of this problem has a simple analytical expression, given as follows. First, given x ∈ ℝ, we set x̂ = x mod 2π. Then,

$$u_\alpha(x) = \begin{cases} \hat{x} & \text{if } 0 \le \hat{x} \le L \\ \pi - \tfrac{1}{2}\alpha - \tfrac{1}{2\alpha}(\hat{x} - \pi)^2 & \text{if } L \le \hat{x} \le 2\pi - L \\ 2\pi - \hat{x} & \text{if } 2\pi - L \le \hat{x} \le 2\pi \end{cases} \tag{5}$$

Here, L = L(α) is defined by

$$L(\alpha) = (\pi - \alpha)_+. \tag{6}$$

This expression approximates the distance to the point corresponding to x = 0. Observe that for α > π the functions u_α are all equal to u_π. In general, the solution u_α has two regimes or regions: one region where it matches the geodesic distance function exactly, and one where it solves Poisson's equation u_α″ = −1/α and therefore matches a concave parabola, with the condition that u_α is C¹ across these two regions. This is the standard condition for solutions to the obstacle problem (see [Petrosyan et al. 2012]), which is intrinsically related to (3) in this particular case (see Supplemental 1 for further discussion).

The inset figure shows the behavior of the function on the circle. Note the smoothing region, whose width depends on the smoothing parameter α and matches (5)-(6).

Thanks to the group structure of S¹ and the invariance of the problem under the group action (in other words, by symmetry), we obtain a corresponding formula when the source point is any point y ∈ S¹. In particular, if u_α(·, y) represents the solution to the problem with source at y, then

$$u_\alpha(x, y) = u_\alpha(x - y) \quad \forall\, x, y. \tag{7}$$

We highlight a notable fact about these functions in a theorem.

Theorem 3.3. For every α > 0, the function u_α(x, y) given by (5)-(7) defines a metric on S¹.

This theorem is proved in the supplemental. For a general M, it is not clear whether one can expect u_α(x, y) to be a metric. At the very least, Theorem 3.3 might generalize to other groups or homogeneous spaces. In Section 6 we discuss an extension of problem (3) to the product manifold M × M that treats all pairs (x, y) at once, producing an approximation U_α(x, y) that we can prove is symmetric in (x, y). We do not prove that this general formulation satisfies the triangle inequality, but Figure 12 provides some encouragement in that direction.

Another simple example for which we can compute the analytical solution is the disk. We discuss it in the Supplemental, Section 1.
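The closed-form circle solution, Eqs. (5)-(7), is easy to evaluate directly. The following minimal numpy sketch (function names are ours, not from the authors' code) implements it for 0 < α ≤ π and can be used to check the metric property of Theorem 3.3 numerically:

```python
import numpy as np

def u_alpha(x, alpha):
    # Regularized distance on S^1 to the point x = 0, Eqs. (5)-(6).
    # Valid for 0 < alpha <= pi (for larger alpha the text notes u_alpha = u_pi).
    x_hat = np.mod(x, 2 * np.pi)
    L = max(np.pi - alpha, 0.0)  # Eq. (6): L(alpha) = (pi - alpha)_+
    parabola = np.pi - 0.5 * alpha - (x_hat - np.pi) ** 2 / (2 * alpha)
    return np.where(x_hat <= L, x_hat,
                    np.where(x_hat >= 2 * np.pi - L, 2 * np.pi - x_hat, parabola))

def d_alpha(x, y, alpha):
    # Two-point version via the group structure of S^1, Eq. (7).
    return u_alpha(x - y, alpha)
```

Sampling u_α exhibits the two regimes discussed above: the function is linear (slope ±1, i.e., the exact geodesic distance) away from the antipode, and a concave parabola of width 2α around the cut locus at x̂ = π.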

Figure 2: Vector field alignment regularization. (a) Dirichlet regularized distance. The two marked vector directions are not aligned with the regularized distance. (b) An interpolated and (c) a localized vector field based on the two directions. (d-g) The corresponding regularized distance, using both Dirichlet and vector field alignment (Sec. 4.1), with two regularization weights.

Figure 3: The distance computed using the Dirichlet energy regularizer (left) and the curved Hessian (right). Note the differences near the boundaries.

For general triangle meshes, we provide the discrete formulation in Section 5. Figure 5 shows the behavior of our computed distance using the Dirichlet energy as α changes. We plot the normalized error between u_α and u₀, showing that the solution converges smoothly towards u₀. We also show the result for three α values. For each α, we show both the level sets of the distance function (right) and the norm of the gradient |∇u| (left). As α increases, the smoothing area around the cut locus becomes larger. Note that far from the cut locus the norm of the gradient is exactly one, showing that our regularized distance is exactly a geodesic distance function there.

4 REGULARIZERS
In Section 3.1 and Supplemental 1 we discussed two analytical examples using the Dirichlet energy (4) as the regularizer. In this section we discuss other possible regularizers in various geometries (not using analytical formulas). The first of these functionals is included in the class (2) covered by our theorems and is motivated by the question of alignment to a given vector field. The other is a Hessian functional that falls outside the theorems in our work, but for which we perform several numerical experiments and which raises interesting theoretical questions. Lastly, we discuss one more example of a regularizer in Supplemental 6, with a non-quadratic regularizer that takes advantage of the general form of E in (3).

4.1 Vector Field Alignment
In addition to smoothing, one might want to align the isolines of the distance function to a given direction. We can align ∇u with the line field V(x), represented as a 3D vector at each point x, using the following regularizer:

$$\mathcal{E}(u) = \frac{1}{2}\int_M |\nabla u(x)|^2 + \beta\, \langle V(x), \nabla u(x) \rangle^2 \, d\mathrm{Vol}(x). \tag{8}$$

Here, ⟨V(x), ∇u(x)⟩ is computed with the Riemannian metric of M, which amounts to the usual inner product between vectors when M is a surface in ℝ³, for example. Note the additional parameter β that controls the relative weight between alignment to the vector field V(x) and general smoothness. This is equivalent to computing a distance function using an anisotropic smoothing term, where the anisotropic metric at each point on the surface is represented in ℝ³ by the matrix I + βV̂, where I is the identity matrix and V̂ = VVᵀ. In terms of the Lagrangian F(ξ, x) in the regularizer functional, this problem corresponds to choosing

$$F(\xi, x) = |\xi|^2 + \beta\, \langle V(x), \xi \rangle^2 = \langle A(x)\,\xi,\, \xi \rangle, \quad \text{where } A(x) = I + \beta \hat{V}(x).$$

We allow the user to either provide a line field at each point, or specify a sparse set of directions. If needed, we interpolate the sparse constraints to a smooth line field, as suggested by Pluta et al. [2021, Section 5.5.4]. Optionally, we scale the interpolated line field with a geodesic Gaussian (the geodesic distance is computed with our method without regularization).

Figure 2 shows the results. Starting from two vectors (a), we interpolate a line field (b), or a localized line field (c), and compute the resulting vector-field-aligned regularized distance for two regularization parameters β (d-g). The gradient norm shows where the function deviates from being a geodesic distance as its isolines align to the prescribed directions.

4.2 Hessian for Natural Boundary Conditions
For the Dirichlet regularizer, if we do not impose any boundary conditions on the problem, the minimizer will have zero Neumann boundary values (sometimes called "natural boundary conditions" in FEM). Recently, Stein et al. [2020] suggested using the Hessian energy instead, given by

$$\mathcal{E}(u) = \frac{1}{2}\int_M |\nabla^2 u(x)|^2 \, d\mathrm{Vol}(x). \tag{9}$$

Here, we use the Frobenius norm of the matrix ∇²u (accordingly, this norm relies on the Riemannian metric of M). This energy yields natural boundary conditions, making the result more robust to holes or mesh boundaries. In the Supplemental, Section 4, we show the analytical solution for the simple case of the circle.

Figure 3 demonstrates this, using the Dirichlet energy (left) and the curved Hessian (right). For each method, the left image shows the level sets of the function, and the right image shows the norm of the gradient |∇u|. Note the difference near the mesh boundaries: using the Dirichlet energy leads to zero Neumann conditions, meaning that the isolines are perpendicular to the boundary and the distance is smoothed, whereas when using the Hessian energy the distance is unaffected by the boundary.
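The anisotropic reading of the Section 4.1 regularizer can be made concrete with a few lines of numpy (helper names are ours). The sketch evaluates the density F(ξ, x) = |ξ|² + β⟨V(x), ξ⟩² and its quadratic-form matrix I + βVVᵀ: gradient components along V cost more, which pushes ∇u orthogonal to V and hence aligns the isolines of u with V.

```python
import numpy as np

def alignment_energy_density(xi, V, beta):
    # Integrand of Eq. (8): Dirichlet term plus alignment term,
    # F(xi, x) = |xi|^2 + beta * <V(x), xi>^2.
    return float(xi @ xi + beta * (V @ xi) ** 2)

def alignment_matrix(V, beta):
    # The same density as a quadratic form, F(xi, x) = <A xi, xi>,
    # with A = I + beta * V V^T (the per-point V_hat = V V^T).
    return np.eye(len(V)) + beta * np.outer(V, V)
```

On a triangle mesh, the per-face blocks V_f V_fᵀ assemble into the block-diagonal matrix V̂ used for W_V = D(I + βV̂)G in Section 5.3.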

5 OPTIMIZATION VIA ADMM
5.1 Notation
Discretely, we represent surfaces using triangle meshes M = (V, F), where V are the vertices, F are the faces, and n = |V|, m = |F|. We use a piecewise-linear discretization of functions on the mesh with one value per vertex; hence, functions are represented as vectors of length n. Vector fields are piecewise constant per face and can be represented in the trivial basis in ℝ³ or in a local basis per face; we represent them using vectors in ℝ^{3m} or ℝ^{2m}, respectively. Vertex and face areas are denoted by A_V ∈ ℝⁿ, A_F ∈ ℝᵐ, where the area of a vertex is a third of the sum of the areas of its adjacent faces. The diagonal matrices M_V ∈ ℝ^{n×n}, M_F ∈ ℝ^{3m×3m} contain A_V, A_F on their corresponding diagonals (repeated 3 times for A_F). The total area of the mesh is A. We use standard differential operators [Botsch et al. 2010, Chapter 3]. In particular, our formulation requires the cotangent Laplacian W_D ∈ ℝ^{n×n}, the gradient G ∈ ℝ^{3m×n}, and the divergence D = GᵀM_F ∈ ℝ^{n×3m}.

5.2 Optimization Problem
In this setting, the optimization problem in Eq. (3) becomes:

$$\mathrm{Minimize}_u \; -A_V^T u + \alpha F_M(\mathrm{G}u) \quad \text{subject to} \quad |(\mathrm{G}u)_f| \le 1 \;\; \forall f \in F, \qquad u_i \le 0 \;\; \forall i \in E, \tag{10}$$

where E here is a subset of vertex indices where the distance should be 0, and F_M is a convex function that acts on the gradient of u. Note that this problem is convex whenever F_M is convex, since the objective will be convex, the first constraint is a second-order cone constraint, and the second constraint is a linear inequality.

5.3 Quadratic Objectives
In practice, the objectives we consider are quadratic, leading to the following optimization problem:

$$\mathrm{Minimize}_u \; -A_V^T u + \tfrac{\alpha}{2} u^T W u \quad \text{subject to} \quad |(\mathrm{G}u)_f| \le 1 \;\; \forall f \in F, \qquad u_i \le 0 \;\; \forall i \in E. \tag{11}$$

Different functionals correspond to different weight matrices W. To use the Dirichlet energy in Eq. (4), we set W to the cotangent Laplacian matrix W_D. For the vector field alignment objective in Eq. (8) we construct the anisotropic smoothing matrix W_V = D(I + βV̂)G, where I ∈ ℝ^{3m×3m}, and V̂ ∈ ℝ^{3m×3m} is block diagonal, with the 3×3 block of face f ∈ F given by V_f V_fᵀ (see also Section 4.1). Finally, to use the Hessian regularizer in Eq. (9) we take W to be the curved Hessian matrix [Stein et al. 2020], denoted by W_H.

5.4 Efficient Optimization Algorithm
We derive an alternating direction method of multipliers (ADMM) algorithm [Boyd et al. 2011] to solve the optimization problem in Eq. (11) efficiently. We reformulate the optimization problem, adding an auxiliary variable z ∈ ℝ^{3m} representing the gradient Gu of the distance function. This leads to:

$$\mathrm{Minimize}_{u,z} \; -A_V^T u + \tfrac{\alpha}{2} u^T W u + \sum_{f \in F} \chi(|z_f| \le 1) \quad \text{subject to} \quad (\mathrm{G}u)_f = z_f \;\; \forall f \in F, \qquad u_i \le 0 \;\; \forall i \in E,$$

where χ(·) is the indicator function, i.e., χ(|z_f| ≤ 1) = ∞ if |z_f| > 1 and 0 otherwise.

The corresponding augmented Lagrangian is:

$$L(u, y, z) = -A_V^T u + \tfrac{\alpha}{2} u^T W u + \sum_{f \in F} \chi(|z_f| \le 1) + \sum_{f \in F} a_f\, y_f^T \big( (\mathrm{G}u)_f - z_f \big) + \tfrac{\rho}{2} \sum_{f \in F} a_f\, \big| (\mathrm{G}u)_f - z_f \big|^2,$$

where a_f is the area of the face f, ρ ∈ ℝ is the penalty parameter, and y ∈ ℝ^{3m} is the dual variable, or Lagrange multiplier.

The ADMM algorithm consists of iteratively repeating three steps [Boyd et al. 2011, Section 3]. First, we perform u-minimization, then z-minimization, and finally the dual variable y is updated. The full derivation of the three steps appears in Supplemental 8, and the resulting algorithm in Algorithm 1.

ALGORITHM 1: ADMM.
  input:  M, α, W, E
  output: u ∈ ℝⁿ, the distance to E
  initialize ρ ∈ ℝ                 // penalty parameter
  z ← 0_{3m}                       // auxiliary variable, Gu = z
  y ← 0_{3m}                       // dual variable
  ρ ← ρA
  while algorithm did not converge do            // see Supp. 8
      solve (αW + ρW_D)u = A_V − Dy + ρDz  s.t.  u_E = 0
      z_f ← Proj((1/ρ)y_f + (Gu)_f, B³)  for all f ∈ F
      y ← y + ρ(Gu − z)
  end

Algorithm details. Note that the first step, the u-minimization, includes solving a linear system with a fixed coefficient matrix, which is pre-factored once and reused for all the ADMM iterations, as well as for all distance computations. To enforce the constraint u_E ≤ 0, we eliminate the relevant columns from the linear system and solve for u_i for all i ∈ V \ E. We project intermediate z values onto the unit ball, i.e., Proj(z_f ∈ ℝ³, B³) equals z_f/|z_f| if |z_f| > 1, and z_f otherwise. We use the stopping criteria suggested by Boyd et al. [2011, Section 3.3.1], formulated for our problem. See Supplemental 8 for details.

Efficiency. We compare our approach with a solution implemented using commercial software, i.e., CVX [Grant and Boyd 2008, 2014] with the MOSEK solver [ApS 2019]. Table 1 provides a comparison of the running times, measured on a desktop machine with an Intel Core i9. For the optimization using CVX and MOSEK, we report both the total running times and the solver running times. Our optimization scheme yields at least an order of magnitude improvement.

Figure 4 shows our result for multiple sources: three isolated points, a vertex sampling of a path, and the boundary. Additional results are shown in Section 7 and in the Supplemental.
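As a self-contained illustration of Algorithm 1, the sketch below runs the same three ADMM steps on a uniform discretization of the circle with the Dirichlet energy, so W is the 1D analogue of the cotangent Laplacian and the unit-ball projection reduces to clipping. This is a toy reproduction under our own discretization choices, not the authors' mesh implementation.

```python
import numpy as np

def regularized_distance_circle(n=100, alpha=0.5, rho=1.0, iters=3000):
    """ADMM of Algorithm 1 for Eq. (11) on the circle S^1, source at vertex 0."""
    h = 2 * np.pi / n
    area = 2 * np.pi                       # total "mesh" area A
    rho = rho * area                       # penalty scaling, as in Algorithm 1
    # Forward-difference gradient over the n circular edges (the "faces" in 1D).
    G = (np.roll(np.eye(n), -1, axis=0) - np.eye(n)) / h
    MF = h * np.eye(n)                     # face areas
    AV = h * np.ones(n)                    # vertex areas
    D = G.T @ MF                           # divergence
    W = D @ G                              # 1D analogue of the cotan Laplacian
    # Pre-factor the u-step matrix once, with the source vertex eliminated.
    K = np.linalg.inv(((alpha + rho) * W)[1:, 1:])
    u = np.zeros(n)
    z = np.zeros(n)                        # auxiliary variable, Gu = z
    y = np.zeros(n)                        # dual variable
    for _ in range(iters):
        rhs = AV - D @ y + rho * (D @ z)
        u[1:] = K @ rhs[1:]                # u-minimization, with u_0 = 0 enforced
        z = np.clip(y / rho + G @ u, -1.0, 1.0)  # unit-ball projection (an interval in 1D)
        y = y + rho * (G @ u - z)
    return u, G
```

For alpha = 0.5, the result closely tracks the analytical solution of Section 3.1: |Gu| stays near 1 away from the antipode, and the maximum of u approaches π − α/2.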

Figure 4: Distance to multiple sources: (a) 3 points, (b) a vertex sampling of a path, and (c) the boundary. We show the distance and the gradient norm.

Table 1: Running times for computing the distance from a single source.

Model               | |F|   | ADMM (sec) | CVX total (sec) | MOSEK (sec)
Pipe, Fig. 5        | 10K   | 0.075      | 1.16            | 0.36
Moai, Fig. 8        | 43K   | 1.01       | 5.93            | 2.25
Armadillo, Supp. 10 | 346K  | 1.89       | 37.37           | 21.88
Gardet, Supp. 10    | 989K  | 5.22       | 132.91          | 87.83
Dragon, Supp. 10    | 2349K | 7.66       | 347.73          | 230.02
Sea star, Supp. 10  | 3500K | 11.16      | 572.56          | 380.36

6 SYMMETRIC ALL-PAIRS FORMULATION
In Section 3.1 we observed how problem (3) produces a new metric on S¹. It is not clear if problem (3) produces such a result in general. Part of the problem is that (3) treats the point x and the source y (if E = {y}) differently. If one is interested in approximating the full geodesic distance function d(x, y), it is of interest to have a formulation that works directly in all of M × M and which is naturally symmetric. This leads to the following variation on problem (3).

Consider the manifold M × M with the product metric inherited from M. Naturally, given a function U : M × M → ℝ, we can fix y ∈ M and consider the function x ↦ U(x, y), or fix x ∈ M and consider the function y ↦ U(x, y). We define ∇₁U(x, y) and ∇₂U(x, y) to be the respective gradients of these functions in M. Equivalently, from the decomposition T(M × M)_{(x,y)} = (TM)_x ⊕ (TM)_y we see that ∇_{M×M}U(x, y) = (∇₁U, ∇₂U).

With this notation, consider the minimization problem

$$\begin{aligned} \mathrm{Minimize}_U \;& \alpha\, \mathcal{E}_{M\times M}(U) - \int_{M\times M} U(x,y)\, d\mathrm{Vol}(x,y) \\ \text{subject to} \;& |\nabla_1 U(x,y)| \le 1 \;\text{ in } \{(x,y) \mid x \ne y\} \\ & |\nabla_2 U(x,y)| \le 1 \;\text{ in } \{(x,y) \mid x \ne y\} \\ & U(x,y) \le 0 \;\text{ on } \{(x,y) \mid x = y\} \end{aligned} \tag{12}$$

Here, we are focusing on the Dirichlet energy functional

$$\mathcal{E}_{M\times M}(U) := \frac{1}{2}\int_{M\times M} |\nabla_1 U(x,y)|^2 + |\nabla_2 U(x,y)|^2 \, d\mathrm{Vol}(x,y).$$

The optimization problem (12) is a natural extension of (3) if one is interested in the full geodesic distance function. Indeed, for α = 0, problem (12) has only one minimizer, the geodesic distance function.

6.1 Theoretical Results
Our discussion suggests that the solutions to (12), to the extent they exist, should converge to the geodesic distance as α → 0. The next two theorems, counterparts to Theorems 3.1 and 3.2, address this.

Theorem 6.1. There is a unique minimizer for problem (12). (See Supplemental 5 for a proof.)

We will denote the unique minimizer of this problem by U_α(x, y). The motivation for (12) was in part to find a way to guarantee the symmetry of the resulting regularization, and so we have the following theorems, proved in Supplemental 5.

Theorem 6.2. The function U_α(x, y) is symmetric in x and y.

Theorem 6.3. As α → 0, we have

$$\|d(x, y) - U_\alpha(x, y)\|_{L^\infty(M \times M)} \to 0.$$

Analogously to Theorem 3.2, this last theorem guarantees that the functions U_α provide a uniform approximation to the full geodesic distance d(x, y), provided α is chosen adequately.

6.2 Scalable Optimization
Discretely, we represent U as an n × n matrix. We also express ∇₁U(x, y), ∇₂U(x, y) as gradients over the rows and columns of U, i.e., GU and GUᵀ. The optimization problem in Eq. (12) becomes:

$$\begin{aligned} \mathrm{Minimize}_U \;& -A_V^T U A_V + \tfrac{\alpha}{2}\, \mathrm{Tr}\big( M_V \big( U^T W_D U + U W_D U^T \big) \big) \\ \text{subject to} \;& |(\nabla U_{(i,\cdot)})_f| \le 1 \;\; \forall f \in F,\, i \in V \\ & |(\nabla U_{(\cdot,j)})_f| \le 1 \;\; \forall f \in F,\, j \in V \\ & U_{i,i} \le 0 \;\; \forall i \in V, \end{aligned} \tag{13}$$

where X_{i,j} denotes the (i, j)-th element of a matrix X, X_{(i,·)} denotes the i-th row, and X_{(·,j)} the j-th column.

The complexity here is significantly higher than computing the distance of all points to a closed set. A naive ADMM formulation for this problem leads to a per-iteration linear solve with a system matrix of size n² × n². To reduce it to n solves with a system matrix of size n × n, we derive a second ADMM algorithm, Alg. 2 (see Supp.), scalable to larger meshes. The symmetric formulation in Equation (12) arises naturally in this derivation.

ALGORITHM 2: Symmetric All-Pairs ADMM.
  input:  M, α
  output: U ∈ ℝ^{n×n}
  initialize ρ₁, ρ₂ ∈ ℝ            // penalty parameters
  Z, Q ← 0_{3m×n}                  // auxiliary variables, GX = Z, GR = Q
  Y, S ← 0_{3m×n}                  // dual variables
  H, K ← 0_{n×n}                   // dual consensus variables
  ρ₁ ← ρ₁√A,  ρ₂ ← ρ₂A⁻¹
  W_P ← (α + ρ₁)W_D + ρ₂M_V,  M_P ← ½ A_V A_Vᵀ M_V⁻¹
  while algorithm did not converge do            // see Supp. 9
      solve for X:  W_P X = M_P − DY + ρ₁DZ − M_V H + ρ₂M_V U
      solve for R:  W_P R = M_P − DS + ρ₁DQ − M_V K + ρ₂M_V Uᵀ
      (Z_{(·,i)})_f ← Proj((1/ρ₁)(Y_{(·,i)})_f + (GX_{(·,i)})_f, B³)  for all i ∈ V, f ∈ F
      (Q_{(·,i)})_f ← Proj((1/ρ₁)(S_{(·,i)})_f + (GR_{(·,i)})_f, B³)  for all i ∈ V, f ∈ F
      U ← max((H + Kᵀ)/(2ρ₂) + (X + Rᵀ)/2, 0);  U_{i,i} = 0 for all i ∈ V
      Y ← Y + ρ₁(GX − Z);  S ← S + ρ₁(GR − Q)
      H ← H + ρ₂(X − U);  K ← K + ρ₂(R − Uᵀ)
  end

Figure 6 shows an example of distances computed using this approach. The fixed-source formulation (Alg. 1, left) is not symmetric. We can symmetrize the distance matrix (center).
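The symmetrization step, and triangle-inequality violation statistics like those reported for Fig. 12 and Table 3, can be sketched in a few lines of numpy (helper names are ours, not from the authors' code):

```python
import numpy as np

def symmetrize(Dmat):
    # Average a distance matrix with its transpose (cf. Fig. 6, center).
    return 0.5 * (Dmat + Dmat.T)

def triangle_violation_fraction(Dmat, tol=1e-9):
    # Fraction of ordered triplets (i, j, k) with D[i,k] > D[i,j] + D[j,k] + tol.
    n = Dmat.shape[0]
    viol = 0
    for j in range(n):
        # bound[i, k] = D[i, j] + D[j, k], via broadcasting
        bound = Dmat[:, j][:, None] + Dmat[j, :][None, :]
        viol += int(np.sum(Dmat > bound + tol))
    return viol / (n ** 3)
```

For an exact metric the fraction is zero; applying the counter to symmetrized outputs of the fixed-source and all-pairs formulations gives statistics of the kind tabulated later.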

visible noise in the gradient norm. The all-pairs formulation (Alg. 2) is symmetric and has a smooth gradient norm.

7 EXPERIMENTAL RESULTS

7.1 Scale-Invariant Parameters
The parameter 𝛼 controls the size of the smoothing area; therefore, scaling the mesh requires changing its value. To avoid that, and to enable more intuitive control of the smoothing area, we define a scale-invariant smoothing parameter 𝛼̂ that is independent of the mesh area or resolution. For the Dirichlet and vector field alignment energies, we achieve that by setting 𝛼 = 𝛼̂ √𝐴. For the Hessian energy, we set 𝛼 = 𝛼̂ √𝐴³. We note that the parameter 𝛽 is already scale-invariant, i.e., 𝛽 = 𝛽̂. For our ADMM algorithms (Sec. 5, 6.2) to be scale-invariant, we normalize the penalty variables, residual and feasibility tolerances. Figure 7 demonstrates this: we uniformly rescale an input mesh, and use the same smoothing parameter 𝛼̂. Note that while the distances are different between the meshes, the scale of the smoothed region, i.e., the area where the norm of the gradient is not 1, is similar. For all our experiments we use the scale-invariant formulation, unless stated otherwise.

7.2 Comparison
In Fig. 8 we compare our Dirichlet regularized distances to "Geodesics in Heat" [Crane et al. 2013] and regularized EMD [Solomon et al. 2014], with two smoothing parameters for each. In addition, we show the exact geodesics computed using MMP [Mitchell et al. 1987] for reference. Note that while all approaches lead to a smoother solution compared to the exact geodesics, our approach is more stable, in the sense that the same scale of regularization is observed on all meshes for the same parameters. Thus, we conjecture that for our approach the regularization parameter is easier to tune.

Table 2 compares the running times and the maximal error w.r.t. the MMP distance (as a % of the maximal distance). The distances are computed with Geometry Central [Sharp et al. 2019] for the heat method and MMP, and with a Matlab implementation of our ADMM Algorithm 1. Note that both our method and the heat method have comparable errors, and for both, smoother solutions have larger errors. A timing comparison for the all-pairs case is in the Supp. We additionally show in the supplemental material a comparison of the representation error of the different approaches in a reduced basis (providing a quantitative measure of smoothness).

7.3 Robustness
Meshing. We demonstrate that our method is invariant to meshing, and is applicable to non-uniform meshing without modifying 𝛼. Fig. 9 compares our result with the heat method for 3 remeshings of the same shape. Note that for the heat method with the default smoothing parameter (left), the half-half mesh fails. This is remedied by using a different parameter (center); however, there are still differences between the meshings (note especially the gradient norm). Using our approach (right) we get very similar distance functions and gradient norms for all 3 meshings. Fig. 5 in the supplemental shows additional results with bad triangulations.

Noise. Fig. 10 shows robustness to noise and bad meshing. We add Gaussian normal noise with 𝜎 = 0.5, 0.8 of the mean edge length, and use a remesh with highly anisotropic triangles and self-intersections. We show the distances and the gradient norm, all with the same 𝛼̂. Note that the results are consistent between the different meshes.

Symmetry error. Our Alg. 1 is not symmetric. Figure 11 shows the symmetry error (1/√𝐴) |𝑑(𝑥, 𝑦) − 𝑑(𝑦, 𝑥)| for 3 source points, for our method and the heat method. Note that for all three points, the symmetry error is higher for the heat method.

Triangle inequality error. Our method does not guarantee that the triangle inequality holds, while EMD does. However, experimentally it does hold for higher values of 𝛼̂. Fig. 12 shows the triangle inequality error of a fixed pair of vertices with respect to every other vertex. We compare the heat method, Alg. 1, and Alg. 2. For the first two, we symmetrize the computed distance matrix. We show the results for three 𝑡 values for the heat method, and three values of 𝛼̂ for our approach. We visualize the distance from the chosen point using isolines. Note the difference in the error scaling between the two methods. Further, note that for higher values of 𝛼̂ our approach has no violations of the triangle inequality. Table 3 shows the percentage of triplets violating the triangle inequality for the same data. Note that, also when considering all the possible triplets, higher values of the smoothing parameter lead to fewer violations.

7.4 Volumetric Distances
Our framework can compute distances on tetrahedral meshes. We replace the standard mass matrix, gradient, and divergence operators with their volumetric versions, as implemented in gptoolbox [Jacobson et al. 2021]. Figure 13 demonstrates our Dirichlet regularized volumetric distance on a human shape (a). We show the distance from a point on the shoulder on two planar cuts (b,c), and the distance from the boundary using two 𝛼̂ values (d,e).

7.5 Example Application: Distance Function for Knitting
Some approaches for generating knitting instructions for 3D models require a function whose isolines represent the knitting rows [Edelstein et al. 2022; Narayanan et al. 2018]. Using the geodesic distance to an initial point (or a set of points) is a good choice, since the stitch heights are constant, as are the distances between isolines. On the other hand, this choice limits the design freedom significantly, as designers and knitters have no control over the knitting direction in different areas of the shape. Using regularized distances with vector alignment solves this problem. For example, see Figure 2 and the C model. Using geodesic distances to the starting point will result in a non-symmetric shape (a) (see also [Edelstein et al. 2022, Figure 10]). By adding 2 directional constraints (f), we obtain a function whose isolines respect the symmetries of the shape. Note that for the regularized distances, the gradient norm is no longer 1 everywhere, and thus the distances between isolines are not constant. This can be addressed when knitting by using stitches of different heights. Figure 14 shows how adding alignment to the teddy's arms and legs aligns the knitting rows with the creases. Crease alignment leads to better shaping [Edelstein et al. 2022, Section 9.3], and prevents over-smoothing of the knit model.
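The scale-invariant reparameterization of Section 7.1 is a one-liner given the total surface area. The sketch below (our illustration; `total_area` is a hypothetical helper, not an API from the paper) checks the implied behavior under uniform scaling by 𝑠: the Dirichlet/alignment weight 𝛼 = 𝛼̂√𝐴 rescales by 𝑠, and the Hessian weight 𝛼 = 𝛼̂√𝐴³ by 𝑠³:

```python
import numpy as np

def total_area(V, F):
    """Sum of triangle areas for a mesh with vertices V (n,3) and faces F (m,3)."""
    e1 = V[F[:, 1]] - V[F[:, 0]]
    e2 = V[F[:, 2]] - V[F[:, 0]]
    return 0.5 * np.linalg.norm(np.cross(e1, e2), axis=1).sum()

def alphas(alpha_hat, A):
    # Dirichlet / vector-field alignment:  alpha = alpha_hat * sqrt(A)
    # Hessian energy:                      alpha = alpha_hat * sqrt(A^3)
    return alpha_hat * np.sqrt(A), alpha_hat * np.sqrt(A ** 3)

# two triangles forming a unit square
V = np.array([[0.0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]])
F = np.array([[0, 1, 2], [0, 2, 3]])

A = total_area(V, F)                               # 1.0 for the unit square
a_d, a_h = alphas(0.1, A)
a_d2, a_h2 = alphas(0.1, total_area(2.0 * V, F))   # uniform scale s = 2, area 4A
print(a_d2 / a_d, a_h2 / a_h)  # -> 2.0 8.0  (scales like s and s^3)
```

Since geodesic distances themselves scale like 𝑠, this is one way to see why the same 𝛼̂ produces a smoothing region of matching relative size in Figure 7.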
Table 2: Comparison of run-times (T) and the maximal error (𝜖) of the computed distance (in % of the maximal distance) for the models in Figure 8.

Model      |F|    MMP       Heat 𝑡=𝑒̂²        Heat 𝑡=20𝑒̂²      EMD 𝑑_W^0   EMD 𝑑_W^100   Ours 𝛼̂=0.02      Ours 𝛼̂=0.1
                  T (sec)   T (sec)  𝜖 (%)   T (sec)  𝜖 (%)   𝜖 (%)       𝜖 (%)         T (sec)  𝜖 (%)   T (sec)  𝜖 (%)
Homer      23K    0.255     0.031    2.47    0.030    7.76    30.33       30.31         0.3664   3.40    0.1503   11.90
Elephant   10K    0.061     0.011    4.84    0.011    11.36   11.36       15.82         0.191    2.00    0.138    5.72
Armadillo  29K    0.279     0.041    2.83    0.041    9.33    20.55       14.82         0.454    2.35    0.192    6.78
Moai       43K    1.133     0.101    1.71    0.102    5.34    26.32       26.43         0.683    1.84    0.390    8.60
Koala      9K     0.063     0.013    2.30    0.010    7.17    55.26       41.15         0.124    2.35    0.059    8.64
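For reference, the error measure reported in Table 2 — maximal absolute deviation from the MMP distance, as a percentage of the maximal distance (our reading: normalized by the maximal MMP distance) — is the following sketch:

```python
import numpy as np

def max_error_pct(d, d_mmp):
    """Max |d - d_mmp| over all vertices, as a % of the maximal MMP distance."""
    return 100.0 * np.max(np.abs(d - d_mmp)) / np.max(d_mmp)

d_mmp = np.array([0.0, 1.0, 2.0, 4.0])   # hypothetical exact distances
d     = np.array([0.0, 1.1, 1.9, 4.0])   # hypothetical computed distances
print(max_error_pct(d, d_mmp))  # -> 2.5
```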

Table 3: Percentage of triplets violating the triangle inequality for the data in Fig. 12. Note that for our approach higher values of 𝛼 lead to fewer violations.

      Heat - Symmetrized   Fixed-Source - Symmetrized   All-Pairs
(a)   1.84                 2.04                         1.23
(b)   2.20                 1.28                         0.88
(c)   2.20                 0.25                         0.09

8 CONCLUSIONS AND FUTURE WORK
We presented a novel framework for constructing regularized geodesic distances on triangle meshes. We demonstrated the versatility of our approach by presenting three regularizers, analyzing them, and providing an efficient optimization scheme, as well as a symmetric formulation on the product manifold. The theoretical results and experiments in this work raise a number of interesting questions for future research. One of them is whether the functions 𝑈𝛼(𝑥, 𝑦) provide metrics in general, i.e., whether they satisfy the triangle inequality; we are not aware of results where geodesic distances can be regularized to have smooth metrics in 𝑀 × 𝑀. Another theoretical question involves convergence of the minimizers in the Hessian energy-regularized problem, as discussed in Section 4. Algorithmically, the ADMM algorithm from Section 5.4 easily generalizes to other convex functions 𝐹_𝑀 (e.g., 𝐿¹ norms) in Equation 10; recent theory on nonconvex ADMM also suggests that Algorithm 1 can be effective for nonconvex regularizers, possibly requiring large augmentation weights 𝜌 [Attouch et al. 2010; Gao et al. 2020; Hong et al. 2016; Ouyang et al. 2020; Stein et al. 2022; Wang et al. 2019; Zhang et al. 2019; Zhang and Shen 2019].

ACKNOWLEDGMENTS
Michal Edelstein acknowledges funding from the Jacobs Qualcomm Excellence Scholarship and the Zeff, Fine and Daniel Scholarship. Nestor Guillen was supported by the National Science Foundation through grant DMS-2144232. The MIT Geometric Data Processing group acknowledges the generous support of Army Research Office grants W911NF2010168 and W911NF2110293, of Air Force Office of Scientific Research award FA9550-19-1-031, of National Science Foundation grants IIS-1838071 and CHS-1955697, from the CSAIL Systems that Learn program, from the MIT–IBM Watson AI Laboratory, from the Toyota–CSAIL Joint Research Center, from a gift from Adobe Systems, and from a Google Research Scholar award. Mirela Ben-Chen acknowledges the support of the Israel Science Foundation (grant No. 1073/21), and the European Research Council (ERC starting grant no. 714776 OPREP). We use the repositories SHREC'07, SHREC'11, Windows 3D library, AIM@SHAPE, and Three D Scans, and thank Keenan Crane, Jan Knippers, Daniel Sonntag, and Yu Wang for providing additional models. We also thank Hsueh-Ti Derek Liu for his Blender Toolbox, used for the visualizations throughout the paper.

REFERENCES
MOSEK ApS. 2019. The MOSEK optimization toolbox for MATLAB manual. Version 9.0. http://docs.mosek.com/9.0/toolbox/index.html
Hédy Attouch, Jérôme Bolte, Patrick Redont, and Antoine Soubeyran. 2010. Proximal alternating minimization and projection methods for nonconvex problems: An approach based on the Kurdyka-Łojasiewicz inequality. Mathematics of Operations Research 35, 2 (2010), 438–457.
Alexander Belyaev and Pierre-Alain Fayolle. 2020. An ADMM-based scheme for distance function approximation. Numerical Algorithms 84, 3 (2020), 983–996.
Alexander G Belyaev and Pierre-Alain Fayolle. 2015. On variational and PDE-based distance function approximations. In Computer Graphics Forum, Vol. 34. Wiley Online Library, 104–118.
Iwan Boksebeld and Amir Vaxman. 2022. High-Order Directional Fields. ACM Transactions on Graphics (TOG) 41, 6 (2022), 1–17.
Mario Botsch, Leif Kobbelt, Mark Pauly, Pierre Alliez, and Bruno Lévy. 2010. Polygon Mesh Processing. CRC Press.
Stephen Boyd, Neal Parikh, Eric Chu, Borja Peleato, Jonathan Eckstein, et al. 2011. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning 3, 1 (2011), 1–122.
Luis A Caffarelli and Avner Friedman. 1979. The free boundary for elastic-plastic torsion problems. Trans. Amer. Math. Soc. 252 (1979), 65–97.
Luming Cao, Junhao Zhao, Jian Xu, Shuangmin Chen, Guozhu Liu, Shiqing Xin, Yuanfeng Zhou, and Ying He. 2020. Computing smooth quasi-geodesic distance field (QGDF) with quadratic programming. Computer-Aided Design 127 (2020), 102879.
Keenan Crane, Marco Livesu, Enrico Puppo, and Yipeng Qin. 2020. A survey of algorithms for geodesic paths and distances. arXiv preprint arXiv:2007.10430 (2020).
Keenan Crane, Clarisse Weischedel, and Max Wardetzky. 2013. Geodesics in heat: A new approach to computing distance based on heat flow. ACM Transactions on Graphics (TOG) 32, 5 (2013), 1–11.
Michal Edelstein, Hila Peleg, Shachar Itzhaky, and Mirela Ben-Chen. 2022. AmiGo: Computational Design of Amigurumi Crochet Patterns. In Proceedings of the 7th Annual ACM Symposium on Computational Fabrication. 1–11.
Wenbo Gao, Donald Goldfarb, and Frank E Curtis. 2020. ADMM for multiaffine constrained optimization. Optimization Methods and Software 35, 2 (2020), 257–303.
François Générau, Edouard Oudet, and Bozhidar Velichkov. 2022a. Cut locus on compact manifolds and uniform semiconcavity estimates for a variational inequality. Archive for Rational Mechanics and Analysis 246, 2 (2022), 561–602.
François Générau, Edouard Oudet, and Bozhidar Velichkov. 2022b. Numerical computation of the cut locus via a variational approximation of the distance function. ESAIM: Mathematical Modelling and Numerical Analysis 56, 1 (2022), 105–120.
Michael Grant and Stephen Boyd. 2008. Graph implementations for nonsmooth convex programs. In Recent Advances in Learning and Control, V. Blondel, S. Boyd, and H. Kimura (Eds.). Springer-Verlag Limited, 95–110. http://stanford.edu/~boyd/graph_dcp.html
Michael Grant and Stephen Boyd. 2014. CVX: Matlab Software for Disciplined Convex Programming, version 2.1. http://cvxr.com/cvx
Mingyi Hong, Zhi-Quan Luo, and Meisam Razaviyayn. 2016. Convergence analysis of alternating direction method of multipliers for a family of nonconvex problems. SIAM Journal on Optimization 26, 1 (2016), 337–364.
Hitoshi Ishii. 1987. Perron's method for Hamilton-Jacobi equations. Duke Mathematical Journal 55, 2 (1987), 369–384.
Alec Jacobson et al. 2021. gptoolbox: Geometry Processing Toolbox. http://github.com/alecjacobson/gptoolbox
Ron Kimmel and James A Sethian. 1998. Computing geodesic paths on manifolds. Proceedings of the National Academy of Sciences 95, 15 (1998), 8431–8435.
Joseph SB Mitchell, David M Mount, and Christos H Papadimitriou. 1987. The discrete geodesic problem. SIAM J. Comput. 16, 4 (1987), 647–668.
Vidya Narayanan, Lea Albaugh, Jessica Hodgins, Stelian Coros, and James Mccann. 2018. Automatic machine knitting of 3D meshes. ACM Transactions on Graphics (TOG) 37, 3 (2018), 1–15.
Wenqing Ouyang, Yue Peng, Yuxin Yao, Juyong Zhang, and Bailin Deng. 2020. Ander-
son acceleration for nonconvex ADMM based on Douglas-Rachford splitting. In
Computer Graphics Forum, Vol. 39. Wiley Online Library, 221–239.
Arshak Petrosyan, Henrik Shahgholian, and Nina Nikolaevna Uraltseva. 2012. Regu-
larity of free boundaries in obstacle-type problems. Vol. 136. American Mathematical
Soc.
Gabriel Peyré, Mickael Péchaud, Renaud Keriven, Laurent D Cohen, et al. 2010. Ge-
odesic methods in computer vision and graphics. Foundations and Trends® in
Computer Graphics and Vision 5, 3–4 (2010), 197–397.
Kacper Pluta, Michal Edelstein, Amir Vaxman, and Mirela Ben-Chen. 2021. PH-CPF:
planar hexagonal meshing using coordinate power fields. ACM Transactions on
Graphics (TOG) 40, 4 (2021), 1–19.
Yipeng Qin, Xiaoguang Han, Hongchuan Yu, Yizhou Yu, and Jianjun Zhang. 2016.
Fast and exact discrete geodesic computation based on triangle-oriented wavefront
propagation. ACM Transactions on Graphics (TOG) 35, 4 (2016), 1–13.
Nicholas Sharp, Keenan Crane, et al. 2019. GeometryCentral: A modern C++ library
of data structures and algorithms for geometry processing. https://geometrycentral.net/. (2019).
Justin Solomon, Raif Rustamov, Leonidas Guibas, and Adrian Butscher. 2014. Earth
mover’s distances on discrete surfaces. ACM Transactions on Graphics (ToG) 33, 4
(2014), 1–12.
Oded Stein, Alec Jacobson, Max Wardetzky, and Eitan Grinspun. 2020. A smoothness
energy without boundary distortion for curved surfaces. ACM Transactions on
Graphics (TOG) 39, 3 (2020), 1–17.
Oded Stein, Jiajin Li, and Justin Solomon. 2022. A splitting scheme for flip-free
distortion energies. SIAM Journal on Imaging Sciences 15, 2 (2022), 925–959.
Vitaly Surazhsky, Tatiana Surazhsky, Danil Kirsanov, Steven J Gortler, and Hugues
Hoppe. 2005. Fast exact and approximate geodesics on meshes. ACM transactions
on graphics (TOG) 24, 3 (2005), 553–560.
Sathamangalam R Srinivasa Varadhan. 1967. On the behavior of the fundamental
solution of the heat equation with variable coefficients. Communications on Pure
and Applied Mathematics 20, 2 (1967), 431–455.
Yu Wang, Wotao Yin, and Jinshan Zeng. 2019. Global convergence of ADMM in
nonconvex nonsmooth optimization. Journal of Scientific Computing 78 (2019),
29–63.
Juyong Zhang, Yue Peng, Wenqing Ouyang, and Bailin Deng. 2019. Accelerating
ADMM for efficient simulation and optimization. ACM Transactions on Graphics
(TOG) 38, 6 (2019), 1–21.
Tao Zhang and Zhengwei Shen. 2019. A fundamental proof of convergence of alter-
nating direction method of multipliers for weakly convex optimization. Journal of
Inequalities and Applications 2019, 1 (2019), 1–21.
Figure 5: A numerical solution on triangle meshes. The distance converges towards 𝑢₀ as 𝛼 approaches 0. Note the different smoothing regions, whose width depends on 𝛼.

Figure 6: The all-pairs formulation, Alg. 2 (right), vs. the fixed-source formulation, Alg. 1 (left), and its symmetrized version (center). See the text for details.

Figure 7: Scale invariance. While the distances are different between the uniformly scaled models, the area of the smoothing (where the norm of the gradient is not 1) is similar for all meshes. See the text for details.

Figure 8: A qualitative comparison between our Dirichlet regularized distances, the heat method, and EMD, with two choices of smoothing parameter per method. See the text for details.

Figure 9: Our results vs. the heat method for different remeshings; we show the distance function and the gradient norm. Note that our approach leads to distances which are very similar for the three meshes, with the same smoothing parameter 𝛼̂. See the text for details.

Figure 10: Robustness to noise and bad meshing, all distances computed with the same 𝛼̂. Note the similarities of the distances and gradient norm, despite the large normal noise and badly shaped triangles.

Figure 11: Violation of the symmetry property for 3 source points. Note that while our method is not symmetric by construction, the symmetry error is lower than the symmetry error for the heat method.

Figure 12: Triangle inequality violation. For a fixed pair of points (visualized with the distance isolines) we compute the triangle inequality error for all the other points. We compare the symmetrized heat method, our symmetrized method, and the all-pairs formulation (which is symmetric by construction). We use a few smoothing weights for each approach. Note that for our approach the violation reduces as the smoothing weight grows.

Figure 13: Dirichlet regularized volumetric distances. (a) The input tetrahedral mesh. (b,c) Two cuts showing the distance to a point on the shoulder. (d,e) Distance to the boundary, where (d) is more smoothed than (e), i.e. has a larger 𝛼̂ value.

Figure 14: Vector field alignment of creases, useful for knitting applications.
A Convex Optimization Framework for Regularized Geodesic Distances - Supplemental Material

Michal Edelstein, Technion - Israel Institute of Technology, Haifa, Israel, [email protected]
Nestor Guillen, Texas State University, San Marcos, TX, USA, [email protected]
Justin Solomon, Massachusetts Institute of Technology (MIT), Cambridge, MA, USA, [email protected]
Mirela Ben-Chen, Technion - Israel Institute of Technology, Haifa, Israel, [email protected]
ACM Reference Format:
Michal Edelstein, Nestor Guillen, Justin Solomon, and Mirela Ben-Chen. 2023. A Convex Optimization Framework for Regularized Geodesic Distances - Supplemental Material. In Special Interest Group on Computer Graphics and Interactive Techniques Conference Conference Proceedings (SIGGRAPH '23 Conference Proceedings), August 6–10, 2023, Los Angeles, CA, USA. ACM, New York, NY, USA, 8 pages. https://doi.org/10.1145/3588432.3591523

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s).
SIGGRAPH '23 Conference Proceedings, August 6–10, 2023, Los Angeles, CA, USA
© 2023 Copyright held by the owner/author(s).
ACM ISBN 979-8-4007-0159-7/23/08.
https://doi.org/10.1145/3588432.3591523

1 PROOFS REGARDING ANALYTICAL SOLUTIONS (SECTION 3.1)
In this section we justify the formulas for the solutions to the problems in Section 3.1. Essential to these computations is the obstacle problem. For flat geometries or for general 𝑀 but with 𝛼 sufficiently small, the solution to problem (3) is the same as the solution to the obstacle problem with obstacle given by 𝑑(𝑥, 𝐸). The obstacle problem in this case takes the form

  Minimize_𝑢   𝛼 E(𝑢) − ∫_𝑀 𝑢(𝑥) dVol(𝑥)
  subject to   𝑢(𝑥) ≤ 𝑑(𝑥, 𝐸) for all 𝑥 ∈ 𝑀.    (S1)

The equivalence between this problem and problem (3) is a classical fact in the case where 𝑀 is an open domain of ℝⁿ and 𝐸 = 𝜕𝑀; in this classical setting problem (3) is known as the elastic-plastic torsion problem. The equivalence in the situation where 𝑀 is a Riemannian manifold is a more recent result and can be found in [Générau et al. 2022]. In the general Riemannian case the equivalence between problem (3) and the obstacle problem might not hold for all values of 𝛼, but it will hold for all 𝛼 smaller than some 𝛼₀ = 𝛼₀(𝑀).

1.1 Analytical Solution for the Circle
We now prove the formula for the solution to the minimization problem in the case of 𝑀 = S¹ (Section 3.1). Recall that in Section 3.1 we have identified S¹ with the real numbers modulo 2𝜋. We claimed the solution where the source point is at 𝑥 = 0 is given by the formula

  𝑢𝛼(𝑥) = { 𝑥̂,                               if 0 ≤ 𝑥̂ ≤ 𝐿
          { 𝜋 − 𝛼/2 − (𝑥̂ − 𝜋)²/(2𝛼),         if 𝐿 ≤ 𝑥̂ ≤ 2𝜋 − 𝐿    (5)
          { 2𝜋 − 𝑥̂,                           if 𝑥̂ ≥ 2𝜋 − 𝐿

(recall 𝑥̂ is the unique representative of 𝑥 in the interval [0, 2𝜋) modulo 2𝜋), where 𝐿(𝛼) is given by

  𝐿(𝛼) = (𝜋 − 𝛼)₊.    (6)

We are going to show the function 𝑢𝛼 is a solution to the obstacle problem (S1), which in this case reduces to

  Minimize_𝑢   (𝛼/2) ∫₀^{2𝜋} |𝑢′|² 𝑑𝑥 − ∫₀^{2𝜋} 𝑢(𝑥) 𝑑𝑥
  subject to   𝑢(𝑥) ≤ 𝑑(𝑥, 0) for all 𝑥 ∈ [0, 2𝜋].    (S2)

The function 𝑑(𝑥, 0) for 𝑥 ∈ [0, 2𝜋] is equal to

  𝑑(𝑥, 0) = min{|𝑥|, |𝑥 − 2𝜋|} = 𝜋 − |𝑥 − 𝜋|.

The classical theory for the obstacle problem (see [Petrosyan et al. 2012]) says that if a function 𝑢 is of class 𝐶^{1,1} in the entire domain (i.e. its gradient is Lipschitz continuous), is of class 𝐶² in the interior of {𝑢 < 𝑑(𝑥, 0)}, and solves

  𝛼𝑢″ + 1 ≥ 0 in the sense of distributions,
  𝛼𝑢″ + 1 = 0 in the interior of {𝑢 < 𝑑(𝑥, 0)},

then that function 𝑢 will be the solution to the obstacle problem (S2). Let us verify this in our current example. First, by direct computation we can see 𝑢 has a continuous derivative in (0, 2𝜋), and

  𝑢′(𝑥) = { 1,               if 0 ≤ 𝑥̂ ≤ 𝐿
          { −(𝑥̂ − 𝜋)/𝛼,     if 𝐿 ≤ 𝑥̂ ≤ 2𝜋 − 𝐿    (S3)
          { −1,              if 𝑥̂ ≥ 2𝜋 − 𝐿

We emphasize this function is continuous even at 𝑥 = 𝐿, 2𝜋 − 𝐿. Next, this function is twice differentiable away from 𝑥 = 𝐿, 2𝜋 − 𝐿, and in particular it is twice differentiable in the set {𝑢 < 𝑑(𝑥, 0)} = (𝐿, 2𝜋 − 𝐿). We have

  𝑢″(𝑥) = { 0,        if 0 ≤ 𝑥̂ < 𝐿
          { −1/𝛼,    if 𝐿 < 𝑥̂ < 2𝜋 − 𝐿    (S4)
          { 0,        if 𝑥̂ > 2𝜋 − 𝐿

This shows that 𝛼𝑢″ + 1 = 0 in {𝑢 < 𝑑(𝑥, 0)}. Lastly, since 𝑢 is differentiable everywhere, this means that as a measure the function 𝑢″ is equal to −(1/𝛼) 𝜒_(𝐿, 2𝜋−𝐿)(𝑥), 𝜒 denoting the indicator function. It follows that 𝛼𝑢″ + 1 ≥ 0 in the sense of distributions. This proves that 𝑢 is indeed the solution to the obstacle problem, and in turn, of problem (3) in the case 𝑀 = S¹ and 𝐸 = {0}.

That is, the function 𝑢𝛼 is of class 𝐶² away from 𝑥 = 𝐿, 2𝜋 − 𝐿 and 𝑢𝛼″ is continuous everywhere except at 𝐿, 2𝜋 − 𝐿. From here it follows that 𝛼𝑢𝛼″ + 1 is well defined as a measure, that it is always ≥ 0, and that it is exactly zero in the interval (𝐿, 2𝜋 − 𝐿). This shows that 𝑢𝛼 solves

  min{𝛼𝑢𝛼″ + 1, (𝜋 − |𝑥 − 𝜋|) − 𝑢𝛼} = 0.

Lastly, we prove the function 𝑢𝛼(𝑥, 𝑦) is indeed a metric.

1.2 Proof that 𝑢𝛼(𝑥, 𝑦) is a metric in S¹
By definition, we have 𝑢𝛼(𝑥, 𝑦) = 𝑢𝛼(𝑥 − 𝑦) where 𝑢𝛼(𝑥) is as in (5). From here it follows that 𝑢𝛼(𝑥, 𝑦) ≥ 0 for all 𝑥, 𝑦, since the function in (5) is non-negative. Moreover, the function in (5) only vanishes at 𝑥 equal to an integer multiple of 2𝜋 (since in that case 𝑥̂ = 0, per the definition of 𝑥̂), therefore 𝑢𝛼(𝑥, 𝑦) = 0 only if 𝑥 − 𝑦 is a multiple of 2𝜋, i.e. only if 𝑥 and 𝑦 correspond to the same point in S¹.

To prove symmetry, simply observe that in (5) we have 𝑢𝛼(𝑥) = 𝑢𝛼(−𝑥), so 𝑢𝛼(𝑥 − 𝑦) = 𝑢𝛼(𝑦 − 𝑥).

It remains to show 𝑢𝛼(𝑥, 𝑦) satisfies the triangle inequality. That is, we have to prove that for any 𝑥, 𝑦, 𝑧 we have

  𝑢𝛼(𝑥 − 𝑦) ≤ 𝑢𝛼(𝑥 − 𝑧) + 𝑢𝛼(𝑧 − 𝑦).

By translation invariance (i.e. by symmetry) we may assume without loss of generality that 𝑧 = 0. Then, all we have to prove is that for all 𝑥, 𝑦 we have

  𝑢𝛼(𝑥 − 𝑦) ≤ 𝑢𝛼(𝑥) + 𝑢𝛼(−𝑦).

Now, fix 𝑦 and consider the function

  𝑣(𝑥) := 𝑢𝛼(𝑥 − 𝑦) − 𝑢𝛼(−𝑦).

What we want to prove amounts to the inequality 𝑣(𝑥) ≤ 𝑢𝛼(𝑥). The function 𝑣(𝑥) satisfies the inequality

  𝑣(𝑥) ≤ 𝑑(𝑥, 0), for all 𝑥,

as well as the differential inequality

  𝛼𝑣″ + 1 ≥ 0.

One well known characterization of the function 𝑢𝛼(𝑥) is that it is the largest function having these two properties. In this case we conclude that 𝑢𝛼(𝑥) ≥ 𝑣(𝑥), and the triangle inequality is proved.

Figure 1: The analytical solution for the regularized geodesic distance using the Dirichlet regularizer on the disk (top). We also display the gradient norm, |∇𝑢| (bottom). Note the different smoothing regions, whose width depends on 𝛼.

1.3 Analytical Solution for the Disk
To illustrate how our method handles other choices for the source set 𝐸, we take the flat 2D disk and consider the regularized distance to the boundary of the disk. Using polar coordinates, we take 𝐸 = {(𝑟, 𝜃) | 𝑟 = 𝑅}, and minimize

  (𝛼/2) ∫₀^𝑅 ∫₀^{2𝜋} |∇𝑢(𝑟, 𝜃)|² 𝑑𝜃 𝑑𝑟 − ∫₀^𝑅 ∫₀^{2𝜋} 𝑢(𝑟, 𝜃) 𝑟 𝑑𝜃 𝑑𝑟

with the constraints

  𝑢(𝑅, 𝜃) ≤ 0 for all 𝜃 ∈ [0, 2𝜋], and
  |∇𝑢(𝑟, 𝜃)| ≤ 1 for all 𝑟 ∈ [0, 𝑅), 𝜃 ∈ [0, 2𝜋].

In this case, the solution is:

  𝑢𝛼(𝑥) = { −|𝑥|²/(4𝛼) + 𝑅 − 𝛼,   if |𝑥| ≤ 2𝛼
          { 𝑅 − |𝑥|,              if 2𝛼 < |𝑥| ≤ 𝑅    (S5)

To prove this formula, we proceed similarly to the case of S¹. This function is of class 𝐶^{1,1}; first note that its gradient is given by

  ∇𝑢𝛼(𝑥) = { −𝑥/(2𝛼),   if |𝑥| ≤ 2𝛼
           { −𝑥/|𝑥|,    if 2𝛼 < |𝑥| ≤ 𝑅

and this vector-valued function is continuous across |𝑥| = 2𝛼 (in fact, it is Lipschitz continuous). Moreover, for the Laplacian of 𝑢𝛼 we have

  Δ𝑢𝛼(𝑥) = { −1/𝛼,     if |𝑥| ≤ 2𝛼
           { −1/|𝑥|,   if 2𝛼 < |𝑥| ≤ 𝑅

Accordingly, 𝛼Δ𝑢𝛼 + 1 ≥ 0 everywhere in the disk {|𝑥| < 𝑅} and 𝛼Δ𝑢𝛼 + 1 = 0 exactly when 𝑢𝛼 < 𝑑(𝑥, 𝐸) = 𝑅 − |𝑥|. From here we conclude that the function 𝑢𝛼 given by (S5) is the solution.

Figure 1 shows the behavior of the function on the disk. Observe that as in the case of the circle, the solution has two regimes: one where it matches the distance function exactly, and one where it solves Poisson's equation Δ𝑢 = −1/𝛼. In this case, this results in the cone singularity being replaced by a concave quadratic function that is differentiable and only has a discontinuity in its second derivative.

2 EXISTENCE AND UNIQUENESS OF THE MINIMIZER (SECTION 3)
In our results, 𝑀 is a compact 𝐶^∞ submanifold of 𝑁-dimensional Euclidean space ℝ^𝑁, from where it inherits its Riemannian structure. The function 𝐹(𝜉, 𝑥), 𝐹: ℝ^𝑁 × ℝ^𝑁 → ℝ, is assumed of class 𝐶¹ in (𝜉, 𝑥). We make two further structural assumptions on 𝐹:

1) There are 𝑝 > 1 and 𝑐₀, 𝐶₀ positive such that

  𝑐₀|𝜉|^𝑝 ≤ 𝐹(𝜉, 𝑥) ≤ 𝐶₀|𝜉|^𝑝,   ∀ 𝑥, 𝜉 ∈ ℝ^𝑁.

2) The function 𝐹 is strictly convex in the first argument. This is meant in the following sense: given vectors 𝜉₁ ≠ 𝜉₂ and 𝑠 ∈ (0, 1), then we have the strict inequality for all 𝑥

  𝐹((1 − 𝑠)𝜉₁ + 𝑠𝜉₂, 𝑥) < (1 − 𝑠)𝐹(𝜉₁, 𝑥) + 𝑠𝐹(𝜉₂, 𝑥).
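Returning to the analytical solution of Section 1.1: as a numerical sanity check (a sketch of ours, not code from the paper), projected Gauss–Seidel applied to a discretization of the obstacle problem (S2) reproduces formula (5), and the formula itself makes the uniform convergence of Section 3 explicit, since sup |𝑢𝛼 − 𝑑(·, 0)| = 𝛼/2 (attained at 𝑥 = 𝜋):

```python
import numpy as np

def u_exact(x, alpha):
    """Formula (5) on S^1 = [0, 2*pi), source at 0, with L = (pi - alpha)_+."""
    L = max(np.pi - alpha, 0.0)
    mid = np.pi - alpha / 2.0 - (x - np.pi) ** 2 / (2.0 * alpha)
    return np.where(x <= L, x, np.where(x >= 2 * np.pi - L, 2 * np.pi - x, mid))

n, alpha = 100, 0.5
h = 2 * np.pi / n
x = h * np.arange(n)
obstacle = np.pi - np.abs(x - np.pi)   # d(x, 0) on the circle

# projected Gauss-Seidel for the discrete version of (S2):
# minimize (alpha/2) sum h |u'|^2 - sum h u,  subject to u <= d(x, 0)
u = np.zeros(n)
for _ in range(15000):
    for i in range(n):
        # unconstrained stationarity: u_i = avg of neighbors + h^2/(2*alpha)
        free = 0.5 * (u[(i - 1) % n] + u[(i + 1) % n]) + h * h / (2.0 * alpha)
        u[i] = min(free, obstacle[i])   # project onto the obstacle constraint

err = np.max(np.abs(u - u_exact(x, alpha)))   # O(h^2)-small
gaps = [np.max(obstacle - u_exact(x, a)) for a in (0.4, 0.2, 0.1)]
print(err < 0.05, gaps)  # gaps -> [0.2, 0.1, 0.05], i.e. exactly alpha/2
```

The shrinking gap α/2 is the concrete rate behind Theorem 3.2's statement that 𝑢𝛼 converges uniformly to the distance as 𝛼 → 0⁺, at least on S¹.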
From these assumptions it follows in particular that 𝐹(𝜉, 𝑥) ≥ 0 for all 𝜉 and 𝑥, and 𝐹(𝜉, 𝑥) = 0 only if 𝜉 = 0. Observe that these assumptions include all 𝐹's of the form

  𝐹(𝜉, 𝑥) = |𝐴(𝑥)𝜉|^𝑝,

where 𝑝 > 1 and 𝐴(𝑥) is a smooth positive definite matrix whose eigenvalues are uniformly bounded away from zero and infinity.

Now we prove the existence and uniqueness for the general minimization problem. The problem (see problem (3)) is a constrained minimization problem in the Sobolev space 𝑊^{1,𝑝}(𝑀):

  Minimize_𝑢   𝛼 ∫_𝑀 𝐹(∇𝑢, 𝑥) dVol(𝑥) − ∫_𝑀 𝑢 dVol(𝑥)
  subject to   𝑢 ∈ 𝑊^{1,𝑝}(𝑀)
               |∇𝑢(𝑥)| ≤ 1 for all 𝑥 ∈ 𝑀 \ 𝐸
               𝑢(𝑥) ≤ 0 for all 𝑥 ∈ 𝐸.    (3)

The space 𝑊^{1,𝑝}(𝑀) (1 ≤ 𝑝 < ∞) is defined as follows:

  𝑊^{1,𝑝}(𝑀) := { 𝑢: 𝑀 → ℝ | ∇𝑢 exists as a distribution and ∫_𝑀 |𝑢|^𝑝 + |∇𝑢|^𝑝 𝑑𝑥 < ∞ }.

Here, 𝐸 ⊂ 𝑀 is a non-empty closed subset of 𝑀. For us, the case of chief interest is when 𝐸 = {𝑥₀} for a given 𝑥₀ ∈ 𝑀. In what follows, we will denote the objective functional by 𝐽𝛼,

  𝐽𝛼(𝑢) := 𝛼 ∫_𝑀 𝐹(∇𝑢, 𝑥) dVol(𝑥) − ∫_𝑀 𝑢 dVol(𝑥).

We now prove the first existence and uniqueness theorem stated in Section 3.

Theorem 3.1. There is a unique minimizer for problem (3).

Proof. Consider a minimizing sequence {𝑢ₖ}ₖ. First, we claim that without loss of generality, we can assume that for each 𝑘,

  max_𝑀 𝑢ₖ ≥ 0.    (S7)

Indeed, if for some 𝑘₀ we had that 𝑢ₖ₀ is non-positive in all of 𝑀, it would follow that

  𝐽𝛼(𝑢ₖ₀) ≥ 0 = 𝐽𝛼(0).

Thus the minimizing sequence will remain a minimizing sequence if we replace every non-positive element of the sequence with the zero function.

Henceforth, we assume our sequence 𝑢ₖ is such that (S7) holds for all 𝑘. In this case, as the 𝑢ₖ are all 1-Lipschitz, it follows that

  𝑢ₖ(𝑥) ≥ max_𝑀 𝑢ₖ − diam(𝑀) ≥ −diam(𝑀) for all 𝑘.

A similar argument—using that max_{𝑥∈𝐸} 𝑢ₖ(𝑥) ≤ 0 for all 𝑘—provides the inequality in the opposite direction. In conclusion,

  ‖𝑢ₖ‖_{𝐿^∞(𝑀)} ≤ diam(𝑀) for all 𝑘.

It follows that the sequence {𝑢ₖ}ₖ is 1-Lipschitz and uniformly bounded. Then, by the Arzelà–Ascoli theorem there is a subsequence 𝑢ₖ′ and a function 𝑢* in 𝑀 such that

  ‖𝑢ₖ′ − 𝑢*‖_{𝐿^∞(𝑀)} → 0 as 𝑘 → ∞.

Moreover, without loss of generality (we can always pass to another subsequence where this holds) we also have

  ∇𝑢ₖ′ → ∇𝑢* in weak-𝐿^𝑝(𝑀).

In particular,

  lim_𝑘 ∫_𝑀 𝑢ₖ′(𝑥) dVol(𝑥) = ∫_𝑀 𝑢*(𝑥) dVol(𝑥).

In this case, due to the weak convergence of ∇𝑢ₖ′, as well as the convexity of 𝐹 in the first argument and its two-sided pointwise bounds, we conclude that

  lim inf_𝑘 ∫_𝑀 𝐹(∇𝑢ₖ′, 𝑥) dVol(𝑥) ≥ ∫_𝑀 𝐹(∇𝑢*, 𝑥) dVol(𝑥).

Putting everything together, we have shown that

  lim inf_𝑘 𝐽𝛼(𝑢ₖ′) ≥ 𝐽𝛼(𝑢*).

Since the 1-Lipschitz constraint as well as the constraint 𝑢ₖ(𝑥) ≤ 0 for all 𝑥 ∈ 𝐸 are preserved by the uniform convergence, it follows that 𝑢* is admissible. Moreover, the last liminf inequality says 𝑢* achieves the minimum of 𝐽𝛼 among all admissible functions, so 𝑢* is a minimizer for the problem.

The uniqueness follows from the strict convexity through a standard argument, which we review for completeness: suppose there are two separate minimizers 𝑢₀ and 𝑢₁. For each 𝑠 ∈ [0, 1] let

  𝑢ₛ = (1 − 𝑠)𝑢₀ + 𝑠𝑢₁.

From the convexity assumption on 𝐹 we know that

  𝐹(∇𝑢ₛ, 𝑥) ≤ (1 − 𝑠)𝐹(∇𝑢₀, 𝑥) + 𝑠𝐹(∇𝑢₁, 𝑥).

In terms of the functional, this gives us

  𝐽𝛼(𝑢ₛ) ≤ (1 − 𝑠)𝐽𝛼(𝑢₀) + 𝑠𝐽𝛼(𝑢₁)

by merely integrating the pointwise inequality, and at the same time

  𝐽𝛼(𝑢ₛ) ≥ (1 − 𝑠)𝐽𝛼(𝑢₀) + 𝑠𝐽𝛼(𝑢₁),

since 𝑢₀ and 𝑢₁ are minimizers and 𝑢ₛ is admissible for every 𝑠 ∈ [0, 1]. This can only happen if

  𝐹(∇𝑢ₛ, 𝑥) = (1 − 𝑠)𝐹(∇𝑢₀, 𝑥) + 𝑠𝐹(∇𝑢₁, 𝑥)   ∀ 𝑥 ∈ 𝑀,

and by the strict convexity assumption, this can only happen if ∇𝑢₀ = ∇𝑢₁ at every 𝑥. As 𝑢₀ and 𝑢₁ have to agree at least at one point, 𝑥₀, this means that 𝑢₀ = 𝑢₁. □

3 CONVERGENCE TO THE GEODESIC DISTANCE (SECTION 3)
In this section we prove the convergence theorem from Section 3.

Theorem 3.2. The functions 𝑢𝛼 converge uniformly to 𝑑(·, 𝐸), i.e., ‖𝑢𝛼 − 𝑑(·, 𝐸)‖_{𝐿^∞} → 0 as 𝛼 → 0⁺.

Proof. We make use of an elementary but often used fact in nonlinear PDE that says that compactness plus uniqueness of the limiting points of a sequence in turn guarantees convergence of the whole sequence.¹ Concretely, and in two parts, we are going to show 1) that given any sequence 𝛼ₖ → 0⁺ we can pass to a subsequence 𝛼ₖ′ which converges uniformly to some function 𝑢*,

  ‖𝑢_{𝛼ₖ′} − 𝑢*‖_{𝐿^∞(𝑀)} → 0 as 𝑘 → ∞,

¹ If the sequence failed to converge as a whole, there would be some 𝛿 > 0 and an infinite subsequence such that 𝑢_{𝛼ₖ′} stays a distance at least 𝛿 from 𝑑(𝑥, 𝑥₀), leading to a contradiction.
and subsequently that 2) whatever function u* is obtained as one of these limits will have to be a minimizer for problem (1). Since that problem has d(x, x_0) as its unique solution, u* = d(x, x_0) for all such subsequences.

Indeed, first, note that the 1-Lipschitz constraint and the fact that u_{α_k}(x_0) = 0 for all k imply that the sequence u_{α_k} lies in a compact subset of C(M). Therefore, there is a subsequence α_k' and a 1-Lipschitz function u* ∈ C(M) such that

    u_{α_k'} → u* uniformly in M.

Now, let φ : M → R be a 1-Lipschitz function such that φ(x_0) ≤ 0. Since φ is admissible for (3) for every α_k', it follows that

    −∫_M u_{α_k'} dVol(x) ≤ α_k' ∫_M F(∇u_{α_k'}, x) dVol(x) − ∫_M u_{α_k'} dVol(x)
                          ≤ α_k' ∫_M F(∇φ, x) dVol(x) − ∫_M φ dVol(x).

On the other hand, by the 1-Lipschitz constraint,

    ∫_M F(∇φ, x) dVol(x) ≤ C Vol(M).

This means in particular that

    −∫_M u* dVol(x) = lim_k −∫_M u_{α_k'} dVol(x)
                    ≤ lim_k ( α_k' ∫_M F(∇φ, x) dVol(x) − ∫_M φ dVol(x) )
                    = −∫_M φ(x) dVol(x).

This shows that u* solves the minimization problem (1), and this problem has a known solution, so u* = d(·, E). In summary, we have shown that given any sequence α_k → 0 there is a subsequence α_k' such that u_{α_k'} → d(·, E) uniformly in M, finishing the proof. □

4 ANALYTICAL SOLUTION FOR THE HESSIAN REGULARIZER IN 1D (HESSIAN REGULARIZER, SECTION 4.2)

In 1D, the Hessian energy is the same as the bilaplacian energy, and the optimization problem is:

    Minimize_u  (α/2) ∫_0^{2π} |u''(x)|² dx − ∫_0^{2π} u(x) dx
    subject to  |u'(x)| ≤ 1 for all x ∈ (0, 2π)
                u(0) ≤ 0.

The minimizer u(x) is (for x ∈ [0, 2π]):

    u(x) = { x                                                           if 0 ≤ x ≤ π − c
           { (1/(24α))(x − π)⁴ − (c²/(4α))(x − π)² + π − c + 5c⁴/(24α)   if π − c ≤ x ≤ π + c
           { 2π − x                                                      if x ≥ π + c

where c = ∛(3α). Note that the function and the smoothing region are different than the ones in the Dirichlet regularizer case.

5 EXISTENCE AND UNIQUENESS OF A MINIMIZER (PRODUCT MANIFOLD FORMULATION, SECTION 6)

In Section 6 we introduced the following problem.

    Minimize    α E_{M×M}(U) − ∫_{M×M} U(x, y) dVol(x, y)
    subject to  U ∈ W^{1,2}(M × M)
                |∇₁U(x, y)| ≤ 1 in {(x, y) | x ≠ y}                      (12)
                |∇₂U(x, y)| ≤ 1 in {(x, y) | x ≠ y}
                U(x, y) ≤ 0 on {(x, y) | x = y}

Theorem 6.1. There is a unique minimizer in problem (12).

Proof. At the big-picture level this proof is basically the same as that of existence and uniqueness of a minimizer for problem (3). We only highlight the points where things are different.

Therefore, take a minimizing sequence U_k. Arguing similarly as before, we can assume without loss of generality that

    max_{M×M} U_k ≥ 0 for all k.

Now, U_k is 1-Lipschitz in each variable x and y separately, so, if for some k the pair (x_0, y_0) is a point where U_k(x_0, y_0) ≥ 0, then for all other (x, y) we have

    U_k(x, y) ≥ U_k(x, y_0) − d(y, y_0)
              ≥ U_k(x_0, y_0) − d(x, x_0) − d(y, y_0)
              ≥ U_k(x_0, y_0) − 2 diam(M).

On the other hand, since U_k(x, x) ≤ 0 for all x, we have, using the 1-Lipschitz condition in the second variable,

    U_k(x, y) ≤ U_k(x, x) + d(x, y) ≤ d(x, y) ≤ diam(M).

Putting all this together, we have

    ∥U_k∥_{L∞(M×M)} ≤ 2 diam(M) for all k.

This means our sequence {U_k}_k is uniformly bounded and equicontinuous (in fact, uniformly Lipschitz) in the compact space M × M. By the Arzela-Ascoli theorem, there is a subsequence U_k' and a function U* in M × M such that U_k' converges uniformly to U*. In particular, this function U* will be Lipschitz, and the inequalities

    |∇₁U*(x, y)| ≤ 1 and |∇₂U*(x, y)| ≤ 1

hold for a.e. (x, y) ∈ M × M. Moreover, U*(x, x) ≤ 0 for all x ∈ M. This shows that U* is admissible for problem (12). At the same time, the uniform convergence of the U_k and the compactness of M × M imply that

    lim_k ∫_{M×M} U_k'(x, y) dVol(x, y) = ∫_{M×M} U*(x, y) dVol(x, y).

Lastly, passing to another subsequence U_k'' if needed, we have

    lim inf_k ∫_{M×M} |∇₁U_k''|² + |∇₂U_k''|² dVol(x, y) ≥ ∫_{M×M} |∇₁U*|² + |∇₂U*|² dVol(x, y).

From here it follows that U* is a minimizer for problem (12). Uniqueness is proved again making use of the convexity of the functional. □
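The closed-form 1-D minimizer from Section 4 above can be sanity-checked numerically. The sketch below (our own check, with an arbitrarily chosen α) verifies continuity at the breakpoints π ± c and the gradient bound |u'| ≤ 1:

```python
import math

# Numerical sanity check of the closed-form 1-D minimizer of Section 4.
# alpha is an arbitrary test value; c = (3*alpha)^(1/3) as in the text.
alpha = 0.05
c = (3 * alpha) ** (1 / 3)
pi = math.pi

def u(x):
    if x <= pi - c:                      # left linear piece, u(x) = x
        return x
    if x <= pi + c:                      # smoothed quartic piece
        return ((x - pi) ** 4 / (24 * alpha)
                - c ** 2 * (x - pi) ** 2 / (4 * alpha)
                + pi - c + 5 * c ** 4 / (24 * alpha))
    return 2 * pi - x                    # right linear piece

# mismatch at the two breakpoints (both one-sided values equal pi - c)
jump_left = abs(u(pi - c) - (pi - c))
jump_right = abs(u(pi + c) - (pi - c))
# largest |u'| over a dense sample of [0, 2*pi], by forward differences
grads = [(u(x + 1e-6) - u(x)) / 1e-6 for x in [0.01 * k for k in range(628)]]
max_grad = max(abs(g) for g in grads)
```

Both jumps vanish, and the sampled gradient magnitude never exceeds 1: in the middle piece u'(x) = (x − π)³/(6α) − c²(x − π)/(2α), which equals ±1 exactly at the breakpoints because c³ = 3α.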
A consequence of the uniqueness theorem is the symmetry of the minimizers U_α:

Theorem 6.2. The function U_α(x, y) is symmetric in x and y.

Proof. This is a direct consequence of the uniqueness of the minimizer to problem (12) as well as the symmetry of the problem under the transformation (x, y) ↦ (y, x). Indeed, given α define the function

    v_α(x, y) := U_α(y, x).

Then it is clear that v_α is still admissible for problem (12) and

    α E_{M×M}(U_α) − ∫_{M×M} U_α(x, y) dVol(x, y) = α E_{M×M}(v_α) − ∫_{M×M} v_α(x, y) dVol(x, y),

so that v_α also achieves the minimum of problem (12). Since there is only one minimizer, v_α = U_α and the lemma follows. □

6 CONVERGENCE TO THE FULL GEODESIC DISTANCE (PRODUCT MANIFOLD FORMULATION, SECTION 6)

In this section we will make use of the following characterization of the geodesic distance function d(x, y).

    Minimize    −∫_{M×M} v(x, y) dVol(x, y)
    subject to  |∇₁v(x, y)| ≤ 1 in {(x, y) | x ≠ y}                      (S8)
                |∇₂v(x, y)| ≤ 1 in {(x, y) | x ≠ y}
                v(x, y) ≤ 0 on {(x, y) | x = y}

The problem (S8) clearly resembles problem (12). Accordingly, the proof of Theorem 6.3 (just as the proof of Theorem 3.2, Supplemental 3) will consist of using compactness and showing that all limit points of the sequence U_α as α → 0 have to be just d(x, y).

Theorem 6.3. As α → 0, we have

    ∥d(x, y) − U_α(x, y)∥_{L∞(M×M)} → 0.

Proof. Let α_k → 0 be any sequence. The sequence {U_{α_k}}_k is uniformly Lipschitz; accordingly, there is a subsequence {U_{α_k'}}_k and a function U*(x, y) such that U_{α_k'} → U* uniformly in M × M as k → ∞. We are going to show U* must be the geodesic distance.

Indeed, let Φ : M × M → R be any smooth admissible function for problem (12). Then, for any α_k' > 0 we have

    −∫_{M×M} U_{α_k'}(x, y) dVol(x, y) ≤ α_k' E_{M×M}(U_{α_k'}) − ∫_{M×M} U_{α_k'}(x, y) dVol(x, y)
                                       ≤ α_k' E_{M×M}(Φ) − ∫_{M×M} Φ(x, y) dVol(x, y).

Taking the limit α_k' → 0 with the last inequality, it follows that

    −∫_{M×M} U*(x, y) dVol(x, y) ≤ −∫_{M×M} Φ(x, y) dVol(x, y).

Since Φ is an arbitrary admissible function, it follows that U* = d(x, y). This proves that U_{α_k'} converges uniformly to d(x, y), and in turn, by the same argument as in the proof of Theorem 3.2 (Supplemental 3), that U_α → d(x, y) as α → 0. □

Figure 2: Non-quadratic regularizer example. We compare the results of the quadratic Dirichlet energy (left) to those of the squared L∞ norm on two meshes with different orientations (right). See the text for details.

7 NON-QUADRATIC REGULARIZERS

The functional F(ξ, x) used for the regularizer term in the minimization problem (3) allows for quite general norms or powers of norms. Using a non-isotropic norm from the ambient space, one obtains F's that have no dependence on x but manifest behavior that is sensitive to the position and orientation of M. We illustrate this with some numerical experiments with

    F(ξ, x) = ∥ξ∥²_∞,    (S9)

which satisfies all of the assumptions for Theorems 3.1 and 3.2 (as discussed at the start of Supplemental 2). Although (S9) involves a square, it is different from a quadratic polynomial on the entries of ξ. In fact, F in (S9) is not differentiable for all ξ; this can be seen by writing F in terms of the components of the vector ξ: if ξᵀ = (ξ₁, ξ₂, ξ₃), then

    ∥ξ∥²_∞ = max{ξ₁², ξ₂², ξ₃²}.

This is a convex function of (ξ₁, ξ₂, ξ₃); it is smooth in the open set {|ξ_i| ≠ |ξ_j| if i ≠ j}, but it is not differentiable along the boundary of this set.

In Fig. 2, we see what the regularized geodesic distances with E using (S9) look like for different orientations. Observe the anisotropic effects as the pipe is rotated, as well as the flatter level curves.

8 DISTANCES TO A FIXED SOURCE ADMM DERIVATION (SECTION 5.4)

In Section 5.4 we present the augmented Lagrangian used to derive the ADMM algorithm.
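The non-differentiability of (S9) discussed in Section 7 can be illustrated numerically. A minimal sketch (the helper names are ours, not from the paper's code): at a point with a unique maximizing coordinate the function is differentiable, while at a tie the subdifferential contains more than one element, and any element attaining the max yields a valid subgradient.

```python
import numpy as np

# The squared L-infinity regularizer of Eq. (S9): F(xi) = max_i xi_i^2.
def F_inf_sq(xi):
    return float(np.max(xi ** 2))

# One valid subgradient: all weight on a coordinate attaining the max.
def subgradient(xi):
    i = int(np.argmax(np.abs(xi)))
    g = np.zeros_like(xi)
    g[i] = 2.0 * xi[i]
    return g

smooth_pt = np.array([1.0, -3.0, 2.0])   # unique maximizer -> differentiable
kink_pt = np.array([2.0, -2.0, 0.0])     # tie -> subdifferential is a segment
val = F_inf_sq(smooth_pt)                # max(1, 9, 4) = 9
grad = subgradient(smooth_pt)            # all weight on the second entry
```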
    L(u, y, z) = −A_Vᵀ u + (α/2) uᵀ W u + Σ_{f∈F} χ(|z_f| ≤ 1)
               + Σ_{f∈F} a_f y_fᵀ ((Gu)_f − z_f) + (ρ√A/2) Σ_{f∈F} a_f |(Gu)_f − z_f|²,

where a_f is the area of the face f, ρ ∈ R is the penalty parameter, and y ∈ R^{3m} is the dual variable or Lagrange multiplier.

The ADMM algorithm iterates between three stages [Boyd et al. 2011, Section 3]: u-minimization, z-minimization, and updating the dual variable. Using this formulation, both u and z have closed-form solutions. The ADMM algorithm alternates between these three steps:

    (1) u^{k+1} = [αW + ρ√A W_D]⁻¹ [A_V − D y^k + ρ√A D z^k]
    (2) z_f^{k+1} = Proj( (1/(ρ√A)) y_f^k + (Gu^{k+1})_f , B³ ) for all f ∈ F
    (3) y^{k+1} = y^k + ρ√A (Gu^{k+1} − z^{k+1}),

where Proj(z_f ∈ R³, B³) is equal to z_f/|z_f| if |z_f| > 1, and z_f otherwise.

We consider our algorithms to have converged when ∥r^k∥ ≤ ε^{pri} and ∥s^k∥ ≤ ε^{dual}, where r^k and s^k are the primal and dual residuals, resp., and ε^{pri}, ε^{dual} are the primal and dual feasibility tolerances, resp. These quantities can be computed as follows:

    r^k = √M_F G u^k − √M_F z^k
    s^k = ρ D(z^k − z^{k−1})
    ε^{pri} = √(3m) ε^{abs} √A + ε^{rel} √A max(∥√M_F G u^k∥, ∥√M_F z^k∥)
    ε^{dual} = √n ε^{abs} √A + ε^{rel} √A ∥D y∥.

In all our experiments, we set ε^{abs} = 5·10⁻⁶, ε^{rel} = 10⁻², and ρ = 2. We define ρ, the residuals and the feasibility tolerances such that they are scale-invariant, as explained in Section 7.1.

In addition, to accelerate the convergence, we also use the varying penalty parameter and over-relaxation, exactly as described in [Boyd et al. 2011, Sections 3.4.1, 3.4.3].

9 SYMMETRIC ALL-PAIRS ADMM DERIVATION (SECTION 6.2)

Our discrete optimization problem, as introduced in Section 6.2, is:

    Minimize_U  −A_Vᵀ U A_V + (α/2) Tr( M_V (Uᵀ W_D U + U W_D Uᵀ) )
    subject to  |(∇U_{(i,·)})_f| ≤ 1 for all f ∈ F, i ∈ V
                |(∇U_{(·,j)})_f| ≤ 1 for all f ∈ F, j ∈ V
                U_{i,i} ≤ 0 for all i ∈ V,

where X_{i,j} denotes the (i, j)-th element of a matrix X, X_{(i,·)} denotes the i-th row, and X_{(·,j)} the j-th column.

Our derivation is based on the consensus problem [Boyd et al. 2011, Section 7], where we split U into two variables X, R ∈ R^{n×n} to represent the gradient along the columns and rows, and use a consensus auxiliary variable U ∈ R^{n×n} to ensure consistency. We also add two auxiliary variables Z, Q ∈ R^{3m×n} representing the gradients along the columns and rows, i.e., GX, GR. We enforce the diagonal constraint on the consensus variable U to avoid solving huge linear systems. This leads to the following optimization problem:

    Minimize_U  −(1/2) A_Vᵀ X A_V − (1/2) A_Vᵀ R A_V + (α/2) Tr( M_V (Xᵀ W_D X + Rᵀ W_D R) )
                + Σ_{f∈F} Σ_{i∈V} χ(|(Z_{(·,i)})_f| ≤ 1) + Σ_{f∈F} Σ_{i∈V} χ(|(Q_{(·,i)})_f| ≤ 1)
    subject to  (G X_{(·,i)})_f = (Z_{(·,i)})_f for all f ∈ F, i ∈ V
                (G R_{(·,i)})_f = (Q_{(·,i)})_f for all f ∈ F, i ∈ V
                X = U
                R = Uᵀ
                U_{i,i} ≤ 0 for all i ∈ V
                U ≥ 0,

where χ(|(Z_{(·,i)})_f| ≤ 1) = ∞ if |(Z_{(·,i)})_f| > 1 and 0 otherwise.

The corresponding augmented Lagrangian is:

    L(U, Y, Z) = −(1/2) A_Vᵀ X A_V − (1/2) A_Vᵀ R A_V + (α/2) Tr( M_V (Xᵀ W_D X + Rᵀ W_D R) )
               + Σ_{f∈F} Σ_{i∈V} χ(|(Z_{(·,i)})_f| ≤ 1) + Σ_{f∈F} Σ_{i∈V} χ(|(Q_{(·,i)})_f| ≤ 1)
               + Tr( M_V ( Yᵀ M_F (GX − Z) + Sᵀ M_F (GR − Q) ) )
               + (ρ₁√A/2) Tr( M_V (GX − Z)ᵀ M_F (GX − Z) )
               + (ρ₁√A/2) Tr( M_V (GR − Q)ᵀ M_F (GR − Q) )
               + Tr( Hᵀ (X − U) M_V ) + Tr( Kᵀ (R − Uᵀ) M_V )
               + (ρ₂√(A⁻¹)/2) Tr( ((X − U) M_V)ᵀ (X − U) M_V )
               + (ρ₂√(A⁻¹)/2) Tr( ((R − Uᵀ) M_V)ᵀ (R − Uᵀ) M_V ),

where ρ₁, ρ₂ ∈ R are the penalty parameters, and Y, S ∈ R^{3m×n}, H, K ∈ R^{n×n} are the dual variables.

The ADMM algorithm for this optimization problem consists of three stages. In the first stage, we optimize for X, R. In the second step, we minimize the auxiliary variables Z, Q, U. Finally, in the third step, we update the dual variables added in the augmented Lagrangian.

    (1) X^{k+1} = [ (α + ρ₁√A) W_D + ρ₂√(A⁻¹) M_V ]⁻¹
                  [ (1/2) A_V A_Vᵀ M_V⁻¹ − D Y^k + ρ₁√A D Z^k − M_V H^k + ρ₂√(A⁻¹) M_V U^k ]

        R^{k+1} = [ (α + ρ₁√A) W_D + ρ₂√(A⁻¹) M_V ]⁻¹
                  [ (1/2) A_V A_Vᵀ M_V⁻¹ − D S^k + ρ₁√A D Q^k − M_V K^k + ρ₂√(A⁻¹) M_V U^{kᵀ} ]
    (2) (Z^{k+1}_{(·,i)})_f = Proj( (1/(ρ₁√A)) (Y^k_{(·,i)})_f + (G X^{k+1}_{(·,i)})_f , B³ ) for all i ∈ V, f ∈ F

        (Q^{k+1}_{(·,i)})_f = Proj( (1/(ρ₁√A)) (S^k_{(·,i)})_f + (G R^{k+1}_{(·,i)})_f , B³ ) for all i ∈ V, f ∈ F

        U^{k+1} = max( (H^k + K^{kᵀ}) / (2ρ₂√(A⁻¹)) + (X^{k+1} + R^{(k+1)ᵀ}) / 2 , 0 )

        U^{k+1}_{i,i} = 0 for all i ∈ V

    (3) Y^{k+1} = Y^k + ρ₁√A (G X^{k+1} − Z^{k+1})
        S^{k+1} = S^k + ρ₁√A (G R^{k+1} − Q^{k+1})
        H^{k+1} = H^k + ρ₂√(A⁻¹) (X^{k+1} − U^{k+1})
        K^{k+1} = K^k + ρ₂√(A⁻¹) (R^{k+1} − U^{(k+1)ᵀ})

Similarly to Section 5.4, the first steps include solving a linear system with the same coefficient matrix, which can be pre-factored to accelerate the computation.

We consider our algorithms to have converged when ∥r^k∥ ≤ ε^{pri} and ∥s^k∥ ≤ ε^{dual}, where r^k and s^k are the primal and dual residuals, resp., and ε^{pri}, ε^{dual} are the primal and dual feasibility tolerances, resp. These quantities can be computed as follows:

    r^k = √M_F G X^k − √M_F Z^k
    s^k = ρ₁ D(Z^k − Z^{k−1})
    ε^{pri} = √(3m) ε^{abs} √A + ε^{rel} max(∥√M_F G X^k∥, ∥√M_F Z^k∥)
    ε^{dual} = √n ε^{abs} A² + ε^{rel} ∥√M_F D Y∥,

and equivalently for R, Q, S. The residuals for the consensus part are as follows:

    r₁^k = M_V (X^k − U^k) M_V
    r₂^k = M_V (R^k − U^{kᵀ}) M_V
    s^k = ρ₂ M_V (U^k − U^{k−1}) M_V
    ε₁^{pri} = √n ε^{abs} + ε^{rel} max(∥M_V X^k M_V∥, ∥M_V U^k M_V∥)
    ε₂^{pri} = √n ε^{abs} √A + ε^{rel} max(∥M_V R^k M_V∥, ∥M_V U^{kᵀ} M_V∥)
    ε^{dual} = √n ε^{abs} √A + (ε^{rel}/2) (∥√M_V H √M_V∥ + ∥√M_V K √M_V∥).

We set ε^{abs} = 10⁻⁶, ε^{rel} = 2·10⁻⁴ and ρ₁ = ρ₂ = 2 in all our experiments. Note that the penalty variables, the residuals, and the feasibility thresholds are all defined to be scale-invariant, as explained in Section 7.1.

In addition, to accelerate the convergence, we also use the varying penalty parameter and over-relaxation, exactly as described in [Boyd et al. 2011, Sections 3.4.1, 3.4.3].

10 ADDITIONAL RESULTS

10.1 Additional Examples

Figure 3 shows more examples of our fixed source (Alg. 1) method for the meshes in Table 1, Section 5.4.

Table 1: Timings in seconds for the All-Pairs distance computation on the cat model, |F| = 3898, Figure 10 (main paper).

          Heat - Symmetrized [sec]   Fixed-Source - Symmetrized [sec]   All-Pairs [sec]
    (a)   0.77                       101.625                            1124.312
    (b)   0.77                       59.583                             837.063
    (c)   0.76                       37.4745                            794.9549

10.2 Representation Error in a Spectral Reduced Basis

Smoother functions are better represented in a reduced basis comprised of the eigenvectors of the Laplace-Beltrami operator; namely, they require fewer basis functions for the same representation error. In Fig. 4 (left) we compare the representation error in a reduced basis of our approach, the heat method, and fast marching. Note that our approach, in both the fixed source (Alg. 1) and the all-pairs (Alg. 2) formulations, achieves the lowest error (indicating that the functions are smoothest in this sense). Similarly, we compare the symmetric formulations by symmetrizing our fixed source method, the heat method, and the Fast Marching results; see Fig. 4 (right). Here we project on the eigenvectors of the LB operator on the product manifold, and here as well we achieve a lower error than the alternatives.

The experiment was done on the "pipe" mesh, where we computed the full distance matrix between all pairs of vertices. For Fig. 4 (left) we projected each column of the distance matrix (i.e., the distance from a single source vertex), and computed the mean of the representation errors. For Fig. 4 (right), we projected the full distance matrix on the eigenvectors of the LB operator on the product manifold.

10.3 Additional Results on Various Triangulations

To further demonstrate the robustness of our algorithm, we show additional results on low-quality triangulations in Figure 5. The leftmost column corresponds to a uniform triangulation and the other three to non-uniform triangulations. Note that the results remain similar for the different triangulations.

10.4 Timings for the All-Pairs Formulation

Table 1 shows the running times for computing the all-pairs distances on the cat model. We compare the heat method, computed using Geometry Central [Sharp et al. 2019] (using the precomputation speed-up), our fixed source formulation (Alg. 1), and our all-pairs approach (Alg. 2). Note that Alg. 2 has a higher memory overhead than Alg. 1, because we are working with large dense matrices. Therefore, in our non-optimized Matlab implementation we may run out of memory for large meshes. We believe that a more careful implementation can improve this considerably.

10.5 Quadratic Finite Elements

Piecewise linear elements are not good approximators of the geodesic distance near the source. Intuitively, for coarse meshing, instead of generating round isolines, PL elements lead to polygonal isolines; see e.g. the output on the disk in Fig. 6 (left).
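The representation-error measurement of Section 10.2 can be mimicked in a few lines. In the sketch below (our own minimal stand-in: a 1-D path-graph Laplacian replaces the Laplace-Beltrami operator, with an identity mass matrix), the smoother of two functions incurs the lower error in a truncated eigenbasis:

```python
import numpy as np

# Relative L2 error of representing f in the span of the first k
# eigenvectors of a Laplacian L (identity mass matrix for simplicity).
def representation_error(L, f, k):
    _, vecs = np.linalg.eigh(L)              # eigenvalues in ascending order
    B = vecs[:, :k]                          # reduced spectral basis
    return np.linalg.norm(f - B @ (B.T @ f)) / np.linalg.norm(f)

n = 100
# 1-D path-graph Laplacian standing in for the Laplace-Beltrami operator
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
x = np.linspace(0.0, 1.0, n)
smooth = np.sin(np.pi * x)                   # smooth bump: few modes suffice
rough = np.sign(np.sin(7 * np.pi * x))       # discontinuous square wave
err_smooth = representation_error(L, smooth, 10)
err_rough = representation_error(L, rough, 10)
```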
Our approach generalizes to piecewise quadratic elements in a straightforward way. Specifically, we replace the mass matrix, gradient and Laplacian with the corresponding matrices for quadratic elements [Boksebeld and Vaxman 2022, Appendix B]. The result is shown in Figure 6 (center). Note that the quadratic elements lead to a better approximation (compare with the analytical solution, Fig. 6 (right)).

Figure 3: The distance isolines and gradient norm with Dirichlet regularization for various meshes.

Figure 4: Comparison of the representation error of the Dirichlet regularized distances in a reduced spectral basis. See the text for details.

Figure 5: The regularized geodesic distance using the Dirichlet regularizer for various triangulations. For each triangulation, we display the connectivity (top), the isolines of the distance (middle) and the gradient norm, |∇u| (bottom). Note that the results are qualitatively similar for all the triangulations.

Figure 6: (left) Piecewise linear elements are not good approximators of geodesic distances near the source. (center) Our approach easily generalizes to quadratic elements. Note the improved accuracy (compare with the analytic solution (right)).