A Convex Optimization Framework for Regularized Geodesic Distances
Figure 1: Geodesic distances (a) may not have desired properties such as smoothness. We present a general framework for
regularized geodesic distances. Shown here are three examples of regularizers: (b,c) smoothness, (d) alignment to a vector
field, and (e) boundary invariance.
ABSTRACT
We propose a general convex optimization problem for computing regularized geodesic distances. We show that under mild conditions on the regularizer the problem is well posed. We propose three different regularizers and provide analytical solutions in special cases, as well as corresponding efficient optimization algorithms. Additionally, we show how to generalize the approach to the all-pairs case by formulating the problem on the product manifold, which leads to symmetric distances. Our regularized distances compare favorably to existing methods, in terms of robustness and ease of calibration.

ACM Reference Format:
Michal Edelstein, Nestor Guillen, Justin Solomon, and Mirela Ben-Chen. 2023. A Convex Optimization Framework for Regularized Geodesic Distances. In Special Interest Group on Computer Graphics and Interactive Techniques Conference Conference Proceedings (SIGGRAPH '23 Conference Proceedings), August 6–10, 2023, Los Angeles, CA, USA. ACM, New York, NY, USA, 11 pages. https://doi.org/10.1145/3588432.3591523
regions for the solution 𝑢𝛼, one where it solves a Poisson equation, and one where it solves the Eikonal equation. Refer to [Caffarelli and Friedman 1979] and the book [Petrosyan et al. 2012] for more background. For the Riemannian setting, this problem was first studied in [Générau et al. 2022a], discussed further below.

Even under these general conditions (see the supplemental for detailed assumptions), we show that (a) the optimization problem has a minimizer for every 𝛼 > 0, (b) the minimizer is unique, and (c) the minimizers converge uniformly to the exact geodesic distance as 𝛼 → 0. We gather these results under the next two theorems.

Theorem 3.1. There is a unique minimizer for problem (3).

Theorem 3.2. Let 𝑢𝛼 denote the minimizer to the optimization problem (3). Then, as 𝛼 → 0,

    max_{𝑥∈𝑀} |𝑑(𝑥, 𝐸) − 𝑢𝛼(𝑥)| → 0,

where 𝑑(𝑥, 𝐸) is the geodesic distance from 𝑥 to the set 𝐸.

The proofs of Theorem 3.1 and Theorem 3.2 are in Supp. 2 and 3. The unique minimizer 𝑢𝛼 provided by Thm. 3.1 is Lipschitz continuous by construction. In addition, it has two distinct regimes in the respective regions {|∇𝑢𝛼| = 1} and {|∇𝑢𝛼| < 1}. For general second-order elliptic regularizers, 𝑢𝛼 will be smooth in the interior of {|∇𝑢𝛼| < 1}; there 𝑢𝛼 will solve the unconstrained Euler-Lagrange equation corresponding to the objective functional in (3), which would be a nonlinear elliptic equation. Accordingly, standard elliptic theory guarantees that 𝑢𝛼 will be smooth in the region where the gradient constraint is not active. In the other region, {|∇𝑢𝛼| = 1}, the function will solve the Eikonal equation in the viscosity sense.

In the case 𝐹 = |∇𝑢|² (Sec. 3.1) it was proved in [Générau et al. 2022a] that for all 𝛼 < 𝛼₀ (𝛼₀ depending on the geometry of Ω) the minimizer 𝑢𝛼 agrees with the geodesic distance function in {|∇𝑢| = 1}. Therefore, 𝑢𝛼 coincides with the distance function everywhere save for a region around the cut locus of 𝑥₀. In this region 𝑢𝛼 solves the Poisson equation Δ𝑢𝛼 = −1/𝛼. As shown in [Générau et al. 2022a], as 𝛼 → 0, the open set {|∇𝑢𝛼| < 1} shrinks and converges to the cut locus. We expect the theorems in [Générau et al. 2022a] to hold for general elliptic functionals (such as the 𝑝-Laplace equation), but this entails pointwise estimates for nonlinear elliptic equations on manifolds beyond the scope of this work.

Our regularizing functionals (Sec. 3.1, 4) correspond to elliptic energy functionals that promote smoothness and other desirable properties (non-negativity, symmetries) in the minimizer of (3). Accordingly, the significance of Thm. 3.2 is in providing a smooth approximation to the geodesic distance function. Moreover, this approximation is in the 𝐿∞ metric, so the approximation error can be made small for all 𝑥 ∈ 𝑀 provided 𝛼 is sufficiently small.

3.1 Dirichlet Regularizer
A natural regularizer (also considered in [Générau et al. 2022a,b]) is the Dirichlet energy

    E_Dir(𝑢) = (1/2) ∫_𝑀 |∇𝑢(𝑥)|² dVol(𝑥).     (4)

First, we look at the simple example where the solution to problem (3) using the Dirichlet energy (4) is given by an explicit formula. We analyze the case where 𝑀 is the circle S¹. Fix 𝛼 > 0. We parameterize S¹ via the map 𝑥 ↦ (cos(𝑥), sin(𝑥)), i.e., by real numbers modulo 2𝜋. Moreover, we will use the group structure of S¹, which is given by R/2𝜋Z. In this case the problem amounts to looking for a 2𝜋-periodic function 𝑢(𝑥) that minimizes

    (𝛼/2) ∫₀^{2𝜋} |𝑢′(𝑥)|² 𝑑𝑥 − ∫₀^{2𝜋} 𝑢(𝑥) 𝑑𝑥

among all such 2𝜋-periodic functions satisfying the constraints 𝑢(0) ≤ 0 and |𝑢′(𝑥)| ≤ 1 for all 𝑥 ∈ (0, 2𝜋).
The minimizer 𝑢(𝑥) for this problem has a simple analytical expression, given as follows: First, given 𝑥 ∈ R we set 𝑥̂ = 𝑥 mod 2𝜋. Then,

    𝑢𝛼(𝑥) = 𝑥̂                                if 0 ≤ 𝑥̂ ≤ 𝐿
    𝑢𝛼(𝑥) = 𝜋 − 𝛼/2 − (𝑥̂ − 𝜋)²/(2𝛼)     if 𝐿 ≤ 𝑥̂ ≤ 2𝜋 − 𝐿     (5)
    𝑢𝛼(𝑥) = 2𝜋 − 𝑥̂                          if 2𝜋 − 𝐿 ≤ 𝑥̂ ≤ 2𝜋

Here, 𝐿 = 𝐿(𝛼) is defined by

    𝐿(𝛼) = (𝜋 − 𝛼)₊.     (6)

This expression approximates the distance to the point corresponding to 𝑥 = 0. Observe that for 𝛼 > 𝜋 the functions 𝑢𝛼 are all equal to 𝑢𝜋. In general, the solution 𝑢𝛼 has two regimes or regions: one region where it matches the geodesic distance function exactly, and one where it solves Poisson's equation 𝑢𝛼″ = −1/𝛼 and therefore matches a concave parabola, with the condition that 𝑢𝛼 is 𝐶¹ across these two regions. This is the standard condition for solutions to the obstacle problem (see [Petrosyan et al. 2012]), which is intrinsically related to (3) in this particular case (see Supplemental 1 for further discussion).
The inset figure shows the behavior of the function on the circle. Note the smoothing region, whose width depends on the smoothing parameter 𝛼 and matches (5)-(6).
Thanks to the group structure of S¹ and the invariance of the problem under the group action (in other words, by symmetry), we obtain a corresponding formula when the source point is any point 𝑦 ∈ S¹. In particular, if 𝑢𝛼(·, 𝑦) represents the solution to the problem with source at 𝑦, then

    𝑢𝛼(𝑥, 𝑦) = 𝑢𝛼(𝑥 − 𝑦), ∀ 𝑥, 𝑦.     (7)

We highlight a notable fact about these functions in a theorem.

Theorem 3.3. For every 𝛼 > 0, the function 𝑢𝛼(𝑥, 𝑦) given by (5)-(7) defines a metric on S¹.

This theorem is proved in the supplemental. For a general 𝑀, it is not clear whether one can expect 𝑢𝛼(𝑥, 𝑦) to be a metric. At the very least, Theorem 3.3 may generalize to other groups or homogeneous spaces. In Section 6 we discuss an extension of problem (3) to the product manifold 𝑀 × 𝑀 that treats all pairs (𝑥, 𝑦) at once, producing an approximation 𝑈𝛼(𝑥, 𝑦) that we can prove will be symmetric in (𝑥, 𝑦). We do not prove that this general formulation satisfies the triangle inequality, but Figure 12 provides some encouragement in that direction.
Another simple example for which we can compute the analytical solution is the disk. We discuss it in the Supplemental, Section 1.
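For concreteness, the closed-form solution (5)-(6) is easy to evaluate and sanity-check numerically. The following sketch (Python; the function name and tolerances are our choices, not from the paper) evaluates 𝑢𝛼 on the circle and verifies the 𝐶¹ matching at the free boundary 𝐿(𝛼), as well as the symmetry 𝑢𝛼(𝑥) = 𝑢𝛼(2𝜋 − 𝑥):

```python
import math

def u_alpha(x, alpha):
    """Closed-form regularized distance on the circle S^1 to the source x = 0,
    following Eqs. (5)-(6): exact geodesic distance away from the cut locus,
    and a concave parabola (Poisson regime, u'' = -1/alpha) near the antipode."""
    xh = x % (2 * math.pi)
    L = max(math.pi - alpha, 0.0)  # L(alpha) = (pi - alpha)_+, Eq. (6)
    if xh <= L:
        return xh                  # |u'| = 1 regime: matches geodesic distance
    if xh >= 2 * math.pi - L:
        return 2 * math.pi - xh    # same regime, other side of the circle
    return math.pi - alpha / 2 - (xh - math.pi) ** 2 / (2 * alpha)

alpha = 0.5
L = math.pi - alpha

# value continuity at the free boundary x = L
assert abs(u_alpha(L - 1e-9, alpha) - u_alpha(L + 1e-9, alpha)) < 1e-6

# C^1 matching: the slope of the parabola at x = L is also 1
h = 1e-6
slope = (u_alpha(L + h, alpha) - u_alpha(L - h, alpha)) / (2 * h)
assert abs(slope - 1.0) < 1e-4

# symmetry of the solution about the source: u(x) = u(2*pi - x)
for x in [0.3, 1.0, 2.0, 3.0]:
    assert abs(u_alpha(x, alpha) - u_alpha(2 * math.pi - x, alpha)) < 1e-12
```

The two regimes in the code mirror the discussion above: the linear pieces are the active-constraint (Eikonal) region, and the parabola is the inactive region where the Euler-Lagrange equation holds.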
SIGGRAPH ’23 Conference Proceedings, August 6–10, 2023, Los Angeles, CA, USA Michal Edelstein, Nestor Guillen, Justin Solomon, and Mirela Ben-Chen
where V are the vertices, F are the faces, and 𝑛 = |V|, 𝑚 = |F|. We use a piecewise-linear discretization of functions on the mesh with one value per vertex; hence, functions are represented as vectors of length 𝑛. Vector fields are piecewise constant per face and can be represented in the trivial basis in R³ or in a local basis per face; we represent them using vectors in R^{3m} or R^{2m}, respectively. Vertex and face areas are denoted by 𝐴_V ∈ R^n, 𝐴_F ∈ R^m, where the area of a vertex is a third of the sum of the areas of its adjacent faces. The diagonal matrices 𝑀_V ∈ R^{n×n}, 𝑀_F ∈ R^{3m×3m} contain 𝐴_V, 𝐴_F on their corresponding diagonals (repeated 3 times for 𝐴_F). The total area of the mesh is 𝐴. We use standard differential operators [Botsch et al. 2010, Chapter 3]. In particular, our formulation requires the cotangent Laplacian 𝑊_D ∈ R^{n×n}, the gradient G ∈ R^{3m×n}, and the divergence D = Gᵀ𝑀_F ∈ R^{n×3m}.

5.2 Optimization Problem
In this setting, the optimization problem in Eq. (3) becomes:

    Minimize_𝑢   −𝐴_Vᵀ𝑢 + 𝛼𝐹_M(G𝑢)
    subject to   |(G𝑢)_f| ≤ 1 for all 𝑓 ∈ F     (10)
                 𝑢_i ≤ 0 for all 𝑖 ∈ 𝐸,

where 𝐸 here is a subset of vertex indices where the distance should be 0, and 𝐹_M is a convex function that acts on the gradient of 𝑢. Note that this problem is convex whenever 𝐹_M is convex, since the objective will be convex, the first constraint is a second-order cone constraint, and the second constraint is a linear inequality.

5.3 Quadratic Objectives
In practice, the objectives we consider are quadratic, leading to the following optimization problem:

    Minimize_𝑢   −𝐴_Vᵀ𝑢 + (𝛼/2)𝑢ᵀ𝑊𝑢
    subject to   |(G𝑢)_f| ≤ 1 for all 𝑓 ∈ F     (11)
                 𝑢_i ≤ 0 for all 𝑖 ∈ 𝐸.

Different functionals correspond to different weight matrices 𝑊. To use the Dirichlet energy in Eq. (4), we set 𝑊 to the cotangent Laplacian matrix 𝑊_D. For the vector field alignment objective in Eq. (8) we construct the anisotropic smoothing matrix 𝑊_V = D(𝐼 + 𝛽𝑉̂)G, where 𝐼 ∈ R^{3m×3m}, and 𝑉̂ ∈ R^{3m×3m} is block diagonal, with the 3×3 block of face 𝑓 ∈ F given by 𝑉_f𝑉_fᵀ (see also Section 4.1). Finally, to use the Hessian regularizer in Eq. (9) we take 𝑊 to be the curved Hessian matrix [Stein et al. 2020], denoted by 𝑊_H.

5.4 Efficient Optimization Algorithm
We derive an alternating direction method of multipliers (ADMM) algorithm [Boyd et al. 2011] to solve the optimization problem in Eq. (11) efficiently. We reformulate the optimization problem, adding an auxiliary variable 𝑧 ∈ R^{3m} representing the gradient of the distance function G𝑢. This leads to:

    Minimize_{𝑢,𝑧}   −𝐴_Vᵀ𝑢 + (𝛼/2)𝑢ᵀ𝑊𝑢 + Σ_{f∈F} 𝜒(|𝑧_f| ≤ 1)
    subject to       (G𝑢)_f = 𝑧_f for all 𝑓 ∈ F
                     𝑢_i ≤ 0 for all 𝑖 ∈ 𝐸,

where 𝜒(·) is the indicator function, i.e., 𝜒(|𝑧_f| ≤ 1) = ∞ if |𝑧_f| > 1 and 0 otherwise.
The corresponding augmented Lagrangian is:

    𝐿(𝑢, 𝑦, 𝑧) = −𝐴_Vᵀ𝑢 + (𝛼/2)𝑢ᵀ𝑊𝑢 + Σ_{f∈F} 𝜒(|𝑧_f| ≤ 1) + Σ_{f∈F} 𝑎_f 𝑦_fᵀ((G𝑢)_f − 𝑧_f) + (𝜌√𝐴/2) Σ_{f∈F} 𝑎_f |(G𝑢)_f − 𝑧_f|²,

where 𝑎_f is the area of the face 𝑓, 𝜌 ∈ R is the penalty parameter, and 𝑦 ∈ R^{3m} is the dual variable or Lagrange multiplier.
The ADMM algorithm consists of iteratively repeating three steps [Boyd et al. 2011, Section 3]. First, we perform 𝑢-minimization, then 𝑧-minimization, and finally the dual variable, 𝑦, is updated. The full derivation of the three steps appears in Supplemental 8, and the resulting algorithm in Algorithm 1.

ALGORITHM 1: ADMM.
    input:  𝑀, 𝛼, 𝑊, 𝐸
    output: 𝑢 ∈ R^n - distance to 𝐸
    initialize 𝜌 ∈ R                  // penalty parameter
    𝑧 ← 0^{3m}                        // auxiliary variable, G𝑢 = 𝑧
    𝑦 ← 0^{3m}                        // dual variable
    𝜌 ← 𝜌√𝐴
    while algorithm did not converge do      // See Supp. 8
        solve (𝛼𝑊 + 𝜌𝑊_D)𝑢 = 𝐴_V − D𝑦 + 𝜌D𝑧  s.t. 𝑢_E = 0
        𝑧_f ← Proj((1/𝜌)𝑦_f + (G𝑢)_f, B³) for all 𝑓 ∈ F
        𝑦 ← 𝑦 + 𝜌(G𝑢 − 𝑧)
    end

Algorithm details. Note that the first step, the 𝑢-minimization, includes solving a linear system with a fixed coefficient matrix, which is pre-factored and used for all the ADMM iterations, as well as all distance computations. To enforce the constraint 𝑢_E ≤ 0, we eliminate the relevant columns from the linear system and solve for 𝑢_i for all 𝑖 ∈ V \ 𝐸. We project intermediate 𝑧 values to the unit ball, i.e., Proj(𝑧_f ∈ R³, B³) is equal to 𝑧_f/|𝑧_f| if |𝑧_f| > 1, and 𝑧_f otherwise. We use the stopping criteria suggested by Boyd et al. [2011, Section 3.3.1], formulated for our problem. See Supplemental 8 for details.

Efficiency. We compare our approach with a solution implemented using commercial software, i.e., CVX [Grant and Boyd 2008, 2014] with the MOSEK solver [ApS 2019]. Table 1 provides a comparison of the running times, measured on a desktop machine with an Intel Core i9. For the optimization using CVX and MOSEK, we report both the total running times and the solver running times. Our optimization scheme yields at least an order of magnitude improvement.
Figure 4 shows our result for multiple sources: three isolated points, a vertex sampling of a path, and the boundary. Additional results are shown in Section 7 and in the Supplemental.
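The structure of Algorithm 1 can be illustrated on a 1D analogue, where the mesh is a uniformly sampled interval and the source set is its left endpoint. The sketch below (Python/NumPy; the problem size, 𝜌, and iteration count are our choices, not the paper's) follows the same three ADMM steps: a linear solve for 𝑢 with a fixed, prefactorable matrix, a pointwise projection for 𝑧, and a dual update for 𝑦. With the Dirichlet regularizer, the result should match the 1D analytical picture: 𝑢(𝑥) ≈ 𝑥 away from the far end, with a parabolic cap of width 𝛼 there.

```python
import numpy as np

# 1D analogue of Algorithm 1: regularized distance on [0, T] from x = 0,
# Dirichlet regularizer, ADMM splitting z = Gu.
T, n, alpha, rho = 1.0, 101, 0.1, 1.0
h = T / (n - 1)
m = n - 1                                      # number of "faces" (edges)

G = (np.eye(n)[1:] - np.eye(n)[:-1]) / h       # per-edge gradient, (m, n)
a_e = np.full(m, h)                            # edge "areas" (lengths)
a_v = np.full(n, h)                            # lumped vertex areas
W = G.T @ (a_e[:, None] * G)                   # Dirichlet stiffness matrix
D = G.T * a_e                                  # divergence, (n, m)

# the u-step matrix is fixed: factor once (dense inverse for brevity here;
# a real implementation would prefactor a sparse Cholesky decomposition)
A_red = np.linalg.inv((alpha + rho) * W[1:, 1:])

u = np.zeros(n)
z = np.zeros(m)                                # auxiliary variable, Gu = z
y = np.zeros(m)                                # dual variable
for _ in range(5000):
    rhs = a_v - D @ y + rho * (D @ z)
    u[1:] = A_red @ rhs[1:]                    # solve with u[0] = 0 eliminated
    g = G @ u
    z = np.clip(g + y / rho, -1.0, 1.0)        # projection onto |z_e| <= 1
    y = y + rho * (g - z)

# compare with the 1D analytical picture: u = x away from the far end,
# and a parabolic cap u = T - alpha/2 - (T - x)^2 / (2 alpha) near x = T
xs = np.linspace(0.0, T, n)
expected = np.where(xs <= T - alpha, xs,
                    T - alpha / 2 - (T - xs) ** 2 / (2 * alpha))
assert np.max(np.abs(u - expected)) < 0.05
assert np.max(np.abs(G @ u)) <= 1.05           # gradient bound, up to residual
```

The three loop lines correspond one-to-one to the three lines of Algorithm 1's while-loop; on a triangle mesh, the only changes are the mesh-based operators and the per-face projection onto the 3D unit ball.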
Figure 4: Distance to multiple sources: (a) 3 points, (b) a vertex sampling of a path, and (c) the boundary. We show the distance and the gradient norm.

Table 1: Running times for computing the distance from a single source.

Theorem 6.3. As 𝛼 → 0, we have

    ∥𝑑(𝑥, 𝑦) − 𝑈𝛼(𝑥, 𝑦)∥_{𝐿∞(𝑀×𝑀)} → 0.

Analogously to Theorem 3.2, this last theorem guarantees the functions 𝑈𝛼 provide a uniform approximation to the full geodesic distance 𝑑(𝑥, 𝑦) provided 𝛼 is chosen adequately.
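On the circle, the closed-form solution (5)-(6) makes this kind of uniform convergence concrete: for 𝛼 < 𝜋 the deviation from the exact distance is largest at the cut locus 𝑥̂ = 𝜋, where it equals 𝛼/2, so the sup error decays linearly in 𝛼. A short numerical check (Python; the grid size and tolerance are our choices):

```python
import math

def u_alpha(x, alpha):
    # closed-form single-source solution on S^1, Eqs. (5)-(6)
    xh = x % (2 * math.pi)
    L = max(math.pi - alpha, 0.0)
    if xh <= L:
        return xh
    if xh >= 2 * math.pi - L:
        return 2 * math.pi - xh
    return math.pi - alpha / 2 - (xh - math.pi) ** 2 / (2 * alpha)

def geodesic(x):
    xh = x % (2 * math.pi)
    return min(xh, 2 * math.pi - xh)   # exact distance to the source x = 0

grid = [2 * math.pi * i / 1000 for i in range(1001)]
for alpha in [0.8, 0.4, 0.2, 0.1]:
    sup_err = max(abs(geodesic(x) - u_alpha(x, alpha)) for x in grid)
    # the L-infinity error is attained at the cut locus and equals alpha / 2
    assert abs(sup_err - alpha / 2) < 1e-9
```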
visible noise in the gradient norm. The all-pairs formulation (Alg. 2) is both symmetric and has a smooth gradient norm.

7 EXPERIMENTAL RESULTS

7.1 Scale-Invariant Parameters
The parameter 𝛼 controls the size of the smoothing area. Therefore, scaling the mesh requires changing its value. To avoid that, and to enable more intuitive control of the smoothing area, we define a scale-invariant smoothing parameter 𝛼̂ that is independent of the mesh area or resolution. For the Dirichlet and vector field alignment energies, we achieve that by setting 𝛼 = 𝛼̂√𝐴. For the Hessian energy, we set 𝛼 = 𝛼̂√(𝐴³). We note that the parameter 𝛽 is already scale-invariant, i.e., 𝛽 = 𝛽̂. For our ADMM algorithms (Sec. 5, 6.2) to be scale-invariant, we normalize the penalty variables, residual and feasibility tolerances. Figure 7 demonstrates this. We uniformly rescale an input mesh, and use the same smoothing parameter 𝛼̂. Note that while the distances are different between the meshes, the scale of the smoothed region, i.e., the area where the norm of the gradient is not 1, is similar. For all our experiments we use the scale-invariant formulation, unless stated otherwise.

7.2 Comparison
In Fig. 8 we compare our Dirichlet regularized distances to "Geodesics in Heat" [Crane et al. 2013] and regularized EMD [Solomon et al. 2014], with two smoothing parameters for each. In addition, we show the exact geodesics computed using MMP [Mitchell et al. 1987] for reference. Note that while all approaches lead to a smoother solution compared to the exact geodesics, our approach is more stable, in the sense that the same scale of regularization is observed on all meshes, for the same parameters. Thus, we conjecture that for our approach the regularization parameter is easier to tune.
Table 2 compares the running times, and the maximal error w.r.t. the MMP distance (as a % of the maximal distance). The distances are computed with Geometry Central [Sharp et al. 2019] for the heat method and MMP, and with a Matlab implementation of our ADMM Algorithm 1. Note that both our method and the heat method have comparable errors, and for both smoother solutions have larger errors. A timing comparison for the all-pairs case is in the Supp. We additionally show in the supplemental material a comparison of the representation error of the different approaches in a reduced basis (providing a quantitative measure of smoothness).

7.3 Robustness
Meshing. We demonstrate that our method is invariant to meshing, and is applicable to non-uniform meshing without modifying 𝛼. Fig. 9 compares our result with the heat method, for 3 remeshings of the same shape. Note that for the heat method with the default smoothing parameter (left), the half-half mesh fails. This is remedied by using a different parameter (center); however, there are still differences between the different meshings (note especially the gradient norm). Using our approach (right) we get very similar distance functions and gradient norms for all 3 meshings. Fig. 5 in the supplemental shows additional results with bad triangulations.
Noise. Fig. 10 shows robustness to noise and bad meshing. We add Gaussian normal noise with 𝜎 = 0.5, 0.8 of the mean edge length, and use a remesh with highly anisotropic triangles and self-intersections. We show the distances and the gradient norm, all with the same 𝛼. Note that the results are consistent between the different meshes.
Symmetry error. Our Alg. 1 is not symmetric. Figure 11 shows the symmetry error (1/√𝐴)|𝑑(𝑥, 𝑦) − 𝑑(𝑦, 𝑥)| for 3 source points for our method and the heat method. Note that for all three points, the symmetry error is higher for the heat method.
Triangle inequality error. Our method does not guarantee that the triangle inequality holds, while EMD does. However, experimentally it does hold for higher values of 𝛼̂. Fig. 12 shows the triangle inequality error of a fixed pair of vertices with respect to every other vertex. We compare the heat method, Alg. 1, and Alg. 2. For the first two, we symmetrize the computed distance matrix. We show the results for three 𝑡 values for the heat method, and three values of 𝛼̂ for our approach. We visualize the distance from the chosen point using isolines. Note the difference in the error scaling between the two methods. Further, note that for higher values of 𝛼̂ our approach has no violations of the triangle inequality. Table 3 shows the percentage of triplets violating the triangle inequality for the same data. Note that also when considering all the possible triplets, higher values of the smoothing parameter lead to fewer violations.

7.4 Volumetric Distances
Our framework can compute distances on tetrahedral meshes. We replace the standard mass matrix, gradient, and divergence operators with their volumetric versions, as implemented in gptoolbox [Jacobson et al. 2021]. Figure 13 demonstrates our Dirichlet regularized volumetric distance on a human shape (a). We show the distance from a point on the shoulder on two planar cuts (b,c), and the distance from the boundary using two 𝛼̂ values (d,e).

7.5 Example Application: Distance Function for Knitting
Some approaches for generating knitting instructions for 3D models require a function whose isolines represent the knitting rows [Edelstein et al. 2022; Narayanan et al. 2018]. Using the geodesic distance to an initial point (or a set of points) is a good choice since the stitch heights are constant, as are the distances between isolines. On the other hand, this choice limits the design freedom significantly, as designers and knitters have no control over the knitting direction on different areas of the shape. Using regularized distances with vector alignment solves this problem. For example, see Figure 2 and the C model. Using geodesic distances to the starting point will result in a non-symmetric shape (a) (see also [Edelstein et al. 2022, Figure 10]). By adding 2 directional constraints (f), we obtain a function whose isolines respect the symmetries of the shape. Note that for the regularized distances, the gradient norm is no longer 1 everywhere, and thus the distances between isolines are not constant. This can be addressed when knitting by using stitches of different heights. Figure 14 shows how adding alignment to the teddy's arms and legs aligns the knitting rows with the creases. Crease alignment leads to better shaping [Edelstein et al. 2022, Section 9.3], and prevents over-smoothing of the knit model.
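The violation statistics reported in Table 3 amount to counting, over vertex triplets, how often 𝑑(𝑥, 𝑦) > 𝑑(𝑥, 𝑘) + 𝑑(𝑘, 𝑦) for a symmetrized distance matrix. A minimal sketch of such a check (Python/NumPy; the example matrices below are synthetic, not the paper's data):

```python
import numpy as np

def violation_percentage(D, tol=1e-9):
    """Percentage of ordered triplets (i, j, k) with
    D[i, j] > D[i, k] + D[k, j] + tol, for a symmetric distance matrix D."""
    lhs = D[:, :, None]                  # lhs[i, j, k] = D[i, j]
    rhs = D[:, None, :] + D[None, :, :]  # D[i, k] + D[j, k] (= D[k, j] by symmetry)
    return 100.0 * np.mean(lhs > rhs + tol)

# synthetic example: exact geodesic distances on a discretized circle (a metric)
n = 40
idx = np.arange(n)
gap = np.abs(idx[:, None] - idx[None, :])
D = np.minimum(gap, n - gap) * (2 * np.pi / n)
assert violation_percentage(D) == 0.0     # a true metric: no violations

# corrupting one symmetric entry introduces violations
D_bad = D.copy()
D_bad[0, 1] = D_bad[1, 0] = D.max() * 3
assert violation_percentage(D_bad) > 0.0
```

The broadcasted comparison enumerates all 𝑛³ ordered triplets at once; for the mesh sizes used in the paper's experiments, a batched or sampled variant of the same check would be used instead.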
Table 2: Comparison of run-times (T) and the maximal error (𝜖) of the computed distance (in % of the maximal distance) for the models in Figure 8.

Table 3: Percentage of triplets violating the triangle inequality for the data in Fig. 12. Note that for our approach higher values of 𝛼̂ lead to fewer violations.

        Heat - Symmetrized   Fixed-Source - Symmetrized   All-Pairs
  (a)   1.84                 2.04                         1.23
  (b)   2.20                 1.28                         0.88
  (c)   2.20                 0.25                         0.09

8 CONCLUSIONS AND FUTURE WORK
We presented a novel framework for constructing regularized geodesic distances on triangle meshes. We demonstrated the versatility of our approach by presenting three regularizers, analyzing them, and providing an efficient optimization scheme, as well as a symmetric formulation on the product manifold. The theoretical results and experiments in this work raise a number of interesting questions for future research. One of them is whether the functions 𝑈𝛼(𝑥, 𝑦) provide metrics in general, i.e., whether they satisfy the triangle inequality; we are not aware of results where geodesic distances can be regularized to have smooth metrics in 𝑀 × 𝑀. Another theoretical question involves convergence of the minimizers in the Hessian energy-regularized problem, as discussed in Section 4. Algorithmically, the ADMM algorithm from Section 5.4 easily generalizes to other convex functions 𝐹_M (e.g., 𝐿¹ norms) in Equation 10; recent theory on nonconvex ADMM also suggests that Algorithm 1 can be effective for nonconvex regularizers, possibly requiring large augmentation weights 𝜌 [Attouch et al. 2010; Gao et al. 2020; Hong et al. 2016; Ouyang et al. 2020; Stein et al. 2022; Wang et al. 2019; Zhang et al. 2019; Zhang and Shen 2019].

ACKNOWLEDGMENTS
Michal Edelstein acknowledges funding from the Jacobs Qualcomm Excellence Scholarship and the Zeff, Fine and Daniel Scholarship. Nestor Guillen was supported by the National Science Foundation through grant DMS-2144232. The MIT Geometric Data Processing group acknowledges the generous support of Army Research Office grants W911NF2010168 and W911NF2110293, of Air Force Office of Scientific Research award FA9550-19-1-031, of National Science Foundation grants IIS-1838071 and CHS-1955697, from the CSAIL Systems that Learn program, from the MIT–IBM Watson AI Laboratory, from the Toyota–CSAIL Joint Research Center, from a gift from Adobe Systems, and from a Google Research Scholar award. Mirela Ben-Chen acknowledges the support of the Israel Science Foundation (grant No. 1073/21), and the European Research Council (ERC starting grant no. 714776 OPREP). We use the repositories SHREC'07, SHREC'11, Windows 3D library, AIM@SHAPE, and Three D Scans, and thank Keenan Crane, Jan Knippers, Daniel Sonntag, and Yu Wang for providing additional models. We also thank Hsueh-Ti Derek Liu for his Blender Toolbox, used for the visualizations throughout the paper.

REFERENCES
MOSEK ApS. 2019. The MOSEK optimization toolbox for MATLAB manual. Version 9.0. http://docs.mosek.com/9.0/toolbox/index.html
Hédy Attouch, Jérôme Bolte, Patrick Redont, and Antoine Soubeyran. 2010. Proximal alternating minimization and projection methods for nonconvex problems: An approach based on the Kurdyka-Łojasiewicz inequality. Mathematics of Operations Research 35, 2 (2010), 438–457.
Alexander Belyaev and Pierre-Alain Fayolle. 2020. An ADMM-based scheme for distance function approximation. Numerical Algorithms 84, 3 (2020), 983–996.
Alexander G Belyaev and Pierre-Alain Fayolle. 2015. On variational and PDE-based distance function approximations. In Computer Graphics Forum, Vol. 34. Wiley Online Library, 104–118.
Iwan Boksebeld and Amir Vaxman. 2022. High-Order Directional Fields. ACM Transactions on Graphics (TOG) 41, 6 (2022), 1–17.
Mario Botsch, Leif Kobbelt, Mark Pauly, Pierre Alliez, and Bruno Lévy. 2010. Polygon Mesh Processing. CRC Press.
Stephen Boyd, Neal Parikh, Eric Chu, Borja Peleato, Jonathan Eckstein, et al. 2011. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends® in Machine Learning 3, 1 (2011), 1–122.
Luis A Caffarelli and Avner Friedman. 1979. The free boundary for elastic-plastic torsion problems. Trans. Amer. Math. Soc. 252 (1979), 65–97.
Luming Cao, Junhao Zhao, Jian Xu, Shuangmin Chen, Guozhu Liu, Shiqing Xin, Yuanfeng Zhou, and Ying He. 2020. Computing smooth quasi-geodesic distance field (QGDF) with quadratic programming. Computer-Aided Design 127 (2020), 102879.
Keenan Crane, Marco Livesu, Enrico Puppo, and Yipeng Qin. 2020. A survey of algorithms for geodesic paths and distances. arXiv preprint arXiv:2007.10430 (2020).
Keenan Crane, Clarisse Weischedel, and Max Wardetzky. 2013. Geodesics in heat: A new approach to computing distance based on heat flow. ACM Transactions on Graphics (TOG) 32, 5 (2013), 1–11.
Michal Edelstein, Hila Peleg, Shachar Itzhaky, and Mirela Ben-Chen. 2022. AmiGo: Computational Design of Amigurumi Crochet Patterns. In Proceedings of the 7th Annual ACM Symposium on Computational Fabrication. 1–11.
Wenbo Gao, Donald Goldfarb, and Frank E Curtis. 2020. ADMM for multiaffine constrained optimization. Optimization Methods and Software 35, 2 (2020), 257–303.
François Générau, Edouard Oudet, and Bozhidar Velichkov. 2022a. Cut locus on compact manifolds and uniform semiconcavity estimates for a variational inequality. Archive for Rational Mechanics and Analysis 246, 2 (2022), 561–602.
François Générau, Edouard Oudet, and Bozhidar Velichkov. 2022b. Numerical computation of the cut locus via a variational approximation of the distance function. ESAIM: Mathematical Modelling and Numerical Analysis 56, 1 (2022), 105–120.
Michael Grant and Stephen Boyd. 2008. Graph implementations for nonsmooth convex programs. In Recent Advances in Learning and Control, V. Blondel, S. Boyd, and H. Kimura (Eds.). Springer-Verlag Limited, 95–110. http://stanford.edu/~boyd/graph_dcp.html
Michael Grant and Stephen Boyd. 2014. CVX: Matlab Software for Disciplined Convex Programming, version 2.1. http://cvxr.com/cvx
Mingyi Hong, Zhi-Quan Luo, and Meisam Razaviyayn. 2016. Convergence analysis of alternating direction method of multipliers for a family of nonconvex problems. SIAM Journal on Optimization 26, 1 (2016), 337–364.
Hitoshi Ishii. 1987. Perron's method for Hamilton-Jacobi equations. Duke Mathematical Journal 55, 2 (1987), 369–384.
Alec Jacobson et al. 2021. gptoolbox: Geometry Processing Toolbox. http://github.com/alecjacobson/gptoolbox
Ron Kimmel and James A Sethian. 1998. Computing geodesic paths on manifolds. Proceedings of the National Academy of Sciences 95, 15 (1998), 8431–8435.
Joseph SB Mitchell, David M Mount, and Christos H Papadimitriou. 1987. The discrete geodesic problem. SIAM J. Comput. 16, 4 (1987), 647–668.
Vidya Narayanan, Lea Albaugh, Jessica Hodgins, Stelian Coros, and James McCann. 2018. Automatic machine knitting of 3D meshes. ACM Transactions on Graphics (TOG) 37, 3 (2018), 1–15.
Wenqing Ouyang, Yue Peng, Yuxin Yao, Juyong Zhang, and Bailin Deng. 2020. Ander-
son acceleration for nonconvex ADMM based on Douglas-Rachford splitting. In
Computer Graphics Forum, Vol. 39. Wiley Online Library, 221–239.
Arshak Petrosyan, Henrik Shahgholian, and Nina Nikolaevna Uraltseva. 2012. Regu-
larity of free boundaries in obstacle-type problems. Vol. 136. American Mathematical
Soc.
Gabriel Peyré, Mickael Péchaud, Renaud Keriven, Laurent D Cohen, et al. 2010. Ge-
odesic methods in computer vision and graphics. Foundations and Trends® in
Computer Graphics and Vision 5, 3–4 (2010), 197–397.
Kacper Pluta, Michal Edelstein, Amir Vaxman, and Mirela Ben-Chen. 2021. PH-CPF:
planar hexagonal meshing using coordinate power fields. ACM Transactions on
Graphics (TOG) 40, 4 (2021), 1–19.
Yipeng Qin, Xiaoguang Han, Hongchuan Yu, Yizhou Yu, and Jianjun Zhang. 2016.
Fast and exact discrete geodesic computation based on triangle-oriented wavefront
propagation. ACM Transactions on Graphics (TOG) 35, 4 (2016), 1–13.
Nicholas Sharp, Keenan Crane, et al. 2019. GeometryCentral: A modern C++ library
of data structures and algorithms for geometry processing. https://geometry-
central.net/. (2019).
Justin Solomon, Raif Rustamov, Leonidas Guibas, and Adrian Butscher. 2014. Earth
mover’s distances on discrete surfaces. ACM Transactions on Graphics (ToG) 33, 4
(2014), 1–12.
Oded Stein, Alec Jacobson, Max Wardetzky, and Eitan Grinspun. 2020. A smoothness
energy without boundary distortion for curved surfaces. ACM Transactions on
Graphics (TOG) 39, 3 (2020), 1–17.
Oded Stein, Jiajin Li, and Justin Solomon. 2022. A splitting scheme for flip-free
distortion energies. SIAM Journal on Imaging Sciences 15, 2 (2022), 925–959.
Vitaly Surazhsky, Tatiana Surazhsky, Danil Kirsanov, Steven J Gortler, and Hugues
Hoppe. 2005. Fast exact and approximate geodesics on meshes. ACM transactions
on graphics (TOG) 24, 3 (2005), 553–560.
Sathamangalam R Srinivasa Varadhan. 1967. On the behavior of the fundamental
solution of the heat equation with variable coefficients. Communications on Pure
and Applied Mathematics 20, 2 (1967), 431–455.
Yu Wang, Wotao Yin, and Jinshan Zeng. 2019. Global convergence of ADMM in
nonconvex nonsmooth optimization. Journal of Scientific Computing 78 (2019),
29–63.
Juyong Zhang, Yue Peng, Wenqing Ouyang, and Bailin Deng. 2019. Accelerating
ADMM for efficient simulation and optimization. ACM Transactions on Graphics
(TOG) 38, 6 (2019), 1–21.
Tao Zhang and Zhengwei Shen. 2019. A fundamental proof of convergence of alter-
nating direction method of multipliers for weakly convex optimization. Journal of
Inequalities and Applications 2019, 1 (2019), 1–21.
Figure 13: Dirichlet regularized volumetric distances. (a) The input tetrahedral mesh. (b,c) Two cuts showing the distance to a point on the shoulder. (d,e) Distance to the boundary, where (d) is more smoothed than (e), i.e., has a larger 𝛼̂ value.

Figure 14: Vector field alignment of creases, useful for knitting applications.
1.3 Analytical Solution for the Disk
To illustrate how our method handles other choices for the source set 𝐸, we take the flat 2D disk and consider the regularized distance to the boundary of the disk. Using polar coordinates, we take 𝐸 = {(𝑟, 𝜃) | 𝑟 = 𝑅}, and minimize

    (𝛼/2) ∫₀^𝑅 ∫₀^{2𝜋} |∇𝑢(𝑟, 𝜃)|² 𝑑𝜃 𝑑𝑟 − ∫₀^𝑅 ∫₀^{2𝜋} 𝑢(𝑟, 𝜃) 𝑟 𝑑𝜃 𝑑𝑟

with the constraints

    𝑢(𝑅, 𝜃) ≤ 0 for all 𝜃 ∈ [0, 2𝜋], and
    |∇𝑢(𝑟, 𝜃)| ≤ 1 for all 𝑟 ∈ [0, 𝑅), 𝜃 ∈ [0, 2𝜋].

In this case, the solution is:

2 EXISTENCE AND UNIQUENESS OF THE MINIMIZER (SECTION 3)
In our results, 𝑀 is a compact 𝐶^∞ submanifold of 𝑁-dimensional Euclidean space R^N, from which it inherits its Riemannian structure. The function 𝐹(𝜉, 𝑥), 𝐹: R^N × R^N → R, is assumed to be of class 𝐶¹ in (𝜉, 𝑥). We make two further structural assumptions on 𝐹:
1) There are 𝑝 > 1 and 𝑐₀, 𝐶₀ positive such that

    𝑐₀|𝜉|^𝑝 ≤ 𝐹(𝜉, 𝑥) ≤ 𝐶₀|𝜉|^𝑝, ∀ 𝑥, 𝜉 ∈ R^N

2) The function 𝐹 is strictly convex in the first argument. This is meant in the following sense: given vectors 𝜉₁ ≠ 𝜉₂ and 𝑠 ∈ (0, 1), then we have the strict inequality for all 𝑥

    𝐹((1 − 𝑠)𝜉₁ + 𝑠𝜉₂, 𝑥) < (1 − 𝑠)𝐹(𝜉₁, 𝑥) + 𝑠𝐹(𝜉₂, 𝑥).
A Convex Optimization Framework for Regularized Geodesic Distances - Supplemental SIGGRAPH ’23 Conference Proceedings, August 6–10, 2023, Los Angeles, CA, USA
and subsequently that 2) whatever function 𝑢* is obtained as one of these limits will have to be a minimizer for problem (1). Since that problem has a unique solution, 𝑢* = 𝑑(𝑥, 𝑥₀) for all such subsequences.

Indeed, first note that the 1-Lipschitz constraint and the fact that 𝑢_{𝛼ₖ}(𝑥₀) = 0 for all 𝑘 imply that the sequence 𝑢_{𝛼ₖ} lies in a compact subset of 𝐶(𝑀). Therefore, there is a subsequence 𝛼ₖ′ and a 1-Lipschitz function 𝑢* ∈ 𝐶(𝑀) such that

𝑢_{𝛼ₖ′} → 𝑢* uniformly in 𝑀.

Now, let 𝜙 : 𝑀 → R be a 1-Lipschitz function such that 𝜙(𝑥₀) ≤ 0. Since 𝜙 is admissible for (3) for every 𝛼ₖ′, it follows that

−∫_𝑀 𝑢_{𝛼ₖ′} dVol(𝑥) ≤ 𝛼ₖ′ ∫_𝑀 𝐹(∇𝑢_{𝛼ₖ′}, 𝑥) dVol(𝑥) − ∫_𝑀 𝑢_{𝛼ₖ′} 𝑑𝑥
≤ 𝛼ₖ′ ∫_𝑀 𝐹(∇𝜙, 𝑥) dVol(𝑥) − ∫_𝑀 𝜙 𝑑𝑥.

On the other hand, by the 1-Lipschitz constraint,

∫_𝑀 𝐹(∇𝜙, 𝑥) dVol(𝑥) ≤ 𝐶 Vol(𝑀).

This means in particular that

−∫_𝑀 𝑢* dVol(𝑥) = lim_𝑘 −∫_𝑀 𝑢_{𝛼ₖ′} dVol(𝑥) ≤ lim_𝑘 ( 𝛼ₖ′ ∫_𝑀 𝐹(∇𝜙, 𝑥) dVol(𝑥) − ∫_𝑀 𝜙 𝑑𝑥 ) = −∫_𝑀 𝜙 𝑑𝑥,

since 𝛼ₖ′ → 0 and the 𝐹-term is bounded. □

5 EXISTENCE AND UNIQUENESS OF A MINIMIZER (PRODUCT MANIFOLD FORMULATION, SECTION 6)

In Section 6 we introduced the following problem.

Minimize 𝛼 E_{𝑀×𝑀}(𝑈) − ∫_{𝑀×𝑀} 𝑈(𝑥, 𝑦) dVol(𝑥, 𝑦)
subject to 𝑈 ∈ 𝑊^{1,2}(𝑀 × 𝑀)
|∇₁𝑈(𝑥, 𝑦)| ≤ 1 in {(𝑥, 𝑦) | 𝑥 ≠ 𝑦} (12)
|∇₂𝑈(𝑥, 𝑦)| ≤ 1 in {(𝑥, 𝑦) | 𝑥 ≠ 𝑦}
𝑈(𝑥, 𝑦) ≤ 0 on {(𝑥, 𝑦) | 𝑥 = 𝑦}

Theorem 6.1. There is a unique minimizer in problem (12).

Proof. At the big-picture level, this proof is essentially the same as that of existence and uniqueness of a minimizer for problem (3); we only highlight the points where things are different.

Take a minimizing sequence 𝑈ₖ. Arguing similarly as before, we can assume without loss of generality that

max_{𝑀×𝑀} 𝑈ₖ ≥ 0 for all 𝑘.

Now, 𝑈ₖ is 1-Lipschitz in each variable 𝑥 and 𝑦 separately, so if for some 𝑘, (𝑥₀, 𝑦₀) is a point where 𝑈ₖ(𝑥₀, 𝑦₀) ≥ 0, then for all other (𝑥, 𝑦) we have

𝑈ₖ(𝑥, 𝑦) ≥ 𝑈ₖ(𝑥, 𝑦₀) − 𝑑(𝑦, 𝑦₀)
≥ 𝑈ₖ(𝑥₀, 𝑦₀) − 𝑑(𝑥, 𝑥₀) − 𝑑(𝑦, 𝑦₀)
≥ 𝑈ₖ(𝑥₀, 𝑦₀) − 2 diam(𝑀).

On the other hand, since 𝑈ₖ(𝑥, 𝑥) ≤ 0 for all 𝑥 and 𝑦, we have, using the 1-Lipschitz condition in the first variable
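The convergence established above (the regularized minimizers 𝑢_𝛼 tend to the true distance as 𝛼 → 0) can be watched numerically in a toy 1D version of the problem on [0, 𝐿] with source 𝑥₀ = 0. This construction is ours, purely illustrative: the discretized problem is separable in the slope 𝑠 = 𝑢′ and solved by clipping, and the maximum deviation from 𝑑(𝑥) = 𝑥 behaves like 𝛼/2.

```python
import numpy as np

# Toy 1D problem on M = [0, L] with source x0 = 0: minimize
#   (alpha/2) * int u'^2 dx - int u dx,  s.t.  u(0) = 0, |u'| <= 1.
L, n = 1.0, 8000
dx = L / n
x = (np.arange(n) + 0.5) * dx

def u_alpha(alpha):
    # unconstrained slope of the separable quadratic is (L - x)/alpha;
    # the |u'| <= 1 constraint clips it (Eikonal region near the source)
    s = np.minimum((L - x) / alpha, 1.0)
    return dx * np.cumsum(s)             # u(x) = int_0^x s

d = x                                     # true geodesic distance to x0 = 0
errs = [np.max(np.abs(u_alpha(a) - d)) for a in (0.2, 0.1, 0.05)]
print(errs)  # deviations shrink roughly like alpha/2 as alpha -> 0
```

The deviation is concentrated in the "Poisson" region near the far endpoint, where the clip is inactive; everywhere else 𝑢_𝛼 already agrees with the distance.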
A consequence of the uniqueness theorem is the symmetry of
the minimizers 𝑈𝛼 :
Theorem 6.2. The function 𝑈𝛼 (𝑥, 𝑦) is symmetric in 𝑥 and 𝑦.
Proof. This is a direct consequence of the uniqueness of the minimizer to problem (12), together with the symmetry of the problem under the transformation (𝑥, 𝑦) ↦ (𝑦, 𝑥). Indeed, given 𝛼, define the function

𝓋_𝛼(𝑥, 𝑦) := 𝑈_𝛼(𝑦, 𝑥).

Then it is clear that 𝓋_𝛼 is still admissible for problem (12), and

𝛼 E_{𝑀×𝑀}(𝑈_𝛼) − ∫_{𝑀×𝑀} 𝑈_𝛼(𝑥, 𝑦) dVol(𝑥, 𝑦) = 𝛼 E_{𝑀×𝑀}(𝓋_𝛼) − ∫_{𝑀×𝑀} 𝓋_𝛼(𝑥, 𝑦) dVol(𝑥, 𝑦),

so 𝓋_𝛼 is also a minimizer. By uniqueness, 𝑈_𝛼 = 𝓋_𝛼, i.e., 𝑈_𝛼(𝑥, 𝑦) = 𝑈_𝛼(𝑦, 𝑥). □
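The uniqueness-implies-symmetry argument has a transparent finite-dimensional analogue (ours, purely illustrative): if a strictly convex objective is invariant under 𝑈 ↦ 𝑈ᵀ, its unique minimizer must be symmetric.

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.normal(size=(5, 5))               # arbitrary, non-symmetric data

def f(U):
    # strictly convex and invariant under U -> U.T, mirroring the
    # invariance of problem (12) under (x, y) -> (y, x)
    return np.sum((U - B) ** 2) + np.sum((U.T - B) ** 2)

# stationarity: 2(U - B) + 2(U - B.T) = 0  =>  U* = (B + B.T)/2, symmetric
U_star = 0.5 * (B + B.T)

assert np.isclose(f(U_star), f(U_star.T))  # the swap preserves the value
for _ in range(100):
    P = rng.normal(size=(5, 5)) * 0.1
    # strong convexity: f(U* + P) = f(U*) + 2*||P||_F^2 > f(U*)
    assert f(U_star + P) > f(U_star) - 1e-12
print("unique minimizer is symmetric:", np.allclose(U_star, U_star.T))
```

As in the proof above, invariance alone gives a symmetric minimizer only because the minimizer is unique; with multiple minimizers, swapping could map one to another.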
𝐿(𝑢, 𝑦, 𝑧) = −𝐴_V^𝑇 𝑢 + (𝛼/2) 𝑢^𝑇 𝑊 𝑢 + Σ_{𝑓∈F} 𝜒(|𝑧_𝑓| ≤ 1)
+ Σ_{𝑓∈F} 𝑎_𝑓 𝑦_𝑓^𝑇 ((G𝑢)_𝑓 − 𝑧_𝑓) + (𝜌√𝐴/2) Σ_{𝑓∈F} 𝑎_𝑓 |(G𝑢)_𝑓 − 𝑧_𝑓|²,

where 𝑎_𝑓 is the area of the face 𝑓, 𝜌 ∈ R is the penalty parameter, and 𝑦 ∈ R^{3𝑚} is the dual variable (Lagrange multiplier).

The ADMM algorithm iterates between three stages [Boyd et al. 2011, Section 3]: 𝑢-minimization, 𝑧-minimization, and an update of the dual variable. With this formulation, both the 𝑢- and 𝑧-steps have closed-form solutions. The algorithm alternates between the following three steps:

(1) 𝑢^{𝑘+1} = [𝛼𝑊 + 𝜌√𝐴 𝑊_𝐷]^{−1} [𝐴_V − D𝑦^𝑘 + 𝜌√𝐴 D𝑧^𝑘]
(2) 𝑧_𝑓^{𝑘+1} = Proj( (1/(𝜌√𝐴)) 𝑦_𝑓^𝑘 + (G𝑢^{𝑘+1})_𝑓, B³ ) for all 𝑓 ∈ F
(3) 𝑦^{𝑘+1} = 𝑦^𝑘 + 𝜌√𝐴 (G𝑢^{𝑘+1} − 𝑧^{𝑘+1}),

where Proj(𝑧_𝑓 ∈ R³, B³) is equal to 𝑧_𝑓/|𝑧_𝑓| if |𝑧_𝑓| > 1, and 𝑧_𝑓 otherwise.

We consider our algorithm to have converged when ∥𝑟^𝑘∥ ≤ 𝜖^{𝑝𝑟𝑖} and ∥𝑠^𝑘∥ ≤ 𝜖^{𝑑𝑢𝑎𝑙}, where 𝑟^𝑘 and 𝑠^𝑘 are the primal and dual residuals, and 𝜖^{𝑝𝑟𝑖}, 𝜖^{𝑑𝑢𝑎𝑙} are the primal and dual feasibility tolerances, respectively. These quantities are computed as follows:

𝑟^𝑘 = √𝑀_F G𝑢^𝑘 − √𝑀_F 𝑧^𝑘
𝑠^𝑘 = 𝜌 D(𝑧^𝑘 − 𝑧^{𝑘−1})
𝜖^{𝑝𝑟𝑖} = √(3𝑚) 𝜖^{𝑎𝑏𝑠} √𝐴 + 𝜖^{𝑟𝑒𝑙} √𝐴 max(∥√𝑀_F G𝑢^𝑘∥, ∥√𝑀_F 𝑧^𝑘∥)
𝜖^{𝑑𝑢𝑎𝑙} = √𝑛 𝜖^{𝑎𝑏𝑠} √𝐴 + 𝜖^{𝑟𝑒𝑙} √𝐴 ∥D𝑦∥.

In all our experiments, we set 𝜖^{𝑎𝑏𝑠} = 5·10^{−6}, 𝜖^{𝑟𝑒𝑙} = 10^{−2}, and 𝜌 = 2. We define 𝜌, the residuals, and the feasibility tolerances so that they are scale-invariant, as explained in Section 7.1. In addition, to accelerate convergence, we use the varying penalty parameter and over-relaxation, exactly as described in [Boyd et al. 2011, Sections 3.4.1, 3.4.3].

9 SYMMETRIC ALL-PAIRS ADMM DERIVATION (SECTION 6.2)

Our discrete optimization problem, as introduced in Section 6.2, is:

Minimize_𝑈 −𝐴_V^𝑇 𝑈 𝐴_V + (𝛼/2) Tr( 𝑀_V (𝑈^𝑇 𝑊_𝐷 𝑈 + 𝑈 𝑊_𝐷 𝑈^𝑇) )
subject to |(∇𝑈_{(𝑖,·)})_𝑓| ≤ 1 for all 𝑓 ∈ F, 𝑖 ∈ V
|(∇𝑈_{(·,𝑗)})_𝑓| ≤ 1 for all 𝑓 ∈ F, 𝑗 ∈ V
𝑈_{𝑖,𝑖} ≤ 0 for all 𝑖 ∈ V,

where 𝑋_{𝑖,𝑗} denotes the (𝑖, 𝑗)-th element of a matrix 𝑋, 𝑋_{(𝑖,·)} denotes the 𝑖-th row, and 𝑋_{(·,𝑗)} the 𝑗-th column.

Our derivation is based on the consensus problem [Boyd et al. 2011, Section 7]: we split 𝑈 into two variables 𝑋, 𝑅 ∈ R^{𝑛×𝑛} that represent the gradients along the columns and rows, and use a consensus auxiliary variable 𝑈 ∈ R^{𝑛×𝑛} to ensure consistency. We also add two auxiliary variables 𝑍, 𝑄 ∈ R^{3𝑚×𝑛} representing the gradients along the columns and rows, i.e., G𝑋 and G𝑅. We enforce the diagonal constraint on the consensus variable 𝑈 to avoid solving huge linear systems. This leads to the following optimization problem:

Minimize −(1/2) 𝐴_V^𝑇 𝑋 𝐴_V − (1/2) 𝐴_V^𝑇 𝑅 𝐴_V + (𝛼/2) Tr( 𝑀_V (𝑋^𝑇 𝑊_𝐷 𝑋 + 𝑅^𝑇 𝑊_𝐷 𝑅) )
+ Σ_{𝑓∈F} Σ_{𝑖∈V} 𝜒(|(𝑍_{(·,𝑖)})_𝑓| ≤ 1) + Σ_{𝑓∈F} Σ_{𝑖∈V} 𝜒(|(𝑄_{(·,𝑖)})_𝑓| ≤ 1)
subject to (G𝑋_{(·,𝑖)})_𝑓 = (𝑍_{(·,𝑖)})_𝑓 for all 𝑓 ∈ F, 𝑖 ∈ V
(G𝑅_{(·,𝑖)})_𝑓 = (𝑄_{(·,𝑖)})_𝑓 for all 𝑓 ∈ F, 𝑖 ∈ V
𝑋 = 𝑈
𝑅 = 𝑈^𝑇
𝑈_{𝑖,𝑖} ≤ 0 for all 𝑖 ∈ V
𝑈 ≥ 0,

where 𝜒(|(𝑍_{(·,𝑖)})_𝑓| ≤ 1) = ∞ if |(𝑍_{(·,𝑖)})_𝑓| > 1 and 0 otherwise.

The corresponding augmented Lagrangian is:

𝐿(𝑈, 𝑌, 𝑍) = −(1/2) 𝐴_V^𝑇 𝑋 𝐴_V − (1/2) 𝐴_V^𝑇 𝑅 𝐴_V + (𝛼/2) Tr( 𝑀_V (𝑋^𝑇 𝑊_𝐷 𝑋 + 𝑅^𝑇 𝑊_𝐷 𝑅) )
+ Σ_{𝑓∈F} Σ_{𝑖∈V} 𝜒(|(𝑍_{(·,𝑖)})_𝑓| ≤ 1) + Σ_{𝑓∈F} Σ_{𝑖∈V} 𝜒(|(𝑄_{(·,𝑖)})_𝑓| ≤ 1)
+ Tr( 𝑀_V ( 𝑌^𝑇 𝑀_F (G𝑋 − 𝑍) + 𝑆^𝑇 𝑀_F (G𝑅 − 𝑄) ) )
+ (𝜌₁√𝐴/2) Tr( 𝑀_V (G𝑋 − 𝑍)^𝑇 𝑀_F (G𝑋 − 𝑍) )
+ (𝜌₁√𝐴/2) Tr( 𝑀_V (G𝑅 − 𝑄)^𝑇 𝑀_F (G𝑅 − 𝑄) )
+ Tr( 𝐻^𝑇 (𝑋 − 𝑈) 𝑀_V ) + Tr( 𝐾^𝑇 (𝑅 − 𝑈^𝑇) 𝑀_V )
+ (𝜌₂√(𝐴^{−1})/2) Tr( (𝑋 − 𝑈)^𝑇 𝑀_V (𝑋 − 𝑈) 𝑀_V )
+ (𝜌₂√(𝐴^{−1})/2) Tr( (𝑅 − 𝑈^𝑇)^𝑇 𝑀_V (𝑅 − 𝑈^𝑇) 𝑀_V ),

where 𝜌₁, 𝜌₂ ∈ R are the penalty parameters, and 𝑌, 𝑆 ∈ R^{3𝑚×𝑛}, 𝐻, 𝐾 ∈ R^{𝑛×𝑛} are the dual variables.

The ADMM algorithm for this optimization problem consists of three stages. In the first stage, we optimize for 𝑋, 𝑅; in the second, we minimize over the auxiliary variables 𝑍, 𝑄, 𝑈; and in the third, we update the dual variables introduced in the augmented Lagrangian.

(1) 𝑋^{𝑘+1} = [ (𝛼 + 𝜌₁√𝐴) 𝑊_𝐷 + 𝜌₂√(𝐴^{−1}) 𝑀_V ]^{−1} [ (1/2) 𝐴_V 𝐴_V^𝑇 𝑀_V^{−1} − D𝑌^𝑘 + 𝜌₁√𝐴 D𝑍^𝑘 − 𝑀_V 𝐻^𝑘 + 𝜌₂√(𝐴^{−1}) 𝑀_V 𝑈^𝑘 ]
𝑅^{𝑘+1} = [ (𝛼 + 𝜌₁√𝐴) 𝑊_𝐷 + 𝜌₂√(𝐴^{−1}) 𝑀_V ]^{−1} [ (1/2) 𝐴_V 𝐴_V^𝑇 𝑀_V^{−1} − D𝑆^𝑘 + 𝜌₁√𝐴 D𝑄^𝑘 − 𝑀_V 𝐾^𝑘 + 𝜌₂√(𝐴^{−1}) 𝑀_V 𝑈^{𝑘𝑇} ]
(2) (𝑍^{𝑘+1})_{(·,𝑖)𝑓} = Proj( (1/(𝜌₁√𝐴)) (𝑌^𝑘)_{(·,𝑖)𝑓} + (G𝑋^{𝑘+1})_{(·,𝑖)𝑓}, B³ ) for all 𝑖 ∈ V, 𝑓 ∈ F
(𝑄^{𝑘+1})_{(·,𝑖)𝑓} = Proj( (1/(𝜌₁√𝐴)) (𝑆^𝑘)_{(·,𝑖)𝑓} + (G𝑅^{𝑘+1})_{(·,𝑖)𝑓}, B³ ) for all 𝑖 ∈ V, 𝑓 ∈ F
𝑈^{𝑘+1} = max( (𝐻^𝑘 + 𝐾^{𝑘𝑇}) / (2𝜌₂√(𝐴^{−1})) + (𝑋^{𝑘+1} + 𝑅^{(𝑘+1)𝑇}) / 2, 0 )
𝑈^{𝑘+1}_{𝑖,𝑖} = 0 for all 𝑖 ∈ V

(3) 𝑌^{𝑘+1} = 𝑌^𝑘 + 𝜌₁√𝐴 (G𝑋^{𝑘+1} − 𝑍^{𝑘+1})
𝑆^{𝑘+1} = 𝑆^𝑘 + 𝜌₁√𝐴 (G𝑅^{𝑘+1} − 𝑄^{𝑘+1})
𝐻^{𝑘+1} = 𝐻^𝑘 + 𝜌₂√(𝐴^{−1}) (𝑋^{𝑘+1} − 𝑈^{𝑘+1})
𝐾^{𝑘+1} = 𝐾^𝑘 + 𝜌₂√(𝐴^{−1}) (𝑅^{𝑘+1} − 𝑈^{(𝑘+1)𝑇})

Similarly to Section 5.4, the first stage entails solving linear systems with the same coefficient matrix in every iteration, which can be pre-factored to accelerate the computation.

We consider our algorithm to have converged when ∥𝑟^𝑘∥ ≤ 𝜖^{𝑝𝑟𝑖} and ∥𝑠^𝑘∥ ≤ 𝜖^{𝑑𝑢𝑎𝑙}, where 𝑟^𝑘 and 𝑠^𝑘 are the primal and dual residuals, and 𝜖^{𝑝𝑟𝑖}, 𝜖^{𝑑𝑢𝑎𝑙} are the primal and dual feasibility tolerances, respectively. These quantities are computed as follows:

𝑟^𝑘 = √𝑀_F G𝑢^𝑘 − √𝑀_F 𝑧^𝑘
𝑠^𝑘 = 𝜌 D(𝑧^𝑘 − 𝑧^{𝑘−1})
𝜖^{𝑝𝑟𝑖} = √(3𝑚) 𝜖^{𝑎𝑏𝑠} 𝐴 + 𝜖^{𝑟𝑒𝑙} max(∥√𝑀_F G𝑢^𝑘∥, ∥√𝑀_F 𝑧^𝑘∥)
𝜖^{𝑑𝑢𝑎𝑙} = √𝑛 𝜖^{𝑎𝑏𝑠} 𝐴 + 𝜖^{𝑟𝑒𝑙} ∥√𝑀_F D𝑌∥

Table 1: Timings in seconds for the all-pairs distance computation on the cat model, |F| = 3898, Figure 10 (main paper).

      Heat - Symmetrized   Fixed-Source - Symmetrized   All-Pairs
      [sec]                [sec]                        [sec]
(a)   0.77                 101.625                      1124.312
(b)   0.77                 59.583                       837.063
(c)   0.76                 37.4745                      794.9549

10.2 Representation Error in a Spectral Reduced Basis

Smoother functions are better represented in a reduced basis composed of the eigenvectors of the Laplace-Beltrami (LB) operator; namely, they require fewer basis functions for the same representation error. In Fig. 4 (left) we compare the representation error in a reduced basis for our approach, the heat method, and fast marching. Note that our approach, in both the fixed-source (Alg. 1) and all-pairs (Alg. 2) formulations, achieves the lowest error, indicating that its distance functions are the smoothest in this sense. Similarly, we compare the symmetric formulations by symmetrizing our fixed-source method, the heat method, and the fast marching results; see Fig. 4 (right). Here we project on the eigenvectors of the LB operator on the product manifold, and again achieve a lower error than the alternatives.

The experiment was done on the "pipe" mesh, where we computed the full distance matrix between all pairs of vertices. For Fig. 4 (left) we projected each column of the distance matrix (i.e., the distance from a single source vertex) and computed the mean of the representation errors. For Fig. 4 (right), we projected the full distance matrix on the eigenvectors of the LB operator on the product manifold.
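The smoothness-versus-representation-error effect discussed in Section 10.2 can be reproduced in a minimal 1D sketch (ours, not the paper's mesh experiment): project a distance-like function with a kink, and a smoothed copy of it, onto path-graph Laplacian eigenvectors and compare the relative errors.

```python
import numpy as np

n, k = 128, 8                              # grid size, number of basis vectors

# 1D path-graph Laplacian (Neumann ends), a stand-in for Laplace-Beltrami
Lap = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
Lap[0, 0] = Lap[-1, -1] = 1.0
evals, Phi = np.linalg.eigh(Lap)           # columns of Phi = spectral basis

x = np.linspace(0, 1, n)
f_sharp = np.abs(x - 0.5)                  # distance to x = 0.5: kink at source
# implicit smoothing (I + t*Lap)^-2 f, mimicking a smoothness regularizer
M = np.linalg.inv(np.eye(n) + 5.0 * Lap)
f_smooth = M @ (M @ f_sharp)

def rel_err(f, k):
    # relative error of the best approximation in the first k eigenvectors
    c = Phi.T @ f
    return np.linalg.norm(f - Phi[:, :k] @ c[:k]) / np.linalg.norm(f)

print(rel_err(f_sharp, k), rel_err(f_smooth, k))
```

Since smoothing damps the high-frequency coefficients by a factor that decreases with the eigenvalue, the tail energy ratio of the smoothed function is provably no larger, which is exactly the ordering the figure reports for the regularized distances.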
Figure 3: The distance isolines and gradient norm with Dirichlet regularization for various meshes.
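The three-step ADMM structure used throughout this supplemental (linear solve, per-face projection onto the unit ball, dual ascent) can be sketched end-to-end on a 1D chain with unit edge lengths and vertex areas. This is a simplified scaled-dual variant for illustration only, not the paper's scale-invariant scheme; the closed-form reference solution comes from the separable slope reformulation of the same QP.

```python
import numpy as np

# Regularized distance from vertex 0 on a 1D chain: minimize
#   (alpha/2)*||G u||^2 - sum(u)   s.t.  |(G u)_f| <= 1,  u_0 = 0 (eliminated).
n, alpha, rho = 10, 4.0, 2.0
G = np.eye(n) - np.eye(n, k=-1)           # forward differences, u_0 eliminated
h = np.ones(n)                            # unit vertex areas
A = (alpha + rho) * (G.T @ G)             # u-step system matrix (prefactorable)

u = np.zeros(n); z = np.zeros(n); w = np.zeros(n)   # w: scaled dual variable
for _ in range(3000):
    u = np.linalg.solve(A, h + rho * G.T @ (z - w))  # (1) u-minimization
    z = np.clip(G @ u + w, -1.0, 1.0)                # (2) projection (1D ball)
    w = w + G @ u - z                                # (3) dual update

# closed-form solution of the same QP via its separable slope formulation
s_exact = np.minimum((n - np.arange(n)) / alpha, 1.0)
u_exact = np.cumsum(s_exact)
print(np.max(np.abs(u - u_exact)))
```

In practice the coefficient matrix would be pre-factored once (e.g., a sparse Cholesky factorization), exactly as noted for the first stage of both ADMM derivations above.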