SHyPar: A Spectral Coarsening Approach to Hypergraph Partitioning
… (node) contraction heuristics and structures, our framework, for the first time, incorporates spectral (global) properties into the multilevel coarsening and partitioning tasks in applications such as chip designs, as depicted in Figure 1. To achieve these goals, the paper is organized in a structured sequence of steps:

[Figure 1: gate-level circuit → hypergraph model → partitioning → chip placement.]

a) Scalable Spectral Hypergraph Coarsening Algorithms: This paper presents a two-phase scalable algorithmic framework for the spectral coarsening of large-scale hypergraphs, which exploits hyperedge effective resistances and strongly local flow-based methods [46], [47]. The proposed methods facilitate the decomposition of hypergraphs into multiple strongly-connected node clusters with minimal inter-cluster hyperedges, by incorporating the latest diffusion-based nonlinear quadratic operators defined on hypergraphs.

b) Multilevel Hypergraph Partitioning via Spectral Coarsening: This paper develops a brand-new multilevel hypergraph partitioning tool by seamlessly integrating the proposed spectral coarsening methods into the hypergraph partitioning platform. By replacing the traditional simple coarsening heuristics with our theoretically rigorous spectral methods, the multilevel hypergraph partitioning tools developed through this research will potentially offer significantly improved partitioning solutions without compromising runtime efficiency.

c) Validations of Multilevel Hypergraph Partitioning Tools: This paper comprehensively validates the developed hypergraph partitioning tools, focusing on an increasingly important application: integrated circuit (IC) computer-aided design. Both the solution quality and the runtime efficiency will be carefully assessed by testing the tools on a wide range of public-domain data sets. Additionally, the developed open-source software packages will be made available for public assessment.
The structure of this paper is organized as follows: Section II provides a foundational overview of the essential concepts and preliminaries in spectral hypergraph theory. Section III introduces the proposed method for hypergraph partitioning via spectral clustering, with a focus on resistance-based hypergraph clustering, local flow-based clustering, and multilevel hypergraph partitioning. Section IV applies our framework to an established hypergraph partitioning tool, showcasing extensive experimental results on various real-world VLSI design benchmarks. The paper concludes with Section V, which summarizes the findings and implications of this work.

II. PRELIMINARIES AND BACKGROUND

A. Spectral (Hyper)graph Theory

1) Graph Laplacian matrix: In an undirected graph G = (V, E, z), the symbol V represents a set of nodes (vertices), E represents a set of undirected edges, and z indicates the weights associated with these edges. We denote by D the diagonal matrix whose diagonal element D(i, i) equals the weighted degree of node i. Additionally, A is defined as the adjacency matrix of the undirected graph G:

\[ A(i,j) = \begin{cases} z(i,j) & \text{if } (i,j) \in E \\ 0 & \text{otherwise.} \end{cases} \tag{1} \]

Subsequently, the Laplacian matrix of the graph G is determined by the formula L = D − A. This matrix adheres to several key properties: (1) the sum of the elements in each row or column is zero; (2) all elements outside the main diagonal are non-positive; (3) the graph Laplacian is symmetric and diagonally dominant (SDD), characterized by non-negative eigenvalues.

2) Courant-Fischer minimax theorem: The k-th smallest eigenvalue of the Laplacian matrix L ∈ R^{|V|×|V|} can be computed as

\[ \lambda_k(L) = \min_{\dim(U)=k}\; \max_{x \in U,\, x \neq 0} \frac{x^{\top} L x}{x^{\top} x}, \tag{2} \]

where the minimum is taken over k-dimensional subspaces U of R^{|V|}. This characterization can be leveraged to compute the spectrum of the Laplacian matrix L.
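To make these definitions concrete, the following minimal Python sketch (the small weighted graph is our own illustrative example, not one from the paper) assembles A, D, and L = D − A and numerically checks the three stated properties; by Eq. (2), the sorted eigenvalues it prints are exactly the values λ_k(L).

```python
import numpy as np

# Illustrative weighted undirected graph: tuples (i, j, z(i, j)).
edges = [(0, 1, 1.0), (1, 2, 2.0), (2, 0, 1.0), (2, 3, 0.5)]
n = 4

A = np.zeros((n, n))
for i, j, z in edges:
    A[i, j] = A[j, i] = z                    # adjacency matrix, Eq. (1)

D = np.diag(A.sum(axis=1))                   # weighted-degree matrix
L = D - A                                    # graph Laplacian

assert np.allclose(L.sum(axis=1), 0)         # (1) zero row/column sums
assert (L - np.diag(np.diag(L)) <= 0).all()  # (2) non-positive off-diagonals
eigvals = np.linalg.eigvalsh(L)              # L is symmetric: real spectrum
assert eigvals.min() > -1e-9                 # (3) non-negative eigenvalues
print(eigvals)                               # lambda_1 = 0 for a connected graph
```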
3) Graph conductance: In a graph G = (V, E, z) whose vertices are partitioned into subsets (S, Ŝ), the conductance of the partition S is defined as

\[ \Phi_G(S) := \frac{w(S,\hat{S})}{\min\{\mathrm{vol}(S),\, \mathrm{vol}(\hat{S})\}} = \frac{\sum_{(i,j)\in E:\, i\in S,\, j\in \hat{S}} z(i,j)}{\min\{\mathrm{vol}(S),\, \mathrm{vol}(\hat{S})\}}, \tag{3} \]

where the volume of the partition, vol(S), is the sum of the weighted degrees of the vertices in S, defined as vol(S) := Σ_{i∈S} d(i). The graph's conductance [48] is defined as

\[ \Phi_G := \min_{\emptyset \neq S \subset V} \Phi(S). \tag{4} \]

4) Cheeger's inequality: Research has demonstrated that the conductance Φ_G of the graph G closely correlates with its spectral properties, as articulated by Cheeger's inequality [48]:

\[ \omega_2/2 \;\leq\; \Phi_G \;\leq\; \sqrt{2\,\omega_2}, \tag{5} \]

where ω_2 is the second smallest eigenvalue of the normalized Laplacian matrix L̃, defined as L̃ = D^{−1/2} L D^{−1/2}.
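The sketch below (toy graph and brute-force search are ours, and feasible only at this tiny scale) evaluates Eq. (3) for every cut to obtain Φ_G per Eq. (4), then verifies that Cheeger's inequality (5) brackets the result using ω_2 of the normalized Laplacian.

```python
import itertools
import numpy as np

def conductance(A, S):
    """Phi_G(S) per Eq. (3): cut weight over the smaller volume."""
    n = len(A)
    S, Sc = set(S), set(range(n)) - set(S)
    cut = sum(A[i][j] for i in S for j in Sc)
    d = A.sum(axis=1)                          # weighted degrees
    return cut / min(d[list(S)].sum(), d[list(Sc)].sum())

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)      # illustrative graph

# Phi_G per Eq. (4): minimum over all nonempty proper subsets.
phi = min(conductance(A, S)
          for r in range(1, len(A))
          for S in itertools.combinations(range(len(A)), r))

# Cheeger's inequality, Eq. (5): omega_2/2 <= Phi_G <= sqrt(2 * omega_2).
d = A.sum(axis=1)
L_norm = np.diag(d**-0.5) @ (np.diag(d) - A) @ np.diag(d**-0.5)
omega2 = np.linalg.eigvalsh(L_norm)[1]
assert omega2 / 2 - 1e-9 <= phi <= np.sqrt(2 * omega2) + 1e-9
```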
5) Effective resistance distance: Let G = (V, E, z) be a connected, undirected graph with weights z ∈ R^E_{≥0}. Let b_p ∈ R^V denote the standard basis vector with zero entries except for a one in the p-th position, and let b_pq = b_p − b_q. The effective resistance between nodes p, q ∈ V can be computed by

\[ R_{\mathrm{eff}}(p,q) = b_{pq}^{\top} L_G^{\dagger} b_{pq} = \sum_{i=2}^{|V|} \frac{(u_i^{\top} b_{pq})^2}{\lambda_i} = \max_{x \in \mathbb{R}^V} \frac{(x^{\top} b_{pq})^2}{x^{\top} L_G x}, \tag{6} \]

where L_G^† represents the Moore-Penrose pseudo-inverse of the graph Laplacian matrix L_G, and u_i ∈ R^V for i = 1, ..., |V| are the unit-length, mutually orthogonal eigenvectors corresponding to the Laplacian eigenvalues λ_i for i = 1, ..., |V|.
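A compact sketch of the first form of Eq. (6), computing R_eff via the Moore-Penrose pseudo-inverse of L_G. The three-node unit-weight path is our own check case; its endpoints behave like two 1 Ω resistors in series, so the printed value should be about 2.

```python
import numpy as np

def effective_resistance(L, p, q):
    """R_eff(p, q) = b_pq^T L^+ b_pq, the first form in Eq. (6)."""
    b = np.zeros(L.shape[0])
    b[p], b[q] = 1.0, -1.0                 # b_pq = b_p - b_q
    return b @ np.linalg.pinv(L) @ b       # Moore-Penrose pseudo-inverse

# Unit-weight path 0 - 1 - 2: two series resistors between nodes 0 and 2.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A
print(effective_resistance(L, 0, 2))       # ~ 2.0
```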
6) Spectral methods for hypergraphs: Classical spectral graph theory shows that the structure of a simple graph is closely related to the graph's spectral properties. Specifically, Cheeger's inequality demonstrates the close connection between expansion (or conductance) and the first few eigenvalues of graph Laplacians [16]. Moreover, the Laplacian quadratic form computed with the Fiedler vector (the eigenvector corresponding to the smallest nonzero Laplacian eigenvalue) has been exploited to find the minimum boundary size or cut for graph partitioning tasks [15]. However, there has been very limited progress in developing spectral algorithms for hypergraphs. For instance, a classical spectral method has been proposed for hypergraphs by converting each hyperedge into undirected edges using star or clique expansions [49]. This naive hyperedge conversion scheme may result in lower performance because it ignores the multi-way, high-order relationships between the entities. A more rigorous approach by Soma and Yoshida [50] generalized spectral graph sparsification to the hypergraph setting by sampling each hyperedge according to a probability determined by the ratio of the hyperedge weight to the minimum degree of the two vertices inside the hyperedge. Another family of spectral methods for hypergraphs explicitly builds the Laplacian matrix to analyze the spectral properties of hypergraphs: a method has been proposed to create the Laplacian matrix of a hypergraph and generalize graph learning algorithms for hypergraph applications [51]. A more mathematically rigorous approach by Chan et al. introduced a nonlinear diffusion process for defining the hypergraph Laplacian operator by measuring the flow distribution within each hyperedge [40], [41]. Moreover, Cheeger's inequality has been proven for hypergraphs under the diffusion-based nonlinear Laplacian operator [40].

7) Hypergraph conductance: A hypergraph H = (V, E, w) consists of a vertex set V and a set of hyperedges E with unit weights w = 1. The degree of a vertex is defined as d_v := Σ_{e∈E: v∈e} w(e), where w(e) represents the weight of each hyperedge. The volume of a node set S ⊆ V in the hypergraph is defined as vol(S) := Σ_{v∈S} d_v. The conductance of a subset S within the hypergraph is then calculated as

\[ \Phi(S) := \frac{\mathrm{cut}(S,\hat{S})}{\min\{\mathrm{vol}(S),\, \mathrm{vol}(\hat{S})\}}, \tag{7} \]

where cut(S, Ŝ) quantifies the number of hyperedges that cross between S and Ŝ. This computation uses an "all or nothing" splitting function that uniformly penalizes the splitting of hyperedges. The hypergraph's overall conductance is defined as

\[ \Phi_H := \min_{\emptyset \neq S \subset V} \Phi(S). \tag{8} \]
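A hedged sketch of Eqs. (7)-(8) on a toy hypergraph of our own choosing: hyperedges are stored as vertex tuples, the degree d_v and volume follow the definitions above, and the "all or nothing" cut counts a hyperedge as soon as it touches both sides.

```python
import itertools

def hypergraph_conductance(hyperedges, w, n, S):
    """Phi(S) per Eq. (7) with the all-or-nothing splitting function."""
    S = set(S)
    Sc = set(range(n)) - S
    cut = sum(wi for e, wi in zip(hyperedges, w)
              if set(e) & S and set(e) & Sc)        # cut(S, S-hat)
    deg = [sum(wi for e, wi in zip(hyperedges, w) if v in e)
           for v in range(n)]                       # d_v
    return cut / min(sum(deg[v] for v in S), sum(deg[v] for v in Sc))

E = [(0, 1, 2), (2, 3), (3, 4, 0)]                  # illustrative hyperedges
w = [1.0, 1.0, 1.0]                                 # unit weights

# Phi_H per Eq. (8): brute force, feasible only at toy scale.
phi_H = min(hypergraph_conductance(E, w, 5, S)
            for r in range(1, 5)
            for S in itertools.combinations(range(5), r))
print(phi_H)
```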
B. Hypergraph Partitioning Methods

Previous hypergraph partitioners leverage a multilevel paradigm to construct a hierarchy of coarser hypergraphs using local clustering methods. Computing a sequence of coarser hypergraphs that preserve the structural properties of the original hypergraph is a key step in every partitioning method. The coarsening algorithm in existing partitioning methods computes either a matching or a clustering at each level, utilizing a rating function to cluster strongly correlated vertices. Hyperedge matching and vertex similarity methods are used in the coarsening phase to cluster the nodes and contract the hyperedges. Existing well-known hypergraph partitioners, such as hMETIS [2], KaHyPar [45], PaToH [9], and Zoltan [7], all use heuristic clustering methods to compute the sequence of coarser hypergraphs.

1) Hypergraph coarsening: Multilevel coarsening techniques typically employ either matchings or clusterings on each level of the coarsening hierarchy. These algorithms utilize various rating functions to decide whether vertices should be matched or grouped together, using the contracted vertices to form the vertex set of the coarser hypergraph at the subsequent level. In contrast, n-level partitioning algorithms, such as the graph partitioner KaSPar [52], establish a hierarchy of (nearly) n levels by contracting just one vertex pair between two levels. This approach eliminates the need for matching or clustering algorithms during the graph reduction process. KaSPar utilizes a priority queue to determine the next vertex pair to be contracted. After each contraction, it updates the priority of every neighboring vertex of the contracted vertex to maintain consistency in priorities.
However, in hypergraphs, this method faces significant speed limitations because the size of the neighborhood can greatly expand due to a single large hyperedge. To address this limitation, KaHyPar [45] adopts the heavy-edge rating function. This strategy involves initially selecting a random vertex p and contracting it with the neighboring node that has the highest rating. The rating function specifically selects a vertex pair (p, q) that is involved in a large number of hyperedges with relatively small sizes, optimizing the coarsening process based on the most significant connections between vertices:

\[ r(p,q) = \sum_{e=(p,q)\in E} \frac{w(e)}{|e| - 1}, \tag{9} \]

where r(p, q) is the rating of the vertex pair, w(e) is the weight of hyperedge e, and |e| is the hyperedge cardinality.
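A one-function sketch of the heavy-edge rating in Eq. (9) (the container layout is our own assumption; the paper does not prescribe one): each hyperedge shared by p and q contributes its weight divided by its size minus one, so many small, heavy shared hyperedges yield a high rating.

```python
def heavy_edge_rating(p, q, hyperedges, w):
    """r(p, q) per Eq. (9); assumes every hyperedge has |e| >= 2."""
    return sum(wi / (len(e) - 1)
               for e, wi in zip(hyperedges, w)
               if p in e and q in e)

E = [(0, 1), (0, 1, 2, 3), (1, 2)]
w = [2.0, 1.0, 1.0]
print(heavy_edge_rating(0, 1, E, w))   # 2/1 + 1/3 = 2.33...
```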
2) Community Detection: The coarsening phase aims to generate progressively smaller yet structurally consistent approximations of the input hypergraph. However, certain scenarios may arise where the inherent structure becomes obscured; for example, tie-breaking decisions may be necessary when multiple neighbors of a vertex share the same rating. Consequently, to improve coarsening schemes, existing partitioners like KaHyPar utilize a preprocessing step involving community detection to guide the coarsening phase. In this approach, the hypergraph is divided into several communities, and the coarsening phase is then applied to each community separately. Existing community detection algorithms, such as the Louvain algorithm, partition hypergraph vertices into communities characterized by dense internal connections and sparse external ones. This method reformulates the problem as a task of modularity maximization in graphs.

3) Partitioning objectives: Hypergraph partitioning extends the concept of graph partitioning. Its objective is to distribute the vertex set into multiple disjoint subsets while minimizing a specified cut metric and adhering to certain imbalance constraints. The process of dividing into two subsets is known as bipartitioning, whereas dividing into multiple subsets, typically referred to as k-way partitioning, involves partitioning into k parts. More formally, consider a hypergraph H = (V, E, w), where k is a positive integer (k ≥ 2) and ϵ is a positive real number (ϵ ≤ 1/k). The objective of k-way balanced hypergraph partitioning is to divide V into k disjoint subsets S = {V_0, V_1, ..., V_{k−1}} such that (a minimal sketch checking both conditions follows this list):

• (1/k − ϵ)·W ≤ Σ_{v∈V_i} w_v ≤ (1/k + ϵ)·W, for 0 ≤ i ≤ k − 1, where W = Σ_{v∈V} w_v denotes the total vertex weight;

• cutsize_H(S) = Σ_{e : e ⊄ V_i for any i} w_e is minimized.
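The sketch below checks both conditions (the data layout and helper names are ours): the balance check enforces the two-sided bound on every block's weight, and cutsize sums w_e over the hyperedges not fully contained in a single block.

```python
def is_balanced(parts, node_w, k, eps):
    """Checks (1/k - eps) * W <= w(V_i) <= (1/k + eps) * W for every block."""
    W = sum(node_w)
    lo, hi = (1.0 / k - eps) * W, (1.0 / k + eps) * W
    return all(lo <= sum(node_w[v] for v in Vi) <= hi for Vi in parts)

def cutsize(parts, hyperedges, edge_w):
    """Sum of w_e over hyperedges spanning more than one block."""
    blocks = [set(Vi) for Vi in parts]
    return sum(wi for e, wi in zip(hyperedges, edge_w)
               if not any(set(e) <= B for B in blocks))

parts = [(0, 1), (2, 3)]                                 # a 2-way partition
print(is_balanced(parts, [1, 1, 1, 1], k=2, eps=0.1))    # True
print(cutsize(parts, [(0, 1, 2), (2, 3)], [1.0, 1.0]))   # 1.0
```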
III. SHYPAR: HYPERGRAPH PARTITIONING VIA SPECTRAL COARSENING

To address the limitations of existing hypergraph coarsening methods that rely on simple heuristics, we propose a theoretically sound and practically efficient framework for hypergraph coarsening. Specifically, we propose a two-phase spectral hypergraph coarsening scheme based on recent research on spectral hypergraph clustering [46], [47]. Phase A utilizes spectral hypergraph coarsening (HyperEF) to decompose a given hypergraph into smaller node partitions with bounded effective-resistance diameters [47]. This is followed by Phase B, which guides the coarsening stage using a flow-based community detection method (HyperSF) aimed at minimizing the ratio cut [46]. Next, we exploit the proposed two-phase spectral hypergraph coarsening method for multilevel hypergraph partitioning: the prior heuristic hypergraph coarsening schemes are replaced by the proposed spectral coarsening methods to create a hierarchy of coarser hypergraphs that preserve the key structural properties of the original hypergraph.

A. Resistance-Based Hypergraph Clustering (Phase A)

Existing coarsening algorithms contract vertices at each level of the hierarchy. The primary method involves contracting each vertex with the best neighboring node, commonly by using rating functions to identify and contract highly connected vertices. However, these determinations often rely solely on the weights and sizes of the hyperedges. Rating functions based on hyperedge size, such as the one in Eq. (9), are limited to the local structural properties of the hypergraph and do not account for its global structure. In contrast, the effective-resistance diameter provides a more comprehensive criterion that considers the global structure. Through an example, we show that hyperedge size-based score functions are not optimal and can occasionally disrupt the global structure of the hypergraph. For instance, when the hypergraph contains a bridge with few nodes (a small hyperedge), as illustrated in Figure 2, algorithms that use hyperedge size tend to inappropriately contract the bridge nodes (node 4 and node 7). This contraction can lead to the collapse of the overall structure of the hypergraph. Since the effective resistance of a bridge is high, algorithms based on effective resistance do not contract these nodes and thus preserve the integrity of the hypergraph structure.

In [35], the authors introduced a spectral algorithm based on effective resistance to sparsify hypergraphs. This method achieves nearly-linear-sized sparsifiers by sampling hyperedges according to their effective resistances [35]. Despite its theoretical appeal, the technique involves a non-trivial …
… The vector of node weights is denoted as η^(l) ∈ R^{V^(l)}_{≥0}, initially set to all zeros for the original hypergraph. At each level l, η is updated by

\[ \eta_{\vartheta}^{(l)} := \sum_{\substack{j=1,\dots,|S| \\ v_j \in V^{(l)}:\; v_j \in S}} \eta(v_j), \tag{18} \]

where ϑ denotes the coarse node formed by contracting the node cluster S.

6) Node Weight Propagation (NWP): The vector of effective resistances R is updated based on the node weights η to transmit the clustering information from previous levels to the current level:

\[ R_e^{(l)} := \sum_{k=1}^{|e|} \eta(v_k) + R_e^{(l)}. \tag{19} \]

Consequently, the effective resistance of a hyperedge at a coarser level depends not only on the effective resistance (R_e^(l)) evaluated at the current level, but also on the data transferred from all previous (finer) levels.

The experimental results indicate that using Eq. (19) for effective-resistance estimation yields more balanced hypergraph clustering outcomes compared to approaches that ignore the previous clustering information.
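A hedged sketch of how Eqs. (18)-(19) fit together (the dictionary/list containers and the function name are our own, not the paper's code): each coarse node first accumulates the η weights of its finer-level members, and every hyperedge's effective resistance is then offset by the weights of its endpoints, so clustering information from earlier levels keeps flowing upward.

```python
def propagate_node_weights(clusters, eta_fine, R, hyperedges):
    """Sketch of Eqs. (18)-(19). `clusters` maps each coarse node to the
    finer-level nodes it absorbed; `R[i]` is the effective resistance of
    hyperedge i estimated at the current level."""
    # Eq. (18): a coarse node accumulates the weights of its members.
    eta = {v: sum(eta_fine[u] for u in members)
           for v, members in clusters.items()}
    # Eq. (19): add the weights of each hyperedge's nodes to R_e.
    for i, e in enumerate(hyperedges):
        R[i] += sum(eta[v] for v in e)
    return eta, R
```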
The complete workflow of the effective-resistance clustering algorithm, HyperEF, used for coarsening a hypergraph H across L levels, is detailed in Algorithm 2.

Algorithm 2 The HyperEF algorithm for hypergraph clustering
Input: Hypergraph H = (V, E, w), δ, L, η.
Output: A coarsened hypergraph H′ = (V′, E′, w′) such that |V′| ≪ |V|.
1: Initialize H′ ← H.
2: for l ← 1 to L do
3:   Call Algorithm 1 to compute a vector of effective resistances R of size |E′| for the given hypergraph H′.
4:   Compute the node weights using Eq. (18).
5:   Update the effective resistance vector R by applying Eq. (19).
6:   Sort the hyperedges in ascending order of their R values.
7:   Starting with the hyperedges that have the lowest effective resistances, contract (cluster) the nodes of each hyperedge e if R_e < δ.
8:   Construct a coarsened hypergraph H′ accordingly.
9: end for
10: Return H′.
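Algorithm 2 in Python form, offered as a non-authoritative sketch: `estimate_effective_resistances` stands in for Algorithm 1, and `accumulate_node_weights`, `add_node_weights_to_resistances`, and `contract` are hypothetical placeholders for steps 4, 5, and 7-8; none of these names come from the paper or its released code.

```python
def hyper_ef(H, delta, L, eta):
    """Sketch of Algorithm 2 with hypothetical helper functions."""
    H_c = H                                             # step 1
    for level in range(L):                              # step 2
        R = estimate_effective_resistances(H_c)         # step 3 (Algorithm 1)
        eta = accumulate_node_weights(H_c, eta)         # step 4, Eq. (18)
        R = add_node_weights_to_resistances(H_c, eta, R)  # step 5, Eq. (19)
        # Step 6: visit hyperedges in ascending order of resistance;
        # step 7: cluster the nodes of those below the threshold delta.
        low_R = [e for e in sorted(H_c.hyperedges, key=lambda e: R[e])
                 if R[e] < delta]
        for e in low_R:
            H_c = contract(H_c, e)                      # step 8: rebuild H'
    return H_c                                          # step 10
```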
B. Flow-Based Community Detection (Phase B)

To improve coarsening schemes for hypergraph partitioning, we utilize a community structure that integrates global hypergraph information into the coarsening process. This community structure directs the coarsening phase by permitting contractions solely within clusters.

In the flow-based community detection, we employ multilevel clustering through HyperEF, which enhances the spectral hypergraph clustering method by integrating a multilevel coarsening approach. Let H = (V, E, w); the hypergraph local conductance (HLC) with respect to a node set S is defined as follows [67]:

\[ \mathrm{HLC}_C(S) = \frac{\mathrm{cut}(S,\hat{S})}{\mathrm{vol}(S \cap C) - \beta\, \mathrm{vol}(S \cap \hat{C})}, \tag{20} \]

where C ⊆ V is the reference node set, and β is a locality parameter that modulates the penalty for incorporating nearby nodes outside the set C.
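A direct sketch of Eq. (20) (the containers and degree vector are our own assumptions): the numerator is the all-or-nothing hyperedge cut, and the denominator rewards overlap with the reference set C while penalizing, via β, volume taken from outside C.

```python
def local_conductance(S, C, hyperedges, w, deg, beta):
    """HLC_C(S) per Eq. (20); returns inf when the denominator is <= 0."""
    S, C = set(S), set(C)
    cut = sum(wi for e, wi in zip(hyperedges, w)
              if set(e) & S and set(e) - S)        # all-or-nothing cut
    denom = (sum(deg[v] for v in S & C)            # vol(S intersect C)
             - beta * sum(deg[v] for v in S - C))  # beta * vol(S \ C)
    return cut / denom if denom > 0 else float("inf")
```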
A spectral hypergraph coarsening algorithm (HyperSF) is proposed by minimizing the HLC, which has achieved promising results in hypergraph coarsening and partitioning for realistic VLSI designs.

1) Overview of Coarsening Refinement (HyperSF): Figure 5 shows an overview of the HyperSF method. In this work, HyperSF is leveraged only for refining the most imbalanced node clusters (initially identified by HyperEF) with significantly smaller resistance diameters compared to the rest. Specifically, HyperSF aggregates strongly-coupled node clusters by minimizing Eq. (20): for each selected node set with large imbalance, HyperSF repeatedly solves a max s-t flow, min s-t cut problem to detect a set of neighboring node clusters that minimizes the local conductance (HLC) in Eq. (20). To this end, the following key steps are applied (as shown in Figure 6): (Step 1) an auxiliary hypergraph is constructed by introducing a source vertex s and a sink vertex t; (Step 2) each hyperedge is replaced with a directed graph; (Step 3) each seed node set is iteratively updated by including new nodes into the set so as to minimize the HLC, by repeatedly solving the max s-t flow, min s-t cut problem:

\[ \mathrm{cut}_{s\text{-}t}(S) = \mathrm{cut}_H(S) + \mathrm{vol}_H(\hat{S} \cap C) + \beta\, \mathrm{vol}_H(S \cap \hat{C}); \tag{21} \]

(Step 4) the node sets obtained from the flow-based method that minimize the local conductance are exploited to produce a smaller hypergraph with fewer nodes, while preserving the key structural properties of the original hypergraph.

2) Local clustering algorithms: An algorithm is local if its input is a small portion of the original dataset. The HyperSF algorithm can be made strongly local by expanding the network only around the seed nodes C, which benefits the proposed hypergraph coarsening framework in two ways: (1) applying the max s-t flow, min s-t cut problem on the local neighborhood of the seed nodes restricts node aggregation locally and keeps the global hypergraph structure intact; (2) such a local clustering scheme significantly improves the algorithm's efficiency due to the small-scale input dataset.

3) Flow-based Local Clustering in HyperSF: First, we apply HyperEF (Algorithm 2) to the hypergraph H = (V, E, w) to compute a coarsened hypergraph H′ = (V′, E′, w′) and identify isolated nodes, denoted by C. To achieve flow-based local clustering of hypergraph nodes, HyperSF then constructs a sub-hypergraph H′_L by iteratively expanding the hypergraph around the seed node set C and then repeatedly solving the hypergraph cut problem to minimize the HLC until no significant changes in local conductance are observed. Define E′(S) = ∪_{v′∈S} E′(v′) for any set S ⊆ V′, where E′(v′) = {e′ ∈ E′ : v′ ∈ e′} denotes the hyperedges incident to v′, and let …
… smaller (coarser) but spectrally-similar hypergraphs via repeatedly contracting groups of vertices (clusters). Initially, the node spectral embeddings are computed by applying the Krylov subspace method in linear time to obtain the vectors of various spectra. The optimization-based effective-resistance estimation formulation is extended to the hypergraph setting by leveraging the nonlinear quadratic operator of hypergraphs. Then, a low-resistance-diameter hypergraph decomposition method determines the highly connected vertices in the multi-dimensional spectral space.

We introduce a new rating function based on the effective resistance formula, which selects a vertex pair (p, q) from hyperedges that have a large number of heavy nets with low effective resistance:

\[ r(p,q) = \sum_{e=(p,q)\in E} \frac{w(e)}{|R_e| - 1}, \tag{23} \]

where r(p, q) is the rating of the vertex pair, w(e) is the weight of hyperedge e, and |R_e| is the effective resistance of hyperedge e calculated by Eq. (17).
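For comparison with Eq. (9), a short sketch of the resistance-based rating exactly as printed in Eq. (23), with the hyperedge size replaced by the hyperedge effective resistance R_e from the current level (the list containers are our own assumption, and the formula is reproduced as printed):

```python
def resistance_rating(p, q, hyperedges, w, R):
    """r(p, q) per Eq. (23): weight over (effective resistance - 1),
    summed over the hyperedges shared by p and q."""
    return sum(wi / (R[i] - 1.0)
               for i, (e, wi) in enumerate(zip(hyperedges, w))
               if p in e and q in e)
```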
2) Multilevel coarsening with HyperEF: The resistance-based hypergraph clustering algorithm allows transferring hypergraph structural information through the levels by assigning a weight (equal to the effective-resistance diameter of each cluster evaluated at the previous level) to the new corresponding vertices. As a result, the effective hyperedge resistance at a coarse level depends not only on the effective resistance computed at the current level but also on the results transferred from all the previous (finer) levels.

… of the hypergraph. Ultimately, HyperSF aims to produce clusters characterized by low hypergraph local conductance (HLC) to preserve the spectral properties of the original hypergraph.

In our approach, we perform community detection on hypergraphs by translating this challenge into a hypergraph clustering problem. Our algorithm identifies the node cluster with the smallest effective-resistance diameter and refines it by applying a strongly-local flow-based method (HyperSF). This is followed by a community-aware coarsening phase that applies the hypergraph coarsening algorithm to each community separately.

IV. EXPERIMENTAL VALIDATION

There are many applications related to hypergraph partitioning. This paper focuses on comprehensively evaluating the performance of the proposed hypergraph partitioning framework for increasingly important applications related to integrated circuit computer-aided design. Both the solution quality and the runtime efficiency will be carefully assessed by testing on a wide range of public-domain data sets. The developed open-source software packages will also be made available for public assessment.

To assess the performance of the proposed multilevel hypergraph partitioning tools in applications related to VLSI designs, we apply the developed multilevel hypergraph partitioning tool to partition public-domain VLSI design benchmarks. For example, the ISPD98 benchmarks, which include the "IBM01", "IBM02", ..., "IBM18" hypergraph models with 13,000 to 210,000 nodes, are adopted [68]. The performance metrics for multilevel hypergraph partitioning, including the total hyperedge cut, imbalance factors, and runtime efficiency, are considered for comparisons with state-of-the-art hypergraph partitioning tools, such as hMETIS [2] and KaHyPar [45].
1) HyperEF vs. hMETIS for Hypergraph Coarsening: HyperEF is compared to hMETIS for hypergraph coarsening, considering both solution quality and runtime efficiency. The following average conductance of the node clusters is used to analyze the performance of each method:

\[ \Phi_{\mathrm{avg}} = \frac{1}{|S|} \sum_{i=1}^{|S|} \Phi(S_i), \tag{24} \]

where Φ(S_i) denotes the conductance of node cluster S_i. Figure 9 demonstrates the node clustering results for a small hypergraph obtained using HyperEF and hMETIS. Both methods partition the hypergraph into four clusters, and the average conductance of the node clusters has been computed to evaluate the performance of each method. The results show that HyperEF outperforms hMETIS by creating node clusters with a significantly lower average conductance. In addition, Table I shows the average conductance of node clusters Φ_avg computed with both HyperEF and hMETIS when decomposing the hypergraph with the same node reduction ratios (NRs). With an NR = 75% (3× node reduction), HyperEF outperforms hMETIS in average conductance while achieving 24-38× speedups over hMETIS.
TABLE I: HyperEF vs. hMETIS conductance (NR = 75%)

Benchmark | Φavg HyperEF | Φavg hMETIS | T (s) HyperEF | T (s) hMETIS
IBM01 | 0.62 | 0.65 | 1.23 | 29 (24×)
IBM02 | 0.62 | 0.67 | 1.41 | 49 (35×)
IBM03 | 0.63 | 0.66 | 2.11 | 53 (25×)
IBM04 | 0.64 | 0.66 | 2.37 | 60 (25×)
IBM05 | 0.59 | 0.63 | 2.34 | 62 (26×)
IBM06 | 0.64 | 0.66 | 2.63 | 78 (30×)
IBM07 | 0.63 | 0.67 | 3.54 | 115 (32×)
IBM08 | 0.61 | 0.67 | 4.15 | 125 (30×)
IBM09 | 0.64 | 0.66 | 4.38 | 131 (30×)
IBM10 | 0.63 | 0.67 | 5.79 | 181 (31×)
IBM11 | 0.64 | 0.67 | 5.73 | 176 (31×)
IBM12 | 0.65 | 0.70 | 5.94 | 191 (32×)
IBM13 | 0.65 | 0.68 | 6.87 | 229 (33×)
IBM14 | 0.62 | 0.66 | 11.51 | 393 (34×)
IBM15 | 0.66 | 0.69 | 14.44 | 486 (34×)
IBM16 | 0.63 | 0.67 | 14.62 | 533 (36×)
IBM17 | 0.66 | 0.70 | 15.22 | 568 (37×)
IBM18 | 0.60 | 0.67 | 15.79 | 602 (38×)
2) HyperSF vs. hMETIS for Hypergraph Coarsening: In this section, we evaluate the performance of HyperSF against the hMETIS hypergraph partitioning tool. We measure the average local conductance HLC_avg of the node clusters generated by each method, calculated as follows:

\[ \mathrm{HLC}_{\mathrm{avg}} = \frac{1}{|S|} \sum_{i=1}^{|S|} \mathrm{HLC}(S_i). \tag{25} \]

Table II presents the average local conductance HLC_avg for the two methods under the same hypergraph reduction ratio (RR), where we reduce the number of nodes in each original hypergraph by 75%. The experimental data illustrate that HyperSF significantly improves the average local conductance compared to the hMETIS method in all test scenarios.

TABLE II: HyperSF vs. hMETIS local conductance (NR = 75%)

Benchmark | HLCavg HyperSF | HLCavg hMETIS | T (s) HyperSF | T (s) hMETIS
IBM01 | 0.44 | 0.65 | 9.4 | 29 (3×)
IBM02 | 0.52 | 0.69 | 22.6 | 49 (2×)
IBM03 | 0.48 | 0.67 | 14.1 | 53 (4×)
IBM04 | 0.47 | 0.68 | 15 | 60 (4×)
IBM05 | 0.55 | 0.65 | 29.2 | 62 (2×)
IBM06 | 0.51 | 0.68 | 30.1 | 78 (3×)
IBM07 | 0.48 | 0.68 | 26.4 | 115 (4×)
IBM08 | 0.48 | 0.68 | 43.3 | 125 (3×)
IBM09 | 0.47 | 0.69 | 24.5 | 131 (5×)
IBM10 | 0.48 | 0.68 | 45.2 | 181 (4×)
IBM11 | 0.46 | 0.69 | 30.1 | 176 (6×)
IBM12 | 0.50 | 0.71 | 42.4 | 191 (5×)
IBM13 | 0.48 | 0.69 | 50.4 | 229 (5×)
IBM14 | 0.48 | 0.67 | 85.7 | 393 (5×)
IBM15 | 0.47 | 0.71 | 96 | 486 (5×)
IBM16 | 0.50 | 0.70 | 116.8 | 533 (5×)
IBM17 | 0.51 | 0.73 | 141.3 | 568 (4×)
IBM18 | 0.46 | 0.68 | 129 | 602 (5×)

B. Hypergraph Partitioning with Spectral Coarsening

We compared SHyPar with the leading hypergraph partitioners hMETIS [2], SpecPart [43], KaHyPar [45], and MedPart [69] using two sets of publicly available benchmarks: the ISPD98 VLSI Circuit Benchmark Suite [68] and the Titan23 Suite [70]. The details of these benchmarks are outlined in Table III and Table IV. All tests were conducted on a server equipped with Intel(R) Xeon(R) Gold 6244 processors and an NVIDIA Tesla V100S GPU, with 1546 GB of memory.

1) Experimental Setup: To implement SHyPar, we have developed new hypergraph partitioning tools based on the existing open-source multilevel hypergraph partitioner KaHyPar, utilizing the proposed two-phase spectral hypergraph coarsening method. Specifically, the heuristic coarsening scheme has been replaced by our novel spectral coarsening algorithm to create a hierarchy of coarser hypergraphs that preserve the key structural properties of the original hypergraph. Accordingly, we have substituted the existing coarsening method in KaHyPar with our proposed method, incorporating a new rating function. Additionally, the existing algorithm for community detection, Louvain, has been replaced with our proposed flow-based community detection method (Phase B).

2) SHyPar Performance on ISPD98 Benchmarks: Table III presents a comparison of the cut sizes achieved by SHyPar on the ISPD98 VLSI circuit benchmarks against those obtained by hMETIS, SpecPart, KaHyPar, and MedPart. The results for SHyPar show an average improvement of approximately 0.54% for ϵ = 2% and 0.4% for ϵ = 10%, affirming its superiority over the best published results. In several instances, SHyPar outperforms the best published results by up to 5%; these instances are specifically underlined for emphasis. Figure 10 depicts the cut sizes obtained by SHyPar, KaHyPar, and hMETIS, normalized against the KaHyPar results. It is evident that SHyPar significantly enhances performance over both KaHyPar and hMETIS across many tests. Moreover, when SHyPar was applied to four partitions with ϵ = 1%, the improvements were consistent, as demonstrated in Figure 11, which compares the cut sizes with those from KaHyPar and hMETIS, also normalized by the KaHyPar results.
TABLE III: Statistics of ISPD98 VLSI circuit benchmark suite. The best results among all the methods are colored red.

Benchmark | |V| | |E| | ϵ = 2% (SpecPart / hMETIS / KaHyPar / MedPart / SHyPar) | ϵ = 10% (SpecPart / hMETIS / KaHyPar / MedPart / SHyPar)
IBM01 | 12,752 | 14,111 | 202 / 213 / 202 / 202 / 201 | 171 / 190 / 173 / 166 / 166
IBM02 | 19,601 | 19,584 | 336 / 339 / 328 / 352 / 327 | 262 / 262 / 262 / 264 / 262
IBM03 | 23,136 | 27,401 | 959 / 972 / 958 / 955 / 952 | 952 / 960 / 950 / 955 / 950
IBM04 | 27,507 | 31,970 | 593 / 617 / 579 / 583 / 579 | 388 / 388 / 388 / 389 / 388
IBM05 | 29,347 | 28,446 | 1720 / 1744 / 1712 / 1748 / 1707 | 1688 / 1733 / 1645 / 1675 / 1645
IBM06 | 32,498 | 34,826 | 963 / 1037 / 963 / 1000 / 969 | 733 / 760 / 735 / 788 / 733
IBM07 | 45,926 | 48,117 | 935 / 975 / 894 / 913 / 882 | 760 / 796 / 760 / 773 / 760
IBM08 | 51,309 | 50,513 | 1146 / 1146 / 1157 / 1158 / 1140 | 1140 / 1145 / 1120 / 1131 / 1120
IBM09 | 53,395 | 60,902 | 620 / 637 / 620 / 625 / 620 | 519 / 535 / 519 / 520 / 519
IBM10 | 69,429 | 75,196 | 1318 / 1313 / 1318 / 1327 / 1254 | 1261 / 1284 / 1250 / 1259 / 1244
IBM11 | 70,558 | 81,454 | 1062 / 1114 / 1062 / 1069 / 1051 | 764 / 782 / 769 / 774 / 763
IBM12 | 71,076 | 77,240 | 1920 / 1982 / 2163 / 1955 / 1986 | 1842 / 1940 / 1841 / 1914 / 1841
IBM13 | 84,199 | 99,666 | 848 / 871 / 848 / 850 / 831 | 693 / 721 / 693 / 697 / 655
IBM14 | 147,605 | 152,772 | 1859 / 1967 / 1849 / 1876 / 1842 | 1768 / 1665 / 1534 / 1639 / 1534
IBM15 | 161,570 | 186,608 | 2741 / 2886 / 2737 / 2896 / 2728 | 2235 / 2262 / 2135 / 2169 / 2135
IBM16 | 183,484 | 190,048 | 1915 / 2095 / 1952 / 1972 / 1887 | 1619 / 1708 / 1619 / 1645 / 1619
IBM17 | 185,495 | 189,581 | 2354 / 2520 / 2284 / 2336 / 2285 | 1989 / 2300 / 1989 / 2024 / 1989
IBM18 | 210,613 | 201,920 | 1535 / 1587 / 1915 / 1955 / 1521 | 1537 / 1550 / 1915 / 1829 / 1520
[Fig. 11: ISPD98 benchmarks with unit weights, ϵ = 1%, k = 4 — cut sizes of SHyPar, KaHyPar, and hMETIS across IBM01-IBM18.]

VI. ACKNOWLEDGMENTS

This work is supported in part by the National Science Foundation under Grants CCF-2417619, CCF-2021309, CCF-2011412, CCF-2212370, and CCF-2205572.

REFERENCES

[1] U. V. Catalyurek and C. Aykanat, "Hypergraph-partitioning-based decomposition for parallel sparse-matrix vector multiplication," IEEE Transactions on Parallel and Distributed Systems, vol. 10, no. 7, pp. 673-693, 1999.
[2] G. Karypis, R. Aggarwal, V. Kumar, and S. Shekhar, "Multilevel hypergraph partitioning: Applications in VLSI domain," IEEE Transactions on Very Large Scale Integration (VLSI) Systems, vol. 7, no. 1, pp. 69-79, 1999.
TABLE IV: Statistics of Titan23 benchmark suite and cut sizes of different approaches. The best results among all the methods are colored red.

Benchmark | |V| | |E| | ϵ = 2% (SpecPart / hMETIS / KaHyPar / MedPart / SHyPar) | ϵ = 20% (SpecPart / hMETIS / KaHyPar / MedPart / SHyPar)
sparcT1_core | 91,976 | 92,827 | 1012 / 1066 / 974 / 1067 / 974 | 903 / 1290 / 873 / 624 / 631
neuron | 92,290 | 125,305 | 252 / 260 / 244 / 262 / 243 | 206 / 270 / 244 / 270 / 244
stereo_vision | 94,050 | 127,085 | 180 / 180 / 169 / 176 / 169 | 91 / 143 / 91 / 93 / 91
des90 | 111,221 | 139,557 | 402 / 402 / 380 / 372 / 379 | 358 / 441 / 380 / 349 / 345
SLAM_spheric | 113,115 | 142,408 | 1061 / 1061 / 1061 / 1061 / 1061 | 1061 / 1061 / 1061 / 1061 / 1061
cholesky_mc | 113,250 | 144,948 | 285 / 285 / 283 / 283 / 283 | 345 / 667 / 591 / 281 / 479
segmentation | 138,295 | 179,051 | 126 / 136 / 107 / 114 / 107 | 78 / 141 / 78 / 78 / 78
bitonic_mesh | 192,064 | 235,328 | 585 / 614 / 593 / 594 / 586 | 483 / 590 / 592 / 493 / 506
dart | 202,354 | 223,301 | 807 / 844 / 924 / 805 / 784 | 540 / 603 / 594 / 549 / 539
openCV | 217,453 | 284,108 | 510 / 511 / 560 / 635 / 499 | 518 / 554 / 501 / 554 / 473
stap_qrd | 240,240 | 290,123 | 399 / 399 / 371 / 386 / 371 | 295 / 295 / 275 / 287 / 275
minres | 261,359 | 320,540 | 215 / 215 / 207 / 215 / 207 | 189 / 189 / 199 / 181 / 191
cholesky_bdti | 266,422 | 342,688 | 1156 / 1157 / 1156 / 1161 / 1156 | 947 / 1024 / 1120 / 1024 / 848
denoise | 275,638 | 356,848 | 416 / 722 / 416 / 516 / 416 | 224 / 478 / 244 / 224 / 220
sparcT2_core | 300,109 | 302,663 | 1244 / 1273 / 1186 / 1319 / 1183 | 1245 / 1972 / 1186 / 1081 / 918
gsm_switch | 493,260 | 507,821 | 1827 / 5974 / 1759 / 1714 / 1621 | 1407 / 5352 / 1719 / 1503 / 1407
mes_noc | 547,544 | 577,664 | 634 / 699 / 649 / 699 / 651 | 617 / 633 / 755 / 633 / 617
LU230 | 574,372 | 669,477 | 3273 / 4070 / 4012 / 3452 / 3602 | 2677 / 3276 / 3751 / 2720 / 2923
LU_Network | 635,456 | 726,999 | 525 / 550 / 524 / 550 / 524 | 524 / 528 / 524 / 528 / 524
sparcT1_chip2 | 820,886 | 821,274 | 899 / 1524 / 874 / 1129 / 873 | 783 / 1029 / 856 / 877 / 757
directrf | 931,275 | 1,374,742 | 574 / 646 / 646 / 646 / 632 | 295 / 379 / 295 / 317 / 295
bitcoin_miner | 1,089,284 | 1,448,151 | 1297 / 1570 / 1576 / 1562 / 1514 | 1225 / 1255 / 1287 / 1255 / 1282
[3] D. Kucar, S. Areibi, and A. Vannelli, "Hypergraph partitioning techniques," Dynamics of Continuous, Discrete and Impulsive Systems Series A, vol. 11, pp. 339-368, 2004.
[4] K. A. Murgas, E. Saucan, and R. Sandhu, "Hypergraph geometry reflects higher-order dynamics in protein interaction networks," Scientific Reports, vol. 12, no. 1, pp. 1-12, 2022.
[5] Ü. V. Çatalyürek, K. D. Devine, M. F. Faraj, L. Gottesbüren, T. Heuer, H. Meyerhenke, P. Sanders, S. Schlag, C. Schulz, D. Seemaier, et al., "More recent advances in (hyper)graph partitioning," ACM Computing Surveys, 2022.
[6] M. R. Garey and D. S. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness, W. H. Freeman, 1979.
[7] K. D. Devine, E. G. Boman, R. T. Heaphy, R. H. Bisseling, and U. V. Catalyurek, "Parallel hypergraph partitioning for scientific computing," in Proceedings of the 20th IEEE International Parallel & Distributed Processing Symposium, IEEE, 2006.
[8] B. Vastenhouw and R. H. Bisseling, "A two-dimensional data distribution method for parallel sparse matrix-vector multiplication," SIAM Review, vol. 47, no. 1, pp. 67-95, 2005.
[9] Ü. V. Çatalyürek and C. Aykanat, "PaToH (partitioning tool for hypergraphs)," in Encyclopedia of Parallel Computing, pp. 1479-1487, Springer, 2011.
[10] R. Shaydulin, J. Chen, and I. Safro, "Relaxation-based coarsening for multilevel hypergraph partitioning," Multiscale Modeling and Simulation, vol. 17, pp. 482-506, Jan. 2019.
[11] S.-H. Teng, "Scalable algorithms for data and network analysis," Foundations and Trends in Theoretical Computer Science, vol. 12, no. 1-2, pp. 1-274, 2016.
[12] D. Spielman and S. Teng, "Nearly linear time algorithms for preconditioning and solving symmetric, diagonally dominant linear systems," SIAM Journal on Matrix Analysis and Applications, vol. 35, no. 3, pp. 835-885, 2014.
[13] J. A. Kelner, Y. T. Lee, L. Orecchia, and A. Sidford, "An almost-linear-time algorithm for approximate max flow in undirected graphs, and its multicommodity generalizations," in Proceedings of the Twenty-Fifth Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 217-226, SIAM, 2014.
[14] P. Christiano, J. Kelner, A. Madry, D. Spielman, and S. Teng, "Electrical flows, Laplacian systems, and faster approximation of maximum flow in undirected graphs," in Proc. ACM STOC, pp. 273-282, 2011.
[15] D. Spielman and S. Teng, "Spectral partitioning works: Planar graphs and finite element meshes," in Proceedings of the 37th Annual Symposium on Foundations of Computer Science (FOCS), pp. 96-105, IEEE, 1996.
[16] J. R. Lee, S. O. Gharan, and L. Trevisan, "Multiway spectral partitioning and higher-order Cheeger inequalities," Journal of the ACM (JACM), vol. 61, no. 6, p. 37, 2014.
[17] R. Peng, H. Sun, and L. Zanetti, "Partitioning well-clustered graphs: Spectral clustering works," in Proceedings of the 28th Conference on Learning Theory (COLT), pp. 1423-1455, 2015.
[18] T. N. Kipf and M. Welling, "Semi-supervised classification with graph convolutional networks," arXiv preprint arXiv:1609.02907, 2016.
[19] M. Defferrard, X. Bresson, and P. Vandergheynst, "Convolutional neural networks on graphs with fast localized spectral filtering," in Advances in Neural Information Processing Systems, pp. 3844-3852, 2016.
[20] Y. Koren, "On spectral graph drawing," in International Computing and Combinatorics Conference, pp. 496-508, Springer, 2003.
[21] X. Hu, A. Lu, and X. Wu, "Spectrum-based network visualization for topology analysis," IEEE Computer Graphics and Applications, vol. 33, no. 1, pp. 58-68, 2013.
[22] P. Eades, Q. Nguyen, and S.-H. Hong, "Drawing big graphs using spectral sparsification," in International Symposium on Graph Drawing and Network Visualization, pp. 272-286, Springer, 2017.
[23] D. I. Shuman, S. K. Narang, P. Frossard, A. Ortega, and P. Vandergheynst, "The emerging field of signal processing on graphs: Extending high-dimensional data analysis to networks and other irregular domains," IEEE Signal Processing Magazine, vol. 30, no. 3, pp. 83-98, 2013.
[24] F. Galasso, M. Keuper, T. Brox, and B. Schiele, "Spectral graph reduction for efficient image and streaming video segmentation," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 49-56, 2014.
[25] A. Ortega, P. Frossard, J. Kovačević, J. M. Moura, and P. Vandergheynst, "Graph signal processing: Overview, challenges, and applications," Proceedings of the IEEE, vol. 106, no. 5, pp. 808-828, 2018.
[26] X. Zhao, Z. Feng, and C. Zhuo, "An efficient spectral graph sparsification approach to scalable reduction of large flip-chip power grids," in Proc. of IEEE/ACM ICCAD, pp. 218-223, 2014.
[27] L. Han, X. Zhao, and Z. Feng, "An adaptive graph sparsification approach to scalable harmonic balance analysis of strongly nonlinear post-layout RF circuits," IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 34, no. 2, pp. 173-185, 2015.
[28] Z. Feng, "Spectral graph sparsification in nearly-linear time leveraging efficient spectral perturbation analysis," in Design Automation Conference (DAC), 2016 53rd ACM/EDAC/IEEE, pp. 1-6, IEEE, 2016.
[29] Z. Zhao and Z. Feng, "A spectral graph sparsification approach to scalable vectorless power grid integrity verification," in Proceedings of the 54th Annual Design Automation Conference, p. 68, ACM, 2017.
[30] Z. Zhao, Y. Wang, and Z. Feng, "SAMG: Sparsified graph theoretic algebraic multigrid for solving large symmetric diagonally dominant (SDD) matrices," in Proceedings of the 36th International Conference on Computer-Aided Design (ICCAD), ACM, 2017.
[31] Z. Feng, "Similarity-aware spectral sparsification by edge filtering," in Design Automation Conference (DAC), 2018 55th ACM/EDAC/IEEE, IEEE, 2018.
[32] D. Spielman and S. Teng, "Spectral sparsification of graphs," SIAM Journal on Computing, vol. 40, no. 4, pp. 981-1025, 2011.
[33] Z. Feng, "GRASS: Graph spectral sparsification leveraging scalable spectral perturbation analysis," IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 39, no. 12, pp. 4944-4957, 2020.
[34] Y. T. Lee and H. Sun, "An SDP-based algorithm for linear-sized spectral sparsification," in Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing (STOC 2017), pp. 678-687, ACM, 2017.
[35] M. Kapralov, R. Krauthgamer, J. Tardos, and Y. Yoshida, "Spectral hypergraph sparsifiers of nearly linear size," in 2021 IEEE 62nd Annual Symposium on Foundations of Computer Science (FOCS), pp. 1159-1170, IEEE, 2022.
[36] M. Kapralov, R. Krauthgamer, J. Tardos, and Y. Yoshida, "Towards tight bounds for spectral sparsification of hypergraphs," in Proceedings of the 53rd Annual ACM SIGACT Symposium on Theory of Computing, pp. 598-611, 2021.
[37] Y. Zhang, Z. Zhao, and Z. Feng, "SF-GRASS: Solver-free graph spectral sparsification," in 2020 IEEE/ACM International Conference on Computer Aided Design (ICCAD), pp. 1-8, IEEE, 2020.
[38] A. Loukas and P. Vandergheynst, "Spectrally approximating large graphs with smaller graphs," in International Conference on Machine Learning, pp. 3243-3252, 2018.
[39] Z. Zhao and Z. Feng, "Effective-resistance preserving spectral reduction of graphs," in Proceedings of the 56th Annual Design Automation Conference (DAC '19), pp. 109:1-109:6, ACM, 2019.
[40] T.-H. H. Chan, A. Louis, Z. G. Tang, and C. Zhang, "Spectral properties of hypergraph Laplacian and approximation algorithms," Journal of the ACM (JACM), vol. 65, no. 3, pp. 1-48, 2018.
[41] T.-H. H. Chan and Z. Liang, "Generalizing the hypergraph Laplacian via a diffusion process with mediators," Theoretical Computer Science, vol. 806, pp. 416-428, 2020.
[42] I. Bustany, A. B. Kahng, I. Koutis, B. Pramanik, and Z. Wang, "K-SpecPart: Supervised embedding algorithms and cut overlay for improved hypergraph partitioning," IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2023.
[43] I. Bustany, A. B. Kahng, I. Koutis, B. Pramanik, and Z. Wang, "SpecPart: A supervised spectral framework for hypergraph partitioning solution improvement," in Proceedings of the 41st IEEE/ACM International Conference on Computer-Aided Design, pp. 1-9, 2022.
[44] G. Karypis and V. Kumar, "Multilevel k-way hypergraph partitioning," VLSI Design, vol. 11, no. 3, pp. 285-300, 2000.
[45] S. Schlag, T. Heuer, L. Gottesbüren, Y. Akhremtsev, C. Schulz, and P. Sanders, "High-quality hypergraph partitioning," ACM Journal of Experimental Algorithmics, vol. 27, pp. 1-39, 2023.
[46] A. Aghdaei, Z. Zhao, and Z. Feng, "HyperSF: Spectral hypergraph coarsening via flow-based local clustering," in 2021 IEEE/ACM International Conference on Computer Aided Design (ICCAD), pp. 1-8, ACM, 2021.
[47] A. Aghdaei and Z. Feng, "HyperEF: Spectral hypergraph coarsening by effective-resistance clustering," in 2022 IEEE/ACM International Conference on Computer Aided Design (ICCAD), pp. 1-9, ACM, 2022.
[48] F. R. Chung and F. C. Graham, Spectral Graph Theory, no. 92, American Mathematical Society, 1997.
[49] L. Hagen and A. Kahng, "New spectral methods for ratio cut partitioning and clustering," IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 11, no. 9, pp. 1074-1085, 1992.
[50] T. Soma and Y. Yoshida, "Spectral sparsification of hypergraphs," in Proceedings of the Thirtieth Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 2570-2581, SIAM, 2019.
[51] D. Zhou, J. Huang, and B. Schölkopf, "Learning with hypergraphs: Clustering, classification, and embedding," Advances in Neural Information Processing Systems, vol. 19, pp. 1601-1608, 2006.
[52] V. Osipov and P. Sanders, "n-level graph partitioning," in Algorithms-ESA 2010: 18th Annual European Symposium, Liverpool, UK, September 6-8, 2010, Proceedings, Part I, pp. 278-289, Springer, 2010.
[53] V. L. Alev, N. Anari, L. C. Lau, and S. Oveis Gharan, "Graph clustering using effective resistance," in 9th Innovations in Theoretical Computer Science Conference (ITCS 2018), Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik, 2018.
[54] S. Segarra, A. G. Marques, and A. Ribeiro, "Optimal graph-filter design and applications to distributed linear network operators," IEEE Transactions on Signal Processing, vol. 65, no. 15, pp. 4117-4131, 2017.
[55] D. I. Shuman, P. Vandergheynst, D. Kressner, and P. Frossard, "Distributed signal processing via Chebyshev polynomial approximation," IEEE Transactions on Signal and Information Processing over Networks, vol. 4, no. 4, pp. 736-751, 2018.
[56] C. Deng, Z. Zhao, Y. Wang, Z. Zhang, and Z. Feng, "GraphZoom: A multi-level spectral approach for accurate and scalable graph embedding," in International Conference on Learning Representations (ICLR), Apr. 2020.
[57] M. B. Cohen, J. Kelner, J. Peebles, R. Peng, A. B. Rao, A. Sidford, and A. Vladu, "Almost-linear-time algorithms for Markov chains and new spectral primitives for directed graphs," in Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing, pp. 410-419, ACM, 2017.
[58] M. B. Cohen, J. Kelner, R. Kyng, J. Peebles, R. Peng, A. B. Rao, and A. Sidford, "Solving directed Laplacian systems in nearly-linear time through sparse LU factorizations," in Foundations of Computer Science (FOCS), 2018 59th Annual IEEE Symposium on, pp. 898-909, IEEE, 2018.
[59] N. Bell and M. Garland, "Efficient sparse matrix-vector multiplication on CUDA," Tech. Rep. NVR-2008-004, NVIDIA Corporation, 2008.
[60] J. L. Greathouse and M. Daga, "Efficient sparse matrix-vector multiplication on GPUs using the CSR storage format," in Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, pp. 769-780, IEEE Press, 2014.
[61] J. Fowers, K. Ovtcharov, K. Strauss, E. S. Chung, and G. Stitt, "A high memory bandwidth FPGA accelerator for sparse matrix-vector multiplication," in 2014 IEEE 22nd Annual International Symposium on Field-Programmable Custom Computing Machines, pp. 36-43, IEEE, 2014.
[62] D. Merrill and M. Garland, "Merge-based parallel sparse matrix-vector multiplication," in Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, p. 58, IEEE Press, 2016.
[63] D. Buono, F. Petrini, F. Checconi, X. Liu, X. Que, C. Long, and T.-C. Tuan, "Optimizing sparse matrix-vector multiplication for large-scale data analytics," in Proceedings of the 2016 International Conference on Supercomputing, p. 37, ACM, 2016.
[64] M. Steinberger, R. Zayer, and H.-P. Seidel, "Globally homogeneous, locally adaptive sparse matrix-vector multiplication on the GPU," in Proceedings of the International Conference on Supercomputing, p. 13, ACM, 2017.
[65] C. Hong, A. Sukumaran-Rajam, B. Bandyopadhyay, J. Kim, S. E. Kurt, I. Nisa, S. Sabhlok, Ü. V. Çatalyürek, S. Parthasarathy, and P. Sadayappan, "Efficient sparse-matrix multi-vector product on GPUs," in Proceedings of the 27th International Symposium on High-Performance Parallel and Distributed Computing, pp. 66-79, ACM, 2018.
[66] Z. Zhang, H. Wang, S. Han, and W. J. Dally, "SpArch: Efficient architecture for sparse matrix multiplication," in 2020 IEEE International Symposium on High Performance Computer Architecture (HPCA), pp. 261-274, IEEE, 2020.
[67] N. Veldt, A. R. Benson, and J. Kleinberg, "Minimizing localized ratio cut objectives in hypergraphs," in Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 1708-1718, 2020.
[68] C. J. Alpert, "The ISPD98 circuit benchmark suite," in Proceedings of the 1998 International Symposium on Physical Design, pp. 80-85, 1998.
[69] R. Liang, A. Agnesina, and H. Ren, "MedPart: A multi-level evolutionary differentiable hypergraph partitioner," in Proceedings of the 2024 International Symposium on Physical Design, pp. 3-11, 2024.
[70] K. E. Murray, S. Whitty, S. Liu, J. Luu, and V. Betz, "Titan: Enabling large and complex benchmarks in academic CAD," in 2013 23rd International Conference on Field Programmable Logic and Applications, pp. 1-8, IEEE, 2013.