Rapid advancement of distributed sensing and imaging technology has brought a proliferation of high-dimensional spatiotemporal data, e.g., y(s; t) and x(s; t), in manufacturing and healthcare systems. Traditional regression is not generally applicable for predictive modeling in these complex structured systems. For example, infrared cameras are commonly used to capture dynamic thermal images of 3D parts in additive manufacturing. The temperature distribution within parts enables engineers to investigate how process conditions impact the strength, residual stress, and microstructure of fabricated products. Similarly, an ECG sensor network placed on the body surface acquires the distribution of electric potentials y(s; t), also known as body surface potential mapping (BSPM). Medical scientists call for the estimation of electric potentials x(s; t) on the heart surface from BSPM y(s; t) so as to investigate cardiac pathological activities (e.g., tissue damage in the heart). However, spatiotemporally varying data and complex geometries (e.g., the human heart or mechanical parts) defy traditional regression modeling and regularization methods. This talk will present a novel physics-driven spatiotemporal regularization (STRE) method for high-dimensional predictive modeling in complex manufacturing and healthcare systems. The model not only captures the physics-based interrelationship between time-varying explanatory and response variables distributed in space, but also incorporates spatial and temporal regularization to improve prediction performance. Finally, we will introduce our lab at Penn State and discuss future research directions.
In this talk, we discuss some recent advances in probabilistic schemes for high-dimensional PIDEs. It is well known that traditional PDE solvers, e.g., finite element and finite difference methods, do not scale well as the dimension increases. The idea of probabilistic schemes is to link a wide class of nonlinear parabolic PIDEs to stochastic Lévy processes via a nonlinear version of the Feynman-Kac theory. As such, the solution of the PIDE can be represented by a conditional expectation (i.e., a high-dimensional integral) with respect to a stochastic dynamical system driven by Lévy processes. In other words, we can solve the PIDE by performing high-dimensional numerical integration, to which a variety of quadrature methods can be applied, including MC, QMC, and sparse grids. Probabilistic schemes have been used in many applications, e.g., particle transport in plasmas (Vlasov-Fokker-Planck equations), nonlinear filtering (Zakai equations), and option pricing.
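As a minimal sketch of the probabilistic idea (using a plain Brownian diffusion rather than the Lévy-driven dynamics of the talk), the Feynman-Kac representation of the backward heat equation u_t + (1/2) u_xx = 0 with terminal condition u(x, T) = g(x) is u(x, t) = E[g(x + W_{T-t})], which reduces the PDE solve at a point to a Monte Carlo average:

```python
import math
import random

def feynman_kac_mc(g, x, t, T, n_samples=100_000, seed=0):
    """Monte Carlo estimate of u(x, t) = E[g(x + W_{T-t})], the
    Feynman-Kac representation of the backward heat equation
    u_t + (1/2) u_xx = 0 with terminal condition u(., T) = g."""
    rng = random.Random(seed)
    dt = T - t
    total = 0.0
    for _ in range(n_samples):
        w = rng.gauss(0.0, math.sqrt(dt))  # Brownian increment over [t, T]
        total += g(x + w)
    return total / n_samples

# For g(x) = x^2 the exact solution is u(x, t) = x^2 + (T - t),
# so the estimate at (x, t) = (1, 0) with T = 1 should be close to 2.
u = feynman_kac_mc(lambda y: y * y, x=1.0, t=0.0, T=1.0)
```

Replacing the Gaussian increment with increments of a Lévy process (and adding jump terms) gives the representation used for the PIDE case.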
1) The document discusses detection and attribution in climate science, which refers to statistical techniques used to identify the contributions of different forcing factors (like greenhouse gases or solar activity) to changes in climate signals over time.
2) It provides context on the history and development of detection and attribution methods, beginning with early work in the 1970s-1990s and more recent Bayesian approaches.
3) A key paper discussed is one by Katzfuss, Hammerling and Smith (2017) that introduced a Bayesian hierarchical model for climate change detection and attribution to help address uncertainties.
ICML2016: Low-rank tensor completion: a Riemannian manifold preconditioning a... (Hiroyuki KASAI)
Presented at ICML 2016 in New York, USA, on June 20, 2016.
We propose a novel Riemannian manifold preconditioning approach for the tensor completion problem with rank constraint. A novel Riemannian metric, or inner product, is proposed that exploits the least-squares structure of the cost function and takes into account the structured symmetry that exists in Tucker decomposition. The specific metric allows us to use the versatile framework of Riemannian optimization on quotient manifolds to develop preconditioned nonlinear conjugate gradient and stochastic gradient descent algorithms for batch and online setups, respectively. Concrete matrix representations of various optimization-related ingredients are listed. Numerical comparisons suggest that our proposed algorithms robustly outperform state-of-the-art algorithms across different synthetic and real-world datasets.
Beginning with a review of Bayes' theorem and the chain rule, the talk then explains MAP (Maximum A Posteriori) estimation.
In the MAP estimation framework, we can describe many well-known models: naive Bayes, regularized ridge regression, logistic regression, log-linear models, and Gaussian processes.
MAP estimation is a powerful framework for understanding these models from a Bayesian point of view, and it opens the possibility of extending them to semi-supervised variants.
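As a concrete instance of the MAP framework described above, placing a Gaussian prior w ~ N(0, tau^2 I) on the weights of a Gaussian-noise linear model y ~ N(Xw, sigma^2 I) and maximizing the log-posterior recovers ridge regression (a minimal sketch; the variable names are illustrative):

```python
import numpy as np

def map_linear_regression(X, y, sigma2=1.0, tau2=10.0):
    """MAP estimate of w for y ~ N(Xw, sigma2*I) with prior w ~ N(0, tau2*I).
    Maximizing the log-posterior is ridge regression with lam = sigma2 / tau2:
        w_map = argmin ||y - Xw||^2 + lam * ||w||^2
              = (X^T X + lam * I)^{-1} X^T y
    """
    lam = sigma2 / tau2
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([1.0, 2.0, 3.0])
w = map_linear_regression(X, y)  # shrunk toward 0 relative to plain OLS
```

Setting the prior variance tau2 to infinity (lam = 0) recovers maximum likelihood, i.e., ordinary least squares.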
The document proposes a lattice-based approach for consensus clustering. It introduces the consensus clustering problem and existing approaches. It then describes a least-squares criterion for ensemble and combined consensus clustering. A lattice-based algorithm is presented that finds a consensus partition by identifying an antichain of concepts in the lattice formed from a partition context. Computational experiments on synthetic datasets are used to evaluate the lattice-based approach and compare it to state-of-the-art algorithms, using the adjusted Rand index to measure similarity between partitions.
The document discusses achieving higher-order convergence for integration on R^N using quasi-Monte Carlo (QMC) rules. It describes the problem that, when using tensor-product QMC rules on truncated domains, the convergence rate scales with the dimension s as (α log N)^s N^(-α). The goal is to obtain a convergence rate independent of the dimension s. The document proposes using a multivariate decomposition method (MDM) to decompose an infinite-dimensional integral into a sum of finite-dimensional integrals, then applying QMC rules to each integral to achieve the desired higher-order convergence rate.
Pattern-based classification of demographic sequences (Dmitrii Ignatov)
We have proposed prefix-based gapless sequential patterns for the classification of demographic sequences. In comparison to black-box machine learning techniques, this approach provides interpretable patterns suitable for use by professional demographers. As the pattern language, we used Pattern Structures, an extension of Formal Concept Analysis to complex data such as sequences, graphs, and intervals.
This document provides an overview of distributed real-time systems and timed automata. It defines key concepts such as timed runs, timed words, timed languages, clock constraints, timed transition systems, and timed regular languages. It also describes timed automata through definitions, examples, and discussions of emptiness checking, clock regions, and the region automaton. The document outlines these topics over multiple sections and provides figures and examples to illustrate the concepts.
Seminar at IEEE Computational Intelligence Society, Singapore Chapter at School of Electrical and Electronic Engineering, NTU, Singapore, 20 February 2019
This document summarizes a talk on supersymmetric Q-balls and boson stars in (d+1) dimensions. It introduces Q-balls and boson stars as non-topological solitons stabilized by a conserved Noether charge. It discusses properties like existence conditions and the thin wall approximation for Q-balls. For boson stars, it covers different models and properties like rotating and charged boson stars. The document also discusses applications like the AdS/CFT correspondence and holographic superconductors using boson stars in anti-de Sitter spacetime.
The document proposes a new method called Spectral Regression Discriminant Analysis (SRDA) to address the computational challenges of Linear Discriminant Analysis (LDA) on large, high-dimensional datasets. SRDA combines spectral graph analysis and regression to reduce the time complexity of LDA from quadratic to linear. It works by using the eigenvectors of the within-class scatter matrix to define a regression problem, the solution of which provides the projection vectors that maximize class separability. Experiments on four datasets show SRDA has comparable classification accuracy to LDA but can scale to much larger problems.
This document describes the divide-and-conquer algorithm design strategy and provides examples of its use. It discusses:
1) The divide-and-conquer strategy involves dividing a problem into smaller subproblems, solving those subproblems recursively, and combining the solutions.
2) Examples where divide-and-conquer is applied include sorting algorithms like mergesort and quicksort, binary tree traversals, binary search, and multiplying large integers.
3) Analysis techniques like recurrence relations and the master theorem are introduced for analyzing divide-and-conquer algorithms' run times. Specific algorithms like mergesort, quicksort, and binary search are analyzed in more detail.
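The three steps above can be sketched with mergesort, whose recurrence T(n) = 2T(n/2) + Θ(n) resolves to Θ(n log n) by the master theorem (a minimal illustration):

```python
def merge_sort(a):
    """Divide-and-conquer sort: split in half, sort each half recursively,
    then combine.  T(n) = 2*T(n/2) + Theta(n) => Theta(n log n)."""
    if len(a) <= 1:          # base case: trivially sorted
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    # Combine: merge two sorted halves in linear time.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```

Quicksort and binary search follow the same divide-solve-combine pattern with different split and combine steps.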
Sinc collocation linked with finite differences for Korteweg-de Vries Fraction... (IJECEIAES)
A novel numerical method is proposed for the fractional Korteweg-de Vries equation. The fractional derivatives are described in the Caputo sense. We construct the solution using a collocation-based approach: the method combines a finite-difference scheme in the time-fractional direction with Sinc collocation in the space direction, where the derivatives are replaced by the corresponding matrices, yielding a system of algebraic equations that approximates the solution of the problem. Numerical results demonstrate the efficiency of the newly proposed method, whose strength is its easy and economical implementation.
Variants of the Christ-Kiselev lemma and an application to the maximal Fourie... (VjekoslavKovac1)
1. The document discusses variants of the Christ-Kiselev lemma and its application to maximal Fourier restriction estimates.
2. The Christ-Kiselev lemma allows block-diagonal and block-triangular truncations of operators while controlling their operator norms.
3. These lemmas can be used to prove maximal and variational estimates for the restriction of the Fourier transform to surfaces, which has applications in harmonic analysis.
This document provides an outline and overview of key concepts for estimating curves and surfaces from data using basis functions and penalized least squares regression. It discusses representing a curve or surface using basis functions, fitting the coefficients using ordinary least squares, and adding a penalty term to the least squares objective function to produce a smoothed estimate. The smoothing parameter λ controls the tradeoff between fit to the data and smoothness of the estimate. Cross-validation can be used to choose λ.
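A minimal sketch of the penalized least-squares idea (with an illustrative polynomial basis and a second-difference penalty on the coefficients standing in for the usual derivative-based roughness penalty):

```python
import numpy as np

def penalized_fit(x, y, n_basis=10, lam=0.1):
    """Penalized least squares: minimize ||y - B c||^2 + lam * ||D c||^2,
    where B holds basis functions evaluated at x and D takes second
    differences of the coefficient vector.  lam controls the tradeoff
    between fit to the data and smoothness of the estimate."""
    B = np.vander(x, n_basis, increasing=True)  # polynomial basis matrix
    D = np.diff(np.eye(n_basis), n=2, axis=0)   # second-difference operator
    c = np.linalg.solve(B.T @ B + lam * (D.T @ D), B.T @ y)
    return B @ c                                # smoothed fitted values

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = np.sin(2 * np.pi * x) + 0.1 * rng.normal(size=50)
y_hat = penalized_fit(x, y)
```

In practice lam would be chosen by cross-validation, as the outline notes; lam = 0 reduces to ordinary least squares on the basis expansion.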
This document summarizes a research paper that proposes using motif discovery in time series data to make predictions. It begins by defining key concepts in time series analysis like motifs, distances, and ARIMA models. It then describes the specific approach taken - using the Chouakria index with CID similarity measure to identify motifs, which are then used to build prediction models. The results of this approach on several time series are compared to ARIMA models, finding the motif-based models provide better performance according to error metrics.
A common fixed point theorem for two random operators using random mann itera... (Alexander Decker)
This academic article presents a common fixed point theorem for two random operators using a random Mann iteration scheme. It proves that if a sequence defined by the random Mann iteration of two random operators converges, then the limit point is a common fixed point of the two operators. The paper defines relevant concepts such as random operators and random fixed points. It then presents the main theorem and proof that under a contractive condition, the limit of the random Mann iteration is a common fixed point. The proof uses properties of measurable mappings and the convergence of the iterative sequence.
On maximal and variational Fourier restriction (VjekoslavKovac1)
Workshop talk slides, Follow-up workshop to trimester program "Harmonic Analysis and Partial Differential Equations", Hausdorff Institute, Bonn, May 2019.
Trilinear embedding for divergence-form operators (VjekoslavKovac1)
The document discusses a trilinear embedding theorem for divergence-form operators with complex coefficients. It proves that if matrices A, B, C are appropriately p,q,r-elliptic, then there is a bound on the integral of the product of the gradients of the semigroups associated with the operators. The proof uses a Bellman function technique and shows the relationship to the concept of p-ellipticity. It generalizes previous work on bilinear embeddings to the trilinear case.
The document discusses depth-first search (DFS) and breadth-first search (BFS) algorithms for graph traversal. It explains that DFS uses a stack to systematically visit all vertices in a graph by exploring neighboring vertices before moving to the next level, while BFS uses a queue to explore neighboring vertices at the same level before moving to the next. Examples are provided to illustrate how DFS can be used to check for graph connectivity and cyclicity.
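The stack-vs-queue distinction can be sketched as follows (a minimal illustration on an adjacency-list graph; the connectivity check uses DFS):

```python
from collections import deque

def bfs_order(adj, start):
    """Breadth-first traversal with an explicit queue: all neighbors at the
    current level are visited before moving one level deeper."""
    seen, order, queue = {start}, [], deque([start])
    while queue:
        v = queue.popleft()
        order.append(v)
        for w in adj[v]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return order

def is_connected(adj):
    """Depth-first traversal with an explicit stack; the graph is connected
    iff a single traversal reaches every vertex."""
    start = next(iter(adj))
    seen, stack = {start}, [start]
    while stack:
        v = stack.pop()
        for w in adj[v]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == len(adj)

g = {0: [1, 2], 1: [0], 2: [0, 3], 3: [2], 4: []}
# Vertex 4 is isolated, so the graph is disconnected.
```

Swapping the queue for a stack (or vice versa) converts one traversal order into the other; cycle detection follows the same scheme by tracking the vertex from which each neighbor was reached.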
The document discusses algorithms for finding minimum spanning trees in graphs. It describes Prim's algorithm and Kruskal's algorithm. Prim's algorithm works by gradually adding the closest vertex and edge to a growing spanning tree. Kruskal's algorithm sorts all the edges by weight and adds edges to the spanning tree if they do not form cycles. The running time of Prim's algorithm is O(V^2) while Kruskal's algorithm has a running time of O(E log E + V) where V is vertices and E is edges. Examples are provided to illustrate how each algorithm works on sample graphs.
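A minimal sketch of Kruskal's algorithm with a union-find structure for cycle detection (the O(E log E) term comes from the initial sort):

```python
def kruskal(n, edges):
    """Kruskal's MST on vertices 0..n-1.  edges is a list of (w, u, v)
    tuples.  Sort edges by weight, then add each edge that joins two
    different components; union-find detects would-be cycles."""
    parent = list(range(n))

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v

    mst, total = [], 0
    for w, u, v in sorted(edges):          # O(E log E)
        ru, rv = find(u), find(v)
        if ru != rv:                       # edge joins two components
            parent[ru] = rv                # union
            mst.append((u, v, w))
            total += w
    return total, mst

edges = [(1, 0, 1), (2, 1, 2), (3, 0, 2), (4, 2, 3), (5, 1, 3)]
total, mst = kruskal(4, edges)
```

On this graph the edge of weight 3 closes a cycle and is skipped, so the tree uses weights 1, 2, and 4 for a total of 7.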
ESL 4.4.3-4.5: Logistic Regression (contd.) and Separating Hyperplane (Shinichi Tamura)
Presentation material for the reading club of The Elements of Statistical Learning by Hastie et al.
The contents of the sections cover:
- Properties of logistic regression compared to least-squares fitting
- Differences between logistic regression and linear discriminant analysis
- Rosenblatt's perceptron algorithm
- Derivation of the optimal separating hyperplane, which forms the basis of SVMs
-------------------------------------------------------------------------
Presentation slides (entirely in English) for a lab reading group on "The Elements of Statistical Learning" (Hastie et al.). My assigned sections:
- Properties of logistic regression viewed by analogy with least squares
- Comparison of logistic regression and linear discriminant analysis
- Rosenblatt's perceptron algorithm
- Derivation of the optimal separating hyperplane that underlies SVMs
Sparse-Bayesian Approach to Inverse Problems with Partial Differential Equati... (DrSebastianEngel)
This document summarizes topics related to sparse-Bayesian approaches for inverse problems involving partial differential equations. Specifically, it discusses Bayesian inversion for identifying sound sources using the Helmholtz equation and optimal control/inversion for the wave equation with functions of bounded variation in time. The document provides motivation for Bayesian inversion to deal with inherent errors in models and data. It introduces the Bayesian framework for inverse problems, including prior distributions, likelihoods, and obtaining the posterior distribution using Bayes' theorem. Finite and infinite dimensional examples are presented using Gaussian priors.
The document summarizes research on the energy of graphs. It defines the energy of a graph as the sum of the absolute values of its eigenvalues. It shows that for any positive epsilon, there exist infinitely many values of n for which a k-regular graph of order n exists whose energy is arbitrarily close to the known upper bound for k-regular graphs. It also establishes the existence of equienergetic graphs that are not cospectral. Equienergetic graphs have the same energy even if they do not have the same spectrum of eigenvalues.
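The definition can be checked on a small example (a minimal sketch; for the complete graph K3, the adjacency spectrum is {2, -1, -1}, so the energy is 4):

```python
import numpy as np

def graph_energy(A):
    """Energy of a graph: the sum of the absolute values of the eigenvalues
    of its adjacency matrix (real, since A is symmetric)."""
    return float(np.sum(np.abs(np.linalg.eigvalsh(A))))

# Complete graph K3: all-ones matrix minus the identity.
K3 = np.ones((3, 3)) - np.eye(3)
e = graph_energy(K3)  # eigenvalues 2, -1, -1  =>  energy 4
```

Equienergetic graphs, as discussed above, are pairs of graphs for which this quantity coincides even though their spectra differ.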
A method for constructing fuzzy test statistics with application (Alexander Decker)
This document discusses a method for constructing fuzzy test statistics to test fuzzy hypotheses when parameter information is imprecise. It begins with an introduction to fuzzy hypotheses testing and outlines some preliminaries. It then explains how to formulate fuzzy hypotheses using membership functions and defines fuzzy Bayesian hypothesis testing without and with a loss function. As an application, it examines fuzzy hypotheses about the percentage of defectives in a production process. Specifically, it constructs test statistics to test fuzzy hypotheses about whether the defective percentage is between 0.2-0.4% or not based on a sample. It computes the test statistics numerically and provides example results, concluding that the proposed method can test fuzzy hypotheses when information is uncertain.
Efficient Identification of Improving Moves in a Ball for Pseudo-Boolean Prob... (jfrchicanog)
The document proposes a new method to efficiently identify improving moves within a ball of radius r for k-bounded pseudo-Boolean optimization problems. The key ideas are to (1) decompose the scores of potential moves into scores of individual subfunctions, and (2) update only a constant number of subfunction scores in constant time as the solution moves within the ball, rather than recomputing all scores from scratch. This avoids the typical computational cost of O(n^r) and allows identifying improving moves in constant time O(1), independent of the problem size n.
Density theorems for Euclidean point configurations (VjekoslavKovac1)
1. The document discusses density theorems for point configurations in Euclidean space. Density theorems study when a measurable set A contained in Euclidean space can be considered "large".
2. One classical result is that for any measurable set A contained in R^2 with positive upper Banach density, the points of A realize every sufficiently large real number as a distance. This has been generalized to higher dimensions and other point configurations.
3. Open questions remain about determining all point configurations P for which one can show that a sufficiently large measurable set A contained in high dimensional Euclidean space must contain a scaled copy of P.
In my thesis, Over Levi and I presented several novel approaches to the regularization problem:
1. Developed the 2D Discrete Picard condition
2. Designed a new hybrid (L1, L2) norm
3. Implemented an amalgamation of convex function optimization
We also show the effects of the following on inverse problems:
1. L1 and L2 regularization
2. TSVD regularization
3. L-curve optimization
4. 1D and 2D Discrete Picard conditions
Stochastic Alternating Direction Method of Multipliers (Taiji Suzuki)
This document discusses stochastic optimization methods for solving regularized learning problems with structured regularization and large datasets. It proposes applying the alternating direction method of multipliers (ADMM) in a stochastic manner. Specifically, it introduces two stochastic ADMM methods for online data: RDA-ADMM, which extends regularized dual averaging with ADMM updates; and OPG-ADMM, which extends online proximal gradient descent with ADMM updates. These methods allow the regularization term to be optimized in batches, resolving computational difficulties, while the loss is optimized online using only a small number of samples per iteration.
Not Enough Measurements, Too Many Measurements (Mike McCann)
This document summarizes a talk on supervised image reconstruction from measurements. It discusses how convolutional neural networks (CNNs) have been used to learn image reconstruction mappings from training data, either by augmenting direct reconstruction methods, taking inspiration from variational methods, or learning the entire mapping. Examples are given for low-dose X-ray CT reconstruction and single-particle cryo-electron microscopy reconstruction using generative adversarial networks. The document also discusses learning regularizers from data for image reconstruction within a variational framework.
This document summarizes Nuno Brás's PhD dissertation on developing new approaches for magnetic induction tomography (MIT). The dissertation aimed to implement a new MIT prototype and numerical framework to handle large numbers of measurements. It developed an experimental moving coil system that achieved state-of-the-art sensitivity. A new 3D eddy current solver was also created with competitive speed and low error. Additionally, an alternating direction method of multipliers was developed and shown to provide advantages for large datasets, being applied to 2D and 3D inverse problems.
This document studies the sensitivity of body surface potentials to variations in cardiac size using numerical and analytical boundary element methods. A spherical heart and torso model is used. Simulation results show there is a linear relationship between heart size and body surface potentials, with about a 0.01% rise in body surface potential and 5.14% rise in epicardial potential reported for a 10% increase in heart size. This confirms electromagnetic laws relating potential to source-observation distance. The study establishes a direct relationship between heart size and body surface potentials while neglecting other factors.
This document describes using the linear sampling method to reconstruct the shape of two-dimensional dielectric targets from scattered field measurements. The linear sampling method solves an integral equation to determine if a test point is inside the target support. Regularization is used to address ill-posedness. Numerical results show the reconstructed shape varies slightly with frequency and accuracy improves with more transmitters/receivers. The method provides fast reconstruction of target support but not material properties. Future work includes extending to 3D imaging and using linear sampling method results to initialize other reconstruction algorithms.
This document describes using the linear sampling method to reconstruct the shape of two-dimensional dielectric targets from scattered field measurements. The method involves solving an integral equation to determine if a test point is inside the target support. Regularization is used to handle ill-posedness. Results show the reconstructed shape varies slightly with frequency and accuracy increases with more transmitters and receivers. The method provides fast reconstruction of target support but cannot determine material properties.
Alexander Litvinenko's research interests include developing efficient numerical methods for solving stochastic PDEs using low-rank tensor approximations. He has made contributions in areas such as fast techniques for solving stochastic PDEs using tensor approximations, inexpensive functional approximations of Bayesian updating formulas, and modeling uncertainties in parameters, coefficients, and computational geometry using probabilistic methods. His current research focuses on uncertainty quantification, Bayesian updating techniques, and developing scalable and parallel methods using hierarchical matrices.
This document summarizes research on edge-preserving image reconstruction methods for electrical impedance tomography (EIT) applied to lung imaging. It presents three main contributions:
1. A level set-based reconstruction algorithm that was tested on clinical EIT data from lung patients, showing improved results over conventional methods.
2. An algorithm using level sets and L1 norms that better preserves edges and is more robust to noise and outliers.
3. A generalized inverse problem formulation using weighted L1 and L2 norms on data and regularization terms, improving robustness.
Evaluation of the methods showed they produced more accurate shapes and were more robust to uncertainties compared to traditional techniques. Future work is proposed on combining level
Compatible discretizations in our hearts and mindsMarie E. Rognes
This document discusses a total pressure augmented formulation for simulating fluid flow in porous media, such as modeling cerebral fluid flow in the brain. The formulation introduces total pressure as a variable to overcome issues with Poisson locking in the incompressible limit. The formulation results in a coupled system of equations that describes solid displacement, total pressure, and fluid pressures. Finite element methods are developed using this formulation that achieve optimal convergence rates, including in the incompressible limit, using Taylor-Hood elements. Numerical experiments demonstrate the improved convergence rates over standard formulations.
A New Hybrid Inversion Method For 2D Nuclear Magnetic Resonance Combining TSV...Pedro Craggett
This paper presents a new hybrid method for inverting 2D nuclear magnetic resonance (NMR) data that combines truncated singular value decomposition (TSVD) and Tikhonov regularization. The method computes the exact TSVD of the kernel matrix using its Kronecker product structure, avoiding approximations. It then solves a Tikhonov-like optimization problem using the truncated kernel. The paper also proposes using the Discrete Picard Condition to automatically select both the TSVD truncation index and Tikhonov regularization parameter. The performance of the new hybrid method is evaluated on simulated and real NMR data.
Using Feature Grouping as a Stochastic Regularizer for High Dimensional Noisy...WiMLDSMontreal
"Using Feature Grouping as a Stochastic Regularizer for High Dimensional Noisy Data"
By Sergül Aydöre, Assistant Professor at Stevens Institute of Technology
Abstract:
The use of complex models –with many parameters– is challenging with high-dimensional small-sample
problems: indeed, they face rapid overfitting. Such situations are common when data collection is expensive,
as in neuroscience, biology, or geology. Dedicated regularization can be crafted to tame overfit, typically via
structured penalties. But rich penalties require mathematical expertise and entail large computational costs.
Stochastic regularizers such as dropout are easier to implement: they prevent overfitting by random perturbations.
Used inside a stochastic optimizer, they come with little additional cost. We propose a structured stochastic
regularization that relies on feature grouping. Using a fast clustering algorithm, we define a family of
groups of features that capture feature covariations. We then randomly select these groups inside a stochastic
gradient descent loop. This procedure acts as a structured regularizer for high-dimensional correlated data
without additional computational cost and it has a denoising effect. We demonstrate the performance of our
approach for logistic regression both on a sample-limited face image dataset with varying additive noise and on
a typical high-dimensional learning problem, brain image classification.
IRJET- Image Reconstruction Algorithm for Electrical Impedance Tomographic Sy...IRJET Journal
This document summarizes research on using electrical impedance tomography (EIT) to reconstruct images showing conductivity distributions within a volume. It describes using MATLAB to simulate EIT data and evaluate different reconstruction algorithms, including back projection, filtered back projection, Gauss-Newton, Tikhonov regularization, NOSER, total variation, and dynamic regularization. Simulation results show total variation and dynamic regularization produce clearer reconstructed images compared to other methods. The accuracy of EIT imaging depends on hardware, electrodes, conductivity distributions, and the reconstruction algorithm.
This document summarizes Tony Chan's talk on total variation and geometric regularization for inverse problems. It discusses total variation regularization, which allows for edge-preserving restoration of images while controlling smoothness. It also discusses geometric regularization using level set methods, which can automatically detect objects and handle changes in topology. Applications discussed include image restoration, segmentation, and medical tomography. Level set representations are described for representing curves and surfaces that can merge or break apart during evolution.
Lagrangian Fluid Simulation with Continuous Convolutionsfarukcankaya
Fluids considered as a set of particles and interaction between particles are computed by point clouds. It makes spatial convolutions by using spherical coordinates. Besides, the differentiable network can simulate drastically different geometry and estimate the material property used for the inverse problem. Results demonstrate that the continuous convolution network outperforms prior formulations in terms of accuracy and speed.
Image Restoration Using Joint Statistical Modeling in a Space-Transform Domainjohn236zaq
This document summarizes a research paper that presents a novel strategy for high-fidelity image restoration. It establishes a joint statistical model in an adaptive hybrid space-transform domain to characterize both local smoothness and nonlocal self-similarity of natural images. A new minimization functional is formulated using this joint statistical model within a regularization framework. A Split Bregman-based algorithm is developed to efficiently solve the severely underdetermined inverse problem and recover images from degradation while preserving details. Experiments on image inpainting, deblurring and denoising demonstrate the effectiveness of the proposed approach.
This document proposes a method called Fast DiffusionMBIR to solve 3D inverse problems using pre-trained 2D diffusion models. It augments the 2D diffusion prior with a model-based total variation prior to encourage consistency across image slices. The method performs denoising across image slices in parallel using a 2D diffusion score function, and then jointly optimizes data consistency and the total variation prior between slices. It shares primal and dual variables between iterations for faster convergence. Results on sparse-view CT reconstruction show coherent volumetric results across all slices.
MVPA with SpaceNet: sparse structured priorsElvis DOHMATOB
The GraphNet (aka S-Lasso), as well as other “sparsity + structure” priors like TV (Total-Variation), TV-L1, etc., are not easily applicable to brain data because of technical problems
relating to the selection of the regularization parameters. Also, in
their own right, such models lead to challenging high-dimensional optimization problems. In this manuscript, we present some heuristics for speeding up the overall optimization process: (a) Early-stopping, whereby one halts the optimization process when the test score (performance on leftout data) for the internal cross-validation for model-selection stops improving, and (b) univariate feature-screening, whereby irrelevant (non-predictive) voxels are detected and eliminated before the optimization problem is entered, thus reducing the size of the problem. Empirical results with GraphNet on real MRI (Magnetic Resonance Imaging) datasets indicate that these heuristics are a win-win strategy, as they add speed without sacrificing the quality of the predictions. We expect the proposed heuristics to work on other models like TV-L1, etc.
About two billion people worldwide (30% of the world population) have been infected with TB. It can virtually affect any organ but lungs are the most frequent and initial site of involvement.
Airborne Mycobacteria (1-5 micrometer) are transmitted via droplets. Individuals exposed have the probability of getting infected based upon various factors (infectiousness of the source, the environment, duration of exposure and the immune status of the exposed individual).
Title: Nerve Injury in Oral Surgery – Overview, Classification & Management
Description:
This presentation provides a comprehensive overview of nerve injuries relevant to oral and maxillofacial surgery. It covers the types of nerve injuries, common causes during dental procedures, clinical features, diagnostic approaches, and current management strategies. Ideal for dental students, interns, and professionals looking to enhance their understanding of neurotrauma in oral surgery.
*Thyroid Gland – Anatomy**
**Location*
* Located in the **anterior neck**, below the **larynx**.
* Lies over the **trachea**, from the level of **C5 to T1 vertebrae**.
*Structure**
Bilobed gland**: Right and left lobes connected by a thin bridge called the **isthmus**.
* Sometimes a pyramidal lobe is present (remnant of thyroglossal duct).
**Size & Weight**
* ~4-6 cm in length.
* Weighs about **15–25 grams** in adults.
**Capsules**
* **True capsule**: Thin fibrous capsule.
* **False capsule**: Formed by the pretracheal fascia.
*Blood Supply**
*Arterial supply**:
* **Superior thyroid artery** (from external carotid)
* **Inferior thyroid artery** (from thyrocervical trunk)
* Occasionally: **Thyroid ima artery** (from brachiocephalic trunk or aorta)
* **Venous drainage**:
* **Superior**, **middle**, and **inferior thyroid veins** → drain into internal jugular and brachiocephalic veins.
**Lymphatic Drainage**
* Prelaryngeal, pretracheal, and paratracheal lymph nodes → deep cervical nodes.
Nerve Supply**
Sympathetic**: Cervical sympathetic ganglia (vasomotor).
Parasympathetic**: From the vagus nerve** (via recurrent laryngeal and external laryngeal nerves).
* Recurrent laryngeal nerve runs close to the thyroid — important surgically.
**Development**
* Develops from a **midline endodermal outgrowth** of the **floor of the pharynx** (foramen cecum).
* Migrates downward via the **thyroglossal duct** (normally obliterated).
Biophysics of Ion Channels – A Key Concept in Cellular Communication
Dive into the fascinating world of ion channels with this SlideShare presentation designed for nursing, allied health, and biomedical students. Understand how these microscopic gatekeepers regulate essential physiological functions.
Covered in this presentation:
🔹 Types of ion channels – voltage-gated, ligand-gated, and mechanically-gated
🔹 Mechanism of ion movement across membranes
🔹 Generation of membrane potential and action potential
🔹 Role in nerve signaling and muscle contraction
🔹 Channelopathies – clinical disorders related to ion channel dysfunction
🔹 Drug targets and pharmacological relevance
This visually structured content helps simplify complex biophysical concepts and links them directly to clinical practice and disease understanding.
✅ Ideal for BSc Nursing, Post-RN, and medical science learners
🔗 Follow for more Biophysics and Nursing content!
EXPLORING THE ROLE OF PROBIOTICS AND POSTBIOTICS IN MODERN MEDICNE.pptxKeerthanaJagan
This presentation is all about gut microbiome, discovery of probiotics, Postbiotics, prebiotic. Their definition, comparison, clinical and non clinical application, future perspectives of probiotics and Postbiotics, limitations of probiotics, why Postbiotics is superior to probiotics, advantages of Postbiotics, role in autism, acute pancreatitis, type 1 diabetes mellitus, type 2 diabetes mellitus, clostridium deficile infection, colorectal cancer, NEC in Very low birth weight newborns, VAP, Biological doping, food biofilm removal, anti obesogenic role, immunomodulation, SCFAs, cosmetic applications, hypertension management, gut brain axis, migrain management,
Biophysical Processes – Core Concepts in Biophysics for Nursing & Allied Health
Explore the essential biophysical processes that govern life at the cellular and systemic level in this visual and concise SlideShare presentation.
Topics covered include:
🔹 Diffusion and osmosis
🔹 Filtration and active transport
🔹 Surface tension and viscosity
🔹 Physical laws: Fick’s Law, Poiseuille’s Law
🔹 Real-life examples from human physiology
🔹 Clinical relevance in nursing and medicine
Designed for nursing, paramedical, and allied health science students, this slide deck simplifies complex physical principles and shows how they apply in real healthcare scenarios.
✅ Great for exam prep, lectures, and quick revision
🔗 Follow for more educational content in Biophysics and Nursing Education!
A transducer is a device that converts energy from one form to another. Ultrasound transducers are used to convert. An electrical signal into ultrasonic energy that can be transmitted into tissue. Ultrasonic energy reflected back from the tissue into an electrical signal. Transducer and scan preset can be selected for each patient before the exam
Order and Disorder in a Biological System – Biophysics Explained
Understand how life maintains balance between order and disorder through key biophysical principles in this engaging SlideShare presentation. Designed for nursing and allied health students, this resource links thermodynamics to biological structure and function.
Key topics include:
🔹 Concept of entropy in biological systems
🔹 Thermodynamic laws and spontaneity of biological processes
🔹 How living organisms maintain order (e.g., enzyme activity, energy flow)
🔹 Examples of disorder in disease and molecular breakdown
🔹 Role of homeostasis in balancing internal systems
🔹 Real-world clinical correlations
This slide deck simplifies abstract concepts and demonstrates their importance in cell biology, physiology, and patient care.
✅ Perfect for BSc Nursing, Post-RN, and paramedical students
🔗 Follow for more content on Biophysics, Nursing, and Medical Science!
Understanding the Urinary System: From Filtration to ExcretionDr Arathy R Nath
This presentation provides a clear, concise, and informative overview of the human urinary system, designed for students, educators, and healthcare professionals. It covers essential aspects of urinary anatomy and physiology, focusing on:
🔹 Anatomical structure of kidneys, ureters, bladder, and urethra
🔹 Urine formation through filtration, reabsorption, and secretion
🔹 The Juxtaglomerular Apparatus (JGA) and its role in renal regulation
🔹 The Renin-Angiotensin-Aldosterone System (RAAS) in blood pressure and fluid control
🔹 Acid-base balance mechanisms managed by the kidneys
🔹 Clearance tests for evaluating renal function
🔹 The micturition reflex and control of urination
Backed by standard textbooks and clinical sources, this deck serves as a great resource for learning or revising urinary system physiology. Includes references for further reading.
The neurocranium has a dome-like roof, the calvaria (skullcap), and a floor or cranial base (basicranium)
The bones forming the calvaria are primarily flat bones (frontal, parietal, and occipital).
The cranial base (basicranium) is the inferior portion of the neurocranium (floor of the cranial cavity) and viscerocranium minus the mandible
Landmark breast cancer trials used in NCCN ,ESMO,ASCOarmin40254
There are several clinical trials which are used in International guidelines including NCCN ,ESMO,ASCO to develop treatment protocols for breast cancer. I summarize some of the landmark trials and describes the Famous Milan trial,it’s a surgical trial.
guidelines for the safe use of contrast agents in radiology. PPTkhaleelhadi10
guidelines for the use of IV contrast in radiology.
information for the radiologist for the safe use of contrast agents in CT and MRI. risk of contrast reaction and renal injury after the use of intravenous contrast , precautions and preparation for the safe use of these agents.
Collection, Transportation and Processing of Clinical Samples by Dhineshkumar...DhineshkumarNpM
Physics-driven Spatiotemporal Regularization for High-dimensional Predictive Modeling
1. Physics-driven Spatiotemporal Regularization for High-dimensional Predictive Modeling
Bing Yao and Hui Yang
杨 徽
Associate Professor
Complex Systems Monitoring, Modeling and Control Lab
The Pennsylvania State University
University Park, PA 16802
November 25, 2017
9. Introduction Methodology Experiments References
High-dimensional Predictive Modeling
BSPM y(s, t) ⇄ Heart-surface potential mapping x(s, t)
Forward model: Y(s, t) = RX(s, t) + ε
Inverse problem: estimate x(s, t) from y(s, t)
Traditional regression is not generally applicable!
Hui Yang (PSU) Spatiotemporal Regularization November 25, 2017 9 / 42
10. Challenges
Spatially-temporally big data
Dimensionality
Velocity - sampling in milliseconds
Veracity - data uncertainty
11. Challenges
Complex structured systems
Complex geometries of AM builds
Complex torso-heart geometry
(*from CIMP-3D @ PSU)
12. Challenges
Y(s, t) = RX(s, t) + ε
Outer surface profiles y(s, t) ⇒ Inner surface profiles x(s, t)
Transfer matrix R?
Physical principles
Additive manufacturing - Heat transfer model
Heart - Electrical wave propagation
Ill-conditioned system
Linear systems involving high-dimensional data
Condition number: cond(R) = ‖R‖ ‖R⁻¹‖
A measure of the relative sensitivity of the solution to changes in y:
‖Δx‖/‖x‖ ≤ cond(R) · ‖Δy‖/‖y‖
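The amplification bound above can be checked numerically. A minimal sketch (the matrix R below is a synthetic stand-in with rapidly decaying singular values, not data from the talk):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20

# Build a transfer matrix R with rapidly decaying singular values,
# mimicking a discretized inverse problem.
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 10.0 ** -np.linspace(0, 8, n)      # singular values from 1 down to 1e-8
R = U @ np.diag(s) @ V.T

x_true = rng.standard_normal(n)
y = R @ x_true

# A tiny relative perturbation of y is amplified by up to cond(R) in x.
dy = 1e-6 * np.linalg.norm(y) * rng.standard_normal(n) / np.sqrt(n)
x_naive = np.linalg.solve(R, y + dy)

rel_y = np.linalg.norm(dy) / np.linalg.norm(y)
rel_x = np.linalg.norm(x_naive - x_true) / np.linalg.norm(x_true)
```

Here cond(R) = 1e8, so a relative data error of about 1e-6 can destroy the naive solution entirely, which is why regularization is needed.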
13. State of the Art
Tikhonov regularization
min_{x(s,t)} { ‖y(s, t) − Rx(s, t)‖₂² + λ² ‖Γx(s, t)‖₂² }
L1 regularization
min_{x(s,t)} { ‖y(s, t) − Rx(s, t)‖₂² + λ² ‖Γx(s, t)‖₁ }
Zeroth-order: Γ = I
Directly penalize the magnitude of x(s, t)
Sparsity vs. Regularity
Does not account for spatial or temporal correlations
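For the zeroth-order Tikhonov case, the estimate has the closed form x̂ = (RᵀR + λ²ΓᵀΓ)⁻¹Rᵀy. A minimal sketch (function name and test problem are my choices):

```python
import numpy as np

def tikhonov(R, y, lam, Gamma=None):
    """Tikhonov estimate: argmin ||y - R x||^2 + lam^2 ||Gamma x||^2.

    Closed form via the normal equations:
    x = (R^T R + lam^2 Gamma^T Gamma)^{-1} R^T y.
    """
    n = R.shape[1]
    if Gamma is None:                 # zeroth order: Gamma = I
        Gamma = np.eye(n)
    return np.linalg.solve(R.T @ R + lam**2 * Gamma.T @ Gamma, R.T @ y)

rng = np.random.default_rng(1)
R = rng.standard_normal((30, 30)) / np.sqrt(30)
x_true = rng.standard_normal(30)
y = R @ x_true + 0.01 * rng.standard_normal(30)   # noisy measurements

x_hat = tikhonov(R, y, lam=0.1)
```

A first-order variant is obtained by passing the bidiagonal gradient matrix as `Gamma` instead of the identity.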
14. State of the Art
First-order Regularization
The first-order derivative: Γx(s, t) = ∂x(s, t)/∂τ
Align x(s, t) in one column as {x(s1|t), x(s2|t), ..., x(sN|t)}ᵀ
Apply the bidiagonal gradient matrix
Normal derivative operator: Γx(s, t) = ∂x(s, t)/∂n
Γ = [ −1  1
          −1  1
              ⋱  ⋱
                 −1  1 ]
(Figure: the normal direction n and the tangential direction τ on the tangent plane at a surface node)
Need to fill the gaps
15. Physics-driven Spatiotemporal Regularization
Objective function
min_{x(s,t)} Σ_{t=1}^{T} { ‖y(s, t) − Rx(s, t)‖² + λs² ‖Δs x(s, t)‖² + λt² Σ_{τ=t−w/2}^{t+w/2} ‖x(s, t) − x(s, τ)‖² }
Parameter Matrix R - physics-based interrelationship
Spatial regularity - handle approximation errors by spatial correlation
Temporal regularity - model robustness to measurement noises
Algorithm - generalized dipole multiplicative update method
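The three terms of the objective can be made concrete by simply evaluating it. A sketch (array shapes and names are my choices; Δs is a placeholder identity here, not a real surface Laplacian):

```python
import numpy as np

def stre_objective(X, Y, R, Delta_s, lam_s, lam_t, w):
    """Evaluate the STRE objective for X of shape (N, T), Y of shape (M, T):
    sum_t ||y_t - R x_t||^2 + lam_s^2 ||Delta_s x_t||^2
         + lam_t^2 * sum_{tau in window of width w} ||x_t - x_tau||^2
    """
    T = X.shape[1]
    J = 0.0
    for t in range(T):
        J += np.sum((Y[:, t] - R @ X[:, t]) ** 2)          # data fidelity
        J += lam_s**2 * np.sum((Delta_s @ X[:, t]) ** 2)   # spatial term
        lo, hi = max(0, t - w // 2), min(T - 1, t + w // 2)
        for tau in range(lo, hi + 1):                      # temporal window
            J += lam_t**2 * np.sum((X[:, t] - X[:, tau]) ** 2)
    return J

rng = np.random.default_rng(2)
N, M, T = 8, 10, 5
R = rng.standard_normal((M, N))
Delta_s = np.eye(N)                  # placeholder spatial operator
X = rng.standard_normal((N, T))
Y = R @ X                            # exact, noise-free data

# With exact data and zero penalties the objective vanishes.
J = stre_objective(X, Y, R, Delta_s, lam_s=0.0, lam_t=0.0, w=2)
```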
16. Parameter Matrix R
Divergence theorem: if F is a vector field which is continuously differentiable and defined on a volume V ⊂ R³ with a piecewise-smooth boundary S, then
∫_V (∇ · F) dV = ∮_S (F · n) dS
(Figure: electric field in the volume between the heart surface SH and the body surface SB)
Green's second identity: if φ and ψ are twice continuously differentiable on V, and we let F = φ∇ψ − ψ∇φ, then
∮_S (φ∇ψ − ψ∇φ) · n dS = ∫_V (φ∇²ψ − ψ∇²φ) dV
17. Parameter Matrix R
Heart - a bioelectric source
Torso - a homogeneous and isotropic volume conductor
(*from marvel.com)
18. Parameter Matrix R
Heart - a bioelectric source
Torso - a homogeneous and isotropic volume conductor
Green's second identity:
∮_S (φ∇ψ − ψ∇φ) · n dS = ∫_V (φ∇²ψ − ψ∇²φ) dV
ψ = 1/r; φ = electric potentials
No electrical source between SH and SB: ∇²φ = 0
Electric field outside SB is negligible: ∇φ = 0 on SB
(Figure: the torso volume conductor bounded by the heart surface SH, carrying x(s, t), and the body surface SB, carrying y(s, t), with solid angles dΩBB, dΩBH and surface elements dSB, dSH)
19. Parameter Matrix R
Boundary element method
Body surface potential on SB:
y(s, t) = −(1/4π) ∫_SH x(s, t) dΩBH − (1/4π) ∫_SH (∇x(s, t) · n / rBH) dSH + (1/4π) ∫_SB y(s, t) dΩBB
Heart surface potential on SH:
x(s, t) = −(1/4π) ∫_SH x(s, t) dΩHH − (1/4π) ∫_SH (∇x(s, t) · n / rHH) dSH + (1/4π) ∫_SB y(s, t) dΩHB
Numerical integration:
ABB y(s, t) + ABH x(s, t) + MBH N(s, t) = 0
AHB y(s, t) + AHH x(s, t) + MHH N(s, t) = 0
Parameter matrix R:
R = (ABB − MBH MHH⁻¹ AHB)⁻¹ (MBH MHH⁻¹ AHH − ABH)
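The formula for R comes from eliminating the normal-gradient vector N(s, t) between the two discretized equations. The elimination can be sketched with random stand-ins for the BEM coefficient matrices (the matrices below are not actual BEM output; diagonal shifts keep them invertible):

```python
import numpy as np

rng = np.random.default_rng(3)
nB, nH = 12, 8     # toy numbers of body-surface and heart-surface nodes

# Random stand-ins for the BEM coefficient matrices.
A_BB = rng.standard_normal((nB, nB)) + nB * np.eye(nB)
A_BH = rng.standard_normal((nB, nH))
A_HB = rng.standard_normal((nH, nB))
A_HH = rng.standard_normal((nH, nH))
M_BH = rng.standard_normal((nB, nH))
M_HH = rng.standard_normal((nH, nH)) + nH * np.eye(nH)

# Eliminate N from:  A_BB y + A_BH x + M_BH N = 0
#                    A_HB y + A_HH x + M_HH N = 0
W = M_BH @ np.linalg.inv(M_HH)
R = np.linalg.inv(A_BB - W @ A_HB) @ (W @ A_HH - A_BH)

# Check: any pair (x, y = R x) solves both equations for some N.
x = rng.standard_normal(nH)
y = R @ x
N = -np.linalg.solve(M_HH, A_HB @ y + A_HH @ x)
```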
20. Physics-driven Spatiotemporal Regularization
Objective function
min_{x(s,t)} Σ_{t=1}^{T} { ‖y(s, t) − Rx(s, t)‖² + λs² ‖Δs x(s, t)‖² + λt² Σ_{τ=t−w/2}^{t+w/2} ‖x(s, t) − x(s, τ)‖² }
Parameter Matrix R - physics-based interrelationship
Spatial regularity - handle approximation errors by spatial correlation
Temporal regularity - model robustness to measurement noises
Algorithm - generalized dipole multiplicative update method
21. Spatial Regularity
Surface Laplacian Δs for a square lattice
Surface Laplacian at the node p0 (second-order Taylor expansion of the four nearest neighbors, lattice spacing a):
x1 = x0 + a ∂x/∂v|p0 + (1/2) a² ∂²x/∂v²|p0
x2 = x0 − a ∂x/∂u|p0 + (1/2) a² ∂²x/∂u²|p0
x3 = x0 − a ∂x/∂v|p0 + (1/2) a² ∂²x/∂v²|p0
x4 = x0 + a ∂x/∂u|p0 + (1/2) a² ∂²x/∂u²|p0
⇒ x1 + x2 + x3 + x4 = 4x0 + a² (∂²x/∂u² + ∂²x/∂v²)|p0 = 4x0 + a² Δx0
⇒ Δx0 = (1/a²) (Σ_{i=1}^{4} xi − 4x0) = (4/a²) (x̄ − x0)
Laplacian matrix for the square lattice:
Δij = −4/a², if i = j
Δij = 1/a², if i ≠ j and pj ∈ neighborhood of pi
Δij = 0, otherwise
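The lattice Laplacian matrix above can be assembled directly. A sketch (boundary rows follow the slide's formula literally, so the five-point stencil is only exact at interior nodes):

```python
import numpy as np

def lattice_laplacian(rows, cols, a=1.0):
    """Surface Laplacian matrix for a rows x cols square lattice, spacing a:
    Delta[i, i] = -4/a^2; Delta[i, j] = 1/a^2 for the 4-neighbors of node i.
    """
    n = rows * cols
    D = np.zeros((n, n))
    for r in range(rows):
        for c in range(cols):
            i = r * cols + c
            D[i, i] = -4.0 / a**2
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols:
                    D[i, rr * cols + cc] = 1.0 / a**2
    return D

D = lattice_laplacian(5, 5)

# Interior check: for the quadratic field x = u^2 + v^2 the stencil is
# exact, so (D x) equals the continuous Laplacian 4 at interior nodes.
u, v = np.meshgrid(np.arange(5), np.arange(5), indexing="ij")
x = (u**2 + v**2).astype(float).ravel()
lap = D @ x                          # node 12 is the center (2, 2)
```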
22. Spatial Regularity
Linear interpolation (imaginary nearest neighbor at distance d̄i):
x̂t(j) = xt(i) + (d̄i/dij) (xt(j) − xt(i))
dij is the distance between pi and pj
d̄i = (1/ni) Σ_{j=1}^{ni} dij
ni is the number of neighbors of pi
Surface Laplacian at the node pi:
Δs xt(i) = (4/d̄i²) ( (1/ni) Σ_{j=1}^{ni} x̂t(j) − xt(i) )
= (4/d̄i) ( (1/ni) Σ_{j=1}^{ni} xt(j)/dij − ( (1/ni) Σ_{j=1}^{ni} 1/dij ) xt(i) )
Laplacian matrix for the 3D triangle mesh:
Δij = −(4/d̄i) (1/ni) Σ_{k=1}^{ni} 1/dik, if i = j
Δij = (4/d̄i) (1/ni) (1/dij), if i ≠ j and pj ∈ neighborhood of pi
Δij = 0, otherwise
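The mesh version can be sketched the same way; the plus-shaped five-node example below checks that equal edge lengths recover the square-lattice values (the data structure and names are my choices, not the authors' code):

```python
import numpy as np

def mesh_laplacian(points, neighbors):
    """Distance-weighted surface Laplacian for an irregular mesh.

    neighbors[i] lists the mesh-neighbor indices of node i.
    Off-diagonal: (4/dbar_i) * (1/n_i) * (1/d_ij);
    diagonal: -(4/dbar_i) * (1/n_i) * sum_j 1/d_ij,
    so each row sums to zero (constant fields have zero Laplacian).
    """
    n = len(points)
    D = np.zeros((n, n))
    for i in range(n):
        dists = {j: np.linalg.norm(points[i] - points[j]) for j in neighbors[i]}
        ni = len(dists)
        dbar = sum(dists.values()) / ni
        for j, dij in dists.items():
            D[i, j] = (4.0 / dbar) * (1.0 / ni) * (1.0 / dij)
        D[i, i] = -(4.0 / dbar) * (1.0 / ni) * sum(1.0 / d for d in dists.values())
    return D

# Plus-shaped patch: center node 0 with 4 neighbors at unit distance.
points = np.array([[0, 0], [1, 0], [-1, 0], [0, 1], [0, -1]], dtype=float)
neighbors = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}
D = mesh_laplacian(points, neighbors)
```

With all edge lengths equal to a, the center row reproduces the lattice values −4/a² and 1/a².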
23. Physics-driven Spatiotemporal Regularization
Objective function
min_{x(s,t)} Σ_{t=1}^{T} { ‖y(s, t) − Rx(s, t)‖² + λs² ‖Δs x(s, t)‖² + λt² Σ_{τ=t−w/2}^{t+w/2} ‖x(s, t) − x(s, τ)‖² }
Parameter Matrix R - physics-based interrelationship
Spatial regularity - handle approximation errors by spatial correlation
Temporal regularity - model robustness to measurement noises
Algorithm - generalized dipole multiplicative update method
24. Temporal Regularity
Spatiotemporal data x(s, t) and y(s, t) - dynamically evolving over time and have temporal correlations
Σ_{t=1}^{T} Σ_{τ=t−w/2}^{t+w/2} ‖x(s, t) − x(s, τ)‖²
25. Physics-driven Spatiotemporal Regularization
Objective function
min_{x(s,t)} Σ_{t=1}^{T} { ‖y(s, t) − Rx(s, t)‖² + λs² ‖Δs x(s, t)‖² + λt² Σ_{τ=t−w/2}^{t+w/2} ‖x(s, t) − x(s, τ)‖² }
Parameter Matrix R - physics-based interrelationship
Spatial regularity - handle approximation errors by spatial correlation
Temporal regularity - model robustness to measurement noises
Algorithm - generalized dipole multiplicative update method
26. DMU Algorithm
Objective function - contains both spatial and temporal terms and is difficult to solve analytically.
Iterative algorithm - the traditional multiplicative update method requires x(s, t) to be nonnegative
Heart surface - negative and positive electric potentials
A new dipole multiplicative update algorithm for generalized spatiotemporal regularization:
xt = xt⁺ − xt⁻, xt⁺ = max{0, xt}, xt⁻ = max{0, −xt}
27. DMU Algorithm
If we define
A = A⁺ − A⁻ = RᵀR + λs² Δsᵀ Δs + 2 λt² w I
B = ytᵀ R + 2 λt² Σ_{τ=t−w/2}^{t−1} xτᵀ + 2 λt² Σ_{τ=t+1}^{t+w/2} xτᵀ
the objective function can be rewritten as:
J = Σ_{t=1}^{T} { xtᵀ A xt − B xt − xtᵀ Bᵀ }
= (xt⁺)ᵀ A⁺ xt⁺ − (xt⁺)ᵀ A xt⁻ − (xt⁻)ᵀ A xt⁺ − (xt⁺)ᵀ A⁻ xt⁺ + (xt⁻)ᵀ A⁺ xt⁻ − (xt⁻)ᵀ A⁻ xt⁻ − B (xt⁺ − xt⁻) − (xt⁺ − xt⁻)ᵀ Bᵀ
28. DMU Algorithm
If we define
ai⁺ = (2A⁺xt⁺)i,  ai⁻ = (2A⁺xt⁻)i
bi⁺ = −(2Axt⁻)i − 2(Bᵀ)i,  bi⁻ = −(2Axt⁺)i + 2(Bᵀ)i
ci⁺ = (2A⁻xt⁺)i,  ci⁻ = (2A⁻xt⁻)i
New update rules
(xt⁺)i ← [(−bi⁺ + √((bi⁺)² + 4 ai⁺ ci⁺)) / (2 ai⁺)] · (xt⁺)i
(xt⁻)i ← [(−bi⁻ + √((bi⁻)² + 4 ai⁻ ci⁻)) / (2 ai⁻)] · (xt⁻)i
29. DMU Algorithm
Table: The Proposed Dipole Multiplicative Update Algorithm for STRE
1: Set constants λs, λt and w. Let
A = A⁺ − A⁻ = RᵀR + λs² Δsᵀ Δs + 2 λt² w I
B = ytᵀ R + 2 λt² Σ_{τ=t−w/2}^{t−1} xτᵀ + 2 λt² Σ_{τ=t+1}^{t+w/2} xτᵀ
2: Initialize {xt⁺} and {xt⁻} as positive random matrices.
3: Repeat
4: for t = 1, ..., T do
(xt⁺)i ← [((Axt⁻)i + Bi + √(((Axt⁻)i + Bi)² + 4 (A⁺xt⁺)i (A⁻xt⁺)i)) / (2A⁺xt⁺)i] · (xt⁺)i
(xt⁻)i ← [((Axt⁺)i − Bi + √(((Axt⁺)i − Bi)² + 4 (A⁺xt⁻)i (A⁻xt⁻)i)) / (2A⁺xt⁻)i] · (xt⁻)i
5: end for
6: until convergence
7: Solution: x̂t = xt⁺ − xt⁻
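A single-frame sketch of the DMU iteration (my simplification: λt = 0 and Δs = I, so A = RᵀR + λs²I; an illustration of the update rule, not the authors' full implementation):

```python
import numpy as np

def dmu_solve(R, y, lam_s=1.0, n_iter=3000, eps=1e-12):
    """Dipole multiplicative update for one frame:
    minimize ||y - R x||^2 + lam_s^2 ||x||^2 with x = xp - xm, xp, xm >= 0.
    """
    n = R.shape[1]
    A = R.T @ R + lam_s**2 * np.eye(n)              # A = A+ - A-
    Ap, Am = np.maximum(A, 0.0), np.maximum(-A, 0.0)
    b = R.T @ y                                     # B^T as a vector
    rng = np.random.default_rng(0)
    xp, xm = rng.random(n) + 0.5, rng.random(n) + 0.5
    for _ in range(n_iter):
        Axm = A @ xm
        xp *= (Axm + b + np.sqrt((Axm + b) ** 2 + 4 * (Ap @ xp) * (Am @ xp))) \
              / (2 * (Ap @ xp) + eps)
        Axp = A @ xp
        xm *= (Axp - b + np.sqrt((Axp - b) ** 2 + 4 * (Ap @ xm) * (Am @ xm))) \
              / (2 * (Ap @ xm) + eps)
    return xp - xm

rng = np.random.default_rng(4)
R = rng.standard_normal((8, 5))
x_true = rng.standard_normal(5)      # mixed-sign ground truth
y = R @ x_true
x_hat = dmu_solve(R, y, lam_s=0.5)
```

Both factors are nonnegative, so xp and xm stay nonnegative throughout; at a fixed point the factors equal 1, which is equivalent to the stationarity condition Ax = Rᵀy.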
30. Experiments - Simulation in a Two-sphere Geometry
Dynamic distributions of electric potentials on the inner surface x(s, t) and outer surface y(s, t) are calculated analytically:
x(s, t) = (1/(4πσ)) · (p(t) · rH(s) / (rB² rH)) · [2 rH/rB + (rB/rH)²]
y(s, t) = (3/(4πσ)) · (p(t) · rB(s) / rB³)
Gaussian noise ∼ N(0, σε²) is added to y(s, t)
Figure: (a) Parameters of the two-sphere geometry; (b) Each sphere is triangulated with 184 nodes and 364 triangles
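The two analytic formulas can be evaluated directly. A sketch for a single surface point (the function name and the unit-conductivity test case are mine; p is the dipole moment, rH(s) and rB(s) are the position vectors of points on the inner and outer spheres):

```python
import numpy as np

def sphere_potentials(p, rH_vec, rB_vec, sigma=1.0):
    """Analytic inner- and outer-sphere potentials of a central dipole p.

    Implements the slide's formulas:
    x = (1/(4 pi sigma)) * (p . rH_vec) / (rB^2 rH) * (2 rH/rB + (rB/rH)^2)
    y = (3/(4 pi sigma)) * (p . rB_vec) / rB^3
    """
    rH = np.linalg.norm(rH_vec)
    rB = np.linalg.norm(rB_vec)
    x = (p @ rH_vec) / (4 * np.pi * sigma * rB**2 * rH) \
        * (2 * rH / rB + (rB / rH) ** 2)
    y = 3 * (p @ rB_vec) / (4 * np.pi * sigma * rB**3)
    return x, y

# Dipole along z, evaluated at the "north pole" of each sphere.
p = np.array([0.0, 0.0, 1.0])
x_pot, y_pot = sphere_potentials(p, np.array([0.0, 0.0, 0.5]),
                                 np.array([0.0, 0.0, 1.0]))
```

Both potentials are linear in p(t), so a time-varying dipole directly yields the dynamic maps x(s, t) and y(s, t).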
31. Results
Figure: (a) The comparisons of relative error (RE) between the proposed STRE model and other regularization methods (i.e., Tikhonov zero-order, Tikhonov first-order and L1 first-order methods) in the two-sphere geometry when there is no noise on the potential map y(s, t) of the outer sphere; (b) The comparisons of RE for different noise levels σε = 0.1, 0.2, 0.3, 0.4, 0.5 on the potential map y(s, t) of the outer sphere.
32. Results
Dynamic distribution of electric potentials on the inner sphere x(s, t)
33. Results
Potential mapping on the inner sphere x(s, t), t = 150 ms
(a) Reference
(b) Tikh_0th RE=0.1475; Tikh_1st RE=0.1026; L1_1st RE=0.1025; STRE RE=0.006
(c) Tikh_0th RE=0.208; Tikh_1st RE=0.1528; L1_1st RE=0.1569; STRE RE=0.0769
34. Experiments - Realistic Torso-heart Geometry
Heart surface - 257 nodes and 510 triangles
Body surface - 771 nodes and 1538 triangles
y(s, t) - body area sensor network
Data uncertainty - Gaussian noise ∼ N(0, σε²)
Five different noise levels: σε = 0.005, 0.01, 0.05, 0.1, 0.2
Figure: (a) front and (b) back views of the torso-heart geometry
35. Introduction Methodology Experiments References
Results
σε
(a) (b)
Tikh-0th Tikh-1st L1-1st STRE
RE
0
0.05
0.1
0.15
0.2
0.25
0.3
0 0.05 0.1 0.15 0.2
RE
0.5
1
1.5
2
2.5
3
Tikh-0th
Tikh-1st
L1-1st
STRE
Figure: (a) Comparison of relative error (RE) between the proposed STRE model
and other regularization methods (i.e., zero-order Tikhonov, first-order Tikhonov,
and first-order L1 methods) in the realistic torso-heart geometry when there is no
extra noise on the potential map y(s, t) of the body surface; (b) comparison of RE
for noise levels σ = 0.005, 0.01, 0.05, 0.1, 0.2 on the potential map y(s, t).
Results
Dynamic distribution of electric potentials on the heart surface x(s, t)
Results
Potential mapping on the heart surface x(s, t), t = 50 ms
[Images (a)-(c): reference map and reconstructions.]
Tikh_0th RE=0.2488, Tikh_1st RE=0.2839, L1_1st RE=0.2735, STRE RE=0.0997
Tikh_0th RE=0.557, Tikh_1st RE=0.972, L1_1st RE=1.248, STRE RE=0.2386
Summary
Challenges
Spatiotemporal data (predictor and response variables)
Complex-structured system
Ill-conditioned system
Methodology: Physics-driven spatiotemporal regularization
Parameter matrix R - physics-based interrelationship between x(s, t) and y(s, t)
Spatial regularity - handles approximation errors via spatial correlation
Temporal regularity - improves model robustness to measurement noise
Algorithm - generalized dipole multiplicative update method
Significance
A novel approach to solve ECG inverse problem
A new dipole multiplicative update algorithm for generalized
spatiotemporal regularization
Broad applications: thermal effects in additive manufacturing
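For context, the Tikhonov baselines that STRE is compared against throughout the experiments can be sketched as a regularized least-squares solve; the dimensions, λ value, and random data below are illustrative assumptions, and this sketch does not implement STRE's spatiotemporal coupling or the dipole multiplicative update itself:

```python
import numpy as np

# Tikhonov baseline for the inverse problem y = R x:
#   min_x ||y - R x||^2 + lam * ||L x||^2
# where R is the physics-based transfer (parameter) matrix and L is the
# identity (zero-order) or a first-order difference operator.
def tikhonov_solve(R, y, L, lam):
    A = R.T @ R + lam * (L.T @ L)   # normal equations with regularization
    return np.linalg.solve(A, R.T @ y)

rng = np.random.default_rng(1)
R = rng.standard_normal((30, 20))      # body-surface x heart-surface transfer
x_true = rng.standard_normal((20, 5))  # heart potentials over 5 time frames
y = R @ x_true                         # noise-free body-surface observations

L0 = np.eye(20)                        # zero-order Tikhonov penalty
x0 = tikhonov_solve(R, y, L0, 1e-6)
```

Each time frame is solved independently here, which is exactly the limitation the spatiotemporal regularization in STRE is designed to overcome.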
References
B. Yao, R. Zhu, and H. Yang*, "Characterizing the Location and Extent of Myocardial
Infarctions with Inverse ECG Modeling and Spatiotemporal Regularization," IEEE Journal
of Biomedical and Health Informatics, pp. 1-11, 2017. DOI:
10.1109/JBHI.2017.2768534
B. Yao and H. Yang*, "Physics-driven spatiotemporal regularization for high-dimensional
predictive modeling," Scientific Reports 6, 39012, 2016.
www.nature.com/articles/srep39012
B. Yao and H. Yang*, "Mesh Resolution Impacts the Accuracy of Inverse and Forward
ECG Problems," Proceedings of the 2016 IEEE Engineering in Medicine and Biology Society
Conference (EMBC), August 16-20, 2016, Orlando, FL, United States. DOI:
10.1109/EMBC.2016.7591615
Acknowledgements
NSF CAREER Award
NSF CMMI-1617148
NSF CMMI-1646660
NSF CMMI-1619648
NSF IIP-1447289
NSF IOS-1146882
James A. Haley Veterans’ Hospital
Contact Information
Hui Yang, PhD
Associate Professor
Complex Systems Monitoring Modeling and Control Laboratory
Harold and Inge Marcus Department of Industrial and Manufacturing
Engineering
The Pennsylvania State University
Tel: (814) 865-7397
Fax: (814) 863-4745
Email: [email protected]
Web: http://www.personal.psu.edu/huy25/