Identification of unknown parameters and prediction with hierarchical matrice... - Alexander Litvinenko
We compare four numerical methods for the prediction of missing values in four different datasets.
These methods are 1) hierarchical maximum likelihood estimation (H-MLE) and three machine learning (ML) methods: 2) k-nearest neighbors (kNN), 3) random forest, and 4) deep neural network (DNN).
Among the ML methods, the best results (for the considered datasets) were obtained by kNN with three (or seven) neighbors.
On one dataset, the H-MLE method showed a smaller error than kNN, whereas on another, kNN was better.
The H-MLE method requires a lot of linear algebra computations and works well on almost all datasets. Its result can be improved by taking a smaller threshold and more accurate hierarchical matrix arithmetic. To our surprise, the well-known kNN method produced results similar to H-MLE and ran much faster.
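As a rough illustration of the kNN prediction compared here, a minimal sketch in pure Python (the data, distance metric, and k are invented for the example; the study's actual features are not specified in this summary):

```python
import math

def knn_predict(train, query, k=3):
    """Predict a missing value as the mean of the k nearest training
    points by Euclidean distance. 'train' is a list of (coords, value)
    pairs; 'query' is a coordinate tuple."""
    dists = sorted(
        (math.dist(coords, query), value) for coords, value in train
    )
    neighbors = dists[:k]
    return sum(v for _, v in neighbors) / len(neighbors)

# Hypothetical example: predict the value at (0.5, 0.5) from four corner samples.
train = [((0, 0), 1.0), ((1, 0), 2.0), ((0, 1), 2.0), ((1, 1), 3.0)]
print(knn_predict(train, (0.5, 0.5), k=3))
```

The simplicity of this method (no training phase, one hyperparameter) is consistent with the observation that kNN ran much faster than H-MLE.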
Why mathematics is easy to understand, easy to do, and easy to prove in base 60. The number theory behind the creation story and the beginning of time.
Bellman-Ford-Moore Algorithm and Dijkstra's Algorithm - Fulvio Corno
Teaching material for the course of "Tecniche di Programmazione" at Politecnico di Torino in year 2012/2013. More information: http://bit.ly/tecn-progr
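As an illustration of one of the two algorithms the slides cover, a minimal Dijkstra sketch in Python (the graph and names are invented for the example, not taken from the course material):

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from 'source' in a graph given as
    {node: [(neighbor, weight), ...]} with non-negative weights."""
    dist = {source: 0}
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry; a shorter path was already found
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

g = {"A": [("B", 1), ("C", 4)], "B": [("C", 2)], "C": []}
print(dijkstra(g, "A"))  # {'A': 0, 'B': 1, 'C': 3}
```

Unlike Bellman-Ford-Moore, this version requires non-negative edge weights.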
This document provides information about support vector machines (SVM). It introduces Vladimir Vapnik, one of the main developers of SVM. It then explains the concepts of hard margin SVM, including finding the optimal hyperplane using Lagrange multipliers and the Karush-Kuhn-Tucker conditions. It also discusses kernels, soft margin SVM, and provides an example of applying hard margin SVM to classify 3 data points.
This document contains 30 math problems involving solving equations and inequalities for real numbers. The problems cover topics like solving single variable equations, systems of equations, and inequalities. They range in complexity from basic to more advanced problems involving multiple steps, fractions, exponents, and algebraic manipulation of terms.
This document contains 50 math word problems involving solving equations and inequalities for real numbers. The problems cover topics like solving linear, quadratic, and rational equations; comparing expressions using inequalities; and solving systems of equations. They are presented in Vietnamese without translations.
This document contains examples of operations with powers and radicals. It includes:
- Notes on the basic operations and properties of powers and radicals.
- 23 example problems working through simplifying expressions with powers and radicals. The examples show applying order of operations, properties of exponents, and simplifying radicals.
- The goal is to practice simplifying complex expressions involving multiple operations with powers and radicals down to their simplest form through step-by-step working.
This document appears to be the table of contents and problems from Chapter 0 of a mathematics textbook. The table of contents lists 17 chapters and their corresponding page numbers. The problems cover a range of algebra topics including integers, rational numbers, properties of operations, solving equations, and rational expressions. There are over 70 problems presented without solutions for students to work through.
This document contains a 25 question Additional Mathematics trial exam for the SPM 2013 examination. The exam consists of multiple choice and short answer questions testing concepts in arithmetic progressions, geometric progressions, functions, trigonometry, probability, and statistics. The student is expected to show working to receive partial credit on questions.
This document provides 30 equations and inequalities and asks the reader to solve them on the set of real numbers. It uses variables like x, square roots, exponents, and basic arithmetic operations. The problems range from simple one-variable equations to more complex expressions with multiple variables. The goal is to calculate the value(s) of the variable(s) that satisfy each equation or inequality.
The document discusses factoring quadratic sequences into linear factors. It provides examples of factoring sequences into two linear sequences and deriving the formulas for each. Elimination strategies are discussed to reduce the number of attempts needed to find the correct factors, by rejecting factors that don't produce continuously increasing or decreasing sequences. Practice problems are included to have the reader practice applying the techniques.
IRJET - On Binary Quadratic Equation 2x^2 - 3y^2 = -4 - IRJET Journal
The document analyzes the binary quadratic Diophantine equation 2x^2 - 3y^2 = -4, which represents a hyperbola. It finds integer solutions for this equation and establishes recurrence relations among the solutions. It then uses the solutions to this equation to find solutions for other choices of hyperbolas, parabolas, and a Pythagorean triangle. Several interesting properties of the solutions are presented, including that they are all even integers and that certain expressions of the solutions represent nasty numbers or cubical integers.
Solutions manual for Business Math 10th edition by Cleaves - CooKi5472
Full clear download(no error formatting) at: https://goo.gl/sbq3Di
business math 10th edition pdf
business math brief 10th edition
business mathematics cleaves
business math 9th edition pdf
answers to business math questions
business math book answers
CAPE PURE MATHEMATICS UNIT 2 MODULE 1 PRACTICE QUESTIONS - Carlon Baird
dy/dx = (x - 3y)/(6x - 4)
The stationary points on the curve C occur when tan(x) = 2.
The equation of the tangent to C at the point where x=0 is y = 2ex.
IRJET - On the Binary Quadratic Diophantine Equation y^2 = 272x^2 + 16 - IRJET Journal
This document presents an analysis of the binary quadratic Diophantine equation y^2 = 272x^2 + 16. It finds the integral solutions to this equation and explores some properties among the solutions. Several recurrence relations satisfied by the solutions are provided. It is shown that certain expressions involving the solutions represent cubical and biquadratic integers. Pythagorean triangles corresponding to some of the integral solutions are also obtained.
Boas mathematical methods in the physical sciences 3ed instructors solutions... - Praveen Prashant
This document contains mathematical expressions, series expansions, and convergence tests. It provides series expansions for various functions around points using Taylor series. It also tests the convergence of infinite series using tests like the limit comparison test, ratio test, and integral test. Several problems provide the interval of convergence for Taylor series expansions of different functions.
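As a quick numerical illustration of one of the convergence tests mentioned (the ratio test), with a series and helper name invented for the example rather than taken from the manual:

```python
from math import factorial

def ratio_test_limit(a, n=50):
    """Numerically estimate lim |a(n+1)/a(n)| by evaluating it at a
    large index n; a value below 1 suggests the series converges."""
    return a(n + 1) / a(n)

# For the series sum 1/n!, the ratio a(n+1)/a(n) = 1/(n+1).
r = ratio_test_limit(lambda n: 1 / factorial(n))
print(r)  # ratio ~ 1/51 < 1, so the series converges
```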
This document contains solutions to exercises from an intermediate algebra textbook chapter on equations and inequalities in two variables and functions. It provides worked out solutions showing the step-by-step process for solving various types of problems involving linear equations, finding slopes of lines, parallel and perpendicular lines, and word problems involving rates of change. The document demonstrates how to graph linear equations by finding intercepts and plotting points.
How To Do KS2 Maths SATs Paper A Multiplication Questions (Part 1) - Chris James
The document provides guidance on multiplication questions that may be asked in KS2 Maths SAT Paper A exams. It explains that students may be asked to calculate multiplication sums, and provides examples such as 635 x 6 or 603 x 57. It then provides a series of practice multiplication calculations for students to work through.
CAPE PURE MATHEMATICS UNIT 2 MODULE 2 PRACTICE QUESTIONS - Carlon Baird
This document contains practice questions on sequences, series, and approximations from a CAPE Pure Mathematics unit. Question 1 covers finding terms of sequences defined recursively and evaluating finite sums. Question 2 involves finding expressions for terms of sequences defined recursively and finding their sums. Later questions cover topics like proving identities using induction, evaluating infinite series, approximating functions using Taylor series, and finding roots of equations numerically. The questions provide worked examples of key concepts in sequences, series, and approximations.
How To Do KS2 Maths SATs Paper A Multiplication Questions (Part 2) - Chris James
The document provides examples and guidance for solving single-step word problems using multiplication on a KS2 Maths SAT Paper A. It explains that these types of questions will ask the reader to find the total or amount by multiplying two given numbers. Several example word problems are shown step-by-step, such as finding the total number of eggs if each box holds 6 eggs and there are 26 boxes. The document concludes by providing additional multiplication practice problems for the reader to try.
This chapter introduces complex numbers. It defines a complex number as having the form x + iy, where x and y are real numbers. It describes how to represent complex numbers graphically on an Argand diagram and defines the modulus and argument of a complex number. It explains how to perform arithmetic operations like addition, subtraction, multiplication and division on complex numbers in both Cartesian (x + iy) and polar forms. It also introduces concepts like the conjugate of a complex number and using real and imaginary parts to solve equations. The chapter aims to explain the basic properties and manipulations of complex numbers.
This document contains 15 math word problems involving exponents and powers from a Class VIII math assignment. The problems include evaluating expressions with exponents, expressing numbers with positive exponents, simplifying expressions using exponent rules, solving equations with exponents, writing numbers in standard form, and performing other operations involving exponents.
The document provides examples of calculating angles between lines and planes in 3 dimensions. It includes calculating angles between a line and plane using tangent, and between two planes. It also provides practice questions involving finding angles between lines and planes given dimensional information about the lines and planes.
Identification of unknown parameters and prediction of missing values. Compar... - Alexander Litvinenko
H-matrix approximation of large Matérn covariance matrices, Gaussian log-likelihoods.
Identifying unknown parameters and making predictions
Comparison with machine learning methods.
kNN is easy to implement and shows promising results.
Application of Parallel Hierarchical Matrices in Spatial Statistics and Param... - Alexander Litvinenko
Part 1: Parallel H-matrices in spatial statistics
1. Motivation: improve statistical model
2. Tools: Hierarchical matrices [Hackbusch 1999]
3. Matérn covariance function and joint Gaussian likelihood
4. Identification of unknown parameters via maximizing the Gaussian log-likelihood
5. Implementation with HLIBPro.
This document contains solutions to various equations and inequalities involving radicals on the set of real numbers. It is divided into 6 sections, with multiple problems provided in each section ranging from simple single-term radical equations to more complex multi-term radical equations and inequalities. The document provides the step-by-step workings for solving each problem.
Perform good laboratory practices
This document contains 4 math exercises involving matrix operations and linear algebra calculations. Exercise 1 involves multiplying and adding matrices. Exercise 2 calculates determinants and inverses of matrices. Exercise 3 solves systems of linear equations. Exercise 4 shows that the determinant of AB is equal to the determinant of A times the determinant of B.
We propose a strategy for the problem of Portfolio Diversification in Financial Economics. This task can be seen as a clustering process performed with the exploitation of Non-Negative Matrix Factorization techniques.
Fast and accurate metrics. Is it actually possible? - Bogdan Storozhuk
This document discusses optimizations that can be made to sliding window reservoir sampling algorithms for metrics collection. It begins with an introduction and plan to analyze the "garbage problem" with these algorithms and provide step-by-step optimizations. Various approaches are presented and benchmarked, with the goal of reducing memory usage and garbage collection pauses. The final approach shown achieves better performance than earlier versions.
The document discusses an image scaler hardware algorithm and proposes improvements. It analyzes issues with the current approach: no burst support and non-continuous addressing. It suggests adding burst-mode support to increase efficiency, allow easier idle states, and reduce bus access time. This would help pipeline the computation and, with buffering, decrease hardware resource needs.
This document contains notes on compound inequalities in algebra. It explains that for inequalities combined with "and", the values of x that satisfy both inequalities are graphed as the intersection. For inequalities combined with "or", the values of x that satisfy either inequality are graphed. Several examples of compound inequalities are given and practiced.
This document discusses statistical process control and control charts. It defines the goals of control charts as collecting and visually presenting data to see when trends or out-of-control points occur. Process control charts graph sample data over time and show the process average and upper and lower control limits. Attribute control charts indicate whether points are in or out of tolerance, while variables charts measure attributes like length, weight or temperature over time. Examples are provided to illustrate p-charts, R-charts and X-bar charts using hotel luggage delivery time data.
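As a simplified sketch of computing X-bar chart center line and control limits (this uses 3-sigma of the subgroup means; textbook charts, including the hotel example mentioned above, often use range-based A2 table constants instead, and the data here is invented):

```python
import statistics

def xbar_limits(samples):
    """Return (LCL, center line, UCL) for an X-bar control chart.
    'samples' is a list of subgroups (lists of measurements).
    The limits are grand mean +/- 3 * sigma of the subgroup means."""
    means = [statistics.mean(s) for s in samples]
    grand = statistics.mean(means)          # center line (process average)
    sigma = statistics.pstdev(means)        # spread of the subgroup means
    return grand - 3 * sigma, grand, grand + 3 * sigma

# Hypothetical subgroups of three measurements each.
print(xbar_limits([[1, 2, 3], [2, 3, 4], [3, 4, 5]]))
```

A plotted point falling outside (LCL, UCL) signals an out-of-control condition worth investigating.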
Approximation of large Matérn covariance functions in the H-matrix format. We computed relative errors in the spectral and Frobenius norms, as well as the Kullback-Leibler divergence. Storage and computational costs are drastically reduced.
The document discusses binary heaps and algorithms for building, inserting, and deleting elements from heaps. It begins by defining min-heaps and max-heaps, and the heap property that parent nodes are smaller than child nodes in min-heaps and larger in max-heaps. It then covers algorithms for inserting new elements by placing them at the bottom and bubbling up, deleting the minimum/maximum element by replacing it with the last element and bubbling down. The document concludes by explaining that building a heap from a list of elements can be done in linear time by placing elements in an array and percolating them down level-by-level.
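The insert (place at the bottom, bubble up) and delete-min (replace the root with the last element, bubble down) operations described above can be sketched as a minimal array-backed min-heap:

```python
class MinHeap:
    def __init__(self):
        self.a = []  # array representation: children of i at 2i+1, 2i+2

    def push(self, x):
        # Place the new element at the bottom, then bubble it up
        # while it is smaller than its parent.
        self.a.append(x)
        i = len(self.a) - 1
        while i > 0 and self.a[(i - 1) // 2] > self.a[i]:
            p = (i - 1) // 2
            self.a[i], self.a[p] = self.a[p], self.a[i]
            i = p

    def pop_min(self):
        # Swap the root with the last element, remove it, then
        # bubble the new root down to restore the heap property.
        a = self.a
        a[0], a[-1] = a[-1], a[0]
        m = a.pop()
        i, n = 0, len(a)
        while True:
            l, r = 2 * i + 1, 2 * i + 2
            small = i
            if l < n and a[l] < a[small]:
                small = l
            if r < n and a[r] < a[small]:
                small = r
            if small == i:
                break
            a[i], a[small] = a[small], a[i]
            i = small
        return m

h = MinHeap()
for x in [5, 1, 4, 2]:
    h.push(x)
print([h.pop_min() for _ in range(4)])  # [1, 2, 4, 5]
```

Building a heap by pushing n elements is O(n log n); the linear-time build mentioned above instead percolates elements down level by level.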
Numerical Methods: curve fitting and interpolation - Nikolai Priezjev
This document discusses curve fitting and interpolation techniques. It covers linear regression using the least squares method to fit data to a straight line model. It also discusses fitting data to other functions like exponential, logarithmic and polynomial models. For polynomial regression, a second order polynomial is presented which requires solving a system of equations to determine the coefficients that minimize the residual errors between measured and modeled data. An example demonstrates applying these methods to find the coefficients for a straight line and second order polynomial fit to sample data sets.
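The straight-line least-squares fit described above reduces to the normal equations for the coefficients; a minimal sketch (the sample data is invented for the example):

```python
def fit_line(xs, ys):
    """Least-squares fit of y = a + b*x via the normal equations,
    minimizing the sum of squared residuals."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope
    a = (sy - b * sx) / n                          # intercept
    return a, b

# Data lying exactly on y = 1 + 2x recovers those coefficients.
print(fit_line([0, 1, 2, 3], [1, 3, 5, 7]))  # (1.0, 2.0)
```

The second-order polynomial case mentioned above works the same way but yields a 3x3 linear system for its three coefficients.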
This document contains an instructor's resource manual for a chapter on preliminaries in mathematics. It includes:
1. A concepts review section covering rational numbers and dense sets.
2. A problem set with 56 problems involving rational numbers, fractions, decimals, and approximations of irrational numbers.
3. Hints and solutions for working through the problems.
The document discusses statistical forecasting and Monte Carlo simulation techniques for project estimation. It provides examples of using historical lead time and work in progress data to model distributions and sample values to simulate project progress over time, generating a range of potential completion dates. The examples demonstrate how this statistical approach can help predict a project timeline compared to relying only on expert opinions.
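A sketch of the sampling idea described above: draw historical per-item lead times with replacement to build a distribution of total completion times (the data, backlog size, and reported percentiles are illustrative, not from the document):

```python
import random

def simulate_completion(lead_times, backlog, trials=10000, seed=1):
    """Monte Carlo forecast: repeatedly sample historical per-item
    lead times (days) with replacement for 'backlog' items, and
    return the 50th and 85th percentile total durations."""
    random.seed(seed)  # fixed seed for reproducibility
    totals = sorted(
        sum(random.choice(lead_times) for _ in range(backlog))
        for _ in range(trials)
    )
    return totals[trials // 2], totals[int(trials * 0.85)]

# Hypothetical history of per-item lead times, 10 items remaining.
p50, p85 = simulate_completion([2, 3, 5], 10)
print(p50, p85)
```

Reporting a range of percentiles rather than a single date is what distinguishes this approach from a point estimate based on expert opinion.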
The document contains a practice problem involving systems of linear inequalities. There are 21 problems where the learner is asked to write systems of inequalities to model word problems, graph the systems to visualize the solution sets, and provide possible solutions. The document focuses on representing and solving word problems using systems of linear inequalities.
This document provides teaching materials on multiplication for grade 2 students. It includes the competencies, indicators, examples and explanations of multiplication as repeated addition, and properties like the commutative and associative properties. It also contains practice problems for students involving word problems with multiplication. The goal is for students to understand multiplication concepts and be able to solve problems with numbers up to two digits.
This document provides notes and examples on operations with powers and radicals. It includes:
1) Ten rules for operations with powers such as multiplying and dividing powers.
2) Four rules for operations with radicals such as rationalizing the denominator.
3) Twenty-four math problems worked through step-by-step as examples of applying the power and radical rules. The examples involve simplifying expressions and rationalizing denominators.
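As a representative instance of the radical rules mentioned above, rationalizing a denominator multiplies by the conjugate:

```latex
\frac{1}{\sqrt{5}-\sqrt{3}}
  = \frac{1}{\sqrt{5}-\sqrt{3}}\cdot\frac{\sqrt{5}+\sqrt{3}}{\sqrt{5}+\sqrt{3}}
  = \frac{\sqrt{5}+\sqrt{3}}{5-3}
  = \frac{\sqrt{5}+\sqrt{3}}{2}
```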
This is the first topic in integral calculus: area approximation, used to estimate the area under a curve. The topic is covered in AP Calculus AB. It covers the three rectangular approximation methods (left, right, and midpoint rectangles).
Solve Sudoku using Constraint Propagation- Search and Genetic AlgorithmAi Sha
The document compares two algorithms for solving Sudoku puzzles: a genetic algorithm (GA) and a constraint propagation-search algorithm. It tests both algorithms on 30 random Sudoku puzzles. The constraint propagation-search algorithm solved all puzzles in an average of 0.01 seconds, while the GA took an average of 3.94 seconds to solve each puzzle, demonstrating that the constraint propagation-search algorithm is much more efficient for solving Sudoku puzzles.
Poster to be presented at Stochastic Numerics and Statistical Learning: Theory and Applications Workshop 2024, Kaust, Saudi Arabia, https://cemse.kaust.edu.sa/stochnum/events/event/snsl-workshop-2024.
In this work we have considered a setting that mimics the Henry problem \cite{Simpson2003,Simpson04_Henry}, modeling seawater intrusion into a 2D coastal aquifer. The pure water recharge from the ``land side'' resists the salinisation of the aquifer due to the influx of saline water through the ``sea side'', thereby achieving some equilibrium in the salt concentration. In our setting, following \cite{GRILLO2010}, we consider a fracture on the sea side that significantly increases the permeability of the porous medium.
The flow and transport essentially depend on the geological parameters of the porous medium, including the fracture. We investigated the effects of various uncertainties on saltwater intrusion. We assumed uncertainties in the fracture width, the porosity of the bulk medium, its permeability and the pure water recharge from the land side. The porosity and permeability were modeled by random fields, the recharge by a random but periodic intensity and the thickness by a random variable. We calculated the mean and variance of the salt mass fraction, which is also uncertain.
The main question we investigated in this work was how well the MLMC method can be used to compute statistics of different QoIs. We found that the answer depends on the choice of the QoI. First, not every QoI requires a hierarchy of meshes and MLMC. Second, MLMC requires stable convergence rates for $\EXP{g_{\ell} - g_{\ell-1}}$ and $\Var{g_{\ell} - g_{\ell-1}}$. These rates should be independent of $\ell$. If these convergence rates vary for different $\ell$, then it will be hard to estimate $L$ and $m_{\ell}$, and MLMC will either not work or be suboptimal. We were not able to get stable convergence rates for all levels $\ell=1,\ldots,5$ when the QoI was an integral as in \eqref{eq:integral_box}. We found that the rate $\alpha$ was different for $\ell=1,\ldots,4$ than for $\ell=5$. Further investigation is needed to find the reason for this. Another difficulty is the dependence on time, i.e. the number of levels $L$ and the number of samples $m_{\ell}$ depend on $t$. At the beginning the variability is small, then it increases, and after the process of mixing salt and fresh water has stopped, the variance decreases again.
The number of random samples required at each level was estimated by calculating the decay of the variances and the computational cost for each level. These estimates depend on the minimisation function in the MLMC algorithm.
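The sample-allocation rule alluded to here is, in the standard MLMC cost analysis, $m_\ell \propto \sqrt{V_\ell/C_\ell}$, scaled to meet a target tolerance. A sketch with hypothetical level variances and costs (not the values from this work):

```python
import math

def mlmc_samples(variances, costs, eps):
    """Standard MLMC sample allocation: minimize total cost subject to a
    sampling-error tolerance eps, giving m_l ~ eps^-2 * sqrt(V_l/C_l) * sum_k sqrt(V_k*C_k)."""
    s = sum(math.sqrt(v * c) for v, c in zip(variances, costs))
    return [max(1, math.ceil(eps**-2 * math.sqrt(v / c) * s))
            for v, c in zip(variances, costs)]

# Hypothetical level statistics: variances decay, costs grow with refinement.
V = [1e-2, 2.5e-3, 6e-4, 1.5e-4]
C = [1.0, 4.0, 16.0, 64.0]
m = mlmc_samples(V, C, eps=1e-2)   # many cheap coarse samples, few fine ones
```

This reproduces the qualitative picture in the text: most samples are taken on the coarse, cheap levels.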
To achieve the efficiency of the MLMC approach presented in this work, it is essential that the complexity of the numerical solution of each random realisation is proportional to the number of grid vertices on the grid levels.
We investigated the applicability and efficiency of the MLMC approach to the Henry-like problem with uncertain porosity, permeability and recharge. These uncertain parameters were modelled by random fields with three independent random variables. Permeability is a function of porosity. Both functions are time-dependent, have multi-scale behaviour and are defined for two layers. The numerical solution for each random realisation was obtained using the well-known ug4 parallel multigrid solver. The number of random samples required at each level was estimated by calculating the decay of the variances and the computational cost for each level.
The MLMC method was used to compute the expected value and variance of several QoIs, such as the solution at a few preselected points $(t,\bx)$, the solution integrated over a small subdomain, and the time evolution of the freshwater integral. We have found that some QoIs require only 2-3 mesh levels and samples from finer meshes would not significantly improve the result. Other QoIs require more grid levels.
1. Investigated efficiency of MLMC for Henry problem with
uncertain porosity, permeability, and recharge.
2. Uncertainties are modeled by random fields.
3. MLMC can be much faster than MC: up to 3200 times faster!
4. The time dependence is challenging.
Remarks:
1. Check if MLMC is needed.
2. The optimal number of samples depends on the point $(t,\bx)$.
3. An advanced MLMC may give better estimates of $L$ and $m_{\ell}$.
Density Driven Groundwater Flow with Uncertain Porosity and PermeabilityAlexander Litvinenko
In this work, we solved the density driven groundwater flow problem with uncertain porosity and permeability. An accurate solution of this time-dependent and non-linear problem is impossible because of the presence of natural uncertainties in the reservoir such as porosity and permeability.
Therefore, we estimated the mean value and the variance of the solution, as well as the propagation of uncertainties from the random input parameters to the solution.
We started by defining the Elder-like problem. Then we described the multi-variate polynomial approximation (\gPC) approach and used it to estimate the required statistics of the mass fraction.
Utilizing the \gPC method allowed us
to reduce the computational cost compared to the classical quasi Monte Carlo method.
\gPC assumes that the output function $\sol(t,\bx,\btheta)$ is square-integrable and smooth w.r.t.\ the uncertain input variables $\btheta$.
Many factors, such as non-linearity, multiple solutions, multiple stationary states, time dependence and complicated solvers, make the investigation of the convergence of the \gPC method a non-trivial task.
We used an easy-to-implement, but only sub-optimal, \gPC technique to quantify the uncertainty. For example, it is known that increasing the degree of global polynomials (Hermite, Lagrange and similar) leads to Runge's phenomenon. Here, local polynomials, splines or their mixtures would probably be better. Additionally, we used an easy-to-parallelise quadrature rule, which was also only suboptimal. For instance, an adaptive choice of sparse grid (or collocation) points \cite{ConradMarzouk13,nobile-sg-mc-2015,Sudret_sparsePCE,CONSTANTINE12,crestaux2009polynomial} would be better, but we were limited by the usage of parallel methods; adaptive quadrature rules are not (so well) parallelisable. In conclusion, we can report that: a) we developed a highly parallel method to quantify uncertainty in the Elder-like problem; b) with the \gPC of degree 4 we can achieve similar results as with the \QMC method.
In the numerical section we considered two different aquifers - a solid parallelepiped and a solid elliptic cylinder. One of our goals was to see how the domain geometry influences the formation, the number and the shape of fingers.
Since the considered problem is nonlinear,
a high variance in the porosity may result in totally different solutions; for instance, the number of fingers, their intensity and shape, the propagation time, and the velocity may vary considerably.
The number of cells in the presented experiments varied from $241{,}152$ to $15{,}433{,}728$ for the cylindrical domain and from $524{,}288$ to $4{,}194{,}304$ for the parallelepiped. The maximal number of parallel processing units was $600\times 32$, where $600$ is the number of parallel nodes and $32$ is the number of computing cores on each node. The total computing time varied from 2 hours for the coarse mesh to 24 hours for the finest mesh.
Saltwater intrusion occurs when sea levels rise and saltwater moves onto the land. Usually, this occurs during storms, high tides, droughts, or when saltwater penetrates freshwater aquifers and raises the groundwater table. Since groundwater is an essential nutrition and irrigation resource, its salinization may lead to catastrophic consequences. Many acres of farmland may be lost because they can become too wet or salty to grow crops. Therefore, accurate modeling of different scenarios of saline flow is essential to help farmers and researchers develop strategies to improve the soil quality and decrease saltwater intrusion effects.
Saline flow is density-driven and described by a system of time-dependent nonlinear partial differential equations (PDEs). It features convection dominance and can demonstrate very complicated behavior.
As a specific model, we consider a Henry-like problem with uncertain permeability and porosity.
These parameters may strongly affect the flow and transport of salt.
We consider a class of density-driven flow problems. We are particularly interested in the problem of the salinization of coastal aquifers. We consider the Henry saltwater intrusion problem with uncertain porosity, permeability, and recharge parameters as a test case.
The reason for the presence of uncertainties is the lack of knowledge, inaccurate measurements,
and inability to measure parameters at each spatial or time location. This problem is nonlinear and time-dependent. The solution is the salt mass fraction, which is uncertain and changes in time. Uncertainties in porosity, permeability, recharge, and mass fraction are modeled using random fields. This work investigates the applicability of the well-known multilevel Monte Carlo (MLMC) method for such problems. The MLMC method can reduce the total computational and storage costs. Moreover, the MLMC method runs multiple scenarios on different spatial and time meshes and then estimates the mean value of the mass fraction.
The parallelization is performed in both the physical space and stochastic space. To solve every deterministic scenario, we run the parallel multigrid solver ug4 in a black-box fashion.
We use the solution obtained from the quasi-Monte Carlo method as a reference solution.
We investigated the applicability and efficiency of the MLMC approach for the Henry-like problem with uncertain porosity, permeability, and recharge. These uncertain parameters were modeled by random fields with three independent random variables. The numerical solution for each random realization was obtained using the well-known ug4 parallel multigrid solver. The number of required random samples on each level was estimated by computing the decay of the variances and computational costs for each level. We also computed the expected value and variance of the mass fraction in the whole domain, the evolution of the pdfs, the solutions at a few preselected points $(t,\bx)$, and the time evolution of the freshwater integral value. We have found that some QoIs require only 2-3 of the coarsest mesh levels, and samples from finer meshes would not significantly improve the result. Note that a different type of porosity may lead to a different conclusion.
The results show that the MLMC method is faster than the QMC method at the finest mesh. Thus, sampling at different mesh levels makes sense and helps to reduce the overall computational cost.
Here the interest is mainly to compute characterisations like the entropy,
the Kullback-Leibler divergence, more general $f$-divergences, or other such characteristics based on
the probability density. The density is often not available directly,
and it is a computational challenge to just represent it in a numerically
feasible fashion in case the dimension is even moderately large. It
is an even stronger numerical challenge to then actually compute said characteristics
in the high-dimensional case.
The task considered here was the numerical computation of characterising statistics of
high-dimensional pdfs, as well as their divergences and distances,
where the pdf in the numerical implementation was assumed discretised on some regular grid.
We have demonstrated that high-dimensional pdfs,
pcfs, and some functions of them
can be approximated and represented in a low-rank tensor data format.
Utilisation of low-rank tensor techniques helps to reduce the computational complexity
and the storage cost from exponential $\C{O}(n^d)$ to linear in the dimension $d$, e.g.\
$\C{O}(d n r^2)$ for the TT format. Here $n$ is the number of discretisation
points in one direction, $r \ll n$ is the maximal tensor rank, and $d$ is the problem dimension.
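The storage counts quoted here are easy to make concrete. The sketch below tallies entries for a full tensor versus a tensor-train representation with uniform rank $r$ (using the standard TT core sizes, assumed here):

```python
def full_storage(n, d):
    """Entries of a full tensor on an n^d grid."""
    return n ** d

def tt_storage(n, d, r):
    """Entries of a TT representation with uniform rank r:
    two boundary cores of size n*r plus (d-2) interior cores of size r*n*r."""
    if d == 1:
        return n
    return 2 * n * r + (d - 2) * n * r * r

# Example: d = 10 dimensions, n = 100 points per direction, rank r = 20.
full = full_storage(100, 10)   # 10^20 entries: infeasible to store
tt = tt_storage(100, 10, 20)   # a few hundred thousand entries
```

The gap between the two numbers is exactly the exponential-to-linear reduction claimed in the text.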
This document proposes a method for weakly supervised regression on uncertain datasets. It combines graph Laplacian regularization and cluster ensemble methodology. The method solves an auxiliary minimization problem to determine the optimal solution for predicting uncertain parameters. It is tested on artificial data to predict target values using a mixture of normal distributions with labeled, inaccurately labeled, and unlabeled samples. The method is shown to outperform a simplified version by reducing mean Wasserstein distance between predicted and true values.
Computing f-Divergences and Distances of High-Dimensional Probability Density...Alexander Litvinenko
Poster presented on Stochastic Numerics and Statistical Learning: Theory and Applications Workshop in KAUST, Saudi Arabia.
The task considered here was the numerical computation of characterising statistics of
high-dimensional pdfs, as well as their divergences and distances,
where the pdf in the numerical implementation was assumed discretised on some regular grid.
Even for moderate dimension $d$, the full storage and computation with such objects become very quickly infeasible.
We have demonstrated that high-dimensional pdfs,
pcfs, and some functions of them
can be approximated and represented in a low-rank tensor data format.
Utilisation of low-rank tensor techniques helps to reduce the computational complexity
and the storage cost from exponential $\C{O}(n^d)$ to linear in the dimension $d$, e.g.
$\C{O}(d n r^2)$ for the TT format. Here $n$ is the number of discretisation
points in one direction, $r \ll n$ is the maximal tensor rank, and $d$ is the problem dimension.
The particular data format is rather unimportant,
any of the well-known tensor formats (CP, Tucker, hierarchical Tucker, tensor-train (TT)) can be used,
and we used the TT data format. Much of the presentation, and in fact the central train of thought, is independent of the actual representation.
At the beginning, three possible ways of arriving at such a representation of the pdf were given. One is when the pdf is given in some approximate
analytical form, e.g. as a tensor product of lower-dimensional pdfs with a
product measure; another starts from an analogous representation of the pcf and a subsequent use of the
Fourier transform; a third starts from a low-rank functional representation of a high-dimensional
RV, again via its pcf.
The theoretical underpinnings of the relation between pdfs and pcfs as well as their
properties were recalled in Section: Theory, as they are important to be preserved in the
discrete approximation. This also introduced the concepts of the convolution and of
the point-wise multiplication Hadamard algebra, concepts which become especially important if
one wants to characterise sums of independent RVs or mixture models,
a topic we did not touch on for the sake of brevity but which follows very naturally from
the developments here. Especially the Hadamard algebra is also
important for the algorithms to compute various point-wise functions in the sparse formats.
Computing f-Divergences and Distances of High-Dimensional Probability Densi...Alexander Litvinenko
Talk presented on SIAM IS 2022 conference.
Very often, in the course of uncertainty quantification tasks or
data analysis, one has to deal with high-dimensional random variables (RVs)
(with values in $\Rd$). Just like any other RV,
a high-dimensional RV can be described by its probability density (\pdf) and/or
by the corresponding probability characteristic functions (\pcf),
or by a more general representation as
a function of other, known, random variables.
Here the interest is mainly to compute characterisations like the entropy, the Kullback-Leibler, or more general
$f$-divergences. These are all computed from the \pdf, which is often not available directly,
and it is a computational challenge to even represent it in a numerically
feasible fashion in case the dimension $d$ is even moderately large. It
is an even stronger numerical challenge to then actually compute said characterisations
in the high-dimensional case.
In this regard, in order to achieve a computationally feasible task, we propose
to approximate the density by a low-rank tensor.
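As a minimal illustration of computing such characteristics from a grid-discretised pdf (here in 1D on a full grid, without the low-rank tensor format the work itself uses):

```python
import math

def kl_divergence_grid(p, q, h):
    """KL divergence D(p||q) = integral of p*log(p/q) for densities sampled
    on a regular grid with spacing h (simple Riemann-sum quadrature)."""
    return h * sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def gauss(x, mu, s):
    """Gaussian density N(mu, s^2) at x."""
    return math.exp(-0.5 * ((x - mu) / s) ** 2) / (s * math.sqrt(2 * math.pi))

# Two unit-variance Gaussians discretised on [-10, 10] with spacing 0.01.
h = 0.01
grid = [-10 + i * h for i in range(2001)]
p = [gauss(x, 0.0, 1.0) for x in grid]
q = [gauss(x, 1.0, 1.0) for x in grid]
d = kl_divergence_grid(p, q, h)   # analytic value: (mu_p - mu_q)^2 / 2 = 0.5
```

In $d$ dimensions the same sum runs over $n^d$ grid points, which is precisely the cost explosion that motivates the low-rank tensor representation.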
Low rank tensor approximation of probability density and characteristic funct...Alexander Litvinenko
This document summarizes a presentation on computing divergences and distances between high-dimensional probability density functions (pdfs) represented using tensor formats. It discusses:
1) Motivating the problem using examples from stochastic PDEs and functional representations of uncertainties.
2) Computing Kullback-Leibler divergence and other divergences when pdfs are not directly available.
3) Representing probability characteristic functions and approximating pdfs using tensor decompositions like CP and TT formats.
4) Numerical examples computing Kullback-Leibler divergence and Hellinger distance between Gaussian and alpha-stable distributions using these tensor approximations.
Computation of electromagnetic fields scattered from dielectric objects of un...Alexander Litvinenko
This document describes using the Continuation Multi-Level Monte Carlo (CMLMC) method to compute electromagnetic fields scattered from dielectric objects of uncertain shapes. CMLMC optimally balances statistical and discretization errors using fewer samples on fine meshes and more on coarse meshes. The method is tested by computing scattering cross sections for randomly perturbed spheres under plane wave excitation and comparing results to the unperturbed sphere. Computational costs and errors are analyzed to demonstrate the efficiency of CMLMC for this scattering problem with uncertain geometry.
1. Motivation: why do we need low-rank tensors
2. Tensors of the second order (matrices)
3. CP, Tucker and tensor train tensor formats
4. Many classical kernels have (or can be approximated in) a low-rank tensor format
5. Post processing: Computation of mean, variance, level sets, frequency
Computation of electromagnetic fields scattered from dielectric objects of un...Alexander Litvinenko
Computational tools for characterizing electromagnetic scattering from objects with uncertain shapes are needed in various applications ranging from remote sensing at microwave frequencies to Raman spectroscopy at optical frequencies. Often, such computational tools use the Monte Carlo (MC) method to sample a parametric space describing geometric uncertainties. For each sample, which corresponds to a realization of the geometry, a deterministic electromagnetic solver computes the scattered fields. However, for an accurate statistical characterization the number of MC samples has to be large. In this work, to address this challenge, the continuation multilevel Monte Carlo (\CMLMC) method is used together with a surface integral equation solver.
The \CMLMC method optimally balances statistical errors due to sampling of
the parametric space, and numerical errors due to the discretization of the geometry using a hierarchy of discretizations, from coarse to fine.
The number of realizations of finer discretizations can be kept low, with most samples
computed on coarser discretizations to minimize computational cost.
Consequently, the total execution time is significantly reduced, in comparison to the standard MC scheme.
Propagation of Uncertainties in Density Driven Groundwater FlowAlexander Litvinenko
Major Goal: estimate risks of the pollution in a subsurface flow.
How?: we solve density-driven groundwater flow with uncertain porosity and permeability.
We set up the density-driven groundwater flow problem,
review stochastic modeling and stochastic methods, use the UG4 framework (https://gcsc.uni-frankfurt.de/simulation-and-modelling/ug4),
model uncertainty in porosity and permeability,
and run 2D and 3D numerical experiments.
Simulation of propagation of uncertainties in density-driven groundwater flowAlexander Litvinenko
Consider stochastic modelling of the density-driven subsurface flow in 3D. This talk was presented by Dmitry Logashenko at the IMG conference in Kunming, China, in August 2019.
Large datasets result in large dense matrices, say with 2,000,000 rows and columns. How does one work with such large matrices? How to approximate them? How to compute the log-likelihood, the determinant, the inverse? The answers are in this work.
This document summarizes a semi-supervised regression method that combines graph Laplacian regularization with cluster ensemble methodology. It proposes using a weighted averaged co-association matrix from the cluster ensemble as the similarity matrix in graph Laplacian regularization. The method (SSR-LRCM) finds a low-rank approximation of the co-association matrix to efficiently solve the regression problem. Experimental results on synthetic and real-world datasets show SSR-LRCM achieves significantly better prediction accuracy than an alternative method, while also having lower computational costs for large datasets. Future work will explore using a hierarchical matrix approximation instead of low-rank.
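The graph Laplacian regularization underlying this method can be illustrated in its simplest, hard-clamped (harmonic) limit. The toy sketch below is not the SSR-LRCM algorithm itself with its co-association similarity matrix, just the label-propagation idea behind Laplacian regularization: labeled values are fixed, and unlabeled values relax to the average of their neighbours.

```python
def harmonic_interpolation(adj, labels, iters=2000):
    """Graph-Laplacian semi-supervised regression in its interpolation limit:
    labeled nodes are clamped, unlabeled nodes are repeatedly replaced by the
    average of their neighbours (Gauss-Seidel sweeps toward the harmonic
    solution of L f = 0 on the unlabeled set)."""
    f = {v: labels.get(v, 0.0) for v in adj}
    for _ in range(iters):
        for v in adj:
            if v not in labels:
                f[v] = sum(f[u] for u in adj[v]) / len(adj[v])
    return f

# Hypothetical path graph 0-1-2-3-4 with labels only at the endpoints.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
labels = {0: 0.0, 4: 1.0}
f = harmonic_interpolation(adj, labels)   # interior values near 0.25, 0.5, 0.75
```

Replacing this dense relaxation with a low-rank approximation of the similarity matrix is what makes the published method scale to large datasets.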
Lovely Professional University (LPU) is deeply committed to promoting environmental sustainability through a variety of impactful initiatives on its campus. The university has adopted green infrastructure practices, including energy-efficient buildings, solar panel installations, and rainwater harvesting systems. Waste is managed responsibly through segregation, composting, and recycling programs, while water conservation is prioritized with treated water use and awareness drives. LPU also promotes eco-friendly transportation with electric vehicles and bicycle sharing options. Regular plantation drives, biodiversity zones, and student-led environmental clubs further highlight the active role both the institution and its students play in creating a greener future. Through these comprehensive efforts, LPU stands out as a model for sustainable campus development in India.
Green technology, also known as clean tech or environmental technology, encompasses innovative solutions that minimize environmental impact and promote sustainability. It focuses on reducing energy consumption, minimizing waste, and mitigating the effects of climate change. This field includes a wide range of applications, from renewable energy sources like solar and wind power to sustainable agriculture and waste management technologies. Green technology aims to create a more environmentally friendly and resource-efficient future.
Green technology, also known as environmental or clean technology, encompasses innovations and advancements that aim to minimize environmental impact and promote sustainability. It focuses on developing products, processes, and systems that are less harmful to the environment, conserve resources, and contribute to a more sustainable future. Green technology is crucial for tackling environmental challenges like climate change and pollution, and it offers numerous benefits, including cost savings, resource efficiency, and economic growth.
Green technology covers a wide range of areas, including renewable energy sources (solar, wind, hydropower), sustainable transportation (electric vehicles, public transit), green building practices, and waste management solutions. It also includes technologies that address pollution, such as carbon capture and sequestration, and innovations that promote water conservation and resource efficiency.
The adoption of green technology is driven by increasing concerns about climate change, environmental degradation, and the depletion of natural resources. Governments and businesses are investing heavily in green technology to develop innovative solutions that can reduce emissions, improve resource efficiency, and protect the environment. Green technology is not just about environmental protection; it also offers significant economic opportunities, creating new industries and job markets. By embracing green technology, we can build a more sustainable and prosperous future for all. Green technology is a broad term: it refers to the use of research and innovation to create products and services that are less damaging to the environment and society.
Clean technology, or clean tech, is a subset of green technology.
Green technology encompasses the broad range of research, development, and implementation of technologies that aim to reduce environmental impact and promote sustainability. It's a diverse field, encompassing everything from renewable energy sources and energy-efficient buildings to sustainable agriculture and pollution control. Green technology is essentially about finding ways to innovate and create products and services that are less harmful to the environment and contribute to a more sustainable future.
Key areas of green technology include:
Renewable Energy:
This focuses on harnessing natural resources like solar, wind, hydro, and geothermal power to generate electricity and other forms of energy.
Analysis of fresh fruit bunches production based on oil palm bioenvironmental...Open Access Research Paper
Oil palm plantations are a source of foreign exchange for the country. Good management largely determines the production of Fresh Fruit Bunches (FFB) of oil palm. Sustainable management must take into account all the factors that affect FFB production and needs a model that can serve as a guide in oil palm plantations. This study aims to analyze all the factors that affect oil palm FFB production and to create a model for managing FFB production in oil palm plantations. The study uses survey and observation methods. Observation parameters are the results of interviews, the afdeling and research blocks, planting age (years), land area (ha), number of productive palms, SPH (space per hectare, palms/ha), types of weed, pest, and disease attack, biological agents, number of shoots per tree, average weight (kg/plant), FFB production (tonne/ha), fertilization (kg/palm), weed control, pest control, disease control, soil conservation, water conservation, rainfall and rainy days, landscape, area and slope shape, soil family, soil pH, base saturation, slope, and land suitability. Data were analyzed by Cobb-Douglas multiple linear regression in SPSS 18. The results show that the management model of oil palm plantations includes ten bioenvironmental factors, namely bio-physical, bio-agent, topography, climate, fertilization, soil conservation, water conservation, weeds, pests, and diseases. Increases and decreases in the management process affect FFB production.
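Cobb-Douglas regression is fitted by log-linearizing the power-law model and applying ordinary least squares. A one-factor sketch on synthetic data (the study itself used ten factors in SPSS 18; the data here are made up):

```python
import math

def cobb_douglas_fit(x, y):
    """Fit the one-factor Cobb-Douglas model y = A * x**b by ordinary
    least squares on the log-linearized form ln y = ln A + b ln x."""
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    n = len(x)
    mx, my = sum(lx) / n, sum(ly) / n
    # OLS slope and intercept on the log scale.
    b = sum((u - mx) * (v - my) for u, v in zip(lx, ly)) / \
        sum((u - mx) ** 2 for u in lx)
    A = math.exp(my - b * mx)
    return A, b

# Synthetic data generated from y = 2 * x**0.7 is recovered exactly.
x = [1.0, 2.0, 4.0, 8.0, 16.0]
y = [2.0 * v ** 0.7 for v in x]
A, b = cobb_douglas_fit(x, y)
```

With several factors the same log-linearization yields a multiple linear regression, which is what the study's elasticity estimates come from.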
Impact of Carbon Management on Different IndustriesStellarix
Effective carbon management is crucial for achieving climate goals. Implementing carbon taxes and emissions trading systems has been shown to reduce per capita CO₂ emissions.
Innovations such as AI-driven emissions tracking, carbon capture, and blockchain-based carbon credit systems are enhancing transparency and efficiency in emissions reduction efforts.
Since 1970, we have developed noise measurement instrument, including sound level meters, noise dosimeters, and environmental noise monitors. Our flagship products come with an industry leading, 15-year no-quibble warranty.
Take your outdoor noise monitoring to the next level
The Quantum Outdoor noise monitor is a powerful outdoor noise monitor with built-in cloud connectivity, ideal for long-term unattended noise monitoring applications. It offers a complete noise monitoring solution by offering all the benefits of remote 24/7 noise monitoring and seeing noise level data on the MyCirrus cloud platform anytime, anywhere.
With the ability to trigger events and record audio, The Quantum Outdoor noise monitor can send alerts and notifications to users using various methods allowing you to take corrective action in real-time. Configurable in any way as standalone instruments or combinations, The Quantum Outdoor noise monitor can be powered by PoE, mains, solar or battery. The Quantum Outdoor noise monitor can work in any environment for any industry:
- Unattended environmental noise measurements
- Industrial noise monitoring
- Construction and demolition site noise monitoring
- Measurement of noise to meet standards such as BS 4142 and BS 5228
- Outdoor music, sports, and entertainment monitoring
This study material offers a great deal of information about the different types of wastes, their impact and the management strategies exercised for sustainable development.
Overview of sparse and low-rank matrix / tensor techniques
1. Overview of Low-rank and Sparse Techniques in
Spatial Statistics and Parameter Identification
Alexander Litvinenko
Bayesian Computational Statistics & Modeling, KAUST
https://bayescomp.kaust.edu.sa/
KAUST Biostatistics Group Seminar,
October 3, 2018
2. The structure of the talk
We collect a lot of data; it is easy and cheap nowadays.
Major Goal: develop new statistical tools to address new problems.
1. Low-rank matrices
2. Sparse matrices
3. Hierarchical matrices
4. Approximation of Matérn covariance functions and joint Gaussian likelihoods
5. Identification of unknown parameters via maximizing the Gaussian log-likelihood
6. Low-rank tensor methods
3. Motivation, problem 1
Task: predict temperature, velocity, and salinity, and estimate the parameters of the covariance.
Grid: 50 million locations on 50 levels, 4*(X*Y*Z) + X*Y = 4*500*500*50 + 500*500 ≈ 50 million.
High-resolution time-dependent data about the Red Sea: zonal velocity and temperature.
4. Motivation, problem 2
Tasks: 1) improve the statistical model that predicts moisture; 2) use this improved model to forecast missing values.
Given: n ≈ 2.5 million observations of moisture data.
[Figure: map of soil moisture (values 0.15–0.50) over longitude −120…−70 and latitude 25…50.]
High-resolution daily soil moisture data at the top layer of the Mississippi basin, U.S.A., 01.01.2014 (Chaney et al., in review). Important for agriculture, defense, ...
6. Five tasks in statistics to solve
Task 1. Approximate a Matérn covariance function in a low-rank format.
Task 2. Compute the Cholesky factorization C = LL^T, or C^{1/2}.
Task 3. Kriging estimate: ŝ = C_{sy} C_{yy}^{−1} y.
Task 4. Geostatistical design:
φ_A = N^{−1} trace(C_{ss|y})  and  φ_C = z^T C_{ss|y} z,  where  C_{ss|y} := C_{ss} − C_{sy} C_{yy}^{−1} C_{ys}.
Task 5. Compute the joint Gaussian log-likelihood function
L(θ) = −(N/2) log(2π) − (1/2) log det C(θ) − (1/2) z^T C(θ)^{−1} z.
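Tasks 3 and 4 can be sketched in a few lines of dense linear algebra. This is a minimal illustration, not the H-matrix implementation from the talk: the 1D exponential covariance, the locations, the observed values, and the tiny nugget are all illustrative assumptions.

```python
import numpy as np

# Sketch of the kriging estimate s_hat = C_sy C_yy^{-1} y (Task 3) and the
# conditional covariance C_ss|y = C_ss - C_sy C_yy^{-1} C_ys (Task 4),
# using dense matrices on a small 1D example (illustrative assumptions).
rng = np.random.default_rng(4)

def cov(a, b, ell=0.2):
    """Exponential covariance between two sets of 1D locations."""
    return np.exp(-np.abs(a[:, None] - b[None, :]) / ell)

y_loc = np.linspace(0.0, 1.0, 20)          # observed locations
s_loc = np.array([0.05, 0.55, 0.95])       # prediction locations
y = np.sin(2 * np.pi * y_loc)              # observed values

Cyy = cov(y_loc, y_loc) + 1e-10 * np.eye(len(y_loc))   # tiny nugget
Csy = cov(s_loc, y_loc)

s_hat = Csy @ np.linalg.solve(Cyy, y)                  # kriging estimate
Css_y = cov(s_loc, s_loc) - Csy @ np.linalg.solve(Cyy, Csy.T)

print(s_hat.shape, Css_y.shape)            # (3,) (3, 3)
```

The design criteria of Task 4 then follow directly as `np.trace(Css_y) / len(s_loc)` and `z @ Css_y @ z` for a chosen vector z.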
7. Task 6: Estimation of unknown parameters
[Figure: boxplots of the estimated ν. (left) Dependence of the boxplots for ν (values ≈ 0.42–0.58) on the H-matrix rank k = {3, 7, 9, 12}, when n = 16,000; (right) convergence of the boxplots for ν with an increasing number of measurements n = {1000, 2000, 4000, 8000, 16000, 32000}; 100 replicates.]
9. Overview
1. sparse: storage O(n), many data formats, algorithms not easy, 5-6 existing packages, limited applicability
2. low-rank: storage O(n), based on the SVD, easy algorithms, many packages, limited applicability
3. H-matrices: storage O(k n log n), non-trivial implementation, very wide applicability
4. low-rank tensors: for d-dimensional problems (d > 3), many formats, storage O(d n k), based on SVD/QR
5. combinations of the above
11. Low-rank (rank-k) matrices
M ∈ R^{n×m}, U ≈ Ũ ∈ R^{n×k}, V ≈ Ṽ ∈ R^{m×k}, k ≪ min(n, m).
The storage of M̃ = Ũ Σ̃ Ṽ^T is k(n + m) instead of n · m for M represented in the full matrix format.
Figure: Reduced SVD M = UΣV^T ≈ ŨΣ̃Ṽ^T; only the k biggest singular values are kept.
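The truncated SVD above is easy to try out numerically. A minimal sketch, with illustrative sizes and an artificially low-rank test matrix (these choices are assumptions, not from the talk):

```python
import numpy as np

# Rank-k approximation via the truncated SVD: keep the k largest singular
# values. The test matrix is constructed to have exact rank 5.
def rank_k_approx(M, k):
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

rng = np.random.default_rng(0)
n, m, true_rank = 200, 150, 5
M = rng.standard_normal((n, true_rank)) @ rng.standard_normal((true_rank, m))
M_k = rank_k_approx(M, k=5)

# Storage drops from n*m to k*(n+m) numbers.
print(n * m, true_rank * (n + m))   # 30000 1750
print(np.linalg.norm(M - M_k) / np.linalg.norm(M))   # ~ machine precision
```

For a matrix whose singular values decay quickly (as for smooth covariance kernels), the same truncation gives a small error even when the exact rank is full.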
13. H-matrices (Hackbusch '99), main steps
1. Build the cluster tree T_I and the block cluster tree T_{I×I}.
[Figure: the index set I is split recursively, I → I_1, I_2 → I_11, I_12, I_21, I_22, defining the cluster tree.]
14. Admissible condition
2. For each block (t × s) ∈ T_{I×I}, t, s ∈ T_I, check the admissibility condition
min{diam(Q_t), diam(Q_s)} ≤ η · dist(Q_t, Q_s).
If adm = true, then M|_{t×s} is stored as a rank-k matrix block.
If adm = false, then divide M|_{t×s} further, or define it as a dense matrix block if it is small enough.
[Figure: bounding boxes Q_t and Q_s and the distance between them; the resulting H-matrix block structure.]
All steps: Grid → cluster tree (T_I) + admissibility condition → block cluster tree (T_{I×I}) → H-matrix → H-matrix arithmetics.
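The admissibility test is simple to state in code. A minimal sketch with axis-aligned bounding boxes; the boxes and the choice η = 1 are illustrative assumptions:

```python
import numpy as np

# Admissibility test for a block (t, s):
#   min{diam(Q_t), diam(Q_s)} <= eta * dist(Q_t, Q_s).
# Boxes are (lower_corner, upper_corner) pairs of numpy arrays.
def diam(box):
    lo, hi = box
    return np.linalg.norm(hi - lo)

def dist(box1, box2):
    # Euclidean distance between two axis-aligned boxes (0 if they overlap).
    (lo1, hi1), (lo2, hi2) = box1, box2
    gap = np.maximum(0.0, np.maximum(lo1 - hi2, lo2 - hi1))
    return np.linalg.norm(gap)

def admissible(box_t, box_s, eta=1.0):
    return min(diam(box_t), diam(box_s)) <= eta * dist(box_t, box_s)

# Two well-separated unit squares: admissible -> store as a rank-k block.
A = (np.zeros(2), np.ones(2))
B = (np.array([3.0, 0.0]), np.array([4.0, 1.0]))
print(bool(admissible(A, B)))    # True
# Touching squares: not admissible -> subdivide or keep dense.
C = (np.ones(2), 2 * np.ones(2))
print(bool(admissible(A, C)))    # False
```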
17. Identifying unknown parameters
Given: Z = {Z(s_1), ..., Z(s_n)}, where Z(s) is a Gaussian random field indexed by a spatial location s ∈ R^d, d ≥ 1.
Assumption: Z has mean zero and a stationary parametric covariance function, e.g. Matérn:
C(r; θ) = (σ² 2^{1−ν} / Γ(ν)) (r/ℓ)^ν K_ν(r/ℓ),  θ = (σ², ν, ℓ).
To identify: the unknown parameters θ := (σ², ν, ℓ).
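The Matérn function above can be evaluated directly with the modified Bessel function of the second kind. A minimal sketch; the parameter values are illustrative assumptions (ℓ = 0.2337 is only borrowed from a later slide as an example):

```python
import numpy as np
from scipy.special import kv, gamma

# Matern covariance as on the slide:
#   C(r) = sigma^2 * 2^(1-nu) / Gamma(nu) * (r/ell)^nu * K_nu(r/ell).
def matern(r, sigma2=1.0, nu=0.5, ell=0.2337):
    r = np.asarray(r, dtype=float)
    c = np.empty_like(r)
    c[r == 0] = sigma2                      # limit as r -> 0
    rp = r[r > 0] / ell
    c[r > 0] = sigma2 * 2 ** (1 - nu) / gamma(nu) * rp ** nu * kv(nu, rp)
    return c

# Sanity check: for nu = 1/2 the Matern kernel reduces to the exponential
# covariance sigma^2 * exp(-r/ell).
r = np.array([0.0, 0.1, 0.5])
print(np.allclose(matern(r, nu=0.5), np.exp(-r / 0.2337)))   # True
```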
18. Identifying unknown parameters
Statistical inference about θ is then based on the Gaussian log-likelihood function:
L(θ) = −(n/2) log(2π) − (1/2) log|C(θ)| − (1/2) Z^T C(θ)^{−1} Z,   (3)
where the covariance matrix C(θ) has entries C(s_i − s_j; θ), i, j = 1, ..., n. The maximum likelihood estimator of θ is the value θ̂ that maximizes (3).
19. Details of the identification
To maximize the log-likelihood function we use Brent's method [Brent '73] (combining the bisection method, the secant method, and inverse quadratic interpolation).
1. H-matrix approximation: C(θ) ≈ C̃(θ, k) or ≈ C̃(θ, ε).
2. H-Cholesky: C̃(θ, k) = L̃(θ, k) L̃(θ, k)^T.
3. Z^T C̃^{−1} Z = Z^T (L̃L̃^T)^{−1} Z = v^T · v, where v is the solution of L̃(θ, k) v(θ) = Z.
log det C̃ = log det(L̃L̃^T) = log ∏_{i=1}^n λ_i² = 2 ∑_{i=1}^n log λ_i,  where λ_i = L̃_ii,
L̃(θ, k) = −(N/2) log(2π) − ∑_{i=1}^N log L̃_ii(θ, k) − (1/2) v(θ)^T · v(θ).   (4)
20. Dependence of the log-likelihood and its ingredients on the parameters, n = 4225; k = 8 in the first row and k = 16 in the second. First column: ℓ = 0.2337 fixed; second column: ν = 0.5 fixed.
21. Remark: stabilization with nugget
To avoid instability in computing the Cholesky factorization, we add a nugget: C_m = C + δ²I.
Let λ_i be the eigenvalues of C; then the eigenvalues of C_m are λ_i + δ², and
log det(C_m) = log ∏_{i=1}^n (λ_i + δ²) = ∑_{i=1}^n log(λ_i + δ²).
[Figure: (left) dependence of the negative log-likelihood on the parameter ℓ with nuggets {0.01, 0.005, 0.001} for the Gaussian covariance; (right) zoom of the left figure near the minimum. n = 2000 random points from the moisture example, rank k = 14, σ² = 1, ν = 0.5.]
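The eigenvalue-shift argument is easy to verify numerically. A minimal sketch; the grid, the correlation length, and δ² = 10⁻⁴ are illustrative assumptions:

```python
import numpy as np

# Nugget stabilization C_m = C + delta^2 I: every eigenvalue of C is shifted
# by delta^2, so a numerically (semi-)definite Gaussian covariance matrix
# becomes safely positive definite and its Cholesky factorization succeeds.
x = np.linspace(0.0, 1.0, 40)
C = np.exp(-((x[:, None] - x[None, :]) / 0.5) ** 2)   # Gaussian covariance

delta2 = 1e-4
Cm = C + delta2 * np.eye(len(x))

lam = np.linalg.eigvalsh(C)
lam_m = np.linalg.eigvalsh(Cm)
print(np.allclose(lam_m, lam + delta2))   # eigenvalues shifted by delta^2
print(np.all(lam_m > 0))                  # C_m is positive definite
np.linalg.cholesky(Cm)                    # Cholesky now succeeds
```

The Gaussian (squared-exponential) kernel is the worst case here: its spectrum decays so fast that the smallest eigenvalues fall below machine precision, which is exactly the instability the slide refers to.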
22. Error analysis
Theorem (1). Let C̃ be an H-matrix approximation of the matrix C ∈ R^{n×n} such that
ρ(C̃^{−1}C − I) ≤ ε < 1.
Then
|log|C̃| − log|C|| ≤ −n log(1 − ε).   (5)
Proof: See [Ballani, Kressner 14] and [Ipsen 05].
Remark: the factor n is pessimistic and is not really observed numerically.
23. Error in the log-likelihood
Theorem (2). Let C̃ ≈ C ∈ R^{n×n} and let Z be a vector with ‖Z‖ ≤ c_0 and ‖C^{−1}‖ ≤ c_1. Let ρ(C̃^{−1}C − I) ≤ ε < 1. Then it holds
|L̃(θ) − L(θ)| ≤ (1/2) |log|C̃| − log|C|| + (1/2) |Z^T (C̃^{−1} − C^{−1}) Z|
  ≤ −(1/2) n log(1 − ε) + (1/2) |Z^T (C̃^{−1}C − I) C^{−1} Z|
  ≤ −(1/2) n log(1 − ε) + (1/2) c_0² · c_1 · ε.
24. [Figure: three panels of functional boxplots, vertical axis ≈ 0.55–0.85.] Functional boxplots of the estimated parameters as a function of the accuracy ε, based on 50 replicates with n = {4000, 8000, 32000} observations (left to right columns). True parameters θ* = (ℓ, ν, σ²) = (0.7, 0.9, 1.0) represented by the green dotted lines.
25. How much memory is needed?
[Figure: H-matrix storage in bytes (≈ 5–6.6 · 10⁶).] (left) Dependence of the matrix size on the covariance length ℓ (ν = 0.325, σ² = 0.98), and (right) on the smoothness ν (ℓ = 0.58, σ² = 0.98), for two different accuracies in the H-matrix sub-blocks ε = {10⁻⁴, 10⁻⁶}, for n = 2,000 locations in the domain [32.4, 43.4] × [−84.8, −72.9].
27. How does the log-likelihood depend on n?
Figure: Dependence of the negative log-likelihood function on the number of locations n = {2000, 4000, 8000, 16000, 32000}, in log-scale.
28. Time and memory for the parallel H-matrix approximation
Maximal number of cores is 40; ν = 0.325, ℓ = 0.64, σ² = 0.98.

n          | C̃: time (sec.) | C̃: size (MB) | kB/dof | L̃L̃^T: time (sec.) | L̃L̃^T: size (MB) | ‖I − (L̃L̃^T)^{−1} C̃‖_2
32,000     | 3.3             | 162           | 5.1    | 2.4                 | 172.7             | 2.4 · 10⁻³
128,000    | 13.3            | 776           | 6.1    | 13.9                | 881.2             | 1.1 · 10⁻²
512,000    | 52.8            | 3420          | 6.7    | 77.6                | 4150              | 3.5 · 10⁻²
2,000,000  | 229             | 14790         | 7.4    | 473                 | 18970             | 1.4 · 10⁻¹
29. Parameter identification
[Figure: boxplots of the estimates, ℓ ≈ 0.07–0.12 and ν ≈ 0.3–0.7.] Synthetic data with known parameters. Boxplots for ℓ and ν for n = 1,000 × {64, 32, ..., 4, 2}; 100 replicates.
30. Parameter estimation vs. H-matrix accuracy
[Figure: three panels of estimates versus the accuracy ε = {10⁻⁷, 10⁻⁶, 10⁻⁵, 10⁻⁴}.] Estimated parameters as a function of the accuracy ε, based on 30 replicates (black solid curves) with n = 32,000 observations. True parameters θ = (ℓ, ν, σ²) = (0.7, 0.9, 1.0) represented by the green dotted lines. Replicates on (a) identify ℓ̂; on (b) identify ν̂; and on (c) identify σ̂².
31. Canonical (left) and Tucker (right) decompositions of 3D tensors.
32. Canonical (left) and Tucker (right) decompositions of 3D tensors.
[Figure: a tensor A ∈ R^{n₁×n₂×n₃} written as a sum of R rank-one terms u^{(1)} ⊗ u^{(2)} ⊗ u^{(3)} (canonical format), and as a small core B ∈ R^{r₁×r₂×r₃} multiplied by factor matrices V^{(1)}, V^{(2)}, V^{(3)} (Tucker format).]
[A. Litvinenko, D. Keyes, V. Khoromskaia, B. N. Khoromskij, H. G. Matthies, Tucker Tensor Analysis of Matérn Functions in Spatial Statistics, 2018]
33. Approximating the Matérn covariance in a low-rank tensor format
C ≈ ∑_{i=1}^r ⊗_{μ=1}^d C_{iμ} ≈ ∑_{i=1}^R ⊗_{ν=1}^d ũ_{iν},
C_ν(r) = (2^{1−ν}/Γ(ν)) (√(2ν) r)^ν K_ν(√(2ν) r),
f_{α,ν}(ρ) := (Γ(ν + d/2) α^{2ν}) / (π^{d/2} Γ(ν)) · 1/(α² + ρ²)^{ν+d/2}.
[Diagram: covariance ↔ spectral density, connected by the FFT and IFFT; the low-rank approximation is done on the spectral-density side.]
Two possible ways to find a low-rank approximation of the Matérn covariance matrix: the Fourier transform is analytically known and has a known low-rank approximation; the inverse Fourier transform (IFFT) can be computed numerically and does not change the rank.
34. Convergence
f_{α,ν}(ρ) := C / (α² + ρ²)^{ν+d/2},   (6)
where α ∈ (0.1, 100) and d = 1, 2, 3.
[Figure: Frobenius error versus the Tucker rank (5–15) for ν = {0.1, 0.2, 0.4, 0.8}.] Convergence w.r.t. the Tucker rank of the 3D spectral density of the Matérn covariance (6), with α = 0.1 (left, error down to ≈ 10⁻¹⁰) and α = 100 (right, error down to ≈ 10⁻²⁰).
35. Example of a very large problem

         | n = 100 | n = 500 | n = 1000
d = 1000 | 3.7     | 67      | 491

Table: Computing time (in sec.) to set up and to compute the trace of C̃ = ∑_{j=1}^R ⊗_{ν=1}^d C_{jν}, R = 10. The matrix C̃ is of size N × N, where N = n^d, d = 1000 and n = {100, 500, 1000}. A Linux cluster with 40 processors and 128 GB RAM was used.
Example 2: Cholesky decomposition of the Gaussian covariance matrix in 3D. The numerical experiment uses a grid with 6000³ mesh points. The algorithm requires 15 seconds (11 seconds for the matrix setup and 4 seconds for the Cholesky factorization).
36. Properties
Let cov(x, y) = exp(−|x − y|²), where x = (x₁, ..., x_d), y = (y₁, ..., y_d) ∈ D ⊂ R³. Then
cov(x, y) = exp(−|x₁ − y₁|²) ⊗ exp(−|x₂ − y₂|²) ⊗ exp(−|x₃ − y₃|²),
C = C₁ ⊗ ... ⊗ C_d.
If the d Cholesky decompositions exist, i.e. C_i = L_i · L_i^T, i = 1..d, then
C₁ ⊗ ... ⊗ C_d = (L₁L₁^T) ⊗ ... ⊗ (L_d L_d^T) = (L₁ ⊗ ... ⊗ L_d) · (L₁^T ⊗ ... ⊗ L_d^T) =: L · L^T,
where L := L₁ ⊗ ... ⊗ L_d and L^T := L₁^T ⊗ ... ⊗ L_d^T are again lower- and upper-triangular matrices. Moreover,
(C₁ ⊗ ... ⊗ C_d)^{−1} = C₁^{−1} ⊗ ... ⊗ C_d^{−1}.
The cost is reduced from O(N log N), N = n^d, to O(d n log n).
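The Kronecker factorization property above can be verified on small dense factors. A minimal sketch; the two random SPD matrices are illustrative assumptions:

```python
import numpy as np

# If C_i = L_i L_i^T, then C_1 (x) C_2 = (L_1 (x) L_2)(L_1 (x) L_2)^T, so the
# big matrix never needs to be factorized; the inverse factorizes the same way.
rng = np.random.default_rng(2)

def random_spd(n):
    A = rng.standard_normal((n, n))
    return A @ A.T + n * np.eye(n)          # diagonally shifted -> SPD

C1, C2 = random_spd(4), random_spd(5)
L1, L2 = np.linalg.cholesky(C1), np.linalg.cholesky(C2)

L = np.kron(L1, L2)                          # Cholesky factor of C1 (x) C2
print(np.allclose(L @ L.T, np.kron(C1, C2)))             # True
print(np.allclose(np.tril(L), L))                        # L is lower-triangular
# The inverse also factorizes: (C1 (x) C2)^{-1} = C1^{-1} (x) C2^{-1}.
print(np.allclose(np.linalg.inv(np.kron(C1, C2)),
                  np.kron(np.linalg.inv(C1), np.linalg.inv(C2))))   # True
```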
37. Properties
Let C ≈ C̃ = ∑_{i=1}^r ⊗_{μ=1}^d C_{iμ}. Then
diag(C̃) = diag(∑_{i=1}^r ⊗_{μ=1}^d C_{iμ}) = ∑_{i=1}^r ⊗_{μ=1}^d diag(C_{iμ}),   (7)
trace(C̃) = trace(∑_{i=1}^r ⊗_{μ=1}^d C_{iμ}) = ∑_{i=1}^r ∏_{μ=1}^d trace(C_{iμ}).   (8)
det(C₁ ⊗ C₂) = det(C₁)^{n₂} · det(C₂)^{n₁},
log det(C₁ ⊗ C₂) = log(det(C₁)^{n₂} · det(C₂)^{n₁}) = n₂ log det C₁ + n₁ log det C₂,
log det(C₁ ⊗ C₂ ⊗ C₃) = n₂n₃ log det C₁ + n₁n₃ log det C₂ + n₁n₂ log det C₃.
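The determinant and trace rules on this slide are cheap to check numerically. A minimal sketch; the small SPD matrices and sizes are illustrative assumptions:

```python
import numpy as np

# Verify trace(C1 (x) C2) = trace(C1)*trace(C2) and
# log det(C1 (x) C2) = n2*log det C1 + n1*log det C2
# on small SPD matrices (the log form avoids overflow in det itself).
rng = np.random.default_rng(3)
n1, n2 = 3, 4
A = rng.standard_normal((n1, n1)); C1 = A @ A.T + n1 * np.eye(n1)
B = rng.standard_normal((n2, n2)); C2 = B @ B.T + n2 * np.eye(n2)

K = np.kron(C1, C2)
logdet_K = np.linalg.slogdet(K)[1]
print(np.isclose(logdet_K,
                 n2 * np.linalg.slogdet(C1)[1]
                 + n1 * np.linalg.slogdet(C2)[1]))        # True
print(np.isclose(np.trace(K), np.trace(C1) * np.trace(C2)))   # True
```

This is why the log-likelihood of a Kronecker-structured covariance only needs the log-determinants and traces of the small factors.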
38. Log-likelihood in the tensor format
L = −(n₁n₂n₃/2) log(2π) − (1/2)(n₂n₃ log det C₁ + n₁n₃ log det C₂ + n₁n₂ log det C₃) − (1/2) ∑_{i=1}^r ∑_{j=1}^r (u_i^T u_j)(w_i^T w_j).
39. Conclusion
Sparse, low-rank, and hierarchical matrices (pros and cons)
Approximated the Matérn covariance
Provided an error estimate for |L(θ) − L̃(θ)|
Studied the influence of the H-matrix approximation error on the estimated parameters (boxplots)
With the application of H-matrices:
we extend the class of covariance functions we can work with,
we allow non-regular discretization of the covariance function on large spatial grids.
Note: the tensor rank is not equal to the matrix rank; the matrix rank separates x from y, while the tensor rank separates (x_i − y_i) from (x_j − y_j), i ≠ j.
40. Literature
1. A. Litvinenko, HLIBCov: Parallel Hierarchical Matrix Approximation of Large Covariance Matrices and Likelihoods with Applications in Parameter Identification, preprint arXiv:1709.08625, 2017
2. A. Litvinenko, Y. Sun, M.G. Genton, D. Keyes, Likelihood Approximation With Hierarchical Matrices For Large Spatial Datasets, preprint arXiv:1709.04419, 2017
3. B.N. Khoromskij, A. Litvinenko, H.G. Matthies, Application of hierarchical matrices for computing the Karhunen-Loève expansion, Computing 84 (1-2), 49-67, 2009
4. H.G. Matthies, A. Litvinenko, O. Pajonk, B.V. Rosić, E. Zander, Parametric and uncertainty computations with tensor product representations, Uncertainty Quantification in Scientific Computing, 139-150, 2012
5. W. Nowak, A. Litvinenko, Kriging and spatial design accelerated by orders of magnitude: Combining low-rank covariance approximations with FFT-techniques, Mathematical Geosciences 45 (4), 411-435, 2013
6. P. Waehnert, W. Hackbusch, M. Espig, A. Litvinenko, H. Matthies, Efficient low-rank approximation of the stochastic Galerkin matrix in the tensor format, Computers & Mathematics with Applications 67 (4), 818-829, 2014
7. M. Espig, W. Hackbusch, A. Litvinenko, H.G. Matthies, E. Zander, Efficient analysis of high dimensional data in tensor formats, Sparse Grids and Applications, 31-56, Springer, Berlin, 2013
8. A. Litvinenko, D. Keyes, V. Khoromskaia, B. N. Khoromskij, H. G. Matthies, Tucker Tensor Analysis of Matérn Functions in Spatial Statistics, DOI: https://doi.org/10.1515/cmam-2018-0022, Computational Methods in Applied Mathematics, 2018
41. Literature
9. S. Dolgov, B.N. Khoromskij, A. Litvinenko, H.G. Matthies, Computation of the Response Surface in the Tensor Train data format, arXiv preprint arXiv:1406.2816, 2014
10. S. Dolgov, B.N. Khoromskij, A. Litvinenko, H.G. Matthies, Polynomial Chaos Expansion of Random Coefficients and the Solution of Stochastic Partial Differential Equations in the Tensor Train Format, SIAM/ASA J. Uncertainty Quantification 3 (1), 1109-1135, 2015
11. A. Litvinenko, Application of Hierarchical Matrices for Solving Multiscale Problems, PhD Thesis, Leipzig University, Germany, https://www.wire.tu-bs.de/mitarbeiter/litvinen/diss.pdf, 2006
12. B.N. Khoromskij, A. Litvinenko, Domain decomposition based H-matrix preconditioners for the skin problem, Domain Decomposition Methods in Science and Engineering XVII, pp 175-182, 2006
42. Used Literature and Slides
Book of W. Hackbusch, 2012
Dissertations of I. Oseledets and M. Espig
Articles of Tyrtyshnikov et al., De Lathauwer et al., L. Grasedyck, B. Khoromskij, M. Espig
Lecture courses and presentations of Boris and Venera Khoromskij
Software by T. Kolda et al.; M. Espig et al.; D. Kressner, C. Tobler; I. Oseledets et al.
43. Tensor Software
I. Oseledets et al., Tensor Train toolbox (Matlab), http://spring.inm.ras.ru/osel
D. Kressner, C. Tobler, Hierarchical Tucker Toolbox (Matlab), http://www.sam.math.ethz.ch/NLAgroup/htucker toolbox.html
M. Espig et al., Tensor Calculus library (C): http://gitorious.org/tensorcalculus