The document discusses Konstantinos Giannakis's dissertation defense on unconventional computing methods using automata. It summarizes the dissertation's structure, including discussions on standard computation, infinite computation using automata, membrane computing using novel membrane automata definitions, and quantum computing using periodic quantum automata. It provides examples and definitions related to each of these topics.
This document presents a new class of restricted quantum membrane systems. The key ideas are to define membrane systems that operate under strictly unitary quantum evolution rules, avoiding problems associated with transferring objects between membranes. A cascading membrane P system is defined, where membranes are arranged hierarchically with input and output spaces coupled in a pipeline. Computation proceeds by applying unitary operators to manipulate qubit registers representing object degrees in each membrane. This model is shown to be capable of simulating classical automata. The approach aims to combine variants of P systems with quantum computing techniques in a way that is consistent with underlying quantum physics.
Presentation of "Quantum automata for infinite periodic words" for the 6th International Conference on Information, Intelligence, Systems and Applications (IISA 2015)
Monte Carlo Simulations, Sampling and Markov Chain Monte Carlo - Xin-She Yang
The document discusses Monte Carlo methods and Markov chain Monte Carlo (MCMC). It provides examples of using Monte Carlo simulations to estimate pi and to solve Buffon's needle problem. It also discusses random walks in Markov chains, the PageRank algorithm used by Google, and challenges with high-dimensional integrals and with distributions that lack a closed-form inverse. MCMC methods are presented as a way to address these challenges.
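As a hedged illustration of the pi-estimation example this summary mentions (a minimal sketch of the standard approach, not code from the document):

```python
import random

def estimate_pi(n_samples: int = 1_000_000, seed: int = 0) -> float:
    """Estimate pi from the fraction of uniform points in the unit
    square that land inside the quarter circle of radius 1."""
    rng = random.Random(seed)
    inside = sum(
        1 for _ in range(n_samples)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    return 4.0 * inside / n_samples

print(estimate_pi())  # converges to ~3.14159 as n_samples grows
```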
Markov chain Monte Carlo methods and some attempts at parallelizing them - Pierre Jacob
Markov chain Monte Carlo (MCMC) methods are commonly used to approximate properties of target probability distributions. However, MCMC estimators are generally biased for any fixed number of samples. The document discusses various techniques for constructing unbiased estimators from MCMC output, including regeneration, sequential Monte Carlo samplers, and coupled Markov chains. Specifically, running two Markov chains in parallel and taking the difference in their values at meeting times can yield an unbiased estimator, though certain conditions must hold.
Hidden Markov Models with applications to speech recognition - butest
This document provides an introduction to hidden Markov models (HMMs). It discusses how HMMs can be used to model sequential data where the underlying states are not directly observable. The key aspects of HMMs are: (1) the model has a set of hidden states that evolve over time according to transition probabilities, (2) observations are emitted based on the current hidden state, (3) the four basic problems of HMMs are evaluation, decoding, training, and model selection. Examples discussed include modeling coin tosses, balls in urns, and speech recognition. Learning algorithms for HMMs like Baum-Welch and Viterbi are also summarized.
Why should you care about Markov Chain Monte Carlo methods?
→ They are in the list of "Top 10 Algorithms of 20th Century"
→ They allow you to make inference with Bayesian Networks
→ They are used everywhere in Machine Learning and Statistics
Markov Chain Monte Carlo methods are a class of algorithms used to sample from complicated distributions. Typically, this is the case of posterior distributions in Bayesian Networks (Belief Networks).
These slides cover the following topics.
→ Motivation and Practical Examples (Bayesian Networks)
→ Basic Principles of MCMC
→ Gibbs Sampling
→ Metropolis–Hastings
→ Hamiltonian Monte Carlo
→ Reversible-Jump Markov Chain Monte Carlo
The Metropolis-Hastings algorithm is an MCMC method for obtaining a sequence of samples from a probability distribution when direct sampling is difficult. It constructs a Markov chain that has the desired target distribution as its stationary distribution. At each step, a candidate sample is generated and either accepted, replacing the current state, or rejected, keeping the current state. The acceptance ratio is determined by the ratio of probabilities of the candidate and current states. The algorithm is a generalization of the Metropolis algorithm that allows for non-symmetric proposal distributions. When the chain satisfies ergodicity conditions, the sample distribution converges to the target distribution as the number of samples increases.
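To make the accept/reject step concrete, here is a minimal Python sketch of the random-walk Metropolis variant (symmetric Gaussian proposal, so the proposal ratio cancels); the target density used here is our own illustrative assumption:

```python
import math
import random

def metropolis_hastings(target_pdf, x0=0.0, n_samples=10_000, step=1.0, seed=0):
    """Random-walk Metropolis: target_pdf need only be known up to a constant."""
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(n_samples):
        candidate = x + rng.gauss(0.0, step)           # symmetric proposal
        ratio = target_pdf(candidate) / target_pdf(x)  # acceptance ratio
        if rng.random() < min(1.0, ratio):
            x = candidate                              # accept the candidate
        samples.append(x)                              # a rejection keeps the current state
    return samples

# Example: sample from an unnormalized standard normal density (nonzero everywhere).
samples = metropolis_hastings(lambda x: math.exp(-0.5 * x * x))
```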
The document provides an introduction to Markov Chain Monte Carlo (MCMC) methods. It discusses using MCMC to sample from distributions when direct sampling is difficult. Specifically, it introduces Gibbs sampling and the Metropolis-Hastings algorithm. Gibbs sampling updates variables one at a time based on their conditional distributions. Metropolis-Hastings proposes candidate samples and accepts or rejects them to converge to the target distribution. The document provides examples and outlines the algorithms to construct Markov chains that sample distributions of interest.
This is a short presentation for a 15-minute talk at Bayesian Inference for Stochastic Processes 7, on the SMC^2 algorithm.
http://arxiv.org/abs/1101.1528
Ordinal Regression and Machine Learning: Applications, Methods, Metrics - Francesco Casalegno
What do movie recommender systems, disease progression evaluation, and sovereign credit ranking have in common?
→ ordinal regression sits between classification and regression
→ target values are categorical and discrete, but ordered
→ many challenges to face when training and evaluating models
What will you find in this presentation?
→ real life, clear examples of ordinal regression you see everyday
→ learning to rank: predict user preferences and items relevance
→ best solution methods: naïve, binary decomposition, threshold
→ how to measure performance: understand & choose metrics
This document provides an introduction to Bayesian analysis and Metropolis-Hastings Markov chain Monte Carlo (MCMC). It explains the foundations of Bayesian analysis and how MCMC sampling methods like Metropolis-Hastings can be used to draw samples from posterior distributions that are intractable. The Metropolis-Hastings algorithm works by constructing a Markov chain with the target distribution as its stationary distribution. The document provides an example of using MCMC to perform linear regression in a Bayesian framework.
1) Markov Chain Monte Carlo (MCMC) methods use Markov chains to sample from complex probability distributions and are useful for problems that cannot be solved efficiently using other methods.
2) Common MCMC algorithms include Metropolis-Hastings, which samples from a target distribution using a proposal distribution, and Gibbs sampling, which efficiently samples multidimensional distributions by updating variables sequentially.
3) MCMC methods like simulated annealing can find global maxima of probability distributions and have applications in statistical mechanics, optimization, and Bayesian inference.
The document discusses Markov chains and their relationship to random walks on graphs and electrical networks. Some key points:
- A Markov chain is a process that transitions between a finite set of states based on transition probabilities that depend only on the current state.
- For a strongly connected Markov chain, there exists a unique stationary distribution that the long-term probabilities of the chain converge to, regardless of the starting state.
- Random walks on undirected graphs can be modeled as Markov chains, where the transition probabilities are proportional to edge conductances in an analogous electrical network. The stationary distribution of such a random walk is proportional to vertex degrees or conductances.
- The document discusses various techniques for Markov chain Monte Carlo (MCMC) sampling, including rejection sampling, Metropolis-Hastings, and Gibbs sampling.
- It explains how MCMC can be used for approximate probabilistic inference in complex models by constructing a Markov chain that converges to the target distribution.
- Diagnostics are discussed for checking if the Markov chain has converged, such as visual inspection of trace plots, and Geweke and Gelman-Rubin tests of the within-chain and between-chain variances.
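For readers unfamiliar with the Gelman-Rubin test mentioned above, here is a minimal sketch of the potential scale reduction factor R-hat, assuming m chains of equal length n (our own illustration, not from the document):

```python
import numpy as np

def gelman_rubin(chains: np.ndarray) -> float:
    """chains: array of shape (m, n) holding m chains of n samples each.
    Returns the potential scale reduction factor R-hat."""
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    B = n * chain_means.var(ddof=1)           # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()     # mean within-chain variance
    var_hat = (n - 1) / n * W + B / n         # pooled variance estimate
    return float(np.sqrt(var_hat / W))        # ~1 when the chains have mixed

# Toy example with two independent "chains" from the same distribution:
rng = np.random.default_rng(0)
print(gelman_rubin(rng.normal(size=(2, 1000))))
```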
This document discusses Markov chain Monte Carlo (MCMC) methods. It begins with an outline of the Metropolis-Hastings algorithm, which is a generic MCMC method for obtaining a sequence of random samples from a probability distribution when direct sampling is difficult. The document then provides details on the Metropolis-Hastings algorithm, including its convergence properties. It also discusses the independent Metropolis-Hastings algorithm as a special case and provides an example to illustrate it.
1. Hidden Markov Models (HMMs) are used to model sequential data where the underlying process generating the observable outputs is not visible but assumed to be a Markov process with hidden states.
2. HMMs define transition probabilities between hidden states and emission probabilities of observable outputs for each state.
3. There are three typical problems for HMMs: likelihood computation, decoding the most likely sequence of hidden states, and learning the transition and emission probabilities from data.
Markov Chain Monte Carlo (MCMC) methods use Markov chains to sample from probability distributions for use in Monte Carlo simulations. The Metropolis-Hastings algorithm proposes transitions to new states in the chain and either accepts or rejects those states based on a probability calculation, allowing it to sample from complex, high-dimensional distributions. The Gibbs sampler is a special case of MCMC where each variable is updated conditional on the current values of the other variables, ensuring all proposed moves are accepted. These MCMC methods allow approximating integrals that are difficult to compute directly.
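As a concrete illustration of the Gibbs sampler described above, here is a minimal sketch for a standard bivariate normal with correlation rho, whose full conditionals are known in closed form (the example model is our assumption, not the document's):

```python
import random

def gibbs_bivariate_normal(rho: float, n_samples: int = 5000, seed: int = 0):
    """Gibbs sampler for a standard bivariate normal with correlation rho.
    Each full conditional is N(rho * other, 1 - rho**2), so every move is accepted."""
    rng = random.Random(seed)
    x = y = 0.0
    sd = (1.0 - rho * rho) ** 0.5
    out = []
    for _ in range(n_samples):
        x = rng.gauss(rho * y, sd)  # sample x | y
        y = rng.gauss(rho * x, sd)  # sample y | x
        out.append((x, y))
    return out

samples = gibbs_bivariate_normal(rho=0.8)
```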
Supervised Hidden Markov Chains.
Here, we used the paper by Rabiner as a base for the presentation. Thus, we have the following three problems:
1. How to efficiently compute the probability of an observation sequence given a model.
2. Given an observation sequence, decide to which class it belongs.
3. How to find the model parameters given training data.
The first two follow Rabiner's explanation, but in the third one I used Lagrange multiplier optimization, because Rabiner lacks a clear explanation of how to solve the issue.
These are the slides for my Master's course on Monte Carlo Statistical Methods, given in conjunction with the Monte Carlo Statistical Methods book with George Casella.
This document provides an overview of Markov chain Monte Carlo (MCMC) methods. It begins with motivations for using MCMC, such as computational difficulties that arise in models with latent variables like mixture models. It then discusses likelihood-based and Bayesian approaches, noting limitations of maximum likelihood methods. Conjugate priors are described that allow tractable Bayesian inference for some simple models. However, conjugate priors are not available for more complex models, motivating the use of MCMC methods which can approximate integrals and distributions of interest for more complex models.
This document provides an introduction to hidden Markov models (HMMs). It explains what HMMs are, where they are used, and why they are useful. Key aspects of HMMs covered include the Markov chain process, notation used in HMMs, an example of applying an HMM to temperature data, and the three main problems HMMs are used to solve: scoring observation sequences, finding optimal state sequences, and training a model. The document also outlines the forward, backward, and other algorithms used to efficiently solve these three problems.
International Conference on Monte Carlo techniques
Closing conference of the thematic cycle
Paris, July 5-8, 2016
Campus les Cordeliers
Jere Koskela's slides
This document provides an overview of hidden Markov models (HMMs). It defines HMMs as statistical Markov models that include both observed and hidden states. The key components of an HMM are states (Q), observations (V), initial state probabilities (p), state transition probabilities (A), and emission probabilities (E). HMMs find applications in areas like protein structure prediction, sequence alignment, and gene finding. The Viterbi algorithm is described as a dynamic programming approach for finding the most likely sequence of hidden states in an HMM. Advantages of HMMs include their statistical power and modularity, while disadvantages include assumptions of state independence and potential for overfitting.
This document summarizes a talk given by Pierre E. Jacob on recent developments in unbiased Markov chain Monte Carlo methods. It discusses:
1. The bias inherent in standard MCMC estimators due to the initial distribution not being the target distribution.
2. A method for constructing unbiased estimators using coupled Markov chains, where two chains are run in parallel until they meet, at which point an estimator involving the differences in the chains' values is returned.
3. Conditions under which the coupled chain estimators are unbiased and have finite variance. Examples are given of how to construct coupled versions of common MCMC algorithms like Metropolis-Hastings and Gibbs sampling.
Hidden Markov Model - The Most Probable Path - Lê Hòa
This document provides an overview of hidden Markov models including:
- The components of hidden Markov models including states, transition probabilities, emission probabilities, and observation sequences.
- How the Viterbi algorithm can be used to find the most probable hidden state sequence that explains an observed sequence by calculating likelihoods recursively and backtracking through the model.
- An example application of the Viterbi algorithm to find the most probable hidden weather sequence given observed data from a weather HMM model.
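A compact sketch of the Viterbi recursion and backtracking described above; the toy weather model and its numbers are illustrative assumptions, not taken from the original document:

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most probable hidden-state path for an observation sequence."""
    V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for t in range(1, len(obs)):
        V.append({})
        for s in states:
            # Best predecessor: maximize path probability into state s at time t.
            prob, prev = max(
                (V[t - 1][p][0] * trans_p[p][s] * emit_p[s][obs[t]], p)
                for p in states
            )
            V[t][s] = (prob, prev)
    # Backtrack from the best final state.
    state = max(states, key=lambda s: V[-1][s][0])
    path = [state]
    for t in range(len(obs) - 1, 0, -1):
        state = V[t][state][1]
        path.append(state)
    return list(reversed(path))

# Toy weather HMM (illustrative numbers only).
states = ["Rainy", "Sunny"]
print(viterbi(
    obs=["walk", "shop", "clean"],
    states=states,
    start_p={"Rainy": 0.6, "Sunny": 0.4},
    trans_p={"Rainy": {"Rainy": 0.7, "Sunny": 0.3},
             "Sunny": {"Rainy": 0.4, "Sunny": 0.6}},
    emit_p={"Rainy": {"walk": 0.1, "shop": 0.4, "clean": 0.5},
            "Sunny": {"walk": 0.6, "shop": 0.3, "clean": 0.1}},
))
```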
This document provides an overview of Hidden Markov Models (HMM). HMMs are statistical models used to model systems where an underlying process produces observable outputs. In HMMs, the observations are modeled as a Markov process with hidden states that are not directly observable, but can only be inferred through the observable outputs. The document describes the key components of HMMs including transition probabilities, emission probabilities, and the initial distribution. Examples of applications like speech recognition and bioinformatics are provided. Finally, common algorithms for HMMs like Forward, Baum-Welch, Backward, and Viterbi are listed for performing inference on the hidden states given observed sequences.
The document discusses Markov chain Monte Carlo (MCMC) methods, which use Markov chains to generate dependent samples from probability distributions that are difficult to directly sample from. It introduces Gibbs sampling and the Metropolis-Hastings algorithm as two common MCMC techniques. Gibbs sampling works by iteratively sampling each parameter from its conditional distribution given current values of other parameters. Metropolis-Hastings also iteratively proposes new parameter values but only accepts them probabilistically, based on the target distribution. Both techniques generate Markov chains that can be used to approximate integrals and obtain quantities of interest from complex distributions.
Spacey random walks and higher-order data analysis - David Gleich
My talk at TMA 2016 (The workshop on Tensors, Matrices, and their Applications) on the relationship between a spacey random walk process and tensor eigenvectors
The document provides an overview of quantum computing, including its history, data representation using qubits, quantum gates and operations, and Shor's algorithm for integer factorization. Shor's algorithm uses quantum parallelism and the quantum Fourier transform to find the period of a function, from which the factors of a number can be determined. While quantum computing holds promise for certain applications, classical computers will still be needed and future computers may be a hybrid of classical and quantum components.
Bag of Pursuits and Neural Gas for Improved Sparse Coding - Karlos Svoboda
This document proposes a new method called Bag of Pursuits and Neural Gas for learning overcomplete dictionaries from sparse data representations. It improves upon existing methods like MOD and K-SVD by employing a "bag of pursuits" approach that considers multiple sparse coding approximations for each data point, rather than just the optimal one. This allows the use of a generalized Neural Gas algorithm to learn the dictionary in a soft-competitive manner, leading to better performance even with less sparse representations. The bag of pursuits extends orthogonal matching pursuit to retrieve not just the single best sparse code but an approximate set of the top sparse codes for each point.
International Journal of Mathematics and Statistics Invention (IJMSI) is an international journal intended for professionals and researchers in all fields of computer science and electronics. IJMSI publishes research articles and reviews within the whole field Mathematics and Statistics, new teaching methods, assessment, validation and the impact of new technologies and it will continue to provide information on the latest trends and developments in this ever-expanding subject. The publications of papers are selected through double peer reviewed to ensure originality, relevance, and readability. The articles published in our journal can be accessed online.
The document discusses determining matrices for single-delay autonomous linear neutral control systems. It derives expressions for the determining matrices through a series of lemmas, theorems, and corollaries. The determining matrices constitute an optimal way to determine controllability and compactness of control systems. The paper establishes valid expressions for determining matrices of these types of systems and explores their relationships to partial derivatives of the systems.
TMPA-2015: Implementing the MetaVCG Approach in the C-light System - Iosif Itkin
Alexei Promsky, Dmitry Kondtratyev, A.P. Ershov Institute of Informatics Systems, Novosibirsk
12 - 14 November 2015
Tools and Methods of Program Analysis in St. Petersburg
The document contains a tutorial on discrete mathematics concepts including:
1) Expressing system specifications using propositions and logical connectives.
2) Showing that logical expressions are tautologies.
3) Constructing truth tables for compound propositions.
4) Building combinatorial circuits to represent logical expressions.
5) Solving problems involving propositional logic, quantifiers, and validity of arguments.
The document discusses pairwise sequence alignment and dynamic programming algorithms for computing optimal alignments. It covers:
- Assumptions of sequence evolution including substitutions, insertions, deletions, duplications, and domain reuse.
- Using sequence comparison to discover functional and evolutionary relationships by identifying similar sequences and orthologs with similar functions.
- The dot plot method for discovering sequence similarity by plotting sequences against each other in a matrix and identifying diagonals of matches.
- Dynamic programming algorithms that compute the optimal alignment score in quadratic time and linear space by breaking the problem into overlapping subproblems.
- Extensions of the basic algorithm to handle affine gap penalties by introducing three matrices to track alignments ending in matches, gaps
Linear regression [Theory and Application (In physics point of view) using py... - ANIRBANMAJUMDAR18
Machine-learning models are behind many recent technological advances, including high-accuracy text translation and self-driving cars. They are also increasingly used by researchers to help solve physics problems, such as finding new phases of matter, detecting interesting outliers in data from high-energy physics experiments, and finding astronomical objects known as gravitational lenses in maps of the night sky. The rudimentary algorithm that every machine learning enthusiast starts with is linear regression. In statistics, linear regression is a linear approach to modelling the relationship between a scalar response (or dependent variable) and one or more explanatory variables (or independent variables). Linear regression analysis (least squares) is used in a physics lab to prepare computer-aided reports and to fit data. In this article, the method is applied to the experiment 'DETERMINATION OF DIELECTRIC CONSTANT OF NON-CONDUCTING LIQUIDS'. The entire computation is done in the Python 3.6 programming language.
A simple method to find a robust output feedback controller by random search ... - ISA Interchange
A random search algorithm is proposed to find a robust output feedback controller for uncertain linear systems. The algorithm generates random feedback gain matrices and evaluates the closed-loop pole locations. If the poles lie in a specified stable region, the gain matrix is a solution. The probability of finding a solution and the number of trials needed can be estimated. Simulation results demonstrate the effectiveness of using this random approach. The method provides a simple way to design output feedback controllers for systems where traditional techniques are intractable, such as problems with nonconvex constraints.
The document provides an introduction to discrete structures and mathematical reasoning. It discusses key concepts like propositions, logical operators, quantification, and proof techniques. Propositions can be combined using logical operators like negation, conjunction, disjunction, etc. Quantifiers like universal and existential are used to represent statements about all or some elements. Mathematical reasoning involves using axioms, rules of inference, and deductive proofs to establish theorems from given conditions.
This document presents a dissertation on improving the baby step giant step algorithm for solving the elliptic curve discrete logarithmic problem. It begins with an overview of cryptography, symmetric and asymmetric encryption, and elliptic curve cryptography. It then discusses the elliptic curve discrete logarithmic problem and surveys existing literature. The proposed approach improves the baby step giant step algorithm by using a smaller baby step set size. Experimental results on two examples show that the proposed approach has faster runtime than the previous method. A complexity analysis is also presented.
System overflow blocking transients for queues with batch arrivals using a fa... - Cemal Ardil
This document summarizes a research paper that analyzes the transient behavior of the overflow probability in a queuing system with fixed-size batch arrivals. It introduces a set of polynomials, defined piecewise over the ranges 0 ≤ k ≤ B and k ≥ B + 1, that generalize the Chebyshev polynomials and can be used to assess the transient behavior. In the special case B = 1 of equation (9) of the paper, after the substitution x → 2x, one obtains the generating function of the Chebyshev polynomials of the second kind.
This document provides examples of how linear algebra is useful across many domains:
1) Linear algebra can be used to represent and analyze networks and graphs through adjacency matrices.
2) Differential equations describing complex systems like bridges and molecules can be understood through matrix representations and eigenvalues.
3) Quantum computing uses linear algebra operations like matrix multiplication to represent computations on quantum bits.
4) Many other areas like coding/encryption, data compression, solving systems of equations, computer graphics, statistics, games, and neural networks rely on concepts from linear algebra.
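As a tiny illustration of point 1 (our own example, not from the document): powers of an adjacency matrix count walks between vertices, turning a graph question into linear algebra.

```python
import numpy as np

# Adjacency matrix of a triangle graph on vertices 0, 1, 2.
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]])

# (A^k)[i, j] counts walks of length k from vertex i to vertex j.
print(np.linalg.matrix_power(A, 3))  # entry [0, 0] = 2 closed walks of length 3
```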
The document discusses developing quantitative structure-activity relationship (QSAR) models to predict the biological responses of nanomaterials. It describes using descriptors of pristine and weathered nanomaterials, as well as experimental parameters, to develop linear regression models between descriptors and responses. Partial least squares regression is used to handle correlations between descriptors. The data is also analyzed using k-means clustering to identify separate descriptor clusters, and QSAR models are developed for each cluster to improve predictions. The resulting models could then be used to predict responses of emerging nanomaterials based on their similarity to existing clusters.
Spacey random walks from Householder Symposium XX 2017 - Austin Benson
1. Spacey random walks are a stochastic process that provides a probabilistic interpretation of tensor eigenvectors. The spacey random walk forgets its previous state but guesses it randomly, resulting in a limiting distribution that is a tensor eigenvector.
2. Higher-order Markov chains can be modeled as spacey random walks, which converge to tensor eigenvectors. This provides an algorithm for computing eigenvectors via numerical integration rather than algebraic methods.
3. Spacey random walks generalize Pólya urn processes and have applications in transportation modeling, clustering multi-relational data, and ranking. Learning the transition tensor from taxi trajectory data supports the spacey random walk hypothesis.
Point Placement Algorithms: An Experimental Study - CSCJournals
The point location problem is to determine the position of n distinct points on a line, up to translation and reflection by the fewest possible pairwise (adversarial) distance queries. In this paper we report on an experimental study of a number of deterministic point placement algorithms and an incremental randomized algorithm, with the goal of obtaining a greater insight into the practical utility of these algorithms, particularly of the randomized one.
International Conference on Monte Carlo techniques
Closing conference of the thematic cycle
Paris, July 5-8, 2016
Campus les Cordeliers
Slides of Richard Everitt's presentation
Root locus description in a lucid way by ME IITB - AmreshAman
The document discusses root locus analysis, which is a graphical method to analyze how the poles of a closed-loop system change with variations in system parameters like gain. It provides rules for plotting the root locus, including that branches originate from open-loop poles and terminate at zeros or infinity. The number of branches ending at infinity equals the number of open-loop poles minus zeros. Asymptotes indicate how branches approach infinity. The root locus helps understand how stability changes with varying parameters without recalculating closed-loop poles each time.
A quantum-inspired optimization heuristic for the multiple sequence alignment... - Konstantinos Giannakis
The document presents a quantum-inspired heuristic for solving the multiple sequence alignment problem in bioinformatics. It models the sequence similarity as a traveling salesman problem instance using a normalized similarity matrix. The method applies a quantum-inspired generalized variable neighborhood search metaheuristic to approximate the shortest Hamiltonian path and generate an initial alignment. Evaluation on real biological sequences shows it outperforms progressive methods, producing alignments with good sum-of-pairs scores, especially for large sequence sets.
Computing probabilistic queries in the presence of uncertainty via probabilis... - Konstantinos Giannakis
1) The document proposes using probabilistic automata to compute probabilistic queries on RDF-like data structures that contain uncertainty. It shows how to assign a probabilistic automaton corresponding to a particular query.
2) An example query is provided that finds all nodes influenced by a starting node with a probability above a threshold. The probabilistic automata calculations allow filtering results by probability.
3) Benefits cited include leveraging well-studied probabilistic automata results and efficient handling of uncertainty. Future work could expand the models to infinite data and provide more empirical results.
Initialization methods for the TSP with time windows using variable neighborh... - Konstantinos Giannakis
This document presents an initialization method for solving the travelling salesman problem with time windows (TSP-TW) using variable neighborhood search (VNS). The authors implement a VNS metaheuristic that uses both random and sorted initial solutions and performs local search. Their results show that for some problem instances, a sorted initial solution does not find a feasible solution as often as a random initial solution. The authors propose using alternative random initialization procedures with different probability distributions for future work.
The document discusses querying Linked Data using Büchi automata. It introduces Linked Data and SPARQL queries, and notes the infinite nature of social networking applications and Linked Open Numbers. It then discusses using Büchi automata to verify webs of Linked Data by modeling their infinite behavior. The authors propose representing SPARQL queries on infinite webs of Linked Data using Büchi automata with infinite input to check for eventual computability.
The document describes a model of mitochondrial fusion using membrane automata. It investigates the biological function of mitochondrial fusion and models it using membrane automata and brane calculus. The model combines P automata and BioAmbients calculus to represent the hierarchical membrane structure and biomolecular rules governing mitochondrial fusion. It translates the biological model of fusion expressed in BioAmbient calculus into rewriting rules of P automata to simulate the process in a more visual and well-established framework.
The document summarizes an experiment evaluating a simulated listening typewriter for composing letters. Eighteen participants, including 10 novices and 8 professionals, used different versions of the typewriter to compose letters. The versions varied vocabulary size and allowed isolated or continuous speech. Results showed that isolated word speech with a large vocabulary produced letters of similar quality to traditional dictation and handwriting. However, some participants found the slow speed of the simulated system frustrating. The conclusion discusses limitations like the immaturity of speech recognition technology at the time and opportunities for further evaluating the system's potential benefits for disabled users.
The document discusses developing a Space Invaders video game. Space Invaders was a classic 1978 arcade game that inspired many subsequent games. It involved destroying rows of aliens that moved horizontally across the screen in increasingly difficult waves. The goal is to recreate Space Invaders using Microsoft XNA Framework and C# with game states like logo, menu, play, pause, win, and lose. The game will include levels that scale in difficulty.
User Requirements for Gamifying Sports Software - in the 3rd International Workshop on Games and Software Engineering (GAS 2013) at ICSE 2013, May 18, 2013, San Francisco, California, U.S.A.
Web Mining to Create Semantic Content: A Case Study for the Environment - Konstantinos Giannakis
This document discusses using web mining techniques to create semantic content from environmental web data sources. It introduces concepts like web mining, the semantic web, and ecoinformatics. It describes related projects that use semantic information and focuses on mining the Encyclopedia of Earth. The proposed concept involves hierarchical clustering of mined data. Future work includes demos and integrating with semantic frameworks. The authors believe it is necessary to create semantic environmental content to assist in areas like preventing fires and pollution.
Revision of the Proteaceae Macrofossil Record from Patagonia, Argentina - CynthiaGonzlez48
Proteaceae are restricted to the Southern Hemisphere, and of the seven tribes of the subfamily Grevilleoideae, only three (Macadamieae, Oriteae, and Embothrieae) have living members in Argentina. Megafossil genera of Proteaceae recorded from Patagonia include Lomatia, Embothrium, Orites, and Roupala. In this report, we evaluate and revise fossil Argentine Proteaceae on the basis of type material and new specimens. The new collections come from the Tufolitas Laguna del Hunco (early Eocene, Chubut Province), the Ventana (middle Eocene, Río Negro Province), and the Río Ñirihuau (late Oligocene-early Miocene, Río Negro Province) formations, Patagonia, Argentina. We confirm the presence of Lomatia preferruginea Berry, L. occidentalis (Berry) Frenguelli, L. patagonica Frenguelli, Roupala patagonica Durango de Cabrera et Romero, and Orites bivascularis Romero, Dibbern et Gandolfo. Fossils assigned to Embothrium precoccineum Berry and E. pregrandiflorum Berry are doubtful, and new material is necessary to confirm the presence of this genus in the fossil record of Patagonia. A putative new fossil species of Proteaceae is presented as Proteaceae gen. et sp. indet. Fossil Proteaceae are compared with modern genera, and an identification key for the fossil leaf species is presented. Doubtful historical records of Proteaceae fossils for the Antarctic Peninsula region and Patagonia are also discussed. Based on this revision, the three tribes of Proteaceae found today in Argentina were already present in Patagonia by the early Eocene, where they probably arrived via the Australia-Antarctica-South America connection.
Talk at INFN CCR Workshop on "Quantum Computing Simulation on FPGA" - Mirko Mariotti
Since 2017 we started R&D on co-designing (HW/SW) computational systems, targeting mainly FPGAs. We developed over the years several solutions for computational acceleration on FPGAs, including, but not limited to, the creation of a full framework for building FPGA-based modular architectures, namely the BondMachine project.
The problems addressed by these solutions range from the standard application to the complex neural network inference with a reduced precision. We can analyze the performance of the developed solutions in terms of speedup, latency, and power consumption.
In this talk, we will present the activities of the last year, focusing on how we are using the developed framework and the acquired know-how to create an FPGA-based quantum computer simulator: bmqsim.
bmqsim is a simulator for quantum circuits running on FPGAs. It can produce several target backends, some based on the BondMachine framework, others based on different High-Level Synthesis tools.
Decipher the Magic of Quantum Entanglement.pdf - SaikatBasu37
Welcome to the mysterious world of quantum physics!
Today, we'll explore quantum entanglement—a mind-bending phenomenon that connects particles across vast distances.
Biological application of spectroscopy.pptx - RahulRajai
Spectroscopy in biological studies involves using light or other forms of electromagnetic radiation to analyze the structure, function, and interactions of biological molecules. It helps researchers understand how molecules like proteins, nucleic acids, and lipids behave and interact within cells.
Biological applications of spectroscopy:
1. Studying biological molecules:
- Proteins: spectroscopy can reveal protein structure, including folding patterns and interactions with other molecules.
- Nucleic acids: it helps analyze the structure of DNA and RNA, including their base sequences and interactions.
- Lipids: spectroscopy can be used to study lipid interactions within cell membranes and their role in cellular processes.
2. Metabolic pathways: spectroscopy can monitor changes in metabolic processes and cellular signaling pathways, providing insights into how cells function.
Analytical techniques in dry chemistry for heavy metal analysis and recent ad... - Archana Verma
Heavy metals is often used as a group name for metals and semimetals (metalloids) that have been associated with contamination and potential toxicity (Duffus, 2001). Heavy metals inhibit various enzymes and compete with various essential cations (Tchounwou et al., 2012). They may cause toxic effects (some of them at very low content levels) if they occur excessively, which makes assessing the extent of their contamination in soil very important. Analytical techniques of dry chemistry are non-destructive and rapid, so a huge number of soil samples can be analysed to determine the extent of heavy metal pollution, something the conventional, tedious way of analysis does not provide efficiently. Compared with conventional analytical methods, Vis-NIR techniques provide spectrally rich and spatially continuous information on soil physical and chemical contamination. Among the calibration methods, a number of multivariate regression techniques for assessing heavy metal contamination have been employed effectively by many studies (Costa et al., 2020). X-ray fluorescence spectrometry has several advantages compared to other multi-elemental techniques such as inductively coupled plasma mass spectrometry (ICP-MS): limited preparation is required for solid samples, and less hazardous waste is produced. Field-portable (FP) XRF retains these advantages while additionally providing data on-site, reducing costs associated with sample transport and storage (Pearson et al., 2013). Laser-induced breakdown spectroscopy (LIBS) is a kind of atomic emission spectroscopy: a laser pulse is focused precisely onto the surface of a target sample, ablating a certain amount of sample to create a plasma (Vincenzo Palleschi, 2020). After obtaining the LIBS data of the tested sample, qualitative and quantitative analysis is conducted. Despite being rapid and non-destructive, these advanced techniques still have several limitations, and more effective and accurate quantification models are needed. To overcome these problems, proper calibration models should be developed for better quantification of spectra in the near future.
This presentation provides a concise overview of the human immune system's fundamental response to viral infections. It covers both innate and adaptive immune mechanisms, detailing the roles of physical barriers, interferons, natural killer (NK) cells, antigen-presenting cells (APCs), B cells, and T cells in combating viruses. Designed for students, educators, and anyone interested in immunology, this slide deck simplifies complex biological processes and highlights key steps in viral detection, immune activation, and memory formation. Ideal for classroom use or self-learning.
Mode Of Dispersal Of Viral Disease On Plants.pptx - IAAS
The document titled "Mode of Transmission of Viral Disease" explains how plant viruses, which are microscopic pathogens that rely on host cells for replication, spread and cause significant agricultural losses. These viruses are transmitted through two main routes: horizontal transmission (from plant to plant within the same generation) and vertical transmission (from parent to offspring via seed or pollen). The mechanisms of transmission are broadly divided into non-insect and insect-based methods. Non-insect transmission includes mechanical or sap transmission, vegetative propagation (e.g., through tubers or grafts), seed and pollen transmission, and the role of organisms like fungi, nematodes, and parasitic plants like dodder. Insect transmission is the most significant natural mode, with vectors such as aphids, leafhoppers, thrips, and whiteflies introducing viruses directly into plant tissues through feeding. Aphids alone are responsible for transmitting about 60% of known plant viruses. Each vector has specific transmission characteristics, such as the use of stylet feeding in aphids and exosomes in leafhoppers. The document also highlights important examples like Tobacco Mosaic Virus, Cucumber Mosaic Virus, and Tomato Yellow Leaf Curl Virus. Overall, the document provides a detailed understanding of how plant viruses spread, the role of vectors, and the implications for crop health and disease management.
Infinite and Standard Computation with Unconventional and Quantum Methods Using Automata
1. infinite and standard computation with unconventional and quantum methods using automata
Dissertation Defense
Konstantinos Giannakis
[email protected]
July 14, 2016
Department of Informatics, Ionian University
Supervisor: Dr. Theodore Andronikos
Advisory Committee: Dr. Spyros Sioutas and Dr. Michail Stefanidakis
2. the dissertation’s story
∙ At first, we studied infinite computation with a focus on automata.
∙ We then proceeded to the probabilistic versions of the above models.
∙ Probabilistic computation led us to the search for alternative models.
∙ Unconventional means of computing, mainly quantum automata.
3. motivation and contribution
∙ Moore’s law.
∙ The lack of quantum variants for specific classes of automata (e.g., ω-automata).
∙ The power of specific quantum algorithms:
∙ most notably the algorithms of Deutsch, Shor, and Grover.
∙ The traces of infinite behaviour in nature and data volumes.
4. dissertation’s structure
⊙ Standard (or Classical) and Probabilistic Computing
∙ Introductory points
⊙ Infinite Computation
∙ Associating automata variants to queries.
⊙ Membrane Computing
∙ Novel definitions of membrane automata.
∙ Use of actual models to implement our methodology.
⊙ Quantum Computing
∙ Proposal of the periodic quantum automata.
∙ Association of these automata with strategies over quantum games.
6. a little history
∙ Initiated before WWII, mainly in the ’30s.
∙ Alan Turing is regarded as the pioneering figure.
∙ Other great scientists of the same period: Kurt Gödel, Alonzo Church, Stephen Kleene, and many more.
∙ Automata and their theory were developed in the early ’40s and ’50s.
7. automata
∙ Branch of the Theory of Computation dealing with specialized, abstract computational models.
∙ Finite-state automata.
∙ Finite memory.
∙ Basic properties: states, transitions, and the input/output.
8. dfa and nfa
∙ For a given symbol there is only one state that can be visited.
∙ (Q, Σ, δ : Q × Σ → Q, q0, F)
(Figure: a two-state DFA with states q0 and q1 over the alphabet {0, 1}.)
∙ For a given symbol there are zero, one, or more states that can be visited, even for the same symbol.
∙ (Q, Σ, δ : Q × (Σ ∪ {ϵ}) → P(Q), q0, F)
(Figure: a three-state NFA with states q0, q1, and q2 over the alphabet {0, 1}.)
∙ For every NFA there is an equivalent DFA.
∙ The translation of an NFA to a DFA can cause an exponential growth in the number of states!
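Since the slide's punchline is exactly this translation, a minimal Python sketch of the standard subset construction may help; the dictionary-based encoding is our own assumption, and ε-transitions are omitted for brevity:

```python
from collections import deque

def nfa_to_dfa(alphabet, delta, q0, finals):
    """Subset construction: delta maps (state, symbol) -> set of NFA states.
    Returns the reachable DFA states (frozensets), transitions, start, finals."""
    start = frozenset({q0})
    dfa_delta, seen, queue = {}, {start}, deque([start])
    while queue:
        S = queue.popleft()
        for a in alphabet:
            # The DFA successor is the union of NFA successors of all states in S.
            T = frozenset(q for s in S for q in delta.get((s, a), set()))
            dfa_delta[(S, a)] = T
            if T not in seen:
                seen.add(T)
                queue.append(T)
    dfa_finals = {S for S in seen if S & finals}
    return seen, dfa_delta, start, dfa_finals

# In the worst case |seen| reaches 2^|Q| -- the exponential blow-up above.
```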
10. logic and probabilities
Γένεση 18:32
Καί είπεν ο Αβραάμ, Ας μή παροξυνθή ο Κύριός μου, εάν λαλήσω έτι
άπαξ; εάν ευρεθώσιν εκεί δέκα; καί είπε, Δέν θέλω απολέσει αυτήν
χάριν τών δέκα.
Genesis 18:32
Then he said, “Oh may the Lord not be angry, and I shall speak only
this once; suppose ten are found there?” And He said, ”I will not
destroy it on account of the ten.”
Subjective vs Objective aspect of probability
11. probabilistic computation
* Introducing probabilities attached to the system’s input.
* Result depends not only on the input, but also on coin flips.
P(w) := (∑_{p∈Acc(w)} ∏_{i=1}^{n} (p_i, w_i, p_{i+1})) / (∑_{p∈Run(w)} ∏_{i=1}^{n} (p_i, w_i, p_{i+1}))   (1)
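Equation (1) above aggregates path weights; for a probabilistic finite automaton, the acceptance probability of a finite word can be computed with one stochastic matrix per symbol. A minimal numpy sketch (our own notation, not from the slides):

```python
import numpy as np

def acceptance_probability(initial, matrices, final, word):
    """PFA acceptance probability: initial distribution (row vector),
    one stochastic matrix per symbol, and a 0/1 vector of accepting states."""
    v = np.asarray(initial, dtype=float)
    for symbol in word:
        v = v @ matrices[symbol]   # push the state distribution through one step
    return float(v @ final)

# Two-state example: on 'a', move to either state with probability 0.5.
M = {"a": np.array([[0.5, 0.5],
                    [0.5, 0.5]])}
print(acceptance_probability([1.0, 0.0], M, np.array([0.0, 1.0]), "aa"))  # 0.5
```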
12. pfa (probabilistic finite automata)
∙ Introduced by Rabin (1963)
∙ Similar to simple Markov chains
14. ω-computability receiving infinite inputs
∙ ω-automata.
∙ Infinite input
∙ Acceptance conditions
∙ E.g. Büchi automata.
∙ Büchi acceptance condition.
∙ They accept the runs ρ for which In(ρ) ∩ F ≠ ∅ (F ⊆ Q).
∙ Extension of the simple automata with infinite inputs
15. ω-computability
∙ An ω-automaton is a tuple (Q, Σ, δ, q0, Acc), where Q is a finite set of states, Σ is the alphabet, δ : Q × Σ −→ P(Q) is the transition function (P(Q) is the powerset of Q), q0 is the initial state, and Acc determines the acceptance condition.
∙ The acceptance condition Acc declares how the infinite runs are accepted by the automaton.
∙ The class of languages recognized by (almost) all the above machines is that of the ω-regular languages.
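Although the slides do not give an algorithm, Büchi acceptance is mechanically checkable on ultimately periodic words u·vω. A minimal Python sketch for the deterministic case (representation and naming are our assumptions):

```python
def buchi_accepts(delta, q0, finals, u, v):
    """Decide acceptance of the lasso word u . v^omega by a deterministic
    Buchi automaton; delta maps (state, symbol) -> state."""
    q = q0
    for a in u:                       # read the finite prefix u
        q = delta[(q, a)]
    anchors = {}                      # state at the start of each v-block
    blocks = []                       # states visited inside each v-block
    while q not in anchors:
        anchors[q] = len(blocks)
        block = []
        for a in v:
            q = delta[(q, a)]
            block.append(q)
        blocks.append(block)
    # The cycle spans the blocks since the first occurrence of this anchor;
    # exactly those states are visited infinitely often.
    infinitely_often = {s for block in blocks[anchors[q]:] for s in block}
    return bool(infinitely_often & finals)

# Automaton accepting inputs with infinitely many a's (state 1 = "just read a").
delta = {(0, "a"): 1, (0, "b"): 0, (1, "a"): 1, (1, "b"): 0}
print(buchi_accepts(delta, 0, {1}, u="b", v="ab"))  # True
print(buchi_accepts(delta, 0, {1}, u="a", v="b"))   # False
```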
16. automata and queries
∙ There are queries that can be expressed neither by standard SPARQL nor by its extension with regular path queries.
∙ These queries assume that the Web of Linked Data can be infinite, and therefore classic queries expressed in terms of SPARQL need to be revisited.
∙ We propose a novel method associating ω-automata to each query on this infinite structure.
∙ We call these queries ω-regular path queries.
∙ They are of the form a|(ab)ω.
17. infinite horizon
∙ Two SPARQL query semantics
∙ Full-web semantics.
∙ the scope of each query is the full set of LD on the Web.
∙ Reachability-based semantics.
∙ restricted scope of SPARQL queries to data reachable through
specific links, using a given initial set of URIs.
18. queries on infinite graphs of graphs
∙ We assume an infinite Web of LD.
∙ Reachability is guaranteed (remember, we refer to Linked Data).
∙ Acceptance is defined by a set of final states.
∙ Infinite data/queries ⇒ infinite computation.
19. the process for the association
Figure: A five-state automaton (s0–s4) that accepts the ω-regular
language described by the ω-regular expression u(wou|(ou)?)∗(ou)ω.
Symbol Explanation
u Encoding of URI
o Symbol #, separator of URIs
w Symbol read when the query returns a result
20. ω-regular path queries
Definition
ω-regular path queries are particular path queries where the regular
expression is replaced by an ω-regular expression. These queries concern
cases that include infiniteness in the expected results.
Figure: An LD graph of Mitochondrion1–3, connected by isNeighbour
edges, with performs and produces edges to the Fusion and Fission
processes.
21. ω-regular path queries (cont.)
∙ We assume that the life-cycle of a mitochondrial population undergoes
infinitely many fusion and fission processes.
∙ It fits the notion of eventual computability.
∙ The information from the previous Figure can be represented in the
well-known triple form of (subject, predicate, object) that follows:
(Mitochondrion1, performs, Fusion)
(Mitochondrion1, performs, Fission)
(Mitochondrion2, performs, Fusion)
(Mitochondrion2, performs, Fission)
(Mitochondrion3, performs, Fusion)
(Mitochondrion3, performs, Fission)
(Mitochondrion1, isNeighbour, Mitochondrion2)
(Mitochondrion2, isNeighbour, Mitochondrion3)
…
∙ A query could ask for the mitochondria that perform the fusion
process infinitely often.
22. ω-regular path queries (cont.)
∙ Use of ω-regular expressions that correspond to the regular
expressions used in regular path queries.
∙ These expressions can be translated into equivalent Büchi
automata.
∙ E.g. the query asking for the mitochondria that perform the fusion
process infinitely often could have the following form:
SELECT *
WHERE
{
< mitochondrion > [(performs)|(isNeighbour∗/performs)]ω < mitochondrion > .
}
23. the corresponding automaton
Figure: A Büchi automaton A with states s0, s1 describing the ω-regular
expression underneath the simplest example of mitochondrial fusion
([(a)|(b∗a)]ω).
Symbol Explanation
a performs
b isNeighbour
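One way to evaluate such a query, sketched below: take the product of the LD graph with the Büchi automaton A and look for a reachable accepting product node that lies on a cycle. Everything here is a toy assumption: the graph abstracts each (M, performs, Fusion) triple into an a-labelled self-loop, and A is the deterministic automaton of the figure.

```python
def accepting_lasso_exists(graph, delta, q0, F, start_nodes):
    """graph: node -> list of (label, successor) edges (the LD triples).
    delta: deterministic Buchi transitions, (state, label) -> state.
    True iff some infinite labelled path from a start node is accepted."""
    def successors(pair):
        node, q = pair
        for label, nxt in graph.get(node, []):
            if (q, label) in delta:
                yield (nxt, delta[(q, label)])

    def reachable(sources):
        seen, todo = set(sources), list(sources)
        while todo:
            for s in successors(todo.pop()):
                if s not in seen:
                    seen.add(s)
                    todo.append(s)
        return seen

    reach = reachable([(n, q0) for n in start_nodes])
    accepting = [p for p in reach if p[1] in F]
    # An accepting product node on a cycle is reachable from its own successors.
    return any(p in reachable(list(successors(p))) for p in accepting)

# Toy graph: a = performs (abstracted to a self-loop), b = isNeighbour.
graph = {"Mitochondrion1": [("a", "Mitochondrion1"), ("b", "Mitochondrion2")],
         "Mitochondrion2": [("a", "Mitochondrion2"), ("b", "Mitochondrion1")]}
delta = {("s0", "a"): "s0", ("s0", "b"): "s1",
         ("s1", "a"): "s0", ("s1", "b"): "s1"}
print(accepting_lasso_exists(graph, delta, "s0", {"s0"}, ["Mitochondrion1"]))
```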
24. our proposed definition
Definition
The associated automaton is an ω-automaton under the Büchi acceptance
condition. It is a tuple (QS ∪ O, Σp, δ, q0, F, Acc), where:
1. QS∪O = S ∪ O is the finite set of states, where S is the set of subjects
and O the set of objects of the triples,
2. Σp is a finite set of input symbols (alphabet) where each symbol is a
predicate from the Linked Data graph,
3. δ : QS ∪ O × Σp −→ QS ∪ O is the transition function,
4. q0 ∈ QS ∪ O is the initial state of the automaton,
5. F ⊆ QS ∪ O is the set of accepting states, and
6. Acc defines the acceptance condition; in the case of the associated
automaton we propose, it is the Büchi acceptance condition.
25. another example query
∙ An example query of protein folding/misfolding which
represents the amyloid aggregation could have the following
form:
SELECT *
WHERE
{
< protein > (folding|misfolding)∗|[translocates|mitochondrion]ω
}
26. our contribution
∙ Combination of particular queries with automata on infinite
inputs.
∙ Proposal of a novel methodology regarding queries and their
association with ω-automata.
∙ We demonstrate some characteristic queries on simple
instances.
∙ We proposed a methodology on how specific queries of regular
paths in SPARQL could be translated into inputs for ω-automata.
∙ Interpretation of the eventual computability of these queries
using the well-known Büchi acceptance condition.
28. membrane computing
∙ Known as P systems with several proposed variants.
∙ Evolution depicted through rewriting rules on multisets of the
form u→v
∙ imitating natural chemical reactions.
∙ u, v are multisets of objects.
∙ The hierarchical status of membranes evolves by constantly
creating and destroying membranes, by membrane division etc.
∙ Types of communication rules:
∙ symport rules (one-way passing through a membrane)
∙ antiport rules (two-way passing through a membrane)
29. examples
◦ Membranes create hierarchical structures.
◦ Each membrane contains objects and rules.
◦ Represented either by a Venn diagram or a tree.
Figure: (a) Hierarchically nested membranes; (b) with simple objects
and rules.
30. p systems evolution and computation
∙ Via purely non-deterministic, parallel rules.
∙ Characteristics of membrane systems: the membrane structure,
multisets of objects, and rules.
∙ They can be represented by a string of labelled matching
parentheses.
∙ Use of rules =⇒ transitions among configurations.
∙ A sequence of transitions is interpreted as computation.
∙ Accepted computations are those which halt and a successful
computation is associated with a result.
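A minimal sketch of one evolution step in a single region, assuming a simplified maximally parallel semantics (rules chosen greedily in random order; a real P system explores all maximal choices nondeterministically). The rules and the multiset are hypothetical:

```python
import random
from collections import Counter

def evolution_step(contents, rules):
    """Apply rules u -> v to the multiset `contents` (a Counter) in a
    maximally parallel way: keep firing applicable rules until none fits.
    rules: list of (u, v) pairs, each u, v a Counter of objects.
    Products are withheld until the end of the step, as in P systems."""
    produced = Counter()
    fired = True
    while fired:
        fired = False
        for u, v in random.sample(rules, len(rules)):
            if all(contents[o] >= n for o, n in u.items()):
                contents -= u          # consume the left-hand side
                produced += v          # right-hand side appears next step
                fired = True
    return contents + produced

# Hypothetical region: rules ab -> c and c -> aa over the multiset a^3 b^2.
rules = [(Counter("ab"), Counter("c")), (Counter("c"), Counter("aa"))]
print(evolution_step(Counter("aaabb"), rules))   # Counter({'c': 2, 'a': 1})
```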
31. p automata
∙ Variants of P systems with automata-like behaviour.
∙ Computation starts from an initial configuration.
∙ Acceptance is defined by a set of final states.
∙ They define a computable set of configurations satisfying certain
conditions.
∙ The set of accepted input sequences forms the accepted
language.
∙ A configuration of a P automaton with n membranes is defined
as an n-tuple of the multisets of objects in each membrane.
∙ A run of a P automaton is defined as a process of altering its
configurations in each step.
32. rules used in membrane computing
Figure: Example rules: (a) communication rules such as (a,in) and (b,in);
(b) rewriting rules such as c→a and c→bb together with (a,out), applied
to multisets like cca, caa, cab, and bbbba; (c) the exo membrane-handling
rule.
33. a case study
∙ A biological model of mitochondrial fusion by Alexiou et al.,
expressed in BioAmbient calculus.
∙ Cell is divided into hierarchically nested ambients.
∙ Three proteins (Mfn1, Mfn2, and OPA1) are required for successful
fusion.
∙ Fusion can occur:
∙ by the merging of two membrane-bounded segments.
∙ when segments may enter or exit one another.
∙ Synchronized capabilities that can alter ambients’ state are:
entry, exit, or merge of other compartments.
34. our approach
∙ Every ambient ≡ membrane subsystem.
∙ Hierarchical structure of ambients ≡ membrane-like segments.
∙ Biomolecular rules from Bioambient calculus to P automata
rewriting rules.
∙ Actions altering ambients’ state (entry, exit, or merge).
Initial configuration:
[[[[[[]AO1[]K]PM1M2]RM1M2]GM1M2]OMOM1M2 [[[[[]BO1]PO1]RO1]GO1]IMOM1M2]skin/cell
Final configuration:
[[]PM1M2[]RM1M2[]GM1M2[]OMOM1M2[]PO1[]RO1[]GO1[]IMOM1M2[]K[]AO1[]BO1]skin/cell
35. the production of the protein mfn1-mfn2
Initial config. −(consecutive use of the appropriate rule)→ final config. and halt.
∙ Initial configuration: [[[[]PM1M2]RM1M2]GM1M2]OMOM1M2
∙ Final configuration: []PM1M2[]RM1M2[]GM1M2[]OMOM1M2
∙ Halting configuration through consecutive exo operations.
[[[[]PM1M2]RM1M2]GM1M2]OMOM1M2 −exo→ [[[]PM1M2]RM1M2]GM1M2[]OMOM1M2 −exo→
[[]PM1M2]RM1M2[]GM1M2[]OMOM1M2 −exo→ []PM1M2[]RM1M2[]GM1M2[]OMOM1M2
∙ Similarly for the rest of the model.
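The same derivation can be replayed with a toy encoding: a membrane is a (label, children) pair, and exo hoists the single nested child out, leaving the membrane empty at the same level. Both the encoding and the exo function are illustrative assumptions, not the formal operation:

```python
def exo(membrane):
    """Toy exo rule: the single nested child exits `membrane`, which is
    left empty as its sibling.  A membrane is a pair (label, children)."""
    label, children = membrane
    assert len(children) == 1, "sketch covers a single nested child"
    return [children[0], (label, [])]

def unfold(structure):
    """Apply exo repeatedly until no membrane is nested, then halt."""
    flat, todo = [], list(structure)
    while todo:
        m = todo.pop(0)
        if m[1]:                      # still has a nested child: exo fires
            todo = exo(m) + todo
        else:
            flat.append(m)
    return flat

# [[[[ ]PM1M2 ]RM1M2 ]GM1M2 ]OMOM1M2, encoded outside-in:
cfg = [("OMOM1M2", [("GM1M2", [("RM1M2", [("PM1M2", [])])])])]
print(unfold(cfg))   # four empty membranes at the top level, as above
```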
36. definition i
Definition
A generic P system (of degree m, m ≥ 1) with the characteristics described above can be defined as a construct
Π = (V, T, C, H, µ, w1, ..., wm, (R1, ..., Rm), (H1, ..., Hm), i0),
where
1. V is an alphabet and its elements are called objects.
2. T ⊆ V is the output alphabet.
3. C ⊆ V, C ∩ T = ∅ are catalysts.
4. H is the set {pino, exo, mate, drip} of membrane handling rules.
5. µ is a membrane structure consisting of m membranes, with the membranes and the regions labeled in a one-to-one way with
elements of a given set H.
6. wi, 1 ≤ i ≤ m, are strings representing multisets over V associated with the regions 1,2, ... ,m of µ.
7. Ri , 1 ≤ i ≤ m, are finite sets of evolution rules over the alphabet set V associated with the regions 1,2, ... , m of µ. These object
evolution rules have the form u → v.
8. Hi , 1 ≤ i ≤ m, are finite sets of membrane handling rules over the set H associated with the regions 1,2, ... , m of µ.
9. i0 is a number between 1 and m and defines the initial configuration of each region of the P system.
37. definition ii
Definition
Formally, a one-way P automaton with n membranes (n ≥ 1) and antiport rules is a construct
Π=(V, µ, P1, ..., Pn, c0, F),
where:
1. V is a finite alphabet of objects,
2. µ is the underlying membrane structure of the automaton with n membranes,
3. Pi is a finite set of antiport rules for membrane i with 1≤i≤n without promoters/inhibitors, where each antiport rule is of the
form (a, out; b, in) with a, b being multisets consisting of elements of the set V,
4. c0 is the initial configuration of Π, and
5. F is the set of accepting configurations of Π.
41. our contribution
∙ We proposed a novel, original method to describe actual
biomolecular models using membrane automata.
∙ We proposed a novel variant of P automata,
∙ a combination of membrane automata with process calculi.
∙ We presented the advantages and disadvantages of this
methodology.
∙ Both P systems and P automata are formal tools with enhanced
power and efficiency =⇒ they could shed light on the problem of
modeling complex biological processes.
43. moore’s law
∙ “The number of transistors incorporated in a chip will
approximately double every 24 months.”
∙ The 8086 (1978) was 16-bit, had 29,000 transistors, and an
integration technology of 3.2 μm.
∙ Intel’s Haswell-E had an integration technology of 22 nm and
contained 2.6 billion transistors.
∙ Skylake is constructed with an integration technology of 14 nm.
∙ Moore’s law is about to reach its physical limits.
44. consequences of moore’s law
∙ Continuously decreasing size of the computing circuits.
∙ Technological and physical limitations (limits of lithography in
chip design).
∙ New technologies to overcome these barriers, with Quantum
Computation being a possible candidate.
∙ Ability of these systems to operate at a microscopic level.
∙ Redesign and revisit of well-studied models and structures from
classical computation.
45. basics of quantum computing
∙ QC considers the notion of computing as a natural, physical
process.
∙ It must obey the postulates of quantum mechanics.
∙ Bit ⇒ Qubit.
∙ It was initially discussed in the works of Richard Feynman in the
early ’80s.
46. dirac symbolism bra-ket notation
∙ Introduced by Paul Dirac.
∙ State 0 is represented as ket |0⟩ and state 1 as ket |1⟩.
∙ Every ket corresponds to a vector in a Hilbert space. For
example:
|0⟩ = [1, 0]ᵀ , |1⟩ = [0, 1]ᵀ . (2)
47. kets and bras
∙ A qubit is in state |ψ⟩ described by:
|ψ⟩ = c0 |0⟩ + c1 |1⟩ (3)
where c0 and c1 are called probability amplitudes
∙ They are complex numbers for which |c0|² + |c1|² = 1.
∙ For every ket |ψ⟩ there is a bra ⟨ψ| which is:
⟨ψ| = c0∗ ⟨0| + c1∗ ⟨1| (4)
where c0∗ and c1∗ are the complex conjugates of c0 and c1.
48. towards quantum automata using dirac formalism
∙ Each state of the machine is a superposition of the basis states
|ψi⟩.
∙ They have the form |ψ⟩=c1 |ψ1⟩ + c2 |ψ2⟩ + · · · + cn |ψn⟩,
∙ The probability of observing the state
|ψ′⟩ = c1 |ψ′1⟩ + c2 |ψ′2⟩ + · · · + cn |ψ′n⟩ is p = ∑_{ψ′i ∈ F} |ci|² (F is the
set of accepting states).
∙ In an MO-automaton the projection matrix P is applied strictly
once.
∙ In MM-automata, there are three disjoint sets of states: the Qa
(accepting states), the Qr (rejecting states) and the Qn of neutral
states.
∙ Measurement after reading each symbol.
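A sketch of the measure-once case: one unitary per symbol, then a single projection onto the accepting subspace. The two-state automaton and the input word are hypothetical:

```python
import numpy as np

def mo_qfa_accept_prob(psi0, unitaries, word, accepting):
    """Measure-once quantum automaton: apply a unitary per symbol, then
    one projective measurement onto the accepting subspace."""
    psi = psi0.astype(complex)
    for symbol in word:
        psi = unitaries[symbol] @ psi        # unitary evolution, no measurement
    P = np.zeros((len(psi), len(psi)))       # projector onto accepting states
    for i in accepting:
        P[i, i] = 1.0
    return float(np.linalg.norm(P @ psi) ** 2)

# Hypothetical automaton: 'a' rotates by pi/4; state 1 is accepting.
phi = np.pi / 4
U = {"a": np.array([[np.cos(phi), np.sin(phi)],
                    [-np.sin(phi), np.cos(phi)]])}
print(mo_qfa_accept_prob(np.array([1.0, 0.0]), U, "aa", {1}))   # ~1.0
```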
49. quantum automata variations
∙ Measure-many approach
∙ Measure-once approach
∙ There are regular languages not recognized by a quantum
automaton.
∙ We have to blame the reversibility of the quantum system!
∙ But they are space-efficient.
∙ 2-way variants are more powerful.
50. automata and computation (standard and quantum)
∙ Finite automata ⇒ simple models of computation.
∙ Finite quantum automata
∙ A quantum system where each symbol represents the application
of a unitary transformation.
∙ Proposed in the mid-1990s.
∙ They can be seen as a generalization of probabilistic finite
automata.
∙ Transitions are weighted with a probability amplitude ⇒ vectors in
a Hilbert space.
∙ Probability semantics under which automata accept or reject.
51. terminology needed for clarification
∙ Σ ⇒ the alphabet
∙ Σ∗ ⇒ the set of all finite strings over Σ
∙ If U is an n × n square matrix, Ū is its conjugate and U† its
conjugate transpose.
∙ Cn×n denotes the set of all n × n complex matrices.
∙ A unitary operator (or matrix) U is the complex analogue of an
orthogonal matrix: it preserves the norms of vectors.
∙ Equivalently, a matrix U is unitary if it has an inverse and if ∥Uψ∥
= ∥ψ∥ for every vector ψ.
∙ Hn is an n-dimensional Hilbert space.
52. quantum computation: states and formalism
∙ Two types of quantum states: pure and mixed states.
∙ A pure state is a state represented by a single ket vector |ψ⟩ in a
Hilbert space over complex numbers.
∙ A mixed state is a statistical distribution of pure states (usually
described with density matrices).
∙ The evolution of a quantum system is described by unitary
transformations.
∙ The states of an n-level quantum system are self-adjoint
positive mappings of Hn with unit trace.
∙ An observable of a quantum system is a self-adjoint mapping
Hn → Hn.
∙ Each state qi ∈ Q with |Q| = n can be represented by a vector
ei = (0, . . . , 1, . . . , 0), with the 1 in the i-th position.
53. quantum computation: applying matrices, observables, and projection
∙ Each of the states is a superposition of the form ∑_{i=1}^{n} ci ei.
∙ n is the number of states.
∙ ci ∈ C are the coefficients, with |c1|² + |c2|² + · · · + |cn|² = 1.
∙ ei denotes the (pure) basis state corresponding to i.
∙ Each symbol σi ∈ Σ is assigned a unitary matrix/operator Uσi , and
each observable a Hermitian matrix O.
∙ The possible outcomes of a measurement are the eigenvalues
of the observable.
∙ Transition from one state to another is achieved through the
application of a unitary operator Uσi .
∙ The probability of obtaining result i is ∥πPi∥², where π is the
current state (or a superposition) and Pi is the projection matrix
of the measured basis state.
∙ After the measurement, the state collapses to πPi / ∥πPi∥.
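The last two bullets as a short NumPy check, using the slide’s row-vector convention πPi; the state π is an arbitrary example:

```python
import numpy as np

pi = np.array([0.6, 0.8j])            # |0.6|^2 + |0.8|^2 = 1
P0 = np.diag([1.0, 0.0])              # projector onto the basis state e0

prob = np.linalg.norm(pi @ P0) ** 2   # probability of outcome 0: 0.36
post = (pi @ P0) / np.linalg.norm(pi @ P0)   # collapsed state after outcome 0
print(prob, post)                     # ~0.36 and [1.+0.j 0.+0.j]
```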
55. towards quantum ω-automata: definition
◦ A simple periodic, one-way quantum ω-automaton is a tuple (Q,
Σ, Uδ, q0, π0, F, P, Acc) where:
1. Q is a finite set of states,
2. Σ is the input alphabet,
3. Ua is the n × n unitary matrix that describes the transitions among
the states for each symbol a ∈ Σ,
4. q0 ∈ Q is the initial (pure) state,
5. π0 is the initial vector,
6. F ⊆ Q is the set of final states,
7. P is the set [P0, P1, . . . , Pn] of the projection matrices of states, and
8. Acc is an acceptance condition.
56. their functionality: explanation
∙ It starts at its initial pure state q0, i.e. the state vector of the
system is π0.
∙ Transitions among the states are expressed with complex
amplitude.
∙ Acc defines the acceptance condition.
Periodic quantum acceptance condition
It requires that, infinitely often, a measurement of the quantum system
finds the automaton in one of the final states with some positive
probability.
Almost-sure periodic quantum acceptance condition
It requires that, infinitely often, a measurement of the quantum system
finds the automaton in one of the final states with probability 1.
57. periodic quantum automaton
∙ A simple m-periodic, 1-way quantum ω-automaton with periodic
measurements is a tuple (Q, Σ, Uδ, q0, m, π0, F, P, Acc) where:
1. Q is a finite set of states,
2. Σ is the input alphabet,
3. Uα : Q × Σ −→ C[0,1] is the n × n unitary matrix that describes the
transitions among the states for each symbol a ∈ Σ,
4. q0 ∈ Q is the (pure) initial state,
5. m ∈ N defines the measurement period,
6. π0 is the vector of the initial pure state q0,
7. F ⊆ Q is the set of final states,
8. P is the set [P0, P1, . . . , Pn] of the projection matrices of states, and
9. Acc is the almost-sure periodic quantum acceptance condition.
58. transitions on quantum periodic automata
∙ The transition matrix for every symbol has the form
Uϕ = i · [ cos(ϕ) sin(ϕ) ; − sin(ϕ) cos(ϕ) ].
∙ ϕ defines the period (if m is the period of the transition, then
ϕ = π/m).
∙ Counter-clockwise rotation.
∙ We can reverse the rotation by transposing the Uϕ.
∙ Then we have Uϕᵀ = i · [ cos(ϕ) − sin(ϕ) ; sin(ϕ) cos(ϕ) ].
∙ Both return the system to its initial state after the same period.
59. quantum periodic automata: periodicity
∙ After m applications of the transition matrix U, the state of the
system is Uᵐ |ψ⟩, where |ψ⟩ is the state of the system before the
m transitions.
∙ But Uᵐ = iᵐ · [ −1 0 ; 0 −1 ], since
Uᵐ = iᵐ · [ cos(mϕ) sin(mϕ) ; − sin(mϕ) cos(mϕ) ] and ϕ = π/m.
∙ In 2m timesteps we obtain U²ᵐ = [ 1 0 ; 0 1 ] (up to a global phase).
∙ It is the same!
∙ Their difference is a phase of π, since 2mϕ = 2m · π/m = 2π.
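A quick numerical check of this periodicity, with the arbitrary choice m = 4 (so ϕ = π/4):

```python
import numpy as np

m = 4
phi = np.pi / m
U = 1j * np.array([[np.cos(phi), np.sin(phi)],
                   [-np.sin(phi), np.cos(phi)]])

Um = np.linalg.matrix_power(U, m)
U2m = np.linalg.matrix_power(U, 2 * m)
print(np.allclose(Um, (1j ** m) * np.diag([-1, -1])))   # U^m = i^m * (-I)
print(np.allclose(U2m, (1j ** (2 * m)) * np.eye(2)))    # U^2m = I up to the
                                                        # global phase i^(2m)
```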
60. transitions of the state vector
Figure: The vector starts in the initial state and, for every phase
transition with angle ϕ = π/4, it is rotated counter-clockwise. After
m − 1 (= 4) transitions the system is in the state that is symmetric to
the initial one.
61. a quantum game - captain picard vs q
∙ The “PQ Penny Flip” game was described by David A. Meyer in
1999.
∙ A game that showed the superiority of quantum strategies over
the classical ones.
∙ One player has a dominant strategy, no matter what the other
player chooses in every round.
62. quantum games and automata
∙ Association of dominant strategies of repeated quantum games
with quantum automata that recognize infinite periodic inputs.
∙ Shown in the PQ-PENNY quantum game, where the quantum
strategy outplays the classical pure or mixed strategies with
probability 1;
∙ therefore the associated quantum automaton accepts with
probability 1.
∙ We proposed a novel game played on the evolution of an
automaton, where players’ actions and strategies are also
associated with periodic quantum automata.
63. our proposed game
∙ 2-player game over the evolution of a simple DFA.
∙ A game played over a 2-state automaton with an alphabet
consisting of two symbols.
∙ q0 is the winning state for Player 1 and q1 the one for Player 2.
Figure: The two-state automaton (states q0, q1 over the symbols a, b)
that corresponds to Player 1. In the case of Player 2, the accepting
state is the state q1.
Table: Transition matrix of the automaton (entry = symbol leading from
the row state to the column state).
     q0   q1
q0   a    b
q1   a    b
64. provider vs. measurer
∙ Player 1 chooses a symbol and runs it on the automaton until
Player 2 decides to stop the procedure.
∙ Player 2 has no knowledge about either the read symbols or the
current state.
∙ The payoff for each player depends on the current state and the
number of b’s.
Table: The game’s payoff matrix. The state in each row denotes the
automaton’s state after reading the final symbol when Player 2 stops
the procedure.
       |b| ≤ |a|   |b| > |a|
q0     (1,1)       (2,0)
q1     (0,2)       (1,1)
65. strategies on this game
∙ Player 1 chooses one of the two symbols without knowing when
Player 2 is about to stop the procedure.
∙ Every reading of a b symbol puts his win at stake.
∙ If he insists on reading only the symbol a, he guarantees a (1,1)
result for himself.
∙ (1,1) is the Nash point for the deterministic version.
∙ In the quantum version we observe a different behaviour.
∙ Player 2 doesn’t actually stop the evolution but rather measures
the current state (thus the name “measurer”).
∙ Player 1 still chooses the symbols in each timestep.
66. the quantum version
∙ The automaton is on a superposition of states.
∙ Player 1 associates a quantum operator, similar to those of periodic
quantum automata, to the symbol a.
∙ He associates b with a specific matrix that actually does not alter the
quantum state.
∙ E.g. he chooses the matrix U1 = i · [ cos(ϕ) sin(ϕ) ; − sin(ϕ) cos(ϕ) ]
with ϕ = π/m, m = 2 for the a, and the U2 = i · [ −1 0 ; 0 −1 ] with
ϕ = π/m, m = 1 for the b.
∙ Player 1 chooses the symbol a for the first 2 inputs and then
subsequently applies the matrix U2.
∙ This offers strategies that can be described as inputs to quantum
periodic automata.
∙ E.g. for w = aabbbbbbbbbbbb . . . bbbb we have the payoff (2,0).
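A small simulation of this strategy under the matrices above, checking the claim for w = aab…b numerically (the trailing run of 12 b’s is an arbitrary choice):

```python
import numpy as np

phi = np.pi / 2                       # m = 2 for the symbol a
U = {"a": 1j * np.array([[np.cos(phi), np.sin(phi)],
                         [-np.sin(phi), np.cos(phi)]]),
     "b": 1j * np.array([[-1, 0], [0, -1]])}   # a global phase only

psi = np.array([1, 0], dtype=complex)          # start in q0
for symbol in "aa" + "b" * 12:                 # w = aab...b
    psi = U[symbol] @ psi

print(abs(psi[0]) ** 2)   # ~1.0: measured in q0 with certainty, payoff (2,0)
```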
67. overall
∙ Quantum automata with infinite computation are still
unexplored.
∙ Different variants of machines, distinguished either by
movement orientation or by the measurement mode.
∙ Need for models and verification processes for infinite QC.
∙ Useful in the verification of quantum systems and the design of
quantum circuits.
∙ Space-efficient for periodic ω-languages of the form (aᵐb)ω.
∙ Connection with game theory and groups.
∙ Actions in such games form specific groups
∙ Consistency with the underlying quantum physics.
68. our contribution
∙ We proposed a novel definition regarding quantum computation
with infinite horizon.
∙ It is one of the first attempts to combine quantum computation
and infinite inputs.
∙ We exploit the wave-like nature of quantum computing by
presenting a computation scheme that accepts periodic
languages of the form (aᵐb)ω, where m is the periodicity.
∙ We illustrate this concept through examples and figures.
∙ Association of the proposed quantum automata with quantum
games, their strategies, and group theory.
69. inspired by our work
∙ The organization of a related workshop.
∙ Natural, Unconventional, and Bio-inspired Algorithms and
Computation Methods Workshop (NUBACoM 2016) in Sparta.
∙ The introduction of a new course in the curriculum of the
Department of Informatics.
∙ “Introduction to Quantum and DNA Computing”
∙ Theses and PhD proposals.
∙ Establishment of a new research group in our department.
∙ Quantum and UnconventIonal CompuTing group
∙ Collaboration with the Bioinformatics and Human
Electrophysiology Lab (BiHELab)
∙ The development of a novel quantum programming language
called Qumin (in beta) by A. Singh.
70. potential applications
∙ Better and more efficient algorithms for querying Linked Data.
∙ Proper design and verification of unconventional means of
computing.
∙ Design of new, original quantum algorithms.
∙ Universal quantum programming language(s) and architectures.
∙ Study of bio-inspired methods of performing actual
computations, e.g. DNA sequences, nano-scale biological parts,
etc.
71. further exploitation of the results and our next steps
∙ Implementation of our theoretical results.
∙ Complexity issues and bounds.
∙ D-Wave
∙ IBM
∙ Use of quantum simulators.
∙ Further research based on our results.
∙ Already submitted works in progress.
72. publications
Giannakis, K., and Andronikos, T.
Membrane automata for modeling biomolecular processes.
Natural Computing (2015), 1–13.
Giannakis, K., and Andronikos, T.
Mitochondrial fusion through membrane automata.
In GeNeDis 2014. Springer, 2015, pp. 163–172.
Giannakis, K., and Andronikos, T.
Use of Büchi automata and randomness for the description of biological processes.
International Journal of Scientific World 3, 1 (2015), 113–123.
Giannakis, K., Papalitsas, C., and Andronikos, T.
Quantum automata for infinite periodic words.
In Information, Intelligence, Systems and Applications, IISA 2015, The 6th International Conference on (2015), IEEE.
Giannakis, K., Papalitsas, C., Kastampolidou, K., Singh, A., and Andronikos, T.
Dominant strategies of quantum games on quantum periodic automata.
Computation 3, 4 (2015), 586–599.
Giannakis, K., Theocharopoulou, G., Papalitsas, C., Andronikos, T., and Vlamos, P.
Associating ω-automata to path queries on webs of linked data.
Engineering Applications of Artificial Intelligence 51 (2016), 115 – 123.
Theocharopoulou, G., and Giannakis, K.
Web mining to create semantic content: A case study for the environment.
In Artificial Intelligence Applications and Innovations, L. Iliadis, I. Maglogiannis, H. Papadopoulos, K. Karatzas, and S. Sioutas, Eds., vol. 382 of IFIP Advances in
Information and Communication Technology. Springer Berlin Heidelberg, 2012, pp. 411–420.
Theocharopoulou, G., Giannakis, K., and Andronikos, T.
The mechanism of splitting mitochondria in terms of membrane automata.
In Signal Processing and Information Technology (ISSPIT), 2015 IEEE International Symposium on (2015), IEEE.