Beamer Template Uoft
Kuilin Chen
Completed a literature review of digital twins and identified research opportunities in
current digital twin research
Current research focus is to model the relationship between set points and the actual
temperature inside a combustion system
ARX, LSTM, and GRU models have been developed to predict one-step-ahead
temperature from past set points and temperatures (a minimal sketch follows this list)
Aim to develop new generative models for time series
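As an illustration of the one-step-ahead setup, here is a minimal PyTorch sketch of a GRU predictor; the layer sizes, window length, and variable names are placeholders for exposition, not the actual models developed in this work.

```python
import torch
import torch.nn as nn

class OneStepGRU(nn.Module):
    """Predict the next-step temperature from past set points and temperatures."""
    def __init__(self, input_dim=2, hidden_dim=32):
        super().__init__()
        self.gru = nn.GRU(input_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, x):
        # x: (batch, window, 2) -- past [set_point, temperature] pairs
        out, _ = self.gru(x)
        # Use the last hidden state to predict the next temperature
        return self.head(out[:, -1])

model = OneStepGRU()
window = torch.randn(8, 20, 2)   # 8 sequences, each with 20 past steps
next_temp = model(window)        # (8, 1) one-step-ahead predictions
```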
We want to learn a probability distribution over high-dimensional $x$ (e.g., images and long
time series)
$p_D(x)$ is the true distribution, and $p_\theta(x)$ is the modelled distribution
Direct optimization over $p_\theta$ to approximate $p_D$ is very challenging (e.g.,
high dimensionality, existence of $p_D$, ...)
We define a low-dimensional $z$ with a fixed prior distribution $p(z)$, and pass $z$ through $g_\theta$
(a deep neural network): $\mathcal{Z} \to \mathcal{X}$
High-dimensional $x$ can be generated without explicitly knowing its high-dimensional density
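A minimal sketch of this latent-variable generation pipeline, assuming PyTorch; the dimensions and the architecture of $g_\theta$ are illustrative placeholders.

```python
import torch
import torch.nn as nn

latent_dim, x_dim = 16, 784   # illustrative sizes: z in R^16, x in R^784

# g_theta: a deep neural network mapping Z -> X (placeholder architecture)
g_theta = nn.Sequential(
    nn.Linear(latent_dim, 128),
    nn.ReLU(),
    nn.Linear(128, x_dim),
)

# Fixed prior p(z) = N(0, I); sampling x = g_theta(z) never requires
# evaluating a density over the high-dimensional x
z = torch.randn(64, latent_dim)   # 64 samples from the prior
x = g_theta(z)                    # 64 generated high-dimensional samples
```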
Adversarial training
$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_D(x)}\left[\log D(x)\right] + \mathbb{E}_{z \sim p_z(z)}\left[\log\left(1 - D(G(z))\right)\right]$$
$G$ is the generator, $D$ is the discriminator
Train $D$ to discriminate between real and generated samples
Simultaneously train $G$ to generate samples that resemble real samples (see the training-loop sketch below)
$p(x)$ is not explicitly modeled in a GAN
Evaluation of generated samples from a GAN can be done subjectively by humans
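A minimal sketch of the adversarial training loop implementing the minimax objective above, assuming PyTorch; the networks, optimizer settings, and the placeholder `real` batch are illustrative, not the actual setup.

```python
import torch
import torch.nn as nn

latent_dim, x_dim = 16, 784
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, x_dim))
D = nn.Sequential(nn.Linear(x_dim, 128), nn.ReLU(), nn.Linear(128, 1), nn.Sigmoid())
opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)

for step in range(1000):
    real = torch.randn(64, x_dim)   # placeholder for a batch of real samples x ~ p_D
    z = torch.randn(64, latent_dim)
    fake = G(z)

    # Maximize log D(x) + log(1 - D(G(z))) w.r.t. D (minimize the negative)
    loss_D = -(torch.log(D(real) + 1e-8).mean()
               + torch.log(1 - D(fake.detach()) + 1e-8).mean())
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # Minimize log(1 - D(G(z))) w.r.t. G, per the minimax objective
    # (in practice the non-saturating loss -log D(G(z)) is often used instead)
    loss_G = torch.log(1 - D(fake) + 1e-8).mean()
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()
```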
Figure: Graphical models to generate x1:T with a recurrent neural network (RNN) and a state space
model (SSM). Rectangle-shaped units are used for deterministic states, while circles are used for
stochastic ones.
RNNs and SSMs have been combined to develop generative models in some papers
However, these models are limited to categorical inputs and outputs (e.g., rotated-image
generation, new drug development)
A new generative model is proposed based on a combination of a bi-directional RNN and an SSM
The objective function and the output decoding distribution are re-designed to make the model
suitable for time-series generation
ELBO
$$\begin{aligned}
\log p_\theta(x \mid u) - D_{\mathrm{KL}}\!\left(q_\phi(z \mid x, u) \,\|\, p_\theta(z \mid x, u)\right)
&= \underbrace{\mathbb{E}_{z \sim q_\phi}\!\left[\log p_\theta(x \mid z, u)\right]}_{\text{log-likelihood}} - \underbrace{D_{\mathrm{KL}}\!\left[q_\phi(z \mid x, u) \,\|\, p_\theta(z \mid u)\right]}_{\text{regularization}} \\
&= \mathcal{L}(\theta, \phi)
\end{aligned}$$
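A minimal sketch of how the two ELBO terms can be computed, assuming PyTorch with a diagonal-Gaussian posterior $q_\phi(z \mid x, u)$ and a Gaussian output decoding distribution; a standard-normal prior stands in for $p_\theta(z \mid u)$ here as a simplifying assumption, and the encoder and decoder networks are placeholders.

```python
import torch
import torch.nn as nn
import torch.distributions as dist

x_dim, u_dim, z_dim = 10, 3, 4   # illustrative dimensions

# Placeholder encoder for q_phi(z|x,u) and decoder for p_theta(x|z,u), both Gaussian
enc = nn.Linear(x_dim + u_dim, 2 * z_dim)
dec = nn.Linear(z_dim + u_dim, 2 * x_dim)

def elbo(x, u):
    # q_phi(z|x,u): diagonal Gaussian parameterized by the encoder
    mu_q, log_sig_q = enc(torch.cat([x, u], -1)).chunk(2, -1)
    q = dist.Normal(mu_q, log_sig_q.exp())
    z = q.rsample()   # reparameterized sample, keeps gradients flowing

    # Log-likelihood term: E_q[log p_theta(x|z,u)] with a Gaussian decoder
    mu_x, log_sig_x = dec(torch.cat([z, u], -1)).chunk(2, -1)
    log_lik = dist.Normal(mu_x, log_sig_x.exp()).log_prob(x).sum(-1)

    # Regularization term: KL[q_phi(z|x,u) || p_theta(z|u)];
    # a standard normal replaces p_theta(z|u) in this simplified sketch
    prior = dist.Normal(torch.zeros_like(mu_q), torch.ones_like(mu_q))
    kl = dist.kl_divergence(q, prior).sum(-1)

    return (log_lik - kl).mean()   # L(theta, phi), averaged over the batch

x, u = torch.randn(32, x_dim), torch.randn(32, u_dim)
loss = -elbo(x, u)                 # maximize the ELBO by minimizing its negative
```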