
Moroccan J. of Pure and Appl. Anal. (MJPAA)
Volume x(x), xxxx, Pages 000–000
ISSN: Online xxxx-xxxx - Print xxxx-xxxx
Parameters identification for a nonlinear partial differential equation in image denoising

A. El Mourabit1, D. Meskine2

Abstract. In this work, in the context of PDE-constrained optimization problems, we are interested in the identification of a parameter in the diffusion equation proposed in [5]. We propose to identify this parameter automatically by a gradient descent algorithm in order to improve the restoration of a noisy image. Finally, we give numerical results to illustrate the performance of the automatic selection of this parameter and compare our numerical results with other image denoising approaches and algorithms.

Mathematics Subject Classification (2020). MSC 45H30, MSC 35K10.

Key words and phrases. Optimization problems - Diffusion equation - Parameters identification.

1. Introduction

Image processing is of great importance in a number of fields, for example photography, digital cinema, astronomy and medicine. It has therefore been the subject of attention of many researchers. One of the fundamental problems in the field of image processing is the reduction of noise in (or denoising of) images [12, 15].
Given an image function u0 defined on Ω, with Ω ⊂ R² an open and bounded domain, the problem is, starting from an observation u0, to restore or reconstruct a real image u. This is an important task in image processing, used to improve the quality of the image or to obtain both a reduction of the noise and the preservation of important features of the image.
In denoising problems, u is the solution of an ill-posed inverse problem that is stated as follows:

u0(i, j) = u(i, j) + η(i, j),    (1.1)
Received: date - Accepted: date.

1 E-mail address: [email protected]
2 E-mail address: [email protected]

where u0(i, j) is the noisy image at pixel (i, j) and η(i, j) is an additive Gaussian noise at pixel (i, j) with standard deviation σn [17], which can be estimated in practical applications by various methods. There are many denoising models based on a variational approach [3, 18]. For example, in the work of Rudin, Osher and Fatemi [3], regularization by the total variation (TV) functional has received considerable attention in image and signal processing. The (TV) model is effective in reducing image noise and preserving edges, but it suffers from the staircase effect. For this reason, a lot of work has been done to improve the TV model and reduce the staircase effect, e.g. the Total Generalized Variation (TGV) model [4], the high-order TV methods [19, 20] and the fractional TV model [16].
The noise can be grouped into two classes:

• noise that depends on the image data,
• independent noise (one then speaks of random noise).

Image noise behaves like a random field; because of its high frequency, it is usually characterized at first order (no correlation between pixels) and sometimes at second order (correlation between pixels). To compute the denoised image û, many denoising methods use image priors and minimize an energy functional J built from the noisy image u0, such that a low value of J corresponds to a noise-free image.
Finally, a denoised image û is obtained by minimizing J:

û ∈ arg min_u J(u).    (1.2)
To reconstruct u from u0, we rely on the maximum a posteriori (MAP) approach, which consists in maximizing the posterior probability P(u|u0) of u as follows:

û = arg max_u P(u|u0) = arg max_u P(u0|u)P(u)/P(u0),    (1.3)

then

û = arg max_u [log P(u0|u) + log P(u)],    (1.4)

where P(u0|u) is the likelihood function of u, and P(u) is the image prior.
In the case of additive white Gaussian noise with standard deviation σn, we can formulate the objective function as

û = arg min_u (1/2)‖u − u0‖²_{L²(Ω)} + αR(u),    (1.5)

where Ω is a bounded open subset of R², R denotes a regularization term and α is a non-negative regularization parameter.
Note also that ‖u − u0‖²_{L²(Ω)} is a data fidelity term measuring the difference between the restored and noisy images, and determining an appropriate image prior R(u) is the key issue in variational denoising methods. Various functions have been proposed in the literature, with varying degrees of success.
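As a concrete illustration of the structure of (1.5), the following sketch minimizes the objective with the simple quadratic (Tikhonov) prior R(u) = (1/2)‖∇u‖² by plain gradient descent; the step size, iteration count and synthetic test image are illustrative choices of ours, not values from the paper.

```python
import numpy as np

def denoise_tikhonov(u0, alpha=0.1, tau=0.2, n_iter=200):
    """Gradient descent on J(u) = 1/2 ||u - u0||^2 + alpha/2 ||grad u||^2.
    The gradient of J is (u - u0) - alpha * Laplacian(u)."""
    u = u0.copy()
    for _ in range(n_iter):
        # 5-point Laplacian with replicated (Neumann-type) boundary values
        up = np.pad(u, 1, mode="edge")
        lap = (up[:-2, 1:-1] + up[2:, 1:-1]
               + up[1:-1, :-2] + up[1:-1, 2:] - 4.0 * u)
        u = u - tau * ((u - u0) - alpha * lap)
    return u

# synthetic test: a bright square corrupted by additive Gaussian noise
rng = np.random.default_rng(0)
clean = np.zeros((32, 32))
clean[8:24, 8:24] = 1.0
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
restored = denoise_tikhonov(noisy)
```

This quadratic prior smooths everywhere, including across edges, which is precisely the weakness that motivates the gradient-adaptive regularizers discussed next.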

Perona and Malik (PM) (see [9]) take R(u) = exp(−(|∇u|/κ)²) and proposed a reaction-diffusion equation with anisotropic diffusion:

∂u/∂t = div[D(|∇u|)∇u] = D(|∇u|)Δu + ∇D(|∇u|)·∇u,    (1.6)

where D(|∇u|) denotes the diffusion coefficient and satisfies the following conditions:

lim_{|∇u|→0} D(|∇u|) = 1 and lim_{|∇u|→∞} D(|∇u|) = 0.    (1.7)

The (PM) model is able to enhance the edges, but the staircase effect is obvious.
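A minimal explicit discretization of the (PM) model can be sketched as follows; the values of kappa and tau and the zero-flux treatment of the border are assumptions for this illustration, not parameters taken from [9]. Note how the coefficient realizes (1.7): D → 1 on small gradients (smoothing) and D → 0 at edges (preservation).

```python
import numpy as np

def perona_malik(u0, kappa=0.5, tau=0.2, n_iter=50):
    """Explicit scheme for u_t = div(D(|grad u|) grad u) with the
    Perona-Malik coefficient D(s) = exp(-(s/kappa)^2)."""
    u = u0.copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)
    for _ in range(n_iter):
        # one-sided differences toward the four neighbours
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # zero flux across the image border (Neumann condition)
        dn[-1, :] = 0.0; ds[0, :] = 0.0
        de[:, -1] = 0.0; dw[:, 0] = 0.0
        u = u + tau * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```

Because the scheme is written in flux form with zero flux at the border, the mean gray value of the image is conserved exactly, which is a quick sanity check for any implementation.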
In [8], Rudin, Osher and Fatemi take R(u) = TV(u) and propose to minimize the following functional:

û = arg min_u (1/2)‖u − u0‖²_{L²(Ω)} + αTV(u),    (1.8)

where

TV(u) = ∫_Ω |Du| dx = sup { ∫_Ω u div φ dx : φ ∈ C¹(Ω, R²) and ‖φ‖_∞ ⩽ 1 },
but this approach strengthens piecewise constant structures more than smooth structures, which gives a staircase effect. In order to reduce this effect and improve the image quality, the authors in [5] proposed the following equation:

∂u/∂t − div( (log^γ(1 + |∇u|)/|∇u|) ∇u ) + λ(u − u0) = 0 in Q,
⟨ (log^γ(1 + |∇u|)/|∇u|) ∇u, n ⟩ = 0 on ∂Ω × (0, T),    (1.9)
u(x, 0) = u0 in Ω.

They also give an existence result for solutions of (1.9) in a general framework.
Our objective is to analyze and implement a bilevel optimization framework to automatically select the regularization parameter λ in the following image reconstruction problem:

min_{λ∈U_ad} J(λ),    (1.10)

where

J(λ) = (1/2)∫_Ω |u(T, x) − u0(x)|² dx + α∫_Ω |λ(x)|² dx,

subject to

(SE)  ∂u/∂t − div( (log^γ(1 + |∇u|)/|∇u|) ∇u ) + λ(u − u0) = 0 in Ω × (0, T),
      ⟨ (log^γ(1 + |∇u|)/|∇u|) ∇u, n ⟩ = 0 on ∂Ω × (0, T),    (1.11)
      u(x, 0) = u0 in Ω,

where u0 represents the input image, α is a positive constant, γ is a weighting parameter, λ is the parameter to be estimated in (1.10), and U_ad is the set of admissible functions defined by

U_ad = {λ ∈ L²(Ω) : λ_a ⩽ λ(x) ⩽ λ_b a.e. in Ω},    (1.12)

where λ_a, λ_b ∈ R with λ_b/2 < λ_a < λ_b.
This paper is structured as follows: the existence of a solution for the optimization problem is given in Section 2. Section 3 then presents the suggested gradient descent algorithm and demonstrates the benefits of the proposed method through comparative experiments with some denoising methods.
2. Existence of a solution for the optimization problem

We now prove the existence of a solution to the optimal control problem (1.10).
The variational formulation of the problem (1.11) is stated as follows:

Find u ∈ W^{1,x}L_A(Q) with ∂u/∂t ∈ W^{−1,x}L_A(Q) + L²(Q) such that
⟨∂u/∂t, ϕ⟩_{W^{−1,x}L_A(Q), W^{1,x}L_A(Q)} + ∫_Q a(γ, ∇u)·∇ϕ dx dt + ∫_Q λ(u − u0)ϕ dx dt = 0
for all ϕ ∈ W^{1,x}L_A(Q),    (2.1)

where A is the N-function defined by A(t) = t log^γ(1 + t), Q = (0, T) × Ω and a(γ, ∇u) = (log^γ(1 + |∇u|)/|∇u|)∇u.

Proposition 2.1. Let (λk)k be a sequence in U_ad that converges in L²(Ω) to λ* ∈ U_ad, and let uk ≡ u(λk) be the solution of (3.11) for all k ∈ N. Then:

1- The sequence (uk)k converges weakly in L²(0, T; W¹L_A(Ω)) to u*, the solution of (3.11) associated with λ*.
2- The cost functional J is weakly lower semicontinuous (w.l.s.c.), i.e. it verifies

J(λ*) ⩽ lim inf_{k→∞} J(λk).

Proof. 1- Let (λk)k be a sequence in U_ad and uk the solution of (3.11) associated with λk; then we have

⟨∂uk/∂t, ϕ⟩_{W^{−1,x}L_A(Q), W^{1,x}L_A(Q)} + ∫_Q a(γ, ∇uk)·∇ϕ dx dt + ∫_Q λk(uk − u0)ϕ dx dt = 0,    (2.2)

for all ϕ ∈ W^{1,x}L_A(Q).
From the estimates (2.8), (2.9) and (2.10), we have

uk ⇀ u* weakly in W^{1,x}L_A(Q).    (2.3)

Since J is coercive in L²(Ω) (J(λ) ⩾ α‖λ‖²_{L²(Ω)} with α > 0), the sequence (λk)k is bounded in L²(Ω). Moreover, L²(Ω) is reflexive, so we can extract a subsequence, still denoted (λk)k, which converges weakly to λ* in L²(Ω):

λk ⇀ λ* in L²(Ω) and λ* ∈ U_ad.    (2.4)
Let t ∈ (0, T); choosing ϕ = uk as a test function in the variational formulation (3.11) over (0, t), we get

(1/2)∫_Ω u_k²(t) dx − (1/2)∫_Ω u_0² dx + ∫_0^t∫_Ω log^γ(1 + |∇uk|)|∇uk| dx ds + ∫_0^t∫_Ω λk u_k² dx ds = ∫_0^t∫_Ω λk uk u0 dx ds.    (2.5)

Using Young's inequality and the fact that λk ∈ U_ad, we have

(1/2)∫_Ω u_k²(t) dx − (1/2)∫_Ω u_0² dx + ∫_0^t∫_Ω log^γ(1 + |∇uk|)|∇uk| dx ds + λ_a ∫_0^t∫_Ω u_k² dx ds ⩽ (λ_b/2)∫_0^t∫_Ω u_k² dx ds + (λ_b/2)∫_0^t∫_Ω u_0² dx ds,

then

(1/2)∫_Ω u_k²(t) dx + ∫_0^t∫_Ω log^γ(1 + |∇uk|)|∇uk| dx ds + (λ_a − λ_b/2)∫_0^t∫_Ω u_k² dx ds ⩽ ((Tλ_b + 1)/2)‖u0‖²_{L²(Ω)},

then

‖uk(τ)‖²_{L²(Ω)} ⩽ C, ∀τ ∈ (0, T),    (2.6)

with C = (1 + Tλ_b)‖u0‖²_{L²(Ω)}, hence ‖uk‖_{L^∞(0,T;L²(Ω))} ⩽ C.
On the other hand,

∫_0^t∫_Ω A(|uk|) dx ds + ∫_0^t∫_Ω A(|∇uk|) dx ds ⩽ C.    (2.7)

The inequalities (2.6) and (2.7) confirm that (uk)k is bounded in L²(Q) and that ∂uk/∂t is bounded in W^{−1}L_A(Q) + L²(Q).
Then we can extract a subsequence, still denoted (uk)k, such that

uk ⇀ u* weakly in W¹L_A(Q) and ∂uk/∂t ⇀ ∂u*/∂t weakly in W^{−1}L_A(Q) + L²(Q).
Based on the work of R. Aboulaich and D. Meskine (see [5]), there exists a positive constant C such that

‖uk‖_{L^∞(0,T;L²(Ω))} ⩽ C,    (2.8)
‖uk‖_{L²(0,T;W^{1,x}L_A(Q))} ⩽ C,    (2.9)
‖∂uk/∂t‖_{W^{−1,x}L_A(Ω)} ⩽ C.    (2.10)

From the boundedness of (uk)k and ∂uk/∂t in L²(0, T; W^{1,x}L_A(Q)) and L²(0, T; W^{−1,x}L_A(Ω)) respectively, we can extract a subsequence, still denoted (uk)k, such that

uk ⇀ u* in L²(0, T; W^{1,x}L_A(Q)),    (2.11)

and

∂uk/∂t ⇀ ∂u*/∂t in L²(0, T; W^{−1,x}L_A(Ω)).    (2.12)

From [5] (proof of Theorem 3.1) we deduce (for a subsequence) that

uk → u* a.e. in Q.    (2.13)

From (2.12),

∫_0^T ⟨∂uk/∂t, ϕ⟩_{W^{−1,x}L_A(Ω), W^{1,x}L_A(Q)} dt −→ ∫_0^T ⟨∂u*/∂t, ϕ⟩_{W^{−1,x}L_A(Ω), W^{1,x}L_A(Q)} dt.    (2.14)

We have λk ⇀ λ* in L²(Ω); then, as k tends to +∞,

∫_0^T∫_Ω λk(uk − u0)ϕ dx dt −→ ∫_0^T∫_Ω λ*(u* − u0)ϕ dx dt.    (2.15)

A result of Elmahi and Meskine in [24] shows that, for a subsequence still denoted (uk)k,

∇uk −→ ∇u* a.e. in Q,    (2.16)

and consequently

∫_0^T∫_Ω a(γ, ∇uk)·∇ϕ dx dt −→ ∫_0^T∫_Ω a(γ, ∇u*)·∇ϕ dx dt.    (2.17)
2- We show that the cost functional J verifies

J(λ*) ⩽ lim inf_{k→∞} J(λk).    (2.18)

We have

J(λ*) = (1/2)∫_Ω |u*(T, x) − u0(x)|² dx + α∫_Ω |λ*(x)|² dx
      ⩽ lim inf_{k→+∞} (1/2)∫_Ω |uk(T, x) − u0(x)|² dx + lim inf_{k→+∞} α∫_Ω |λk(x)|² dx
      ⩽ lim inf_{k→+∞} ( (1/2)∫_Ω |uk(T, x) − u0(x)|² dx + α∫_Ω |λk(x)|² dx )
      ⩽ lim inf_{k→+∞} J(λk).

Theorem 2.1. The problem (1.10) admits at least one solution in L²(Ω).

Proof. Consider (λk)k a minimizing sequence of J in L²(Ω), with

lim_{k→+∞} J(λk) = inf_{λ∈L²(Ω)} J(λ).    (2.19)

As (λk)k is bounded in L²(Ω) and L²(Ω) is reflexive, we can extract a subsequence, still denoted (λk)k, that converges weakly to λ* in L²(Ω). Moreover, from Proposition 2.1, one has

J(λ*) ⩽ lim inf_{k→∞} J(λk),

then from (2.19) we have

J(λ*) ⩽ lim inf_{k→∞} J(λk) ⩽ lim_{k→∞} J(λk) = inf_{λ∈L²(Ω)} J(λ).    (2.20)

We always have

inf_{λ∈L²(Ω)} J(λ) ⩽ J(λ*),    (2.21)

therefore

lim_{k→∞} J(λk) = inf_{λ∈L²(Ω)} J(λ) = J(λ*),    (2.22)

which means that the problem (1.10) admits a solution in L²(Ω).
3. The proposed algorithm and numerical approximation

With a parabolic PDE constraint and a constraint on the control variable, we consider the optimal control problem

min_{λ∈U_ad} J(λ) = (1/2)∫_Ω |u(T, x) − u0(x)|² dx + (α/2)∫_Ω |λ(x)|² dx,    (3.1)

subject to

(SE)  u_t − ∇·(a(γ, ∇u)∇u) + λ(u − u0) = 0 in Q,
      ⟨a(γ, ∇u)∇u, n⟩ = 0 on Σ,    (3.2)
      u(x, 0) = u0 in Ω,

where a(γ, ∇u) = log^γ(1 + |∇u|)/|∇u|, Q = Ω × (0, T), Σ = ∂Ω × (0, T), α is a positive regularization parameter and γ a positive constant. In (3.1)-(3.2), u and λ are the state variable and the control variable, respectively. The admissible set U_ad is defined as

U_ad = {λ ∈ L²(Ω) : λ_a ⩽ λ(x) ⩽ λ_b a.e. in Ω},    (3.3)

with λ_a, λ_b ∈ R and λ_b/2 < λ_a < λ_b.
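To make the state equation concrete, here is a minimal explicit finite-difference sketch of (3.2); the time step tau, the horizon T and the regularization eps (which avoids division by zero in a(γ, ·)) are discretization choices of ours, not values from the paper.

```python
import numpy as np

def solve_state(u0, lam, gamma=1.0, tau=0.1, T=1.0, eps=1e-8):
    """Explicit scheme for u_t = div(a(gamma, grad u) grad u) - lam*(u - u0),
    with a(gamma, s) = log(1 + |s|)^gamma / |s| and zero-flux boundaries."""
    u = u0.copy()
    a = lambda d: np.log1p(np.abs(d)) ** gamma / (np.abs(d) + eps)
    for _ in range(int(T / tau)):
        # one-sided differences toward the four neighbours
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # zero flux across the image border (Neumann condition)
        dn[-1, :] = 0.0; ds[0, :] = 0.0
        de[:, -1] = 0.0; dw[:, 0] = 0.0
        diffusion = a(dn) * dn + a(ds) * ds + a(de) * de + a(dw) * dw
        u = u + tau * (diffusion - lam * (u - u0))
    return u
```

For γ = 1 the coefficient satisfies log(1 + s)/s ⩽ 1, so the explicit step remains stable for moderate tau and lam; lam may be a scalar or an array over the image domain, matching the spatially varying control of the paper.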
Let us define G, the affine solution operator associated with the state equation (3.2), as

G : L²(Q) −→ L²(Q), λ ↦ G(λ) := u.

The operator G is bounded, continuous and compact [26]. For u = G(λ), the problem (3.1)-(3.2) can be rewritten as follows:

min_{λ∈U_ad} (1/2)∫_Ω |G(λ) − u0(x)|² dx + (α/2)∫_Ω |λ(x)|² dx.    (3.4)

Now, we introduce an auxiliary variable y such that λ = y; consequently, the problem (3.1)-(3.2) becomes

min_{λ,y∈L²(Ω)} J(λ) + I_K(y) subject to λ = y,    (3.5)

where I_K(·) is the indicator functional of the set K, defined by

I_K(y) = 0 if y ∈ K,  +∞ if y ∈ L²(Ω)\K,    (3.6)

and

J(λ) = (1/2)∫_Ω |G(λ) − u0(x)|² dx + (α/2)∫_Ω |λ(x)|² dx.    (3.7)
We write the augmented Lagrangian functional associated with the minimization problem (3.5) as

L_μ(λ, y, β) = J(λ) + I_K(y) − (β, λ − y) + (μ/2)‖λ − y‖²,    (3.8)

where μ is a positive penalty parameter and β ∈ L²(Ω) is a Lagrange multiplier. This leads to the ADMM steps:

λ^{k+1} = arg min_{λ∈L²(Ω)} L_μ(λ, y^k, β^k),    (3.9a)
y^{k+1} = arg min_{y∈L²(Ω)} L_μ(λ^{k+1}, y, β^k),    (3.9b)
β^{k+1} = β^k − μ(λ^{k+1} − y^{k+1}).    (3.9c)
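The three steps (3.9a)-(3.9c) can be sketched on a finite-dimensional toy problem, where the λ-step reduces to a linear solve and the y-step to the pointwise projection onto the box K; the quadratic data term below is an illustrative stand-in for the reduced objective J, not the paper's PDE-constrained functional.

```python
import numpy as np

def admm_box(A, b, alpha, lo, hi, mu=1.0, n_iter=300):
    """ADMM for min 1/2||Ax - b||^2 + alpha/2||x||^2 + I_[lo,hi](y), x = y,
    mirroring the splitting (3.9a)-(3.9c)."""
    n = A.shape[1]
    y = np.zeros(n)
    beta = np.zeros(n)
    H = A.T @ A + (alpha + mu) * np.eye(n)   # normal matrix of the x-step
    for _ in range(n_iter):
        x = np.linalg.solve(H, A.T @ b + beta + mu * y)   # step (3.9a)
        y = np.clip(x - beta / mu, lo, hi)                # step (3.9b): projection onto K
        beta = beta - mu * (x - y)                        # step (3.9c)
    return y
```

The y-step is cheap because the indicator of a box splits pointwise, which is exactly why the constraint λ ∈ U_ad is moved onto the auxiliary variable in (3.5).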

3.1. Algorithm. Let G be the solution operator defined as

G : V −→ V, λ ↦ u(λ) = G(λ).

The optimization problem in reduced form reads

min_{λ∈U_ad} J(λ) = f(u(λ), λ),    (3.10)

where we assume that f : V × V −→ R and

e(u, λ) = ∂u/∂t − div((log^γ(1 + |∇u|)/|∇u|)∇u) + λ(u − u0) = 0 in Ω × (0, T).
The variational formulation of the state equation is given by:

Find u ∈ W^{1,x}L_A(Q) with ∂u/∂t ∈ W^{−1,x}L_A(Q) + L²(Q) such that
⟨∂u/∂t, v⟩_{W^{−1,x}L_A(Ω), W^{1,x}L_A(Q)} + ∫_Q (log^γ(1 + |∇u|)/|∇u|)∇u·∇v dx dt + ∫_Q λ(u − u0)v dx dt = 0
for all v ∈ W^{1,x}L_A(Q).    (3.11)
Let e : W^{1,x}L_A(Q) × L²(Ω) −→ W^{−1,x}L_A(Q) be defined by

⟨e(u, λ), v⟩_{W^{−1,x}L_A(Q), W^{1,x}L_A(Q)} = ⟨∂u/∂t, v⟩_{W^{−1,x}L_A(Ω), W^{1,x}L_A(Q)} + ∫_Q (log^γ(1 + |∇u|)/|∇u|)∇u·∇v dx dt + ∫_Q λ(u − u0)v dx dt;

its partial derivative with respect to u is given by

⟨e_u(u, λ)w, v⟩_{W^{−1,x}L_A(Q), W^{1,x}L_A(Q)} = ∫_Ω (u(T, x) − u0(x) + v(T, x))w(T, x) dx − ∫_Ω v(0, x)w(0, x) dx − ∫_0^T∫_Ω ∂_t v w dx dt + ∫_0^T∫_Ω (γ log^{γ−1}(1 + |∇u|)/(1 + |∇u|))∇w·∇v dx dt + ∫_0^T∫_Ω λwv dx dt.
The adjoint equation:

(AE)  −∂_t v + div((γ log^{γ−1}(1 + |∇u|)/(1 + |∇u|))∇v) + λv = 0 in (0, T) × Ω,
      v(T, x) = u0(x) − u(T, x) in Ω,    (3.12)
      u(x, 0) = u0(x) in Ω.
The partial derivative with respect to λ is given by

⟨e_λ(u, λ)h, v⟩_{W^{−1,x}L_A(Q), W^{1,x}L_A(Q)} = α∫_Ω λh dx + ∫_0^T∫_Ω h(u(x) − u0)v(x) dx dt,    (3.13)

then

e_λ(u, λ)h = αλh + ∫_0^T h(u(x) − u0)v(x) dt.    (3.14)
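Assuming the state and adjoint trajectories have been computed and stored on a time grid, the pointwise expression (3.14) can be assembled as follows; the array layout (n_steps, H, W) and the rectangle-rule quadrature for the time integral are assumptions of this sketch, not choices from the paper.

```python
import numpy as np

def reduced_derivative(u_traj, v_traj, u0, lam, alpha, dt):
    """Assemble, at a given lam, the quantity of (3.14):
        alpha * lam + int_0^T (u - u0) v dt,
    evaluated pointwise in x with a rectangle rule in time.
    u_traj, v_traj: arrays of shape (n_steps, H, W)."""
    integrand = (u_traj - u0[None]) * v_traj          # (u - u0) v at each time step
    return alpha * lam + dt * integrand.sum(axis=0)   # pointwise in x
```

This is the quantity that feeds the variational inequality (3.15) below and, in practice, the descent direction of the gradient algorithm.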
Let λ be a local optimal solution to (3.10) and u(λ) its associated state. The optimality system is then given by the following equations:

e(u(λ), λ) = 0,
e_u(u(λ), λ)* v = J_u(u(λ), λ),
e_λ(u(λ), λ)* v = J_λ(u(λ), λ),

and we have

(J_λ(u(λ), λ) − e_λ(u(λ), λ)* v, w − λ)_{L²(Ω)} ⩾ 0, ∀w ∈ U_ad,    (3.15)

with

J_u(u(λ), λ) = u − u0,
J_λ(u(λ), λ) = αλ,

then

(J_λ(u(λ), λ)(x) − e_λ(u(λ), λ)* v(x))(w − λ(x)) ⩾ 0, ∀w ∈ R : λ_a ⩽ w ⩽ λ_b.    (3.16)

If λ is a local optimal solution of (3.10), then it satisfies the variational inequality

J′(λ)(v − λ) ⩾ 0,    (3.17)

for all admissible directions v − λ with v ∈ U_ad (since U_ad is convex).
Let us introduce the multiplier

μ = J_λ(u(λ), λ) − e_λ(u(λ), λ)* v;

then

λ = P_{U_ad}(λ − cμ) for all c > 0,

where P : L²(Ω) −→ L²(Ω) denotes the projection onto U_ad; pointwise,

λ(x) = P_{[λ_a, λ_b]}(λ(x) − cμ(x)) a.e. in Ω.

We decompose μ into its positive and negative parts as μ = μ_a − μ_b, where μ_a = max(0, μ(x)) and μ_b = −min(0, μ(x)).
The inequality (3.16) can then be written as:

J_λ(u(λ), λ) − e_λ(u(λ), λ)* v = μ_a − μ_b a.e. in Ω,
μ_a(x)(w − λ(x)) ⩾ 0 a.e. in Ω, ∀w ∈ [λ_a, λ_b],
μ_b(x)(w − λ(x)) ⩽ 0 a.e. in Ω, ∀w ∈ [λ_a, λ_b],
μ_a(x) ⩾ 0, μ_b(x) ⩾ 0 a.e. in Ω,
λ_a ⩽ λ ⩽ λ_b a.e. in Ω.
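The projection formula above suggests the fixed-point (projected descent) iteration sketched below; the callable grad stands in for the reduced gradient of J and is an assumption of this sketch rather than the paper's adjoint-based gradient.

```python
import numpy as np

def projected_descent(grad, lam0, c, lam_a, lam_b, n_iter=200, tol=1e-10):
    """Iterate lam <- P_[lam_a, lam_b](lam - c * grad(lam)) until the
    update is stationary, i.e. until the projection fixed point is reached."""
    lam = lam0.copy()
    for _ in range(n_iter):
        lam_new = np.clip(lam - c * grad(lam), lam_a, lam_b)  # pointwise projection
        if np.max(np.abs(lam_new - lam)) < tol:
            return lam_new
        lam = lam_new
    return lam
```

At a solution the iteration is stationary, which is exactly the characterization λ = P(λ − cμ) derived above.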

Algorithm 1. Projected descent method.
3.2. Numerical results. In this section, we focus on the presentation of the most significant results that demonstrate the effectiveness of the proposed constrained PDE model compared to some competing denoising methods.

Our results are compared with some competitive denoising methods, including the TV model [1], the TGV model [4] and the denoising PDE where the parameter λ is a scalar.
The Peak Signal to Noise Ratio (PSNR, the ratio of the maximum value of the image to the mean quadratic error between the original and restored images) and the Structural Similarity index (SSIM, computed over multiple windows of a given image) are used to measure the quality of the resulting images. They are defined as follows:

PSNR = 10 log₁₀(255²/MSE),    (3.18)

where MSE is the mean square error, given by

MSE = (1/nm) Σ_{i=1}^m Σ_{j=1}^n (u(i, j) − v(i, j))²,

with u the noise-free image of size m × n and v the restored image, and

SSIM(u, v) = (2μ_u μ_v + r₁)(2 cov_{uv} + r₂) / ((μ_u² + μ_v² + r₁)(σ_u² + σ_v² + r₂)),    (3.19)

where cov_{uv} is the covariance of u and v, r₁ = (m₁l)² and r₂ = (m₂l)² are two stabilizing constants, l is the dynamic range of the pixel values, and μ_u, μ_v and σ_u², σ_v² are the means and the variances of u and v respectively.
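The two quality measures can be sketched directly from (3.18) and (3.19). Note that the SSIM below is a single-window (global) variant that only illustrates the formula; the measure used in the experiments is averaged over multiple local windows, and the constants k1, k2 are the customary values, assumed here rather than taken from the paper.

```python
import numpy as np

def psnr(u, v, peak=255.0):
    """PSNR as in (3.18): 10 * log10(peak^2 / MSE)."""
    mse = np.mean((np.asarray(u, float) - np.asarray(v, float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def ssim_global(u, v, peak=255.0, k1=0.01, k2=0.03):
    """Single-window sketch of the SSIM formula (3.19)."""
    c1, c2 = (k1 * peak) ** 2, (k2 * peak) ** 2  # stabilizing constants r1, r2
    mu_u, mu_v = u.mean(), v.mean()
    var_u, var_v = u.var(), v.var()
    cov = np.mean((u - mu_u) * (v - mu_v))
    return ((2 * mu_u * mu_v + c1) * (2 * cov + c2)) / (
        (mu_u ** 2 + mu_v ** 2 + c1) * (var_u + var_v + c2))
```

For identical images SSIM equals 1 and PSNR is infinite, so in practice both are only compared between restorations of the same noisy input.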
Figure 1. Test images for comparison experiments.

Using different regularizers (the TV model [1] and the TGV model [4]), Figure 2 shows the evolution of the PSNR and SSIM of the reconstruction results up to 120 iterations (with σ_noise = 20). We notice that the evolution of the PSNR and SSIM associated with our approach is higher than for the other methods. This demonstrates the effectiveness of the proposed PDE.

Figure 2. The variation of the PSNR and SSIM values (where σ_noise = 20) compared to the number of iterations for various regularizers.

Figure 3. Right: our model; middle: TV method [3]; left: TGV method [4].



Figure 4. Right: our model; middle: TV method [3]; left: TGV method [4].

Figure 5. Right: our model; middle: TV method [3]; left: TGV method [4].

Noise level σ = 20       Cameraman   Baboon   Parrots
Our (PSNR)               28.537      26.225   30.184
Our (SSIM)               0.820       0.747    0.880
TV model [3] (PSNR)      27.315      24.355   29.544
TV model [3] (SSIM)      0.817       0.595    0.875
TGV model [4] (PSNR)     27.307      23.940   29.511
TGV model [4] (SSIM)     0.810       0.536    0.877

Table 1. PSNR and SSIM comparisons for the images of Figure 1.
Acknowledgments

The authors would like to thank the anonymous referee for his fruitful remarks.
References

[1] Hintermüller, Michael, and Carlos N. Rautenberg. "Optimal selection of the regularization function in a weighted total variation model. Part I: Modelling and theory." Journal of Mathematical Imaging and Vision 59.3 (2017): 498-514.
[2] Hintermüller, Michael, et al. "Dualization and automatic distributed parameter selection of total generalized variation via bilevel optimization." arXiv preprint arXiv:2002.05614 (2020).
[3] Rudin, Leonid I., Stanley Osher, and Emad Fatemi. "Nonlinear total variation based noise removal algorithms." Physica D: Nonlinear Phenomena 60.1-4 (1992): 259-268.
[4] Bergounioux, Maïtine, and Loïc Piffet. "A second-order model for image denoising." Set-Valued and Variational Analysis 18.3 (2010): 277-306.
[5] Aboulaich, Rajae, Driss Meskine, and Ali Souissi. "New diffusion models in image processing." Computers & Mathematics with Applications 56.4 (2008): 874-882.
[6] Bredies, Kristian, Karl Kunisch, and Thomas Pock. "Total generalized variation." SIAM Journal on Imaging Sciences 3.3 (2010): 492-526.
[7] El Mourabit, I., El Rhabi, M., Hakim, A., Laghrib, A., & Moreau, E. (2017). A new denoising model for multi-frame super-resolution image reconstruction. Signal Processing, 132, 51-65.
[8] Rudin, L. I., Osher, S., & Fatemi, E. (1992). Nonlinear total variation based noise removal algorithms. Physica D: Nonlinear Phenomena, 60(1-4), 259-268.
[9] Perona, P., & Malik, J. (1990). Scale-space and edge detection using anisotropic diffusion. IEEE Transactions on Pattern Analysis and Machine Intelligence, 12(7), 629-639.
[10] Catté, F., Lions, P. L., Morel, J. M., & Coll, T. (1992). Image selective smoothing and edge detection by nonlinear diffusion. SIAM Journal on Numerical Analysis, 29(1), 182-193.
[11] Yaroslavsky, L. P. (2012). Digital Picture Processing: An Introduction (Vol. 9). Springer Science & Business Media.
[12] Afraites, L., Hadri, A., Laghrib, A., & Nachaoui, M. (2021). A weighted parameter identification PDE-constrained optimization for inverse image denoising problem. The Visual Computer, 1-16.
[13] Afraites, L., Atlas, A., Karami, F., & Meskine, D. (2016). Some class of parabolic systems applied to image processing. Discrete & Continuous Dynamical Systems-B, 21(6), 1671.
[14] Elmahi, A., and D. Meskine. "Strongly nonlinear parabolic equations with natural growth terms in Orlicz spaces." Nonlinear Analysis: Theory, Methods & Applications 60.1 (2005): 1-35.
[15] Lanza, A., Morigi, S., & Sgallari, F. (2016). Constrained TVp-l2 model for image restoration. Journal of Scientific Computing, 68(1), 64-91.
[16] Zhang, J., & Chen, K. (2015). A total fractional-order variation model for image restoration with nonhomogeneous boundary conditions and its numerical solution. SIAM Journal on Imaging Sciences, 8(4), 2487-2518.
[17] Jain, Anil K. (1989). Fundamentals of Digital Image Processing. Prentice-Hall, Inc.
[18] Farsiu, S., Robinson, D., Elad, M., & Milanfar, P. (2004). Advances and challenges in super-resolution. International Journal of Imaging Systems and Technology, 14(2), 47-57.
[19] Lv, X. G., Song, Y. Z., Wang, S. X., & Le, J. (2013). Image restoration with a high-order total variation minimization method. Applied Mathematical Modelling, 37(16-17), 8210-8224.
[20] Bergounioux, M., & Piffet, L. (2010). A second-order model for image denoising. Set-Valued and Variational Analysis, 18(3), 277-306.
[21] Van Chung, C., De los Reyes, J., & Schönlieb, C.-B. (2017). Learning optimal spatially-dependent regularization parameters in total variation image denoising. Inverse Problems, 33(7), 074005.
[22] Hintermüller, M., Rautenberg, C. N., Wu, T., & Langer, A. (2017). Optimal selection of the regularization function in a weighted total variation model. Part II: Algorithm, its analysis and numerical tests. Journal of Mathematical Imaging and Vision, 59(3), 515-533.
[23] De los Reyes, J. C., Schönlieb, C.-B., & Valkonen, T. (2017). Bilevel parameter learning for higher-order total variation regularisation models. Journal of Mathematical Imaging and Vision, 57(1), 1-25.
[24] Elmahi, A., and D. Meskine. "Parabolic equations in Orlicz spaces." Journal of the London Mathematical Society 72.2 (2005): 410-428.
[25] Nocedal, Jorge, and Stephen J. Wright. Numerical Optimization. New York, NY: Springer, 1999.
[26] Tröltzsch, Fredi. Optimal Control of Partial Differential Equations: Theory, Methods, and Applications. Vol. 112. American Mathematical Society, 2010.
