Digital Signal Processing with Matlab Examples, Volume 3
Model-Based Actions and Sparse Representation
Signals and Communication Technology
More information about this series at http://www.springer.com/series/4748
Jose Maria Giron-Sierra
Systems Engineering and Automatic Control
Universidad Complutense de Madrid
Madrid
Spain
MATLAB® is a registered trademark of The MathWorks, Inc., and is used with permission. The
MathWorks does not warrant the accuracy of the text or exercises in this book. This book’s use or
discussion of MATLAB software or related products does not constitute endorsement or sponsorship by
the MathWorks of a particular pedagogical approach or particular use of the MATLAB software.
Preface
This is the third book of a trilogy. As in the other books, a series of MATLAB
programs is embedded in the chapters for several purposes: to illustrate the
techniques, to provide implementation examples, and to encourage personal
exploration starting from a working example.
The book has two parts, each having just one chapter. These chapters are long
and have a considerable number of bibliographic references.
When using GPS in a car, it is sometimes impossible to keep contact with the
satellites, for instance inside tunnels. In this case, a model of the car motion (a
dynamic model) can be used for data substitution. The adequate combination of
measurements and models is the key idea of the Kalman filter, which is the central
topic of the first part of the book. This filter was formulated for linear conditions.
There are modifications for nonlinear conditions, like the extended Kalman filter and
the unscented Kalman filter. A newer idea is to use particle filters. These topics are
covered in the chapter under an important perspective: Bayesian filtering.
Compressed sensing has emerged as a promising idea. One of the intended
applications is networked devices and sensors, which are becoming a reality. This
topic is considered in the second part of the book. Some experiments that
demonstrate image denoising applications are included.
For easier reading of the book, the longer programs have been put in an
appendix. A second appendix, on optimization, has been added to support some
contents of the last chapter.
The reader is invited to discover the profound interconnections and common-
alities that exist behind the variety of topics in this book. This common ground
may well prove fertile soil for the future of signal processing.
As said in the prefaces of the other books, our particular expertise in signal
processing has two main roots: research and teaching. I belong to the Faculty of
Physics, Universidad Complutense de Madrid, Spain. During our experimental
research on autonomous vehicles, maritime drones, satellite control, etc., we
practiced the main methods of digital signal processing, both to use a variety of
sensors and to predict vehicle motions. For years I have taught Signal Processing
in a Master's program in Biomedical Physics and a Master's program in New
Technologies.
The style of the programs included in the book is deliberately simple. The
reader is invited to type in the programs, as this helps in catching coding details.
In any case, all programs are available from the book web page:
www.dacya.ucm.es/giron/SPBook3/Programs.
Many different materials have been used to build this book: articles, blogs,
code, experimentation. I have tried to cite with adequate references all the pieces
that have been useful. If someone has been forgotten, please contact me. Most of
the references cited in the text are available on the Internet, and we must express
our gratitude for the public information available in this way.
Please send feedback and suggestions for further improvement and support.
Acknowledgments
Thanks to my university, my colleagues, and my students. Since this and the other
books required a lot of time taken from nights, weekends, and holidays, I must
sincerely express my gratitude to my family.
List of Figures
Figure 1.1 Keeping the car at a distance from the road border (p. 7)
Figure 1.2 Prediction (P), measurement (M) and update (U) PDFs (p. 9)
Figure 1.3 Variation of K as a function of σy/σx (p. 10)
Figure 1.4 The algorithm is a cycle (p. 18)
Figure 1.5 A two-tank system example (p. 18)
Figure 1.6 System outputs (measurements) (p. 19)
Figure 1.7 System states, and states estimated by the Kalman filter (p. 19)
Figure 1.8 Error evolution (p. 22)
Figure 1.9 Evolution of the Kalman gains (p. 23)
Figure 1.10 Evolution of the state covariance (p. 24)
Figure 1.11 The prediction step, from left to right (p. 25)
Figure 1.12 The measurement (p. 25)
Figure 1.13 Estimation of the next state (p. 26)
Figure 1.14 Bayes net corresponding to the Kalman filter (p. 27)
Figure 1.15 Satellite position under disturbances (p. 33)
Figure 1.16 Example of nonlinear function: arctan() (p. 34)
Figure 1.17 Original and propagated PDFs (p. 35)
Figure 1.18 Propagation of a PDF through nonlinearity (p. 36)
Figure 1.19 Propagated PDFs for σ = 0.7, 1, 2 (p. 37)
Figure 1.20 Propagation of a shifted PDF through nonlinearity (p. 38)
Figure 1.21 Basic linear approximation using tangent (p. 42)
Figure 1.22 Falling body example (p. 43)
Figure 1.23 System states (cross marks) (p. 44)
Figure 1.24 Distance measurement and drag (p. 45)
Figure 1.25 The three non-zero components of the ∂f/∂x Jacobian (p. 47)
Figure 1.26 Propagation of ellipsoids (state n = 43 to 44) (p. 49)
Figure 1.27 System states (cross marks) under noisy conditions (p. 51)
Figure 1.28 Distance measurement. Drag (p. 52)
Figure 1.29 System states (cross marks), and states estimated by the EKF (continuous) (p. 54)
Figure 2.1 The line L and: (a) the ball B1/2, (b) the ball B1, (c) the ball B2 (p. 155)
Figure 2.2 An example of the solution paths obtained with LARS (p. 159)
Figure 2.3 Solution paths using LARS for the diabetes set (p. 162)
Figure 2.4 Solution paths using LASSO for the diabetes set (p. 162)
Figure 2.5 Soft-thresholding operator (p. 166)
Figure 2.6 Application of ISTA for a sparse signal recovery example (p. 171)
Figure 2.7 Evolution of objective function along ISTA iterations (p. 172)
Figure 2.8 A BP sparsest solution (p. 177)
Figure 2.9 Evolution of objective function ||x||1 along ADMM iterations (p. 177)
Figure 2.10 Sparse solution obtained with OMP (p. 180)
Figure 2.11 Evolution of the norm of the residual (p. 181)
Figure 2.12 The CS scheme (p. 183)
Figure 2.13 A sparse signal being measured (p. 186)
Figure 2.14 Recovered signal (p. 187)
Figure 2.15 Evolution of ||x||1 along iterations (p. 187)
Figure 2.16 A phase transition curve (p. 191)
Figure 2.17 Original image (p. 196)
Figure 2.18 (right) Chan-Vese segmentation, (left) level set (p. 197)
Figure 2.19 A patch dictionary (p. 203)
Figure 2.20 The dictionary problem (p. 204)
Figure 2.21 Original picture, and image with added Gaussian noise (p. 207)
Figure 2.22 Patch dictionary obtained with K-SVD (p. 207)
Figure 2.23 Denoised image (p. 208)
Figure 2.24 A synthetic image (p. 209)
Figure 2.25 Example of figure during MCA process (p. 211)
Figure 2.26 The original composite signal and its components (p. 212)
Figure 2.27 Visualization of banded matrix using spy() (p. 216)
Figure 2.28 Visualization of the Bucky ball matrix structure (p. 217)
Figure 2.29 Visualization of the Bucky ball graph (p. 217)
Figure 2.30 Visualization of HB/nnc1374 matrix using spy() (p. 218)
Figure 2.31 Visualization of heat diffusion example (p. 220)
Figure 2.32 Effect of Gaussian diffusion, original on top (p. 222)
Figure 2.33 Effect of Gaussian anti-diffusion, original on top (p. 224)
Figure 2.34 The diffusion coefficient and the corresponding flux function (p. 226)
Figure 2.35 Denoising of image with salt & pepper noise, using the P-M method (p. 227)
Figure 2.36 Example of Bregman distance (p. 228)
Figure 2.37 ROF total variation denoising using split Bregman (p. 233)
Figure 2.38 Evolution of nuclear norm (p. 235)
Figure 2.39 A test of reconstruction quality (p. 236)
1.1 Introduction
Consider the case of satellite tracking. You must determine where your satellite is,
using a large antenna. Signals from the satellite are noisy. Measurements of antenna
angles have some uncertainty margins. But you have something that may help: the
satellite follows a known orbit, so at time T it must be at position P. However, this
help should be taken with caution, since there are orbit perturbations.
Satellite tracking is an example of a more general scenario. The target is to estimate
the state of a dynamic system, a state that changes over time. The means you
have are measurements and a mathematical model of the system dynamics. These
two means should be combined as well as possible.
Some more examples could be useful to capture the nature of the problem to be
considered in this chapter.
According to a description given at a scientific conference, a research team
was developing a small UAV (unmanned aerial vehicle) to fly over their university
campus. They used distance measurements from an ultrasonic sensor for flight altitude
control. Sometimes the altitude measurements were lost or completely wrong. At
such moments, these measurements were substituted by a reasonable value. At least
a simplistic (or even implicit) model is needed here for two reasons: to determine
that a measurement is wrong, and to obtain a reasonable replacement value.
Nowadays a similar problem is found with vehicular GPS. In certain circumstances
of the travel, for instance inside a tunnel, the connection with the satellites is lost,
but the information given to the driver should continue.
Another example is the case of biosignals. When measuring electroencephalo-
grams (EEG) or electrocardiograms (ECG), artifacts sometimes appear due to
bad electrode contact, eye blinking, interference, etc. These bad measurements
should be correctly identified as outliers, and some correction procedure should be
applied.
This chapter deals with optimal state estimation for dynamic systems. The
Bayesian methodology provides a general framework for this problem. Over time,
a set of important practical methods, which can be seen as Bayesian instances, has
been developed, such as the Kalman filter and the particle filter.
Given a dynamic system, the Bayesian approach to state estimation attempts
to obtain the posterior PDF of the state vector using all the available information.
Part of this information is provided by a model of the system, and part is based
on measurements. The state estimation can be done with a recursive filter, which
repeats a prediction operation and an update operation.
Denote the posterior PDF at time step k as p(xk | Yk), where Yk is the set of all
previous measurements, Yk = {yj, j = 1, 2, ..., k}.
The prediction operation propagates the posterior PDF from time step k − 1 to k,
as follows:
p(xk | Yk−1) = ∫ p(xk | xk−1) · p(xk−1 | Yk−1) dxk−1     (1.1)
     (A)              (B)              (C)
where A is the prior at k, B is given by the system model, and C is the posterior at
k − 1.
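As a quick numerical illustration (this is our sketch, not one of the book programs; the random-walk model, the grid, and all constants are arbitrary choices), the prediction integral can be evaluated on a grid for a scalar system x(k) = x(k−1) + w:

% Chapman-Kolmogorov prediction on a grid (illustrative sketch)
g = @(x,m,s) exp(-(x-m).^2/(2*s^2))/(s*sqrt(2*pi)); % Gaussian PDF
x = linspace(-10,10,400); dx = x(2)-x(1); % state grid
post = g(x,1,0.8); % posterior at k-1: N(1, 0.8^2)
sw = 0.5; % std of the process noise w
T = g(x(:)-x(:).',0,sw); % T(i,j) = p(x(i) | x(j)), transition kernel
prior = (T*post(:))*dx; % the integral (1.1) as a Riemann sum
plot(x,post,'b',x,prior,'r--');
legend('posterior at k-1','predicted prior at k'); xlabel('x');

As expected, the predicted prior is wider than the posterior at k−1, since the process noise adds uncertainty.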
The update operation takes into account the new measurement yk at k:

p(xk | Yk) = [ p(yk | xk) · p(xk | Yk−1) ] / p(yk | Yk−1)
                  (L)

where the term L is the likelihood of the new measurement.

In general, the system model consists of two equations, one for the state transition
and one for the measurement:

x(k) = f(x(k − 1)) + w(k − 1)
y(k) = h(x(k)) + v(k)

where w(k) is the process noise, and v(k) is the observation noise.

The first equation of the system model can be used for the term B, and the second
equation for the term L.

In some cases, with linear dynamics and measurement, the system model can be
a Gauss-Markov model:

x(n + 1) = A x(n) + B u(n) + w(n)
y(n) = C x(n) + v(n)
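As a small complement (ours, with invented constants), a scalar instance of this Gauss-Markov model, with u = 0, can be simulated in a few lines:

% Simulation of a scalar Gauss-Markov model (illustrative sketch)
A = 0.95; C = 1; % model constants
sw = 0.3; sv = 0.5; % noise standard deviations
N = 200;
x = zeros(1,N); y = zeros(1,N);
for n = 1:N-1
   y(n) = C*x(n) + sv*randn; % measurement y(n)
   x(n+1) = A*x(n) + sw*randn; % state transition (u = 0)
end
y(N) = C*x(N) + sv*randn;
plot(1:N,x,'k',1:N,y,'r.'); xlabel('n');
legend('state x','measurement y');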
Notice that, in order to align with the notation commonly employed in the Bayesian
filtering literature, we denote vectors with boldface letters (in previous chapters we
used bars over letters).
The Kalman filter [55] offers an optimal solution for state estimation, provided
that the system is linear and the noises are Gaussian. It is a particular case of the
Bayesian recursive filter.
In more general cases with non-linear dynamics, the system model is based on
the two functions f(·) and h(·). A linearization strategy could be applied to still use
the Kalman filter algorithm, and this is called ‘Extended Kalman Filter’ (EKF). An
alternative is to propagate a few special points (‘sigma points’) through the system
equations; with these points it is possible to approximate the prior and posterior PDFs.
An example of this alternative is the ‘Unscented Kalman Filter’ (UKF). For non-
linear/non-Gaussian cases that do not tolerate approximations, the ‘Particle Filter’
could be used; the filter is based on the propagation of many points.
The chapter starts with the standard Kalman filter, after some preliminaries. Then
nonlinearities are considered, and there are sections on EKF, UKF, and particle
filters. All these methods are naturally linked to the use of computers or digital
processors, and so the world of numerical computation must be visited in another
section. Smoothing is another important field that is treated in Sect. 1.10. The last
sections intend to at least introduce important extensions and applications of optimal
state estimation.
Some limits had to be set for this chapter, and therefore there is no space for
many related topics, like H-infinity filters, game theory, exact filters, etc. The last
section offers some links for the interested reader.
The chapter is mainly based on [8, 15, 19, 40, 72, 96].
In general, many problems related to state estimation remain to be adequately
solved. The field is open for more research.
1.2 Preliminaries
• Scalar case:

mean: μx = E(x(n))

• Two processes:

f(y) = E(x | y)
This is an example taken from [95]. It is the case of driving a car, keeping a distance
x from the border of the road (Fig. 1.1).
In this example it is useful to take into account that the product of two Gaussians,
N(μ1, Σ1) · N(μ2, Σ2), is proportional to another Gaussian, N(μ, Σ), with:

μ = Σ (Σ1⁻¹ μ1 + Σ2⁻¹ μ2)
Σ⁻¹ = Σ1⁻¹ + Σ2⁻¹     (1.8)

(it is usual in the literature to denote the Gaussian PDF as N(μ, Σ)).
At time 0, a passenger guesses that the distance to the border is y(0). It can be
considered as an inexact measurement, with a variance σ²y0. One could assume a
Gaussian conditional density f(x | y(0)) with variance σ²y0. At this moment, the best
estimate is x̂(1|0) = y(0).

At time 1, the driver also takes a measurement, y(1), which is more accurate.
Assume again a Gaussian conditional density f(x | y(1)), with σ²y1 < σ²y0. The joint
distribution that combines both measurements (product of Gaussians) would have:
μx = [σ²y1 / (σ²y0 + σ²y1)] · y(0) + [σ²y0 / (σ²y0 + σ²y1)] · y(1)     (1.9)

1/σ²x = 1/σ²y0 + 1/σ²y1     (1.10)
The new estimate of the distance would be μx. The corresponding variance, σ²x, is
smaller than either σ²y0 or σ²y1: the combination of two estimates is better than
either one alone.
x̂(1|1) = μx = y(0) + [σ²y0 / (σ²y0 + σ²y1)] · (y(1) − y(0)) = x̂(1|0) + K(1) · (y(1) − x̂(1|0))     (1.11)

where:

K(1) = σ²y0 / (σ²y0 + σ²y1)     (1.12)
Equation (1.11) has the form of a Kalman filter; K() is the Kalman gain. The equation
says that the best estimate can be obtained from the previous best estimate plus a
correction term. This term compares the latest measurement with the previous best
estimate.
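This fusion is easy to check numerically; in the following sketch (ours) the measurement values and variances are invented for illustration:

% Fusion of two noisy measurements (eqs. (1.9)-(1.12))
y0 = 3.0; sy0 = 1.0; % first guess and its std
y1 = 3.6; sy1 = 0.5; % second, more accurate measurement
K1 = sy0^2/(sy0^2 + sy1^2); % gain, eq. (1.12)
xest = y0 + K1*(y1 - y0); % fused estimate, eq. (1.11)
sx2 = 1/(1/sy0^2 + 1/sy1^2); % fused variance, from eq. (1.10)
fprintf('estimate = %.3f, variance = %.3f\n', xest, sx2);
% the fused variance (0.2) is smaller than either 1.0 or 0.25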
Likewise, the variance could be written as follows:
The term w is random perturbation with variance σw2 . The lateral velocity is u, set
equal to zero. After some time T , the best estimate (prediction) would be:
The increase of variance is not good. A new measurement is welcome. Suppose that
at time 2 a new measurement is taken. Again the product of Gaussians appears, as
we combine the prediction x̂(2|1) and the measurement y(2). Therefore:
where:

K(2) = σ²x(2|1) / (σ²x(2|1) + σ²y2)     (1.19)
[Figure 1.2: the prediction (P), measurement (M) and update (U) PDFs, plotted versus x]
The philosophy of the Kalman filter is to obtain better estimates by combining pre-
diction and measurement (the combination of two estimates is better). The practical
procedure is to update the prediction with the measurement.
Figure 1.2 shows what happens with the PDFs of the prediction (P), the measure-
ment (M), and the update (U). The update PDF has the smallest variance, so it is a
better estimate.
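The figure can be reproduced with a few lines; the following sketch is ours, and the means and variances are arbitrary choices:

% PDFs of prediction (P), measurement (M) and update (U)
g = @(x,m,s) exp(-(x-m).^2/(2*s^2))/(s*sqrt(2*pi)); % Gaussian PDF
x = -10:0.01:15;
mP = 1; sP = 3; % prediction PDF
mM = 5; sM = 2.2; % measurement PDF
sU = sqrt(1/(1/sP^2 + 1/sM^2)); % fused (update) std
mU = sU^2*(mP/sP^2 + mM/sM^2); % fused (update) mean
plot(x,g(x,mP,sP),'b',x,g(x,mM,sM),'r',x,g(x,mU,sU),'k');
legend('P','M','U'); xlabel('x');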
[Figure 1.3: the gain K versus the ratio σy/σx]
Another aspect of the Kalman filter philosophy is that, in order to estimate the
present system state, the value of K() is modulated according to the confidence
offered by the measurements or by the prediction. Suppose that the prediction
variance is constant; then, if the measurement uncertainty increases, K decreases:
it could be said that the correction exerted by K becomes more 'prudent'.
Figure 1.3 shows how K depends on σy/σx. The figure has been generated with
Program 1.17.
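Program 1.17 is not reproduced here, but a minimal sketch of the same curve (ours) follows directly from K = σ²x / (σ²x + σ²y) = 1 / (1 + (σy/σx)²):

% K as a function of the ratio sigy/sigx (sketch of Fig. 1.3)
r = 0:0.01:2; % ratio sigy/sigx
K = 1./(1 + r.^2); % the gain for each ratio
plot(r,K,'k'); grid on;
xlabel('sigy/sigx'); ylabel('K');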
This subsection focuses on the propagation of mean and covariance. The basis of the
next study is an important lemma, which applies to a partition of a set of Gaussian
variables:
x = [x1 ; x2]

with:

μx = [μx1 ; μx2] ;   Sx = [S11 S12 ; S21 S22]

Lemma: the conditional distribution of x1(n), given x2(n) = x*2(n), is Gaussian
with:

mean = μx1 + S12 S22⁻¹ (x2 − μx2)
cov = S11 − S12 S22⁻¹ S21
Now, consider the partitioned vector:

[x(n + 1) ; y(n)]     (1.22)

Taking into account the model, the four covariance components are:

Σxx = A Σ(n) Aᵀ + Σw     (1.23)
Σxy = A Σ(n) Cᵀ + Σwv     (1.24)
Σyx = C Σ(n) Aᵀ + Σwvᵀ     (1.25)
Σyy = C Σ(n) Cᵀ + Σv     (1.26)
Thus, the mean of the partitioned process is the following:

μp = [A x̂(n) ; C x̂(n)] + [B ; 0] u(n)     (1.27)

Applying the lemma to this partition, the conditional mean and covariance yield:

x̂(n + 1) = A x̂(n) + B u(n) + K(n) · (y(n) − C x̂(n))
Σ(n + 1) = Σxx − K(n) Σyx

where:

K(n) = Σxy · Σyy⁻¹ = [A Σ(n) Cᵀ + Σwv] · [C Σ(n) Cᵀ + Σv]⁻¹     (1.31)
The last three equations provide a one-step prediction. The term K (n) is called the
Kalman gain.
And the important point is that, since we know the conditional mean, we have
the minimum variance estimate (MVE) of the system state: this is the Kalman filter.
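The recursion is short to code. The following sketch is ours: it assumes no cross-covariance (Σwv = 0), and the model matrices and noise levels are invented for illustration:

% One-step Kalman prediction using eqs. (1.23)-(1.31)
A = [0.9 0.1; 0 0.95]; B = [0;0]; C = [1 0]; % invented model
Sw = 0.01*eye(2); Sv = 0.25; % noise covariances (Sigma_wv = 0)
xe = [0;0]; S = eye(2); % initial estimate and covariance
xtrue = [1;-1]; u = 0; N = 100; X = zeros(2,N);
for n = 1:N
   y = C*xtrue + sqrt(Sv)*randn; % measurement y(n)
   K = (A*S*C')/(C*S*C' + Sv); % Kalman gain, eq. (1.31)
   xe = A*xe + B*u + K*(y - C*xe); % predicted estimate
   S = (A*S*A' + Sw) - K*(C*S*A'); % covariance propagation
   X(:,n) = xe;
   xtrue = A*xtrue + sqrt(Sw)*randn(2,1); % advance the true state
end
plot(1:N,X); xlabel('n'); title('predicted state estimates');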
This subsection links with subsection (5.2.5), where a simple example of a Wiener
filter was described. In that example, no model of x(n) was considered. A recursive
estimation was obtained, with an expression of the form:

x̂(n + 1) = x̂(n) + k(n) · (y(n) − x̂(n))

As was said in the second book, this expression has the typical form of
recursive estimators, where the estimation is improved as a function of the
estimation error.
In the preceding subsection the Kalman filter was derived by studying the propa-
gation of means and variances. Notice that x̂(n + 1) was obtained using y(n). This
is prediction.
Now, let us establish a second version of the filter, where x̂(n + 1) is obtained using
y(n + 1). This is filtering. A scalar Gauss-Markov example will be considered. The
derivation of the Kalman filter will be based on the minimization of the estimation
variance. This is a rather long derivation, borrowed from [8]. Although long, it is an
interesting deduction exercise that exploits orthogonality relations.
Let us proceed along three steps:
1. Problem statement
The scalar system is the following:

x(n + 1) = A x(n) + w(n)
y(n) = C x(n) + v(n)

where A and C are constants (they are not matrices; however, we prefer to use
capital letters).
Assumptions:
x(0) = 0; w(0) = 0
(it will reach the typical form, after the coming development)
The estimation variance is:

Σ(n + 1) = E{(x̂(n + 1) − x(n + 1))²}

Setting to zero its derivative with respect to the gain k(n + 1):

∂Σ(n + 1) / ∂k(n + 1) = 2 E{(x̂(n + 1) − x(n + 1)) · y(n + 1)} = 0     (1.38)
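Before continuing the derivation, the filtering form can be made concrete with a minimal scalar sketch (ours; all constants invented):

% Scalar Kalman filter, filtering form: x^(n+1) uses y(n+1)
A = 0.98; C = 1; % model constants
sw2 = 0.05; sv2 = 0.4; % noise variances
N = 150; x = 0; xe = 0; S = 1; % state, estimate, est. variance
rec = zeros(3,N);
for n = 1:N
   x = A*x + sqrt(sw2)*randn; % true state at n+1
   y = C*x + sqrt(sv2)*randn; % measurement y(n+1)
   Sp = A^2*S + sw2; % predicted variance
   k = Sp*C/(C^2*Sp + sv2); % gain k(n+1)
   xe = A*xe + k*(y - C*A*xe); % filtered estimate
   S = (1 - k*C)*Sp; % updated variance
   rec(:,n) = [x; y; xe];
end
plot(1:N,rec(1,:),'k',1:N,rec(2,:),'r.',1:N,rec(3,:),'b');
legend('state','measurement','estimate'); xlabel('n');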