Learning Objectives
• Model a wave that moves with a constant wave velocity using a mathematical expression.
• Calculate the velocity and acceleration of the medium.
• Show how the velocity of the medium differs from the wave velocity (propagation velocity).
Pulses
A pulse can be described as a wave consisting of a single disturbance that travels through the medium with constant amplitude. The pulse moves as a pattern that maintains its shape as it propagates with a constant wave speed. Since the wave speed is constant, the distance the pulse travels in a time Δt is equal to Δx = vΔt (Figure 16.8).
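The relation above can be sketched numerically. This is an illustrative example (not from the text): a Gaussian pulse shape and the assumed values v = 2 m/s and Δt = 3 s are chosen only for demonstration. The key property is that a pulse moving at constant speed satisfies y(x, t) = f(x − vt), so the whole pattern simply translates by Δx = vΔt without changing shape.

```python
import math

def pulse_shape(x):
    """An arbitrary illustrative pulse shape: a Gaussian centered at x = 0."""
    return math.exp(-x * x)

def pulse(x, t, v):
    """Displacement of the medium at position x and time t.
    The pattern translates rigidly: y(x, t) = f(x - v*t)."""
    return pulse_shape(x - v * t)

v = 2.0       # wave speed in m/s (assumed value)
dt = 3.0      # elapsed time in s (assumed value)
dx = v * dt   # distance the pulse travels: Δx = vΔt
print(dx)     # 6.0

# The peak that was at x = 0 at t = 0 is now at x = Δx:
print(pulse(0.0, 0.0, v) == pulse(dx, dt, v))  # True
```

Because the shape function f is evaluated at x − vt, every feature of the pulse moves at the same speed v, which is exactly what "maintains its shape while propagating" means.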
Tensor (machine learning)
From Wikipedia, the free encyclopedia
In machine learning, the term tensor informally refers to two different concepts for organizing and representing data. Data may be organized in a multidimensional array (M-way array), informally referred to as a "data tensor"; in the strict mathematical sense, however, a tensor is a multilinear mapping from a set of domain vector spaces to a range vector space. Observations, such as images, movies,
volumes, sounds, and relationships among words and concepts, stored in an M-
way array ("data tensor") may be analyzed either by artificial neural
networks or tensor methods.[1][2][3][4][5]
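A minimal sketch of the "data tensor" sense of the word, using NumPy: a small batch of grayscale images stored as a 3-way (M = 3) array with axes (image, row, column). The shape values here are arbitrary illustrative choices.

```python
import numpy as np

# 4 grayscale images, each 2x3 pixels, organized as (image, row, column)
batch = np.zeros((4, 2, 3))

print(batch.ndim)   # 3 -> a 3-way array, i.e. a "data tensor"
print(batch.shape)  # (4, 2, 3)
```

Note that this multidimensional-array usage carries none of the multilinear-map structure of the mathematical definition; the array is simply an indexed container for the observations.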
Tensor decomposition can factorize data tensors into smaller
tensors.[1][6] Operations on data tensors can be expressed in terms of matrix
multiplication and the Kronecker product.[7] The computation of gradients, an
important aspect of the backpropagation algorithm, can be performed
using PyTorch and TensorFlow.[8][9]
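As a small illustration of the Kronecker product mentioned above (a sketch using NumPy; the matrices are arbitrary examples, and gradient computation itself would be done with an autograd framework such as PyTorch or TensorFlow, as the text notes):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.eye(2)          # 2x2 identity matrix

# Kronecker product: each entry A[i, j] is replaced by the block A[i, j] * B,
# so a (2x2) kron (2x2) yields a 4x4 matrix.
K = np.kron(A, B)
print(K.shape)         # (4, 4)
```

Identities such as vec(A X Bᵀ) = (B ⊗ A) vec(X) are what let operations on data tensors be rewritten in terms of ordinary matrix multiplication and Kronecker products.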
Computations are often performed on graphics processing units (GPUs)
using CUDA and on dedicated hardware such as Google's Tensor Processing
Unit or Nvidia's Tensor core. These developments have greatly accelerated neural
network architectures and increased the size and complexity of models that can be
trained.
History
A tensor is by definition a multilinear map. In mathematics, this may express a
multilinear relationship between sets of algebraic objects. In physics, tensor fields,
considered as tensors at each point in space, are useful in expressing mechanics
such as stress or elasticity. In machine learning, the exact use of tensors depends
on the statistical approach being used.
In 2001, the fields of signal processing and statistics were already making use of tensor
methods. Pierre Comon surveyed the early adoption of tensor methods in the fields
of telecommunications, radio surveillance, chemometrics and sensor processing.
Linear tensor rank methods (such as Parafac/CANDECOMP) analyzed M-way
arrays ("data tensors") composed of higher order statistics that were employed in
blind source separation problems to compute a linear model of the data. He noted
several early limitations in determining the tensor rank and efficient tensor rank
decomposition.[10]
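The building block of the Parafac/CANDECOMP models mentioned above is a rank-1 tensor: the outer product of one vector per mode, with a full CP decomposition expressing a data tensor as a sum of such terms. A minimal sketch using NumPy (the vectors are arbitrary illustrative values):

```python
import numpy as np

# One vector per mode of a 3-way tensor
a = np.array([1.0, 2.0])
b = np.array([1.0, 0.0, 1.0])
c = np.array([2.0, 3.0])

# Rank-1 tensor: T[i, j, k] = a[i] * b[j] * c[k]
T = np.einsum('i,j,k->ijk', a, b, c)
print(T.shape)     # (2, 3, 2)
print(T[1, 0, 1])  # 2.0 * 1.0 * 3.0 = 6.0
```

The tensor rank of a data tensor is the smallest number of such rank-1 terms needed to reproduce it exactly; as Comon noted, determining this rank, and computing the decomposition efficiently, is difficult in general.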
In the early 2000s, multilinear tensor methods[1][11] crossed over into computer
vision, computer graphics and machine learning with papers by Vasilescu or in
collaboration with Terzopoulos, such as Human Motion
Signatures,[12][13] TensorFaces,[14][15] TensorTextures,[16] and Multilinear
Projection.[17][18] Multilinear algebra, the algebra of higher-order tensors, is a suitable
and transparent framework for analyzing the multifactor structure of an ensemble
of observations and for addressing the difficult problem of disentangling the causal
factors based on second order[14] or higher order statistics associated with each
causal factor.[15]
Tensor (multilinear) factor analysis disentangles and reduces the influence of
different causal factors with multilinear subspace learning.[19] When treating an
image or a video as a 2- or 3-way array, i.e., "data matrix/t