R
This is a curated list of libraries and frameworks for neural networks and deep learning in R. Please feel free to contribute.
fastai: Provides R wrappers to fastai. The fastai library simplifies training fast and accurate neural nets using modern best practices. The library is based on research into deep learning best practices undertaken at fast.ai, and includes "out of the box" support for vision, text, tabular, audio, time-series and collab (collaborative filtering) models.
torch: Direct bindings to the libtorch C++ library (which also powers PyTorch). This set of packages (https://github.com/mlverse) is an attempt to create a PyTorch-like ecosystem in R.
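As a quick illustration, here is a minimal sketch, assuming the mlverse torch package is installed, that builds a random tensor and pushes it through a single linear layer:

```r
# A minimal sketch of the torch tensor/module API.
library(torch)

x   <- torch_randn(8, 3)   # 8 samples, 3 features
lin <- nn_linear(3, 1)     # fully connected layer mapping 3 inputs to 1 output
y   <- lin(x)              # forward pass; y is an 8 x 1 tensor
```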
platypus: R package for object detection and image segmentation using YOLOv3 and U-Net.
nnlib2Rcpp: Another collection of neural networks. Includes versions of 'BP', 'Autoencoder', 'LVQ' (supervised and unsupervised) and 'MAM'.
nntrf: Performs a supervised transformation of datasets. The aim is similar to that of Principal Component Analysis (PCA), that is, to carry out data transformation and dimensionality reduction, but in a non-linear, supervised way. This is achieved by first training a 3-layer multi-layer perceptron and then using the activations of the hidden layer as a transformation of the input features; in effect, it takes advantage of the change of representation provided by the hidden layer of a neural network. This can be useful as data pre-processing for machine learning methods in general, especially for those that do not work well with many irrelevant or redundant features. Rumelhart, D.E., Hinton, G.E. and Williams, R.J. (1986) "Learning representations by back-propagating errors" doi:10.1038/323533a0.
nnetsauce: Statistical/machine learning using advanced combinations of randomized and quasi-randomized neural network layers. It contains models for regression, classification, and time series forecasting.
rTorch: 'R' implementation and interface of the machine learning platform 'PyTorch' (https://pytorch.org/), developed in 'Python'. It requires a 'conda' environment with 'torch' and 'torchvision' to provide 'PyTorch' functions, methods and classes. The key object in 'PyTorch' is the tensor, which is in essence a multidimensional array. These tensors are flexible enough to perform calculations on CPUs as well as 'GPUs' to accelerate the process.
GRnnet: Implementation of GRNN (General Regression Neural Network; Specht, 1991).
NeuralSens: Analysis functions to quantify input importance in neural network models. Functions are available for calculating and plotting input importance and for obtaining the activation function of each neuron layer and its derivatives. The importance of a given input is defined as the distribution of the derivatives of the output with respect to that input at each training data point.
deepNN: Implementation of some deep learning methods. Includes multilayer perceptron, different activation functions, regularisation strategies, stochastic gradient descent and dropout.
ruta: Unsupervised deep neural networks, from building their architecture to their training and evaluation; built on top of keras and tensorflow.
kerasformula: Adds a high-level interface for 'keras' neural nets. kms() fits a neural net and accepts R formulas to aid data munging and hyperparameter selection. kms() can optionally accept a compiled keras_sequential_model() from 'keras'. kms() accepts a number of parameters (like loss and optimizer) and splits the data into sparse test and training matrices. kms() returns a single object with predictions, a confusion matrix, and function call details.
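A minimal sketch of the kms() formula interface described above; the mtcars example data and the $confusion field name are assumptions for illustration:

```r
# Fit a small binary classifier via the formula interface.
library(kerasformula)

out <- kms(am ~ mpg + wt + hp, data = mtcars)  # fit a neural net from a formula
out$confusion                                  # confusion matrix (field name assumed)
```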
BoltzMM: Provides probability computation, data generation, and model estimation for fully-visible Boltzmann machines. It follows the methods described in Nguyen and Wood (2016a) doi:10.1162/NECO_a_00813 and Nguyen and Wood (2016b) doi:10.1109/TNNLS.2015.2425898.
keras: RStudio's R interface to Keras.
kerasR: R interface to the Keras library by Taylor Arnold.
tensorflow: Interface to TensorFlow.
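A minimal sketch of the keras interface listed above; the layer sizes and ten-feature input are arbitrary choices for illustration:

```r
# Define and compile a small sequential model.
library(keras)

model <- keras_model_sequential() %>%
  layer_dense(units = 32, activation = "relu", input_shape = c(10)) %>%
  layer_dense(units = 1, activation = "sigmoid")

model %>% compile(
  optimizer = "adam",
  loss      = "binary_crossentropy",
  metrics   = "accuracy"
)
```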
nnet: Software for feed-forward neural networks with a single hidden layer, and for multinomial log-linear models.
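A minimal sketch: a single-hidden-layer classifier on the built-in iris data, with arbitrary size and decay values:

```r
# Fit a one-hidden-layer network and inspect in-sample predictions.
library(nnet)

fit  <- nnet(Species ~ ., data = iris, size = 5, decay = 0.01,
             maxit = 200, trace = FALSE)
pred <- predict(fit, iris, type = "class")
table(pred, iris$Species)   # in-sample confusion matrix
```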
neuralnet: Training of neural networks using backpropagation, resilient backpropagation with (Riedmiller, 1994) or without weight backtracking (Riedmiller and Braun, 1993), or the modified globally convergent version by Anastasiadis et al. (2005). The package allows flexible settings through custom choice of error and activation functions. Furthermore, the calculation of generalized weights (Intrator O & Intrator N, 1993) is implemented.
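A minimal sketch: approximating sqrt() with a two-hidden-layer network; the data and layer sizes are arbitrary choices for illustration:

```r
# Learn the square-root function from 50 sampled points.
library(neuralnet)

df   <- data.frame(x = runif(50, 0, 100))
df$y <- sqrt(df$x)

fit <- neuralnet(y ~ x, data = df, hidden = c(5, 3))
compute(fit, data.frame(x = c(4, 25)))$net.result  # should be close to 2 and 5
```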
NeuralNetTools: Visualization and analysis tools to aid in the interpretation of neural network models. Functions are available for plotting, quantifying variable importance, conducting a sensitivity analysis, and obtaining a simple list of model weights.
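A minimal sketch on an 'nnet' regression model; plotnet() draws the network, while garson() and olden() rank variable importance:

```r
# Visualize and interpret a fitted nnet model.
library(nnet)
library(NeuralNetTools)

fit <- nnet(Sepal.Length ~ Sepal.Width + Petal.Length + Petal.Width,
            data = iris, size = 5, linout = TRUE, trace = FALSE)
plotnet(fit)   # network structure
garson(fit)    # variable importance (Garson's algorithm)
olden(fit)     # variable importance (Olden's connection weights)
```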
RSNNS: The Stuttgart Neural Network Simulator (SNNS) is a library containing many standard implementations of neural networks. This package wraps the SNNS functionality to make it available from within R. Using the RSNNS low-level interface, all of the algorithmic functionality and flexibility of SNNS can be accessed. Furthermore, the package contains a convenient high-level interface, so that the most common neural network topologies and learning algorithms integrate seamlessly into R.
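A minimal sketch of the high-level interface: an MLP classifier on iris, with an arbitrary hidden-layer size:

```r
# Train an MLP via the RSNNS high-level interface.
library(RSNNS)

values  <- iris[, 1:4]
targets <- decodeClassLabels(iris$Species)  # one-hot encode the labels

fit <- mlp(values, targets, size = 5, maxit = 100)
head(predict(fit, values))   # class membership scores
```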
FCNN4R: Provides an interface to kernel routines from the FCNN C++ library. FCNN is based on a completely new artificial neural network representation that offers unmatched efficiency, modularity, and extensibility. FCNN4R provides standard teaching (backpropagation, Rprop, simulated annealing, stochastic gradient) and pruning algorithms (minimum magnitude, Optimal Brain Surgeon), but it is first and foremost an efficient computational engine. Users can easily implement their algorithms by taking advantage of fast gradient computing routines, as well as network reconstruction functionality (removing weights and redundant neurons, reordering inputs, merging networks). Networks can be exported to C functions in order to integrate them into virtually any software solution.
softmaxreg: Implementation of 'softmax' regression and classification models with multiple-layer neural networks. It can be used for many tasks like word-embedding-based document classification, 'MNIST' dataset handwritten digit recognition and so on. Multiple optimization algorithms including 'SGD', 'Adagrad', 'RMSprop', 'Moment', 'NAG', etc. are also provided.
DARCH: The darch package is built on the basis of the code from G. E. Hinton and R. R. Salakhutdinov (available under Matlab Code for deep belief nets). This package is for generating neural networks with many layers (deep architectures) and training them with the method introduced by the publications "A fast learning algorithm for deep belief nets" (G. E. Hinton, S. Osindero, Y. W. Teh 2006) and "Reducing the dimensionality of data with neural networks" (G. E. Hinton, R. R. Salakhutdinov 2006). This method includes pre-training with the contrastive divergence method published by G. E. Hinton (2002) and fine-tuning with commonly known training algorithms like backpropagation or conjugate gradients. Additionally, supervised fine-tuning can be enhanced with maxout and dropout, two recently developed techniques to improve fine-tuning for deep learning.
deeplearning: An implementation of deep neural networks with rectified linear units, trained with the stochastic gradient descent method and batch normalization. A combination of these methods has achieved state-of-the-art performance in ImageNet classification by overcoming the gradient saturation problem experienced by many deep-architecture neural network models in the past. In addition, batch normalization and dropout are implemented as a means of regularization. The deeplearning package is inspired by the darch package and uses its class DArch.
deepnet: Implements some deep learning architectures and neural network algorithms, including BP, RBM, DBN, deep autoencoder and so on.
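A minimal sketch of nn.train() on synthetic data; inputs and targets are numeric matrices, and the hidden-layer sizes are arbitrary:

```r
# Train a small two-hidden-layer network with backpropagation.
library(deepnet)

x <- matrix(rnorm(400), ncol = 4)
y <- matrix(as.numeric(x[, 1] + x[, 2] > 0), ncol = 1)

fit  <- nn.train(x, y, hidden = c(10, 5))
pred <- nn.predict(fit, x)
```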
rnn: Implementation of a recurrent neural network in R, including GRU and LSTM.
rbm: Restricted Boltzmann machines in R (also see Landgraf's implementation).
RcppDL: This package is based on the C++ code from Yusuke Sugomori, which implements basic machine learning methods with many layers (deep learning), including dA (Denoising Autoencoder), SdA (Stacked Denoising Autoencoder), RBM (Restricted Boltzmann Machine) and DBN (Deep Belief Nets).
h2o DeepLearning: Deep learning: model high-level abstractions in data by using non-linear transformations in a layer-by-layer method. Deep learning can also make use of unlabeled data that other algorithms cannot. Also see deepwater.
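A minimal sketch, assuming a local H2O cluster can be started; the hidden-layer sizes and epoch count are arbitrary:

```r
# Fit a deep net on iris with h2o.deeplearning().
library(h2o)
h2o.init()

train <- as.h2o(iris)
fit <- h2o.deeplearning(x = 1:4, y = "Species", training_frame = train,
                        hidden = c(50, 50), epochs = 10)
h2o.predict(fit, train)
```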
mxnet: Lightweight, portable, flexible distributed/mobile deep learning with a dynamic, mutation-aware dataflow dependency scheduler.
TensorFlow: TensorFlow is an open source software library for numerical computation using data flow graphs. Nodes in the graph represent mathematical operations, while the graph edges represent the multidimensional data arrays (tensors) communicated between them. The flexible architecture allows you to deploy computation to one or more CPUs or GPUs in a desktop, server, or mobile device with a single API. TensorFlow was originally developed by researchers and engineers working on the Google Brain Team within Google's Machine Intelligence research organization for the purposes of conducting machine learning and deep neural networks research, but the system is general enough to be applicable in a wide variety of other domains as well.
autoencoder: Implementation of the sparse autoencoder in the R environment, following the notes of Andrew Ng. The features learned by the hidden layer of the autoencoder (through unsupervised learning of unlabeled data) can be used in constructing deep belief neural networks.
SAENET: An implementation of a stacked sparse autoencoder for dimension reduction of features and pre-training of feed-forward neural networks with the 'neuralnet' package. The package also includes a predict function for the stacked autoencoder object to generate the compressed representation of new data if required. For the purposes of this package, 'stacked' is defined in line with http://ufldl.stanford.edu/wiki/index.php/Stacked_Autoencoders . The underlying sparse autoencoder is defined in the documentation of 'autoencoder'.
nnetpredint: Computing prediction intervals of neural network models (e.g. backpropagation) at a certain confidence level. It can take the output from models trained by other packages like 'nnet', 'neuralnet', 'RSNNS', etc.
deepr: An R package to streamline the training, fine-tuning and predicting processes for deep learning based on darch and deepnet.
pnn: Probabilistic neural networks. The program pnn implements the algorithm proposed by Specht (1990). It is written in the R statistical language. It solves a common problem in automatic learning: knowing a set of observations described by a vector of quantitative variables, we classify them in a given number of groups. The algorithm is trained with this dataset and should afterwards guess the group of any new observation. This neural network has the main advantage of beginning generalization instantaneously, even with a small set of known observations. It is delivered with four functions (learn, smooth, perf and guess) and a dataset. The functions are documented with examples and provided with unit tests.
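A minimal sketch following the four functions named above; the 'norms' dataset name and the sigma value are assumptions for illustration:

```r
# Train, smooth and query a probabilistic neural network.
library(pnn)
data(norms)                    # bundled example dataset (name assumed)

nn <- learn(norms)             # train on the dataset
nn <- smooth(nn, sigma = 0.5)  # set the kernel smoothing parameter
guess(nn, c(1, 1))             # classify a new observation
```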
qrnn: Quantile regression neural network. Fit a quantile regression neural network with optional left censoring using a variant of the finite smoothing algorithm.
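A minimal sketch fitting the conditional median (tau = 0.5); the iris columns and hidden-layer size are arbitrary example choices:

```r
# Fit a median-regression neural network.
library(qrnn)

x <- as.matrix(iris$Petal.Length)
y <- as.matrix(iris$Petal.Width)

fit <- qrnn.fit(x = x, y = y, n.hidden = 2, tau = 0.5,
                iter.max = 500, n.trials = 1)
p   <- qrnn.predict(x, fit)
```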
validann: Validation tools for artificial neural networks. Methods and tools for analysing and validating the outputs and modelled functions of artificial neural networks (ANNs) in terms of predictive, replicative and structural validity. Also provides a method for fitting feed-forward ANNs with a single hidden layer.
TeachNet: Fits neural networks to learn about backpropagation. Can fit neural networks with up to two hidden layers and two different error functions. It can also handle weight decay, but it only computes one output neuron and is very slow.
neural: RBF and MLP neural networks with a graphical user interface.
monmlp: Monotone multi-layer perceptron neural network. Train and make predictions from a multi-layer perceptron neural network with partial monotonicity constraints.
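A minimal sketch fitting a monotone increasing relationship; 'monotone = 1' constrains the first (and only) input column, and the data are synthetic:

```r
# Fit an MLP with a monotonicity constraint on the input.
library(monmlp)

x <- as.matrix(seq(-2, 2, length.out = 100))
y <- as.matrix(x^3 + rnorm(100, sd = 0.2))

fit <- monmlp.fit(x, y, hidden1 = 4, monotone = 1, iter.max = 500)
p   <- monmlp.predict(x, fit)
```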
learNN: Examples of neural networks. Implementations of several basic neural network concepts in R, based on posts on http://qua.st/.
grnn: The program GRNN implements the algorithm proposed by Specht (1991).
GMDH: Short-term forecasting via GMDH-type neural network algorithms. The group method of data handling (GMDH)-type neural network algorithm is a heuristic self-organization method for modelling complex systems. In this package, GMDH-type neural network algorithms are applied to make short-term forecasts for a univariate time series.
elmNNRcpp: Training and predict functions for single hidden-layer feedforward neural networks (SLFN) using the Extreme Learning Machine (ELM) algorithm. The ELM algorithm differs from traditional gradient-based algorithms in its very short training times (it doesn't need any iterative tuning, which makes learning very fast), and there is no need to set any other parameters like learning rate, momentum, epochs, etc. This is a reimplementation of the 'elmNN' package using 'RcppArmadillo' after the 'elmNN' package was archived. For more information, see "Extreme learning machine: Theory and applications" by Guang-Bin Huang, Qin-Yu Zhu, Chee-Kheong Siew (2006), Elsevier B.V., doi:10.1016/j.neucom.2005.12.126.
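A minimal sketch of elm_train()/elm_predict(); both expect matrices, and the nhid and actfun values are arbitrary choices here:

```r
# Train an ELM regressor on iris and predict in-sample.
library(elmNNRcpp)

x <- as.matrix(iris[, 2:4])
y <- matrix(iris$Sepal.Length, ncol = 1)

fit  <- elm_train(x, y, nhid = 20, actfun = "sig")
pred <- elm_predict(fit, newdata = x)
```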
brnn: Bayesian regularization for feed-forward neural networks.
AMORE: This package was born to release the TAO robust neural network algorithm to R users. It has grown, and I think it can be of interest for users wanting to implement their own training algorithms as well as for those whose needs lie only in the "user space".
simpleNeural: An easy-to-use multilayer perceptron. Trains neural networks (multilayer perceptrons with one hidden layer) for bi- or multi-class classification.