Deep Learning with Keras3 :: CHEATSHEET

Intro

Keras is a high-level neural networks API developed with a focus on enabling fast experimentation. It supports multiple backends, including TensorFlow, Jax, and Torch. Backends like TensorFlow are lower-level mathematical libraries for building deep neural network architectures. The keras3 R package makes it easy to use Keras with any backend in R.

The typical workflow is Define -> Compile -> Fit -> Evaluate -> Predict:

• Define: a Sequential or Functional model
• Compile: choose the optimiser, loss, and metrics
• Fit: set the batch size, epochs, and validation split
• Evaluate: evaluate on held-out data, plot training history
• Predict: predict classes or probabilities

Learn more: https://keras.posit.co and https://www.manning.com/books/deep-learning-with-r-second-edition

INSTALLATION

The keras3 R package uses the Python Keras library. You can install all the prerequisites directly from R. See ?keras3::install_keras for details and options.

library(keras3)
reticulate::install_python()
install_keras()

This installs the required libraries in a virtual environment named 'r-keras'. It will automatically detect if a GPU is available.
Working with Keras Models

DEFINE A MODEL

Functional API: keras_input() and keras_model()
Define a Functional Model with inputs and outputs.

inputs <- keras_input(<input-shape>)
outputs <- inputs |>
  layer_dense() |> layer_...
model <- keras_model(inputs, outputs)
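For instance, a minimal concrete functional model (the 784-dimensional input and the layer sizes here are illustrative, not from the cheatsheet):

# sketch: a two-layer classifier over flat 784-dimensional inputs
inputs <- keras_input(shape = c(784))
outputs <- inputs |>
  layer_dense(units = 64, activation = "relu") |>
  layer_dense(units = 10, activation = "softmax")
model <- keras_model(inputs, outputs)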
Sequential API: keras_model_sequential()
Define a Sequential Model composed of a linear stack of layers.

model <- keras_model_sequential(<input-shape>) |>
  layer_dense() |> layer_...

Subclassing API: Model()
Subclass the base Model class.
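A rough sketch of the subclassing API, assuming keras3's Model() constructor with initialize and call methods (class name and layer sizes are illustrative):

# sketch: a custom Model composed of two dense layers
MyModel <- Model(
  classname = "MyModel",
  initialize = function(...) {
    super$initialize(...)
    self$dense1 <- layer_dense(units = 64, activation = "relu")
    self$dense2 <- layer_dense(units = 10, activation = "softmax")
  },
  call = function(inputs) {
    self$dense2(self$dense1(inputs))
  }
)
model <- MyModel()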
COMPILE A MODEL

compile(object, optimizer, loss, metrics, ...)
Configure a Keras model for training.

FIT A MODEL

fit(object, x = NULL, y = NULL, batch_size = NULL, epochs = 10, verbose = 1, callbacks = NULL, …)
Train a Keras model for a fixed number of epochs (iterations).

Customize training:
- Provide callbacks to fit().
- Define a custom Callback() (see the sketch after this list).
- Call train_on_batch() in a custom training loop.
- Subclass Model() and implement a custom train_step method.
- Write a fully custom training loop. Update weights with model$optimizer$apply(gradients, weights).
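A minimal custom callback, assuming keras3's Callback() constructor (the class name and printed message are illustrative):

# sketch: report the loss at the end of every epoch
PrintLoss <- Callback(
  classname = "PrintLoss",
  on_epoch_end = function(epoch, logs = NULL) {
    cat("epoch", epoch, "- loss:", logs$loss, "\n")
  }
)
model |> fit(x_train, y_train, callbacks = list(PrintLoss()))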


INSPECT A MODEL

print(model) Print a summary of a Keras model.

plot(model, show_shapes = FALSE, show_dtype = FALSE, show_layer_names = FALSE, ...) Plot a Keras model.

EVALUATE A MODEL

evaluate(object, x = NULL, y = NULL, batch_size = NULL) Evaluate a Keras model.

PREDICT

predict() Generate predictions from a Keras model.

predict_on_batch() Returns predictions for a single batch of samples.

SAVE/LOAD A MODEL

save_model(); load_model() Save/load models using the ".keras" file format.

save_model_weights(); load_model_weights() Save/load model weights to/from ".h5" files.

save_model_config(); load_model_config() Save/load model architecture to/from a ".json" file.
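For example, a quick save/load round trip (the file name is illustrative):

# sketch: persist a trained model, then restore it
save_model(model, "my-model.keras")
restored <- load_model("my-model.keras")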

Deploy

Export just the forward pass of the trained model for inference serving.

export_savedmodel(model, "my-saved-model/1") Save a TF SavedModel for inference.

rsconnect::deployTFModel("my-saved-model") Deploy a TF SavedModel to Connect for inference.

CORE LAYERS

layer_dense() Add a densely-connected NN layer to an output.

layer_einsum_dense() Add a dense layer with arbitrary dimensionality.

layer_activation() Apply an activation function to an output.

layer_dropout() Applies dropout to the input.

layer_reshape() Reshapes an output to a certain shape.

layer_permute() Permute the dimensions of an input according to a given pattern.

layer_repeat_vector() Repeats the input n times.

layer_lambda(object, f) Wraps an arbitrary expression as a layer: x -> f(x).

layer_activity_regularization() Layer that applies an update to the cost function based on input activity (L1/L2).

layer_masking() Masks a sequence by using a mask value to skip timesteps.

layer_flatten() Flattens an input.

TRAINING AN IMAGE RECOGNIZER ON MNIST DATA

The "Hello, World!" of deep learning:

# input layer: use MNIST images
mnist <- dataset_mnist()
x_train <- mnist$train$x; y_train <- mnist$train$y
x_test <- mnist$test$x; y_test <- mnist$test$y

# reshape and rescale: the conv layers below expect 28 x 28 x 1 image tensors
x_train <- array_reshape(x_train, c(nrow(x_train), 28, 28, 1))
x_test <- array_reshape(x_test, c(nrow(x_test), 28, 28, 1))
x_train <- x_train / 255; x_test <- x_test / 255

# one-hot encode the 10 digit classes
num_classes <- 10
y_train <- to_categorical(y_train, num_classes)
y_test <- to_categorical(y_test, num_classes)

# defining the model and layers
model <- keras_model_sequential(input_shape = c(28, 28, 1))
model |>
  layer_conv_2d(filters = 32, kernel_size = c(3, 3), activation = "relu") |>
  layer_max_pooling_2d(pool_size = c(2, 2)) |>
  layer_conv_2d(filters = 64, kernel_size = c(3, 3), activation = "relu") |>
  layer_max_pooling_2d(pool_size = c(2, 2)) |>
  layer_flatten() |>
  layer_dropout(rate = 0.5) |>
  layer_dense(units = num_classes, activation = "softmax")

# view the model summary
summary(model)
plot(model)

# compile (define loss and optimizer)
model |> compile(
  loss = 'categorical_crossentropy',
  optimizer = optimizer_rmsprop(),
  metrics = c('accuracy')
)

# train (fit)
model |> fit(
  x_train, y_train,
  epochs = 30, batch_size = 128,
  validation_split = 0.2
)

model |> evaluate(x_test, y_test)
model |> predict(x_test)

# save the full model
save_model(model, "mnist-classifier.keras")

# deploy for serving inference
dir.create("serving-mnist-classifier")
export_savedmodel(model, "serving-mnist-classifier/1")
rsconnect::deployTFModel("serving-mnist-classifier")
More layers

CONVOLUTIONAL LAYERS

layer_conv_1d() 1D, e.g. temporal convolution.

layer_conv_2d() 2D, e.g. spatial convolution over images.

layer_conv_2d_transpose() Transposed 2D (deconvolution).

layer_conv_3d() 3D, e.g. spatial convolution over volumes.

layer_conv_3d_transpose() Transposed 3D (deconvolution).

layer_conv_lstm_2d() Convolutional LSTM.

layer_separable_conv_2d() Depthwise separable 2D.

layer_upsampling_1d(); layer_upsampling_2d(); layer_upsampling_3d() Upsampling layers.

layer_zero_padding_1d(); layer_zero_padding_2d(); layer_zero_padding_3d() Zero-padding layers.

layer_cropping_1d(); layer_cropping_2d(); layer_cropping_3d() Cropping layers.

POOLING LAYERS

layer_max_pooling_1d(); layer_max_pooling_2d(); layer_max_pooling_3d() Maximum pooling for 1D to 3D.

layer_average_pooling_1d(); layer_average_pooling_2d(); layer_average_pooling_3d() Average pooling for 1D to 3D.

layer_global_max_pooling_1d(); layer_global_max_pooling_2d(); layer_global_max_pooling_3d() Global maximum pooling.

layer_global_average_pooling_1d(); layer_global_average_pooling_2d(); layer_global_average_pooling_3d() Global average pooling.

Preprocessing

IMAGE PREPROCESSING

Load Images

image_dataset_from_directory() Create a TF Dataset from image files in a directory.

image_load(), image_from_array(), image_to_array(), image_array_save() Work with PIL Image instances.

Transform Images

op_image_crop(); op_image_extract_patches(); op_image_pad(); op_image_resize(); op_image_affine_transform(); op_image_map_coordinates(); op_image_rgb_to_grayscale() Operations that transform image tensors in deterministic ways.

image_smart_resize() Resize images without aspect ratio distortion.

Image Layers

Built-in image preprocessing layers. Note, any image operation function can also be used as a layer in a Model, or used in layer_lambda().

Image Preprocessing Layers

layer_resizing(); layer_rescaling(); layer_center_crop()

Image Augmentation Layers

Preprocessing layers that randomly augment image inputs during training:

layer_random_crop(); layer_random_flip(); layer_random_translation(); layer_random_rotation(); layer_random_zoom(); layer_random_contrast(); layer_random_brightness()
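For example, a small augmentation block built from these layers (parameter values illustrative); it can sit at the front of a model or be mapped over a TF Dataset:

# sketch: random flips, rotations, and zooms applied during training
augment <- keras_model_sequential(input_shape = c(28, 28, 1)) |>
  layer_random_flip(mode = "horizontal") |>
  layer_random_rotation(factor = 0.1) |>
  layer_random_zoom(height_factor = 0.1)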
TEXT PREPROCESSING

text_dataset_from_directory() Generate a TF Dataset from text files in a directory.

layer_text_vectorization(), get_vocabulary(), set_vocabulary() Map text to integer sequences.
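For example, a vectorization layer adapted to a toy corpus (vocabulary size and sequence length illustrative):

# sketch: map raw strings to padded integer sequences
vectorize <- layer_text_vectorization(max_tokens = 10000,
                                      output_sequence_length = 50)
adapt(vectorize, c("hello world", "deep learning with R"))
vectorize(c("hello deep world"))   # integer sequence tensor
get_vocabulary(vectorize)          # inspect the learned vocabulary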
NUMERICAL FEATURES PREPROCESSING

layer_normalization() Normalizes continuous features.

layer_discretization() Buckets continuous features by ranges.

CATEGORICAL FEATURES PREPROCESSING

layer_category_encoding() Encode integer features.

layer_hashing() Hash and bin categorical features.

layer_hashed_crossing() Cross features using the "hashing trick".

layer_string_lookup() Map strings to (possibly encoded) indices.

layer_integer_lookup() Map integers to (possibly encoded) indices.
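Both kinds of layers learn their mapping from data via adapt(). A combined sketch (toy data, illustrative):

# sketch: learn normalization statistics from a numeric matrix
norm <- layer_normalization()
adapt(norm, matrix(rnorm(200), ncol = 2))

# sketch: learn a string vocabulary, then map strings to indices
lookup <- layer_string_lookup()
adapt(lookup, c("small", "medium", "large"))
lookup(c("large", "small"))   # integer indices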
SEQUENCE PREPROCESSING

timeseries_dataset_from_array() Generate a TF Dataset of sliding windows over a timeseries provided as an array.

audio_dataset_from_directory() Generate a TF Dataset from audio files.

pad_sequences() Pad sequences to the same length.
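For example, sliding windows over a toy univariate series (window length and batch size illustrative):

# sketch: windows of 10 past values predict the value 10 steps ahead
series <- as.numeric(1:100)
ds <- timeseries_dataset_from_array(
  data = series[1:90],          # inputs: positions 1..90
  targets = series[11:100],     # target for the window starting at i is series[i + 10]
  sequence_length = 10,
  batch_size = 32
)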
TABULAR DATA

layer_feature_space() One-stop utility for preprocessing and encoding structured data. Define a feature space from a list of table columns (features):

feature_space <- layer_feature_space(features = list(<features>))

Adapt the feature space to a dataset:

adapt(feature_space, dataset)

Use the adapted feature_space preprocessing layer as a layer in a Keras Model, or in the data input pipeline with tfdatasets::dataset_map().

Available features: feature_float(); feature_float_rescaled(); feature_float_normalized(); feature_float_discretized(); feature_integer_categorical(); feature_integer_hashed(); feature_string_categorical(); feature_string_hashed(); feature_cross(); feature_custom()
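A rough end-to-end sketch (the column names and data are illustrative; tensor_slices_dataset() is from the tfdatasets package):

# sketch: define a feature space over two columns, then adapt it to data
library(tfdatasets)
df <- data.frame(age = c(21, 35, 48), product = c("a", "b", "a"))
feature_space <- layer_feature_space(
  features = list(
    age = feature_float_normalized(),
    product = feature_string_categorical()
  )
)
adapt(feature_space, tensor_slices_dataset(df))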
Pre-trained models

Keras applications are deep learning models that are made available with pre-trained weights. These models can be used for prediction, feature extraction, and fine-tuning.

application_mobilenet_v3_large(); application_mobilenet_v3_small() MobileNetV3 models, pre-trained on ImageNet.

application_efficientnet_v2s(); application_efficientnet_v2m(); application_efficientnet_v2l() EfficientNetV2 models, pre-trained on ImageNet.

application_inception_resnet_v2(); application_inception_v3() Inception-ResNet v2 and Inception v3 models, with weights trained on ImageNet.

application_vgg16(); application_vgg19() VGG16 and VGG19 models.

application_resnet50() ResNet50 model.

application_nasnet_large(); application_nasnet_mobile() NASNet model architectures.

ImageNet is a large database of labeled images, extensively used for deep learning.

application_preprocess_inputs(); application_decode_predictions() Preprocesses a tensor encoding a batch of images for an application, and decodes predictions from an application.
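For example, classifying one image with a pre-trained network (the file name and model choice are illustrative):

# sketch: predict ImageNet classes for a single image
model <- application_mobilenet_v3_large(weights = "imagenet")
img <- image_load("elephant.jpg", target_size = c(224, 224)) |> image_to_array()
batch <- array_reshape(img, c(1, dim(img)))          # add a batch dimension
batch <- application_preprocess_inputs(model, batch) # model-specific preprocessing
preds <- model |> predict(batch)
application_decode_predictions(model, preds, top = 3)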
Callbacks

A callback is a set of functions to be applied at given stages of the training procedure. You can use callbacks to get a view on internal states and statistics of the model during training.

callback_early_stopping() Stop training when a monitored quantity has stopped improving.

callback_learning_rate_scheduler() Learning rate scheduler.

callback_tensorboard() TensorBoard basic visualizations.
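For example, passing callbacks to fit() (the patience value and log directory are illustrative):

# sketch: stop early on a stalled validation loss and log for TensorBoard
model |> fit(
  x_train, y_train,
  validation_split = 0.2, epochs = 50,
  callbacks = list(
    callback_early_stopping(monitor = "val_loss", patience = 3),
    callback_tensorboard(log_dir = "logs")
  )
)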
CC BY SA Posit Software, PBC • [email protected] • posit.co • Learn more at keras.posit.co • HTML cheatsheets at pos.it/cheatsheets • keras3 1.0.0 • Updated: 2024-06
