Keras
More layers

CONVOLUTIONAL LAYERS
layer_conv_1d()  1D, e.g. temporal convolution
layer_conv_2d()  2D, e.g. spatial convolution over images
layer_conv_2d_transpose()  Transposed 2D (deconvolution)
layer_conv_3d()  3D, e.g. spatial convolution over volumes
layer_conv_3d_transpose()  Transposed 3D (deconvolution)
layer_conv_lstm_2d()  Convolutional LSTM
layer_separable_conv_2d()  Depthwise separable 2D
layer_upsampling_1d(), layer_upsampling_2d(), layer_upsampling_3d()  Upsampling layer
layer_zero_padding_1d(), layer_zero_padding_2d(), layer_zero_padding_3d()  Zero-padding layer
layer_cropping_1d(), layer_cropping_2d(), layer_cropping_3d()  Cropping layer

POOLING LAYERS
layer_max_pooling_1d(), layer_max_pooling_2d(), layer_max_pooling_3d()  Maximum pooling for 1D to 3D
layer_average_pooling_1d(), layer_average_pooling_2d(), layer_average_pooling_3d()  Average pooling for 1D to 3D
layer_global_max_pooling_1d(), layer_global_max_pooling_2d(), layer_global_max_pooling_3d()  Global maximum pooling
layer_global_average_pooling_1d(), layer_global_average_pooling_2d(), layer_global_average_pooling_3d()  Global average pooling
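A minimal sketch of how convolutional and pooling layers compose in a sequential model; the 28x28 grayscale input shape, filter counts, and 10-class output are illustrative assumptions, not part of the cheatsheet.

library(keras3)

# Small convolutional classifier (illustrative shapes and sizes)
model <- keras_model_sequential(input_shape = c(28, 28, 1)) |>
  layer_conv_2d(filters = 32, kernel_size = c(3, 3), activation = "relu") |>
  layer_max_pooling_2d(pool_size = c(2, 2)) |>
  layer_conv_2d(filters = 64, kernel_size = c(3, 3), activation = "relu") |>
  layer_global_average_pooling_2d() |>
  layer_dense(units = 10, activation = "softmax")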
Preprocessing

IMAGE PREPROCESSING

Load Images
image_dataset_from_directory()  Create a TF Dataset from image files in a directory.
image_load(), image_from_array(), image_to_array(), image_array_save()  Work with PIL Image instances.

Transform Images
op_image_crop(), op_image_extract_patches(), op_image_pad(), op_image_resize(), op_image_affine_transform(), op_image_map_coordinates(), op_image_rgb_to_grayscale()  Operations that transform image tensors in deterministic ways.
image_smart_resize()  Resize images without aspect ratio distortion.

Image Layers
Built-in image preprocessing layers. Note: any image operation function can also be used as a layer in a Model, or used in layer_lambda().

Image Preprocessing Layers
layer_resizing(), layer_rescaling(), layer_center_crop()

Image Augmentation Layers
Preprocessing layers that randomly augment image inputs during training.
layer_random_crop(), layer_random_flip(), layer_random_translation(), layer_random_rotation(), layer_random_zoom(), layer_random_contrast(), layer_random_brightness()
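A minimal sketch of chaining rescaling and random-augmentation layers into a small preprocessing model that can be applied to image batches or placed at the front of a larger model; the specific factors and the image_batch placeholder are illustrative assumptions.

# Rescale to [0, 1] and randomly augment during training (factors are illustrative)
augment <- keras_model_sequential() |>
  layer_rescaling(scale = 1 / 255) |>
  layer_random_flip(mode = "horizontal") |>
  layer_random_rotation(factor = 0.1) |>
  layer_random_zoom(height_factor = 0.1)

# Apply to a batch of images; augmentation is only active when training = TRUE
# batch_aug <- augment(image_batch, training = TRUE)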
SEQUENCE PREPROCESSING
timeseries_dataset_from_array()  Generate a TF Dataset of sliding windows over a timeseries provided as an array.
audio_dataset_from_directory()  Generate a TF Dataset from audio files.
pad_sequences()  Pad sequences to the same length.
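A minimal sketch of building a sliding-window dataset from a univariate series; the window length, batch size, and use of the value following each window as the target are illustrative assumptions.

# Sliding windows of length 24; the target is the value just after each window
series <- matrix(rnorm(1000), ncol = 1)
ds <- timeseries_dataset_from_array(
  data = head(series, -24),      # windows are drawn from all but the last 24 steps
  targets = tail(series, -24),   # targets[i] = value following the window starting at i
  sequence_length = 24,
  batch_size = 32
)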
TEXT PREPROCESSING
text_dataset_from_directory()  Generate a TF Dataset from text files in a directory.
layer_text_vectorization(), get_vocabulary(), set_vocabulary()  Map text to integer sequences.
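A minimal sketch of adapting a text vectorization layer to a corpus and applying it; the token limit, sequence length, and toy sentences are illustrative assumptions.

# Learn a vocabulary from raw strings, then map text to integer sequences
vectorize <- layer_text_vectorization(max_tokens = 1000, output_sequence_length = 8)
adapt(vectorize, c("the cat sat on the mat", "the dog ate my homework"))
vectorize(c("the cat ate the homework"))   # integer sequence, padded to length 8
get_vocabulary(vectorize)                  # inspect the learned vocabulary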
NUMERICAL FEATURES PREPROCESSING
layer_normalization()  Normalizes continuous features.
layer_discretization()  Buckets continuous features by ranges.

CATEGORICAL FEATURES PREPROCESSING
layer_category_encoding()  Encode integer features.
layer_hashing()  Hash and bin categorical features.
layer_hashed_crossing()  Cross features using the "hashing trick".
layer_string_lookup()  Map strings to (possibly encoded) indices.
layer_integer_lookup()  Map integers to (possibly encoded) indices.
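A minimal sketch of mapping strings to indices with a fixed vocabulary; the color vocabulary and out-of-vocabulary example are illustrative assumptions.

# Map known strings to integer indices; unseen values go to the OOV index (0)
lookup <- layer_string_lookup(vocabulary = c("red", "green", "blue"))
lookup(c("green", "blue", "purple"))   # "purple" maps to the OOV index

# Same lookup, but with one-hot encoded output instead of integer indices
onehot <- layer_string_lookup(vocabulary = c("red", "green", "blue"), output_mode = "one_hot")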
TABULAR DATA
layer_feature_space()  One-stop utility for preprocessing and encoding structured data.

Define a feature space from a list of table columns (features):
feature_space <- layer_feature_space(features = list(<features>))

Adapt the feature space to a dataset:
adapt(feature_space, dataset)

Use the adapted feature_space preprocessing layer as a layer in a Keras Model, or in the data input pipeline with tfdatasets::dataset_map().

Available features:
feature_float(), feature_float_rescaled(), feature_float_normalized(), feature_float_discretized(), feature_integer_categorical(), feature_integer_hashed(), feature_string_categorical(), feature_string_hashed(), feature_cross(), feature_custom()
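A minimal sketch of a feature space for a toy table with one numeric and one string column, adapted to a TF Dataset built with tfdatasets; the column names, feature choices, and batch size are illustrative assumptions.

library(tfdatasets)

# Toy table: one numeric column, one categorical string column (names are illustrative)
df <- data.frame(age = c(23, 35, 51), color = c("red", "blue", "red"))

feature_space <- layer_feature_space(
  features = list(
    age   = feature_float_normalized(),
    color = feature_string_categorical()
  ),
  output_mode = "concat"
)

# Adapt on a dataset of named feature lists, then encode in the input pipeline
ds <- tensor_slices_dataset(df) |> dataset_batch(2)
adapt(feature_space, ds)
ds_encoded <- ds |> dataset_map(feature_space)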
Pre-trained models

Keras applications are deep learning models that are made available with pre-trained weights. These models can be used for prediction, feature extraction, and fine-tuning.

application_mobilenet_v3_large(), application_mobilenet_v3_small()  MobileNetV3 model, pre-trained on ImageNet
application_efficientnet_v2s(), application_efficientnet_v2m(), application_efficientnet_v2l()  EfficientNetV2 model, pre-trained on ImageNet
application_inception_resnet_v2(), application_inception_v3()  Inception-ResNet v2 and Inception v3 models, with weights trained on ImageNet
application_vgg16(), application_vgg19()  VGG16 and VGG19 models
application_resnet50()  ResNet50 model
application_nasnet_large(), application_nasnet_mobile()  NASNet model architecture

ImageNet is a large database of labeled images, extensively used for deep learning.

application_preprocess_inputs(), application_decode_predictions()  Preprocess a tensor encoding a batch of images for an application, and decode predictions from an application.
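A minimal sketch of classifying a single image with a pre-trained application; the file name "elephant.jpg" and the 224x224 target size are illustrative assumptions.

# Load a pre-trained classifier and predict ImageNet classes for one image
model <- application_mobilenet_v3_small(weights = "imagenet")

img <- image_load("elephant.jpg", target_size = c(224, 224))  # illustrative file
x <- image_to_array(img)
dim(x) <- c(1, dim(x))                        # add a batch dimension
x <- application_preprocess_inputs(model, x)  # model-specific preprocessing

preds <- predict(model, x)
application_decode_predictions(model, preds, top = 3)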
Callbacks

A callback is a set of functions to be applied at given stages of the training procedure. You can use callbacks to get a view on internal states and statistics of the model during training.

callback_early_stopping()  Stop training when a monitored quantity has stopped improving
callback_learning_rate_scheduler()  Learning rate scheduler
callback_tensorboard()  TensorBoard basic visualizations
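A minimal sketch of passing callbacks to fit(); the training data names, patience, epoch count, and log directory are illustrative assumptions.

# Stop early on a stalled validation loss and log metrics for TensorBoard
history <- model |> fit(
  x_train, y_train,                    # illustrative training data
  epochs = 50,
  validation_split = 0.2,
  callbacks = list(
    callback_early_stopping(monitor = "val_loss", patience = 5),
    callback_tensorboard(log_dir = "logs")
  )
)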
CC BY SA Posit Software, PBC • [email protected] • posit.co • Learn more at keras.posit.co • HTML cheatsheets at pos.it/cheatsheets • keras3 1.0.0 • Updated: 2024-06