
TensorFlow v2.0 Cheat Sheet

TensorFlow™

TensorFlow is an open-source software library for high-performance numerical computation. Its flexible architecture enables computation to be deployed easily across a variety of platforms (CPUs, GPUs, and TPUs), as well as across mobile and edge devices, desktops, and clusters of servers. TensorFlow comes with strong support for machine learning and deep learning.
High-Level APIs for Deep Learning

Keras is a handy high-level API standard for deep learning models, widely adopted for fast prototyping and state-of-the-art research. It was originally designed to run on top of different low-level computational frameworks, and the TensorFlow platform therefore implements it fully.

The Sequential API is the most common way to define your neural network model. It corresponds to the mental image we use when thinking about deep learning: a sequence of layers.

import tensorflow as tf
from tensorflow.keras import datasets, layers, models

# Load the data set
mnist = datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# Construct a neural network model
model = models.Sequential()
model.add(layers.Flatten(input_shape=(28, 28)))
model.add(layers.Dense(512, activation=tf.nn.relu))
model.add(layers.Dropout(0.2))
model.add(layers.Dense(10, activation=tf.nn.softmax))

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Train and evaluate the model
model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test)
The Functional API enables engineers to define complex topologies, including multi-input and multi-output models, as well as advanced models with shared layers and models with residual connections. A layer instance is called on a tensor and returns a tensor. An input tensor and an output tensor can then be used to define a Model, which is compiled and trained just as a Sequential model is. Models are themselves callable and can be stacked the same way while reusing trained weights.

from tensorflow.keras.layers import Flatten, Dense, Dropout
from tensorflow.keras.models import Model

# Loading the data set must be here <...>

inputs = tf.keras.Input(shape=(28, 28))
x = Flatten()(inputs)
x = Dense(512, activation='relu')(x)
x = Dropout(0.2)(x)
predictions = Dense(10, activation='softmax')(x)
model = Model(inputs=inputs, outputs=predictions)

# Compile, train, and evaluate the model here <...>
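As a sketch of one such topology, here is a minimal, hypothetical residual connection expressed with the Functional API (the layer sizes are arbitrary assumptions):

from tensorflow.keras.layers import Add, Dense

inputs = tf.keras.Input(shape=(64,))
x = Dense(64, activation='relu')(inputs)
x = Dense(64)(x)

# The skip connection: the block input is added to its output
outputs = Add()([x, inputs])
residual_model = tf.keras.Model(inputs=inputs, outputs=outputs)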

Transfer learning and fine-tuning of pretrained models save you time if your data set does not differ significantly from the original one.

import tensorflow as tf
import tensorflow_datasets as tfds

dataset = tfds.load(name='tf_flowers', as_supervised=True)
NUMBER_OF_CLASSES_IN_DATASET = 5
IMG_SIZE = 160

def preprocess_example(image, label):
    image = tf.cast(image, tf.float32)
    image = (image / 127.5) - 1
    image = tf.image.resize(image, (IMG_SIZE, IMG_SIZE))
    return image, label

DATASET_SIZE = 3670
BATCH_SIZE = 32
train = dataset['train'].map(preprocess_example)
train_batches = train.shuffle(DATASET_SIZE).batch(BATCH_SIZE)

# Load the MobileNetV2 model pretrained on ImageNet data
model = tf.keras.applications.MobileNetV2(
    input_shape=(IMG_SIZE, IMG_SIZE, 3),
    include_top=False, weights='imagenet', pooling='avg')
model.trainable = False

# Add a new layer for multiclass classification
new_output = tf.keras.layers.Dense(
    NUMBER_OF_CLASSES_IN_DATASET, activation='softmax')
new_model = tf.keras.Sequential([model, new_output])

new_model.compile(
    # The labels are integer class IDs, so the sparse loss variant is used
    loss=tf.keras.losses.sparse_categorical_crossentropy,
    optimizer=tf.keras.optimizers.RMSprop(learning_rate=1e-3),
    metrics=['accuracy'])

# Train the new classification layer
new_model.fit(train_batches.repeat(), epochs=10,
              steps_per_epoch=DATASET_SIZE // BATCH_SIZE)

After executing this transfer learning code, you can make the MobileNetV2 layers trainable and fine-tune the resulting model to achieve better results.
Jupyter Notebook

Jupyter Notebook is a web-based interactive computational environment for data science and scientific computing.

Google Colaboratory is a free notebook environment that requires no setup and runs entirely in the cloud. Use it to jump-start a machine learning project.

A Reference Machine Learning Workflow

Here is a conceptual workflow example:

01 Load the training data using pipelines created with tf.data. As an input, you can use in-memory data (NumPy), local storage, or remote persistent storage.

02 Build, train, and validate a model with tf.keras, or use premade estimators.

03 Run and debug with eager execution, then use tf.function for the benefits of graphs (see the first sketch after this list).

04 For large ML training tasks, use the Distribution Strategy API to deploy training on Kubernetes clusters in on-premises or cloud environments (see the second sketch after this list).

05 Export to SavedModel, an interchange format for TensorFlow Serving, TensorFlow Lite, TensorFlow.js, etc.
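Step 03 can be illustrated with a minimal sketch: the hypothetical function below runs eagerly for easy debugging, and tf.function compiles the same logic into a graph.

import tensorflow as tf

def dense_step(x, w, b):
    # Eager by default: easy to inspect and debug step by step
    return tf.nn.relu(tf.matmul(x, w) + b)

# Wrap the same logic into a graph for performance
dense_step_graph = tf.function(dense_step)

x = tf.random.normal((2, 3))
w = tf.random.normal((3, 4))
b = tf.zeros((4,))
print(dense_step_graph(x, w, b))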
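For step 04, here is a minimal sketch of the Distribution Strategy API; the tiny model is hypothetical, and the strategy mirrors variables across the available devices for synchronous training.

import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()

# Variables created inside the scope are mirrored across devices
with strategy.scope():
    model = tf.keras.Sequential(
        [tf.keras.layers.Dense(10, activation='softmax')])
    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])

# A subsequent model.fit(...) call trains synchronously on all devices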
The tf.data API enables building complex input pipelines from simple pieces. A pipeline aggregates data from a distributed file system, applies a transformation to each object, and merges shuffled examples into training batches.

tf.data.Dataset represents a sequence of elements, each containing one or more Tensor objects. This can be exemplified by a pair of tensors representing an image and a corresponding class label.

import tensorflow as tf

DATASET_URL = "https://archive.ics.uci.edu/ml/machine-" \
    "learning-databases/covtype/covtype.data.gz"
DATASET_SIZE = 387698
dataset_path = tf.keras.utils.get_file(
    fname=DATASET_URL.split('/')[-1], origin=DATASET_URL)

COLUMN_NAMES = [
    'Elevation', 'Aspect', 'Slope',
    'Horizontal_Distance_To_Hydrology',
    'Vertical_Distance_To_Hydrology',
    'Horizontal_Distance_To_Roadways',
    'Hillshade_9am', 'Hillshade_Noon', 'Hillshade_3pm',
    'Horizontal_Distance_To_Fire_Points', 'Soil_Type',
    'Cover_Type']

def _parse_line(line):
    # Decode the line into values
    fields = tf.io.decode_csv(
        records=line, record_defaults=[0.0] * 54 + [0])

    # Pack the result into a dictionary
    features = dict(zip(COLUMN_NAMES,
        fields[:10] + [tf.stack(fields[14:54])] + [fields[-1]]))

    # Extract the one-hot encoded class label from the fields
    class_label = tf.argmax(fields[10:14], axis=0)
    return features, class_label

def csv_input_fn(csv_path, test=False,
                 batch_size=DATASET_SIZE // 1000):
    # Create a dataset containing the CSV lines
    dataset = tf.data.TextLineDataset(filenames=csv_path,
                                      compression_type='GZIP')

    # Parse each line
    dataset = dataset.map(_parse_line)

    # Shuffle, repeat, and batch the examples for train and test
    dataset = dataset.shuffle(buffer_size=DATASET_SIZE,
                              seed=42)
    TEST_SIZE = DATASET_SIZE // 10
    return dataset.take(TEST_SIZE).batch(TEST_SIZE) if test \
        else dataset.skip(TEST_SIZE).repeat().batch(batch_size)

Functions from the tf.feature_column namespace are used to put raw data into a TensorFlow data set. A feature column is a high-level configuration abstraction for ingesting and representing features. It does not contain any data, but it tells the model how to transform the raw data so that the data matches the model's expectations. The exact feature column to choose depends on the feature type and the model type. The continuous feature type is handled by numeric_column and can be fed directly into a neural network or a linear model.
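For instance, a minimal sketch (reusing the Elevation column from the code above) of feeding a continuous feature into a Keras model through a DenseFeatures layer:

elevation = tf.feature_column.numeric_column('Elevation')
feature_layer = tf.keras.layers.DenseFeatures([elevation])

# Converts a dictionary of raw tensors into a dense input tensor
print(feature_layer({'Elevation': tf.constant([[2296.0]])}))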

Categorical features can be ingested by functions with the "categorical_column_" prefix, but they need to be wrapped in embedding_column or indicator_column before being fed into neural network models (see the sketch after the code below). For linear models, indicator_column is an internal representation when categorical columns are passed in directly.

feature_columns = [tf.feature_column.numeric_column(name)
                   for name in COLUMN_NAMES[:10]]

# Categorical column with an identity mapping
feature_columns.append(
    tf.feature_column.categorical_column_with_identity(
        'Cover_Type', num_buckets=8))

# Soil_Type[1-40] is a tensor of length 40
feature_columns.append(
    tf.feature_column.numeric_column('Soil_Type', shape=(40,)))
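As a sketch of the wrapping described above (the embedding dimension is an arbitrary assumption):

cover_type = tf.feature_column.categorical_column_with_identity(
    'Cover_Type', num_buckets=8)

# One-hot representation, suitable for a neural network model
cover_type_indicator = tf.feature_column.indicator_column(cover_type)

# Alternatively, a dense, trainable embedding
cover_type_embedding = tf.feature_column.embedding_column(
    cover_type, dimension=3)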
The Estimator API provides high-level encapsulation for best practices: model training, evaluation, prediction, and export for serving. A tf.estimator.Estimator subclass represents a complete model. Its object creates and manages tf.Graph and tf.Session for you. Premade estimators include LinearClassifier, DNNClassifier, and Gradient Boosted Trees. BaselineClassifier and BaselineRegressor help to establish a simple model for a sanity check during further model development (see the sketch after the code below).

# Build, train, and evaluate the estimator
model = tf.estimator.LinearClassifier(feature_columns,
                                      n_classes=4)
model.train(input_fn=lambda: csv_input_fn(dataset_path),
            steps=10000)
model.evaluate(
    input_fn=lambda: csv_input_fn(dataset_path, test=True))
SavedModel contains a complete TensorFlow program and does not require the original model-building code to run, which makes it useful for deploying and sharing models.

# Export the model to SavedModel
_builder = tf.estimator.export. \
    build_parsing_serving_input_receiver_fn
_spec_maker = tf.feature_column.make_parse_example_spec

serving_input_fn = _builder(_spec_maker(feature_columns))

export_path = model.export_saved_model(
    "/tmp/from_estimator/", serving_input_fn)

The following code sample shows how to load and use the saved model with Python.

# Import the model from SavedModel
imported = tf.saved_model.load(export_path)

# Use the imported model for prediction
def predict(new_object):
    example = tf.train.Example()

    # All regular continuous features
    for column in COLUMN_NAMES[:-2]:
        val = new_object[column]
        example.features.feature[column]. \
            float_list.value.extend([val])

    # One-hot encoded feature of 40 columns
    for val in new_object['Soil_Type']:
        example.features.feature['Soil_Type']. \
            float_list.value.extend([val])

    # Categorical column with an ID
    example.features.feature['Cover_Type']. \
        int64_list.value.extend([new_object['Cover_Type']])

    return imported.signatures['predict'](
        examples=tf.constant([example.SerializeToString()]))

predict({
    'Elevation': 2296, 'Aspect': 312, 'Slope': 27,
    'Horizontal_Distance_To_Hydrology': 256,
    'Horizontal_Distance_To_Fire_Points': 836,
    'Horizontal_Distance_To_Roadways': 1273,
    'Vertical_Distance_To_Hydrology': 145,
    'Hillshade_9am': 136, 'Hillshade_Noon': 208,
    'Hillshade_3pm': 206,
    'Soil_Type': [0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0,
                  0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
                  0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    'Cover_Type': 6})

Version 2.1. Get the latest version at www.altoros.com/visuals. Order private training at www.altoros.com/training.
