Note that image data augmentation layers are only active during training (similarly to the Dropout layer).
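To make that concrete, here is a minimal sketch: calling an augmentation stage with `training=True` applies the random transformations, while with `training=False` (the inference behavior) the images pass through unchanged.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# A small augmentation stage, just for demonstration
augment = keras.Sequential([layers.RandomFlip("horizontal"), layers.RandomRotation(0.1)])
images = np.random.rand(4, 32, 32, 3).astype("float32")

augmented = augment(images, training=True)     # random flips/rotations are applied
passthrough = augment(images, training=False)  # images are returned unchanged
print(np.allclose(images, passthrough))        # True
```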
\
```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

# Create a data augmentation stage with horizontal flipping, rotations, zooms
data_augmentation = keras.Sequential(
    [
        layers.RandomFlip("horizontal"),
        layers.RandomRotation(0.1),
        layers.RandomZoom(0.1),
    ]
)

# Load some data
(x_train, y_train), _ = keras.datasets.cifar10.load_data()
input_shape = x_train.shape[1:]
classes = 10

# Create a tf.data pipeline of augmented images (and their labels)
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_dataset = train_dataset.batch(16).map(lambda x, y: (data_augmentation(x), y))

# Create a model and train it on the augmented image data
inputs = keras.Input(shape=input_shape)
x = layers.Rescaling(1.0 / 255)(inputs)  # Rescale inputs
outputs = keras.applications.ResNet50(  # Add the rest of the model
    weights=None, input_shape=input_shape, classes=classes
)(x)
model = keras.Model(inputs, outputs)
model.compile(optimizer="rmsprop", loss="sparse_categorical_crossentropy")
model.fit(train_dataset, steps_per_epoch=5)
```
\
```
Downloading data from https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz
170498071/170498071 [==============================] - 5s 0us/step
5/5 [==============================] - 25s 31ms/step - loss: 9.0505

<keras.src.callbacks.History at 0x7fdb34287820>
```
You can see a similar setup in action in the example image classification from scratch.
\
```python
# Load some data
(x_train, y_train), _ = keras.datasets.cifar10.load_data()
x_train = x_train.reshape((len(x_train), -1))
input_shape = x_train.shape[1:]
classes = 10

# Create a Normalization layer and set its internal state using the training data
normalizer = layers.Normalization()
normalizer.adapt(x_train)

# Create a model that includes the normalization layer
inputs = keras.Input(shape=input_shape)
x = normalizer(inputs)
outputs = layers.Dense(classes, activation="softmax")(x)
model = keras.Model(inputs, outputs)

# Train the model
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(x_train, y_train)
```
\
```
1563/1563 [==============================] - 3s 2ms/step - loss: 2.1271

<keras.src.callbacks.History at 0x7fda8c6f0730>
```
\
```python
# Define some toy data
data = tf.constant([["a"], ["b"], ["c"], ["b"], ["c"], ["a"]])

# Use StringLookup to build an index of the feature values and encode output.
lookup = layers.StringLookup(output_mode="one_hot")
lookup.adapt(data)

# Convert new test data (which includes unknown feature values)
test_data = tf.constant([["a"], ["b"], ["c"], ["d"], ["e"], [""]])
encoded_data = lookup(test_data)
print(encoded_data)
```
\
```
tf.Tensor(
[[0. 0. 0. 1.]
 [0. 0. 1. 0.]
 [0. 1. 0. 0.]
 [1. 0. 0. 0.]
 [1. 0. 0. 0.]
 [1. 0. 0. 0.]], shape=(6, 4), dtype=float32)
```
Note that, here, index 0 is reserved for out-of-vocabulary values (values that were not seen during adapt()).
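As a quick sanity check, you can inspect the learned vocabulary directly; here is a minimal sketch continuing the example above:

```python
# The first entry is the out-of-vocabulary token (rendered as "[UNK]" by default);
# the remaining entries are the values seen during adapt().
print(lookup.get_vocabulary())
# Unseen values such as "d", "e", or "" are all mapped to that index-0 slot.
```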
You can see the StringLookup in action in the Structured data classification from scratch example.
```python
# Define some toy data
data = tf.constant([[10], [20], [20], [10], [30], [0]])

# Use IntegerLookup to build an index of the feature values and encode output.
lookup = layers.IntegerLookup(output_mode="one_hot")
lookup.adapt(data)

# Convert new test data (which includes unknown feature values)
test_data = tf.constant([[10], [10], [20], [50], [60], [0]])
encoded_data = lookup(test_data)
print(encoded_data)
```
\
```
tf.Tensor(
[[0. 0. 1. 0. 0.]
 [0. 0. 1. 0. 0.]
 [0. 1. 0. 0. 0.]
 [1. 0. 0. 0. 0.]
 [1. 0. 0. 0. 0.]
 [0. 0. 0. 0. 1.]], shape=(6, 5), dtype=float32)
```
Note that index 0 is reserved for out-of-vocabulary values (values that were not seen during adapt()), as the output above shows: the unseen values 50 and 60 both map to index 0, while 0 is treated as an ordinary vocabulary term. If you also want to reserve a dedicated index for missing values, you can configure this behavior with the mask_token and oov_token constructor arguments of IntegerLookup.
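For instance, here is a hedged sketch of reserving a slot for missing values encoded as 0 (the exact ordering of the non-reserved entries may vary):

```python
# Sketch: with mask_token=0, the value 0 gets its own reserved index, and unseen
# values map to the OOV index (assumed layout: mask token first, then OOV token).
lookup = layers.IntegerLookup(mask_token=0, oov_token=-1)  # default output_mode="int"
lookup.adapt(tf.constant([[10], [20], [20], [10], [30], [0]]))
print(lookup.get_vocabulary())                 # reserved tokens appear at the front
print(lookup(tf.constant([[0], [50], [10]])))  # 0 -> mask index, 50 -> OOV index
```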
You can see the IntegerLookup in action in the example structured data classification from scratch.
If you have a categorical feature that can take many different values (on the order of 1e4 or higher), where each value only appears a few times in the data, it becomes impractical and ineffective to index and one-hot encode the feature values. Instead, it can be a good idea to apply the "hashing trick": hash the values to a vector of fixed size. This keeps the size of the feature space manageable, and removes the need for explicit indexing.
\
```python
import numpy as np

# Sample data: 10,000 random integers with values between 0 and 100,000
data = np.random.randint(0, 100000, size=(10000, 1))

# Use the Hashing layer to hash the values to the range [0, 64)
hasher = layers.Hashing(num_bins=64, salt=1337)

# Use the CategoryEncoding layer to multi-hot encode the hashed values
encoder = layers.CategoryEncoding(num_tokens=64, output_mode="multi_hot")
encoded_data = encoder(hasher(data))
print(encoded_data.shape)
```
\
```
(10000, 64)
```
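Alternatively, instead of multi-hot encoding the hashed bins, you can feed the integer bin indices into an Embedding layer and let the model learn a dense representation per bin. Here is a brief sketch; the layer sizes are arbitrary choices for illustration:

```python
# Sketch: learn an 8-dimensional embedding for each of the 64 hash bins.
hashed_inputs = keras.Input(shape=(1,), dtype="int64")
x = layers.Embedding(input_dim=64, output_dim=8)(hashed_inputs)
x = layers.Flatten()(x)
outputs = layers.Dense(1)(x)
hashed_feature_model = keras.Model(hashed_inputs, outputs)

# The Hashing layer defined above produces the bin indices fed to this model.
print(hashed_feature_model(hasher(data[:4])))
```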
This is how you should preprocess text to be passed to an Embedding layer.
\
```python
# Define some text data to adapt the layer
adapt_data = tf.constant(
    [
        "The Brain is wider than the Sky",
        "For put them side by side",
        "The one the other will contain",
        "With ease and You beside",
    ]
)

# Create a TextVectorization layer
text_vectorizer = layers.TextVectorization(output_mode="int")
# Index the vocabulary via `adapt()`
text_vectorizer.adapt(adapt_data)

# Try out the layer
print(
    "Encoded text:\n",
    text_vectorizer(["The Brain is deeper than the sea"]).numpy(),
)

# Create a simple model
inputs = keras.Input(shape=(None,), dtype="int64")
x = layers.Embedding(input_dim=text_vectorizer.vocabulary_size(), output_dim=16)(inputs)
x = layers.GRU(8)(x)
outputs = layers.Dense(1)(x)
model = keras.Model(inputs, outputs)

# Create a labeled dataset (which includes unknown tokens)
train_dataset = tf.data.Dataset.from_tensor_slices(
    (["The Brain is deeper than the sea", "for if they are held Blue to Blue"], [1, 0])
)

# Preprocess the string inputs, turning them into int sequences
train_dataset = train_dataset.batch(2).map(lambda x, y: (text_vectorizer(x), y))

# Train the model on the int sequences
print("\nTraining model...")
model.compile(optimizer="rmsprop", loss="mse")
model.fit(train_dataset)

# For inference, you can export a model that accepts strings as input
inputs = keras.Input(shape=(1,), dtype="string")
x = text_vectorizer(inputs)
outputs = model(x)
end_to_end_model = keras.Model(inputs, outputs)

# Call the end-to-end model on test data (which includes unknown tokens)
print("\nCalling end-to-end model on test string...")
test_data = tf.constant(["The one the other will absorb"])
test_output = end_to_end_model(test_data)
print("Model output:", test_output)
```
\
```
Encoded text:
 [[ 2 19 14 1 9 2 1]]

Training model...
1/1 [==============================] - 2s 2s/step - loss: 0.5296

Calling end-to-end model on test string...
Model output: tf.Tensor([[0.01208781]], shape=(1, 1), dtype=float32)
```
You can see the TextVectorization layer in action, combined with an Embedding layer, in the example text classification from scratch.
Note that when training such a model, for best performance, you should always use the TextVectorization layer as part of the input pipeline.
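Concretely, the pattern to aim for is to vectorize the raw strings inside the tf.data pipeline and prefetch, so text preprocessing runs asynchronously with training. A sketch, reusing the text_vectorizer above (the toy strings and labels here are placeholders):

```python
# Sketch: starting from (raw_string, label) pairs, apply TextVectorization in
# the tf.data pipeline so preprocessing overlaps with training on GPU/TPU.
raw_dataset = tf.data.Dataset.from_tensor_slices(
    (["some raw text", "more raw text"], [0, 1])
)
train_ds = (
    raw_dataset.batch(2)
    .map(lambda x, y: (text_vectorizer(x), y), num_parallel_calls=tf.data.AUTOTUNE)
    .prefetch(tf.data.AUTOTUNE)
)
```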
This is how you should preprocess text to be passed to a Dense layer.
\
```python
# Define some text data to adapt the layer
adapt_data = tf.constant(
    [
        "The Brain is wider than the Sky",
        "For put them side by side",
        "The one the other will contain",
        "With ease and You beside",
    ]
)

# Instantiate TextVectorization with "multi_hot" output_mode
# and ngrams=2 (index all bigrams)
text_vectorizer = layers.TextVectorization(output_mode="multi_hot", ngrams=2)
# Index the bigrams via `adapt()`
text_vectorizer.adapt(adapt_data)

# Try out the layer
print(
    "Encoded text:\n",
    text_vectorizer(["The Brain is deeper than the sea"]).numpy(),
)

# Create a simple model
inputs = keras.Input(shape=(text_vectorizer.vocabulary_size(),))
outputs = layers.Dense(1)(inputs)
model = keras.Model(inputs, outputs)

# Create a labeled dataset (which includes unknown tokens)
train_dataset = tf.data.Dataset.from_tensor_slices(
    (["The Brain is deeper than the sea", "for if they are held Blue to Blue"], [1, 0])
)

# Preprocess the string inputs, turning them into multi-hot encoded bigram vectors
train_dataset = train_dataset.batch(2).map(lambda x, y: (text_vectorizer(x), y))

# Train the model on the multi-hot encoded data
print("\nTraining model...")
model.compile(optimizer="rmsprop", loss="mse")
model.fit(train_dataset)

# For inference, you can export a model that accepts strings as input
inputs = keras.Input(shape=(1,), dtype="string")
x = text_vectorizer(inputs)
outputs = model(x)
end_to_end_model = keras.Model(inputs, outputs)

# Call the end-to-end model on test data (which includes unknown tokens)
print("\nCalling end-to-end model on test string...")
test_data = tf.constant(["The one the other will absorb"])
test_output = end_to_end_model(test_data)
print("Model output:", test_output)
```
\
```
WARNING:tensorflow:5 out of the last 1567 calls to <function PreprocessingLayer.make_adapt_function.<locals>.adapt_step at 0x7fda8c3463a0> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.

Encoded text:
 [[1. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 1. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 1. 1. 0. 0. 0.]]

Training model...
1/1 [==============================] - 0s 392ms/step - loss: 0.0805

Calling end-to-end model on test string...
Model output: tf.Tensor([[0.58644605]], shape=(1, 1), dtype=float32)
```
This is an alternative way of preprocessing text before passing it to a Dense layer.
\
```python
# Define some text data to adapt the layer
adapt_data = tf.constant(
    [
        "The Brain is wider than the Sky",
        "For put them side by side",
        "The one the other will contain",
        "With ease and You beside",
    ]
)

# Instantiate TextVectorization with "tf-idf" output_mode
# (multi-hot with TF-IDF weighting) and ngrams=2 (index all bigrams)
text_vectorizer = layers.TextVectorization(output_mode="tf-idf", ngrams=2)
# Index the bigrams and learn the TF-IDF weights via `adapt()`
text_vectorizer.adapt(adapt_data)

# Try out the layer
print(
    "Encoded text:\n",
    text_vectorizer(["The Brain is deeper than the sea"]).numpy(),
)

# Create a simple model
inputs = keras.Input(shape=(text_vectorizer.vocabulary_size(),))
outputs = layers.Dense(1)(inputs)
model = keras.Model(inputs, outputs)

# Create a labeled dataset (which includes unknown tokens)
train_dataset = tf.data.Dataset.from_tensor_slices(
    (["The Brain is deeper than the sea", "for if they are held Blue to Blue"], [1, 0])
)

# Preprocess the string inputs, turning them into TF-IDF weighted bigram vectors
train_dataset = train_dataset.batch(2).map(lambda x, y: (text_vectorizer(x), y))

# Train the model on the TF-IDF weighted data
print("\nTraining model...")
model.compile(optimizer="rmsprop", loss="mse")
model.fit(train_dataset)

# For inference, you can export a model that accepts strings as input
inputs = keras.Input(shape=(1,), dtype="string")
x = text_vectorizer(inputs)
outputs = model(x)
end_to_end_model = keras.Model(inputs, outputs)

# Call the end-to-end model on test data (which includes unknown tokens)
print("\nCalling end-to-end model on test string...")
test_data = tf.constant(["The one the other will absorb"])
test_output = end_to_end_model(test_data)
print("Model output:", test_output)
```
\
```
WARNING:tensorflow:6 out of the last 1568 calls to <function PreprocessingLayer.make_adapt_function.<locals>.adapt_step at 0x7fda8c0569d0> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.

Encoded text:
 [[5.4616475 1.6945957 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1.0986123 1.0986123 1.0986123 0. 0. 0. 0. 0. 0. 0. 0. 0. 1.0986123 0. 0. 0. 0. 0. 0. 0. 1.0986123 1.0986123 0. 0. 0. ]]

Training model...
1/1 [==============================] - 0s 363ms/step - loss: 6.8945

Calling end-to-end model on test string...
Model output: tf.Tensor([[0.25758243]], shape=(1, 1), dtype=float32)
```
You may find yourself working with a very large vocabulary in a TextVectorization, StringLookup, or IntegerLookup layer. Typically, a vocabulary larger than 500MB would be considered "very large".
In such a case, for best performance, you should avoid using adapt(). Instead, precompute your vocabulary ahead of time (you could use Apache Beam or TF Transform for this) and store it in a file. Then load the vocabulary into the layer at construction time by passing the file path as the vocabulary argument.
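For example, here is a hedged sketch; the file name vocabulary.txt is hypothetical and is assumed to contain one token per line:

```python
# Sketch: skip adapt() entirely and load a precomputed vocabulary from disk.
# StringLookup (and TextVectorization) accept a path to a text file as the
# `vocabulary` argument.
lookup = layers.StringLookup(vocabulary="vocabulary.txt", output_mode="int")
# The layer is immediately ready to use; no adapt() call is required.
```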
There is an outstanding issue that causes performance to degrade when using a TextVectorization, StringLookup, or IntegerLookup layer while training on a TPU pod or on multiple machines via ParameterServerStrategy. This is slated to be fixed in TensorFlow 2.7.
\ \
:::info Originally published on the TensorFlow website, this article appears here under a new headline and is licensed under CC BY 4.0. Code samples are shared under the Apache 2.0 License.
:::
\


