How do I use dropout in TensorFlow with Python?


Dropout is a regularization technique used to reduce overfitting in neural networks. In TensorFlow, it can be implemented with the tf.nn.dropout() function.

Example code

# TensorFlow 1.x-style API (available under tf.compat.v1 in TensorFlow 2.x)
import tensorflow as tf

# Create a placeholder for the 784-dimensional input
x = tf.placeholder(tf.float32, shape=[None, 784])

# Placeholder for the keep probability (probability that a unit is kept)
keep_prob = tf.placeholder(tf.float32)

# Create a fully-connected layer with dropout
fc1 = tf.layers.dense(x, 256, activation=tf.nn.relu)
fc1_drop = tf.nn.dropout(fc1, keep_prob)

# Output layer
logits = tf.layers.dense(fc1_drop, 10)

In TensorFlow 1.x, the tf.nn.dropout() function takes two arguments: the input tensor and a keep probability, a float between 0 and 1 giving the probability that each unit is kept (so a keep_prob of 0.8 drops roughly 20% of the units). In TensorFlow 2.x the second argument is instead rate, the probability of dropping a unit. In the example above, tf.nn.dropout() is applied to the output of the fully-connected layer, with the keep probability supplied through the keep_prob placeholder.

The output of the example code is a logits tensor with a shape of [None, 10].
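
Because keep_prob is a placeholder, the same graph can serve both training and evaluation: feed a value below 1.0 while training and 1.0 when evaluating. The lines below are a minimal, hypothetical run of the graph above using the TensorFlow 1.x session API; the random batch is only for illustration.

import numpy as np

# Hypothetical run of the graph defined above
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    batch = np.random.rand(32, 784).astype(np.float32)

    # Training-style forward pass: keep 50% of the units in fc1
    train_logits = sess.run(logits, feed_dict={x: batch, keep_prob: 0.5})

    # Evaluation: keep every unit (dropout disabled)
    eval_logits = sess.run(logits, feed_dict={x: batch, keep_prob: 1.0})

    print(train_logits.shape)  # (32, 10)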

Code explanation

  1. x = tf.placeholder(tf.float32, shape=[None, 784]): Creates a placeholder for the 784-dimensional input.
  2. keep_prob = tf.placeholder(tf.float32): Creates a placeholder for the keep probability, so different values can be fed during training and evaluation.
  3. fc1 = tf.layers.dense(x, 256, activation=tf.nn.relu): Creates a fully-connected layer with 256 units and ReLU activation.
  4. fc1_drop = tf.nn.dropout(fc1, keep_prob): Applies dropout to the layer's output, keeping each unit with probability keep_prob.
  5. logits = tf.layers.dense(fc1_drop, 10): Creates the 10-unit output (logits) layer.
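
Note that tf.placeholder and tf.layers.dense are TensorFlow 1.x APIs; in TensorFlow 2.x they are only available under tf.compat.v1. The idiomatic way to add dropout in current TensorFlow is the tf.keras.layers.Dropout layer, whose rate argument is the probability of dropping a unit (i.e. rate = 1 - keep_prob). A rough Keras sketch of the same network, not part of the original example, could look like this:

import tensorflow as tf

# Keras equivalent: Dropout's rate is the probability of DROPPING a unit
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(784,)),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dropout(rate=0.5),
    tf.keras.layers.Dense(10),  # logits
])

# Dropout is active during model.fit() and disabled during
# model.evaluate() / model.predict(); pass training=True when calling
# the model directly to force it on.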
