TensorFlow usage and parameters in detail: a quick reference of common TensorFlow commands, with usage notes.

General notes

Standard flow

# usually have something like
X = tf.placeholder(tf.float32, shape=(None, n_features), name='X')

# usually build network in `with tf.variable_scope`
# set up variables with `tf.get_variable`

# establish the session and initialize variables
sess = tf.Session()
init = tf.global_variables_initializer()
# can also do a subset, e.g.
# init_ab = tf.variables_initializer([a, b], name='init_subset')
sess.run(init)
# can also run solo vars, e.g. `sess.run(W.initializer)`

# define a loss, then set up an optimizer to minimize it
loss = ...  # some function of the network output and the labels
train_op = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)

# run the training
for epoch in range(training_epochs):
    _, l = sess.run([train_op, loss], feed_dict={X: xs, Y: labels})
    # can get cost in a separate call instead
    # l = sess.run(loss, feed_dict={X: xs, Y: labels})
    if epoch % epoch_print_frequency == 0:
        print(epoch, l)

# examine learned model params
w_val = sess.run(w)
print('learned parameters', w_val)

sess.close()

Saving and loading models - checkpoints
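A minimal sketch of checkpointing with tf.train.Saver, written against the compat.v1 API so it also runs under TF 2.x (the calls match the TF 1.x style used throughout these notes; the temporary directory and variable name are placeholders):

```python
import os
import tempfile
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

tf.reset_default_graph()
w = tf.get_variable('w', initializer=tf.constant([1.0, 2.0]))
saver = tf.train.Saver()

ckpt_path = os.path.join(tempfile.mkdtemp(), 'model.ckpt')

# save: run the initializer, then write a checkpoint
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    save_path = saver.save(sess, ckpt_path)

# restore: no initializer needed; restore() assigns the saved values
with tf.Session() as sess:
    saver.restore(sess, save_path)
    restored = sess.run(w)
```

Note that after `saver.restore`, running the initializer would overwrite the restored values, so it is skipped.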

Saving and loading models - protobufs
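A sketch of exporting a frozen GraphDef protobuf and reloading it, again using the compat.v1 API (the node names `x`/`y` and the output path are placeholders; `convert_variables_to_constants` bakes the variable values into the graph as constants):

```python
import os
import tempfile
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

tf.reset_default_graph()
x = tf.placeholder(tf.float32, shape=(None, 2), name='x')
w = tf.get_variable('w', initializer=tf.ones([2, 1]))
y = tf.matmul(x, w, name='y')

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # fold variables into constants so the graph is self-contained
    frozen = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph_def, ['y'])

pb_dir = tempfile.mkdtemp()
tf.train.write_graph(frozen, pb_dir, 'frozen.pb', as_text=False)

# reload the protobuf into a fresh graph
tf.reset_default_graph()
graph_def = tf.GraphDef()
with open(os.path.join(pb_dir, 'frozen.pb'), 'rb') as f:
    graph_def.ParseFromString(f.read())
tf.import_graph_def(graph_def, name='')

with tf.Session() as sess:
    x_t = sess.graph.get_tensor_by_name('x:0')
    y_t = sess.graph.get_tensor_by_name('y:0')
    out = sess.run(y_t, feed_dict={x_t: [[1.0, 2.0]]})
```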

Graph manipulation

Note: constants are stored in the graph definition. This makes loading graphs expensive if constants are big. Therefore, only use constants for primitive types and use variables or readers for data that requires more memory.
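To see the cost concretely, a small sketch (compat.v1 API, assumed so it also runs under TF 2.x) comparing the serialized graph size with a large constant vs. a variable whose values come from an initializer function:

```python
import numpy as np
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

big = np.zeros((1000, 1000), dtype=np.float32)  # ~4 MB of data

# constant: all 4 MB are serialized into the graph definition
tf.reset_default_graph()
c = tf.constant(big)
const_graph_bytes = len(
    tf.get_default_graph().as_graph_def().SerializeToString())

# variable with an initializer *function*: the graph stores only the node;
# the values are materialized inside the session at init time
tf.reset_default_graph()
v = tf.get_variable('v', shape=(1000, 1000),
                    initializer=tf.zeros_initializer())
var_graph_bytes = len(
    tf.get_default_graph().as_graph_def().SerializeToString())
```

Note the variable is given a shape plus an initializer function rather than the array itself; passing the array as the initial value would embed it in the graph just like a constant.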

Note: don’t defer creating ops until they are needed; define them once, outside any loop. Creating an op inside a loop adds a new node to the graph on every iteration (N iterations means N copies of the op!). Always fully separate the definition of ops from their computation.
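The note above can be demonstrated by counting graph nodes (a sketch using the compat.v1 API so it also runs under TF 2.x):

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

# Bad: creating the op inside the loop adds new nodes every iteration
tf.reset_default_graph()
x = tf.constant(2)
with tf.Session() as sess:
    for _ in range(10):
        sess.run(tf.add(x, 1))   # a fresh Add node (and Const) each time!
n_ops_bad = len(tf.get_default_graph().get_operations())

# Good: define once, run many times; the graph stays at 3 nodes
tf.reset_default_graph()
x = tf.constant(2)
y = tf.add(x, 1)
with tf.Session() as sess:
    for _ in range(10):
        sess.run(y)
n_ops_good = len(tf.get_default_graph().get_operations())
```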

Creating variables

Note that we also used a TF initializer here; this creates new random values every time the initializer op is run, e.g. with sess.run(tf.global_variables_initializer()) (the older initialize_all_variables() is deprecated).
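A sketch of this behavior (compat.v1 API, assumed so it also runs under TF 2.x): re-running the initializer draws fresh random values for the variable.

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

tf.reset_default_graph()
w = tf.get_variable('weights', shape=(2, 2),
                    initializer=tf.random_normal_initializer())

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    first = sess.run(w)
    # running the initializer again samples new random values
    sess.run(tf.global_variables_initializer())
    second = sess.run(w)
```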

Reshaping tricks

Shape

tensor.shape
tensor.shape.as_list()
tensor.get_shape().as_list()
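For reshaping, tf.reshape accepts one -1 dimension, which is inferred from the remaining sizes. A small sketch (compat.v1 API, assumed so it also runs under TF 2.x):

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

tf.reset_default_graph()
t = tf.zeros([2, 3, 4])

flat = tf.reshape(t, [-1])      # -1 infers 2*3*4 = 24
rows = tf.reshape(t, [2, -1])   # -1 infers 3*4 = 12

flat_shape = flat.get_shape().as_list()
rows_shape = rows.get_shape().as_list()
```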

Evaluating tensors

print(sess.run(W))
print(W.eval())

TensorBoard

with tf.Session() as sess:
    writer = tf.summary.FileWriter('./graphs', sess.graph)
    ...   # computations

writer.close()

Then, externally:

$ python prog.py
$ tensorboard --logdir="./graphs" --port 6006

Then, go to http://localhost:6006

InteractiveSession

InteractiveSession makes itself the default:

sess = tf.InteractiveSession()
...
expr.eval()  # no need to specify `sess`
sess.close()

Feeding

Feeding values is useful for testing, etc.

a = tf.add(2, 5)
b = tf.multiply(a, 3)
with tf.Session() as sess:
    replace_dict = {a: 15}
    sess.run(b, feed_dict=replace_dict)  # returns 45

Simple tensor evaluation

import tensorflow as tf
w = tf.get_variable('weights', [2, 2], initializer=tf.random_normal_initializer())
[op.name for op in tf.get_default_graph().get_operations()]
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    t = tf.get_default_graph().get_tensor_by_name('weights:0').eval()
    print(t)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    t = sess.run(w)
    print(t)

Interactive sessions

sess = tf.Session()
print(sess.run(my_tensor))
sess.close()

Note, with a plain (non-interactive) session:

sess = tf.Session()
print(my_tensor.eval(session=sess))
sess.close()

i.e., you need to specify the session when calling eval().

Tensorboard graphs

tf.reset_default_graph()
writer = tf.summary.FileWriter('./simple_example_graph')
x = 2
y = 3
op1 = tf.add(x, y)
op2 = tf.multiply(x, y)
op3 = tf.pow(op1, op2)
with tf.Session() as sess:
    writer.add_graph(sess.graph)
writer.close()

Suppress warnings

import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'