TensorFlow and Matrices containing Variables

Recently Pablo, Dennis and I were wondering what the best way is to build tensors with variables inside them. I’ve found three ways, which largely mirror the numpy equivalents; they’re basically just different combinations of stacking, concatting, reshaping and gathering. [related SO question]

import tensorflow as tf
import numpy as np

a = tf.Variable(1.0, dtype=np.float32)
b = tf.Variable(2.0, dtype=np.float32)
with tf.GradientTape() as t:
    # these lines are equivalent:
    M = tf.reshape(tf.gather([a**2, b**2, a**2/2, 1], [0, 2, 3, 1]), [2, 2])
    M = tf.reshape(tf.stack([a**2, a**2/2, 1, b**2]), [2, 2])
    M = tf.concat([tf.stack([[a**2, a**2/2]]), tf.stack([[1, b**2]])], 0)
    gradients = t.gradient(tf.linalg.det(M), [a, b])
    print(gradients)
[<tf.Tensor: shape=(), dtype=float32, numpy=7.000001>, <tf.Tensor: shape=(), dtype=float32, numpy=4.0000005>]
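
(As a quick sanity check: det(M) = a**2 * b**2 - a**2/2, so d/da = 2*a*b**2 - a = 7 and d/db = 2*a**2*b = 4, which matches the printed gradients.)

And since I said these largely mirror the numpy equivalents, here is a rough sketch of the numpy counterparts as I understand them, with plain floats standing in for the Variables (np.take playing the role of tf.gather):

import numpy as np

a, b = 1.0, 2.0
# np.take + reshape mirrors tf.gather + tf.reshape
M = np.take(np.array([a**2, b**2, a**2/2, 1.0]), [0, 2, 3, 1]).reshape(2, 2)
# np.stack of scalars mirrors tf.stack
M = np.stack([a**2, a**2/2, 1.0, b**2]).reshape(2, 2)
# np.concatenate of row vectors mirrors tf.concat
M = np.concatenate([np.stack([[a**2, a**2/2]]), np.stack([[1.0, b**2]])], 0)

None of this is differentiable, of course; the point is just that the shape-building functions line up.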

I thought I’d just add that one (possibly unwise) default behaviour of the gradient method is that, if you ask for the derivative of a matrix (or any non-scalar target), it returns the derivative of the reduce_sum of that matrix:

with tf.GradientTape() as t:
    M = tf.concat([tf.stack([[a**2, a**2/2]]), tf.stack([[1, b**2]])], 0)
    gradients = t.gradient(M, [a, b])
    print(gradients)
[<tf.Tensor: shape=(), dtype=float32, numpy=3.0>, <tf.Tensor: shape=(), dtype=float32, numpy=4.0>]

As one can see, this is the derivative of the sum of M: d(a**2 + a**2/2)/da = 3a = 3 and d(b**2)/db = 2b = 4.
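
If you actually want the per-element derivatives of M rather than the derivative of its sum, my understanding is that GradientTape.jacobian is the tool for that. Here’s a sketch (persistent=True is only there so the tape can be queried twice):

with tf.GradientTape(persistent=True) as t:
    M = tf.concat([tf.stack([[a**2, a**2/2]]), tf.stack([[1, b**2]])], 0)
    s = tf.reduce_sum(M)  # the explicit sum, recorded on the tape

# same numbers as asking for t.gradient(M, [a, b]) above:
print(t.gradient(s, [a, b]))
# one (2, 2) tensor of per-element derivatives for each of a and b:
print(t.jacobian(M, [a, b]))

The first print reproduces the [3.0, 4.0] result, which is consistent with the default behaviour described above.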