Let me begin this blog with a brief introduction to TensorFlow, now one of the most popular frameworks in the world of Deep Learning.
TensorFlow is Google’s gift to the world of data science and machine learning. It was open-sourced by the Google Brain team in 2015, TensorFlow 1.0 followed in 2017, and the current major version, 2.0, was released in 2019.
The primary interface of TensorFlow is Python, but its core is written in C++ for better performance. Like other frameworks, TensorFlow represents operations as a graph, which can be executed on a single machine or distributed across several. TensorBoard is the utility TensorFlow provides to visualize these graphs and their operations. Like Theano, Torch, and similar frameworks, TensorFlow supports execution on both CPUs and GPUs.
The tensor is the main and central data type of TensorFlow. A tensor is a regular multidimensional array, much like NumPy’s ndarray. Unlike an ndarray, however, a tensor cannot be manipulated with regular Python routines; it is accessed through the TensorFlow API, which provides a vast list of functions to create, transform, and operate on tensors.
The following are three important points about tensors that we should always keep in mind:
Every tensor is an instance of the Tensor class.
A tensor can contain data of any data type, such as numbers, strings, or boolean values, but all the elements in a tensor must be of the same data type.
Using tf package functions, we can create, transform, and operate on tensors.
Just as most programs start by declaring variables, TensorFlow applications start by creating tensors. A tensor is an array with zero or more dimensions.
Depending on the dimensions, tensors can be categorized as:
Scalar — a zero-dimensional tensor
Vector — a one-dimensional tensor
Matrix — a two-dimensional tensor
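To make these categories concrete, here is a minimal sketch, assuming TensorFlow 2.x with eager execution:

```python
import tensorflow as tf

scalar = tf.constant(7)                 # zero-dimensional tensor
vector = tf.constant([1.0, 2.0, 3.0])   # one-dimensional tensor
matrix = tf.constant([[1, 2], [3, 4]])  # two-dimensional tensor

print(int(tf.rank(scalar)))  # 0
print(int(tf.rank(vector)))  # 1
print(int(tf.rank(matrix)))  # 2
```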
The tf package provides several functions to create tensors and fill them with known values. The table below lists the functions the tf package offers, along with a description of each.
The optional name argument in the functions in Table 1 is the identifier for the tensor. Applications can use the name of the tensor to access it through the tensor’s graph.
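In eager mode (the TensorFlow 2.x default) the name is largely ignored, but in graph mode it labels the underlying op. A small sketch, assuming TensorFlow 2.x, where "my_tensor" is an arbitrary name chosen for illustration:

```python
import tensorflow as tf

g = tf.Graph()
with g.as_default():
    # the optional name argument labels the op in the graph
    t = tf.constant([1, 2, 3], name="my_tensor")

# the op can now be looked up in the graph by that name
op = g.get_operation_by_name("my_tensor")
print(op.name)  # my_tensor
print(t.name)   # my_tensor:0
```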
A tensor can have multiple dimensions. A few terms to note:
Rank — Number of dimensions in a tensor
Shape — the lengths of a tensor’s dimensions, expressed as an array
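For example, a sketch assuming TensorFlow 2.x:

```python
import tensorflow as tf

t = tf.zeros([3, 4, 5])
print(int(tf.rank(t)))    # rank: 3 dimensions
print(t.shape.as_list())  # shape: [3, 4, 5]
```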
Many functions in Table 1 have “shape” as one of their parameters. The following examples demonstrate how to set the shape parameter:
[] — The tensor contains a single value.
[4] — The tensor is a one-dimensional array containing four values.
[2, 4] — The tensor is a 2x4 matrix.
[2, 4, 5] — The tensor is a multidimensional array whose dimensions equal 2, 4, and 5.
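The same shapes can be passed to any of the creation functions; a sketch using tf.zeros, assuming TensorFlow 2.x:

```python
import tensorflow as tf

a = tf.zeros([])         # a single value (a scalar)
b = tf.zeros([4])        # one-dimensional array of four values
c = tf.zeros([2, 4])     # 2x4 matrix
d = tf.zeros([2, 4, 5])  # multidimensional array with dimensions 2, 4, 5

print(d.shape.as_list())  # [2, 4, 5]
```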
Most of the functions in Table 1 also have a “dtype” parameter, which identifies the data type of the tensor’s elements. The default value of “dtype” is float32, which means that, by default, tensors contain single-precision floating-point values. The table below lists float32 and the other possible data types that tensor elements can have.
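For example, assuming TensorFlow 2.x:

```python
import tensorflow as tf

default = tf.zeros([2, 2])               # dtype defaults to float32
ints = tf.zeros([2, 2], dtype=tf.int32)  # explicitly request 32-bit integers

print(default.dtype)  # <dtype: 'float32'>
print(ints.dtype)     # <dtype: 'int32'>
```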
Creating Tensors with Random values
Many TensorFlow applications require tensors that contain random values. The tf package provides many functions for creating random-valued tensors and the below table lists them.
All functions in Table 3 accept a seed parameter that initializes the random number generator. Setting a seed is important when we want reproducible results: the same seed produces the same sequence of random values. The set_random_seed function (tf.random.set_seed in TensorFlow 2.x) accepts an integer and sets the graph-level seed for the random numbers in the current graph.
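A reproducibility sketch, assuming TensorFlow 2.x (where the function is tf.random.set_seed):

```python
import tensorflow as tf

tf.random.set_seed(42)             # graph-level seed
a = tf.random.normal([3], seed=7)  # op-level seed

tf.random.set_seed(42)             # reset to the same graph-level seed
b = tf.random.normal([3], seed=7)

# identical graph-level and op-level seeds yield identical sequences
print(bool(tf.reduce_all(tf.equal(a, b))))  # True
```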
The random_shuffle function shuffles a tensor’s elements along its first dimension; it returns a new tensor of the same shape rather than modifying the input in place.
The other functions in Table 3 create new tensors filled with random values according to their input parameters.
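A short sketch of these random-value functions, assuming TensorFlow 2.x (where they live under tf.random):

```python
import tensorflow as tf

t = tf.constant([1, 2, 3, 4, 5])
shuffled = tf.random.shuffle(t)  # new tensor: same values, first dimension permuted

normal = tf.random.normal([2, 3], mean=0.0, stddev=1.0)   # Gaussian values
uniform = tf.random.uniform([2, 3], minval=0, maxval=10)  # uniform in [0, 10)

print(sorted(shuffled.numpy().tolist()))  # [1, 2, 3, 4, 5]
print(normal.shape.as_list())             # [2, 3]
```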
I hope this blog helps you understand tensors, the most important and essential data type of TensorFlow, and how to create and use them in an application.
Thanks for reading!