Tensors — Basics of PyTorch Programming

Arun Purakkatt
4 min readJul 30, 2020

We discussed the concepts of deep learning and neural networks in our previous post. In this post we will discuss the basic data element of a neural network program: the tensor.

1. What is a Tensor?

A tensor is the primary data structure of a neural network. The mathematical generalization of a tensor can be understood from the list below.

Tensors can be thought of as nd-arrays, i.e. multi-dimensional arrays. To generalize the term tensor:

· A scalar is a 0-dimensional tensor

· A vector is a 1-dimensional tensor

· A matrix is a 2-dimensional tensor

· An nd-array is an n-dimensional tensor

1.1 Attributes of a Tensor

· Rank — the number of indices needed to access an element.

· Axis — a specific dimension of a tensor.

· Shape — the length of each axis of the tensor.

2. PyTorch and Tensor Operation Basics

PyTorch is a library for processing tensors. A tensor can have any number of dimensions and a different length along each dimension.

2.1 Defining Tensors:

Let's import PyTorch and define a tensor. Here we create a tensor from a single number. The data type of the tensor can be checked using the .dtype attribute.
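A minimal sketch of this step (the variable name t1 is illustrative):

```python
import torch

# Create a tensor from a single number — a 0-dimensional (scalar) tensor.
t1 = torch.tensor(4.)

print(t1)        # tensor(4.)
print(t1.dtype)  # torch.float32
```

Note that 4. (with the decimal point) creates a floating-point tensor; torch.tensor(4) would instead produce an integer tensor with dtype torch.int64.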

Let's define a vector and a matrix as tensors.
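A sketch of both cases (again, the variable names are my own):

```python
import torch

# A vector: a 1-dimensional tensor.
t2 = torch.tensor([1., 2, 3, 4])

# A matrix: a 2-dimensional tensor (3 rows, 2 columns).
t3 = torch.tensor([[5., 6],
                   [7, 8],
                   [9, 10]])

print(t2.dim())  # 1
print(t3.dim())  # 2
```

Because one element in each list is a float, PyTorch promotes the whole tensor to a floating-point dtype.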

Let's define a 3-d array and inspect its shape using the .shape property of the tensor, which shows that our tensor is of size 2 x 2 x 3. Understanding tensor sizes is really important in deep learning, since shape mismatches are a common source of errors.
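A sketch of a 3-d tensor with the size mentioned above:

```python
import torch

# A 3-dimensional tensor of size 2 x 2 x 3:
# 2 blocks, each containing 2 rows of 3 elements.
t4 = torch.tensor([
    [[1., 2, 3], [4, 5, 6]],
    [[7, 8, 9], [10, 11, 12]],
])

print(t4.shape)  # torch.Size([2, 2, 3])
```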

2.2 Tensor Operations and Gradients:

Let's combine arithmetic operations with tensors. We create three tensors here — x, w, and b — with the parameter requires_grad=True set on w and b. For tensors created with requires_grad=True, PyTorch can automatically compute derivatives with respect to them.

As expected, y is a tensor with the value 3 * 4 + 5 = 17. What makes PyTorch special is that we can automatically compute the derivative of y w.r.t. the tensors that have requires_grad set to True, i.e. w and b. To compute the derivatives, we call the .backward() method on our result y.

Since x does not have requires_grad set to True, x.grad is None.
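The full example can be sketched as follows:

```python
import torch

# x is a plain input; w and b are parameters we want gradients for.
x = torch.tensor(3.)
w = torch.tensor(4., requires_grad=True)
b = torch.tensor(5., requires_grad=True)

# Arithmetic on tensors: y = w * x + b = 4 * 3 + 5 = 17.
y = w * x + b

# Compute dy/dw and dy/db via autograd.
y.backward()

print(x.grad)  # None — x was created without requires_grad=True
print(w.grad)  # tensor(3.) — dy/dw = x
print(b.grad)  # tensor(1.) — dy/db = 1
```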

2.3 Interoperability with NumPy:

NumPy is a popular Python library for mathematical and scientific computing, and it is efficient at multi-dimensional array operations. PyTorch interoperates closely with NumPy — let's see how.

We can create an array using the NumPy library and convert it to a torch tensor using torch.from_numpy.

A torch tensor can be converted back to a NumPy array using the .numpy() method.
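Both conversions can be sketched as:

```python
import numpy as np
import torch

x = np.array([[1., 2], [3, 4]])

# NumPy array -> PyTorch tensor.
y = torch.from_numpy(x)
print(y.dtype)  # torch.float64 (matches NumPy's default float64)

# PyTorch tensor -> NumPy array.
z = y.numpy()
print(type(z))  # <class 'numpy.ndarray'>
```

Note that torch.from_numpy and .numpy() share the same underlying memory rather than copying it, so modifying one object also modifies the other.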

In my next post, I will discuss linear regression using PyTorch tensors.

You can learn more about tensors and tensor operations here: https://pytorch.org/docs/stable/tensors.html

Find the code here on github : https://github.com/Arun-purakkatt/Deep_Learning_Pytorch/blob/master/Pytorch_basics.ipynb

Stay connected : https://www.linkedin.com/in/arun-purakkatt-mba-m-tech-31429367/

Credits & References :

Inspired by the following resources:

PyTorch Tutorial for Deep Learning Researchers by Yunjey Choi

FastAI development notebooks by Jeremy Howard
