This blog post covers the basic tensor operations in PyTorch. We’ll go over adding and subtracting tensors, as well as multiplying and dividing them. We’ll also see how to work with tensors of different shapes.

## Introduction

Tensor operations are the basis for many of the computations done in PyTorch. In this tutorial, we’ll see how to perform some of the most common tensor operations, such as multiplication, addition, and transposition. We’ll also see some special ways of creating tensors, such as random and zero-filled tensors. Let’s get started!

## What are Tensors?

As we saw in the previous section, PyTorch is a powerful framework for creating and training neural networks. A key element of PyTorch is the tensor, which is similar to a NumPy array but can run on GPUs for increased performance. In this section, we’ll explore what tensors are and how they can be used in PyTorch.

Tensors are similar to NumPy arrays in that they represent (potentially high-dimensional) data. However, unlike NumPy arrays, tensors can be placed on GPUs for faster computation. In addition, PyTorch tracks the operations performed on tensors so that gradients can be computed automatically during training.
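As a quick illustration of the GPU point, moving a tensor between devices is a one-liner. Here is a minimal sketch that falls back to the CPU when no GPU is present:

```python
import torch

x = torch.ones(3, 4)

# Move the tensor to the GPU if one is available; otherwise keep it on the CPU
device = "cuda" if torch.cuda.is_available() else "cpu"
x = x.to(device)

print(x.device)
```

All subsequent operations on `x` then run on whichever device it lives on.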

Tensors are typically created with the torch.tensor factory function (torch.Tensor is the underlying class). For example, we can create a 2D tensor with 3 rows and 4 columns as follows:

```python
>>> import torch
>>> x = torch.tensor([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]])
>>> print(x)
tensor([[ 1,  2,  3,  4],
        [ 5,  6,  7,  8],
        [ 9, 10, 11, 12]])
```

## Tensor Operations

Tensors are the fundamental data structure in PyTorch that allows us to perform numerical computations on GPU. In this tutorial, we will go through some of the most common tensor operations in PyTorch.

Tensor operations in PyTorch can be broadly classified into two types:

– Point-wise operations: operations applied to each element of the tensor independently. Examples of point-wise operations include addition, subtraction, multiplication, and division.

– Reduction operations: operations that collapse the tensor along a given dimension. Examples of reduction operations include sum, mean, and max.

In addition to these two types, there are also other special tensor operations such as broadcasting, transposing, and indexing. We will cover all these different types of operations in this tutorial.
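The two categories can be seen side by side in a minimal example (using standard PyTorch functions):

```python
import torch

a = torch.tensor([[1.0, 2.0], [3.0, 4.0]])
b = torch.tensor([[10.0, 20.0], [30.0, 40.0]])

# Point-wise: applied to each pair of elements independently
print(a + b)          # tensor([[11., 22.], [33., 44.]])

# Reduction: collapses the tensor, either entirely or along a dimension
print(a.sum())        # tensor(10.)
print(a.mean(dim=0))  # tensor([2., 3.])
```

Note that a reduction can target the whole tensor (`a.sum()`) or just one dimension (`a.mean(dim=0)`).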

## Creating Tensors in PyTorch

PyTorch is a powerful tool for working with tensors. In this article, we’ll explore some of the most common tensor operations in PyTorch so that you can get started building deep learning models.

There are several ways to create tensors in PyTorch. The most common is the torch.tensor function, which takes a Python list or a NumPy array and creates a tensor from it. For example, we can create a 2D tensor with 3 rows and 4 columns like this:

```python
import torch

tensor = torch.tensor([[1, 2, 3, 4],
                       [5, 6, 7, 8],
                       [9, 10, 11, 12]])
```

If we want to create a tensor with random values, we can use the torch.rand function:

```python
tensor = torch.rand(3, 4)  # creates a 3x4 matrix with values drawn uniformly from [0, 1)
```
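Beyond torch.tensor and torch.rand, a few other factory functions come up constantly. A quick sketch of some common ones:

```python
import torch

zeros = torch.zeros(2, 3)       # 2x3 tensor filled with 0.0
ones = torch.ones(2, 3)         # 2x3 tensor filled with 1.0
steps = torch.arange(0, 10, 2)  # 1D tensor: tensor([0, 2, 4, 6, 8])

print(zeros)
print(ones)
print(steps)
```

These mirror their NumPy counterparts (`np.zeros`, `np.ones`, `np.arange`), which makes them easy to remember.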

## Indexing Tensors in PyTorch

In PyTorch, there are a few different ways to select subsets of elements from a tensor. These are the most common:

– `select(dim, index)`: returns a view of the original tensor sliced along the given dimension at the given index; the result shares storage with the original and has one fewer dimension.

– `narrow(dim, start, length)`: returns a view that shares the same storage as the original tensor, but covers only a contiguous range of elements along the given dimension.

– `index_select(dim, index)`: returns a new tensor containing only the elements at the specified indices along the given dimension.

More information on these methods can be found in the PyTorch documentation.
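The three methods can be compared on a small tensor; a minimal sketch:

```python
import torch

x = torch.arange(12).reshape(3, 4)
# x is: [[ 0,  1,  2,  3],
#        [ 4,  5,  6,  7],
#        [ 8,  9, 10, 11]]

print(x.select(0, 1))    # row 1: tensor([4, 5, 6, 7])
print(x.narrow(1, 1, 2)) # columns 1-2 of every row
print(x.index_select(0, torch.tensor([0, 2])))  # rows 0 and 2
```

Note that `select` and `narrow` return views into the same storage, while `index_select` copies data into a new tensor.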

## Slicing Tensors in PyTorch

Tensor operations in PyTorch are very similar to NumPy’s. In this section, we’ll see how to slice tensors in PyTorch.

To slice a tensor in PyTorch, you just need to specify the start and end indices of the desired slice. For example, if we have a 1D tensor with 10 elements, and we want to get the third through seventh element of the tensor, we would do the following:

```python
tensor[2:7]
```

This would return a 1D tensor with 5 elements, namely the third, fourth, fifth, sixth, and seventh elements of the original tensor.

We can also specify a step size when slicing tensors. For example, if we want to get every other element of a 1D tensor with 10 elements, we would do the following:

```python
tensor[::2]
```

This would return a 1D tensor with 5 elements: the first, third, fifth, seventh, and ninth element of the original tensor.
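Putting the slices above into a runnable example:

```python
import torch

t = torch.arange(10)  # tensor([0, 1, 2, ..., 9])

print(t[2:7])  # indices 2 through 6 -> tensor([2, 3, 4, 5, 6])
print(t[::2])  # every other element -> tensor([0, 2, 4, 6, 8])
print(t[-3:])  # negative indices work too -> tensor([7, 8, 9])
```

Note that, unlike NumPy, PyTorch only supports positive step sizes in slices (e.g., `t[::-1]` raises an error; use `torch.flip` instead).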

## Joining and Splitting Tensors in PyTorch

Tensors in PyTorch can be created and manipulated in a variety of ways. In this tutorial, we will see how to perform common tensor operations such as joining and splitting Tensors.

Joining Tensors

Tensors can be joined together using the torch.cat() function. This function takes in a list of Tensors and returns a single Tensor that is the concatenation of all the input Tensors.

Let’s say we have two tensors, A and B, with shapes (3,4) and (5,4) respectively. We can concatenate them along the first dimension using the following code:

```python
import torch

A = torch.randn(3, 4)
B = torch.randn(5, 4)
C = torch.cat([A, B], dim=0)

print(C.size())  # outputs: torch.Size([8, 4])
```

As we can see from the output, the new Tensor C has a size of (8,4), which is the concatenation of A and B along the first dimension. We can also concatenate along other dimensions by changing the value of dim . For example, if we want to concatenate A and B along the second dimension:

```python
import torch

A = torch.randn(3, 4)
B = torch.randn(3, 5)  # notice that B has a different shape!

# dim=1 means concatenate along the 2nd dimension; changing dim to any
# other value produces an error, because A and B have different shapes!
C = torch.cat([A, B], dim=1)

print(C.size())  # outputs: torch.Size([3, 9])
```

As we can see from the output, C now has size (3, 9). Notice that if we try to concatenate A and B along any dimension other than 1 (the second dimension), we get an error because their shapes differ in the other dimensions. So when using the torch.cat() function, make sure that all input Tensors have the same shape, except along the dimension in which you are concatenating!

Splitting Tensors

Just as we can join Tensors together using cat , we can also split them apart using the split function:

import torch

```python
import torch

A = torch.randn(16, 4)          # create a tensor of size 16x4
c, d = torch.split(A, 8)        # c is size 8x4; d is also size 8x4
e, f = torch.split(A, [6, 10])  # e is size 6x4; f is size 10x4
```

See how easy it is to split a Tensor into multiple smaller ones? This comes in handy all the time when you need to manipulate data in certain ways. Try playing around with different split sizes and the dim argument to get a feel for how this function works.

## Mathematical Operations on Tensors in PyTorch

Mathematical operations on tensors are an important part of PyTorch since we use them so often in mathematical modeling and machine learning. Let’s take a look at some of the most common tensor operations and how they work in PyTorch.

Tensor operations in PyTorch can be broadly categorized into three types: pointwise, reduction, and broadcasting.

Pointwise operations are those that operate on individual elements of a tensor, such as addition, subtraction, multiplication, and division. In contrast, reduction operations collapse a tensor to a single value by performing an operation such as summation or multiplication across all the elements of the tensor. Broadcasting is a special type of operation that allows us to perform mathematical operations on two tensors of different sizes.

1. Pointwise Operations

Pointwise operations are applied element-wise to two input tensors of the same size. That is, if we have two tensors A and B with the same size (i.e., A.size() == B.size()), then the pointwise operation C = f(A, B) produces a third tensor C with the same size as A and B, where each element c_ij of C is obtained by applying the function f to the corresponding elements a_ij and b_ij of A and B:

c_ij = f(a_ij, b_ij) # for all i, j

Some examples of pointwise operations are element-wise addition, subtraction, multiplication (also known as Hadamard product), division, minimum, maximum, etc. In PyTorch, we can perform all these operations using torch.* functions. For example:

```python
import torch

# Element-wise addition
print(torch.add(torch.ones(5), torch.ones(5)))

# Element-wise subtraction
print(torch.sub(torch.ones(5), torch.ones(5)))

# Element-wise multiplication (Hadamard product)
print(torch.ones(5) * torch.ones(5))

# Element-wise division
print(torch.ones(5) / torch.ones(5))

# Element-wise minimum; note that both inputs must have the same size!
print(torch.min(6 * torch.ones(2, 3), 5 * torch.ones(2, 3)))
```

2. Broadcasting

Broadcasting is often used to perform mathematical operations on two tensors of different sizes without having to explicitly resize them first using methods like view(). In general terms, broadcasting lets us perform arithmetic on two tensors even when their shapes do not match exactly: PyTorch automatically expands one or both tensors until their shapes match, so the operation can be performed element-wise as usual. It is important to note that both tensors must have compatible shapes for broadcasting to work: comparing the shapes from the trailing (rightmost) dimension, each pair of dimensions must be equal, or one of them must be 1 (or missing). An error will be raised otherwise. If you’re unsure whether two tensors can be broadcast together, PyTorch follows NumPy’s broadcasting rules, so check out NumPy’s documentation on broadcasting. Broadcasting comes in handy when vectorizing your code, for example operating on whole rows or columns at once instead of iterating over each element separately, or performing operations between scalars and vectors or matrices (e.g., adding a constant to every element of an array).
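A minimal sketch of broadcasting in action, combining tensors of three different shapes:

```python
import torch

matrix = torch.ones(3, 4)                  # shape (3, 4)
row = torch.tensor([1.0, 2.0, 3.0, 4.0])   # shape (4,)
column = torch.tensor([[1.0], [2.0], [3.0]])  # shape (3, 1)

# The row is broadcast across all 3 rows of the matrix
print(matrix + row)

# A scalar broadcasts against any shape
print(matrix * 10)

# The column is broadcast across all 4 columns of the matrix
print(matrix + column)
```

In each case the result has shape (3, 4); the smaller operand is virtually repeated along the dimensions where its size is 1 (or missing), without copying any data.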

## Linear Algebra Operations on Tensors in PyTorch

Tensors in PyTorch support a variety of linear algebra operations. Let’s start with one dimensional tensors (vectors). We can perform addition/subtraction and multiplication/division operations on vectors just like we do with scalars:

```python
import torch

# Addition/subtraction
u = torch.tensor([1, 0])
v = torch.tensor([0, 1])

w = u + v
print(w)  # prints tensor([1, 1])

z = u - v
print(z)  # prints tensor([1, -1])

# Multiplication/division
# Note: torch.tensor infers the dtype from its input, so these are integer
# (int64) tensors; pass dtype=torch.float32 for floats. See the documentation:
# https://pytorch.org/docs/stable/tensors.html?highlight=long#torch-tensor-creation-ops
u = torch.tensor([1, 2])
v = torch.tensor([3, 4])

w = u * v  # element-wise multiplication (Hadamard product); results in tensor([3, 8])
print(w)
```

## Conclusion

In this PyTorch tutorial, we have seen how to perform various tensor operations such as creating, initializing, reshaping, concatenating, splitting, indexing and so on. We also looked at the available methods and attributes for performing these operations.
