Matrix multiplication is a binary operation in mathematics that produces a matrix from two matrices. It is most commonly used in linear algebra. For matrix multiplication to be defined, the number of columns in the first matrix must equal the number of rows in the second matrix. The resulting matrix, known as the matrix product, has as many rows as the first matrix and as many columns as the second. The product of matrices A and B is denoted AB.

In 1812, the French mathematician Jacques Philippe Marie Binet described matrix multiplication as a way to represent the composition of the linear maps that matrices represent. Matrix multiplication is thus a fundamental tool of linear algebra, with numerous applications in many areas of mathematics, as well as in statistics, physics, economics, and engineering. Computing matrix products is a central operation in all computational applications of linear algebra. Let us look at matrix multiplication and matrices in more detail.


**What are Matrices?**

A matrix is a rectangular array of numbers arranged in rows and columns. In linear algebra, matrices and determinants are used to solve systems of linear equations, for example by applying Cramer’s rule to a set of non-homogeneous linear equations. Determinants are defined only for square matrices. A matrix whose determinant is zero is called singular, and one whose determinant is one is called unimodular. For a system of equations to have a unique solution, the coefficient matrix must be nonsingular, that is, its determinant must be nonzero.
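As a small worked illustration of Cramer’s rule (the system and numbers here are hypothetical, chosen just for the example), a 2 × 2 system can be solved by replacing one column of the coefficient matrix at a time with the right-hand side:

```python
# Solve the 2x2 system
#   2x + 1y = 5
#   1x + 3y = 10
# by Cramer's rule: x = det(A_x)/det(A), y = det(A_y)/det(A),
# where A_x (A_y) is A with its first (second) column replaced by b.

def det2(m):
    """Determinant of a 2x2 matrix given as [[a, b], [c, d]]."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

A = [[2, 1], [1, 3]]
b = [5, 10]

d = det2(A)  # 2*3 - 1*1 = 5; nonzero, so A is nonsingular
x = det2([[b[0], A[0][1]], [b[1], A[1][1]]]) / d  # first column replaced by b
y = det2([[A[0][0], b[0]], [A[1][0], b[1]]]) / d  # second column replaced by b

print(x, y)  # 1.0 3.0
```

Substituting back confirms the answer: 2(1) + 1(3) = 5 and 1(1) + 3(3) = 10.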

**History of Matrix Multiplication**

The matrix has a long history of use in the solution of linear equations. Until the 1800s, matrices were known as arrays. In 1850, James Joseph Sylvester coined the term “matrix” (Latin for “womb,” derived from mater, mother). He saw a matrix as an object that gives rise to a number of determinants, known today as minors: determinants of smaller matrices derived from the original one by removing columns and rows. In 1913, the English mathematician Cullis was the first to use the modern bracket notation for matrices.

Matrices make it possible to write and work with multiple linear equations, also known as a system of linear equations, in a compact form. When matrices are viewed as representations of linear transformations, also known as linear maps, their essential properties become apparent.

**Definition**

When two matrices are multiplied, the result is a matrix: matrix multiplication is a binary operation whose output is also a matrix. In linear algebra, matrix multiplication is possible only when the matrices are compatible. Unlike ordinary arithmetic multiplication, matrix multiplication is in general not commutative: the product of matrices A and B, denoted AB, need not equal BA, i.e., AB ≠ BA in general. As a result, the order of multiplication is critical when multiplying matrices. To explore this topic further, you may visit Cuemath.com.
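A quick sketch of non-commutativity (the two matrices below are illustrative choices, not from the original text): multiplying the same pair in the two possible orders gives different results.

```python
# Two 2x2 matrices whose products differ depending on order,
# demonstrating that AB != BA in general.

def matmul(A, B):
    """Multiply matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]  # permutation matrix

print(matmul(A, B))  # [[2, 1], [4, 3]]  (B on the right swaps A's columns)
print(matmul(B, A))  # [[3, 4], [1, 2]]  (B on the left swaps A's rows)
```

Here B is a permutation matrix, so the two orders act differently: one swaps columns of A, the other swaps its rows.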

Given two matrices A and B, the product of matrix A and matrix B is written AB. That is, the product of any m × n matrix A with an n × p matrix B is a matrix C of order m × p.
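The entry-wise rule behind this definition can be sketched directly: entry (i, j) of C = AB is the sum of products of row i of A with column j of B. A minimal Python sketch (the matrices are example values, not from the original text):

```python
# For an m x n matrix A and an n x p matrix B, the product C = AB is the
# m x p matrix with entries C[i][j] = sum over k of A[i][k] * B[k][j].

def mat_mult(A, B):
    m, n = len(A), len(A[0])
    n2, p = len(B), len(B[0])
    assert n == n2, "columns of A must equal rows of B"
    C = [[0] * p for _ in range(m)]
    for i in range(m):
        for j in range(p):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
    return C

# A is 2 x 3 and B is 3 x 2, so C = AB is 2 x 2.
A = [[1, 2, 3], [4, 5, 6]]
B = [[7, 8], [9, 10], [11, 12]]
print(mat_mult(A, B))  # [[58, 64], [139, 154]]
```

For example, the (1, 1) entry is 1·7 + 2·9 + 3·11 = 58, the dot product of A's first row with B's first column.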

**What Is Matrix Compatibility?**

Two matrices A and B are said to be compatible if the number of columns in A equals the number of rows in B. That is, if A is a matrix of order m×n and B is a matrix of order n×p, matrices A and B are compatible.
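The compatibility rule is simple enough to check mechanically. A small helper (hypothetical, for illustration only) compares the column count of the first matrix with the row count of the second:

```python
# Compatibility rule: A (m x n) and B (n x p) can be multiplied
# only when the number of columns of A equals the number of rows of B.

def compatible(A, B):
    return len(A[0]) == len(B)

A = [[1, 2, 3], [4, 5, 6]]    # 2 x 3
B = [[1, 0], [0, 1], [1, 1]]  # 3 x 2

print(compatible(A, B))  # True: A has 3 columns, B has 3 rows
print(compatible(A, A))  # False: A has 3 columns but only 2 rows
```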

**Key Takeaways on Matrix Multiplication**

- Matrix multiplication requires that the given matrices be compatible.
- The following rule can be used to determine the order of a product matrix:
- If A is a matrix of order m × n and B is a matrix of order n × p, then the product matrix has order m × p.
- Matrix multiplication is carried out by multiplying the rows of the first matrix with the columns of the second.
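The order rule from the takeaways can be captured in a small helper (a hypothetical sketch, with example shapes chosen for illustration):

```python
# Order rule: an m x n matrix times an n x p matrix gives an m x p matrix;
# any other pairing of dimensions is incompatible.

def product_order(shape_a, shape_b):
    m, n = shape_a
    n2, p = shape_b
    if n != n2:
        raise ValueError("incompatible: columns of A must equal rows of B")
    return (m, p)

print(product_order((2, 3), (3, 4)))  # (2, 4)
```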