A brief explanation of my research

A few of you have asked about my research recently. Since it’s difficult to explain verbally, here is a handy guide.

A matrix is an array of numbers. Matrices are used in a whole range of fields, such as geometry (they can describe geometric transformations such as rotations), physics, astronomy, quantum mechanics, and probability (the game of Snakes and Ladders can be described entirely by a matrix), among others.

The below is an example of a matrix. Because it has 3 rows and 3 columns, it is known as a 3 x 3 matrix; more generally, a square matrix with n rows and n columns is an n x n matrix.

[Images: two examples of a matrix, and two examples of arrays that are not matrices]
There will be occasions where you might want to multiply two matrices together. For example, if you want a matrix that represents both a rotation and a reflection, you can multiply the rotation matrix by a reflection matrix (computer games and Hollywood blockbusters involve a lot of these calculations).
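Here is a minimal sketch of that composition, assuming (purely for illustration) a 90-degree rotation and a reflection in the x-axis:

```python
import numpy as np

theta = np.pi / 2  # rotate 90 degrees anticlockwise (an illustrative choice)
rotation = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
reflection = np.array([[1,  0],   # reflect in the x-axis
                       [0, -1]])

# Multiplying the two matrices gives one matrix that applies
# the reflection first, then the rotation.
combined = rotation @ reflection

point = np.array([1.0, 0.0])
print(combined @ point)  # the point after both transformations
```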

There is a standard algorithm for doing this multiplication, which is easy-ish to learn, especially if you are a computer. However, for large values of n, n x n matrices become very inefficient to multiply this way, taking about n^3 multiplications (the additions aren’t so big a deal), so it is prudent to look for more efficient algorithms, i.e. to reduce the number of multiplications as much as possible.
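To make the n^3 count concrete, here is a minimal sketch of the standard algorithm in plain Python: each of the n^2 entries of the answer takes n multiplications to compute.

```python
def multiply(A, B):
    """Standard schoolbook multiplication of two n x n matrices."""
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):          # for each row of A...
        for j in range(n):      # ...and each column of B...
            for k in range(n):  # ...take a dot product: n multiplications
                C[i][j] += A[i][k] * B[k][j]
    return C                    # n * n * n multiplications in total
```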

Volker Strassen discovered a way of doing this in 1969, reducing it to about n^2.81 multiplications using an ingenious method of chopping each matrix up into smaller blocks and re-arranging the pieces. Various other methods were introduced over the years, and the value of this exponent, known as ω, came down as shown in the chart below.

[Chart: the best known upper bound on the matrix multiplication exponent ω over time]
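To give a flavour of Strassen’s trick, here is a minimal sketch (assuming, for simplicity, that n is a power of 2; the general case needs padding, which I’ve left out). Each 2 x 2 block product is done with 7 smaller multiplications instead of the obvious 8, and applying this recursively gives roughly n^2.81 multiplications overall, since 2.81 ≈ log2(7).

```python
import numpy as np

def strassen(A, B):
    """Strassen multiplication; assumes n x n inputs with n a power of 2."""
    n = A.shape[0]
    if n == 1:
        return A * B
    h = n // 2  # split each matrix into four h x h blocks
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    # Seven block multiplications instead of the obvious eight
    M1 = strassen(A11 + A22, B11 + B22)
    M2 = strassen(A21 + A22, B11)
    M3 = strassen(A11, B12 - B22)
    M4 = strassen(A22, B21 - B11)
    M5 = strassen(A11 + A12, B22)
    M6 = strassen(A21 - A11, B11 + B12)
    M7 = strassen(A12 - A22, B21 + B22)
    # Recombine the pieces into the four blocks of the answer
    top = np.hstack([M1 + M4 - M5 + M7, M3 + M5])
    bottom = np.hstack([M2 + M4, M1 - M2 + M3 + M6])
    return np.vstack([top, bottom])
```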

In 1990, Coppersmith and Winograd found that they could take an algorithm and square it (that is, apply it twice) to get a pretty big improvement, down to 2.3754.

I decided to cube it; unfortunately this yielded no improvement. However, I had a hunch that this was because something about combining a power-1 and a power-2 algorithm was holding it back. So, using some new techniques, I raised it to the fourth power, which did indeed yield the marginal improvement shown above (to 2.3736). A year later, Virginia Williams used some of the short-cuts I used (and raised it to the 8th power) to get it down to 2.3727, and she is currently in the process of reducing it further, which is exciting! As you can tell, there is probably a limit to how far we can get with this method. The lower limit is n^2: it can’t get any lower than that, as that would mean fewer operations than there are elements in the matrices. It has been shown that we can reach this if certain conditions can be met (involving all kinds of mathematical techniques).

Sadly, all this is very ivory tower: there is currently neither a need nor a capability to apply these algorithms, as they only become efficient for huge values of n, far beyond anything we use at the moment.

I remain hopeful, however, that there will be a use for it some day! I hope that made sense.
