## Introduction and description of problem

We begin with a few basic definitions of terms we will use, one of them being informal.

Given a square matrix A, we denote its (i,j) element by a_{ij}, for all i,j.

We say that A is a lower unit triangular matrix if a_{ii}=1 for all i and a_{ij}=0 for all i,j with i<j.

Furthermore, we say that A is a tridiagonal matrix, if a_{ij}=0 for all i,j with |i-j|>1.
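As a small illustration of the two definitions above, here is a minimal NumPy sketch (the function names are mine, chosen for this note) that checks whether a given matrix satisfies each one:

```python
import numpy as np

def is_lower_unit_triangular(A):
    # a_ii = 1 for all i, and a_ij = 0 for all i, j with i < j
    return bool(np.all(np.diag(A) == 1) and np.all(np.triu(A, 1) == 0))

def is_tridiagonal(A):
    # a_ij = 0 for all i, j with |i - j| > 1,
    # i.e. A equals its own tridiagonal band
    return bool(np.all(A == np.triu(np.tril(A, 1), -1)))
```

For instance, the identity matrix satisfies both predicates, while a matrix with a non-zero entry above the diagonal fails the first.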

Informally, we say that A is sparse, if it has just a few non-zero elements.

The term "just a few" is the one we leave informal here. One reader may treat it as a constant fraction of the elements (e.g. 1/10); another may treat it as a fraction of 1/n of them, where n is the size of the (square) matrix A.
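To make the two readings concrete, here is a small hypothetical example that computes the density of a matrix (the fraction of its entries that are non-zero) and compares it against both thresholds:

```python
import numpy as np

A = np.array([[4., 0., 0., 1.],
              [0., 3., 0., 0.],
              [0., 0., 2., 0.],
              [1., 0., 0., 5.]])
n = A.shape[0]
density = np.count_nonzero(A) / A.size  # 6 non-zeros out of 16 entries

sparse_constant = density <= 1 / 10  # "constant fraction" reading
sparse_strict = density <= 1 / n     # "1/n of the elements" reading
```

Here the density is 0.375, so this particular A would not count as sparse under either reading.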

One question that may arise now is: Why do we like sparse matrices? What kind of advantages do they offer?

Well, here are just two such advantages:

- Mathematical calculations involving sparse matrices are in general simpler than those involving non-sparse (dense) matrices. For instance, it's not uncommon that the sum of two sparse matrices is itself sparse, although perhaps less sparse than the two summands.

- In computing, there are ways of storing sparse matrices in memory efficiently (see section 2.1 in [1]).
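As a sketch of one such scheme (simplified coordinate storage; the helper name is mine and not taken from [1]): only the non-zero entries are kept, each as a (row, column, value) triple, so memory usage grows with the number of non-zeros rather than with n^2.

```python
import numpy as np

def to_coordinate_form(A):
    # keep only the non-zero entries of A as (row, col, value) triples
    rows, cols = np.nonzero(A)
    return [(int(i), int(j), float(A[i, j])) for i, j in zip(rows, cols)]
```

For a diagonal matrix of size n, this stores n triples instead of n^2 entries.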

Now, to the main research topic: given a symmetric matrix A, it may be possible to factor it in the form A=LTL^{t}, where L is a lower unit triangular matrix and T is a tridiagonal matrix.

There is an algorithm that computes such a factorization, the Parlett-Reid algorithm, along with a recursive variant of it [2].
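As a hedged sketch (not the reference implementation from [2]), here is a plain, non-pivoting variant of the idea in NumPy. It assumes every subdiagonal "pivot" it meets is non-zero; the full algorithm uses pivoting to handle the general case:

```python
import numpy as np

def parlett_reid_no_pivot(A):
    """Reduce symmetric A to tridiagonal T with A = L T L^t.

    Simplified sketch: assumes each subdiagonal pivot is non-zero,
    so no pivoting (row/column exchanges) is performed.
    """
    A = np.array(A, dtype=float)  # work on a copy
    n = A.shape[0]
    L = np.eye(n)
    for k in range(n - 2):
        pivot = A[k + 1, k]
        if pivot == 0:
            raise ValueError("zero pivot: this sketch cannot continue")
        # multipliers that zero out column k below the subdiagonal
        m = A[k + 2:, k] / pivot
        L[k + 2:, k + 1] = m
        # symmetric congruence update A <- M A M^t, where M subtracts
        # m[i] times row k+1 from row k+2+i
        A[k + 2:, :] -= np.outer(m, A[k + 1, :])
        A[:, k + 2:] -= np.outer(A[:, k + 1], m)
    T = np.triu(np.tril(A, 1), -1)  # keep only the tridiagonal band
    return L, T
```

Note that L collects the inverses of the elementary transformations, so it comes out lower unit triangular, with the multipliers of step k stored in column k+1.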

This brings us to the actual research question: given that A is sparse, how can we factor it in such a way that L is sparse, assuming a factorization exists?

## Progress and results

So, as suggested by my faculty, I got hold of a book [1] and began reading, in order to build some background.

After some time, not yet having a clear way of approaching the question, I implemented a few functions for experimenting, the main candidate being the recursive Parlett-Reid algorithm [2]. A summary of what I learned while implementing the algorithm follows.

Recall that we want a factorization of the form A=LTL^{t}, as mentioned above. During the execution of the algorithm, we essentially fill in the unknown elements of L column by column (the order depends on the implementation).

At some stages there is a unique column to fill in, while at others we cannot continue and must halt (for instance, when no factorization exists).

Similarly, it is also possible that, at a certain stage, we have freedom of choice for a specific column of L. This is where it can be wise to try setting it to a column of zeros, excluding the unit diagonal entry. After all, our goal is to make the matrix L in the factorization as sparse as possible.

What I have not found is an efficient algorithm known to always return a factorization A = LTL^{t} with the sparsest possible L (assuming one exists). What the observations above do provide is a way to tweak a known algorithm in order to try to obtain a sparse L.