## Rethinking linear algebra part two: ellipsoids in data science

### 1 Our expedition of eigenvectors still continues

This article is still going to be about eigenvectors and PCA, and it still will not cover LDA (linear discriminant analysis). Here I would like you to build more organic links between data science ideas and eigenvectors.

In the second article, we have covered the following points:

• You can visualize linear transformations with matrices by calculating displacement vectors, and those vectors usually look like they are swirling.
• Diagonalization means finding a direction in which the displacement vectors do not swirl, and that is equal to finding a new axis/basis where you can describe the linear transformations more straightforwardly. But we have to consider the diagonalizability of the matrices.
• In linear dimension reduction such as PCA or LDA, we mainly use types of matrices called positive definite or positive semidefinite matrices.

In the last article we have seen the following points:

• PCA is an algorithm of calculating orthogonal axes along which data “swell” the most.
• PCA is equivalent to calculating a new orthonormal basis for the data where the covariance between components is zero.
• You can reduce the dimension of the data in the new coordinate system by ignoring the axes corresponding to small eigenvalues.
• Covariance matrices enable linear transformations composed of rotation and expansion/contraction of vectors.

I emphasized that the axes are more important than the surfaces of the high dimensional ellipsoids, but in this article let’s focus more on the surface of ellipsoids, or rather on general quadratic curves. After also seeing how to draw ellipsoids on data, you will see the following points about PCA and eigenvectors.

• Covariance matrices are real symmetric matrices, and they are also positive semidefinite. That means you can always diagonalize covariance matrices, and their eigenvalues are all equal to or greater than 0.
• PCA is equivalent to finding the axes of quadratic curves along which the gradients are biggest. The values of the quadratic curves increase the most in those directions, and that means those directions describe a great deal of information about the data distribution.
• Intuitively dimension reduction by PCA is equal to fitting a high dimensional ellipsoid on data and cutting off the axes corresponding to small eigenvalues.

Even if you already understand PCA to some extent, I hope this article provides you with deeper insight into PCA, and at least after reading this article, I think you will be more or less able to visually control eigenvectors and ellipsoids with the Numpy and Matplotlib libraries.

*Let me first introduce some mathematical facts and how I denote them throughout this article. If you are allergic to mathematics, take it easy or please go back to my former articles.

• Any quadratic curve can be denoted as $\boldsymbol{x}^T A\boldsymbol{x} + 2\boldsymbol{b}^T\boldsymbol{x} + s = 0$, where $\boldsymbol{x}\in\mathbb{R}^D$, $A\in\mathbb{R}^{D\times D}$, $\boldsymbol{b}\in\mathbb{R}^D$, and $s\in\mathbb{R}$.
• When I want to clarify the dimensions of the variables of quadratic curves, I denote the parameters as $A_D$, $\boldsymbol{b}_D$.
• If a matrix $A$ is a real symmetric matrix, there exists a rotation matrix $U$ such that $U^T A U = \Lambda$, where $\Lambda = \mathrm{diag}(\lambda_1, \dots, \lambda_D)$ and $U = (\boldsymbol{u}_1, \dots, \boldsymbol{u}_D)$. $\boldsymbol{u}_1, \dots, \boldsymbol{u}_D$ are eigenvectors corresponding to $\lambda_1, \dots, \lambda_D$ respectively.
• PCA corresponds to the case of diagonalizing $A$ where $A$ is the covariance matrix of certain data. When I want to clarify that $A$ is a covariance matrix, I denote it as $A = \Sigma$.
• Importantly, covariance matrices $\Sigma$ are positive semidefinite and real symmetric, which means you can always diagonalize $\Sigma$, and none of their eigenvalues can be lower than 0.

*In the last article, I denoted the covariance matrix of data as $S$, based on Pattern Recognition and Machine Learning by C. M. Bishop.

*Sooner or later you are going to see that I am explaining basically the same ideas from different points of view, using the topic of PCA. However, I believe they are all important when you learn linear algebra for data science or machine learning. Even if you have not learnt linear algebra yet, or if you have to teach it, I recommend you to first review the idea of diagonalization, as in the second article. And you should be conscious that, in the context of machine learning or data science, only a very limited type of matrices are important, which I have been explaining throughout this article.

### 2 Rotation or projection?

In this section I am going to talk about basic stuff found in most textbooks on linear algebra. In the last article, I mentioned that if $A$ is a real symmetric matrix, you can diagonalize it with a rotation matrix $U$, such that $U^T A U = \Lambda$, where $\Lambda = \mathrm{diag}(\lambda_1, \dots, \lambda_D)$. I also explained that PCA is a case where $A = \Sigma$, that is, $A$ is the covariance matrix of certain data. $\Sigma$ is known to be positive semidefinite and real symmetric. Thus you can always diagonalize $\Sigma$, and none of its eigenvalues can be lower than 0.

I think we first need to clarify the difference between rotation and projection. In order to visualize the ideas, let’s consider the case of $D = 2$. Assume that you have got an orthonormal rotation matrix $U$ which diagonalizes $A$. In the last article I said diagonalization is equivalent to finding new orthogonal axes formed by eigenvectors, and in the case of this section you got a new orthonormal basis $(\boldsymbol{u}_1, \boldsymbol{u}_2)$, which is in red in the figure below. Projecting a point $\boldsymbol{x}$ on the new orthonormal basis is simple: you just have to multiply $\boldsymbol{x}$ with $U^T$. Let $U^T\boldsymbol{x}$ be $\boldsymbol{x}'$, then $\boldsymbol{x}' = (\boldsymbol{u}_1^T\boldsymbol{x}, \boldsymbol{u}_2^T\boldsymbol{x})^T$. You can see that the components of $\boldsymbol{x}$ are projected on $\boldsymbol{u}_1$ and $\boldsymbol{u}_2$ respectively, and the left side of the figure below shows the idea. When you replace the original orthonormal basis with $(\boldsymbol{u}_1, \boldsymbol{u}_2)$ as in the right side of the figure below, you can comprehend the projection as a rotation from $\boldsymbol{x}$ to $\boldsymbol{x}'$ by the rotation matrix $U^T$.

Next, let’s see what rotation is. In case of rotation, you should imagine that you rotate the point $\boldsymbol{x}$ in the same coordinate system, rather than projecting it to another coordinate system. You can rotate $\boldsymbol{x}$ by multiplying it with $U$. This rotation looks like the figure below.

In the initial position, the edges of the cube are aligned with the three orthogonal black axes $(x_1, x_2, x_3)$, with one corner of the cube located at the origin of those axes. The purple dot denotes the corner of the cube directly opposite the origin corner. The cube is rotated in three dimensions, with the origin corner staying fixed in place. After the rotation with a pivot at the origin, the edges of the cube are aligned with a new set of orthogonal axes $(\boldsymbol{u}_1, \boldsymbol{u}_2, \boldsymbol{u}_3)$, shown in red. You might understand that more clearly with an equation: $U\boldsymbol{x} = x_1\boldsymbol{u}_1 + x_2\boldsymbol{u}_2 + x_3\boldsymbol{u}_3$. In short this rotation means you keep the relative position of $\boldsymbol{x}$, I mean its coordinates $(x_1, x_2, x_3)$, in the new orthonormal basis. In this article, let me call this a "cube rotation."

The discussion above can be generalized to spaces with dimensions higher than 3. When $U$ is a $D\times D$ orthonormal matrix and $\boldsymbol{x}\in\mathbb{R}^D$ a vector, you can project $\boldsymbol{x}$ to $U^T\boldsymbol{x}$ or rotate it to $U\boldsymbol{x}$, where $U = (\boldsymbol{u}_1, \dots, \boldsymbol{u}_D)$ and $U^T\boldsymbol{x} = (\boldsymbol{u}_1^T\boldsymbol{x}, \dots, \boldsymbol{u}_D^T\boldsymbol{x})^T$. In other words $U(U^T\boldsymbol{x}) = \boldsymbol{x}$, which means you can rotate $U^T\boldsymbol{x}$ back to the original point with the rotation matrix $U$.
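A quick numerical check of this relation (a sketch with NumPy; the rotation angle and the point $\boldsymbol{x}$ are my own arbitrary choices):

```python
import numpy as np

# A 2D rotation matrix U: its columns form an orthonormal basis.
theta = np.pi / 3
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

x = np.array([1.0, 2.0])

projected = U.T @ x   # coordinates of x in the new basis (projection)
rotated = U @ x       # x rotated within the original coordinate system

# U undoes U^T: rotating the projected point back recovers x.
restored = U @ projected
print(np.allclose(restored, x))  # True
```

The same check works in any dimension, since $U^T U = U U^T = I$ holds for every orthonormal matrix.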

I think you at least saw that rotation and projection are basically the same, and that it is only a matter of how you look at the coordinate systems. But I would say the idea of projection is more important throughout this article.

Let’s consider a function $f(\boldsymbol{x}) = \boldsymbol{x}^T A\boldsymbol{x}$, where $A$ is a real symmetric matrix. The distribution of $f(\boldsymbol{x})$ is a family of quadratic curves whose center point covers the origin, and it is known that you can express this distribution in a much simpler way using eigenvectors. When you project this function on the eigenvectors of $A$, that is when you substitute $U\boldsymbol{x}$ for $\boldsymbol{x}$, you get $f(U\boldsymbol{x}) = (U\boldsymbol{x})^T A (U\boldsymbol{x}) = \boldsymbol{x}^T U^T A U\boldsymbol{x} = \boldsymbol{x}^T\Lambda\boldsymbol{x} = \lambda_1 x_1^2 + \cdots + \lambda_D x_D^2$. You can always diagonalize real symmetric matrices, so the formula implies that the shapes of quadratic curves largely depend on eigenvectors. We are going to see this in detail in the next section.

*$\boldsymbol{x}^T\boldsymbol{y}$ denotes the inner product of $\boldsymbol{x}$ and $\boldsymbol{y}$.

*We are going to see details of the shapes of quadratic “curves” or “functions” in the next section.
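The substitution above can be verified numerically (a sketch with NumPy; the symmetric matrix $A$ and the point are arbitrary examples of mine):

```python
import numpy as np

# A real symmetric matrix and its eigendecomposition U^T A U = Lambda.
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
lam, U = np.linalg.eigh(A)   # eigenvalues ascending, columns of U orthonormal

x = np.array([0.7, -1.2])

# With x = U x', the value x^T A x equals lambda_1 x_1'^2 + lambda_2 x_2'^2.
x_prime = U.T @ x
lhs = x @ A @ x
rhs = np.sum(lam * x_prime**2)
print(np.isclose(lhs, rhs))  # True
```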

To be exact, you cannot naively multiply $U^T$ or $U$ for rotation. Let’s take a part of the data I showed in the last article as an example. In the figure below, I projected data on the basis $(\boldsymbol{u}_1, \boldsymbol{u}_2, \boldsymbol{u}_3)$.

You might have noticed that you cannot do a “cube rotation” in this case. If you make the coordinate system with your left hand, like you might have done in science classes in school to learn Fleming’s rule, you would soon realize that the coordinate systems in the figure above do not match. You need to flip the direction of one axis to match them.

Mathematically, you have to consider the determinant of the rotation matrix $U$. You can do a "cube rotation" when $\det U = 1$, and in the case above $\det U$ was $-1$, so you needed to flip one axis to make the determinant $1$. In the example in the figure below, you can match the bases. This can also be generalized to higher dimensions, but that is beyond the scope of this article series. If you are really interested, you should prepare some coffee and snacks, textbooks on linear algebra, and some weekends.
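The determinant check can be sketched with NumPy (the covariance-like matrix here is an arbitrary example of mine; `eigh` may return an eigenvector matrix with determinant $-1$):

```python
import numpy as np

# Eigenvectors of a symmetric matrix form an orthogonal matrix, but its
# determinant can be -1 (a reflection) instead of +1 (a proper rotation).
Sigma = np.array([[2.0, 0.8],
                  [0.8, 1.0]])
lam, U = np.linalg.eigh(Sigma)

if np.linalg.det(U) < 0:
    U[:, 0] *= -1.0   # flipping one axis makes the determinant +1

print(np.isclose(np.linalg.det(U), 1.0))  # True
```

Flipping the sign of an eigenvector keeps it an eigenvector, so the diagonalization still holds after the flip.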

When you want to make general ellipsoids in a 3d space on Matplotlib, you can take advantage of rotation matrices. You first make a simple ellipsoid symmetric about the xyz axes using polar coordinates, and then you can rotate the whole ellipsoid with rotation matrices. I made some simple modules for drawing ellipsoids. If you put in a rotation matrix $U$ which diagonalizes the covariance matrix $\Sigma$ of the data and a list of three radii $(\sqrt{\lambda_1}, \sqrt{\lambda_2}, \sqrt{\lambda_3})$, you can rotate the original ellipsoid so that it fits the data well.
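The procedure can be sketched with NumPy alone; my modules are not reproduced here, and the function name `ellipsoid_mesh` and the matrix `Sigma` below are my own illustrative assumptions. The returned arrays can be passed to Matplotlib’s `plot_surface`:

```python
import numpy as np

def ellipsoid_mesh(radii, U, n=30):
    """Surface mesh of an ellipsoid with the given radii, rotated by U."""
    theta = np.linspace(0, np.pi, n)
    phi = np.linspace(0, 2 * np.pi, n)
    theta, phi = np.meshgrid(theta, phi)
    # Axis-aligned ellipsoid in polar (spherical) coordinates.
    pts = np.stack([radii[0] * np.sin(theta) * np.cos(phi),
                    radii[1] * np.sin(theta) * np.sin(phi),
                    radii[2] * np.cos(theta)])       # shape (3, n, n)
    # Rotate every surface point with the rotation matrix U.
    return np.einsum('ij,jkl->ikl', U, pts)

# Fit to a covariance matrix: radii sqrt(lambda_i), rotation U.
Sigma = np.array([[4.0, 1.0, 0.0],
                  [1.0, 2.0, 0.5],
                  [0.0, 0.5, 1.0]])
lam, U = np.linalg.eigh(Sigma)
X, Y, Z = ellipsoid_mesh(np.sqrt(lam), U)
# e.g. ax.plot_surface(X, Y, Z) on a 3d axis draws the fitted ellipsoid.
```

Every point of this mesh satisfies $\boldsymbol{x}^T\Sigma^{-1}\boldsymbol{x} = 1$, which is the surface we fit to the data later in this article.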

### 3 Types of quadratic curves.

*This article might look like mathematical writing, but I would say it is more about computer science. Please tolerate some inaccuracy in terms of mathematics. I gave priority to visualizing the necessary mathematical ideas in my article series. If you are not sure about details, please let me know.

In linear dimension reduction, or at least in this article series, you mainly have to consider ellipsoids. However, ellipsoids are just one type of quadratic curve. In the last article, I mentioned that when the center of a $D$ dimensional ellipsoid is the origin of a normal coordinate system, the formula of the surface of the ellipsoid is as follows: $\boldsymbol{x}^T A\boldsymbol{x} = 1$, where $A$ satisfies certain conditions. To be concrete, when $\boldsymbol{x}^T A\boldsymbol{x} = 1$ is the surface of an ellipsoid, $A$ has to be diagonalizable and positive definite.

*Real symmetric matrices are diagonalizable, and positive definite matrices have only positive eigenvalues. Covariance matrices $\Sigma$, whose displacement vectors I visualized in the last two articles, are known to be real symmetric and positive semidefinite. However, the surface of an ellipsoid which fits the data is $\boldsymbol{x}^T\Sigma^{-1}\boldsymbol{x} = 1$, not $\boldsymbol{x}^T\Sigma\boldsymbol{x} = 1$.

*You have to keep it in mind that the $\boldsymbol{x}$ here are all deviations.

*You do not have to think too much about what the “semi” of the term “positive semi-definite” means for now.

As you could imagine, this is just one simple case of a richer variety of graphs. Let’s consider a 3-dimensional space. Any quadratic curve in this space can be denoted as $ax^2 + by^2 + cz^2 + dxy + eyz + fxz + gx + hy + iz + j = 0$, where at least one of $a, b, c, d, e, f$ is not $0$. Let $\boldsymbol{x}$ be $(x, y, z)^T$, then the quadratic curves can be simply denoted with a $3\times 3$ matrix $A$ and a 3-dimensional vector $\boldsymbol{b}$ as follows: $\boldsymbol{x}^T A\boldsymbol{x} + 2\boldsymbol{b}^T\boldsymbol{x} + s = 0$, where $A = \begin{pmatrix} a & d/2 & f/2 \\ d/2 & b & e/2 \\ f/2 & e/2 & c \end{pmatrix}$, $\boldsymbol{b} = \frac{1}{2}(g, h, i)^T$, and $s = j$. General quadratic curves are roughly classified into the 9 types below.
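The correspondence between the polynomial coefficients and the matrix form can be checked numerically (a NumPy sketch; all coefficient values are arbitrary choices of mine):

```python
import numpy as np

# a x^2 + b y^2 + c z^2 + d xy + e yz + f xz + g x + h y + i z + j = 0
a, b, c, d, e, f, g, h, i, j = 1.0, 2.0, 3.0, 0.4, 0.6, 0.8, 1.0, -1.0, 0.5, -2.0

# Symmetric matrix A and vector b of the matrix form x^T A x + 2 b^T x + s.
A = np.array([[a,     d / 2, f / 2],
              [d / 2, b,     e / 2],
              [f / 2, e / 2, c    ]])
bvec = 0.5 * np.array([g, h, i])
s = j

x, y, z = 0.3, -0.7, 1.1
v = np.array([x, y, z])

poly = (a * x**2 + b * y**2 + c * z**2 + d * x * y + e * y * z + f * x * z
        + g * x + h * y + i * z + j)
matrix_form = v @ A @ v + 2 * bvec @ v + s
print(np.isclose(poly, matrix_form))  # True
```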

You can shift these quadratic curves so that their center points come to the origin, without rotation, and the resulting curves are as follows. The curves can all still be denoted in the form $\boldsymbol{x}^T A\boldsymbol{x} + 2\boldsymbol{b}^T\boldsymbol{x} + s = 0$, with fewer nonzero coefficients.

As you can see, $A$ is a real symmetric matrix. As I have mentioned repeatedly, when all the elements of a symmetric matrix $A$ are real values and its eigenvalues are $\lambda_1, \dots, \lambda_D$, there exists an orthogonal/orthonormal matrix $U$ such that $U^T A U = \Lambda$, where $\Lambda = \mathrm{diag}(\lambda_1, \dots, \lambda_D)$. Hence, you can diagonalize the $A$ with an orthogonal matrix $U$. After you apply the rotation by $U^T$ to the curves (a)’ ~ (i)’, those curves are symmetrically placed about the xyz axes, and their center points still cross the origin. The resulting curves look like below. Or rather I should say you projected (a)’ ~ (i)’ on their eigenvectors.

In this article mainly (a)”, (g)”, (h)”, and (i)” are important. General equations for the curves are as follows:

• (a)”: $\frac{x^2}{p^2} + \frac{y^2}{q^2} + \frac{z^2}{r^2} = 1$
• (g)”: $z = \frac{x^2}{p^2} + \frac{y^2}{q^2}$
• (h)”: $z = \frac{x^2}{p^2} - \frac{y^2}{q^2}$
• (i)”: $z = \frac{x^2}{p^2}$

, where $p, q, r > 0$.

Even if this section has been puzzling to you, you just have to keep one point in your mind: we have been discussing general quadratic curves, but in PCA, you only need to consider the case where $A$ is a covariance matrix, that is $A = \Sigma$. PCA corresponds to the case where you shift and rotate the curve (a) into (a)”. Subtracting the mean of the data from each point of the data corresponds to shifting the quadratic curve (a) to (a)’. Calculating the eigenvectors of $\Sigma$ corresponds to calculating a rotation matrix $U$ such that the curve (a)’ comes to (a)” after applying the rotation, or projecting the curves on the eigenvectors of $\Sigma$. Importantly we are only discussing the covariance of certain data, not the distribution of the data itself.

*Just in case you are interested in a little more of the mathematical side: it is known that if you rotate all the points $\boldsymbol{x}$ on the curve $\boldsymbol{x}^T A\boldsymbol{x} + 2\boldsymbol{b}^T\boldsymbol{x} + s = 0$ with the rotation matrix $U^T$, those points are mapped into a new quadratic curve $\boldsymbol{x}^T\Lambda\boldsymbol{x} + 2(U^T\boldsymbol{b})^T\boldsymbol{x} + s = 0$. That means the rotation of the original quadratic curve with $U^T$ (or rather rotating the axes) enables getting rid of the cross terms $xy, yz, xz$. Also it is known that, with proper translations and rotations, a quadratic curve can be mapped into one of the types of quadratic curves in the figure below, depending on the coefficients of the original quadratic curve. And the discussion so far can be generalized to higher dimensional spaces, but that is beyond the scope of this article series. Please consult decent textbooks on linear algebra around you for further details.

### 4 Eigenvectors are gradients and sometimes variances.

In the second section I explained that you can express quadratic functions $\boldsymbol{x}^T A\boldsymbol{x}$ in a very simple way by projecting $\boldsymbol{x}$ on the eigenvectors of $A$.

You can comprehend what I have explained in another way: eigenvectors, to be exact eigenvectors of real symmetric matrices $A$, are gradients. And in case of PCA, I mean when $A = \Sigma$, eigenvalues are also variances. Before explaining what that means, let me explain a few totally common facts of mathematics. If you have variables $(x, y)$, I think you can comprehend functions in two ways. One is normal “functions” $z = f(x, y)$, and the other is “curves” $f(x, y) = 0$. “Functions” get an input $(x, y)$ and give out an output $z$, just like the normal functions you would imagine. “Curves” are rather sets of $(x, y)$ such that $f(x, y) = 0$.

*Please assume that the terms “functions” and “curves” are my own improvised words. I use them in case I fail to use the formal terms “function” and “curve” properly.

The quadratic curves in the figure above are all “curves” in my terminology, which can be denoted as $\boldsymbol{x}^T A\boldsymbol{x} + 2\boldsymbol{b}^T\boldsymbol{x} + s = 0$ or, after diagonalization, $\lambda_1 x^2 + \lambda_2 y^2 + s = 0$. However if you replace the $z$ of (g)”, (h)”, and (i)” with $f(x, y)$, you can interpret the “curves” as “functions” which are denoted as $f(x, y) = \lambda_1 x^2 + \lambda_2 y^2$. This might sound too obvious to you, but my point is that you can visualize how the values of “functions” change only when the inputs are 2 dimensional.

When a real symmetric matrix $A\in\mathbb{R}^{2\times 2}$ has two eigenvalues $\lambda_1, \lambda_2$, the distribution of the quadratic “functions” $f(x, y) = \lambda_1 x^2 + \lambda_2 y^2$ can be roughly classified into the following three types.

• (g): Both $\lambda_1$ and $\lambda_2$ are positive or both are negative.
• (h): Either $\lambda_1$ or $\lambda_2$ is positive and the other is negative.
• (i): Either $\lambda_1$ or $\lambda_2$ is 0 and the other is not.

The equations of (g)”, (h)”, and (i)” correspond to each type of $f(x, y)$, and their curves look like the three graphs below.

And in fact, when you start from the origin and go in the direction of an eigenvector $\boldsymbol{u}_i$, $\lambda_i$ is the gradient in that direction. You can see that more clearly when you restrict the distribution of $f(\boldsymbol{x})$ to a unit circle. Like in the figure below, in case $\lambda_1 > 0$ and $\lambda_2 > 0$, which is classified to (g), the distribution looks like the left side, and if you restrict the distribution to the unit circle, the distribution looks like a bowl, as in the middle and the right side. When you move in the direction of $\boldsymbol{u}_1$, you can climb the bowl as high as $\lambda_1$, and in the direction of $\boldsymbol{u}_2$ as high as $\lambda_2$.
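The “climbing” claim can be checked numerically: restricted to the unit circle, $\boldsymbol{x}^T A\boldsymbol{x}$ ranges between the smallest and the largest eigenvalue (a NumPy sketch with an arbitrary type (g) matrix of mine):

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
lam, U = np.linalg.eigh(A)   # both eigenvalues positive: type (g)

# Evaluate f(x) = x^T A x on a dense sampling of the unit circle.
angles = np.linspace(0, 2 * np.pi, 3600)
circle = np.stack([np.cos(angles), np.sin(angles)])
f = np.einsum('ij,ij->j', circle, A @ circle)

# Climbing the bowl: the peak height is lambda_max, the lowest is lambda_min.
print(np.isclose(f.max(), lam[-1], atol=1e-4))  # True
print(np.isclose(f.min(), lam[0], atol=1e-4))   # True
```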

Also in case of (h), the same facts hold. But in this case, you can also descend the curve.

*You might have seen the curve above in the context of optimization with stochastic gradient descent. The origin of the curve above is a notorious saddle point, where the gradients are all 0 in any direction, but which is not a local maximum or minimum. Points can get stuck at this point during optimization.

Especially in case of PCA, $A$ is a covariance matrix, thus $A = \Sigma$. Eigenvalues of $\Sigma$ are all equal to or greater than $0$. And it is known that in this case $\lambda_i$ is the variance of the data projected on its corresponding eigenvector $\boldsymbol{u}_i$. Hence, if you project $\boldsymbol{x}^T\Sigma\boldsymbol{x}$, the quadratic curves formed by a covariance matrix $\Sigma$, on the eigenvectors of $\Sigma$, you get $\lambda_1 x_1^2 + \cdots + \lambda_D x_D^2$. This shows that you can re-weight $(x_1, \dots, x_D)$, the coordinates of the data projected on the eigenvectors of $\Sigma$, with $(\lambda_1, \dots, \lambda_D)$, which are the variances. As I mentioned in the example of data of exam scores in the last article, the bigger a variance $\lambda_i$ is, the more the feature described by $\boldsymbol{u}_i$ varies from sample to sample. In other words, you can ignore eigenvectors corresponding to small eigenvalues.
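The statement that eigenvalues are variances of projected data can be verified on synthetic data (a NumPy sketch; the covariance values and sample size are my own assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2D data, centered so that the points are deviations.
X = rng.multivariate_normal([0.0, 0.0], [[4.0, 1.5], [1.5, 1.0]], size=50000)
X = X - X.mean(axis=0)

Sigma = np.cov(X, rowvar=False)
lam, U = np.linalg.eigh(Sigma)

# Variance of the data projected on each eigenvector equals its eigenvalue.
projected = X @ U
variances = projected.var(axis=0, ddof=1)
print(np.allclose(variances, lam))  # True
```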

That is a great hint as to why principal components corresponding to large eigenvalues contain much information about the data distribution. And you can also interpret PCA as “climbing” a bowl of $\boldsymbol{x}^T\Sigma\boldsymbol{x}$, as I have visualized in the case of the (g) type curve in the figure above.

*But as I have repeatedly mentioned, the ellipsoid which fits the data well is $\boldsymbol{x}^T\Sigma^{-1}\boldsymbol{x} = 1$.

*You have to be careful that even if you slice a type (h) curve with a plane $z = s$, the resulting cross section does not fit the original data well, because the equation of the cross section is $\lambda_1 x^2 + \lambda_2 y^2 = s$, a hyperbola. The figure below is an example of slicing the same $f(\boldsymbol{x})$ as the one above with such a plane, and the resulting cross section.

As we have seen, $\lambda_i$, the eigenvalues of the covariance matrix of data, are the variances of the data when projected on its eigenvectors. At the same time, when you fit an ellipsoid on the data, $\sqrt{\lambda_i}$ is the radius of the ellipsoid corresponding to $\boldsymbol{u}_i$. Thus ignoring the data projected on eigenvectors corresponding to small eigenvalues is equivalent to cutting off the axes of the ellipsoid with small radii.

I have explained PCA in three different ways over three articles.

• The second article: I focused on what kind of linear transformations covariance matrices $\Sigma$ enable, by visualizing displacement vectors. And those vectors look like they are swirling and extending in the directions of the eigenvectors of $\Sigma$.
• The third article: We directly found the directions in which a certain data distribution “swells” the most, to find that the data swell the most in the directions of eigenvectors.
• In this article, we have seen that PCA corresponds to only one case of quadratic functions, where the matrix $A$ is a covariance matrix. When you go in the directions of eigenvectors corresponding to big eigenvalues, the quadratic function increases the most. Also that means data samples have bigger variances when projected on those eigenvectors. Thus you can cut off eigenvectors corresponding to small eigenvalues because they retain little information about the data, and that is equivalent to fitting an ellipsoid on the data and cutting off the axes with small radii.

*Let $\Sigma$ be a covariance matrix, and you can diagonalize it with an orthogonal matrix $U$ as follows: $\Sigma = U\Lambda U^T$, where $\Lambda = \mathrm{diag}(\lambda_1, \dots, \lambda_D)$. Thus $\Sigma\boldsymbol{x} = U\Lambda U^T\boldsymbol{x}$. $U^T$ is a rotation, and multiplying $U^T\boldsymbol{x}$ with $\Lambda$ means you multiply each eigenvalue with each element of $U^T\boldsymbol{x}$. At the end $U$ enables the reverse rotation.

If you get data like the left side of the figure below, most explanations on PCA would just fit an oval on this data distribution. However, after reading this article series so far, you will have learned to see PCA from different viewpoints, like at the right side of the figure below.

### 5 Ellipsoids in Gaussian distributions.

I have explained that if the covariance of a data distribution is $\Sigma$, the ellipsoid which fits the distribution the best is $\boldsymbol{x}^T\Sigma^{-1}\boldsymbol{x} = 1$. You might have seen the part $(\boldsymbol{x}-\boldsymbol{\mu})^T\Sigma^{-1}(\boldsymbol{x}-\boldsymbol{\mu})$ somewhere else. It is the exponent of general Gaussian distributions: $\mathcal{N}(\boldsymbol{x}|\boldsymbol{\mu}, \Sigma) = \frac{1}{(2\pi)^{D/2}|\Sigma|^{1/2}}\exp\left(-\frac{1}{2}(\boldsymbol{x}-\boldsymbol{\mu})^T\Sigma^{-1}(\boldsymbol{x}-\boldsymbol{\mu})\right)$. It is known that the eigenvalues of $\Sigma^{-1}$ are $\frac{1}{\lambda_1}, \dots, \frac{1}{\lambda_D}$, and the eigenvectors corresponding to each eigenvalue are also $\boldsymbol{u}_1, \dots, \boldsymbol{u}_D$ respectively. Hence, just as well as what we have seen, if you project $(\boldsymbol{x}-\boldsymbol{\mu})$ on each eigenvector of $\Sigma^{-1}$, we can convert the exponent of the Gaussian distribution.

Let $\boldsymbol{y}$ be $U^T(\boldsymbol{x}-\boldsymbol{\mu})$, and let its components be $y_i = \boldsymbol{u}_i^T(\boldsymbol{x}-\boldsymbol{\mu})$, where $i = 1, \dots, D$. Just as we have seen, $(\boldsymbol{x}-\boldsymbol{\mu})^T\Sigma^{-1}(\boldsymbol{x}-\boldsymbol{\mu}) = \sum_{i=1}^{D}\frac{y_i^2}{\lambda_i}$. Hence $\mathcal{N}(\boldsymbol{x}|\boldsymbol{\mu}, \Sigma) = \prod_{i=1}^{D}\frac{1}{(2\pi\lambda_i)^{1/2}}\exp\left(-\frac{y_i^2}{2\lambda_i}\right)$.

*To be mathematically exact about changing variables of normal distributions, you have to consider for example Jacobian matrices.
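The factorization can be confirmed pointwise with NumPy (the mean, covariance, and evaluation point are arbitrary choices of mine; this checks the density identity only, not the change of variables itself):

```python
import numpy as np

mu = np.array([1.0, -2.0])
Sigma = np.array([[2.0, 0.6],
                  [0.6, 1.0]])
lam, U = np.linalg.eigh(Sigma)

x = np.array([0.5, -1.0])
D = len(x)
dev = x - mu

# Joint multivariate Gaussian density at x.
joint = np.exp(-0.5 * dev @ np.linalg.inv(Sigma) @ dev) \
        / np.sqrt((2 * np.pi) ** D * np.linalg.det(Sigma))

# Product of 1D Gaussians along the eigenvectors of Sigma.
y = U.T @ dev
factorized = np.prod(np.exp(-y**2 / (2 * lam)) / np.sqrt(2 * np.pi * lam))

print(np.isclose(joint, factorized))  # True
```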

The results above demonstrate that, by projecting data on the eigenvectors of its covariance matrix, you can factorize the original multi-dimensional Gaussian distribution into a product of Gaussian distributions which are independent of each other. However, at the same time, that is the potential limit of approximating data with PCA. This idea is going to be more important when you think about more probabilistic ways to handle PCA, which are more robust to a lack of data.

I have explained PCA over 3 articles from various viewpoints. If you have been patient enough to read my article series, I think you have gained some deeper insight into not only PCA, but also linear algebra, and that should be helpful when you learn or teach data science. I hope my codes also help you. In fact these are not the only topics about PCA. There are a lot of important PCA-like algorithms.

In fact our expedition of ellipsoids, or PCA still continues, just as Star Wars series still continues. Especially if I have to explain an algorithm named probabilistic PCA, I need to explain the “Bayesian world” of machine learning. Most machine learning algorithms covered by major introductory textbooks tend to be too deterministic and dependent on the size of data. Many of those algorithms have another “parallel world,” where you can handle inaccuracy in better ways. I hope I can also write about them, and I might prepare another trilogy for such PCA. But I will not disappoint you, like “The Phantom Menace.”

### Appendix: making a model of a bunch of grapes with ellipsoid berries.

If you can control quadratic curves, reshaping and rotating them, you can make a model of a grape or olive bunch on Matplotlib. I made a program for making a model of a bunch of berries on Matplotlib, using the module to draw ellipsoids which I introduced earlier. You can check the codes on this page.

*I have no idea how many people on this earth are in need of making such models.

I made some modules so that you can see the grape bunch from several angles. This might look very simple to you, but the locations of berries are organized carefully so that it looks like they are placed around a stem and that the berries are not too close to each other.

[References]

[1] C. M. Bishop, “Pattern Recognition and Machine Learning,” (2006), Springer, pp. 78-83, 559-577

[2] 「理工系新課程　線形代数　基礎から応用まで」, 培風館, (2017)

[3] 「これなら分かる　最適化数学　基礎原理から計算手法まで」, 金谷健一著, 共立出版, (2019), pp. 17-49

[4] 「これなら分かる　応用数学教室　最小二乗法からウェーブレットまで」, 金谷健一著, 共立出版, (2019), pp. 165-208

[5] 「サボテンパイソン 」
https://sabopy.com/

## Rethinking linear algebra: visualizing linear transformations and eigen vectors

In terms of calculation processes of Principal Component Analysis (PCA) or Linear Discriminant Analysis (LDA), which are the dimension reduction techniques I am going to explain in the following articles, diagonalization is what they are all about. Throughout this article, I would like you to have richer insight into diagonalization in order to prepare for understanding those basic dimension reduction techniques.

When our professor started a lecture on the last chapter of our textbook on linear algebra, he said “It is no exaggeration to say that everything we have studied is for this ‘diagonalization.'” Until then we had to write tons of numerical matrices and vectors all over our notebooks, calculating those products, adding their rows or columns to other rows or columns, sometimes transposing the matrices, calculating their determinants.

It was like the scene in “The Karate Kid,” where the protagonist finally understood the profound meaning behind the prolonged and boring “wax on, wax off” training given by Miyagi (or “jacket on, jacket off” training given by Jackie Chan). We had finally understood why we had been doing those seemingly endless calculations.

But usually you can do those calculations easily with functions in the Numpy library. Unlike Japanese college freshmen, I bet you are too busy to reopen textbooks on linear algebra to refresh your mathematics. Thus I am going to provide less mathematical and more intuitive explanation of diagonalization in this article.

### 1, The mainstream ways of explaining diagonalization.

*The statements below are very rough for mathematical topics, but I am going to give priority to offering more visual understanding of linear algebra in this article. For further understanding, please refer to textbooks on linear algebra. If you would like to have the minimum understanding of linear algebra needed for machine learning, I recommend Appendix C of Pattern Recognition and Machine Learning by C. M. Bishop.

In most textbooks on linear algebra, the explanations on diagonalization are like this (if you are not sure what diagonalization is, or if you are allergic to mathematics, you do not have to read this seriously):

Let $V$ be a vector space and let $f$ be a mapping of $V$ into itself, defined as $f(\boldsymbol{x}) = A\boldsymbol{x}$, where $A$ is a $D\times D$ matrix and $\boldsymbol{x}$ is a $D$ dimensional vector. An element $\boldsymbol{v}\in V$ is called an eigen vector if there exists a number $\lambda$ such that $f(\boldsymbol{v}) = \lambda\boldsymbol{v}$ and $\boldsymbol{v}\neq\boldsymbol{0}$. In this case $\lambda$ is uniquely determined and is called an eigen value of $f$ belonging to the eigen vector $\boldsymbol{v}$.

Assume that a matrix $A$ has eigen values $\lambda_1, \dots, \lambda_D$, belonging to eigen vectors $\boldsymbol{v}_1, \dots, \boldsymbol{v}_D$ respectively. If $\boldsymbol{v}_1, \dots, \boldsymbol{v}_D$ form a basis of the vector space $V$, then $A$ is diagonalizable.

When $A$ is diagonalizable, with the matrix $P = (\boldsymbol{v}_1, \dots, \boldsymbol{v}_D)$, whose column vectors are the eigen vectors, the following equation holds: $P^{-1}AP = \Lambda$, where $\Lambda = \mathrm{diag}(\lambda_1, \dots, \lambda_D)$.

And when $A$ is diagonalizable, you can decompose $A$ as below: $A = P\Lambda P^{-1}$.

Most textbooks keep explaining this type of stuff, but I have to say they lack the effort to make it understandable to readers with low mathematical literacy like me. Especially if you have to apply the idea to the data science field, I believe you need a more visual understanding of diagonalization. Therefore instead of just explaining the definitions and theorems, I would like to take a different approach. But in order to understand them in more intuitive ways, we first have to rethink what linear transformation means in more visible ways.

### 2, Linear transformations

Even though I did my best to make this article require as little prerequisite knowledge as possible, you at least have to understand linear transformation of numerical vectors $\boldsymbol{x}$ with matrices $A$. Linear transformation is nothing difficult, and in this article I am going to use only 2 or 3 dimensional numerical vectors and square matrices. You can calculate the linear transformation of $\boldsymbol{x}$ by $A$ as $A\boldsymbol{x}$, as in the equations in the figure. In other words, $A\boldsymbol{x}$ is a vector transformed by $A$.

*I am not going to use the term “linear transformation” in a precise way in the context of linear algebra. In this article or in the context of data science or machine learning, “linear transformation” for the most part means products of matrices or vectors.

*Forward/back propagation of deep learning is mainly composed of this linear transformation. You keep linearly transforming input vectors, frequently transforming them with activation functions, which are for the most part not linear transformation.

As you can see in the equations above, linear transformation with $A$ transforms a vector to another vector. Assume that you have an original vector $\boldsymbol{x}$ in grey and that the vector in pink is $A\boldsymbol{x}$, the $\boldsymbol{x}$ transformed by $A$. If you subtract $\boldsymbol{x}$ from $A\boldsymbol{x}$, you get the displacement vector $A\boldsymbol{x} - \boldsymbol{x}$, which I displayed in purple. A displacement vector means the transition from one vector to another vector.

Let’s calculate the displacement vectors with more vectors $\boldsymbol{x}$. Assume a certain matrix $A$, and that I prepared several grid vectors in grey as you can see in the figure below. If you transform those grey grid points with $A$, they are mapped into the vectors in pink. With those vectors in grey or pink, you can calculate their displacement vectors in purple.

I think you noticed that the displacement vectors in the figure above have some tendencies. In order to see that more clearly, let’s calculate displacement vectors with several matrices and more grid points. Assume that you have three square matrices $A_1, A_2, A_3$, and I plotted the displacement vectors made by each of the matrices respectively in the figure below.

I think you noticed some characteristics of the displacement vectors made by those linear transformations: the vectors are swirling and many of them seem to be oriented in certain directions. To be exact, some displacement vectors extend in the same directions as some of the original vectors in grey. That means the linear transformation by $A$ did not change the direction of the original vector $\boldsymbol{x}$, and the unchanged vectors are called eigen vectors. Real eigen vectors of each matrix are displayed as arrows in yellow in the figure above. But when it comes to the third matrix, it does not have any real eigen values.
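You can reproduce such displacement vectors yourself (a NumPy sketch; the two matrices below are my own examples, a symmetric one with real eigen values and a rotation-like one without):

```python
import numpy as np

A = np.array([[1.0, 0.5],
              [0.5, 1.0]])

# Displacement vectors Ax - x for a grid of points (the purple arrows).
grid = np.array([[x, y] for x in np.linspace(-1, 1, 5)
                        for y in np.linspace(-1, 1, 5)]).T   # shape (2, 25)
displacement = A @ grid - grid

eigvals, eigvecs = np.linalg.eig(A)
print(eigvals)   # real eigen values for this symmetric matrix

# A rotation-like matrix has no real eigen values:
B = np.array([[0.0, -1.0],
              [1.0,  0.0]])
vals_B, _ = np.linalg.eig(B)
print(vals_B)    # complex eigen values
```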

In linear algebra, depending on the type of matrices $A$, you have to consider various cases, such as whether the matrices have real or imaginary eigen values, whether the matrices are diagonalizable, whether the eigen vectors are orthogonal, or whether they are unit vectors. But those topics are out of the scope of this article series, so please refer to textbooks on linear algebra if you are interested.

Luckily, however, in terms of PCA or LDA, you only have to consider a type of matrices named positive semidefinite matrices, and I am going to explain positive semidefinite matrices in the fourth section.

### 3, Eigen vectors as coordinate system

Source: Ian Stewart, “Professor Stewart’s Cabinet of Mathematical Curiosities,” (2008), Basic Books

Let me take Fibonacci numbers as an example to briefly see why diagonalization is useful. The Fibonacci sequence is quite simple, and it is often explained using an example of pairs of rabbits increasing generation by generation. Let $a_n$ be the number of pairs of grown up rabbits in the $n$-th generation. One pair of grown up rabbits produces one pair of young rabbits every generation. The concrete values of $a_n$ are $1, 1, 2, 3, 5, 8, 13, 21, \dots$. Assume that $a_1 = 1$ and that $a_2 = 1$, then you can calculate the number of the pairs of grown up rabbits in the next generation with the following recurrence relation: $a_{n+2} = a_{n+1} + a_n$. Let $\boldsymbol{x}_n$ be $(a_{n+1}, a_n)^T$, then the recurrence relation can be written as $\boldsymbol{x}_{n+1} = A\boldsymbol{x}_n$, where $A = \begin{pmatrix}1 & 1\\ 1 & 0\end{pmatrix}$, and the transitions of $\boldsymbol{x}_n$ are like the purple arrows in the figure below. It seems that the changes of the purple arrows are irregular if you look at the plots in the normal coordinate system.

Assume that $\lambda_1, \lambda_2$ are the eigen values of $A$, and $\boldsymbol{v}_1, \boldsymbol{v}_2$ are the eigen vectors belonging to them respectively. Also let $c_1, c_2$ be scalars such that $\boldsymbol{x}_1 = c_1\boldsymbol{v}_1 + c_2\boldsymbol{v}_2$. According to the definition of eigen values and eigen vectors belonging to them, the following two equations hold: $A\boldsymbol{v}_1 = \lambda_1\boldsymbol{v}_1$ and $A\boldsymbol{v}_2 = \lambda_2\boldsymbol{v}_2$. If you calculate $\boldsymbol{x}_2$ using the eigen vectors of $A$, $\boldsymbol{x}_2 = A\boldsymbol{x}_1 = c_1\lambda_1\boldsymbol{v}_1 + c_2\lambda_2\boldsymbol{v}_2$. In the same way, $\boldsymbol{x}_3 = A\boldsymbol{x}_2 = c_1\lambda_1^2\boldsymbol{v}_1 + c_2\lambda_2^2\boldsymbol{v}_2$, and $\boldsymbol{x}_{n+1} = A\boldsymbol{x}_n = c_1\lambda_1^n\boldsymbol{v}_1 + c_2\lambda_2^n\boldsymbol{v}_2$. These equations show that in the coordinate system made by the eigen vectors of $A$, the linear transformation by $A$ is easily done by just multiplying the eigen values with each eigen vector. Compared to the graph of Fibonacci numbers above, in the figure below you can see that in the coordinate system made by the eigen vectors the plots change more systematically generation by generation.

In the coordinate system made by the eigen vectors of square matrices, the linear transformations by those matrices become much more straightforward, and this is one powerful strength of eigen vectors.
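The Fibonacci derivation above can be run directly (a NumPy sketch; I use `np.linalg.eig` in place of diagonalizing by hand, and `round` only to undo floating point error):

```python
import numpy as np

# x_{n+1} = A x_n with x_n = (a_{n+1}, a_n)^T.
A = np.array([[1.0, 1.0],
              [1.0, 0.0]])
lam, V = np.linalg.eig(A)        # eigen values (1 +- sqrt(5)) / 2

x1 = np.array([1.0, 1.0])        # (a_2, a_1) = (1, 1)
c = np.linalg.solve(V, x1)       # x_1 = c_1 v_1 + c_2 v_2

# In the eigen vector coordinate system, stepping n - 1 generations is just
# multiplying each coefficient c_i by lambda_i^(n - 1).
n = 10
x_n = V @ (c * lam ** (n - 1))
print(round(x_n[1]))             # a_10 = 55
```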

*I do not major in mathematics, so I am not 100% sure, but vectors in linear algebra have more abstract meanings, and various things in mathematics can be vectors, even though in machine learning or data science we mainly use numerical vectors with more concrete elements. We can also say that matrices are a kind of map. That is just like, at least in my impression, how even though a real town is composed of various components such as houses and smooth or bumpy roads, you can simplify its structure with simple orthogonal lines, like the map of Manhattan. But if you know what the town actually looks like, you do not have to follow the zigzag path on the map.

### 4, Eigen vectors of positive semidefinite matrices

In the second section of this article I told you that, even though you have to consider various elements when you discuss general diagonalization, in terms of PCA and LDA we mainly use only a type of matrices named positive semidefinite matrices. Let $A$ be a $D\times D$ square matrix. If $\boldsymbol{x}^T A\boldsymbol{x} \geq 0$ for all values of the vector $\boldsymbol{x}$, $A$ is said to be a positive semidefinite matrix. And it is also known that $A$ being a positive semidefinite matrix is equivalent to $\lambda_i \geq 0$ for all the eigen values $\lambda_i$ $(i = 1, \dots, D)$.

*I think most people first learn a type of matrices called positive definite matrices. Let $A$ be a square matrix. If $\boldsymbol{x}^T A \boldsymbol{x} > 0$ for all values of the vector $\boldsymbol{x} \neq \boldsymbol{0}$, $A$ is said to be a positive definite matrix. You have to keep it in mind that even if all the elements of $A$ are positive, $A$ is not necessarily positive definite/semidefinite.
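A small counterexample makes the last caveat concrete: this matrix has only positive entries, yet it has a negative eigen value, so it is not positive semidefinite.

```python
import numpy as np

B = np.array([[1.0, 2.0],
              [2.0, 1.0]])  # all entries are positive

print(np.linalg.eigvalsh(B))  # [-1.  3.] -> not positive semidefinite

# A witness vector x with x^T B x < 0
x = np.array([1.0, -1.0])
print(x @ B @ x)  # -2.0
```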

Just as we did in the second section of this article, let's visualize the displacement vectors made by linear transformation with a square positive semidefinite matrix $A$.

*In fact $\begin{pmatrix} 3 & 1 \\ 1 & 2 \end{pmatrix}$, whose linear transformation I visualized in the second section, is also positive semidefinite.

Let’s visualize the linear transformation by the positive definite matrix $A = \frac{1}{50}\begin{pmatrix} 60.45 & 33.63 & 46.29 \\ 33.63 & 68.49 & 50.93 \\ 46.29 & 50.93 & 53.61 \end{pmatrix}$. I visualized the displacement vectors made by $A$ just the same way as in the second section of this article. The result is as below, and you can see that, as well as the displacement vectors made by the $2 \times 2$ matrix, the three dimensional displacement vectors below are swirling and extending in three directions, in the directions of the three orthogonal eigen vectors $\boldsymbol{v}_1$, $\boldsymbol{v}_2$, and $\boldsymbol{v}_3$.

*It might seem like a weird choice of a matrix, but you are going to see why in the next article.
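As a quick sanity check, we can verify numerically that this $3 \times 3$ matrix (the same one used in the code at the end of the article) really is positive definite, i.e. all of its eigen values are strictly greater than 0:

```python
import numpy as np

A = np.array([[60.45, 33.63, 46.29],
              [33.63, 68.49, 50.93],
              [46.29, 50.93, 53.61]]) / 50

eigen_values = np.linalg.eigvalsh(A)  # eigen values in ascending order
print(np.all(eigen_values > 0))  # True
```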

You might have already noticed that $\begin{pmatrix} 3 & 1 \\ 1 & 2 \end{pmatrix}$ and $A$ are both symmetric matrices, that their elements are all real values, and that their diagonal elements are all positive values. Super importantly, when all the elements of a symmetric matrix $A$ are real values and its eigen values are $\lambda_1, \dots, \lambda_n$, there exists an orthonormal matrix $U$ such that $U^T A U = \Lambda$, where $\Lambda = \mathrm{diag}(\lambda_1, \dots, \lambda_n)$.

*The title of this section might be misleading, but please keep it in mind that positive definite/semidefinite matrices are not necessarily real symmetric matrices. And real symmetric matrices are not necessarily positive definite/semidefinite matrices.

### 5, Orthonormal matrices and rotation of vectors

In this section I am going to explain orthonormal matrices, also known as rotation matrices. If a matrix $U$ is an orthonormal matrix, the column vectors of $U$ are orthonormal, which means $\boldsymbol{u}_i^T \boldsymbol{u}_j = 1$ if $i = j$ and $\boldsymbol{u}_i^T \boldsymbol{u}_j = 0$ if $i \neq j$, where $U = (\boldsymbol{u}_1, \dots, \boldsymbol{u}_n)$. In other words the column vectors $\boldsymbol{u}_1, \dots, \boldsymbol{u}_n$ form an orthonormal coordinate system.

Orthonormal matrices have several important properties, and one of them is $U^T U = U U^T = I$, that is, $U^{-1} = U^T$. Combining this fact with what I have told you so far, we can reach one conclusion: you can diagonalize a real symmetric matrix $A$ as $A = U \Lambda U^T$. This is known as spectral decomposition, and for positive semidefinite matrices it coincides with singular value decomposition.
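The conclusion above can be verified in a few lines of NumPy. This is just a sketch with the symmetric matrix from the second section; `np.linalg.eigh` returns the eigen values together with an orthonormal matrix of eigen vectors:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])  # real symmetric

eigen_values, U = np.linalg.eigh(A)  # U has orthonormal eigen vector columns
Lambda = np.diag(eigen_values)

print(np.allclose(U.T @ U, np.eye(2)))   # columns of U are orthonormal
print(np.allclose(U @ Lambda @ U.T, A))  # spectral decomposition A = U Λ U^T
```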

Another important property of $U$ is that $U^T$ is also orthonormal. In other words, if $U = (\boldsymbol{u}_1, \dots, \boldsymbol{u}_n)$ is orthonormal, the row vectors of $U$ also form an orthonormal coordinate system.

…It seems things are getting too mathematical and abstract (for me), thus for now I am going to wrap up what I have explained in this article.

We have seen

• Numerical matrices linearly transform vectors.
• Certain linear transformations do not change the direction of vectors pointing in certain directions, and such vectors are called eigen vectors.
• Making use of eigen vectors, you can form a new coordinate system which describes the linear transformations in a more straightforward way.
• You can diagonalize a real symmetric matrix $A$ with an orthonormal matrix $U$ as $U^T A U = \Lambda$.

Our current interest is what kind of linear transformation a real symmetric positive definite matrix enables. I am going to explain why the purple vectors in the figure above are swirling like that in the upcoming articles. Before that, however, we are going to see one application of what we have seen in this article: dimension reduction. To be concrete, the next article is going to be about principal component analysis (PCA), which is very important in many fields.

*In short, the orthonormal matrix $U$ I mentioned above enables rotation of vectors, and the diagonal matrix $\Lambda$ expands or contracts vectors along each axis. I am going to explain that more precisely in the upcoming articles.
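The "rotate, scale, rotate back" reading of $A = U \Lambda U^T$ can be checked step by step. A minimal sketch, again with the symmetric matrix from the second section (the intermediate names are mine):

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])  # real symmetric positive definite

eigen_values, U = np.linalg.eigh(A)
x = np.array([1.0, 2.0])

# rotate x into the eigen vector coordinate system,
# scale along each axis by the eigen values,
# then rotate back into the original coordinate system
x_rotated = U.T @ x
x_scaled = eigen_values * x_rotated
x_back = U @ x_scaled

print(np.allclose(x_back, A @ x))  # True
```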

* I make study materials on machine learning, sponsored by DATANOMIQ. I do my best to make my content as straightforward but as precise as possible. I include all of my reference sources. If you notice any mistakes in my materials, including grammatical errors, please let me know (email: yasuto.tamura@datanomiq.de). And if you have any advice for making my materials more understandable to learners, I would appreciate hearing it.

*I attached the code I used to make the figures in this article. You can just copy, paste, and run it, after installing the necessary libraries.

import matplotlib.pyplot as plt
import numpy as np
import matplotlib.patches as mpatches

T_A = np.array([[1, 1],
                [1, 0]])

total_step = 5
x = np.zeros((total_step, 2))

x[0] = np.array([1, 0])

for i in range(total_step - 1):
    x[i + 1] = np.dot(T_A, x[i])

eigen_values, eigen_vectors = np.linalg.eig(T_A)
idx = eigen_values.argsort()[::-1]
eigen_values = eigen_values[idx]
eigen_vectors = eigen_vectors[:, idx]
for i in range(len(eigen_vectors)):
    if eigen_vectors.T[i][0] < 0:
        eigen_vectors.T[i] = -eigen_vectors.T[i]

v_initial = x[0]
v_coefficients = np.zeros((total_step, 2))
v_coefficients[0] = np.dot(v_initial, np.linalg.inv(eigen_vectors.T))

for i in range(total_step - 1):
    v_coefficients[i + 1] = v_coefficients[i] * eigen_values

# scalar multiples of each eigen vector, generation by generation
v_1_list = np.zeros((total_step + 1, 2))
v_2_list = np.zeros((total_step + 1, 2))
for i in range(total_step):
    v_1_list[i + 1] = v_coefficients.T[0][i] * eigen_vectors.T[0]
    v_2_list[i + 1] = v_coefficients.T[1][i] * eigen_vectors.T[1]

plt.figure(figsize=(20, 15))
fontsize = 20
small_shift = 0.2

plt.plot(x[:, 0], x[:, 1], marker='o', linestyle='none', markersize=10, color='black')

plt.arrow(0, 0, eigen_vectors.T[0][0], eigen_vectors.T[0][1], width=0.05, head_width=0.2, color='orange')
plt.arrow(0, 0, eigen_vectors.T[1][0], eigen_vectors.T[1][1], width=0.05, head_width=0.2, color='orange')

plt.text(eigen_vectors.T[0][0], eigen_vectors.T[0][1] + small_shift, r'', va='center', ha='right', fontsize=fontsize + 10)
plt.text(eigen_vectors.T[1][0] - small_shift, eigen_vectors.T[1][1], r'', va='center', ha='right', fontsize=fontsize + 10)

for i in range(total_step):

    plt.arrow(0, 0, v_1_list[i + 1][0], v_1_list[i + 1][1], head_width=0.05, color='darkviolet', length_includes_head=True)
    plt.arrow(0, 0, v_2_list[i + 1][0], v_2_list[i + 1][1], head_width=0.05, color='darkviolet', length_includes_head=True)

    plt.text(v_1_list[i + 1][0] + 2 * small_shift, v_1_list[i + 1][1] - 2 * small_shift, r''.format(1, i + 1, 1), va='center', ha='right', fontsize=fontsize)
    plt.text(v_2_list[i + 1][0] - 0.1, v_2_list[i + 1][1], r''.format(2, i + 1, 2), va='center', ha='right', fontsize=fontsize)

    plt.arrow(v_1_list[i + 1][0], v_1_list[i + 1][1], v_2_list[i + 1][0], v_2_list[i + 1][1], head_width=0, color='black', linestyle='--', length_includes_head=True)
    plt.arrow(v_2_list[i + 1][0], v_2_list[i + 1][1], v_1_list[i + 1][0], v_1_list[i + 1][1], head_width=0, color='black', linestyle='--', length_includes_head=True)

orange_patch = mpatches.Patch(color='orange', label='Eigen vectors')
purple_patch = mpatches.Patch(color='darkviolet', label='Scalar multiples of the eigen vectors')
plt.legend(handles=[orange_patch, purple_patch], fontsize=25, loc='lower right')

for i in range(total_step):
    plt.text(x[i][0] + 0.1, x[i][1] - 0.05, r'n={0}'.format(i), fontsize=20)

plt.grid(True)
plt.ylabel("", fontsize=20)
plt.xlabel("", fontsize=20)
plt.title("Fibonacci sequence and its eigen space", fontsize=30)
#plt.savefig("Fibonacci_eigen_space.png")
plt.show()

import matplotlib.pyplot as plt
import numpy as np
import matplotlib.patches as mpatches

const_range = 10

X = np.arange(-const_range, const_range + 1, 1)
Y = np.arange(-const_range, const_range + 1, 1)
U_x, U_y = np.meshgrid(X, Y)

T_A_0 = np.array([[3, 1],
                  [1, 2]])

T_A_1 = np.array([[3, 1],
                  [-1, 1]])

T_A_2 = np.array([[1, -1],
                  [1, 1]])

T_A_list = np.array((T_A_0, T_A_1, T_A_2))

const_range = 5
plt.figure(figsize=(30, 10))
labels = ["Grids", "Displacement vectors made by A", "Real eigen vectors of A"]
title_list = [r" has two different real eigen vectors.", r" has two identical real unit eigen vectors.", r" has only imaginary eigen vectors."]
for idx in range(len(T_A_list)):

    eigen_values, eigen_vectors = np.linalg.eig(T_A_list[idx])
    sorted_idx = eigen_values.argsort()[::-1]
    eigen_values = eigen_values[sorted_idx]
    eigen_vectors = eigen_vectors[:, sorted_idx]
    eigen_vectors = eigen_vectors.astype(float)

    for i in range(len(eigen_vectors)):
        if eigen_vectors.T[i][0] < 0:
            eigen_vectors.T[i] = -eigen_vectors.T[i]

    X = np.arange(-const_range, const_range + 1, 1)
    Y = np.arange(-const_range, const_range + 1, 1)
    U_x, U_y = np.meshgrid(X, Y)

    V_x = np.zeros((len(U_x), len(U_y)))
    V_y = np.zeros((len(U_x), len(U_y)))

    temp_vec = np.zeros((1, 2))

    W_x = np.zeros((len(U_x), len(U_y)))
    W_y = np.zeros((len(U_x), len(U_y)))

    plt.subplot(1, 3, idx + 1)

    for i in range(len(U_x)):
        for j in range(len(U_y)):
            temp_vec[0][0] = U_x[i][j]
            temp_vec[0][1] = U_y[i][j]

            temp_vec[0] = np.dot(T_A_list[idx], temp_vec[0])

            V_x[i][j] = temp_vec[0][0]
            V_y[i][j] = temp_vec[0][1]

            W_x[i][j] = V_x[i][j] - U_x[i][j]
            W_y[i][j] = V_y[i][j] - U_y[i][j]
            #plt.arrow(0, 0, V_x[i][j], V_y[i][j], head_width=0.1, color='red')
            plt.arrow(0, 0, U_x[i][j], U_y[i][j], head_width=0.3, color='dimgrey', label=labels[0])
            plt.arrow(U_x[i][j], U_y[i][j], W_x[i][j], W_y[i][j], head_width=0.3, color='darkviolet', label=labels[1])

    range_const = 20
    plt.xlim([-range_const, range_const])
    plt.ylim([-range_const, range_const])
    plt.title(title_list[idx], fontsize=25)

    # the third matrix has no real eigen vectors to draw
    if idx == 2:
        continue

    plt.arrow(0, 0, eigen_vectors.T[0][0] * 10, eigen_vectors.T[0][1] * 10, head_width=1, color='orange', label=labels[2])
    plt.arrow(0, 0, eigen_vectors.T[1][0] * 10, eigen_vectors.T[1][1] * 10, head_width=1, color='orange', label=labels[2])

grey_patch = mpatches.Patch(color='grey', label='Grids')
purple_patch = mpatches.Patch(color='darkviolet', label='Displacement vectors made by A')
yellow_patch = mpatches.Patch(color='gold', label='Real eigen vectors of A')
plt.legend(handles=[grey_patch, purple_patch, yellow_patch], fontsize=25, loc='lower right', bbox_to_anchor=(-0.1, -.35))
#plt.savefig("linear_transformation.png")
plt.show()

import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d.proj3d import proj_transform
from mpl_toolkits.mplot3d.axes3d import Axes3D
from matplotlib.text import Annotation
from matplotlib.patches import FancyArrowPatch
import matplotlib.patches as mpatches

class Annotation3D(Annotation):
    def __init__(self, text, xyz, *args, **kwargs):
        super().__init__(text, xy=(0, 0), *args, **kwargs)
        self._xyz = xyz

    def draw(self, renderer):
        x2, y2, z2 = proj_transform(*self._xyz, renderer.M)
        self.xy = (x2, y2)
        super().draw(renderer)

def _annotate3D(ax, text, xyz, *args, **kwargs):
    '''Add annotation text to an Axes3D instance.'''
    annotation = Annotation3D(text, xyz, *args, **kwargs)
    ax.add_artist(annotation)

setattr(Axes3D, 'annotate3D', _annotate3D)

class Arrow3D(FancyArrowPatch):
    def __init__(self, x, y, z, dx, dy, dz, *args, **kwargs):
        super().__init__((0, 0), (0, 0), *args, **kwargs)
        self._xyz = (x, y, z)
        self._dxdydz = (dx, dy, dz)

    def draw(self, renderer):
        x1, y1, z1 = self._xyz
        dx, dy, dz = self._dxdydz
        x2, y2, z2 = (x1 + dx, y1 + dy, z1 + dz)

        xs, ys, zs = proj_transform((x1, x2), (y1, y2), (z1, z2), renderer.M)
        self.set_positions((xs[0], ys[0]), (xs[1], ys[1]))
        super().draw(renderer)

def _arrow3D(ax, x, y, z, dx, dy, dz, *args, **kwargs):
    '''Add a 3d arrow to an Axes3D instance.'''
    arrow = Arrow3D(x, y, z, dx, dy, dz, *args, **kwargs)
    ax.add_artist(arrow)

setattr(Axes3D, 'arrow3D', _arrow3D)

T_A = np.array([[60.45, 33.63, 46.29],
                [33.63, 68.49, 50.93],
                [46.29, 50.93, 53.61]])

T_A = T_A / 50
const_range = 2

X = np.arange(-const_range, const_range + 1, 1)
Y = np.arange(-const_range, const_range + 1, 1)
Z = np.arange(-const_range, const_range + 1, 1)

U_x, U_y, U_z = np.meshgrid(X, Y, Z)

V_x = np.zeros((len(U_x), len(U_y), len(U_z)))
V_y = np.zeros((len(U_x), len(U_y), len(U_z)))
V_z = np.zeros((len(U_x), len(U_y), len(U_z)))

temp_vec = np.zeros((1, 3))

W_x = np.zeros((len(U_x), len(U_y), len(U_z)))
W_y = np.zeros((len(U_x), len(U_y), len(U_z)))
W_z = np.zeros((len(U_x), len(U_y), len(U_z)))

eigen_values, eigen_vectors = np.linalg.eig(T_A)
sorted_idx = eigen_values.argsort()[::-1]
eigen_values = eigen_values[sorted_idx]
eigen_vectors = eigen_vectors[:, sorted_idx]
eigen_vectors = eigen_vectors.astype(float)

fig = plt.figure(figsize=(15, 15))
ax = fig.add_subplot(111, projection='3d')
grid_range = const_range + 5
ax.set_xlim(-grid_range, grid_range)
ax.set_ylim(-grid_range, grid_range)
ax.set_zlim(-grid_range, grid_range)

for i in range(len(eigen_vectors)):
    if eigen_vectors.T[i][0] < 0:
        eigen_vectors.T[i] = -eigen_vectors.T[i]

for i in range(len(U_x)):
    for j in range(len(U_x)):
        for k in range(len(U_x)):
            temp_vec[0][0] = U_x[i][j][k]
            temp_vec[0][1] = U_y[i][j][k]
            temp_vec[0][2] = U_z[i][j][k]

            temp_vec[0] = np.dot(T_A, temp_vec[0])

            V_x[i][j][k] = temp_vec[0][0]
            V_y[i][j][k] = temp_vec[0][1]
            V_z[i][j][k] = temp_vec[0][2]

            W_x[i][j][k] = V_x[i][j][k] - U_x[i][j][k]
            W_y[i][j][k] = V_y[i][j][k] - U_y[i][j][k]
            W_z[i][j][k] = V_z[i][j][k] - U_z[i][j][k]
            ax.arrow3D(0, 0, 0,
                       U_x[i][j][k], U_y[i][j][k], U_z[i][j][k],
                       mutation_scale=10, arrowstyle="-|>", fc='dimgrey', ec='dimgrey')
            #ax.arrow3D(0, 0, 0,
            #           V_x[i][j][k], V_y[i][j][k], V_z[i][j][k],
            #           mutation_scale=10, arrowstyle="-|>", fc='red', ec='red')
            ax.arrow3D(U_x[i][j][k], U_y[i][j][k], U_z[i][j][k],
                       W_x[i][j][k], W_y[i][j][k], W_z[i][j][k],
                       mutation_scale=10, arrowstyle="-|>", fc='darkviolet', ec='darkviolet')

ax.arrow3D(0, 0, 0, eigen_vectors.T[0][0] * 10, eigen_vectors.T[0][1] * 10, eigen_vectors.T[0][2] * 10,
           mutation_scale=10, arrowstyle="-|>", fc='orange', ec='orange')
ax.arrow3D(0, 0, 0, eigen_vectors.T[1][0] * 10, eigen_vectors.T[1][1] * 10, eigen_vectors.T[1][2] * 10,
           mutation_scale=10, arrowstyle="-|>", fc='orange', ec='orange')
ax.arrow3D(0, 0, 0, eigen_vectors.T[2][0] * 10, eigen_vectors.T[2][1] * 10, eigen_vectors.T[2][2] * 10,
           mutation_scale=10, arrowstyle="-|>", fc='orange', ec='orange')

ax.text(eigen_vectors.T[0][0] * 8, eigen_vectors.T[0][1] * 8, eigen_vectors.T[0][2] * 8 + 1, r'', fontsize=20)
ax.text(eigen_vectors.T[1][0] * 8, eigen_vectors.T[1][1] * 8, eigen_vectors.T[1][2] * 8, r'', fontsize=20)
ax.text(eigen_vectors.T[2][0] * 8, eigen_vectors.T[2][1] * 8, eigen_vectors.T[2][2] * 8, r'', fontsize=20)

grey_patch = mpatches.Patch(color='grey', label='Grids')
orange_patch = mpatches.Patch(color='orange', label='Orthogonal eigen vectors of A')
purple_patch = mpatches.Patch(color='darkviolet', label='Displacement vectors made by A')
plt.legend(handles=[grey_patch, orange_patch, purple_patch], fontsize=20, loc='lower right')

ax.set_xlabel(r'', fontsize=25)
ax.set_ylabel(r'', fontsize=25)
ax.set_zlabel(r'', fontsize=25)
#plt.savefig("symmetric_positive_definite_visualizaiton.png")
plt.show()


## Illustrative introductions on dimension reduction

“What is your image on dimensions?”

…That might be a cheesy question to ask readers of Data Science Blog, but most people with no scientific background would answer "One dimension is a line, two dimensions are a plane, and we live in a three-dimensional world." After that, if you ask "How about the fourth dimension?" many people would answer "Time?"

You can find books or writings about dimensions in various fields. And you can use the word "dimension" in normal conversations, in many contexts.

*In Japanese, if you say "He likes the two-dimensional world." it means he prefers anime characters to real women, as is often the case with Japanese computer science students.

The meaning of "dimension" depends on the context, but in data science the dimension of your data is usually the number of columns of your Excel sheet, that is, the number of features describing each sample.
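In code, this is simply the second entry of the data array's shape. A tiny sketch (the synthetic data here is mine, just for illustration):

```python
import numpy as np

# 100 samples ("rows of your Excel sheet"), each described by 5 features
data = np.random.default_rng(0).normal(size=(100, 5))

n_samples, n_dimensions = data.shape
print(n_dimensions)  # 5
```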

When you study data science or machine learning, you should usually start with understanding the algorithms with 2 or 3 dimensional data, and then you can apply those ideas to D dimensional data in general. But of course you cannot visualize D dimensional data anymore, and you always have to be careful of what happens when you increase the number of dimensions.

Conversely, it is also important to reduce dimensions so you can understand abstract high dimensional stuff in 2 or 3 dimensional space, which is close to our everyday sense. That means dimension reduction is one powerful way of data visualization.
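As a small preview of the ideas covered later in this series, here is a hedged sketch of linear dimension reduction with plain NumPy: project 5 dimensional data onto the two directions along which it varies the most (the eigen vectors of the covariance matrix with the largest eigen values). All names and the synthetic data are mine, not from a particular library:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(100, 5))  # 100 samples of 5 dimensional data

# center the data, then take the top-2 eigen vectors of the covariance matrix
centered = data - data.mean(axis=0)
cov = np.cov(centered, rowvar=False)
eigen_values, eigen_vectors = np.linalg.eigh(cov)
top2 = eigen_vectors[:, np.argsort(eigen_values)[::-1][:2]]

# project onto those two directions: 5 dimensions -> 2 dimensions
reduced = centered @ top2
print(reduced.shape)  # (100, 2)
```

The 2 dimensional `reduced` array can now be drawn as an ordinary scatter plot.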

In this blog series I am going to explain the meaning of dimensions themselves in the machine learning context and algorithms for dimension reduction, such as PCA, LDA, and t-SNE, with 2 or 3 dimensional visible data. Along with that, I am going to delve into the meaning of the calculations so that you can understand them in a more everyday-life sense.

#### This article series is going to be roughly divided into the contents below.

1. Curse of Dimensionality
2. PCA, LDA (to be published soon)
3. Rethinking eigen vectors (to be published soon)
4. KL expansion and subspace method (to be published soon)
5. Autoencoder as dimension reduction (to be published soon)
6. t-SNE (to be published soon)

I hope you can see that reducing dimensions is one of the fundamental approaches in data science and machine learning.