**Introduction**

Suppose we want to predict user A's rating of book X, but all we have are A's ratings for some other books and user B's rating of book X. How can we predict A's rating of book X? The easiest way is simply to forecast the average rating, but such a forecast is rarely accurate.

SVD (Singular Value Decomposition) starts from the existing ratings, analyzes how much each rater likes each underlying factor, and derives the predicted ratings from that analysis. In the example above, a book has many factors: the cover, the author, the story, the price, and so on.

SVD algorithms model the ratings abstractly as a rank matrix \(R\) with \(n\) rows and \(m\) columns, where \(R[u][i]\) is user \(u\)'s rating of object \(i\). It can be decomposed into a user factors matrix \(P\) with \(n\) rows and \(f\) columns (\(P[u][k]\) is user \(u\)'s affinity for factor \(k\)) and an object factors matrix \(Q\) with \(m\) rows and \(f\) columns (\(Q[i][k]\) is how strongly object \(i\) exhibits factor \(k\)). This is expressed by the formula:

\[R=PQ^\mathrm{T}\]

Here is an example of decomposing a rank matrix into two factor matrices. A larger entry in \(P\) means the user prefers that factor more; a larger entry in \(Q\) means the book exhibits that factor more strongly. Once the decomposition is known, we can predict user A's missing rating of book Z.

| Rank Matrix R | Book X | Book Y | Book Z |
|---------------|--------|--------|--------|
| User A        | 6      | 3      | ?      |
| User B        | 3      | 2      | 6      |

| User Factors Matrix P | Computer | Literature |
|-----------------------|----------|------------|
| User A                | 1        | 0.2        |
| User B                | 0.3      | 1          |

| Object Factors Matrix Q | Computer | Literature |
|-------------------------|----------|------------|
| Book X                  | 6        | 0          |
| Book Y                  | 3        | 3          |
| Book Z                  | 0        | 6          |
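As a concrete check, the factorization above can be multiplied back out in a few lines of plain Python. This is just a sketch using the values from the tables; the `predict` helper is ours, not part of any library:

```python
# Factor matrices from the tables above; factors are (Computer, Literature).
P = {"A": [1.0, 0.2], "B": [0.3, 1.0]}                    # user factors
Q = {"X": [6.0, 0.0], "Y": [3.0, 3.0], "Z": [0.0, 6.0]}   # book factors

def predict(user, book):
    # R[u][i] is approximated by the inner product p_u * q_i^T.
    return sum(p * q for p, q in zip(P[user], Q[book]))

print(predict("A", "X"))            # reproduces the known rating: 6.0
print(round(predict("A", "Z"), 2))  # fills the missing entry: 1.2
```

Note that the reconstruction is only approximate for the other known entries (user B's rating of book X comes out as 1.8 rather than 3, for instance); a real decomposition minimizes this error rather than eliminating it.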

In reality, a rating reflects more than how much the user likes the book: it is also affected by how strict the rater is and by the ratings that already exist. Some people give a high rating simply because the book already has a high rating. Since how much the user likes the book is already captured by the factors, we add two new bias terms to record these other effects and improve the accuracy of the model. The improved formula is:

\[R=OverallMean+biasU+biasI+PQ^\mathrm{T}\]

\(OverallMean\) is the average rating over all books, \(biasU\) is the deviation of the user's ratings from \(OverallMean\), and \(biasI\) is the deviation of the book's ratings from \(OverallMean\). The meanings of \(P\) and \(Q\) are unchanged; every term except \(OverallMean\), which is a scalar, has one entry per user or per book.

After the decomposition, suppose we want to predict user \(u\)'s rating of book \(i\):

\[\hat{r}_{u,i}=OverallMean+b_u+b_i+p_uq_i^\mathrm{T}\]
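A minimal sketch of this prediction rule in plain Python; the numeric values below are invented for illustration, not taken from the earlier example:

```python
def predict(overall_mean, b_u, b_i, p_u, q_i):
    # r_hat = OverallMean + b_u + b_i + p_u * q_i^T
    return overall_mean + b_u + b_i + sum(p * q for p, q in zip(p_u, q_i))

# A strict user (bias -0.3) rating a well-liked book (bias +0.5)
# against a global average of 3.6; the factor vectors are made up.
r_hat = predict(3.6, -0.3, 0.5, [1.0, 0.2], [2.0, 1.0])
print(round(r_hat, 1))  # 6.0
```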

**SVD Implementation**

The two factor matrices are obtained by learning. SVD learns every parameter except \(OverallMean\) by stochastic gradient descent. The learning process can be summarized as follows: initialize each parameter, use the current parameters to make predictions, compare the predictions with the known ratings, and adjust each parameter according to the comparison. The parameters are adjusted so that the following objective is minimized:

\[\sum_{(u,i)\in\alpha}\left[(r_{u,i}-OverallMean-b_u-b_i-p_uq_i^\mathrm{T})^2+\lambda(b_u^2+b_i^2+\lVert p_u\rVert^2+\lVert q_i\rVert^2)\right]\]

\(\alpha\) is the set of all training samples. The first term is the squared deviation between the current prediction and the actual rating; the second term is a regularization penalty that prevents overfitting.
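In practice, each SGD step predicts one known rating, takes the error \(e=r_{u,i}-\hat{r}_{u,i}\), and moves every parameter a small step along its negative gradient. A self-contained sketch in plain Python; the function name, learning rate, regularization constant, and initialization scheme are our own illustrative choices:

```python
import random

def train_svd(ratings, n_users, n_items, f=2, lr=0.01, reg=0.02, epochs=200):
    """ratings: list of (user, item, rating) training samples (the set alpha)."""
    overall_mean = sum(r for _, _, r in ratings) / len(ratings)
    b_u = [0.0] * n_users                     # user biases
    b_i = [0.0] * n_items                     # item biases
    rng = random.Random(0)
    P = [[rng.uniform(-0.1, 0.1) for _ in range(f)] for _ in range(n_users)]
    Q = [[rng.uniform(-0.1, 0.1) for _ in range(f)] for _ in range(n_items)]
    for _ in range(epochs):
        for u, i, r in ratings:
            pred = overall_mean + b_u[u] + b_i[i] + \
                sum(p * q for p, q in zip(P[u], Q[i]))
            e = r - pred                      # prediction error
            # Gradient steps; the reg terms come from the lambda penalty.
            b_u[u] += lr * (e - reg * b_u[u])
            b_i[i] += lr * (e - reg * b_i[i])
            for k in range(f):
                p_uk, q_ik = P[u][k], Q[i][k]
                P[u][k] += lr * (e * q_ik - reg * p_uk)
                Q[i][k] += lr * (e * p_uk - reg * q_ik)
    return overall_mean, b_u, b_i, P, Q
```

Each update is the negative gradient of one summand of the objective: differentiating with respect to \(b_u\), for example, gives \(-2e+2\lambda b_u\), hence the \(e-\lambda b_u\) step, with the constant 2 absorbed into the learning rate.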

That's the main ideas of SVD.
