## This post summarizes a talk by Federico Monti on "Deep Geometric Matrix Completion".

#### This is the second post in a series of four. The first post discusses a talk by Xavier Bresson at the IPAM UCLA workshop: Link to first post.

IPAM recently hosted a workshop on New Deep Learning Techniques. This post is about a talk at the workshop by Federico Monti on Deep Geometric Matrix Completion.
• Deep Geometric Matrix Completion is a geometric deep learning approach to recommendation systems. The problem of recommending items to users can be formulated as a matrix completion task.

### Matrix completion as a constrained minimization problem:

• Many researchers have posed matrix completion as a constrained minimization problem. Candes et al., 2008 proposed a method that reconstructs a matrix $$X$$ as close to the original user-item matrix as possible, with an additional low-rank constraint on the matrix that acts as a regularization term and helps achieve good results. However, this method does not consider any relations between users or between items.
• Items/users with similar properties should be assigned similar scores. To incorporate this intuition into the matrix completion algorithm, Kalofolias et al., 2014 replaced the low-rank constraint with the sum of two Dirichlet energy terms, one defined on the user similarity graph and the other on the item similarity graph. Empirically, this proved to be a better regularizer and outperformed the algorithm proposed by Candes et al.
• Rao et al., 2015 improved this method by parameterizing the reconstructed matrix $$X$$ as a product of two matrices, containing representations of users and items respectively. This paper also uses two Dirichlet energy terms in its loss function. The main advantage of this algorithm over the above two is its reduced number of parameters.
• The constrained minimization approach poses two challenges:
• The user similarity graph and item similarity graph are not fully exploited to capture the relation between graph structure and scoring patterns.
• The number of parameters grows linearly with the number of users and items.
• To overcome these limitations, researchers have proposed using CNNs on graphs.
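The graph-regularized, factorized objective described above can be sketched numerically. The following is a minimal numpy illustration, not the authors' implementation: the toy path-graph Laplacians, the regularization weight `lam`, the learning rate, and the plain gradient-descent loop are all illustrative choices. It minimizes a squared fit on observed entries plus Dirichlet energy terms $$\mathrm{tr}(W^T L_r W)$$ and $$\mathrm{tr}(H^T L_c H)$$ on the two factors.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 8, 6, 2            # users, items, latent dimension (toy sizes)

# Ground-truth low-rank matrix and a mask of observed entries
M = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
mask = rng.random((m, n)) < 0.5

def path_laplacian(k):
    """Unnormalized Laplacian of a path graph (stand-in similarity graph)."""
    A = np.zeros((k, k))
    i = np.arange(k - 1)
    A[i, i + 1] = A[i + 1, i] = 1.0
    return np.diag(A.sum(1)) - A

L_r, L_c = path_laplacian(m), path_laplacian(n)   # user / item graph Laplacians
W = rng.standard_normal((m, r)) * 0.1             # user factor
H = rng.standard_normal((n, r)) * 0.1             # item factor
lam, lr = 0.1, 0.01

for _ in range(2000):
    R = mask * (W @ H.T - M)            # residual on observed entries only
    gW = R @ H + lam * (L_r @ W)        # gradient of fit + Dirichlet term on W
    gH = R.T @ W + lam * (L_c @ H)      # gradient of fit + Dirichlet term on H
    W, H = W - lr * gW, H - lr * gH

err = np.abs(mask * (W @ H.T - M)).mean()
print(f"mean abs error on observed entries: {err:.4f}")
```

The Dirichlet terms push rows of `W` (resp. `H`) that are neighbors in the graph toward similar latent vectors, which is exactly the "similar users get similar scores" intuition.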

### Graph Convolutional Neural Network (GCNN)

• A Graph Convolutional Neural Network generalizes the convolution operation to non-Euclidean/graph data. Different researchers have proposed different approaches to applying CNNs to graphs. They can be broadly categorized into three types: spectral methods [Henaff et al, Defferrard et al], spatial methods [Masci et al, Boscaini et al] and embedding methods [Maron et al]. Spectral methods perform the convolution in the Fourier domain of the signal and learn spectral filters. Spatial methods map neighboring nodes to a structure and then apply convolution by means of learned coefficients. In embedding-based methods, the non-Euclidean data is mapped to a regular domain where convolution is well-defined.
• Convolution in the spectral domain projects the input graph signal $$f$$ onto the eigenvectors of the graph Laplacian; ChebNet approximates the spectral filter with a polynomial of the Laplacian. If the graph has a specific structure (say, communities), ChebNet can find it but requires a high-order polynomial: specifically, the order of the polynomial must equal the number of communities in the graph. To overcome this problem, CayleyNet proposed a spectral zooming method based on the Cayley transform.
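ChebNet's polynomial filtering can be evaluated without any eigendecomposition, using the Chebyshev recurrence $$T_j(x) = 2xT_{j-1}(x) - T_{j-2}(x)$$ on the rescaled Laplacian. A minimal sketch, assuming a toy cycle graph and illustrative filter coefficients (neither is from the paper):

```python
import numpy as np

# Toy graph: a cycle of 6 nodes
k = 6
A = np.zeros((k, k))
for i in range(k):
    A[i, (i + 1) % k] = A[(i + 1) % k, i] = 1.0
L = np.diag(A.sum(1)) - A                  # unnormalized graph Laplacian

# Rescale the spectrum into [-1, 1], where Chebyshev polynomials live
lmax = np.linalg.eigvalsh(L).max()
L_tilde = 2.0 * L / lmax - np.eye(k)

def cheb_filter(x, theta):
    """y = sum_j theta[j] * T_j(L_tilde) @ x, via the Chebyshev recurrence."""
    T_prev, T_curr = x, L_tilde @ x        # T_0 x and T_1 x
    y = theta[0] * T_prev + theta[1] * T_curr
    for t in theta[2:]:
        T_prev, T_curr = T_curr, 2.0 * L_tilde @ T_curr - T_prev
        y = y + t * T_curr
    return y

x = np.arange(k, dtype=float)              # a graph signal, one value per node
y = cheb_filter(x, theta=[0.5, 0.3, 0.2])  # an order-2 polynomial filter
print(y)
```

An order-$$K$$ filter only mixes values within $$K$$ hops of each node, which is why capturing $$K$$ communities needs an order-$$K$$ polynomial, and why the recurrence costs only $$K$$ sparse matrix-vector products.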
### GCNN for Matrix Completion
• The above two algorithms apply convolution on a single graph only. For the matrix completion problem, however, we need to consider signals from two graphs: the user similarity graph and the item similarity graph. To deal with signals from multiple domains, the Geometric Matrix Completion paper proposes a multi-graph convolutional network (MGCNN).
• One approach to dealing with the input matrix $$X$$ defined over two domains is to compute its Fourier transform, given by $$\hat{X}=\Phi_r^TX\Phi_c$$. That is, we obtain the multi-graph Fourier transform of $$X$$ by pre-multiplying and post-multiplying with the eigenvectors of the Laplacians of the row graph and column graph respectively.
• Convolution in the spectral domain is then defined as: apply the multi-graph Fourier transform, multiply element-wise, and apply the inverse graph Fourier transform by pre-multiplying and post-multiplying with the Laplacian eigenvectors, i.e. $$X*Y=\Phi_r(\hat{X}\circ \hat{Y})\Phi_c^T$$. However, this convolution has a large computational cost of $$O(mn)$$, i.e. on the order of the product of the number of users and items.
• To overcome this limitation, MGCNN represents the matrix $$X$$ as a product of two matrices, an approach proposed earlier by Rao et al., 2015. As a result, each of the two matrices is defined on a single domain (one on the item graph, the other on the user graph), and we can use existing techniques for convolution on a single graph. This reduces the complexity to $$O(m+n)$$.
• The matrix completion problem is then posed as an information diffusion task, so the final architecture is a recurrent version of the multi-graph CNN. In each iteration, the MGCNN predicts, for every user-item pair, features describing the behaviour of that pair in the matrix, and an RNN combines this information with its current state to update the values of the matrix.
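The multi-graph Fourier transform and its inverse can be verified numerically: since the Laplacian eigenvector matrices are orthogonal, pre- and post-multiplying recovers $$X$$ exactly. The graphs and the "filter" below are toy choices for illustration, not taken from the paper.

```python
import numpy as np

def path_laplacian(k):
    """Unnormalized Laplacian of a path graph (toy similarity graph)."""
    A = np.zeros((k, k))
    i = np.arange(k - 1)
    A[i, i + 1] = A[i + 1, i] = 1.0
    return np.diag(A.sum(1)) - A

m, n = 5, 4                                 # users, items
Lr, Lc = path_laplacian(m), path_laplacian(n)
_, Phi_r = np.linalg.eigh(Lr)               # eigenvectors of row-graph Laplacian
_, Phi_c = np.linalg.eigh(Lc)               # eigenvectors of column-graph Laplacian

X = np.random.default_rng(1).standard_normal((m, n))

X_hat = Phi_r.T @ X @ Phi_c                 # multi-graph Fourier transform
X_back = Phi_r @ X_hat @ Phi_c.T            # inverse transform
print(np.allclose(X, X_back))               # prints True: the transform is invertible

# Spectral filtering: scale each (row-frequency, column-frequency)
# coefficient element-wise, then transform back
Y_hat = np.ones((m, n))
Y_hat[2:, :] = 0.0                          # a toy low-pass filter response
X_filt = Phi_r @ (X_hat * Y_hat) @ Phi_c.T
```

Note that forming `X_hat` already touches all $$m \times n$$ entries, which is the $$O(mn)$$ cost that motivates MGCNN's factorized formulation.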
### Summary
• The user-item recommendation problem can be defined as a matrix completion problem.
• ChebNet and CayleyNet define graph convolution in the spectral domain, but only for data from a single domain. The matrix completion problem, however, deals with data from two domains: users and items.
• The multi-graph convolutional neural network (MGCNN) is designed to work with data from multiple domains.
• MGCNN poses matrix completion as an information diffusion problem and uses a recurrent multi-graph convolutional neural network to solve it.

### Paper Summary #2: Convolutional Neural Network on Graphs with Fast Localized Spectral Filtering (NIPS 2016)

This post summarizes the NIPS 2016 paper by Defferrard et al, "Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering". Convolutional Neural Networks have accomplished many breakthroughs, ranging from classifying the million images of ImageNet to very complex tasks like object tracking in surveillance videos. These advancements are not restricted to image data: CNNs (and deep learning in general) have achieved state-of-the-art results on text and speech applications as well.

CNNs have proved to be a very powerful tool for solving many problems in the image, text and speech domains. If that is the case, the question we want to ask here is: can we use CNNs to solve problems on graphs as well?
If we take a closer look at the data domains we have been dealing with, we realize that this data has a specific structure: images are nothing but a 2-D grid of pixels, text is a stream of words and can be thought of as a…

### Paper Summary #1: Learning Convolutional Neural Network for Graphs (ICML2016)

This post summarizes the PATCHY-SAN algorithm proposed by Niepert et al in their ICML 2016 paper "Learning Convolutional Neural Networks for Graphs".

In the literature, the convolution operation on graphs is defined in three domains: the spectral domain [Defferrard et al, Kipf et al], the spatial domain [Masci et al, Boscaini et al] and the embedding domain [Maron et al]. The method proposed by Niepert et al falls into the category of spatial-domain algorithms.

The broad aim of the paper is to learn a function over a set of graphs that gives us a general representation of them. These representations can then be used for tasks such as graph classification.

Challenges in applying convolution directly on graphs:
There are two main problems that need to be addressed before we can apply convolution on graphs.

Challenge 1: Images can be thought of as a regular graph, where each pixel is denoted by a node and two adjacent pixels …

### CNN for Graph: Notes on IPAM UCLA talk- Part I

This post is about my understanding of Xavier Bresson's talk at IPAM UCLA on "Convolutional Neural Networks on Graphs".

This post has 3 sections: the first talks about how convolution and pooling operations are defined on graphs, the second about the different graph CNN architectures proposed so far, and the last about the Graph Neural Network, a framework for dealing with graphs of arbitrary shape.

The Institute for Pure & Applied Mathematics (IPAM), UCLA recently organized a workshop on New Deep Learning Techniques with many great talks on emerging methods in deep learning. The concept of applying convolution on non-Euclidean data is interesting in its own way. Xavier Bresson presented a nice overview of it in his talk at the workshop. In this post, I have discussed some of the concepts I found worth noting down.

CNNs are successful in solving many problems in computer vision, NLP and speech that seemed impossible to solve a few years back. The …