
Generalization Bounds for Graph Embedding Using Negative Sampling: Linear Vs Hyperbolic

NeurIPS 2021

Affiliations: University of Greenwich; Hong Kong Polytechnic University; National Institute of Informatics

Abstract
Graph embedding, which represents real-world entities in a mathematical space, has enabled numerous applications such as the analysis of natural language, social networks, biochemical networks, and knowledge bases. It has been experimentally shown that graph embedding in hyperbolic space can represent hierarchical tree-like data more effectively than embedding in linear space, owing to hyperbolic space's exponential growth property. However, because the theoretical comparison has been limited to ideal noiseless settings, it has not been analyzed whether this property can also worsen the generalization error on practical data. In this paper, we provide a generalization error bound applicable to graph embedding in both linear and hyperbolic spaces under the various negative sampling settings that appear in graph embedding. Our bound shows that the error grows polynomially with the embedding space's radius in linear space but exponentially in hyperbolic space, which implies that hyperbolic space's exponential growth property worsens the error. Using our bound, we clarify the data-size condition under which graph embedding in hyperbolic space can represent a tree better than embedding in Euclidean space, by discussing the bias-variance trade-off. Our bound also shows that an imbalanced data distribution, which often appears in graph embedding, can worsen the error.
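The bound concerns embeddings trained with a negative-sampling objective, where each observed (positive) pair is contrasted against sampled non-edges and the choice of distance function determines whether the model lives in linear or hyperbolic space. As a rough illustration, the sketch below implements the Poincaré-ball distance, its Euclidean counterpart, and a generic softmax-style negative-sampling loss. It is a minimal sketch of the standard setup the abstract refers to, not the authors' code; all function and variable names are assumptions for illustration.

```python
# Minimal sketch (assumed, not from the paper): Poincare-ball distance, its
# Euclidean counterpart, and a softmax-style negative-sampling loss.
import numpy as np

def poincare_distance(u, v, eps=1e-9):
    """Hyperbolic distance between two points strictly inside the unit ball."""
    sq_dist = np.sum((u - v) ** 2)
    denom = (1.0 - np.sum(u ** 2)) * (1.0 - np.sum(v ** 2))
    return np.arccosh(1.0 + 2.0 * sq_dist / (denom + eps))

def euclidean_distance(u, v):
    """Linear-space counterpart used for the comparison in the abstract."""
    return np.linalg.norm(u - v)

def negative_sampling_loss(anchor, positive, negatives, dist=poincare_distance):
    """-log softmax of the positive pair against sampled negatives,
    where a smaller distance means a higher score (logit = -distance)."""
    d_pos = dist(anchor, positive)
    d_negs = np.array([dist(anchor, n) for n in negatives])
    logits = np.concatenate(([-d_pos], -d_negs))
    return d_pos + np.log(np.sum(np.exp(logits)))

# Toy usage with points near the origin of the Poincare ball.
anchor = np.array([0.10, 0.20])
positive = np.array([0.15, 0.25])
negatives = [np.array([-0.40, 0.30]), np.array([0.50, -0.10])]
print(negative_sampling_loss(anchor, positive, negatives))
print(negative_sampling_loss(anchor, positive, negatives, dist=euclidean_distance))
```

The exponential growth of hyperbolic distance that makes it effective for trees is the same property the paper's bound identifies as the source of the exponential dependence on the embedding radius.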
Keywords
Graph embedding, Poincaré embedding, hyperbolic space, negative sampling, generalization error, statistical learning theory, Rademacher complexity