Representation learning has recently enjoyed enormous success in learning useful representations of semantic data from areas such as vision, speech, and natural language processing. It has developed at the crossroads of different disciplines and application domains, and has matured into a research field in its own right, with several successful workshops and sessions at major machine learning and data mining conferences, as well as the recent ICLR (International Conference on Learning Representations), whose first edition was held in 2013. Representation learning for graphs has lately attracted growing attention from researchers and communities. We therefore take a specific view of this field and call upon researchers concerned with statistical learning of representations, including matrix- and tensor-based latent factor models, probabilistic latent models, metric learning, and graphical models, as well as recent techniques such as deep learning, feature learning, and compositional models, and issues concerning non-linear structured prediction models.
The first Representation Learning for Semantic Data (ReLSD) Workshop was held co-located with ECML/PKDD 2014 (http://conference.bupt.edu.cn/rl2014/), and the second was held co-located with ICDM 2015 (http://conference.bupt.edu.cn/rl2015/); both attracted strong attention and participation from researchers and companies. This year, we seek to hold the third workshop, focusing on graph representation learning, on a larger platform: IJCAI 2017. The workshop will concentrate on representation learning approaches applied to graph-structured data emerging from the real world. New models and learning algorithms based on representation learning that address the challenges of graph-structured data mining are encouraged.
The focus of this workshop will be on representation learning approaches, including deep learning, feature learning, metric learning, algebraic and probabilistic latent models, dictionary learning, and other compositional models, applied to problems in graph-structured data mining. Papers on new models and learning algorithms that combine aspects of the two fields of representation learning and graph data mining are especially welcome. This one-day workshop will include a mixture of invited talks and contributed presentations covering a broad range of subjects pertinent to the workshop theme. Besides classical paper presentations, the call also includes demonstrations of applications on these topics. We believe this workshop will accelerate the process of revealing the power of representation learning operating on graph-structured data.
A non-exhaustive list of relevant topics:
- unsupervised representation learning for graph-structured data
- supervised representation learning for graph-structured data
- metric learning and kernel learning for graph-structured data
- hierarchical models for graph-structured data mining
- optimization for representation learning
- applications in graph-structured data mining
- other related applications
We also encourage submissions that relate research results from other areas to the workshop topics.
We invite two types of submissions for this workshop:
Research papers: we welcome submissions of unpublished research results. Papers should be limited to a maximum of 6 pages and typeset using the IJCAI proceedings manuscript style; submissions need not be anonymized. All submissions will be peer reviewed anonymously and evaluated on the basis of their technical quality. Submissions must be made through the EasyChair system.
Demonstrations: a one-page description of the demonstration in free format is encouraged.