Call for Papers: Representation Learning Workshop (RL 2014) Joint With ECML/PKDD 2014

News

2014-03-27: The web page for the workshop is now online.

Important dates

Submission deadlines

  • Submission deadline:
    June 20, 2014
  • Acceptance notification:
    July 11, 2014
  • Final paper submission:
    July 25, 2014
  • Workshop date:
    September 15, 2014

Objectives

Representation learning has developed at the crossroads of several disciplines and application domains. It has recently enjoyed enormous success in learning useful representations of data in areas such as vision, speech, audio, and natural language processing. It has grown into a research field in its own right, with several successful workshops at major machine learning conferences, dedicated sessions at those conferences (e.g., three sessions on deep learning at ICML 2013, plus related sessions on topics such as tensors and compressed sensing), and the recent International Conference on Learning Representations (ICLR), whose first edition was held in 2013.

We take a broad view of this field and aim to attract researchers concerned with statistical learning of representations, including matrix- and tensor-based latent factor models, probabilistic latent models, metric learning, graphical models, and recent techniques such as deep learning, feature learning, compositional models, and non-linear structured prediction. The focus of this workshop is on applying representation learning approaches, including deep learning, feature learning, metric learning, algebraic and probabilistic latent models, dictionary learning, and other compositional models, to real-world data mining problems. Papers on new models and learning algorithms that combine aspects of the two fields of representation learning and data mining are especially welcome. This one-day workshop will include a mixture of invited talks and contributed presentations covering a broad range of subjects pertinent to the workshop theme. Besides classical paper presentations, the call also invites demonstrations of applications on these topics. We believe this workshop will help demonstrate the power of representation learning on semantic data.

Topics of Interest

A non-exhaustive list of relevant topics:
– unsupervised representation learning and its applications
– supervised representation learning and its applications
– metric learning and kernel learning and their applications
– hierarchical models for data mining
– optimization for representation learning
– other related applications based on representation learning.

We also encourage submissions which relate research results from other areas to the workshop topics.


Workshop Organizers


Program Committee

  • Thierry Artieres, Université Pierre et Marie Curie, France
  • Samy Bengio, Google, USA
  • Yoshua Bengio, University of Montreal, Canada
  • Antoine Bordes, Facebook NY, USA
  • Leon Bottou, MSR NY, USA
  • Joachim Buhmann, ETH Zurich, Switzerland
  • Zheng Chen, Microsoft, China
  • Ronan Collobert, IDIAP, Switzerland
  • Patrick Fan, Virginia Tech, USA
  • Patrick Gallinari, Université Pierre et Marie Curie, France
  • Huiji Gao, Arizona State University, USA
  • Marco Gori, University of Siena, Italy
  • Sheng Gao, Beijing University of Posts and Telecommunications, China
  • Jun He, Renmin University, China
  • Stefanos Kollias, NTUA, Greece
  • Hugo Larochelle, University of Sherbrooke, Canada
  • Zhanyu Ma, Beijing University of Posts and Telecommunications, China
  • Yann LeCun, NYU Courant Institute and Facebook, USA
  • Nicolas Le Roux, Criteo, France
  • Dou Shen, Baidu, China
  • Alessandro Sperduti, University of Padova, Italy
  • Shengrui Wang, University of Sherbrooke, Canada
  • Jason Weston, Google NY, USA
  • Jun Yan, Microsoft, China
  • Guirong Xue, Alibaba, China
  • Shuicheng Yan, National University of Singapore, Singapore
  • Kai Yu, Baidu, China
  • Benyu Zhang, Google, USA

Program

  • To Be Announced.

Submission of Papers

We invite two types of submissions for this workshop:

  • Paper submission

We welcome submissions of unpublished research results. Papers should be between 8 and 12 pages, though additional material may be included in a supplementary section. Papers should be typeset in the standard ECML/PKDD format. Submissions need not be anonymized; all submissions will be peer reviewed (with reviewers remaining anonymous) and evaluated on the basis of their technical content. Template files can be downloaded from the Springer LNCS site.

  • Demo submission

A one-page description of the demonstration, in free format, is required.

We recommend following the ECML/PKDD format guidelines (Springer LNCS), as this will be the required format for accepted papers.
