Academic Lecture: Learning representations for semantic relational data

Academic Lecture Announcement

Title: Learning representations for semantic relational data

Speaker: Professor Patrick Gallinari

Time: Friday, July 11, 2014, 15:00-17:00

Venue: Conference Room 811, Teaching Building 3

Host: Sheng Gao, Pattern Recognition Lab

Abstract: Learning representations for complex relational data has emerged at the crossroads of different research topics in machine learning. The motivation for this work is often driven by the applications themselves and by the nature of the data, which are often complex (multimodal, heterogeneous, dynamic) and multi-relational (e.g., biology, social networks). One possible approach is to map these data onto one or more continuous latent spaces in order to obtain representations on which classical machine learning methods can be applied. In recent years, several lines of research have developed these ideas, sometimes independently, and they are now represented in the "Learning Representations" community. The tools deployed rely on statistical modeling, on linear algebra with matrix or tensor factorization, or, more recently, on neural networks. The talk will give a brief overview of some of these methods and show applications in the field of semantic data analysis and social networks.
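To make the latent-space idea above concrete, here is a minimal illustrative sketch (it is not the speaker's method; the toy relation matrix, the chosen rank, and the cosine scoring function are assumptions made purely for illustration). It embeds the entities of a single binary relation into a low-dimensional continuous space via a truncated SVD and then reuses those vectors with a classical similarity score.

```python
# Illustrative sketch only (not the speaker's method): embed the entities of a
# single binary relation into a continuous latent space via truncated SVD, then
# reuse the resulting vectors with a classical similarity-based score.
import numpy as np

rng = np.random.default_rng(0)

# Toy relational data: R[i, j] = 1 if entity i is related to entity j
# (e.g., a follower link in a social network). Sizes are arbitrary assumptions.
n_entities, rank = 50, 5
R = (rng.random((n_entities, n_entities)) < 0.1).astype(float)

# Truncated SVD: R ~= U_k diag(s_k) V_k^T. The rows of U_k * sqrt(s_k) serve as
# rank-dimensional latent representations of the entities.
U, s, Vt = np.linalg.svd(R, full_matrices=False)
entity_embeddings = U[:, :rank] * np.sqrt(s[:rank])   # shape: (n_entities, rank)

def link_score(i, j):
    """Cosine similarity between two entity embeddings, usable as a
    simple link-prediction score by any downstream classical method."""
    u, v = entity_embeddings[i], entity_embeddings[j]
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

print(link_score(0, 1))
```

Tensor factorization and neural embedding models extend the same recipe to multiple relation types; the sketch shows only the single-relation matrix case.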

Speaker Biography: Patrick Gallinari is a professor in Computer Science at Université Pierre et Marie Curie (UPMC), France. His research domain is primarily statistical machine learning, with applications to domains involving semantic data, such as information retrieval. His recent work has focused on statistical modeling of complex relational data described by sequences, trees or graphs. Before that, he was a pioneer of neural networks in France, participating in the development of this domain in Europe. He was also director of the computer science laboratory at UPMC for about 10 years.

 

Call for Papers: Representation Learning Workshop (RL 2014) Joint With ECML/PKDD 2014

News

2014-03-27: The web page for the workshop is now online.

Important dates

Submission deadlines

  • Submission deadline:
    June 20, 2014
  • Acceptance notification:
    July 11, 2014
  • Final paper submission:
    July 25, 2014
  • Workshop date:
    September 15, 2014

Objectives

Representation learning has developed at the crossroads of different disciplines and application domains. It has recently enjoyed enormous success in learning useful representations of data from application areas such as vision, speech, audio, and natural language processing. It has grown into a research field in its own right, with several successful workshops at major machine learning conferences, dedicated sessions at the main conferences (e.g., three sessions on deep learning at ICML 2013, plus related sessions on topics such as tensors and compressed sensing), and the recent International Conference on Learning Representations (ICLR), whose first edition was held in 2013.

We take a broad view of this field and want to attract researchers concerned with statistical learning of representations, including matrix- and tensor-based latent factor models, probabilistic latent models, metric learning, graphical models, as well as recent techniques such as deep learning, feature learning, compositional models, and issues arising in non-linear structured prediction. The focus of this workshop will be on applying representation learning approaches, including deep learning, feature learning, metric learning, algebraic and probabilistic latent models, dictionary learning and other compositional models, to problems in real-world data mining. Papers on new models and learning algorithms that combine aspects of the two fields of representation learning and data mining are especially welcome. This one-day workshop will include a mixture of invited talks and contributed presentations covering a broad range of subjects pertinent to the workshop theme. Besides classical paper presentations, the call also includes demonstrations of applications on these topics. We believe this workshop will help demonstrate the power of representation learning on semantic data.

Topics of Interest

A non-exhaustive list of relevant topics:
– unsupervised representation learning and its applications
– supervised representation learning and its applications
– metric learning and kernel learning and their applications
– hierarchical models for data mining
– optimization for representation learning
– other related applications of representation learning

We also encourage submissions which relate research results from other areas to the workshop topics.


Workshop Organizers


Program Committee

  • Thierry Artières, Université Pierre et Marie Curie, France
  • Samy Bengio, Google, USA
  • Yoshua Bengio, University of Montreal, Canada
  • Antoine Bordes, Facebook NY, USA
  • Leon Bottou, MSR NY, USA
  • Joachim Buhmann, ETH Zurich, Switzerland
  • Zheng Chen, Microsoft, China
  • Ronan Collobert, IDIAP, Switzerland
  • Patrick Fan, Virginia Tech, USA
  • Patrick Gallinari, Université Pierre et Marie Curie, France
  • Huiji Gao, Arizona State University, USA
  • Marco Gori, University of Siena, Italy
  • Sheng Gao, Beijing University of Posts and Telecommunications, China
  • Jun He, Renmin University, China
  • Stefanos Kollias, NTUA, Greece
  • Hugo Larochelle, University of Sherbrooke, Canada
  • Zhanyu Ma, Beijing University of Posts and Telecommunications, China
  • Yann LeCun, NYU Courant Institute and Facebook, USA
  • Nicolas Le Roux, Criteo, France
  • Dou Shen, Baidu, China
  • Alessandro Sperduti, University of Padova, Italy
  • Shengrui Wang, University of Sherbrooke, Canada
  • Jason Weston, Google NY, USA
  • Jun Yan, Microsoft, China
  • Guirong Xue, Alibaba, China
  • Shuicheng Yan, National University of Singapore, Singapore
  • Kai Yu, Baidu, China
  • Benyu Zhang, Google, USA

Program

  • To Be Announced.

Submission of Papers

We invite two types of submissions for this workshop:

  • Paper submission

We welcome submissions of unpublished research results. Paper length should be between 8 and 12 pages, though additional material can be put in a supplemental section. Papers should be typeset using the standard ECML/PKDD format; submissions do not need to be anonymized, but all submissions will be reviewed anonymously and evaluated on the basis of their technical content. Template files can be downloaded from the LNCS site.

  • Demo submission

A one-page description of the demonstration, in free format, is required.

We recommend following the format guidelines of ECML/PKDD (Springer LNCS), as this will be the required format for accepted papers.

Science Publishes Paper by Lab Members Weihong Deng, Jun Guo, Jiani Hu, Honggang Zhang et al.


On January 25, 2008, the American journal Science published an article by British scholars R. Jenkins and A. M. Burton, which proposed a face recognition algorithm based on image averaging. On a test of 500 photographs of 25 world celebrities, the method achieved a striking 100% recognition rate, and the authors proposed applying it to the recognition of passport and other ID photographs.
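For readers unfamiliar with the image-averaging approach described above, the following schematic sketch (synthetic arrays instead of real face photos; the alignment, template construction, and correlation matching are assumptions, not the authors' exact pipeline) shows the basic recipe: average several aligned photos of each known person into a single gallery template, then assign a probe image to the identity whose template it correlates with best.

```python
# Schematic sketch of image-averaging face recognition (assumed details, not
# the Jenkins & Burton pipeline): build one average template per identity and
# match a probe by normalized correlation against the templates.
import numpy as np

rng = np.random.default_rng(1)

def make_person(base, n_photos, noise=0.3):
    """Simulate n aligned photos of one person as noisy copies of a base image."""
    return [base + noise * rng.standard_normal(base.shape) for _ in range(n_photos)]

h, w = 32, 32                                             # pretend 32x32 aligned crops
bases = [rng.standard_normal((h, w)) for _ in range(3)]   # three identities
gallery = [make_person(b, n_photos=20) for b in bases]

# Image averaging: one template per identity.
templates = [np.mean(photos, axis=0) for photos in gallery]

def identify(probe):
    """Return the index of the template most correlated with the probe image."""
    scores = []
    for t in templates:
        a, b = probe.ravel() - probe.mean(), t.ravel() - t.mean()
        scores.append(float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)))
    return int(np.argmax(scores))

probe = bases[1] + 0.3 * rng.standard_normal((h, w))      # unseen photo of person 1
print(identify(probe))                                     # expected: 1
```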

Weihong Deng of the BUPT Pattern Recognition Lab astutely realized that the article had problems. After discussion and analysis with his advisor Jun Guo and others, he identified the flaw in the article and quickly wrote a technical comment. After querying the original authors, Science confirmed the points made by Weihong Deng, Jun Guo and colleagues, and published the comment in its August 15, 2008 issue, pointing out the errors in the original article and clarifying several basic issues in face recognition.

This is reportedly the first paper published in Science by faculty and students of Beijing University of Posts and Telecommunications, a historic breakthrough.