


[Academic Lecture] Graduate "Lingxi Academic Forum" No. 193: Notice of a Talk by Arthur Gretton

Published: July 4, 2017. Source: Graduate Work Department.

To all faculty and students:

云顶集团 will host a Graduate "Lingxi Academic Forum" talk by Arthur Gretton on July 5, 2017. Details are as follows:

1. About the Talk

Speaker: Arthur Gretton

Time: 9:00 a.m., Wednesday, July 5, 2017

Venue: Lecture Hall, Building 89, Chang'an Campus

Topic: Learning Interpretable Features to Compare Distributions

Abstract: I will present adaptive two-sample tests with optimized testing power and interpretable features. These are based on the maximum mean discrepancy (MMD), a difference in the expectations of features under the two distributions being tested. Useful features are defined as those which contribute a large divergence between distributions with high confidence. These interpretable tests can further be used, in a goodness-of-fit setting, to benchmark and troubleshoot generative models. For instance, we may detect subtle differences between the distribution of model outputs and that of real hand-written digits which humans are unable to find (for instance, small imbalances in the proportions of certain digits, or minor distortions that are implausible in normal handwriting).
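To make the MMD concrete: the squared MMD compares the average kernel similarity within each sample to the average similarity across samples. The sketch below is a minimal, quadratic-time biased estimator with a Gaussian kernel; the function names, bandwidth choice, and toy data are illustrative, not taken from the talk, which concerns learning and optimizing such features rather than this basic form.

```python
import numpy as np

def rbf_kernel(x, y, sigma=1.0):
    # Gaussian (RBF) kernel matrix between the rows of x and the rows of y.
    sq_dists = (np.sum(x**2, axis=1)[:, None]
                + np.sum(y**2, axis=1)[None, :]
                - 2.0 * x @ y.T)
    return np.exp(-sq_dists / (2.0 * sigma**2))

def mmd2_biased(x, y, sigma=1.0):
    # Biased quadratic-time estimate of MMD^2:
    #   mean k(x, x') + mean k(y, y') - 2 * mean k(x, y)
    # This is the squared RKHS distance between empirical mean embeddings,
    # so it is always non-negative.
    return (rbf_kernel(x, x, sigma).mean()
            + rbf_kernel(y, y, sigma).mean()
            - 2.0 * rbf_kernel(x, y, sigma).mean())

rng = np.random.default_rng(0)
# Same distribution: MMD^2 should be close to zero.
same = mmd2_biased(rng.normal(size=(200, 2)), rng.normal(size=(200, 2)))
# Shifted distribution: MMD^2 should be clearly larger.
diff = mmd2_biased(rng.normal(size=(200, 2)),
                   rng.normal(loc=1.0, size=(200, 2)))
```

A two-sample test then compares such a statistic against its null distribution (e.g. via permutation); the talk's contribution is choosing kernel features that maximize test power while remaining interpretable.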

2. Faculty and students from all schools are welcome to attend. Please switch mobile phones off or to silent mode during the talk.

Graduate Work Department of the Party Committee

School of Electronics and Information

June 30, 2017

About the Speaker

Arthur Gretton is an Associate Professor at the Gatsby Computational Neuroscience Unit, part of the Centre for Computational Statistics and Machine Learning at UCL. His research focuses on using kernel methods to reveal properties and relations in data. A first application is measuring distances between probability distributions. These distances can be used to determine the strength of dependence, for example in measuring how strongly two bodies of text in different languages are related; to test for similarities between two datasets, which can be used in attribute matching for databases (that is, automatically finding which fields of two databases correspond); and to test for conditional dependence, which is useful for detecting redundant variables that carry no additional predictive information given the variables already observed. He is also working on applications of kernel methods to inference in graphical models, where the relations between variables are learned directly from training data.