Poster at the Abcam Symposium 2016

On 3rd October 2016 I’ll present a poster on “Deep siamese neural networks for prediction of long-range interactions in chromatin” at the Abcam Symposium 2016, held at the MaRS Auditorium in Toronto.
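
For readers curious about the architecture in the title: a siamese network runs both members of a candidate pair through the same weights, so the two genomic regions are embedded in a shared space before being scored. Below is a minimal numpy sketch of the idea only; the layer sizes, feature encoding, and scoring function are illustrative assumptions, not the actual model from the poster.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions only -- not the poster's actual model.
n_features, n_hidden, n_embed = 100, 64, 32

# One set of weights, shared by both branches (the "siamese" part).
W1 = rng.normal(scale=0.1, size=(n_features, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.1, size=(n_hidden, n_embed))
b2 = np.zeros(n_embed)

def tower(x):
    """Map one genomic region's feature vector to an embedding."""
    h = np.maximum(0, x @ W1 + b1)   # ReLU hidden layer
    return h @ W2 + b2

def interaction_score(region_a, region_b):
    """Score a candidate long-range interaction between two regions.

    Both regions go through the *same* tower, so the learned
    representation is identical for either side of the pair.
    """
    za, zb = tower(region_a), tower(region_b)
    # Squared Euclidean distance mapped to a probability-like score.
    return 1.0 / (1.0 + np.sum((za - zb) ** 2))

a = rng.normal(size=n_features)  # features of one chromatin region
b = rng.normal(size=n_features)  # features of another region
print(interaction_score(a, b))
```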


My poster at NIPS 2014

This year I will again attend the NIPS 2014 international conference, where I will present a poster entitled “Deep Autoencoder Neural Networks for Prediction of Biomolecular Annotations” at two workshops: MLCB 2014 – Workshop on Machine Learning in Computational Biology, and MLCDA 2014 – Workshop on Machine Learning for Clinical Data Analysis, Healthcare and Genomics.
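
For context on the title: an autoencoder learns to compress and then reconstruct its input, and when the input is a gene’s (incomplete) annotation profile, the reconstruction can suggest likely missing annotations. Here is a deliberately tiny numpy sketch of that idea, with a single hidden layer for brevity; the data, architecture, and training details of the actual paper are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy binary gene-by-annotation matrix (rows: genes, cols: annotation
# terms). Purely synthetic data, for illustration only.
X = (rng.random((200, 50)) < 0.1).astype(float)

n_terms, n_code = X.shape[1], 10
W_enc = rng.normal(scale=0.1, size=(n_terms, n_code))
W_dec = rng.normal(scale=0.1, size=(n_code, n_terms))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for epoch in range(200):
    H = sigmoid(X @ W_enc)          # encode: compress each gene's profile
    X_hat = sigmoid(H @ W_dec)      # decode: reconstruct the profile
    delta_out = (X_hat - X) * X_hat * (1 - X_hat)  # squared-error backprop
    g_dec = H.T @ delta_out
    g_enc = X.T @ ((delta_out @ W_dec.T) * H * (1 - H))
    W_dec -= lr * g_dec / len(X)
    W_enc -= lr * g_enc / len(X)

# High reconstruction scores where X is 0 are candidate new annotations --
# the "prediction" step of the approach.
scores = sigmoid(sigmoid(X @ W_enc) @ W_dec)
novel = np.where(X == 0, scores, -np.inf)   # mask out known annotations
top = np.unravel_index(np.argsort(novel, axis=None)[-5:], novel.shape)
print(list(zip(*top)))                      # top-5 candidate (gene, term) pairs
```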

NIPS 2014 will be held in Montreal (Quebec, Canada) from the 8th to the 13th of December 2014.

See you in Montreal!

Our paper accepted at CIBB 2014

We have just been informed that our paper entitled “Correlation of Gene Function Annotation Lists through Enhanced Spearman and Kendall Measures”, written by me together with Eleonora Ciceri and Marco Masseroli, has been accepted at CIBB 2014 – the 11th International Meeting on Computational Intelligence Methods for Bioinformatics and Biostatistics.

The conference will take place in Cambridge (United Kingdom) at the end of June 2014. See you there!
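
As background for the paper’s topic: Spearman’s rho and Kendall’s tau are the classical rank-correlation measures that our enhanced versions build on. Here is a quick sketch of the standard measures applied to two hypothetical annotation rankings; the ranks below are made up for illustration, and the enhanced measures from the paper are not reproduced here.

```python
from scipy.stats import kendalltau, spearmanr

# Positions of five hypothetical annotation terms in two ranked lists,
# e.g. as prioritised by two different methods (made-up data).
rank_a = [1, 2, 3, 4, 5]
rank_b = [2, 1, 3, 5, 4]

rho, _ = spearmanr(rank_a, rank_b)   # correlation of the rank values
tau, _ = kendalltau(rank_a, rank_b)  # agreement of pairwise orderings
print(f"Spearman rho = {rho:.3f}, Kendall tau = {tau:.3f}")
```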

Yann LeCun: Proposal for a new publishing model in computer science

I’d like to point you to this interesting article by Yann LeCun (New York University Courant Institute & Facebook), in which he proposes a new publishing model for computer science papers. The strategy seems excellent to me, and it was adopted for the recent ICLR 2014 paper submissions. I hope it will be used for computer science conferences in the future. Here’s the proposal:

Our current publication system should be redesigned to maximize the rate of progress in our field. This means accelerating the speed at which new ideas and results are exchanged, disseminated, and evaluated. This also means minimizing the amount of time each of us spends evaluating other people’s work through reviewing and sifting through the literature. A major issue is that our current system, with its emphasis on highly-selective conferences, is highly biased against innovative ideas and favors incremental tweaks on well-established methods. Ideas that turn out to be highly influential are sometimes held up for months (if not years) in reviewing purgatory, particularly if they require several years to come to maturity (there are a few famous examples). The friction in our publication system is slowing the progress of our field. It makes progress incremental. And it makes our conferences somewhat boring.

[continue here on Yann.LeCun.com]

My favourite papers and talks at NIPS 2013

I just came back from NIPS 2013, an illustrious international conference on machine learning held in South Lake Tahoe, California.

Here are my favourite papers and talks from the conference:

  • “Dropout training as adaptive regularization” by Stefan Wager, Sida Wang, Percy Liang (Stanford). Interesting research in which the authors show that dropout training acts as an adaptive regularizer, and draw connections with AdaGrad, an online learning method based on adaptive gradient descent.
  • “Adaptive dropout for training deep neural networks” by Jimmy Ba, Brendan Frey (University of Toronto). Again on the dropout algorithm: the authors investigate alternatives to the fixed per-unit dropout probability of 0.5, learning it adaptively during training (a minimal sketch of standard dropout follows this list).
  • “Understanding dropout” by Pierre Baldi, Peter J. Sadowski (University of California Irvine). The authors investigate the mathematical properties of the dropout algorithm.
  • “Training and Analysing Deep Recurrent Neural Networks” by Michiel Hermans, Benjamin Schrauwen (Universiteit Gent). The authors apply deep recurrent neural networks to time-series prediction, and show an interesting application to character-level prediction on English Wikipedia text.
  • “Deep supervised and convolutional generative stochastic network for protein secondary structure prediction” by Jian Zhou and Olga Troyanskaya (Princeton), from the Deep Learning workshop. An interesting application of generative stochastic networks (GSNs) to the important problem of protein secondary structure prediction.
  • “Tissue-dependent alternative splicing prediction using deep neural network” by Michael K. K. Leung, Hui Yuan Xiong, Leo J. Lee and Brendan J. Frey (University of Toronto), from the Machine Learning in Computational Biology workshop. The authors apply the dropout algorithm in a deep neural network to predict tissue-dependent alternative splicing.
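
Since three of the papers above revolve around dropout, here is a minimal numpy sketch of standard (“inverted”) dropout, with the fixed 0.5 keep probability that the adaptive-dropout paper replaces with a learned, per-unit probability. This is a generic illustration, not code from any of the papers.

```python
import numpy as np

rng = np.random.default_rng(2)

def dropout(h, p_keep=0.5, train=True):
    """Standard ("inverted") dropout on a layer's activations.

    Each unit is kept with probability p_keep -- the fixed 0.5 that
    adaptive dropout replaces with a learned, per-unit probability.
    Scaling by 1/p_keep at training time keeps the expected activation
    unchanged, so no rescaling is needed at test time.
    """
    if not train:
        return h
    mask = rng.random(h.shape) < p_keep
    return h * mask / p_keep

h = rng.normal(size=(4, 8))          # activations of a hidden layer
print(dropout(h))                    # training: roughly half the units zeroed
print(dropout(h, train=False))       # testing: activations pass through
```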

A really top-level conference with intriguing workshops… thanks a lot to all the organizers!

[EDIT: check out these blog posts by Paul Mineiro, hundalhh, Yisong Yue, Sebastien Bubeck, Memming]