Each year, scientists around the world publish thousands of AI research papers, but only a few of them reach wide audiences and make a global impact. Below are the top 10 most impactful research papers published at top AI conferences during the last 5 years. The ranking is based on the number of citations and covers major AI conferences and journals.
Explaining and Harnessing Adversarial Examples, Goodfellow et al., ICLR 2015, cited by 6995
What? One of the first fast methods to generate adversarial examples for neural networks, and the introduction of adversarial training as a regularization…
This post is written together with Ekaterina Vorobyeva.
At ten, I once said to my friend how great it would be if our teacher presented unsolved math problems to us. It would be much more fun to work on those in class instead of the textbook exercises. At the time, we had no easily available internet, and information on unsolved problems was scarce. Today, 20 years later, when I google unsolved problems in mathematics, I get a huge list of problems. But, sadly, most of them are beyond my comprehension. …
Chinese translation is available here.
At the beginning of the year, I had a feeling that Graph Neural Nets (GNNs) had become a buzzword. As a researcher in this field, I feel a little bit proud (or at least not ashamed) to say that I work on this. It was not always the case: three years ago, when I was talking to my peers, who were busy working on GANs and Transformers, the general impression they got of me was that I was working on exotic niche problems. …
This post analyzes which authors and organizations are publishing at NeurIPS 2020 this December, similar to the analysis I did for ICML 2020.
Disclaimer: As before, such analysis is prone to minor errors due to how people write their names and affiliations in CMT. So this analysis is good for global insights rather than pinpointing exact numbers❕
Okay, let’s go.
There are 9454 submissions in total, of which 1900 papers were accepted, making for a 20% acceptance rate.
The number of submissions continues to grow exponentially, at 40% YoY. With this…
This year KDD gathered 346 papers (across the research and applied tracks), 34 workshops, and 45 tutorials (lecture and hands-on), making it one of the biggest applied research conferences in computer science. Let’s take a look at some of the highlights of this conference.
Let’s have a look at the word cloud of the most frequent trends of this year.
Criteo AI Lab has 9 accepted papers at ICML 2020. This is a new record for us and we are proud of the research and engineering team that we have!
Established in 2018, Criteo AI Lab drives the research agenda for Criteo with a focus on computational advertising topics such as reinforcement learning, recommender systems, classification models, and others. Our papers are regularly published at top conferences in machine learning such as ICML, KDD, and COLT. Take a look at our publications and blog.
With 9 publications at ICML 2020, we are number 1 in the EU and in the top 7…
ICML is one of the most important conferences in machine learning, and therefore it’s interesting to see who publishes there. So I looked at the accepted papers for ICML 2020 and analyzed the authors, organizations, and countries that participated this year. The conference will take place virtually from 13 to 18 July 2020.
This year there are 1088 accepted papers out of 4990 submissions, leading to a 21.8% acceptance rate.
Let’s first take a…
Python is cool: after so many years of using it, there are still little peculiar things that amaze me. I recently stumbled upon a very simple line of code that most of the experienced Python programmers I know could not explain without googling it.
The names of Turing, Minsky, and McCarthy, the founders of computer science and artificial intelligence in the West, are now familiar to everybody. However, little is known about the history of AI development behind the Iron Curtain of the USSR, even though the competition between the two systems was at times no less acute than in space. Below is a forgotten story of Soviet AI, presented through the lives of the heroes of those events, Andrey Leman and his colleagues.
There are two paradigms for graph representations: graph kernels and graph neural networks. Graph kernels typically create an embedding of a graph, based on some decomposition, in an unsupervised manner. For example, we can count the number of triangles, or more generally the number of node triplets of each type, that a graph has, and then use these counts as the embedding. This is known to be an instance of a graphlet kernel.
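The triplet-counting idea above can be sketched in a few lines: classify every 3-node subset by how many of its three possible edges are present (3 edges means a triangle), and use the four counts as the graph's embedding vector. This is a minimal illustrative sketch, not the full graphlet-kernel machinery; the function name and edge-list representation are my own choices.

```python
from itertools import combinations

def graphlet3_embedding(n, edges):
    """Embed a graph on nodes 0..n-1 as counts of 3-node subgraph types.

    Entry i counts node triples that induce exactly i edges (i = 0..3);
    entry 3 is the number of triangles.
    """
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    counts = [0, 0, 0, 0]
    for a, b, c in combinations(range(n), 3):
        # Number of edges present among the triple {a, b, c}
        k = (b in adj[a]) + (c in adj[a]) + (c in adj[b])
        counts[k] += 1
    return counts

# A 4-cycle 0-1-2-3-0: every triple induces exactly 2 edges, no triangles
print(graphlet3_embedding(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # → [0, 0, 4, 0]
```

Two graphs can then be compared by a dot product (or another kernel) of their count vectors; in practice graphlet kernels use larger subgraph sizes and normalize the counts.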