Machine Learning (ML) has attracted tremendous attention over the last few years. It has become a highly in-demand field of Computer Science, showing impressive results and impacting our lives. Academia has produced hundreds of papers introducing new algorithms and experiments, but it has become hard to keep track of the good ones. With around 50 papers appearing on arXiv every single day, it would be really helpful to have short summaries and highlights of the best research papers.
There are a number of resources that already provide short summaries and are frequently updated, for example:
- Deep Learning Weekly
- The Wild Week in AI
- Summaries and notes on Deep Learning research papers
However, they often don't focus on Reinforcement Learning, and they frequently cover news rather than research papers. Not having a resource that regularly delivers good, simple highlights of these papers is a bit disappointing.
Starting the Reinforcement Learning Digest
Studying Machine Learning means reading as many papers as possible, as frequently as one can (and, of course, trying to reproduce the results and improve algorithm implementations based on these papers). Since I am really passionate about Reinforcement Learning, I do this a lot, and I thought it might be a good idea to share some notes and ideas about the papers I read, wrapping them up in the form of a digest.
Therefore, I decided to start writing short summaries.
I've prepared the first issue of the digest to give an overview of how I think it might look. I would really like this digest to be useful for Ph.D. students, researchers, and ML & RL enthusiasts, so I would be happy to hear back from you about what could be better.
I'm still not completely sure what the best format for these digests would be. There are several open questions:
- What would be the optimal frequency of digest issues? Being a subset of ML, RL is not that large a field, and there might not be enough papers each week to review. My current thought is that biweekly issues might be the best option.
- What would be the optimal size of the summaries? Going into too much detail doesn't make much sense, so I'm thinking of several paragraphs per paper covering the key points.
- I'm thinking of hosting everything on my GitHub Pages blog, which natively supports RSS feeds. Would that be enough, or is an email subscription more important?
I decided to start the digest after a short discussion in ods.ai, the largest Russian Data Science community with the lovely Slack chat.
I'd like to thank Alexander Pashevich (@alexpashevich) for originally pointing out that such a digest might be useful, and Sergey Arkhangelskiy (@vertix) for sending a link to r/reinforcementlearning, which features many great papers.
Making this digest a collaborative effort sounds amazing to me, and I'd be happy to accept any help. I would really love to hear back from you. If you have any thoughts about the digest format, which papers should be included in future issues, and so on, please leave a comment or shoot me an email at firstname.lastname@example.org.