Summarizing 12 months of reading papers (2021)

Last year, I wrote about the 122 papers that I read in my first year at Google and summarized on the RelatedWork site. Over the last 18 months or so, I've spent a lot less time doing the one-hour train commute between Cambridge and London, so I read only 59 papers in the last year and added 188 papers to the backlog of unread papers. You can read my summaries of all the papers, read notes on common themes in the papers, and download BibTeX for all the papers.

[Note that the main motivation for writing these paper summaries is to help me learn new research fields. So these summaries are usually not high-quality reviews by an expert in the field but, instead, me trying to make sense of a barrage of terminology, notation, and ideas as I try to figure out whether a paper is useful to my work. You should write your own summaries of any of the papers I list that sound interesting.]

Zettelkasten

I have found that a great way of organizing my thoughts about papers is as a Zettelkasten. The basic idea is to create links between papers and concepts: every time I come across a new concept in a paper, I create a new page about it and link the paper to that page. Each paper, concept, or link added to the Zettelkasten refines my understanding of the research field, and that understanding is (partly) captured in the links between concepts, between papers, and between papers and concepts. And since every page has back-references to the pages that link to it, I can easily find related papers that take different views of a concept or that improve on an idea.
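
To make that structure concrete, here is a minimal sketch in Rust of a page graph with derived back-references. It is an illustration only: the type, method names, and example pages are all hypothetical, and the real Zettelkasten is just a set of interlinked wiki pages.

```rust
use std::collections::{HashMap, HashSet};

/// A tiny model of a Zettelkasten: pages (papers or concepts) and the
/// directed links between them. All names here are hypothetical.
#[derive(Default)]
struct Zettelkasten {
    // Maps each page to the set of pages it links to.
    links: HashMap<String, HashSet<String>>,
}

impl Zettelkasten {
    /// Record that page `from` links to page `to`.
    fn link(&mut self, from: &str, to: &str) {
        self.links
            .entry(from.to_string())
            .or_default()
            .insert(to.to_string());
    }

    /// Back-references: every page that links to `page`. This is what
    /// makes it easy to find all the papers that touch a concept.
    fn backlinks(&self, page: &str) -> Vec<&str> {
        self.links
            .iter()
            .filter(|(_, targets)| targets.contains(page))
            .map(|(source, _)| source.as_str())
            .collect()
    }
}

fn main() {
    let mut zk = Zettelkasten::default();
    // Two (made-up) paper pages both link to the same concept page...
    zk.link("paper-A", "concept: symbolic execution");
    zk.link("paper-B", "concept: symbolic execution");
    // ...so the concept page's back-references find both papers.
    println!("{:?}", zk.backlinks("concept: symbolic execution"));
}
```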

Last year, I had only just adopted the Zettelkasten concept and I was often not adding new concept pages until after I came across the concept a second time. This year, I have tried to be more aggressive about adding new concepts. This has turned out to be much easier when I am reading papers in a completely new field because my ignorance makes it easier to spot new concepts. For example, when I started reading about machine learning, every page I read had a bunch of new acronyms like RNN, CNN, or ReLU, or unfamiliar terms like Softmax, Activation, or Attention, and I created a page for each of these concepts, looked them up, linked to the Wikipedia page (or similar), and linked the current and later papers to the concept.
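
Two of those terms have definitions short enough to write down directly, which gives a flavour of what ends up on a concept page. This is my own minimal sketch, not something from the papers: ReLU is just max(0, x), and Softmax normalizes a vector of scores into a probability distribution.

```rust
/// ReLU (rectified linear unit): max(0, x).
fn relu(x: f64) -> f64 {
    x.max(0.0)
}

/// Softmax: exponentiate and normalize so the outputs sum to 1,
/// turning raw scores into a probability distribution. Subtracting
/// the maximum first is the usual trick for numerical stability.
fn softmax(xs: &[f64]) -> Vec<f64> {
    let max = xs.iter().cloned().fold(f64::NEG_INFINITY, f64::max);
    let exps: Vec<f64> = xs.iter().map(|x| (x - max).exp()).collect();
    let sum: f64 = exps.iter().sum();
    exps.iter().map(|e| e / sum).collect()
}

fn main() {
    println!("{}", relu(-2.5)); // 0
    println!("{:?}", softmax(&[1.0, 2.0, 3.0])); // roughly [0.09, 0.24, 0.67]
}
```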

[I wrote a lot more about Zettelkasten and the tools that support it last year.]

Rust and verification

I spent most of the year working on the Rust verification project at Google so, unsurprisingly, many of the papers are about the Rust language, with a particular emphasis on Rust's unsafe code.
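
For readers who have not met it, `unsafe` is the Rust feature that lets a block of code opt out of the compiler's safety checks, which is why it attracts so much verification attention. A minimal sketch of the idiom (my own example, not taken from any of the papers):

```rust
/// Returns the first element without a bounds check.
/// The compiler no longer guarantees safety here; the caller must
/// guarantee that `xs` is non-empty, and the SAFETY comment records
/// that manually discharged obligation.
fn first_unchecked(xs: &[i32]) -> i32 {
    // SAFETY: the caller promises xs.len() >= 1, so index 0 is in bounds.
    unsafe { *xs.get_unchecked(0) }
}

fn main() {
    let xs = [10, 20, 30];
    // Fine here because xs is non-empty. Much of the research on unsafe
    // Rust is about checking exactly this kind of side condition.
    println!("{}", first_unchecked(&xs));
}
```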

I also read some pre-Rust papers that these papers build on.

Symbolic execution, verification and testing

The Rust verification project focused on creating a continuum of verification techniques and tools, including fuzz-testing, concolic execution, symbolic execution, bounded model checking, and abstract interpretation.
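
One idea behind such a continuum is that a single property-based harness can be shared across back-ends: a fuzzer feeds it random inputs, a concolic or symbolic executor feeds it symbolic ones, and a bounded model checker explores every input up to some bound. Here is a minimal sketch of that shared-harness idea in plain Rust; the function names are hypothetical, and the exhaustive loop stands in for a model checker's bounded search.

```rust
/// Code under test: saturating addition on u8.
fn saturating_add_u8(a: u8, b: u8) -> u8 {
    a.checked_add(b).unwrap_or(u8::MAX)
}

/// The property, written once. A fuzzer would feed this random inputs
/// and a symbolic-execution tool symbolic ones; exhaustively
/// enumerating the inputs, as below, is essentially what a bounded
/// model checker does within its bound.
fn property(a: u8, b: u8) {
    let expected = (a as u16 + b as u16).min(u8::MAX as u16);
    assert_eq!(saturating_add_u8(a, b) as u16, expected);
}

fn main() {
    // The u8 x u8 input space is small enough to enumerate completely.
    for a in 0..=u8::MAX {
        for b in 0..=u8::MAX {
            property(a, b);
        }
    }
    println!("property holds for all {} input pairs", 256 * 256);
}
```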

CPUs and security

Neural networks / Machine learning

Since I work in a machine-learning part of Google, I have been reading up on machine learning.

Information flow control

Programming languages

Miscellaneous

Written on October 6, 2021.
The opinions expressed are my own views and not my employer's.