Distill is a phenomenal resource for clear, accessible, yet technically nuanced explanations of concepts in machine learning. Two Distill articles did, for me at least, a terrific job of introducing the ideas behind Graph Neural Networks. The articles are so well-written that there is nothing else I can “distill” from them, so I am only linking my Hypothesis annotations here.
Note that my annotations for Article #1 are more extensive only because I read it first. The first article covers more foundational material, while the second goes into greater detail and nuance. The second article contains many implementation lessons and practical advice on designing model details, which I shall postpone reading until I start building my own GNNs, potentially in the future.
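To make the core idea the articles explain concrete, here is a minimal toy sketch (my own, not from the articles) of one message-passing step: each node updates its feature by averaging over its neighbors and itself. Real GNN layers add learned weights and nonlinearities on top of this aggregation.

```python
def gnn_layer(feats, adj):
    """One mean-aggregation message-passing step.

    feats: list of scalar node features, indexed by node id.
    adj:   dict mapping each node id to its list of neighbor ids.
    Each node's new feature is the mean over its neighborhood
    (including itself, i.e. a self-loop).
    """
    new_feats = []
    for v in range(len(feats)):
        neighborhood = adj[v] + [v]  # neighbors plus self-loop
        new_feats.append(sum(feats[u] for u in neighborhood) / len(neighborhood))
    return new_feats

# Tiny path graph 0 - 1 - 2 with one scalar feature per node.
adj = {0: [1], 1: [0, 2], 2: [1]}
feats = [1.0, 2.0, 3.0]
print(gnn_layer(feats, adj))  # → [1.5, 2.0, 2.5]
```

Stacking several such layers lets information propagate across multi-hop neighborhoods, which is the basic mechanism both articles build on.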
Here is a really cool application of GNNs to a computational pathology problem: