What seems like a lifetime ago, I had a blog. I called it Random Walk, since it was the most accurate description I could think of at the time for the variety of topics that my brain would bounce to from day to day. I wrote about politics, religion, and science, and even wrote one post, featuring a statistical analysis of the ethics of eating meat, that garnered a little attention.
As with the journals that I began writing only to leave half empty, work on Random Walk ceased in 2008 after I started graduate school. Maybe it was my workload, or maybe it was my running out of things to say. In any case, I closed up shop and pulled the blog from the Internet, since I felt that a blog that never got updated was somehow sadder than no blog at all.
I went on to complete a master’s degree in applied mathematics and a Ph.D. in cognitive science. Midway through graduate school, I “discovered” machine learning and quickly became obsessed with it. My dissertation focused on introducing various machine learning and data mining techniques—primarily interpretable ones such as decision trees—to the analysis of human data in order to tease out patterns of behavior of interest to cognitive scientists. This was at a time when the vast majority of research in my area applied rather stodgy null-hypothesis statistical testing, which made it increasingly difficult to design experiments for complicated behaviors. In my idealism, I thought that I was issuing a clarion call to fellow cognitive scientists, letting them know that machine learning was the future.
Cognitive scientists—and everyone else—indeed got that message, but not from me. I graduated in 2013, right as deep learning was beginning to take off, which meant that my forward-looking work was already outdated relative to where the field was heading. The intervening 10 years of progress have only deepened the sense that I came up in the horse-and-buggy days. But during that time, I completed postdoctoral training in computational cognitive science and machine learning, which only served to increase my interest in these topics. I still love this stuff.
I also still think that Random Walk is a good name for a blog. This resurrected version will focus largely on the theoretical topics within machine learning that have caught my interest. Some posts will shine a light on connections among seemingly disparate parts of the literature. Other posts will be efforts to digest the work of others through writing about it. And others will be ideas of mine in various stages of development. Much of it will be grounded in the world of generative modeling, which in the age of GANs, GPT-3, and diffusion models has become a hot and exciting (and even worrisome) topic. I often like to refer to generative modeling as the “new alchemy” for its transmutation of something base into something precious. This will be the topic of my next post.