My research interests have changed quite a lot over my career. As my current understanding of the various threads has come into sharper focus, I figured I should write it all down.
First and foremost, I am not a specialist. My work is not, and will never be, in a single, narrowly focused area. I'm interested in too many things, and like to draw from a wide array of domains. I have to restrain myself from adding new topics to what I actively work on, no matter how interesting they might be.
I have come to classify the large intertwined threads of my work as follows:
It is very important to note that these classifiers are not disjoint, and that quite a bit of my work mixes them. But also, that most of my work does 'fit' fairly strongly into one bin or another as its principal theme.
It's probably fair to understand 'structured' as meaning 'teachable to a computer'. Thus whatever 'information' is being considered, it needs to be encoded in ways that enable manipulation by programs.
Random information, i.e. arbitrary sequences of bits, is far from useful. The most important point here is that the information itself needs to be structured. So a lot of my time goes into first finding the latent structure in the things that I want to process.
Two of the most interesting (to me!) sources of structured information are mathematics and computer science. Mathematical theories are highly structured, both internally and externally. By external relations, I mean that there are lots of functions (functors) that relate both the presentation and the semantics of theories. And this information can be leveraged: quite a bit of it can be generated from some basic pieces. Programs, when looked at appropriately, are also highly structured and full of redundancies. Ditto for models. This is basically the genesis of all of my work on meta-programming.
Some of my forays into the hinterlands of programming languages (reversible computing, quantum computing, and the now more mainstream probabilistic programming domain) are also driven by wanting to understand exactly what deeper structures are present. Type theory and category theory seem like good tools for doing that, so I've delved fairly deeply into them as well.
My interest in structured information is uniformly driven by practical concerns. Theory serves as a way to prove that the structure really is there. Eventually though, the aim is to show that (meta-)programs that use the information "do the right thing".
At first, my interest was all driven by code duplication. Not so much cut-and-paste programming, which is mostly a bad habit driven by weak abstraction features. But more subtle code duplication where humans write the same code, more or less, over and over again, which is obviously a productivity killer.
So I got very interested in both generic and generative programming as principled means to deal with the problem. First simply as solutions, and then as sources of theory for good solutions. For example, my current understanding of most "generic" programming is "being able to be polymorphic over the things your current programming language doesn't let you abstract over, but you still want to anyway."
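To make that definition concrete, here is a toy sketch (my own illustration, not tied to any particular library): a single function that is polymorphic over the *shape* of nested containers. In many typed languages you would be forced to write one size-like function per type; the point of generic programming is to write it once.

```python
# Toy sketch of shape-polymorphism: one traversal that works over any
# nesting of dicts, lists, tuples, and sets, counting the atoms inside.
def gsize(value):
    """Count the atomic (non-container) values inside a nested structure."""
    if isinstance(value, dict):
        return sum(gsize(v) for v in value.values())
    if isinstance(value, (list, tuple, set)):
        return sum(gsize(v) for v in value)
    return 1  # an atom

print(gsize([1, (2, 3), {"a": [4, 5]}]))  # prints 5
```

Python's dynamic typing makes this trivial; the interesting (and harder) versions of this idea live in typed languages, where the same pattern needs datatype-generic machinery.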
Then, coupled with the previous point, I started seeing that many of the artifacts in real code bases involve substantial information duplication. So I started to use generative techniques as a means to de-duplicate this information. This is where Drasil came from.
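The idea can be sketched in miniature (this is a hypothetical toy, not Drasil's actual design or API): a single declarative spec is the only place the knowledge lives, and both the code and its documentation are generated from it, so the two artifacts can never drift apart.

```python
# Toy sketch of generative de-duplication (not Drasil's actual API):
# one spec, two generated artifacts -- a Python class and its docs.
FIELDS = [  # hypothetical spec: (name, type, description)
    ("mass", "float", "mass of the object in kg"),
    ("velocity", "float", "velocity of the object in m/s"),
]

def gen_class(name, fields):
    """Generate Python source for a class with the given fields."""
    args = ", ".join(f"{n}: {t}" for n, t, _ in fields)
    lines = [f"class {name}:", f"    def __init__(self, {args}):"]
    lines += [f"        self.{n} = {n}" for n, _, _ in fields]
    return "\n".join(lines)

def gen_docs(name, fields):
    """Generate user-facing documentation for the same fields."""
    lines = [f"{name} fields:"]
    lines += [f"- {n} ({t}): {d}" for n, t, d in fields]
    return "\n".join(lines)

code = gen_class("Projectile", FIELDS)
docs = gen_docs("Projectile", FIELDS)
exec(code)  # the generated class is real, usable code
p = Projectile(2.0, 9.8)
print(p.mass, p.velocity)  # prints 2.0 9.8
```

Adding a field to `FIELDS` updates both the class and the docs at once; that "change one thing, regenerate everything" property is the whole point.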
A lot of the software that I'm interested in ultimately has human users. And humans have very particular requirements. Although I tend to be allergic to buzzwords, the move away from UI towards UX is actually a positive one.
I am not an HCI expert. I am barely knowledgeable on the topic, really. And, in some ways, I'm kind of skipping over parts of HCI, to go directly to cognitive psychology, human mechanics, etc. Thus the 'humans as mechanisms' tag line.