Wow, I can't believe this blog has been dormant for more than six years! And the last post was frankly little more than a sign of life, announcing the migration from Octopress to Zola.
It's not that I haven't had anything to say in all those years. It's just that I have exclusively published my blog posts on the INNOQ company blog. While I plan to be an even more active contributor to the company blog this year, I also decided it was time to come out of hibernation on my personal blog.
In this post, I provide some meta discussion of the latest blog series I started on the INNOQ company blog, titled Developing with AI through the Cognitive Lens. Here are two interesting facts:
Firstly, even though I studied Applied Communication and Media Science, an interdisciplinary degree program integrating general psychology, computer science, and sociology, with a focus on human-computer interaction and usability, I started my career as a Java backend developer. The relentless forces of path dependency led me to continue down that road.
Secondly, I have long held a very critical view of the current LLM offerings both for ethical and environmental reasons.
That's still true. However, AI-assisted coding and fully agentic software development have become so pervasive that I found it increasingly hard to ignore them, especially as someone whose clients expect familiarity with current trends.
Some time last year, I finally had time to read Felienne Hermans' great book The Programmer's Brain, which teaches software developers all they need to know about cognition.
Reading this book rekindled my interest in cognitive psychology. I wanted to explore what cognitive psychology can teach us about the likely effects of using AI assistants or agents in software development. Knowing how humans learn, how they encode knowledge, and how juniors and experts differ in encoding and retrieving it should tell us which patterns of interacting with AI tools we should avoid, and which may prove beneficial.
In the first post, Speed vs Skill, I examine the tension between increasing short-term productivity and skill retention.
In the second post, I explain elaboration, a term from the science of learning: why it matters for learning, which AI interaction patterns eliminate the need for it, and which ones preserve it.
In the third post, the last one so far, I apply cognitive load theory to AI-assisted coding, using it to explain the limitations of what we can learn from a recent study by Anthropic on the effect of AI assistance on coding skills.
So far, writing this series has been really exciting. It's immensely satisfying to finally combine my interest and knowledge in general psychology with my expertise in software development and computer science. I definitely plan to write a few more posts in the series.