Post-Gluecon Musings

It’s been almost one month since we ran Gluecon — a show I deliberately tried to stack with AI content for developers. As I’ve started to recover from running the conference (yes, it takes about a month), several thoughts have bubbled to the top of my mind:

  1. The Skills Gap: I went into the show wondering what the skills gap is between more “traditional” cloud/API developers and AI developers. By “AI developer” I don’t mean engineers using code-assist tools on or off the job (some reports put that number at north of 70% of developers). Rather, I mean developers who have spent the last cycle working on cloud containers, monitoring, observability, identity management, and the like, and who are now trying to make the jump to building AI applications or tooling. In short, I’m no closer to an answer here. Some tools (witness the explosion in open source tooling, models, and agents) will clearly make this jump easy, while other areas are… not so easy. Given that the current answer is “yes and no,” I suspect this gray area will end up being a space of much discussion over the coming months — one that will accelerate if layoffs of developers continue (or even increase).

  2. A Seminal Moment: In talking with folks at Gluecon, I was a bit surprised at how many weren’t aware of the seminal place that the research paper “Attention Is All You Need” holds in the recent wave of AI development. If you’re working on orienting yourself contextually in AI and you haven’t read this paper from 2017, go do so now. I don’t think it’s an overstatement to say that this paper (which introduced the Transformer architecture) is one of the main reasons we are where we are today. (For a feel of the core mechanism, see the first sketch after this list.)

  3. To that end, I’d like to nominate a more recent research paper as one that may come to be seen as another seminal moment in the development of AI: “Tree of Thoughts: Deliberate Problem Solving with Large Language Models.” This paper generalizes the current “chain of thought” approach being taken with LLMs, and it is entirely worth your time. (The second sketch after this list shows the basic idea.)

  4. Lastly, I’m growing increasingly convinced that we’re approaching some near-term exhaustion around “AI” as a topic, and we may even start pushing up against the boundaries of advances in this current wave. The first part of that statement is mostly a hunch, and it won’t surprise me to see VCs suddenly pull back a bit on their AI funding. Expect the “was it all hype?” articles to start appearing shortly. The second part of that statement (about advances) occurred to me as I was looking at this paper. How quickly will recursion start damaging model training? Perhaps even more pointedly, how valuable will non-AI-generated data for training become over the next two, three, or five years? (The third sketch after this list is a toy illustration of the recursion worry.)
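
A few sketches to make the above concrete. First, the heart of the Transformer from “Attention Is All You Need” is scaled dot-product attention: Attention(Q, K, V) = softmax(QK^T / sqrt(d_k))V. Here’s a minimal NumPy sketch of just that equation; real implementations add masking, multiple heads, and learned projections.

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the max before exponentiating for numerical stability.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.swapaxes(-2, -1) / np.sqrt(d_k)  # query-key similarity
    weights = softmax(scores)                       # each query attends over all keys
    return weights @ V                              # weighted mix of value vectors

# Toy usage: 4 tokens with 8-dimensional embeddings.
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
print(attention(Q, K, V).shape)  # (4, 8): one context-mixed vector per token
```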
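
Second, a rough sketch of the Tree of Thoughts idea: instead of committing to one linear chain of thought, generate several candidate “thoughts” at each step, score them, and search over the resulting tree. The propose_thoughts and score_thought functions below are hypothetical stand-ins for LLM calls (the paper calls these the thought generator and state evaluator), and I’ve used a simple beam search where the paper also explores BFS/DFS variants.

```python
def propose_thoughts(state: str, k: int = 3) -> list[str]:
    # Hypothetical stand-in: an LLM would generate k candidate next steps.
    return [f"{state} -> step{i}" for i in range(k)]

def score_thought(state: str) -> float:
    # Hypothetical stand-in: an LLM would rate how promising this
    # partial solution looks.
    return -len(state)  # toy heuristic so the example runs deterministically

def tree_of_thoughts(problem: str, depth: int = 3, beam: int = 2) -> str:
    frontier = [problem]
    for _ in range(depth):
        # Expand every surviving state into several candidate thoughts,
        # then prune back to the `beam` most promising ones.
        candidates = [t for s in frontier for t in propose_thoughts(s)]
        candidates.sort(key=score_thought, reverse=True)
        frontier = candidates[:beam]
    return frontier[0]  # the highest-scoring line of reasoning found

print(tree_of_thoughts("Goal: make 24 from 4, 9, 10, 13"))
```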
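
And third, on the recursion question: a toy illustration (mine, not from any of the papers above) of why training on model-generated output worries people. Fit a Gaussian to samples drawn from the previous generation’s fit, repeat, and watch the estimated spread drift. With only model-generated data in the loop, the tails of the original distribution tend to get lost.

```python
import numpy as np

rng = np.random.default_rng(7)
mu, sigma = 0.0, 1.0  # generation 0: the "real" data distribution
for gen in range(1, 9):
    samples = rng.normal(mu, sigma, size=50)   # "train" only on the current source
    mu, sigma = samples.mean(), samples.std()  # the new "model" replaces real data
    print(f"generation {gen}: mu={mu:+.3f}, sigma={sigma:.3f}")
# sigma tends to drift downward across generations: each round of
# fitting-to-samples loses a little of the original distribution's spread.
```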

As always, thanks for reading — and if you find this post useful, please pass it on to someone you think might like it.

Until next time…