Pragmatism in the real world

Some thoughts on LLM usage in my work

While it would be nice to put the genie back in the bottle, that hasn’t happened often in human history, so for the foreseeable future, AI in the form of LLMs is here to stay. I imagine that what we use them for will change over time as we collectively internalise their limitations.

Personally, I’m now using them for my work as much as I reasonably can. This covers many different areas, such as asking questions that I would previously have just googled. While the LLM does make things up, the first 40 answers on Google can be decidedly less than helpful nowadays too. The LLM will read many more webpage results than I can be bothered to, provide a summary and cite its sources, so that I can click a few to get a feel for how much I trust it.

When developing, I’m using the LLM for multiple tasks. It can help me find bugs when I paste in snippets of code and say “this code is getting X wrong. Tell me why and propose a fix”. I generally find that this level of focussed question works quite well, though sometimes it will go off on a tangent and never come back.
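As an illustration, the snippet and the bug below are invented, but they show the shape of question I mean:

```python
# Hypothetical example of a focussed debugging prompt; the snippet
# and its bug are made up for illustration.
#
# Prompt: "This function should return the last n lines of a file,
# but it always drops the final line. Tell me why and propose a fix."

def tail(path: str, n: int) -> list[str]:
    with open(path) as f:
        lines = f.readlines()
    # Bug: the slice stops one element short; it should be lines[-n:]
    return lines[-n:-1]
```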

In an IDE, I’ve found that LLM-based smart autocomplete from something like Copilot works quite well. Again, it’s a small, focussed task, so it tends to give me reasonable results. I’ve also found that writing a comment, as I would normally do, helps the LLM get it right.
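For example, a comment written the way I would write it anyway (the function here is invented for illustration) gives the autocomplete enough context to produce something sensible:

```python
import hashlib

# Return the SHA-256 hex digest of a file, reading it in 64 KB chunks
# so that large files don't need to fit in memory.
def file_sha256(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()
```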

New to me is Claude Code. This command line tool reads the files in the current directory and can operate on them. I’ve found that it is able to do bigger chunks of work as a result, and I use git diff to assess its work and tweak as necessary before I personally commit.

One thing that is obvious in hindsight: if I don’t have a clear idea in my mind of what I need, then the tools do worse. This is most noticeable with larger blocks of work, so now, if I’m still exploring how and what I want to do, I talk with the LLM more conversationally, like a brainstorming session, to get to the point where I know what I want. Then I can instruct it to do the work. This is no different from delegating a job to a junior and sounds so obvious when written down. However, in the excitement of seeing the magic of the LLM doing the right thing, it’s easy to forget and then be surprised when it goes so wrong.

Not every tool is useful for everyone, or useful in every situation. There are also ethical and environmental considerations that affect how people view any given technology, and I do not want to suggest that LLM tools must be used; they don’t have to be.

I realise that I’m not using these tools as much as others; however, I’m finding them useful.

Thoughts?