When discussing the release of Anthropic's Claude 3.5 Sonnet a few days ago, one of my coworkers wrote:
A growing element of being a good developer is learning how to best use these tools.
I couldn't agree more.
In this blog post, I want to share some examples of how I've been using Large Language Models (LLMs) in my software development workflow.
llm is a CLI tool for interacting with Large Language Models, created by Simon Willison. I've been using it for about a year, and I really like it!
Hopefully, this sparks some ideas on how you can make use of this great tool in your own workflows.
Note: If you have access to GitHub Copilot Chat, you can use it in a similar fashion for many of the use cases below. Unfortunately, it's not available on the command line.
Getting started
To install `llm`, follow these instructions. I recommend installing it with pipx, so it's available anywhere on your machine.
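For reference, a minimal install might look like this (a sketch, assuming you already have pipx set up):

```shell
# Install the llm CLI into an isolated environment, available on your PATH
pipx install llm

# Verify the installation
llm --version
```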
`llm` makes it very easy to choose between many different models and providers, using the `-m`/`--model` option. Here is an example of a basic prompt:
llm -m gpt-4o "ten names for a pet octopus"
By default, `llm` ships with support for the OpenAI models. To use those, you have to set up your API key.
There are plugins for lots of different other models and providers, including Mistral, Gemini, Claude and many others. There are also options for running local models, which can be a great option if you're dealing with sensitive data.
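Setting up a key and adding a provider plugin is quick; a sketch (the plugin name is one example, check the plugin directory for current names):

```shell
# Store your OpenAI API key (llm prompts for it and saves it for future use)
llm keys set openai

# Install a provider plugin, e.g. for Anthropic's Claude models
llm install llm-claude-3

# List all models that are now available
llm models
```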
Note: I usually try out prompts with a cheaper model first (e.g. GPT-3.5). 80% of the time, this already gives me a good enough answer. If not, I can just switch to a more capable, more expensive model. I love this approach, as you start getting a feel for what different models are good at.
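In practice that workflow is just the same prompt with a different `-m` value (model IDs here are examples; run `llm models` to see yours):

```shell
# Try a cheaper model first...
llm -m gpt-3.5-turbo "Explain the difference between a mutex and a semaphore"

# ...and only reach for a more capable, more expensive one if needed
llm -m gpt-4o "Explain the difference between a mutex and a semaphore"
```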
Naming things
Let's start with a basic example for one of the two hardest things in programming: naming things. It turns out LLMs are great at this:
Hallucination is not a bug, it is LLM's greatest feature.
- Andrej Karpathy on X
It's usually a good idea to ask for multiple options, and then choose the one you like the most.
llm "Suggest suitable names for a python function that takes a list of \
numbers and returns a new list containing only the even numbers from \
the original list."
If you want more suggestions, continue the current conversation using the `-c`/`--continue` option.
llm -c "more suggestions please"
This approach also works great for finding names for products or Python projects, e.g. based on a README.
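For example, you could pipe a project's README into `llm` and ask for ideas (a sketch; the file path and prompt are illustrative):

```shell
# Feed the README to llm as context and ask for name suggestions
cat README.md | \
  llm -s "Suggest ten short, memorable names for the project described below."
```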
Explain code
Okay, let's take it up a notch. The next example uses symbex, a CLI tool for finding code in your Python project. The output from `symbex` is piped to `llm`, along with a system prompt.
symbex filter_even_numbers | \
llm --system 'Describe this code succinctly'
Writing tests
Rather than just explaining a piece of code, you can use `llm` to do all kinds of things with it: ask for criticism, suggest improvements, or write tests:
symbex filter_even_numbers | \
llm --system 'Please write tests for this function, using pytest.'
Naming things consistently
This example is a variation of "naming things". It uses `symbex` to find Python functions in a particular file that match a given pattern, e.g. `get_*`. The function signatures are piped to `llm` along with a system prompt.
symbex 'get_*' --function -f src/my_app/utils.py -n -s | \
llm -s "here are the signatures of a number of python utility functions. \
they currently use a confusing mix of words: _from_, _for_, and _by_. \
can you suggest a rule of when to use each one? based on that, suggest \
more consistent function names."
Explain jargon
Simon Willison shared a system prompt that he uses for his custom GPT, Dejargonizer.
A system prompt can be saved as an llm template using the `--save` option. It can then be re-used by referencing it with `-t`/`--template`.
llm -s "Explain all acronyms and jargon terms in the entered \
text, as a markdown list. Use **bold** for the term, then \
provide an explanation. [...]" --save dejargonizer
curl -s https://manassaloi.com/2023/12/26/tech-power-law.html | \
strip-tags article | \
llm -t dejargonizer
This example uses `curl` to fetch an article, and another tool called strip-tags (guess who created it 😉). `strip-tags` removes HTML tags from a page and optionally selects areas with CSS selectors. More examples of how to use it can be found here.
Write docstrings
A great use case for LLMs is writing docstrings. For example, you could set up a template that uses a section from the Kraken coding conventions as a system prompt:
The first sentence of a function's docstring should complete this sentence:
This function will ...
which helps enforce an imperative mood. [...]
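Such a template could be created with the `--save` option; a sketch with an abbreviated version of the system prompt above:

```shell
# Save a reusable "docstring" template (abbreviated system prompt)
llm -s "Write a docstring for this function. The first sentence \
should complete: 'This function will ...', enforcing an imperative mood." \
  --save docstring
```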
This example uses the `--undocumented` option of `symbex`, which lists all undocumented functions in the specified file.
symbex -f src/module/foo.py --function --undocumented | \
llm -t docstring
Write commit messages
This example uses the staged git diff to generate a commit message.
git diff --staged | \
llm -s "Write a succinct commit message. It should consist of a \
capitalized, short summary, and more detailed explanatory text, \
if necessary."
I haven't had good results with this approach, though. The reason is probably that a good commit message is not simply a rephrasing of the code changes. It should explain the why and provide more context for the reviewer. This context is usually not available in the code itself, but in other forms of communication, e.g. feature requests, bug reports, etc.
Write a PR description
This example uses `llm` to write a PR description based on a `git log`. I've seen this work reasonably well for a first draft, provided that the individual commits have meaningful commit messages.
git log master..HEAD --pretty=format:%s -p | \
llm -s "Write a succinct description of this PR based on the commit \
messages and diff."
Fin
That's a wrap. I hope this provides some inspiration on how to use LLMs on the command-line and in the software development workflow.
What tools, recipes and prompts have been working well for you? Let me know on X or Mastodon.
Further reading
- Language models on the command-line by Simon Willison: In-depth introduction on how to use LLMs on the command line
- Doing Stuff with AI by Ethan Mollick: A plea to experience AI through play and for doing serious stuff, with lots of examples