AI & Technology

Creating with LLMs

Tags: ai, llm, creativity

The parameters for this portfolio project were that I would code alongside ChatGPT and use Claude as a sounding board and to flesh out some blog posts that were, at that point, pretty old. I also needed some quick content for the project pages. (The writing is a bit overwrought as of this writing, but I hope to correct that soon.)

The workflow from the human perspective was a mix of copying and pasting boilerplate, hand-coding styling, some quick debugging, and some back-and-forth dialog with a machine that really did have some endearing human qualities. I try to be polite in my speech to LLMs because it is a good habit of mind; it also ends up anthropomorphizing the LLM, which I find makes me more sympathetic.

My flow was pair programming with ChatGPT and using Claude for QA. The 80/20 rule was in full effect for features: 20% of the app took 80% of the time. I had a wild hair about being able to download articles as a PDF, and the html2pdf library is amazing but has a learning curve when you want to do extensive formatting. I also wanted a floating download button that looked and animated a certain way. Those two things took far more time than the rest of the application.
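For context, the PDF download boils down to a small wrapper around html2pdf's chained API. This is a hedged sketch, not the site's actual code: the option names (`margin`, `filename`, `html2canvas.scale`, `jsPDF`) are real html2pdf.js options, but the function names and values here are illustrative, and the library handle is injected so the module stays testable outside a browser.

```typescript
// Minimal typing for the slice of html2pdf.js this sketch uses.
interface Html2PdfChain {
  set(opts: object): Html2PdfChain;
  from(el: HTMLElement): Html2PdfChain;
  save(): Promise<void>;
}
type Html2Pdf = () => Html2PdfChain;

// Illustrative option values; tune margins and scale to taste.
export const pdfOptions = {
  margin: 10, // interpreted in jsPDF units (mm here)
  filename: "article.pdf",
  html2canvas: { scale: 2 }, // render at 2x for sharper text
  jsPDF: { unit: "mm", format: "a4", orientation: "portrait" },
};

// html2pdf is passed in (e.g. from a dynamic import in the click handler)
// rather than imported at module load.
export async function downloadArticle(
  html2pdf: Html2Pdf,
  article: HTMLElement
): Promise<void> {
  await html2pdf().set(pdfOptions).from(article).save();
}
```

Wiring this to a click handler on the floating button is then a one-liner; the formatting learning curve lives almost entirely in tuning the options object.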

I hadn't used TypeScript extensively but found myself getting the hang of it, even while cursing chasing down types. I had also never used Tailwind or Next.js, so this was all a bit of a learning opportunity for me. I struggled with the Tailwind 4 and Next.js 15 changes not having been internalized by the LLMs: in practice I had to chase down bugs and confront repeated use of patterns that were no longer valid in these versions.
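A concrete example of the kind of version drift I mean: in Next.js 15, `params` in a page component became a `Promise` that must be awaited, whereas older model training data still suggests the Next.js 14 plain-object shape. This sketch is illustrative (the route and slug are hypothetical, and a string stands in for the JSX a real page would return):

```typescript
// Next.js 14 pattern the LLMs kept suggesting (now invalid):
//   export default function Page({ params }: { params: { slug: string } }) { ... }

// Next.js 15: params is a Promise and must be awaited.
export async function Page({
  params,
}: {
  params: Promise<{ slug: string }>;
}): Promise<string> {
  const { slug } = await params;
  // A real page would return JSX; a string stands in for this sketch.
  return `Post: ${slug}`;
}
```

Forgetting the `await` here fails at runtime rather than loudly at the call site, which is exactly the kind of bug that kept creeping back in.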

ChatGPT seemed perhaps slightly more error-prone than Claude, and more likely to replace, unprompted, working code and styling I wanted to preserve. It also needed to be reminded which versions of the frameworks we were working in whenever I caught old patterns that caused bugs being reintroduced. But it was faster, and I didn't hit a credit limit, so it became my everyday driver. Claude's coding accuracy was on point in the QA role; it was a one-shot sniper at figuring out ChatGPT-generated bugs. That is also just how my workflow progressed, and I assume that had Claude been my go-to, I would have said the same of ChatGPT as my QA.

Lessons learned

  1. Include the specific file and specify what you want changed when coding with an LLM friend.
  2. Keep changes small and incremental.
  3. Using two LLMs is pretty effective, with one doing QA while the other programs with you.
  4. Update the LLM on changes you made that were off script.
  5. Use git (duh) so you can roll back commits when you go too far down crafting bugs on top of bugs.
  6. Revert to working versions immediately to avoid compounding errors if you can.
  7. Smoke test anything potentially impacted on a file change after a change is implemented.
  8. With Tailwind, I had to insist on following Tailwind convention; the LLM kept writing raw CSS, which broke it. Be explicit.
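To make that last lesson concrete, here is the shape of the problem, with hypothetical names and values rather than this site's actual code. The LLM would reach for an ad hoc inline style object, where the Tailwind convention is a utility class string (`bottom-6` and `right-6` map to 1.5rem, `rounded-full` to a 9999px border radius):

```typescript
// What the LLM kept producing: an ad hoc inline style object.
export const adHocStyle: Record<string, string> = {
  position: "fixed",
  bottom: "1.5rem",
  right: "1.5rem",
  borderRadius: "9999px",
};

// The Tailwind-convention equivalent: utility classes on the element.
export const downloadButtonClasses = "fixed bottom-6 right-6 rounded-full";
```

Both render the same button; the second keeps the styling in the design system where the rest of the site can see it.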