My Experience Using Large Language Models to Write Blog Posts

The challenges and benefits of using large language models for writing blog posts, as well as the application of the Microsoft Semantic Kernel to streamline the process.
Author: Lucas A. Meyer

Published: May 23, 2023

I recently conducted an experiment using large language models (LLMs) to write blog posts. Since English is not my first language, I expected LLMs to be especially helpful, but I didn’t enjoy the experience as much as I thought I would.

The Problems of Using LLMs to Write for You

One of the main issues I encountered was losing my “voice.” The articles didn’t seem like something I would write. Even though the final versions were at least 70% written by me, something felt off. Sometimes the tone was too happy, and other times it was filled with comparisons that I wouldn’t make. It just didn’t feel like me.

Another problem was the “fabrications” (previously called “hallucinations”): LLMs tend to create plausible but false examples. For instance, while writing about Alice in Wonderland for an upcoming post, GPT-4 fabricated a fact that would have been perfect for my story if it were true: Alice goes through a door with the number 42 on it. However, she doesn’t. These fabrications were interesting and plausible, but reviewing and verifying them ultimately cost me more time than the LLM saved.

What Worked Well

Some aspects of using LLMs worked really well. Generating images for blog posts using Stable Diffusion is something I’ll definitely continue doing. Searching for suitable, free-to-use images can be time-consuming. Generating images with AI is much faster, and the results are usually good enough, often exceeding my expectations, especially when I use my dog as the subject.
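
If you want to try this yourself, here is a minimal sketch using the Hugging Face diffusers library, which is one of several ways to run Stable Diffusion; the model checkpoint and prompt below are placeholders, not my exact setup.

```python
# Minimal sketch: generate a blog-post image with Stable Diffusion via the
# Hugging Face diffusers library. The checkpoint and prompt are placeholders.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # any Stable Diffusion checkpoint works
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # move to GPU if one is available

prompt = "a golden retriever typing a blog post on a laptop, watercolor style"
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("post-image.png")
```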

Another useful feature was converting a text post into a more structured Quarto Markdown file with a header and sections. While it’s not a significant time-saver, it improves the organization with minimal effort, making it worthwhile.

Creating titles for my posts also worked well. I am used to writing posts for LinkedIn, which don’t have titles. An LLM can read a text and generate a suitable title in seconds, usually doing a much better job than I would. This is another time-saving benefit.

Lastly, LLMs can check my grammar. As a non-native English speaker, I make many mistakes, but a single pass of an LLM improves my writing without sacrificing my “voice.”

Using the Semantic Kernel

This weekend, I began using the Microsoft Semantic Kernel to chain several LLM actions together. I’m still learning how to use it, but I’m already impressed. I’m using it to generate titles, fix my grammar, create images, and convert my text into a Quarto Markdown file. I’ll soon write a blog post showing how to use the Semantic Kernel in Python. If you can’t wait, you can see how I’m using it by looking at the source code for my blog on GitHub.
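
Until that post is ready, here is a rough sketch of the kind of chaining I mean. It assumes the early (0.3-era) Python SDK, so names and signatures may differ in newer releases, and the prompts are simplified placeholders rather than the ones in my repository.

```python
# Rough sketch of wrapping two prompts as Semantic Kernel functions,
# assuming the early (0.3-era) Python SDK; newer releases differ.
import semantic_kernel as sk
from semantic_kernel.connectors.ai.open_ai import OpenAIChatCompletion

kernel = sk.Kernel()
api_key, org_id = sk.openai_settings_from_dot_env()  # reads a local .env file
kernel.add_chat_service("chat", OpenAIChatCompletion("gpt-3.5-turbo", api_key, org_id))

# Placeholder prompts: one pass for grammar, one for a title.
fix_grammar = kernel.create_semantic_function(
    "Fix the grammar of the following text, keeping the author's voice:\n{{$input}}",
    max_tokens=1000, temperature=0.0,
)
make_title = kernel.create_semantic_function(
    "Suggest a short, descriptive title for the following blog post:\n{{$input}}",
    max_tokens=30, temperature=0.7,
)

draft = open("draft.txt").read()
print(fix_grammar(draft))  # grammar pass
print(make_title(draft))   # title suggestion
```

The real chain in my repository also covers the image generation and the Quarto Markdown conversion; the sketch above only shows the basic pattern of registering a model once and reusing the same kernel across prompt functions.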