My Useful AI Slop

October 15, 2025

It's been almost three years since the first ChatGPT release, over two years since Cursor AI, and about half a year since Claude Code. Meanwhile, our companies have switched to AI-first mode, whatever that means. Excitement mixes with disillusionment, wonder with worry about the future. I'm sure you've taken part in some heated discussions about AI recently, too.

Skeptics usually point to the unstable quality of the output—AI can imitate quite sophisticated reasoning and spout complete nonsense within the span of a couple of messages in a single conversation. This, they claim, eventually nets close to zero added value at best. AI can reinforce our biases, misguide us, or simply lie to us.

On the other hand, sometimes the very same people, at different times, report productivity boosts unlike anything seen before. They generate precise reports, documentation, or even whole applications in programming languages they don't even know. So companies decide to go all-in with AI to capture these benefits for themselves.

I've experienced all of the above. This love-hate sinusoid was already oscillating at quite a high frequency. However, I realized that large language models are not going away, and we'll have to learn how to use them to stay competitive in the job market. At the very least, to manage and summarize the AI slop generated by everybody else!

TL;DR

If you take only one thought away from this post, let it be this: AI is a Pareto Rule Amplifier.

It will get you 80% of the way there in 20% of the time... no, in no time. Sometimes, 80% is enough. That's why you don't use Google anymore, why Stack Overflow is dead, why your boss started coding prototypes, and why you just diagnosed yourself and got your parking ticket canceled using Claude. The trick is recognizing that you never know what you don't know: when that 80% is really enough, and when either you or someone more competent needs to dive deeper, with or without AI.

My Successful Use Cases

I've had many frustrations with AI, mainly because I wanted it to do too much, or I delegated ownership of the architecture completely. While assistants have become more than mere auto-completion, I failed to get them to design a more sophisticated program in a framework unknown to me, in a programming language I'm not an expert in.

That said, here are some of my successes with using AI in various domains.

  • Created a simple speech-to-text tool to enhance my Linux workflows, released to the AUR. Vibe-coded in Rust, btw. I don't know Rust and don't wish to dive into it, but the tool solves my and other people's problems, so that's good enough.
  • Learned a lot about Polish farmland legislation while trying to start a homestead—tbh, no lawyer I've met was competent enough. It actually saved me from buying a bad plot of land.
  • Optimized discounts at home appliance stores—I just gave it the products and the discount rules, and it designed the orders for me, saving 100 bucks over what I wanted to do initially.
  • Built a Claude Code agent for generating project documentation rooted in the codebase. Instead of reading random files in the project to onboard myself, I can now read a crafted document that introduces me to the project from a given point of view, step by step, diagrams included!
  • Created this blog application so I can self-host it. Just for fun.
  • Fine-tuned my dotfiles and my Linux configuration with Claude Code—it has never been easier to manage my workstation.
  • Searched for reliable local companies by digesting their reviews.
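As a rough sketch, the documentation agent above could be defined as a Claude Code subagent: a markdown file with YAML frontmatter dropped into the project. The file path, tool list, and prompt below are illustrative assumptions, not my exact setup:

```markdown
---
name: doc-writer
description: Generates step-by-step onboarding documentation rooted in the codebase.
tools: Read, Grep, Glob, Write
---

You are a documentation writer for this repository. Given a point of
view (e.g., "request lifecycle" or "data model"), explore the codebase
first, then write a step-by-step onboarding document under docs/ that
cites concrete files and includes Mermaid diagrams where they help.
Never describe code you have not read.
```

The frontmatter restricts the agent to read-mostly tools plus `Write` for emitting the document, which keeps it from wandering into code changes while exploring.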

Nowadays, there is no paywall on knowledge and no language barrier. There are plenty of opportunities to leverage LLMs to our benefit, as long as we verify their responses with common sense. I think there is also room for AI in software engineering. However, it amplifies the existing developer experience: if yours was terrible before AI, it will be even worse now.