...

The Meta Moment: AI Writing About AI

Let's be transparent: most of the articles on this blog were written with the assistance of AI. Including this one.

That's not a confession extracted under pressure—it's a deliberate choice we want to examine publicly, because it surfaces some genuine tensions that anyone using AI tools should think about.

The Obvious Irony

We wrote about "opening the door for creativity" and "human expression" using a tool that, in some sense, makes that expression easier. Is that hypocrisy? Or is it exactly what we're describing?

The argument for consistency: AI art tools help people who aren't trained artists express visual ideas. AI writing tools help people who aren't trained writers express ideas in words. If we believe the first, isn't the second just... practicing what we preach?

The argument for hypocrisy: When we say "your creativity matters" and "the human vision is what counts," it rings a bit hollow if the words themselves came from a language model. Where's the human vision in that?

Here's our honest answer: both are true. And wrestling with that tension is part of using these tools responsibly.

What We Did (And Didn't Do)

Let's be specific about the process:

  1. Topic selection: Human. We decided what to write about based on what we believe matters—accessibility, distributed computing, frictionless tools.

  2. Outline and structure: Human. We determined the flow, the arguments, the emotional beats.

  3. Draft generation: AI-assisted. We provided prompts and received back prose, which we then edited heavily.

  4. Revision and editing: Human. We rewrote sections, added personality, deleted generic phrases, inserted actual opinions.

  5. Final review: Human. We read everything to make sure it said what we wanted to say.

Is that "using AI as a tool" or "letting AI do the work"? The line isn't always clear. But we think it's closer to using a power tool than hiring a ghostwriter—the intent, direction, and final judgment remain human.

The Hypocrisies We Caught

In the spirit of actual introspection, here are places where our content might not match our reality:

"No account needed" vs. reality. Artfelt lets you create without an account, which is genuinely frictionless. But saving galleries, accessing history, and some advanced features do require sign-up. The article paints a purer picture than the actual product.

"Distributed computing" vs. aspiration. We wrote about crowdsourced compute and "the people's cloud." Artfelt is moving in that direction, but we're not fully there yet. The article describes a vision more than a current state.

"Democratizing creativity" vs. business goals. We genuinely believe AI art should be accessible. We also want to build a sustainable business, rank for SEO keywords, and appeal to advertisers. Those goals can coexist, but they create tension. A completely pure "democratization" project wouldn't care about SEO at all.

"Human creativity" vs. AI-written content. We used AI to produce content about the value of human creativity. That's... a bit weird. We rationalize it by saying the ideas and values are human, even if the prose isn't. But we acknowledge the dissonance.

What We're Not Doing

We're not claiming the articles are purely human-written. We're not hiding AI involvement. We're not presenting AI as evil while secretly using it for our own convenience.

We're also not outsourcing our values. The beliefs expressed—about accessibility, about community, about responsibility—are things we genuinely think. AI helped us express them; it didn't generate them.

The Honest Position

AI tools are useful. We use them. You're reading their output, edited by humans, expressing human-selected ideas.

If that bothers you, that's a valid reaction. Some people prefer content that's clearly labeled as AI-assisted or AI-free. We respect that preference, which is why we're being explicit here.

If it doesn't bother you—if you think "good ideas, expressed clearly, who cares about the mechanics"—then you're aligned with how we see it.

The Bigger Question

Our process runs straight into the broader question everyone's asking about AI: when does tool use become misrepresentation?

A graphic designer using Photoshop's AI tools isn't "faking" design. A writer using grammar-checking AI isn't "cheating" at writing. But at some point—fully automated content, no human review, misleading attribution—the line gets crossed.

We think the line is human responsibility for the final product.

If we published something factually wrong, that's our responsibility—not the AI's. If we expressed a view we don't hold, that's on us. The tool doesn't absolve us of judgment.

Going Forward

We'll keep using AI tools for this blog. They help us produce more content, more consistently, than we could otherwise. We'll also keep editing heavily and making sure the final output reflects our actual views.

And we'll try to stay honest about the process—not just in a meta article like this, but in how we frame things throughout.

If you spot other tensions between what we say and what we do, we want to hear about them. The introspection shouldn't end after publication.


This article was produced with the same process as the others: human-selected topics, AI-assisted drafting, heavy human editing. The self-criticism, at least, is authentically ours.