Dumb Thoughts on Dumber AI

Generative AI is a stain on creative industries, and an indictment of those who would otherwise employ creatives to do the work they can’t.

Plenty of others have written about this—Rach Smith summarizes well, Louie discusses regularly, and Robb outed escalatory and dishonest scraping practices. I have no new points to add, but adding my reaffirmation to the myriad voices feels good.

We do good with it at Zello. I’m no shill, and I genuinely hope this can be the shape of acceptable use going forward. We use a self-hosted model to transcribe and translate live voice, allowing workers to communicate urgently across linguistic boundaries.

GenAI is an abstraction over mass plagiarism. The popular narrative focuses on trademarked content—oh no, poor Disney—while the true damage disproportionately impacts independent creators.

MidJourney makes for a convenient demonstration of these examples, since it renders the abstract and unrelatable visible. There’s a hierarchy of not-my-jobs, in which plausible deniability, both for plagiarism and for “I could’ve done it myself,” decreases as we move from the written word to visual proof. It’s harder to claim that ChatGPT samples a specific writer than it is to claim that MidJourney copied a specific artist—please prove me wrong, I’d love new fodder. I’m no writer and lack the experience to pull a writer’s voice from aggregate text.

How many generations removed from recognizable plagiarism must GenAI be before it’s acceptable, or does the principle of the matter make it universally unacceptable? This is no catch-all—even when generations removed from primary sources, opt-out models like Figma’s and (supposedly) Perplexity’s are unacceptable. Is MidJourney’s plagiarism more immediately recognizable because it literally contains watermarks, and if so, would we recognize a textual equivalent (or its trivial omission) from ChatGPT? How many levels removed from a primary source, and from how many sources, must a model sample for translation to be an acceptable use?

Even without measurable plagiarism, is the way clear for unquestioning GenAI use? No—the threat to individual livelihoods within affected industries is too great. Textual plagiarism is harder to recognize, but even then, it’s better to pay a writer than to employ a model that ambivalently builds on their work.

None of this even begins to touch on Figma’s tone-deaf, blind “Make Designs,” or their atrocious buck-passing onto the designers whose work was blamed for the feature’s problems.

The moral position is to not use these tools at all, and, when engaging with them, to extensively question and critique. The least one can do is to not blindly use these products or encourage their use.