I Tried Automating AI News for a Year. It Nearly Destroyed My Reputation
Chasing speed and scale with AI taught me the hard way that trust matters more than views.

Spoiler: the easiest money on the internet usually costs you your reputation first.
Last November, I opened my email and saw the subject line:
“Your story was removed for violating our rules on misinformation.”
It was a Medium notification referring to an article I barely remembered writing.
The worst part? The story was technically “mine,” but the words were not. They were stitched together by an AI model, fact‑checked by absolutely no one, and pushed out in a rush to test a “news automation system” I had convinced myself was clever.
It was supposed to be a harmless experiment.
Instead, it became a very clear mirror: I had turned myself into exactly the kind of creator I couldn’t stand.
Before: The Slow, Boring Way I Used to Write
For years, my process for anything news‑related was embarrassingly simple:
Read primary sources (reports, filings, studies)
Compare at least two reputable outlets
Take notes, then write my own explanation, with links
Hit publish only if I felt I could defend every line in a comment war
This meant I rarely broke news.
By the time my piece went live, Twitter had already torn the story apart, TikTok had reassembled it into conspiracy content, and four Medium “thought leaders” had squeezed it into “5 lessons from [news event] that will change your life.”
But the readers I did have were loyal. I’d get emails like:
“I wait for your version before I decide what I think about this.”
Slow, careful, not very scalable, mildly satisfying.
Then AI writing tools showed up, and I decided being careful was overrated.
The Trigger: A Single Viral “AI News” Article
The shift started with one Medium article that landed in my feed.
You know the type:
A generic news topic
Big, dramatic headline
“I used AI to write this in 7 minutes, here’s how you can too”
A few vague references to “fact‑checking with Google”
It had tens of thousands of views and a shower of claps.
No sources. No links. No disclosure that the article itself was 90% machine‑written beyond one throwaway sentence buried near the bottom.
Two days later I saw almost the same piece on Substack. Different author, different screenshots, same story:
“I automated my news writing and now I make $X per month while I sleep.”
That’s when the thought hit me:
“If this junk is doing that well, what would happen if I did it properly?”
Properly, in my head, meant:
Use AI to generate first drafts
Add my own analysis on top
Keep an eye on factual accuracy
I told myself it would free me up to focus on “the higher‑order thinking.”
In reality, it quietly turned me into a fact‑checking department for a bullshit factory.
The Build Phase: My “Responsible” AI News Machine
I spent three weekends putting together a small system.
I called it, somewhat proudly, NewsLoop.
Here’s what that looked like in boring detail:
1. Topic Feed
I subscribed to:
20+ RSS feeds from mainstream outlets
A couple of niche industry newsletters
A Google News alert for specific key phrases
All this fed into a Notion database where each “story” was just four fields (there’s a small code sketch after this list):
Headline
1–2 line summary
Source link
Tags (tech, policy, business, etc.)
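None of this required real engineering. For the curious, here is a minimal sketch of that feed step in Python. It is not my exact script: it assumes the open-source feedparser library, and the feed URLs and tagging rules are placeholders I made up for illustration.

```python
# Minimal sketch of the "topic feed" step, not the production script.
# Assumes: pip install feedparser. URLs and tags below are placeholders.
import feedparser

FEEDS = {
    "tech": "https://example.com/tech/rss",      # placeholder URL
    "policy": "https://example.com/policy/rss",  # placeholder URL
}

def pull_stories(feeds):
    """Flatten each feed entry into the four fields the Notion database held."""
    stories = []
    for tag, url in feeds.items():
        for entry in feedparser.parse(url).entries:
            stories.append({
                "headline": entry.get("title", ""),
                "summary": entry.get("summary", "")[:280],  # keep to 1-2 lines
                "source_link": entry.get("link", ""),
                "tags": [tag],
            })
    return stories

if __name__ == "__main__":
    for story in pull_stories(FEEDS):
        print(story["headline"], "->", story["source_link"])
```

That is the whole trick: a loop over other people’s headlines. Pushing the rows into Notion was just one more API call on top.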
2. AI Drafting Block
For each story that looked promising, I’d paste the summary + source into an AI tool with a structured prompt:
Explain the story in plain language
Pull out 3–5 implications
Offer one contrarian take
Suggest a clear headline and subhead
The model was fast. A 1,200‑word draft in under 20 seconds.
I tuned the prompts to avoid obvious “AI voice” giveaways: no bullet‑point spam, no overexcited tone, no “as we can see.”
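If you want to see the shape of it, the drafting block boiled down to something like the sketch below. I am assuming an OpenAI-style chat API purely for illustration; the tool I actually used matters less than the prompt, and the model name is a placeholder.

```python
# Rough reconstruction of the drafting step, not the production code.
# Assumes: pip install openai, with OPENAI_API_KEY set in the environment.
# The model name and prompt wording are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

DRAFT_PROMPT = """You are drafting a news explainer.

Source headline: {headline}
Source summary: {summary}
Source link: {source_link}

Write roughly 1,200 words that:
1. Explain the story in plain language.
2. Pull out 3 to 5 implications.
3. Offer one contrarian take.
4. Suggest a clear headline and subhead.
Avoid bullet-point spam, an overexcited tone, and filler like "as we can see"."""

def draft_story(story: dict) -> str:
    """Turn one story record (headline, summary, source_link) into a draft."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any capable model works
        messages=[{"role": "user", "content": DRAFT_PROMPT.format(**story)}],
    )
    return response.choices[0].message.content
```

Notice what the prompt never asks for: verification. Nothing in that loop checks a single claim against a primary source.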
3. Human Edit (Me, allegedly)
My plan:
Check every claim against the source article
Add missing context from at least one more source
Rewrite any paragraph that sounded generic
Insert my own analysis and examples
The first few times, I did all of this.
Each edit pass took 45–60 minutes, and the result was… okay.
Not great, but certainly not worse than most “hot take” posts that were doing numbers.
So I scaled:
Week 1: 3 stories
Week 2: 5 stories
Week 3: 9 stories
And somewhere around Week 4, my standards started to slip.
The Realization: I Was Shipping Polished Misinformation
The email from Medium about “misinformation” was the slap in the face.
But the rot had started before that.
The first crack: a reader comment.
“Your piece says ‘experts agree’ but links to one tweet and a blog post. Where are the experts?”
They were right.
AI had confidently written: “Many experts claim…” based on exactly one person’s quote in the original article. I had skimmed past it, assumed it was paraphrasing something real, and left it in.
Second crack: numbers.
After a month of this system, my stats looked like this:
27 “news style” posts published
Average writing time per post: ~30 minutes
Average views per post: 1,200
New email subscribers: 93
Objectively, not bad.
But I did a simple comparison that hurt more than any hate comment.
I pulled up my last 10 “old style” pieces from before NewsLoop:
Average writing time per post: 4–6 hours
Average views per post: 800
New email subscribers: 71
Almost the same subscriber growth, less reach, much more work.
By the metrics, the AI‑assisted stuff was “winning.”
So why did it feel awful?
Because when people replied to the new posts, the tone had shifted.
Old replies:
“This helped me understand what’s happening.”
“I hadn’t seen this angle anywhere else.”
New replies:
“Source?”
“This sounds like clickbait, where’s the actual reporting?”
“This reads like those auto‑generated sites that spam Google.”
I had built a system that was good at impressions and bad at trust.
And I hadn’t noticed, because I was checking view counts, not reputational damage.
How Fake “AI News” Actually Gets Made (And Why It Feels Real)
Once I was inside my own machine, I started seeing the pattern everywhere.
Most AI‑news grifts follow the same simple structure:
1. Start from someone else’s reporting. Open a mainstream article, copy the main facts.
2. Feed it to an AI and ask for “a new angle.” “Write a news explainer in a critical tone” or “summarize for beginners.”
3. Strip out uncertainty. The original article might say “preliminary evidence suggests”; the AI version ends up with “studies show.”
4. Remove the links. Why risk sending someone away? Also, no links means fewer ways to prove laziness.
5. Add a confident headline. “X Is Broken” or “Nobody Is Talking About Y.”
6. Publish at speed and volume. Ten shallow takes beat one careful piece in most recommendation algorithms.
To a casual reader, it feels like commentary.
To someone who follows the primary sources, it’s obvious parroting with added certainty and removed nuance.
And the weirdest part: a lot of creators doing this don’t think of themselves as malicious. They think of it as “content repurposing” or “news curation with AI assistance.”
That’s exactly what I told myself too.
The Impact (Yes, Including Money)
I kept NewsLoop running for three months before I shut it down. Here’s what actually happened during that period:
Posts published with AI assistance: 61
Average time per post: 35–40 minutes
Total Medium views (estimated): ~78,000
New email subscribers: 212
Direct earnings from Medium Partner Program: $527
Indirect earnings (coaching clients citing those articles): $0
Compare that to the three months before:
Posts published manually: 14
Total Medium views: ~19,000
New email subscribers: 134
Partner Program earnings: $301
Indirect earnings (consulting, workshops): ~$2,400
The ugly truth: the low‑effort AI‑news content made more platform pennies but zero real business.
It attracted the wrong people:
Drive‑by readers
Comment warriors
“Gotcha” types who enjoy pointing out tiny errors
The slow pieces where I actually read studies, linked sources, and admitted uncertainty?
Those were the ones people forwarded to their teams.
Those were the ones that got me emails like, “Can you do a session for our company on this?”
Views vs. trust is not a fair fight. Trust loses quietly, then disappears completely.
What I Got Wrong (And Why I Killed the System)
Looking back, my mistakes weren’t technical. They were about incentives.
Here’s where I misjudged things:
1. I Treated AI Like a Junior Writer, Not an Unreliable Parrot
I expected the model to “do the boring part” and leave me with the smart bits.
What it actually did was:
Copy the source structure
Smooth over nuance
Amplify whatever bias was in the original reporting
It was like paraphrasing a paraphrase.
If I didn’t manually trace every claim back to a primary source, I was just polishing someone else’s misread.
2. I Optimized for Throughput Instead of Spine
Once I saw I could publish almost daily, I started measuring myself by “output.”
That pushed me to:
Accept weaker ideas
Publish pieces I wasn’t fully proud of
Let small inaccuracies slide
The math was simple in my head: if each mediocre article brought in 3 new subscribers, that was still a win.
The math I didn’t do: one badly wrong article can cost you 30 readers who won’t come back.
3. I Assumed Platforms Would Handle the Worst Abuses
Medium had made noises about AI disclosures. Substack had its own “trust and safety” messaging.
I assumed the worst offenders—people clearly auto‑posting unedited AI sludge or outright fake news—would be filtered out.
They weren’t.
In fact, I routinely saw AI‑generated news summaries getting boosted and recommended, including a few that were obviously hallucinated.
My little “semi‑responsible” system sat in an ethical gray zone: not fake, but not rigorous either. I used the existence of louder offenders as an excuse.
“If they don’t care about that, my stuff is fine.”
It wasn’t.
A Better Way: How I Use AI Now Without Lying to Myself
After the takedown email, I shut NewsLoop off entirely for two weeks.
Then I rebuilt my workflow with one rule:
AI can help me think, but it cannot be the voice I publish under my name.
Concretely, that looks like this now:
Research Aid, Not Content Mill
I’ll paste a dense report into a model and ask,
“What are 5 questions a non‑expert would have about this?”
It’s great at surfacing angles I might miss.
Outline Partner, Not Ghostwriter
I use it to outline structures:
“Given these notes, suggest three ways to organize an article.”
Stupidity Checker, Not Fact Source
Occasionally I’ll ask,
“What are obvious counterarguments to this claim?”
Sometimes it reminds me of perspectives I should at least acknowledge.
What I don’t do anymore:
Ask it to draft entire sections
Let it summarize news articles I haven’t read myself
Use any phrase it outputs that sounds generic or overconfident
This has a painful side effect: my throughput is back down.
I publish maybe one substantial piece a week now, sometimes less. My views dropped. My earnings from platform ads dropped too.
But the emails changed again.
Now I get:
“This felt like an actual person wrote it.”
“I trust your stuff more than the algorithm’s recommendations.”
That’s the only metric I care about for anything touching news, policy, or real people’s lives.
If You Want to Use AI for News Writing Without Becoming “That Guy”
I’m not going to pretend there’s a pure‑ethics, high‑volume, high‑income hack here.
There isn’t.
But if you’re tempted to use AI for newsy content—Medium, Substack, or your own blog—this is the simple framework I wish someone had smacked me with earlier.
1. Decide Which Role AI Plays (Pick One)
You get one of these per piece:
Research assistant: helps you find angles and questions
Structure assistant: helps with outlines and flow
Style assistant: helps you tighten language you already drafted
If you notice it doing all three, it’s basically the writer—and you’re the editor stapling your name on top.
2. Set a Non‑Negotiable “Human Time” Per Article
Pick a minimum, and honor it.
For news‑adjacent pieces, mine is:
45 minutes research (reading sources)
45 minutes writing
30 minutes editing
If I’m not willing to give a topic that much attention, I don’t publish on it.
3. Track Trust, Not Just Traffic
Create a simple log:
How many people replied to the piece?
Did anyone ask for clarification or sources?
Did anyone reference it in a context that matters (team email, Slack, meeting)?
Those signals mean more than views or claps.
If your trust signals go down while your views go up, you’re sliding into content farming, whether you admit it or not.
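If you want that log to be more than a good intention, one minimal way to keep it is a plain CSV you append to after each piece. These field names are just my suggestion, not a standard.

```python
# A simple trust log: one row per published piece, appended to a CSV.
# Field names mirror the three questions above; adjust to your own signals.
import csv
from datetime import date
from pathlib import Path

LOG = Path("trust_log.csv")
FIELDS = ["date", "title", "views", "replies",
          "clarification_requests", "real_world_mentions"]

def log_piece(title, views, replies, clarification_requests, real_world_mentions):
    """Append one row; the interesting columns are the last three, not views."""
    new_file = not LOG.exists()
    with LOG.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow({
            "date": date.today().isoformat(),
            "title": title,
            "views": views,
            "replies": replies,
            "clarification_requests": clarification_requests,
            "real_world_mentions": real_world_mentions,
        })

log_piece("Example post", views=1200, replies=4,
          clarification_requests=2, real_world_mentions=1)
```

If replies and real-world mentions sit at zero while views climb, that is the slide into content farming, in black and white.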
4. Be Honest in the Byline
If AI materially touched the text, say so.
You don’t need a neon disclaimer. A line like:
“Drafted with the help of an AI tool; all analysis, editing, and mistakes are mine.”
That one sentence does more for long‑term credibility than any clever headline.
The Bigger Shift: What Kind of Writer Do You Want To Be?
The day I killed NewsLoop, I opened my drafts folder and deleted 14 half‑finished “AI news” posts.
Each one represented about 20 minutes of “work.”
It felt like dragging junk out of a closet I’d been avoiding opening.
Since then, I’ve had to sit with a basic question:
Am I here to generate content or to build a body of work I can stand behind in five years?
AI makes it dangerously easy to confuse the two.
Especially on platforms that reward speed and quantity, you can ride the fake‑news‑but‑not‑quite wave for a while—slightly warmed‑over headlines, pseudo‑original takes, confident summaries built on shaky reporting.
The money might even be decent if you play the game hard enough.
But the bill comes due quietly.
It arrives as people who stop taking you seriously, who hesitate before quoting you, who put your work in the same bucket as everything else stamped “probably machine‑generated.”
I’d rather write less and be wrong occasionally than publish more and be unmoored from what’s actually true.
The tools aren’t the problem.
The problem is when we let them nudge us from “this helps me think” to “this thinks for me.”
That’s the line I crossed without noticing.
I don’t plan to cross it again.
About the Creator
abualyaanart
I write thoughtful, experience-driven stories about technology, digital life, and how modern tools quietly shape the way we think, work, and live.
I believe good technology should support life.