Our Review Methodology

How we actually test AI tools.

We don't review tools from press releases. We sign up, we pay, we use them for real creator work, and we score them across six dimensions. Here's the exact 12-point rubric every tool goes through before we publish a word.

Why this page exists

Most AI tool directories scrape a pricing page, copy a feature list, and slap an affiliate link on top. Google's Helpful Content Update specifically targeted that pattern in 2024, and it kept tightening in 2025 and 2026. More importantly, it's useless to you as a creator. Knowing a tool "uses AI" and costs "$19/month" doesn't tell you if it actually works on your weekly show.

So we test every tool on the real work we actually ship: our own YouTube videos, our own podcast episodes, our own newsletter, our own social posts. The rubric below is the exact process. If a review on this site doesn't meet all 12 points, it doesn't publish.

The 12-point rubric

1. Hands-on access — real account, real work

We sign up with a creator email, go through onboarding like any user would, and start a real project inside the tool. No "watch the demo video" reviews. If a tool is gated behind sales, we book the sales call and log the experience. If the free tier gates the useful features, we say exactly what you get for zero dollars.

2. Pay-tier testing — we hit the paywall

For every tool that has paid plans, we upgrade to at least one paid tier and run it for a minimum of seven days before writing the review. Lifetime deals and refundable subscriptions get tested on the paid tier, then downgraded. Pricing claims in our reviews reflect what you're actually charged at checkout, including common add-ons.

3. Real creator workload

We use the tool on a genuine creator task that matches its category. Video editors cut our actual podcast video. Thumbnail generators make thumbnails for YouTube videos we actually ship. Newsletter tools send real newsletters to real subscribers. If it breaks on real work, that's in the review.

4. Output quality scoring

We keep a sample library of original content (scripts, raw footage, audio, product shots) that we run through every tool in a category. That way, when we compare two AI video editors, we're comparing them on the same 40-minute podcast video, not whatever happened to be on each reviewer's desk. Output gets scored out of 10 based on usability without manual cleanup, fidelity to the source, and final export quality.

5. Ease of use scoring

We log onboarding friction, hidden settings, and time-to-first-useful-output. A tool with a steep learning curve that ultimately rewards you gets scored differently from one that is confusing and still mediocre. We note where creators will get stuck.

6. Pricing value scoring

We line up what you get at every tier against what competitors give you at the same price. Tools with deceptive pricing (credits that expire, generations that run out in a week, feature paywalls not disclosed on the pricing page) lose points here and we call it out in the "What Annoys Me" section.

7. Features vs. category peers

Every tool is scored against its top three competitors in the same category. "Good transcription" means nothing in isolation; "better transcription than Descript, on par with Otter, worse than Riverside" means something. We show the work.

8. Speed benchmarks

We time how long a tool takes to produce its core output on a standard test input: a 15-minute podcast, a 90-second TikTok script, a 10-slide carousel, a 1,500-word newsletter draft. Times are logged on a clean office machine with a wired connection.
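
As a rough sketch of how timings like these can be captured (the task below is a placeholder, not our actual harness):

```python
import time

def benchmark(task_name, task_fn, runs=3):
    """Time a task several times and report each run plus the median."""
    timings = []
    for i in range(runs):
        start = time.perf_counter()
        task_fn()
        elapsed = time.perf_counter() - start
        timings.append(elapsed)
        print(f"{task_name} run {i + 1}: {elapsed:.1f}s")
    print(f"{task_name} median: {sorted(timings)[len(timings) // 2]:.1f}s")

# Placeholder task: a real run would submit the standard test input
# (e.g. the 15-minute podcast) to the tool and wait for its output.
benchmark("transcribe 15-min podcast", lambda: time.sleep(0.1))
```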

9. Platform reality check

Mac, Windows, web, mobile — we confirm what actually works, not what the marketing page claims. If the Mac app is beta quality, we say so. If the iOS app lacks half the web features, we say so.

10. Export quality

For anything that exports (video, audio, images, documents), we download the export, check the file against the source, and run it through whatever downstream platform a creator would use — YouTube Studio, TikTok, Spotify for Podcasters, Substack, Gmail. Exports that look fine in-app but fail at upload earn a mention.

11. Support responsiveness

We file a genuine support ticket during every review. Response time, whether the answer solves the problem, and whether we get pushed to self-serve docs for basic questions all factor into the scorecard.

12. Re-test cadence

Every review gets revisited on a rolling 90-day cycle. Pricing changes, feature drops, and new competitors trigger an earlier re-test. The "Last tested" date at the top of each review is real. If we haven't tested it in more than 180 days, that review gets flagged as out of date.
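
A minimal sketch of those cadence rules, assuming the same 90- and 180-day thresholds (the function and status labels are illustrative, not our actual tooling):

```python
from datetime import date

# Illustrative version of the cadence described above: revisit at 90 days,
# flag as out of date past 180. Names are hypothetical.
def review_status(last_tested: date, today: date) -> str:
    age = (today - last_tested).days
    if age > 180:
        return "out of date"      # flagged on the review page
    if age >= 90:
        return "due for re-test"  # enters the rolling re-test queue
    return "current"

print(review_status(date(2026, 1, 10), date(2026, 8, 1)))  # out of date
```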

Our six scored dimensions

Every tool review ends with a scorecard. Each dimension is scored out of 10, using the rubric above. The Overall score is a weighted average, with weights varying by category (ease of use matters more for beginner categories; output quality matters more for professional categories). Weights are always shown on the review itself.
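
As a minimal sketch of how a weighted overall works (the dimension names and weights below are hypothetical examples, not our published weights for any category):

```python
# Hypothetical weights for a beginner-facing category; real weights vary
# by category and are shown on each review. Dimension names are examples.
weights = {"output_quality": 0.20, "ease_of_use": 0.30, "pricing_value": 0.20,
           "features": 0.10, "speed": 0.10, "support": 0.10}

scores = {"output_quality": 7.5, "ease_of_use": 9.0, "pricing_value": 6.0,
          "features": 7.0, "speed": 8.0, "support": 5.5}  # each out of 10

# Overall = sum of (weight x dimension score) across all six dimensions.
overall = sum(weights[d] * scores[d] for d in weights)
print(f"Overall: {overall:.2f}/10")
```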

Who reviews what

Tool reviews on InfluencerAI are written by our two co-founders, Fredrik Filipsson and Morten Andersen. Every review shows which of us wrote it and when it was last tested. Bylines are real humans, not pseudonyms. We link our LinkedIn profiles on every byline so you can verify.

Affiliate relationships and editorial independence

We use affiliate links. Some of the tools we review pay us a commission when you try a paid plan through our link. Affiliate status does not change our scores, our rankings, or our recommendations. We have affiliate deals with tools we've scored under 6/10, and those reviews say the tool is weak and recommend alternatives. Scores are set before any affiliate link is added, by the reviewer, not by the partnerships side of the business. For the complete policy, see our affiliate disclosure.

What would make us remove or rewrite a review

A tool ships a major update that changes our score by more than 1.0 points. Pricing changes by more than 20% on any tier. A security, privacy, or legal issue emerges that creators should know about. A reader sends us evidence that something in the review is factually wrong (we publish corrections visibly, with a strikethrough on the original claim and a dated note). We do not remove reviews because the vendor asked us to.
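
Those thresholds reduce to a simple check. A hypothetical sketch (field names are illustrative, not our actual tooling):

```python
# Illustrative version of the re-review triggers described above.
def needs_rereview(old_score, new_score, old_price, new_price,
                   safety_issue=False, factual_error=False):
    score_shift = abs(new_score - old_score) > 1.0
    price_shift = old_price > 0 and abs(new_price - old_price) / old_price > 0.20
    return score_shift or price_shift or safety_issue or factual_error

print(needs_rereview(7.5, 6.2, 19, 19))  # True: score moved 1.3 points
print(needs_rereview(8.0, 8.0, 19, 24))  # True: price up ~26%
print(needs_rereview(8.0, 7.5, 19, 20))  # False: within both thresholds
```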

Got a problem with a review?

Email hi@influenceraiagents.com or reach us via the contact form. If you spot a factual error, we fix it the same week. If you run a tool we've reviewed and want to respond publicly, we'll link to your response at the top of the review.