How We Test: The Methodology Behind Every Recommendation
Most hosting review sites never buy a single plan. We buy every account with our own money, test for 6+ months, and publish raw benchmark data. Here's exactly how every test, score, and ranking on ThatMy.com is produced — step by step, with real examples.
Most Hosting Reviews Are Based on Nothing. Ours Are Based on This.
I'm going to be blunt: the majority of hosting review websites on the internet have never purchased a single hosting plan. They rewrite the host's marketing page, add an affiliate link, and call it a "review." No tests. No data. No accountability.
I know this because I've read hundreds of competitor reviews while building ThatMy.com. I can tell in seconds which reviewers have never logged into a cPanel dashboard, never SSH'd into a server, never run a TTFB test. Their reviews say things like "blazing fast servers" and "excellent uptime" without a single number to back it up.
This page exists to show you — in uncomfortable detail — exactly what I do differently. Not because I think you'll enjoy reading about my testing methodology (you probably won't). But because I want you to understand why my recommendations look different from every other site.
Here's a quick example of why methodology matters:
What Most Review Sites Do
Read the host's marketing page. Copy the feature list. Add affiliate link. Say "great for beginners!" Collect $150 commission. Never buy the plan. Never test the server. Never read the TOS.
What ThatMy.com Does
Buy the plan at retail price. Install WordPress. Run TTFB tests with no CDN. Stress-test at 250 concurrent users. Identify the CPU model. Look up PassMark ranking. Read the full TOS. Search TrustPilot for complaints. Publish the raw numbers.
The result? My rankings often disagree with the "consensus." Bluehost is #1 on most review sites. On ThatMy.com, it's on the avoid list. Because when you actually test it — 480ms TTFB, 306% renewal price increase, 4,200+ 1-star TrustPilot reviews — the data tells a very different story than the marketing.
Below is every step of my process, in order, with real examples from actual tests I've run.
We Buy Real Hosting Plans (With Our Own Money)
Every host reviewed on ThatMy.com is purchased with my own credit card at retail pricing. I don't use vendor-provided "press accounts," free review accounts, or demo environments that could have boosted performance. The plan I test is the exact same plan you'd buy if you clicked "Sign Up" right now.
This matters more than you think. Multiple hosts have offered me free accounts for review purposes. I've turned down every single one. Here's why:
By purchasing at retail price through the normal checkout flow, I get the same experience you get. Same server assignment. Same resource allocation. Same support queue. If the checkout process has confusing upsells, I experience them. If the onboarding is slow, I experience that too.
What about the cost? Maintaining active hosting test accounts isn't cheap. I currently have active accounts on ScalaHosting, Cloudways, Kinsta, Hostinger, Contabo, ChemiCloud, and several others — costing roughly $200/month in hosting bills alone. These accounts exist solely for testing. I fund this through the affiliate commissions I earn when readers sign up through my links. It's a self-sustaining cycle: honest reviews → reader trust → affiliate revenue → more test accounts → more honest reviews.
For a few hosts (GoDaddy, Bluehost, HostGator), I've completed testing and since closed the account. For these, I note the test date in my reviews and flag when data might be outdated. I'll reopen accounts and re-test if I have reason to believe the host has significantly changed their infrastructure — which happens more often than you'd think.
Standardized WordPress Setup (Same Config, Every Host)
The single biggest flaw in most hosting benchmarks is uncontrolled variables. If you test Host A with a lightweight theme and Host B with a bloated page builder, the difference you measure is the theme, not the server. Your benchmark is useless.
Every host I test gets the exact same WordPress installation: same WordPress version, same theme, same test content, and no CDN or caching plugins. The only variable left is the server itself.
Why no CDN or caching plugins? Because I'm testing the server, not the caching layer. A CDN can mask a slow server by serving cached static files from edge nodes. That's great for production websites — but it tells you nothing about the underlying server performance. When you run WooCommerce, process forms, or access your admin dashboard, those requests bypass the CDN and hit the actual server. That's the speed that matters, and that's what I measure.
I do note what server-side caching the host provides by default (LiteSpeed Cache on LiteSpeed hosts, Varnish on some VPS platforms, OPcache settings) because this is part of the hosting product. But I never add external caching that you'd have to install yourself — because then I'd be testing the plugin, not the host.
In early 2025, I tested Hostinger's Premium plan with this standardized setup. Their TTFB came in at 182ms — decent, but far from their advertised "fastest hosting." Hostinger's marketing page shows speed test results taken with LiteSpeed Cache fully tuned, Cloudflare CDN active, and an optimized theme. My test strips all of that away. The 182ms is what your server actually does before any optimization.
Is 182ms bad? No. It's acceptable for shared hosting. But it's not the "30ms" their marketing implies, and it's significantly slower than ScalaHosting's 28ms VPS or ChemiCloud's 95ms shared hosting. Context matters. Raw numbers don't lie.
TTFB & Speed Testing (The Most Important Number)
Time To First Byte (TTFB) is the single most important hosting speed metric. It measures how quickly the server responds to a request — the time from your browser sending a request to receiving the first byte of the response. This includes DNS resolution, TCP connection, TLS handshake, and server processing time.
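Those phases can be inspected directly with curl's built-in timing variables. A minimal sketch (the URL is a placeholder; point it at the server you want to measure):

```shell
# Break a single request's TTFB into its phases via curl's timers.
# The URL below is a placeholder, not one of the tested hosts.
URL="${URL:-https://example.com/}"
curl -o /dev/null -s -w \
  'dns=%{time_namelookup}s tcp=%{time_connect}s tls=%{time_appconnect}s ttfb=%{time_starttransfer}s\n' \
  "$URL"
```

Note that curl reports these cumulatively: time_connect includes the DNS lookup, and time_starttransfer is the full TTFB, everything up to the first byte of the response.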
TTFB tells you how fast the server is, independent of your website's code, theme, or content. A 50ms TTFB means the server is fast. A 500ms TTFB means the server is slow. No amount of image optimization, minification, or lazy loading can fix a slow server — because TTFB happens before any content is even delivered.
How I test TTFB:
I send 50 requests to each test site with curl -o /dev/null -s -w "TTFB: %{time_starttransfer}\n" and record every response time.

Why median instead of average? Because one slow response out of 50 can spike the average by 100ms. The median gives you the "typical" experience — what you'd feel when loading the site on a normal visit. I report both median and P95 (the 95th percentile — the response time that 95% of requests beat) so you can see both the normal case and the worst case.
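The sampling and summarizing steps can be sketched in a few lines of shell. The five values below are made up for illustration (a real run uses 50 live samples); notice how the single 295ms outlier moves the P95 but leaves the median alone:

```shell
# Real sampling loop (needs a live URL, so it's commented out here):
#   for i in $(seq 1 50); do
#     curl -o /dev/null -s -w '%{time_starttransfer}\n' "$URL"
#   done > samples.txt
# Summarize median and P95 from sorted samples:
printf '%s\n' 0.031 0.028 0.027 0.030 0.295 |
  sort -n | awk '{ a[NR] = $1 } END {
    m = (NR % 2) ? a[(NR + 1) / 2] : (a[NR / 2] + a[NR / 2 + 1]) / 2
    i = int(NR * 0.95); if (NR * 0.95 > i) i++   # ceiling gives the P95 rank
    printf "median=%.3fs p95=%.3fs\n", m, a[i]
  }'
# -> median=0.030s p95=0.295s
```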
Look at that table. ScalaHosting at 28ms. GoDaddy at 620ms. That's a 22x difference in server response time. Both are hosting WordPress. Both are sold as "fast hosting." But one responds before you blink and the other takes longer than a Google search to process a single request.
This is why TTFB testing matters. Without it, you're relying on the host's marketing claim of "fast servers" — which every host claims, including GoDaddy.
Load Testing — What Happens When Real Traffic Hits
A server can look fast with 1 visitor. The real test is what happens under load — when 50, 100, or 250 people are all requesting pages at the same time. This is where shared hosting plans fall apart and VPS plans prove their value.
I use Loader.io to simulate concurrent traffic. The test ramps up gradually: 10 users, 25, 50, 100, 150, 200, 250. At each level, I measure response times and error rates: how much the server slows down under load, and whether it starts failing outright with errors such as 503s.
Why this matters for you: If you're building a business website, you will eventually have traffic spikes. A blog post goes viral on Reddit. A product launch drives 500 visitors in an hour. A Black Friday sale. Your host needs to handle these spikes without crashing, throttling, or throwing errors. Load testing tells you whether it will.
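A rough sketch of the measurement side, assuming you post-process a log of HTTP status codes (one per line). The codes below are fabricated; a real run would produce them via concurrent requests or a Loader.io export:

```shell
# A real log might be generated with something like:
#   seq 1 100 | xargs -P 50 -I{} curl -s -o /dev/null -w '%{http_code}\n' "$URL" > codes.txt
# Fabricated codes stand in for a live run here:
printf '%s\n' 200 200 200 503 200 200 503 200 200 200 > codes.txt
total=$(grep -c '' codes.txt)          # count every line
errors=$(grep -cv '^200$' codes.txt)   # count non-200 responses
echo "requests=$total errors=$errors error_rate=$(( errors * 100 / total ))%"
# -> requests=10 errors=2 error_rate=20%
```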
Look at Bluehost and GoDaddy — they didn't even survive the load test. The servers returned 503 errors before we hit 100 concurrent users. These are the same hosts that millions of people are using for their business websites. They literally cannot handle a moderately popular blog post.
ScalaHosting's VPS barely flinched — 10% degradation at 250 users. That's because you get dedicated CPU cores on a VPS. Nobody else's traffic affects your performance. On shared hosting, your site shares CPU with hundreds of other accounts, and when one of them spikes, everyone suffers.
CPU PassMark Verification — Exposing the Hardware Gap
Hosting companies love to say "powerful servers" and "high-performance infrastructure" without telling you what CPU you're actually running on. I find out. Here's how:
On VPS/Cloud hosts (where I have SSH access): I run cat /proc/cpuinfo to get the exact CPU model. Then I look up that model on PassMark's CPU Benchmark database, which ranks 1,190 server-class CPUs by performance.
On shared hosts (where SSH is limited): I check the host's blog, support documentation, and help articles for CPU model mentions. If that fails, I contact support directly and ask. Some hosts will tell you. Others won't — which tells you something too.
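The VPS-side check is a one-liner. It's sketched here against a sample /proc/cpuinfo line so the parsing is visible without server access (the CPU string is just an example):

```shell
# On a live VPS, replace the printf with:  grep -m1 'model name' /proc/cpuinfo
printf 'model name\t: AMD EPYC 9474F 48-Core Processor\n' |
  cut -d: -f2- | sed 's/^ *//'
# -> AMD EPYC 9474F 48-Core Processor
```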
Why CPU model matters: The gap between a 2016 Intel Xeon E5-2650 and a 2024 AMD EPYC 9474F is massive: several times faster per core, and more than 10x in total throughput. Hosting companies that use older CPUs can offer lower prices because the hardware is depreciated. But your WordPress site executes PHP on a single thread, so single-thread CPU performance directly determines your page generation speed.
ScalaHosting's AMD EPYC 9474F at rank #31 has a PassMark score of 102,432 points. Bluehost's Intel Xeon E5-2650 v4 scores approximately 8,200 points. That's a 12.5x performance gap in raw CPU power. And yet both are sold as "hosting" at similar price points.
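That gap is just the ratio of the two PassMark scores:

```shell
# 102,432 (EPYC 9474F) divided by ~8,200 (Xeon E5-2650 v4)
awk 'BEGIN { printf "%.1fx\n", 102432 / 8200 }'
# -> 12.5x
```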
This is why I rank CPUs in every review. You can't compare hosting plans without knowing what hardware you're running on. A $3/mo shared plan on a 2024 EPYC processor will outperform a $10/mo plan on a recycled 2016 Xeon — and you'd never know without checking.
12-Month Uptime Monitoring
Speed doesn't matter if your site is down. I monitor every host continuously for a minimum of 12 months using third-party uptime monitoring services. Every outage is logged with the exact duration, time, and whether the host acknowledged it.
Why 12 months? Because any host can have a good week. Even GoDaddy can show 100% uptime for 30 days. The patterns that matter — recurring outages, maintenance window issues, degraded performance during peak hours — only emerge over months of continuous monitoring.
How I monitor: third-party uptime services check every active host around the clock and alert me the moment a site stops responding, so no outage goes unrecorded.
What I've learned from uptime monitoring: Most reputable hosts deliver 99.95%+ uptime consistently. The real differences emerge in how they handle incidents — scheduled maintenance communication, incident response time, and transparency about root causes. ScalaHosting has been the most transparent: they publish incident reports and proactively notify clients. Some hosts (I won't name names... actually, yes I will — GoDaddy and Bluehost) don't acknowledge outages at all unless you open a support ticket first.
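To put those percentages in concrete terms, here is the downtime a given uptime figure allows per 30-day month:

```shell
# Convert uptime percentages into allowed minutes of downtime per 30-day month.
awk 'BEGIN {
  mins = 30 * 24 * 60                      # 43,200 minutes in a 30-day month
  n = split("99.9 99.95 99.99", u, " ")
  for (i = 1; i <= n; i++)
    printf "%s%% uptime -> %.1f min/month down\n", u[i], mins * (100 - u[i]) / 100
}'
```

So 99.95% still allows roughly 21.6 minutes of downtime a month; the difference between hosts shows up in whether they spend that budget on communicated maintenance or on unexplained outages.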
Renewal Price & TOS Deep-Dive (Where the Scams Live)
This is the step that most reviewers skip entirely — and it's the one that saves you the most money. The intro price is marketing. The renewal price is what you actually pay. And the Terms of Service is where the fine print hides the real resource limits, suspension policies, and billing traps.
For every host, I document the advertised intro price, the real renewal price, the percentage increase between them, and the resource limits, suspension policies, and billing terms buried in the Terms of Service.
See the difference? ScalaHosting's renewal is $4.95 — a 68% increase. That's manageable. SiteGround's renewal is $17.99 — a 502% increase. If you signed up for 36 months at $2.99/mo and forgot to cancel before renewal, your next year costs $215.88 instead of $35.88. That's not a price increase — that's a completely different product at 6x the cost.
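The SiteGround figures above, worked through explicitly:

```shell
# Intro $2.99/mo vs. $17.99/mo renewal (the SiteGround example above).
awk 'BEGIN {
  intro = 2.99; renew = 17.99
  printf "increase=%.0f%% year-at-intro=$%.2f year-at-renewal=$%.2f\n",
         (renew - intro) / intro * 100, intro * 12, renew * 12
}'
# -> increase=502% year-at-intro=$35.88 year-at-renewal=$215.88
```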
Why I read the full TOS: Because the marketing page says "unlimited" and the TOS page says "25% CPU, 200,000 inodes, 1GB RAM, 20 entry processes." The TOS is the legal document. The marketing page is an advertisement. I quote the TOS.
TrustPilot, Reddit & Complaint Ecosystem Research
Speed tests and pricing analysis tell you what the product does. Complaint research tells you what happens when things go wrong — which they will, eventually, with every host. I want to know: When customers have problems, does the host fix them or fight them?
On TrustPilot, I read through recent reviews, weighting the 1-star reviews heavily, and track which complaint categories recur: account suspensions, billing disputes, support failures.

On Reddit, I search the host's name and read what real users report in threads the host doesn't control.
Why complaint research matters: A host with 100ms TTFB and great benchmarks can still ruin your business if their support is terrible, their billing is predatory, or they suspend accounts without warning. I've seen hosts with excellent technical performance but abysmal customer treatment. My reviews include both dimensions — because you deserve to know what happens when something breaks.
Ownership research: I also check who owns the hosting company. Private equity acquisitions are a reliable predictor of quality decline. When Newfold Digital acquired Bluehost and HostGator, support quality dropped and prices rose. When DigitalOcean acquired Cloudways, the same pattern began. I track ownership changes via Crunchbase, Wikipedia, and SEC filings because they affect your experience 12-24 months after the acquisition — long after the "everything stays the same" press release.
How All 8 Steps Come Together Into a Ranking
After completing all 8 steps for a host, I have a comprehensive profile: raw server speed, load resilience, CPU power, uptime reliability, true pricing, TOS fine print, and real user complaints. Now I rank.
My ranking factors, weighted by importance, cover performance, pricing, uptime, support quality, reputation, and features.
Why performance is weighted highest (30%): Because it's the one thing you can't fix yourself. You can add a CDN. You can optimize your images. You can install a caching plugin. But you can't make a slow server fast. Server performance is the foundation — everything else is built on top of it.
Why features are weighted lowest (5%): Because features are commoditized. Every host offers free SSL, one-click WordPress, and automated backups. These are table stakes in 2026, not differentiators. I won't rank a host higher because they include "free domain" when their server is 3x slower than the competition.
Things You'll Never See on ThatMy.com
Transparency isn't just about what I do — it's about what I refuse to do. You'll never see paid placements on this site, rankings influenced by commission rates, sponsored "reviews," or recommendations for hosts I haven't tested.
I'd rather lose affiliate revenue by being honest than gain revenue by misleading the people who trust me. That's not just ethics — it's good business. Readers who trust you come back. Readers you've deceived don't.
— Mangesh Supe, Founder of ThatMy.com

Testing Never Stops — Our Ongoing Monitoring Process
My testing isn't a one-time event. Hosting companies change — they upgrade servers, raise prices, get acquired, change TOS policies, switch data centers. A review from 2023 can be dangerously outdated in 2026. Here's how I keep everything current:
Quarterly re-testing: Every active host gets re-benchmarked at least once per quarter. I re-run TTFB tests, check for CPU upgrades/downgrades, and verify current pricing. If a host has changed significantly, I update the review within two weeks.
Real-time uptime monitoring: Uptime checks run 24/7 on every active host. I get alerts within 60 seconds of any downtime event. This data continuously feeds into my uptime assessments.
Pricing surveillance: Hosting companies change prices without announcing it. I manually check pricing pages monthly and compare against my records. When I find a change, I update every page that references that host's pricing.
Ownership & industry tracking: I monitor hosting industry news for acquisitions, leadership changes, and infrastructure updates. When DigitalOcean acquired Cloudways, I re-evaluated and updated my Cloudways recommendation within a month — including projections about likely price increases that turned out to be accurate.
TrustPilot & Reddit monitoring: I check complaint ecosystems monthly for each major host. A sudden spike in 1-star reviews about a specific issue (e.g., "account suspended," "billing dispute") triggers an immediate review update.
The Exact Tools I Use (No Secret Sauce)
I'm not hiding my methodology behind proprietary tools. Here's every tool I use — most are free. You could replicate my tests yourself if you wanted to.
For speed testing, plain curl: curl -o /dev/null -s -w "TTFB: %{time_starttransfer}" with timing breakdowns. Free. Reproducible. No third-party interpretation layer.

For CPU identification, cat /proc/cpuinfo reveals the exact CPU model on VPS/cloud hosts. On shared hosts: support inquiries, blog posts, and documentation analysis.

None of these tools are proprietary or expensive. The "secret" to ThatMy.com's methodology isn't special software — it's the willingness to actually spend the time and money to do the testing. Most review sites don't test because testing takes months and costs thousands of dollars. Writing a fake review from the marketing page takes 30 minutes and costs nothing.
Questions About Our Testing (Answered Honestly)
"Do you earn affiliate commissions?"
Yes. When you click an affiliate link on ThatMy.com and sign up for hosting, I earn a commission. This is disclosed on every page. Affiliate commissions are how I fund the $200+/month in test hosting accounts and the thousands of hours of research. But — and this is the critical difference — I recommend hosts based on test data, not commission rates. Bluehost pays me more per signup than ScalaHosting. I recommend ScalaHosting anyway. The data decides.
"How do I know your benchmarks are real?"
You don't — and you shouldn't trust anyone blindly. What I can tell you: every benchmark number comes from a test I ran on a plan I purchased. I describe the exact methodology (curl timing, Loader.io parameters, PassMark lookups) so you can replicate the tests yourself. I'd love it if readers verified my numbers. If my data is wrong, tell me — I'll re-test and correct it publicly.
"Why don't you test more hosts?"
Money and time. Each host costs $50-$200 to test (plan purchase + 6 months of monitoring). I currently test 28+ hosts. I prioritize hosts that readers actually consider — which means the major shared hosts, popular managed WordPress platforms, and VPS providers that serve the most customers. If you want me to test a specific host, tell me and I'll add it to the list.
"Doesn't your recommendation of ScalaHosting make you biased?"
Fair question. Yes, I use ScalaHosting for ThatMy.com. Yes, I recommend them. Could I be subconsciously biased? Maybe. Here's my counter: I've switched my #1 recommendation three times in 10 years (SiteGround → Cloudways → ScalaHosting). Each time, I switched because the data told me to, not because of any financial relationship. I recommended SiteGround when they deserved it. I stopped when they didn't. I'll do the same with ScalaHosting if they ever stop earning the #1 spot. My track record shows I follow the data, even when it costs me affiliate revenue.
"Why do your rankings disagree with other review sites?"
Because other review sites don't test. They rank based on commission rates, brand recognition, and recycled opinions. When you actually buy the plans and run benchmarks, the results look very different from the marketing. Bluehost ranks #1 on sites that earn $150 per signup without testing. On ThatMy.com — where I've tested Bluehost and measured 480ms TTFB, 306% renewal markup, and 4,200+ TrustPilot complaints — it's on the avoid list. The data is clear. My rankings reflect the data.
"How often do you update your reviews?"
Target: every review updated at least twice per year. High-priority pages (top recommendations, competitive comparisons) are updated quarterly. If a host changes pricing, upgrades hardware, or has a major incident, I update the relevant pages within two weeks. Every page shows its "Last Updated" date.
"Can hosting companies pay to improve their ranking?"
No. Not for any amount of money. I've been asked. I've declined. If that ever changes, ThatMy.com will be shut down the same day, because the site's entire value is built on honest rankings.
See the Data in Action
Now that you know how we test, explore the results. Every ranking, review, and comparison below was produced using the methodology you just read.

