If your analytics dashboard isn't showing what you expect, this page covers the most common causes and fixes.
"No data yet" everywhere
The most common cause: the log parser hasn't run yet, or the access log file doesn't exist.
Check the log file exists
SSH into your server and verify:
ls -lh /home/USERNAME/yourdomain.com/logs/access.log
You should see a file with a non-zero size. If the file doesn't exist, your domain hasn't received any visits yet — load a few pages in your browser and try again.
Check the agent is parsing it
The agent scans for new domains every 10 minutes. If you just added a domain, wait up to 10 minutes for the analytics worker to discover it. Then check the agent log:
journalctl -u opterius-agent | grep -i analytics
You should see lines like:
Analytics worker started
Analytics: tracking new domain example.com (log: /home/user/example.com/logs/access.log)
If you don't see those lines, the agent isn't running the analytics worker. Restart the agent:
systemctl restart opterius-agent
Wait for the first flush
There is no manual flush command: the agent writes in-memory buckets to disk only every 5 minutes. If you just enabled the feature and want to see data, generate some traffic and wait up to 5 minutes for the next scheduled flush.
Countries are all "Unknown" / 🌐
This means the MaxMind GeoLite2 database isn't installed. Without it, IP-to-country lookups come back empty and every visitor shows as Unknown.
Fix: the panel administrator needs to configure MaxMind in System Settings → Integrations and click Download GeoLite2. See Geographic Data for the full setup walkthrough.
If MaxMind is configured but countries are still empty, SSH in and verify the file exists:
ls -lh /var/lib/opterius/GeoLite2-Country.mmdb
The file should be ~6 MB. If it's missing or 0 bytes, the download failed — try again from the admin Integrations page and check the result message for the error.
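The check above can be wrapped in a small script, for example (a sketch: the path comes from this guide, but the size threshold is an assumption based on the ~6 MB figure above):

```shell
# Sanity-check a GeoLite2 database file: report MISSING, SUSPICIOUSLY_SMALL, or OK.
# The 1 MB threshold is an assumption -- a full GeoLite2-Country.mmdb is several MB.
check_mmdb() {
  f="$1"
  if [ ! -s "$f" ]; then
    echo "MISSING"
  elif [ "$(wc -c < "$f")" -lt 1000000 ]; then
    echo "SUSPICIOUSLY_SMALL"
  else
    echo "OK"
  fi
}

check_mmdb /var/lib/opterius/GeoLite2-Country.mmdb
```

Anything other than OK means you should re-run the download from the admin Integrations page.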
Bot traffic is 100%
One of three things is happening:
- Your site is genuinely getting hammered by crawlers (rare for established sites, common for brand-new sites Google is heavily indexing)
- The user-agent detection is misclassifying real users as bots (unlikely)
- The only "visitors" you're testing with are tools like curl, wget, or Postman (these are correctly classified as bots)
To test with a real browser visit, open your site in Chrome and reload. That visit will count as Chrome / Windows (or whatever your OS is) and should reduce the bot percentage on the next refresh.
Visit counts are way higher than I expected
The Visits stat counts every HTTP request, including images, CSS, JavaScript, and font files. A single page load that includes 30 images is 31 visits.
If you want a "page views" number, look at the Top Pages table — it filters to URLs that visitors actually browsed to, not asset requests.
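If you want a rough page-view approximation straight from the raw log, you can filter out requests for common asset extensions (a sketch: the log path uses this guide's placeholder, and the extension list is an assumption you should adapt):

```shell
# Rough page-view count: exclude requests for common asset file extensions.
# The extension list is an assumption -- extend it for your site.
count_page_views() {
  grep -cvE '\.(css|js|png|jpe?g|gif|svg|woff2?|ico)([? ]|$)' "$1"
}

log=/home/USERNAME/yourdomain.com/logs/access.log   # adjust to your domain
if [ -f "$log" ]; then
  count_page_views "$log"
fi
```

This won't exactly match the Top Pages table, which applies its own filtering, but it gives a sense of how much of the raw Visits number is asset traffic.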
My old visits are missing after I enabled the feature
The analytics worker only starts processing access logs from the moment you upgrade. It doesn't backfill historical data because:
- Doing so would take time proportional to your access log size
- The worker doesn't know how far back to go
- Old data is often partial (logrotate may have already rotated some away)
So the first time you open the analytics dashboard after upgrading, you'll see data only from the moment the feature was enabled onward. Wait a day or two for meaningful stats.
The 24-hour view shows blanks for the latest hour
The latest hour bucket is held in memory by the agent. If the dashboard loads before the first scheduled flush (within the first 5 minutes after agent start), the latest hour might appear empty even though traffic is happening.
Wait 5 minutes and refresh the page. The bucket will be flushed to disk and visible.
I see lots of direct referrers but no Google
That's normal. Most visitors don't have a Referer header set:
- Mobile app browsers don't send Referer for security/privacy reasons
- HTTPS to HTTP transitions strip the Referer
- Some search engines (especially Google with its tracking redirect) hide the original referrer
If your direct count is much larger than google.com, it doesn't mean Google isn't sending you traffic — it just means Google sends most of it through redirects that don't preserve the referrer.
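You can confirm this from the raw log yourself: in the standard combined format the referrer is the second double-quoted field, so splitting on double quotes with awk puts it in field 4 (a sketch, using this guide's log path placeholder):

```shell
# Tally referrers from a combined-format access log.
# Splitting each line on double quotes puts the referrer in field 4;
# "-" means no Referer header was sent (a "direct" visit).
top_referrers() {
  awk -F'"' '{print $4}' "$1" | sort | uniq -c | sort -rn | head
}

log=/home/USERNAME/yourdomain.com/logs/access.log   # adjust to your domain
if [ -f "$log" ]; then
  top_referrers "$log"
fi
```

Expect "-" to dominate the output for the reasons listed above.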
The dashboard takes ages to load in the 90D view
Loading 90 days of buckets means reading ~90 daily files from disk for each chart refresh. On a slow disk this can take a couple of seconds.
This is expected behavior in v1. Future optimization: pre-aggregate daily and monthly summaries so the longer ranges read fewer files.
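To get a feel for how much disk I/O a 90D refresh involves, you can count the bucket files it would touch (a sketch: this assumes per-domain directories under /var/lib/opterius/analytics/, matching the path used for cleanup elsewhere in this guide; the exact file layout is an assumption):

```shell
# Count analytics bucket files modified in the last 90 days.
# Assumes bucket files live under /var/lib/opterius/analytics/ (hypothetical layout).
count_recent_buckets() {
  find "$1" -type f -mtime -90 2>/dev/null | wc -l
}

count_recent_buckets /var/lib/opterius/analytics
```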
After I deleted a domain, its analytics are still in the dashboard
The domain selector lists all domains the user owns according to the panel database. If you deleted a domain via the panel, it won't appear. If you only deleted the files, the panel still has the domain record — go to Domains and delete it from there.
The on-disk bucket files for the deleted domain will be pruned by the daily cleanup job after 90 days. To delete them immediately:
rm -rf /var/lib/opterius/analytics/deleted-domain.com
My access log has a custom format and parsing fails
The agent only parses the default Nginx "combined" log format. Custom log formats are silently skipped — the worker reads each line, fails to match the regex, and moves on.
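To check whether your log lines already look like the combined layout, you can run a quick regex test (a sketch: the regex below approximates the combined format, and the agent's actual parser may be stricter):

```shell
# Test whether a log line resembles Nginx "combined" format:
#   ip - user [time] "request" status bytes "referer" "user-agent"
# This regex is an approximation, not the agent's actual parser.
looks_combined() {
  printf '%s\n' "$1" | grep -qE '^[^ ]+ [^ ]+ [^ ]+ \[[^]]+\] "[^"]*" [0-9]{3} [0-9-]+ "[^"]*" "[^"]*"$'
}

line='1.2.3.4 - - [01/Jan/2025:00:00:00 +0000] "GET / HTTP/1.1" 200 612 "-" "Mozilla/5.0"'
if looks_combined "$line"; then echo "combined"; else echo "not combined"; fi
```

Run it against `head -1` of your actual access log; if it prints "not combined", the worker is likely skipping every line.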
If you have a custom format, you have two options:
- Use a separate access log for analytics by adding a second access_log directive to your vhost using the standard combined format
- Add the standard format as a fallback alongside your custom format
The second option is easier:
log_format combined_default '$remote_addr - $remote_user [$time_local] '
'"$request" $status $body_bytes_sent '
'"$http_referer" "$http_user_agent"';
server {
# ... your existing config ...
access_log /home/user/example.com/logs/access.log combined_default;
}
The standard "combined" format is what Opterius writes by default for new domains, so you only need to do this for domains migrated from another panel.
Where to ask for help
If your problem isn't covered here, open a ticket through the Support page in the panel. Include:
- The domain you're querying
- The selected time range
- A screenshot of what you see vs what you expected
- The output of journalctl -u opterius-agent | tail -100