I don’t think the platform or development environment would be a reason for Google to reject your website for indexing. So, moving your site to a WordPress environment is unlikely to improve the situation.
Google can reject or delay a website’s indexing for several reasons. Most are technical or quality-related rather than punitive. Here are the main causes, explained plainly.
The most common reason is that Google cannot access the site properly. If your pages return errors such as 404 (not found), 403 (forbidden), or 5xx server errors, Googlebot may give up. This also happens if your server is slow, times out, or blocks Google’s IP addresses.
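One quick way to check this yourself is to request a page with Googlebot's user-agent string and inspect the status code. Below is a minimal Python sketch using only the standard library; the URL, helper names, and the exact set of "blocking" codes are illustrative assumptions, not an official Google definition:

```python
from urllib.request import Request, urlopen
from urllib.error import HTTPError, URLError

# Illustrative set of status codes that commonly stop Googlebot:
# 403 (forbidden), 404 (not found), and all 5xx server errors.
BLOCKING_CODES = {403, 404} | set(range(500, 600))

def is_blocking_status(code):
    """Return True if this HTTP status would typically prevent crawling."""
    return code in BLOCKING_CODES

def check_page(url, timeout=10):
    """Fetch a URL with a Googlebot user-agent and return the status code.

    Returns None when the request fails entirely (timeout, DNS failure,
    refused connection) -- which blocks crawling just as surely as a 5xx.
    """
    req = Request(url, headers={
        "User-Agent": "Googlebot/2.1 (+http://www.google.com/bot.html)",
    })
    try:
        with urlopen(req, timeout=timeout) as resp:
            return resp.status
    except HTTPError as e:
        return e.code   # e.g. 403, 404, 500
    except URLError:
        return None     # server unreachable or too slow

# Example usage (replace example.com with your own page):
# code = check_page("https://example.com/")
# if code is None or is_blocking_status(code):
#     print("Googlebot would likely fail to crawl this page")
```

Note that some servers respond differently to Googlebot's user agent than to a browser (deliberately or via misconfigured firewall rules), which is exactly why testing with that header is worthwhile.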
Another frequent cause is that indexing has been explicitly blocked. A robots.txt file may be disallowing Googlebot, or your pages may contain a noindex meta tag or HTTP header. This is surprisingly easy to overlook, especially on sites that were recently in development or staging.
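You can verify what your robots.txt actually permits with Python's standard-library `robotparser`. The sketch below parses an illustrative rule set of the kind often left over from a staging site, one that blocks Googlebot from everything; the rules and URLs are examples, not taken from any real site:

```python
from urllib import robotparser

# Illustrative robots.txt that blocks Googlebot from the entire site --
# a common leftover from a development or staging environment.
rules = """\
User-agent: Googlebot
Disallow: /
"""

rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# Googlebot is disallowed everywhere; other crawlers are unaffected.
print(rp.can_fetch("Googlebot", "https://example.com/any-page"))      # False
print(rp.can_fetch("SomeOtherBot", "https://example.com/any-page"))   # True
```

Keep in mind that robots.txt only controls crawling. A `noindex` directive is separate: it lives in the page itself as `<meta name="robots" content="noindex">` or in an `X-Robots-Tag` HTTP response header, so you need to check the page source and headers as well.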
Thin or low-quality content is a major factor. If a site has very little original text, heavily duplicated content, auto-generated pages, or pages that exist mainly to redirect users elsewhere, Google may crawl it but choose not to index it. This often shows up in Search Console as “Crawled – currently not indexed”.
New or low-authority sites can also be ignored temporarily. If a site has no inbound links, minimal content, or no clear topical focus, Google may deprioritise it until it sees signals that the site is legitimate and useful.
Policy or guideline violations can trigger rejection. This includes spammy practices such as keyword stuffing, cloaking, doorway pages, scraped content, or misleading behaviour. In more serious cases, a manual action may be applied, which will be visible in Google Search Console.
Poor site structure can be an issue. If internal links are broken, pages are orphaned, or navigation relies heavily on JavaScript that Google struggles to render, Google may not discover or understand your content well enough to index it.
Duplicate-content or canonicalisation issues are another common cause. If Google decides another URL is the “main” version of your page, it may index that one instead and exclude the version you submitted. This often happens with HTTP vs HTTPS, www vs non-www, or parameterised URLs.
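The usual fix is to declare the preferred URL explicitly with a canonical link tag in the page head. An illustrative example (example.com and the URLs are placeholders):

```html
<!-- In the <head> of https://example.com/page?ref=newsletter -->
<!-- Tells Google that the parameter-free URL is the preferred version -->
<link rel="canonical" href="https://example.com/page">
```

Pairing this with consistent redirects, so that HTTP redirects to HTTPS and www and non-www resolve to a single host, removes the ambiguity that causes Google to pick a version you did not intend.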
Finally, security or trust problems can stop indexing. Sites that are hacked, infected with malware, or lack basic HTTPS security may be excluded to protect users.
If you want to diagnose a specific rejection, Google Search Console is essential. The “Pages” (Indexing) report will usually tell you whether the page is blocked, crawled but not indexed, duplicated, or affected by a policy issue.