Website Security: Why AI-Generated Passwords Could Be A Risk For Teams And Clients
By Neil Kilgallon on Friday, 20 February 2026
With nearly 30 years of experience in web design, most of them as Technical Director, I have seen significant changes in online security, particularly in recent years. In this article, I discuss AI-generated passwords, using real-life examples and client stories. Over the years, I have had the privilege of working with numerous banking, fintech, and healthcare clients, such as Crown Agents Bank, Zenith, Meniga, JJC Fintech, and Hamad Medical. All of these clients hold extremely sensitive information, and security considerations are paramount.
This article isn't about criticising AI-generated passwords; it aims to highlight potential risks so you can be better prepared and aware. It will also praise the benefits and solutions that AI can create in this field.
‘Technical delivery is of a super high standard, and the team is incredible.’
Yasmeen Jaffer, JJC Fintech
Growing Role Of AI
AI has evolved from a futuristic concept into a workplace asset across many industries, particularly in IT. These statements are backed by reputable sources: the Adobe/Digital Marketing Institute's AI Marketing Trends report (2026) notes that 71% of businesses have integrated GenAI into their workflows, and Gartner predicts that worldwide AI spending will total $2.52 trillion in 2026. This is not a passing trend but a transformation in how AI will increasingly be used for project delivery.
As a consequence, website owners and developers are increasingly relying on AI platforms to generate passwords. In web development, below are five examples of where AI could be used for password generation:
- When setting up a CMS account (WordPress, ProcessWire, Drupal) for an administration account.
- For hosting control panels, such as Plesk.
- eCommerce platforms such as WooCommerce or Shopify.
- When configuring an email account for the mailbox password.
- When creating gated content for clients.
Random Generation Versus Secure Cryptographic Randomness
If you have generated a password using AI, it certainly looks very random and near impossible to guess. In reality, however, it is a statistically likely output drawn from learned probability distributions. Below are two explanations straight from the horse's mouth: I asked ChatGPT and Gemini the following question, and their responses follow.
Question. Show me facts for this statement - AI language models generate outputs based on learned patterns and probabilities.
Gemini:
The Autoregressive Mechanism (Token Prediction)
Large Language Models (LLMs) are autoregressive, meaning they generate text one piece (token) at a time. For each new character or word, the model computes a probability distribution over its entire vocabulary.
- The Fact: The model doesn't ‘know’ a password; it calculates which character is most likely to follow the previous one based on its training data.
- The Risk: If the training data contains millions of examples of human-made passwords, the AI will naturally lean toward those human patterns (e.g., placing symbols at the end or starting with a capital letter).
ChatGPT:
Large Language Models Are Trained on Probability Prediction
Models such as OpenAI’s GPT-4 and GPT-3 are built on a transformer architecture trained to predict the next most likely token (word or character) in a sequence, based on patterns learned from training data.
This process is known as next-token prediction.
The model does not ‘generate randomness’ - it generates statistically likely outputs based on learned probability distributions.
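The distinction between probability-weighted sampling and uniform random sampling can be sketched in a few lines of Python. The character weights below are invented purely for illustration; real LLM biases are subtler, but the effect is the same: some outputs become far more likely than others.

```python
import random
import secrets

ALPHABET = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!@#$%"

# Invented bias: pretend the "model" strongly prefers a handful of
# characters it has seen often in training data.
WEIGHTS = [5 if c in "vL2PX9" else 1 for c in ALPHABET]

def biased_password(length: int = 20) -> str:
    """Probability-weighted sampling, loosely mimicking next-token prediction."""
    return "".join(random.choices(ALPHABET, weights=WEIGHTS, k=length))

def uniform_password(length: int = 20) -> str:
    """Uniform sampling from a CSPRNG - every character equally likely."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))
```

Over many runs, the biased generator will over-produce the weighted characters, while the uniform generator shows no such preference.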
Below is a 20-character password generated by AI platforms. They were created on 20th February, 2026 at 09:45.
| Platform | 20-Character Password |
| --- | --- |
| ChatGPT | T9#vL2@qZ7!mX4$rP8^c |
| Gemini | kP9#vL2!mX7*rQ4&nZ9$ |
| Grok | X7$kP9mW#vL2qT8rN4jZ |
The pattern is striking: the string vL2 appears in all three. There is a simple reason for this - LLMs are trained and tuned to produce readable text, so they lean toward familiar sequences and tend to avoid repeated characters, such as ‘aa’, for readability.
Even more alarming, 13 specific characters (# $ 2 4 7 9 L P X Z m v r) are shared across all three variations.
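You can verify the overlap yourself. A few lines of Python confirm the 13 shared characters and the repeated vL2 substring in the three samples above:

```python
# The three AI-generated passwords from the table above.
samples = {
    "ChatGPT": "T9#vL2@qZ7!mX4$rP8^c",
    "Gemini": "kP9#vL2!mX7*rQ4&nZ9$",
    "Grok": "X7$kP9mW#vL2qT8rN4jZ",
}

# Characters present in every one of the three passwords.
shared = set.intersection(*(set(p) for p in samples.values()))
print(len(shared))                                 # 13
print(all("vL2" in p for p in samples.values()))   # True - same 3-char run
```

Truly uniform 20-character strings over a ~70-symbol alphabet would be very unlikely to share this much structure.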
Passwords need to be highly unpredictable and random, not tied to any structure or rules. This is also known as high-entropy randomness. The gold standard for unpredictability is achieved through cryptographically secure pseudo-random number generators (CSPRNGs). This is what reputable password managers such as 1Password, NordPass, and Bitwarden use.
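A minimal sketch of the CSPRNG approach such managers take, using Python's standard `secrets` module. The alphabet and length here are my own choices for illustration, not any vendor's actual configuration:

```python
import math
import secrets
import string

# 52 letters + 10 digits + 8 symbols = 70-character alphabet.
ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*"

def generate_password(length: int = 20) -> str:
    """Each character is drawn uniformly by the OS-level CSPRNG."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# Entropy of a uniformly random string: length * log2(alphabet size).
entropy_bits = 20 * math.log2(len(ALPHABET))  # ~122.6 bits
```

Because every character is drawn independently and uniformly, the entropy calculation holds; an LLM's output offers no such guarantee, whatever the string looks like.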
A recent study by Irregular, a cybersecurity firm, found that 16-character passwords generated by popular LLMs often have as little as 27 bits of entropy, compared to the 98 bits found in truly random strings. To put this into context, a modern GPU in 2026 can brute-force 27 bits in seconds, whereas 98 bits would take trillions of years.
| Method | Entropy (bits) | Estimated Time To Crack (2026 Technology) |
| --- | --- | --- |
| AI-Generated (LLM) | 27 bits | Seconds to hours |
| CSPRNG (Password Manager) | 98 bits | Trillions of years |
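The orders of magnitude here are easy to reproduce. Assuming, purely for illustration, a rig that tests ten billion guesses per second (the rate is my assumption, not a measured figure):

```python
def crack_time_years(entropy_bits: int, guesses_per_second: float = 1e10) -> float:
    """Worst-case time to exhaust a 2**entropy_bits keyspace, in years."""
    seconds = 2 ** entropy_bits / guesses_per_second
    return seconds / (365.25 * 24 * 3600)

print(crack_time_years(27))   # tiny fraction of a year - cracked in under a second
print(crack_time_years(98))   # ~1e12 years - on the order of a trillion
```

Every extra bit doubles the keyspace, which is why the gap between 27 and 98 bits is not 3.6x but roughly 2^71, or about two sextillion times.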
The key takeaway: AI-generated passwords follow probability and prediction logic, whereas CSPRNG-generated passwords are mathematically random.
Weaponization: Hackers Using PassGAN
PassGAN is a password-guessing tool often used by attackers; the name combines ‘password’ with GAN (Generative Adversarial Network). PassGAN can generate plausible passwords because of its architecture: it learns the rules humans follow when creating passwords and produces high-quality guesses that mimic real ones. The problem is compounded when an attacker knows which platform created the password, as they can then tailor the attack to that model.
This can be problematic for websites, and we have had numerous clients approach us with breaches of their CMS. Tools such as PassGAN underscore why we prioritise security and help educate our clients on best practices. One of the key practices we always reiterate to clients is to use a strong password: a minimum of 16 characters, and truly random.
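As a minimal sanity check for that advice, the sketch below tests length and character mix. The class requirements are a simplification of our guidance, and note that true randomness cannot be verified after the fact - which is exactly why the generation method matters:

```python
import string

def meets_minimum_policy(password: str, min_length: int = 16) -> bool:
    """Length plus mixed character classes - necessary, but not sufficient."""
    classes = (string.ascii_lowercase, string.ascii_uppercase,
               string.digits, string.punctuation)
    return (len(password) >= min_length
            and all(any(c in cls for c in password) for cls in classes))

print(meets_minimum_policy("T9#vL2@qZ7!mX4$rP8^c"))  # True
print(meets_minimum_policy("Summer2026!"))           # False - only 11 chars
```

A password can pass this check and still be weak if it was produced by a predictable generator, so the check complements, rather than replaces, using a CSPRNG.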
Research from early 2026 shows that while a traditional ‘brute-force’ tool might try trillions of useless combinations, a PassGAN-informed attack is 24% more efficient at cracking NIST-compliant passwords because it ‘understands’ human and LLM bias.
“Our valued clients in banking, fintech, and healthcare trust us to lock down back-ends and databases - a philosophy we apply to every website we build.”
Neil Kilgallon, Technical Director, ID Studio
Our Take
As an agency with nearly three decades of technical experience in securing websites for financial and medical clients handling extremely sensitive information, we understand the importance of rigorous security. While AI offers many benefits and can be very helpful, there are no shortcuts when it comes to security. Entry points should follow zero-trust principles, and under no circumstances should you give hackers an advantage by using potentially vulnerable passwords. Our advice is to use CSPRNG-based managers - this is what we recommend to our clients and follow ourselves. If you can’t fully trust an AI-generated password, using it would violate our zero-trust principles.
Sources:
- digitalmarketinginstitute.com
- gartner.com
- theregister.com
- researchgate.net