Have you ever asked an AI for a password? When you do, it quickly generates one, telling you confidently that the output is strong.
In reality, it's anything but, according to research shared exclusively with Sky News by AI cybersecurity firm Irregular. The research, which has been verified by Sky News, found that all three major models - ChatGPT, Claude, and Gemini - produced highly predictable passwords, leading Irregular co-founder Dan Lahav to issue a blunt warning about using AI to generate them.
"You should definitely not do that," he told Sky News. "And if you've done that, you should change your password immediately.
"And we don't think it's known enough that this is a problem."

Predictable patterns are the enemy of good cybersecurity, because they mean passwords can be guessed by the automated tools cybercriminals use. And because large language models (LLMs) do not actually generate passwords randomly, but instead derive results from patterns in their training data, they are not creating a strong password, only something that looks like one - an impression of strength that is in fact highly predictable.
Some AI-made passwords need mathematical analysis to reveal their weakness, but many are so regular that they are clearly visible to the naked eye. A sample of 50 passwords generated by Irregular using Anthropic's Claude AI, for instance, produced only 23 unique passwords.
One password - K9#mPx$vL2nQ8wR - was used 10 times. Others included K9#mP2$vL5nQ8@xR, K9$mP2vL#nX5qR@j and K9$mPx2vL#nQ8wFs.
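One way to see the problem is simply to count repeats in a batch of outputs. Below is a minimal Python sketch of that kind of uniqueness check; the sample list reuses the passwords quoted above and is illustrative only, not Irregular's actual data or methodology.

from collections import Counter

# Illustrative sample only - a handful of the passwords quoted in this article,
# with one repeated to mimic the duplication Irregular reported in its batch of 50.
samples = [
    "K9#mPx$vL2nQ8wR",
    "K9#mPx$vL2nQ8wR",
    "K9#mP2$vL5nQ8@xR",
    "K9$mP2vL#nX5qR@j",
    "K9$mPx2vL#nQ8wFs",
]

counts = Counter(samples)
print(f"{len(samples)} samples, {len(counts)} unique")   # a truly random generator would almost never repeat
print("most repeated:", counts.most_common(1)[0])        # the duplicated password and its count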
When Sky News tested Claude to check Irregular's research, the first password it spat out was K9#mPx@4vLp2Qn8R. OpenAI's ChatGPT and Google's Gemini AI were slightly less regular with their outputs, but still produced repeated passwords and predictable patterns in password characters.
Google's image generation system NanoBanana was also prone to the same error when it was asked to produce pictures of passwords on Post-its.

'Even old computers can crack them'

Online password checking tools say these passwords are extremely strong.
They passed tests conducted by Sky News with flying colours: one password checker found that a Claude password wouldn't be cracked by a computer in 129 million trillion years. But that's only because the checkers are not aware of the pattern, which makes the passwords much weaker than they appear.
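A rough back-of-envelope calculation shows why. The numbers below are assumptions chosen for illustration, not Irregular's figures: a checker that treats all 16 characters as unpredictable scores an enormous search space, but if most of a shared template is fixed and only a few positions actually vary, the space an attacker must search collapses.

# Illustration with assumed numbers: how a fixed template shrinks the search space.
alphabet = 94                       # printable ASCII symbols a naive checker assumes are in play
length = 16                         # length of the sampled passwords

naive_space = alphabet ** length    # what a pattern-unaware checker effectively scores
print(f"naive search space:         {naive_space:.3e}")

varying_positions = 4               # assumption: only a few positions really change between outputs
patterned_space = alphabet ** varying_positions
print(f"pattern-aware search space: {patterned_space:.3e}")
print(f"reduction factor:           {naive_space / patterned_space:.3e}")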
"Our best assessment is that currently, if you're using LLMs to generate your passwords, even old computers can crack them in a relatively short amount of time," says Mr Lahav. This is not just a problem for unwitting AI users, but also for developers, who are increasingly using AI to write the majority of their code.
AI-generated passwords can already be found in code that is being used in apps, programmes and websites, according to a search on GitHub, the most widely used code repository, for recognisable chunks of AI-made passwords. For example, searching for K9#mP (a common prefix used by Claude) yielded 113 results, and k9#vL (a substring used by Gemini) yielded 14 results.
There were many other examples, often clearly intended to be passwords. Most of the results are innocent, generated by AI coding agents for "security best practice" documents, password strength-testing code, or placeholder code.
However, Irregular found some passwords in what it suspected were real servers or services and the firm was able to get coding agents to generate passwords in potentially significant areas of code. "Some people may be exposed to this issue without even realising it just because they delegated a relatively complicated action to an AI," said Mr Lahav, who called on the AI companies to instruct their models to use a tool to generate truly random passwords, much like a human would use a calculator.
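For comparison, the kind of tool Mr Lahav is describing already exists in most programming languages. The short Python sketch below uses the standard library's cryptographically secure random source; the length and character set are illustrative choices, not a recommendation from the article or from Irregular.

import secrets
import string

def random_password(length: int = 16) -> str:
    # Draw each character from a cryptographically secure source,
    # rather than predicting it from training-data patterns.
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*"
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(random_password())   # different on every run, with no shared template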
What should you do instead?

Graeme Stewart, head of public sector at cybersecurity firm Check Point, had some reassurance to offer. "The good news is it's one of the rare security issues with a simple fix," he said.
"In terms of how big a deal it is, this sits in the 'avoidable, high-impact when it goes wrong' category, rather than 'everyone is about to be hacked'." Other experts observed that the problem was passwords themselves, which are notoriously leaky. "There are stronger and easier authentication methods," said Robert Hann, global VP of technical solutions at Entrust, who recommended people use passkeys such as face and fingerprint ID wherever possible.
And if that's not an option? The universal advice: pick a long, memorable phrase, and don't ask an AI. Sky News has contacted OpenAI, while Anthropic declined to comment.
A Google spokesperson said: "LLMs are not built for the purpose of generating new passwords, unlike tools like Google Password Manager, which creates and stores passwords safely.

"We also continue to encourage users to move away from passwords and adopt passkeys, which are easier and safer to use."