Published: May 28th, 2025

How to Write Secure AI-Generated Code

Generative AI has quickly become a staple in modern software development. Developers are using tools like GitHub Copilot and ChatGPT to build features, generate tests, and accelerate development timelines. But speed comes with a trade-off. AI may be able to write functional code, but it doesn’t understand context or intent, and it certainly doesn’t understand security.

If you’re relying on AI to help write your code, here’s the reality: unless you’re guiding it intentionally and reviewing its output thoroughly, it will likely introduce risks. That’s because AI models generate what looks statistically correct – not necessarily what’s secure or maintainable.

This article explores how to use AI coding tools without compromising your application’s security posture.

The Hidden Risks of AI-Created Code

AI models are trained on massive datasets, including public repositories and community Q&A forums. While that’s a rich source of examples, it also means AI often reproduces insecure practices that it’s seen before: outdated cryptographic functions, SQL queries without parameterization, or web handlers with no input validation.

In practice, that means developers can end up shipping vulnerable code that “works” – at least until attackers find the gap. These risks aren’t hypothetical. Researchers have already shown how large language models can generate code that’s exploitable, even when prompted with common use cases.

Write Secure Prompts, Not Just Code

The quality and safety of AI-generated code often come down to how you ask for it. Vague prompts tend to produce code that’s generic and potentially insecure. For example, asking for a “login API in Node.js” may return something that stores plain-text passwords or builds SQL queries by concatenating user input.

Instead, you should explicitly ask the AI to use secure components: request password hashing with bcrypt, parameterized queries, and structured validation libraries. The more security expectations you include in the prompt, the more likely the output will reflect them. It’s also worth stating what to avoid – functions like eval, for example, or insecure serialization patterns.
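
To make that concrete, here’s a rough sketch of the kind of output a security-conscious prompt should steer toward: a Node.js login handler that checks passwords with bcrypt and queries the database with parameterized statements. It’s an illustration, not a drop-in implementation; the Express setup, the pg connection pool, and the users table are assumptions for the sketch.

```ts
// A minimal sketch of what a securely prompted login handler might look like.
// Assumes an Express app, a pg connection pool, and a "users" table with
// "email" and "password_hash" columns, all placeholders for illustration.
import express from "express";
import bcrypt from "bcrypt";
import { Pool } from "pg";

const app = express();
app.use(express.json());
const pool = new Pool(); // reads connection settings from environment variables

app.post("/login", async (req, res) => {
  const { email, password } = req.body ?? {};
  if (typeof email !== "string" || typeof password !== "string") {
    return res.status(400).json({ error: "Invalid request body" });
  }

  // Parameterized query: user input is never concatenated into SQL.
  const result = await pool.query(
    "SELECT id, password_hash FROM users WHERE email = $1",
    [email]
  );

  const user = result.rows[0];
  // bcrypt.compare checks the submitted password against the stored hash.
  if (!user || !(await bcrypt.compare(password, user.password_hash))) {
    return res.status(401).json({ error: "Invalid credentials" });
  }

  return res.status(200).json({ userId: user.id });
});
```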

In a team setting, it helps to standardize secure prompt templates so that developers are nudged toward best practices from the start.

Never Skip Review, Even for “Simple” Code

Treat AI-generated code the same way you’d treat code from a junior developer: don’t assume it’s right just because it compiles. Manual review is critical, especially when the code touches authentication, authorization, data access, or any user-facing component.

In addition to code review, apply static analysis and linters with security rules enabled. Tools like SonarQube, Bandit, and ESLint (with security plugins) can catch many of the obvious missteps that AI might introduce. It’s not just about correctness – it’s about risk reduction.
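
As one example, a minimal ESLint flat config that turns on eslint-plugin-security’s recommended rules might look like the sketch below (assuming ESLint 9+ with TypeScript config support and the plugin’s flat-config preset):

```ts
// eslint.config.ts: a minimal sketch, assuming ESLint 9+ with TypeScript
// config support and eslint-plugin-security's flat-config preset.
import pluginSecurity from "eslint-plugin-security";

export default [
  // Enables rules that flag risky patterns such as eval(), non-literal
  // require() paths, and unsafe regular expressions.
  pluginSecurity.configs.recommended,
];
```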

Security testing doesn’t end with static tools. Feeding AI-generated code into your SAST or DAST workflows helps detect deeper issues. If your organization has a security champion or AppSec team, have them weigh in on any AI-heavy codebase contributions.

Validate Everything Because AI Often Doesn’t

Input validation is one of the most frequently overlooked areas in AI-generated code. The code might look correct at a glance, but unless you’ve explicitly asked for it, there’s a good chance it won’t properly validate inputs or escape output.

Always double-check how inputs are handled, whether they come from HTTP requests, command-line arguments, or third-party APIs. Ensure your AI-generated code uses frameworks that support robust validation and sanitization.
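
In a TypeScript codebase, for instance, you might run untrusted input through a schema library such as zod before it reaches any business logic. A minimal sketch (the field names are placeholders):

```ts
// A minimal input-validation sketch using zod; the schema fields are placeholders.
import { z } from "zod";

const SignupSchema = z.object({
  email: z.string().email(),
  age: z.number().int().min(13),
  displayName: z.string().min(1).max(64),
});

export function parseSignup(input: unknown) {
  // safeParse never throws: it returns either the typed data or a list of issues.
  const result = SignupSchema.safeParse(input);
  if (!result.success) {
    const reasons = result.error.issues.map((issue) => issue.message).join(", ");
    throw new Error(`Invalid signup payload: ${reasons}`);
  }
  return result.data;
}
```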

And don’t just stop at validation. Think about encoding, escaping, and safe defaults. AI might not have the full picture of the attack surface you’re dealing with, so it’s your responsibility to review the code with adversarial thinking in mind.
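
Output encoding is a good example: if generated code interpolates user data into HTML, it should escape it first, or better yet lean on a templating framework that escapes by default. A minimal sketch of the manual version:

```ts
// A minimal HTML-escaping sketch; in practice, prefer a templating engine
// or framework that escapes output by default.
const HTML_ESCAPES: Record<string, string> = {
  "&": "&amp;",
  "<": "&lt;",
  ">": "&gt;",
  '"': "&quot;",
  "'": "&#39;",
};

export function escapeHtml(value: string): string {
  return value.replace(/[&<>"']/g, (ch) => HTML_ESCAPES[ch]);
}

// escapeHtml("<script>alert(1)</script>") returns "&lt;script&gt;alert(1)&lt;/script&gt;"
```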

Be Careful with Dependencies

AI doesn’t vet packages. It often recommends libraries that are outdated, unmaintained, or even potentially malicious. That means developers need to take extra care when accepting package suggestions from generative tools.

Always review the libraries that AI suggests. Check their last update date, look for known vulnerabilities (via tools like npm audit or pip-audit), and avoid packages with low community adoption or suspicious commit histories. Even legitimate libraries can introduce risk if they’re misconfigured or misused.

To keep things safe over time, make sure to pin dependency versions and use automation tools like Dependabot to track updates and patch known issues.
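
As a small illustration, a hypothetical helper script like the one below can flag dependencies in package.json that aren’t pinned to exact versions:

```ts
// check-pins.ts: a hypothetical helper that flags unpinned dependencies
// (version ranges using ^, ~, *, or "latest") in package.json.
import { readFileSync } from "node:fs";

const pkg = JSON.parse(readFileSync("package.json", "utf8"));
const sections = ["dependencies", "devDependencies"] as const;

for (const section of sections) {
  for (const [name, version] of Object.entries<string>(pkg[section] ?? {})) {
    if (/[\^~*]|latest/.test(version)) {
      console.warn(`${section}: ${name}@${version} is not pinned to an exact version`);
    }
  }
}
```

A check like this can run in CI alongside npm audit so unpinned or vulnerable packages surface before they’re merged.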

Watch for Secrets and Unsafe Defaults

It’s not uncommon for AI to include example API keys, JWT secrets, or hardcoded passwords in generated code. These are meant as placeholders, but if copied carelessly, they can easily make it into production environments.

You should never store secrets directly in code – AI-generated or otherwise. Use environment variables or a secret management system to keep sensitive data out of version control. It’s also good practice to add common secret file types (like .env, .pem, or .crt) to .gitignore by default in all generated scaffolds.
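
For example, instead of accepting a hardcoded key from generated code, read it from the environment and fail fast when it’s missing. A minimal sketch (JWT_SECRET is a placeholder name):

```ts
// A minimal sketch of loading a secret from the environment instead of the
// source tree. JWT_SECRET is a placeholder; fail fast if it isn't set.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

export const jwtSecret = requireEnv("JWT_SECRET");
```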

Educate Your Team on AI Usage

One of the biggest risks with AI-generated code isn’t the model itself; it’s how humans use it. Developers might assume that code output by AI is trustworthy because it appears polished or comes with documentation. That’s dangerous.

Every team using AI tools should invest in internal guidance for safe usage. Clarify where AI tools are useful (like writing boilerplate or generating test cases) and where they require stricter oversight (like anything touching security, business logic, or data handling). Set clear expectations for review, testing, and validation.

Don’t just focus on writing better prompts for the AI – train your team to think critically about its limitations.

Final Thoughts

Generative AI is a powerful tool, but like all tools, it needs to be used responsibly. Writing secure code with AI isn’t about banning the technology, but rather about layering guardrails around it. From prompt design to post-generation review, developers and security teams must work together to ensure AI accelerates development without increasing risk.

The key takeaway: AI can help you write code faster, but it’s still your job to make sure that code is safe.
