We're hiring
Full Stack Security Engineer
Build production software. Break production software. Ship with AI every day.
About VRG
VRG is a technology consultancy that builds and secures software for clients across legal, enterprise SaaS, and professional services. We pair deep engineering capability with AI-powered workflows to deliver audit-grade security assessments, custom web applications, and intelligent automation. Our clients trust us to find what automated scanners miss and to build what off-the-shelf tools can't.
We are a small, high-output team. Everyone ships. Everyone owns outcomes. There are no layers between you and the work that matters.
The Role
We're looking for a Full Stack Security Engineer who can do two things exceptionally well: build production web applications and break them. You'll split your time between hands-on security audit work for our clients and building secure, AI-augmented internal and client-facing products.
This is an AI-first position. We don't mean you'll occasionally ask ChatGPT for help. We mean AI tooling is embedded in every stage of your workflow — writing code, reviewing code, triaging vulnerabilities, generating audit reports, automating recon, and accelerating penetration testing. If you aren't already shipping work with tools like Claude Code, Cursor, or Copilot, this role will feel foreign. If you are, you'll feel at home.
What You'll Do
Security Audit & Assessment
- Conduct web application penetration tests, code reviews, and architecture assessments for client engagements across SaaS, legal tech, and enterprise platforms
- Identify vulnerabilities across the full stack: front-end injection, API misconfigurations, broken auth, business logic flaws, insecure data handling, and cloud misconfigs
- Produce clear, actionable security audit reports that both technical and non-technical stakeholders can act on
- Use and extend AI-powered tooling to automate reconnaissance, vulnerability triage, and report generation
- Stay current with the OWASP Top 10, CWE/CVE databases, emerging attack vectors, and AI-specific threats such as prompt injection, data poisoning, and model exfiltration
Full Stack Engineering
- Design, build, and maintain secure web applications using modern frameworks (React, Next.js, Node.js/TypeScript, Python, or equivalent)
- Own features end-to-end: database schema and API design through front-end UI to deployment and monitoring
- Integrate LLM-based features, RAG pipelines, and agentic workflows into client-facing and internal products
- Build and maintain CI/CD pipelines with security scanning baked in: SAST, DAST, dependency auditing, secrets detection
- Architect infrastructure on AWS or GCP with security-first defaults: least-privilege IAM, network segmentation, encryption at rest and in transit, audit logging
AI-First Engineering Culture
- Use AI coding assistants (Claude Code, Cursor, Copilot) as core development tools, not novelties
- Build and improve internal AI tooling that accelerates audit workflows: automated recon agents, vulnerability classifiers, report drafters
- Evaluate and harden AI/LLM-powered systems for clients, including testing for prompt injection, data leakage, and access control bypass
- Contribute to our playbook for AI-augmented security assessments, helping define what best-in-class looks like
What We're Looking For
Must-Haves
- 3–8+ years of professional software engineering experience, with meaningful time spent both building and breaking web applications
- Demonstrated ability to find and exploit real vulnerabilities in production systems: web apps, APIs, cloud environments
- Strong full stack skills: comfortable owning a feature from Postgres to React and everything in between
- Working knowledge of the OWASP Top 10, common vulnerability classes (XSS, CSRF, IDOR, SSRF, SQLi, auth bypass), and at least one penetration testing toolkit (Burp Suite, OWASP ZAP, or equivalent)
- Hands-on, daily use of AI coding tools in your current workflow — we will ask you to show us something you built or audited with AI assistance in the last 60 days
- Ability to write clear, structured security reports that translate technical findings into business risk
- Comfort working autonomously in a remote, async-first environment with minimal hand-holding
Strong Preferences
- Experience with cloud security on AWS or GCP: IAM, VPC configuration, CloudTrail / audit logging, container security
- Familiarity with DevSecOps practices: integrating SAST/DAST into CI/CD, dependency scanning, secrets management
- Background in security consulting, bug bounty programmes, or professional penetration testing engagements
- Experience building or securing LLM-powered applications, RAG systems, or agentic workflows
- Relevant certifications are a bonus but not required: OSCP, OSWE, CEH, AWS Security Specialty, or similar
- Experience with compliance frameworks (SOC 2, ISO 27001, GDPR, HIPAA) in an audit or assessment context
Bonus Points
- You've built your own security tooling — custom Burp extensions, automated scanners, AI-assisted recon
- You've contributed to open-source security projects or published vulnerability research
- You've worked in a consultancy environment managing multiple client engagements simultaneously
- You speak the language of both developers and security teams and can bridge the two
How We Work
Remote-first
Work from anywhere. We're distributed and async by default. Meetings are short and purposeful. Deep work is protected.
AI-first
Every engineer uses AI tooling daily. We build with it, audit with it, and continuously evaluate new tools. If a model can handle 80% of the grunt work, we let it — and spend our time on the 20% that requires human judgement.
Small team, big ownership
No ticket jockeys. You scope, build, test, ship, and support. You own client relationships on your engagements.
Client-facing
You'll interact directly with clients, present audit findings, and advise on remediation. Communication skills matter as much as technical ability.
Our Stack
We're pragmatic about tools and pick what's right for the job. Here's what you'll encounter most often:
Interview Process
We respect your time. Our process is straightforward and fast:
1. Intro call (30 min). Tell us about your work. Show us something you've built or broken with AI tools.
2. Technical exercise (async, paid). A realistic security assessment or build task. Use whatever tools you want, including AI.
3. Working session (90 min). Pair with a team member on a real problem. This is as much about fit as skill.
4. Offer. We move fast when we find the right person.
How to Apply
Send an email to careers@vrg.asia and include:
- Your CV or LinkedIn profile
- A short note on the most interesting security finding or technical challenge you've worked on recently
- A link to something you've built or shipped with AI tools in the last 60 days — GitHub, demo, write-up, anything
- Optional: relevant certifications, bug bounty profiles, or published research
No cover letter needed. We'd rather see your work.
VRG is an equal opportunity employer. We evaluate candidates on ability and output, not pedigree. If you can build it and break it, we want to talk to you.