The Call is Coming from Inside the House
Today’s attackers don’t break in—they’re invited. Fake jobs, stolen logins, and exposed pipelines open the door: as attacks get more sophisticated, here’s how to make sure they don’t get far.
“What happens when an attacker gets hold of a developer’s credentials?”
This question is meant as a quick exercise, but a few details make it more vivid: let's say we're referring to AWS credentials, or a username and password for your ticketing system.
Of course, there's nuance: different positions require different sets of permissions, and leaking credentials from a senior DevOps engineer is expected to have a higher impact than leaking those of a new junior frontend engineer, right? This also highlights that I'm using the rather generic developer/engineer terms as a simplification for everyone fulfilling a role in software development, and hence needing some level of privileged access to internal systems.
But it's really hard to create airtight restrictions around each of these roles and achieve the Principle of Least Privilege in full. Strict permission checks tend to generate friction and slow down development; adding seamless security validations takes serious investment in your cloud platform and deployment infrastructure. And even when you get there, sometimes the minimal set of required privileges is still large, at least large enough to fit a sophisticated attack. So even if you do a great job and everyone has exactly the permissions they should have, developers are intrinsically high-access actors in any company. They have deep visibility into the internals of the system, the delivery pipeline, repositories, security practices… and they have to. Even more interesting from the attacker's perspective: they have access to internal systems and platforms where they can request (and sometimes grant) access for themselves, opening a substantial surface for Privilege Escalation attacks.
As a result, attackers have all the elements they need for highly effective attacks:
High-privilege, often long-lived credentials
Near-unavoidable access to source code, internal docs, and infra
The ability to request—or even grant—themselves more access
All these traits converge into a potent attack vector. LastPass was the victim of two related incidents that resulted in a serious customer data breach. In the first attack, the intruders targeted the development environment by stealing developer credentials. That wasn't enough to reach customer data per se, but the knowledge gained about the system's inner workings was enough to mount a second, more sophisticated attack a few months later. This time the target was slightly different: a senior DevOps engineer who had access to the contents of production backups. The incident seriously damaged the company's image and market share. So even if you do the right thing and segregate environments, inside knowledge might be enough to fuel a second, more elaborate and effective attack.
In a similarly famous case, the massive SolarWinds breach was an ingenious supply chain attack that affected hundreds of organizations across the world. It was only possible because the attackers obtained valid internal credentials and escalated their own privileges until they controlled an intricate supply chain of libraries, APIs, and software products, to the point where it was impossible to tell which network traffic was legitimate. That event also reminds us of another critical characteristic of this type of attack: build infrastructure (like CI/CD pipelines) is often an easy target.
Most of these attacks start, of course, with a great deal of Social Engineering: attackers leverage public information to craft a scam scenario credible enough to get the target developer to perform some action. Cyberhaven's catastrophic incident started with a simple phishing email mimicking legitimate communication from Google.
An increasingly exploited avenue for executing these attacks is LinkedIn. The business- and employment-oriented social network not only offers a massive amount of information about your team, your company structure, and your stack, but also allows recruiters (real or not) to reach out. There are now plenty of well-known cases of fake job offers designed to lead a developer to run nefarious code in an environment with corporate network access and fresh credentials. What makes this particularly serious is that there's almost certainly someone on your team interviewing out there right now. And these attacks are sophisticated and well funded. Many are state-sponsored and backed by a comprehensive, credible public information network: LinkedIn profiles, websites, and job openings are carefully crafted to give the fake jobs credibility. The malicious code can then be anywhere, really: obfuscated dependencies, libraries, code that will be downloaded. It's easy to conceal the method that will be used to exfiltrate your credentials.
Many companies treat their security like a tortoise shell: strong on the outside, soft on the inside. Perimeter defenses are strict and visible, but once breached—even with a single valid credential—internal systems tend to be overly trusting. That trust is easily exploited in developer-targeted attacks, where valid access can be used to spin up malicious infrastructure like Pods or VMs. From there, with limited internal controls in place, attackers can start escalating their own privileges and moving deeper into critical systems.
Your threat model must consider that valid developer credentials might be stolen and used to acquire inside knowledge about your stack and defenses. Both your external defenses and your internal perimeter must be built on the assumption that whoever is operating those credentials might not be the person you know and trust. Below are some of the key concepts (sometimes more like properties) that must be embedded in any tech company to mitigate the severe effects of stolen developer credentials.
Segregation of Duties: It Takes Two to Ship a Disaster
This is probably one of the oldest tricks in the book: have different people perform complementary jobs to complete a task. Roles like Author, Auditor, Reviewer, and Approver must each be filled by a different person. That simple principle not only helps spot mistakes as early as possible, it protects the process against fraud, theft, abuse of power, and conflicting incentives in general.
In small teams or squads, you can design your process to be safe enough with fewer roles involved. One hard requirement would be that interacting with production data and infrastructure (e.g. deployments) needs at least two people (e.g. Reviewer and Author). Also make sure the permissions to exercise any of these roles are reserved for those who have the clearance and knowledge to do so; if virtually anyone can play both roles at the same time, it doesn't help much. Optimizing the number of approvals needed is key to reducing friction and deploying cheaply and quickly while keeping things safe (and more reliable, thanks to the extra pair of eyes). By requiring at least two sets of valid credentials to release a shady version of that nice little internal npm package, you make the attacker's life harder: it's substantially tougher to steal credentials from two developers, let alone find two targets with the right set of privileges. Segregation of Duties, even at its bare minimum, can really tip the scales in your favor.
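The two-person rule above can be sketched as a simple gate in a release process. This is an illustrative sketch with made-up names (`can_deploy`, the role sets), not any particular CI system's API: a release proceeds only when at least one authorized person other than the author has approved.

```python
# Minimal sketch of a two-person rule for deployments (hypothetical names):
# a release may proceed only when the author is authorized AND at least one
# *other* authorized person has approved the change.

def can_deploy(author: str, approvers: set[str], authorized: set[str]) -> bool:
    """Return True when the two-person rule is satisfied."""
    if author not in authorized:
        return False
    # Approvals by the author, or by unauthorized accounts, do not count.
    independent = {a for a in approvers if a != author and a in authorized}
    return len(independent) >= 1

# A single stolen credential is not enough: self-approval is rejected.
assert can_deploy("alice", {"bob"}, {"alice", "bob"}) is True
assert can_deploy("alice", {"alice"}, {"alice", "bob"}) is False
```

The key property is that no single identity can satisfy both roles, so an attacker needs two valid, suitably privileged credentials at the same time.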
Time is of the essence: implement short-lived credentials
Credentials that do not expire expose an intrinsic design flaw: if they are compromised, they remain valid… well, forever. Or until your team detects the intrusion, which might take a while. Usually long enough for the damage to be done.
But what if the stolen credentials do not live forever? Adding the time element to the board is a huge game changer. The shorter a credential lives, the harder it is to exploit in a meaningful way. It takes time to learn about the internal perimeter or to find a way to escalate to production access. And if the attack involves several credentials (to bypass the Segregation of Duties layer), coordinating the exfiltration and execution becomes substantially more complex.
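To make the time element concrete, here is a minimal sketch (hypothetical names, not a real token service) of a credential that carries its own deadline and a validator that checks it on every use:

```python
# Illustrative sketch of short-lived credentials: every token carries an
# explicit expiry, and validation rejects it the moment the deadline passes,
# no matter who is holding it. Names here are made up for illustration.
from datetime import datetime, timedelta, timezone

def mint_token(subject: str, ttl_minutes: int = 15) -> dict:
    """Issue a short-lived credential with an explicit expiry."""
    now = datetime.now(timezone.utc)
    return {"sub": subject, "iat": now, "exp": now + timedelta(minutes=ttl_minutes)}

def is_valid(token: dict) -> bool:
    """Reject the token once its expiry passes, stolen or not."""
    return datetime.now(timezone.utc) < token["exp"]

token = mint_token("ci-deployer", ttl_minutes=15)
assert is_valid(token)                  # fresh token works
token["exp"] -= timedelta(minutes=16)   # simulate 16 minutes passing
assert not is_valid(token)              # an exfiltrated copy is now worthless
```

An attacker who exfiltrates such a token has a window of minutes, not months, to understand your environment and act.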
Short-lived credentials might be a bit trickier to implement in some cases, but many cloud services and development tools now offer full support for dropping long-lived, static credentials in favor of safer alternatives, which are becoming relatively painless to adopt. On top of that, you end up developing applications that rely on safer authentication mechanisms and run seamlessly across a vast ecosystem of cloud providers. In my experience, using the recommended keyless authentication mechanisms for GCP and AWS is always the best choice. Sadly, there's plenty of outdated documentation pointing to “credential files” as the default approach, so sometimes you have to dig deeper to find the safer road. Even so, after a few years following that approach in different teams, I still advocate for enforcing safer authentication across the stack from day 0.
Environment Isolation
What Happens in dev, stays in dev
This is one of the rare direct answers in cybersecurity: yes, you MUST make sure that your production infrastructure and data are isolated from the development/staging/testing/qa resources, which I'll just call dev. One of the most interesting properties of well-segregated environments is that no matter what you do in dev, you can't interfere with real workloads, whether you want to or not. Making it hard for people to screw up is a vital characteristic of any Software Development Lifecycle.
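Real isolation lives in separate accounts/projects and networks, but the property itself ("dev actions cannot reach production") can be illustrated with a tiny guard. All names and the naming convention below are assumptions for the sketch:

```python
# Hypothetical guard sketch: dev tooling refuses to operate on anything that
# looks like a production resource, so "what happens in dev, stays in dev".
# Real-world isolation should be enforced at the account/network level; a
# name-based check like this is only a defense-in-depth illustration.

PRODUCTION_PREFIXES = ("prod-", "production-")  # assumed naming convention

def assert_dev_target(resource_name: str, environment: str) -> None:
    """Fail fast if non-production tooling targets a production resource."""
    if environment != "production" and resource_name.startswith(PRODUCTION_PREFIXES):
        raise PermissionError(
            f"{environment} tooling may not touch production resource {resource_name!r}"
        )

assert_dev_target("dev-orders-db", "dev")  # fine: dev touching dev
try:
    assert_dev_target("prod-orders-db", "dev")
except PermissionError:
    pass  # blocked, as intended
```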
Another important property is that a fully isolated dev environment means more people on the team can have access to move fast: experiment with new cloud resources, tweak configuration, explore, and learn without any risk of interfering with real customer-facing operations. This can offset much of the friction caused by the isolation in the first place, acting as a complementary source of visibility (alongside tools like Grafana, Datadog, Sentry, etc.).
A quite interesting discussion topic is what to do when you need real data to build a dev environment that's actually useful. That's a common pattern for projects in the “data” realm (ETLs, machine learning, etc.), which I'll cover in its own article; suffice it to say for now that isolation becomes more expensive, but remains really valuable.
To make attackers' lives easier, we often forget that access to the infrastructure that tests, builds, and delivers the code we run in production also needs to be isolated. Any piece of infrastructure that interacts with production must itself be treated as production. Recent public cases like SolarWinds and LastPass, highlighted above, illustrate how attackers often leverage CI/CD infrastructure and internally deployed software packages to reach their goals.
And it's often hard to properly protect delivery pipelines and their resources. Hard controls easily become roadblocks for engineers who want to deliver software fast, so the release process must be carefully designed to balance control and velocity. For instance, when you favor an approach where it's the production environment that fetches and applies new changes (like FluxCD does), you can focus on protecting only the triggers for those changes (e.g. your main branch). When instead you have a special piece of infrastructure accessible to most of your team (everyone wants to deploy) that's authorized to push changes to production, you have a much larger surface to control, one that also includes testing, QA, linting, etc.
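The pull-based idea can be sketched in a few lines. This is not FluxCD's actual API, just an illustration of the property it gives you: production's own reconciler decides what to apply, and only changes that landed on the protected branch and passed the pipeline's checks qualify, so the control surface shrinks to "who can merge to main".

```python
# Sketch of a pull-based release gate (hypothetical names, FluxCD-style idea):
# nothing pushes into production; production pulls and filters.

PROTECTED_REF = "refs/heads/main"  # assumed protected branch

def should_apply(change: dict) -> bool:
    """Production's reconciler applies only changes from the protected
    branch that passed the pipeline's checks."""
    return change["ref"] == PROTECTED_REF and change["checks_passed"]

assert should_apply({"ref": "refs/heads/main", "checks_passed": True})
assert not should_apply({"ref": "refs/heads/feature-x", "checks_passed": True})
assert not should_apply({"ref": "refs/heads/main", "checks_passed": False})
```

With this shape, a stolen credential that can run CI jobs or push feature branches still cannot reach production directly; the attacker would have to get a change merged through review first.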
Conclusion
Stolen credentials are a serious incident, regardless of how much thought you've put into your internal perimeter's defenses. Monitoring internal activity is expensive for most companies, and even the knowledge an attacker quietly gathers before being detected can be enough to fuel a later, more damaging attack.
One obvious takeaway is that engineers should be really careful, because fake job positions are out there and they look pretty real. Just as you shouldn't put gum you found on the road in your mouth, you shouldn't execute code from the internet without serious scrutiny, or click on strange links, for that matter. But we say that to everyone hoping to avoid phishing, and the reality is that it's still an effective threat vector. That applies to all humans, even the so-called admins.
I think we can all agree that telling people not to click on suspicious links is not enough. No serious threat model has the luxury of downplaying the potentially catastrophic impact of stolen developer credentials, and attackers know that. As fake job scams get more complex and pervasive, companies can't count on a low likelihood for these incidents. Fortunately, there are many things we can do to limit the impact of such breaches, as discussed in this article.
If your systems assume internal trust and lack proper isolation between development and production, one stolen login might be all it takes. Because sooner or later, someone's credentials will get stolen. You have to assume that. But if your stack is designed with isolation, role separation, and short-lived access in mind, an intruder with valid credentials might still have nowhere to go.
Lead image by Mila Aguiar
A note on poisoned pull requests and open-source code risks
Fake job offers aren’t the only socially engineered way to target developers. Another rising trend is poisoned open-source contributions—where an attacker submits a seemingly helpful pull request to a popular GitHub repo or internal project. Once merged, it introduces backdoors, hidden exfiltration code, or supply chain dependencies that can later be exploited. These attacks work especially well when teams have generous auto-merge settings, weak code review, or rely on overly permissive GitHub bots. Even internal libraries aren’t safe—if a compromised contributor pushes a malicious version of an internal npm package that others depend on, it can silently propagate.
Vetting contributors, using automated dependency scanning, and monitoring code behavior—not just code diffs—are crucial parts of securing modern software pipelines.
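One cheap, concrete piece of that defense is refusing dependencies that aren't pinned to an exact version, so a "helpful" PR can't silently float a package to a poisoned release. The sketch below assumes requirements.txt-style lines and a hypothetical helper name; real scanners (and hash pinning) go further.

```python
# Hedged sketch: flag dependency lines that do not pin an exact version.
# Exact pins (name==x.y.z) prevent a range like ">=1.0" from resolving to a
# newly published malicious release. Hash pinning is stricter still.
import re

PINNED = re.compile(r"^[A-Za-z0-9_.-]+==\d")  # name==exact.version

def unpinned(requirements: list[str]) -> list[str]:
    """Return requirement lines that do not pin an exact version."""
    return [r for r in requirements if not PINNED.match(r.strip())]

reqs = ["requests==2.31.0", "left-pad>=1.0", "internal-lib"]
assert unpinned(reqs) == ["left-pad>=1.0", "internal-lib"]
```

Running a check like this in CI turns "someone bumped a dependency" from an invisible event into a reviewable diff.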
What does "keyless authentication" actually mean?
Traditional authentication often relies on long-lived credential files—think ~/.aws/credentials or service account JSON keys for GCP. These static secrets are easy to steal and hard to rotate. Keyless authentication, on the other hand, uses identity-based access—where the runtime environment (e.g. a VM, Pod, or CI/CD job) assumes an identity managed by your cloud provider. In AWS, this means using IAM Roles and instance profiles; in GCP, this means Workload Identity or Service Account Impersonation.
These mechanisms avoid the need for static secrets entirely, relying instead on signed tokens exchanged during runtime—making your stack both more secure and easier to manage at scale.
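To show what those short-lived signed tokens look like, here is a sketch that builds a hand-made, unsigned JWT-shaped token and inspects its `exp` claim. This is purely illustrative: real keyless auth verifies the signature against the provider's published keys and is handled by the cloud SDK, not by code like this.

```python
# Illustrative sketch: a JWT payload carries standard claims like `sub` and
# `exp`, and the relying party checks them on every call. We hand-craft an
# UNSIGNED token here just to look inside; never skip signature checks in
# real systems.
import base64, json, time

def b64url(data: dict) -> str:
    """Base64url-encode a JSON object, without padding (JWT convention)."""
    raw = json.dumps(data).encode()
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

def claims(jwt: str) -> dict:
    """Decode the (unverified) payload segment of a JWT."""
    payload = jwt.split(".")[1]
    padded = payload + "=" * (-len(payload) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

now = int(time.time())
# header "e30" is base64url for "{}"; signature segment is a placeholder
token = ".".join(["e30", b64url({"sub": "ci-job", "exp": now + 900}), "sig"])
assert claims(token)["exp"] > now  # token valid only within its 15-minute window
```

Because the token is minted at runtime and expires in minutes, there is no static secret sitting in a file for an attacker to exfiltrate.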



