Supply chain attacks have quadrupled in frequency over the past five years. The reason is straightforward: attackers have learned that the most defended organisations still have exploitable blind spots — forgotten subdomains, unpatched dev environments, compromised build pipelines, and vendors with lateral access into their cloud tenants. External attack surface management is no longer optional. If you are not systematically discovering your own infrastructure before threat actors do, someone else is doing the reconnaissance for you.
The Forgotten Infrastructure Problem
Every organisation we assess has infrastructure it does not know about. After years of acquisitions, migrations, and rushed deployment cycles, the asset inventory maintained by IT bears only a loose relationship to what is actually reachable on the internet. Shadow IT compounds the problem — development teams spin up cloud resources that are never formally inventoried, marketing agencies deploy microsites on subdomains that are never handed back, and acquired companies bring their own sprawling footprint into scope.
In a recent external assessment against a mid-market financial services firm in Ontario, we identified 34 internet-exposed assets that were absent from their CMDB entirely. These included:
- A staging environment for a decommissioned customer portal, still running on an unpatched version of Apache Struts and accessible without authentication
- Three S3 buckets with public ACLs containing configuration files, internal API documentation, and employee onboarding materials
- A Confluence instance on a subdomain provisioned during a 2021 acquisition that retained the acquired company's default admin credentials
- Two wildcard certificate SANs pointing at cloud infrastructure no longer under the organisation's control — a subdomain takeover waiting to happen
- An exposed Jenkins build server with anonymous read access to build logs containing AWS access key fragments
None of these findings required exploitation of a zero-day. Every one of them was reachable with commodity tooling in under four hours of passive and semi-active reconnaissance. The issue is not technical complexity — it is that organisations stop inventorying assets the moment they stop actively using them.
Expired Certificates and Dangling DNS
Certificate expiry is an underappreciated signal. When we see an expired TLS certificate on a subdomain, it tells us the subdomain has likely been forgotten — it is not in anyone's regular monitoring rotation. Dangling DNS records are worse: a CNAME pointing at a cloud provider hostname that has been released back to the pool can be registered by anyone. Subdomain takeover then allows an attacker to host content under your brand, send phishing mail that passes SPF checks where the domain's SPF record still authorises the taken-over service, or intercept OAuth redirect callbacks from legitimate services that still reference the old subdomain.
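A first pass at the expiry signal needs nothing beyond openssl. The sketch below assumes the subdomains-all.txt inventory built in the discovery phases later in this section, and GNU date for the epoch conversion:
# Flag subdomains serving expired certificates (sketch; GNU date assumed)
while read subdomain; do
  expiry=$(echo | openssl s_client -servername "$subdomain" \
    -connect "$subdomain:443" 2>/dev/null | \
    openssl x509 -noout -enddate 2>/dev/null | cut -d= -f2)
  if [ -n "$expiry" ] && [ "$(date -d "$expiry" +%s)" -lt "$(date +%s)" ]; then
    echo "[EXPIRED] $subdomain (lapsed $expiry)"
  fi
done < subdomains-all.txt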
Dependency Chain Exploitation
Software supply chain compromise has shifted the attacker's target from your application to your application's dependencies. Rather than attacking a hardened production system directly, a threat actor poisons a package in the ecosystem that your build pipeline pulls automatically. The SolarWinds and XZ Utils incidents demonstrated that even mature, widely-used software is not immune.
When we conduct supply chain security assessments, we focus on three distinct dependency attack vectors:
Compromised and Typosquatted Packages
Typosquatting on npm, PyPI, and RubyGems remains effective because developers install packages quickly and do not always verify publisher identity. We have found organisations pulling packages such as reqeusts, colourama, and lodahs — each a slight misspelling of a legitimate, widely-used library. We use dependency confusion testing to determine whether internal package names are resolvable from public registries, which would allow an attacker to publish a higher-versioned malicious package that gets pulled automatically by pip or npm.
# Check if internal package names resolve from public PyPI
pip index versions internal-auth-utils 2>/dev/null
pip index versions corp-logging-sdk 2>/dev/null
# Enumerate package names from requirements files, stripping version pins
grep -h "^[a-zA-Z]" requirements*.txt | sed 's/[=<>!~; ].*//' | sort -u | \
while read pkg; do
  pip index versions "$pkg" 2>/dev/null | head -1
done
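The npm side of the same check, with a placeholder package name standing in for your own internal names:
# Check whether internal npm package names resolve from the public registry
# (corp-internal-ui is a placeholder)
npm view corp-internal-ui version 2>/dev/null && \
  echo "[WARN] corp-internal-ui is resolvable from the public registry"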
Build Pipeline Poisoning
CI/CD systems are high-value targets because they have privileged access to production credentials, signing keys, and deployment infrastructure. A compromised build agent can exfiltrate secrets, modify build artefacts before signing, or inject backdoors that bypass code review entirely. We test build pipelines by examining whether:
- Build definitions can be modified by developers without a separate approval gate
- Secrets are passed as plaintext environment variables visible in build logs
- Third-party GitHub Actions are referenced by mutable tags rather than pinned to commit SHAs
- Build agents have persistent access to production environments beyond the build context
- Pull requests from forked repositories can trigger workflows with access to repository secrets
The last point is particularly common. GitHub Actions workflows using pull_request_target with a checkout of the attacker-controlled branch give any external contributor the ability to execute arbitrary code in the context of your repository secrets — including AWS credentials, signing keys, and API tokens for deployment targets.
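A repository-level triage for this pattern can start with grep alone. The heuristic below simply flags workflows that pair pull_request_target with a checkout of the incoming PR head; each hit still needs manual review:
# Heuristic triage for the pwn-request pattern
grep -rl "pull_request_target" .github/workflows/ | \
while read wf; do
  if grep -q "github\.event\.pull_request\.head" "$wf"; then
    echo "[RISK] $wf may run attacker-controlled code with secrets in scope"
  fi
done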
External Attack Surface Discovery Methodology
Our external attack surface management (EASM) process is structured in five phases: passive asset discovery, certificate transparency mining, DNS history analysis, active port and service enumeration, and service fingerprinting with vulnerability correlation. The goal of the first three phases is to build the broadest possible asset inventory without generating any traffic to the target — everything sourced from public data.
Phase 1: Subdomain Enumeration
We combine multiple data sources to maximise coverage. Brute-force DNS enumeration using curated wordlists catches internal naming conventions. Passive sources — Shodan, SecurityTrails, VirusTotal, and RapidDNS — surface assets that have been indexed historically. Certificate transparency logs are the most reliable single source for discovering subdomains that have never appeared in public DNS queries.
# Passive subdomain enumeration via multiple sources
subfinder -d target.com -all -recursive -o subdomains-passive.txt
# Certificate transparency mining
curl -s "https://crt.sh/?q=%25.target.com&output=json" | \
jq -r '.[].name_value' | \
sed 's/\*\.//g' | sort -u > subdomains-ct.txt
# DNS brute-force with organisation-specific wordlist
puredns bruteforce /opt/wordlists/combined.txt target.com \
--resolvers /opt/resolvers/public-resolvers.txt \
-w subdomains-brute.txt
# Merge and deduplicate
cat subdomains-passive.txt subdomains-ct.txt subdomains-brute.txt | \
sort -u > subdomains-all.txt
Phase 2: Certificate Transparency and DNS History
Certificate transparency logs contain the complete issuance history for every publicly trusted TLS certificate. This means subdomains that existed three years ago but were decommissioned — along with their associated SANs — are permanently recorded. We query the crt.sh database and cross-reference results against current DNS resolution to identify dangling records and potential subdomain takeover candidates.
# Identify subdomains that no longer resolve (dangling DNS candidates)
while read subdomain; do
if ! host "$subdomain" > /dev/null 2>&1; then
echo "[DANGLING] $subdomain"
fi
done < subdomains-all.txt
# Check CNAME targets for takeover potential
while read subdomain; do
cname=$(dig +short CNAME "$subdomain" 2>/dev/null)
if [ -n "$cname" ]; then
echo "$subdomain -> $cname"
fi
done < subdomains-all.txt | grep -E "(azurewebsites|s3\.amazonaws|cloudfront|github\.io|heroku)"
Phase 3: Port Scanning and Service Fingerprinting
Once we have a confirmed asset list, we move to active enumeration. Fast SYN scanning across the full port range surfaces services running on non-standard ports — management interfaces, monitoring dashboards, and development tools that should never be internet-facing. Service version detection then feeds directly into vulnerability correlation.
# Resolve the confirmed subdomain inventory to unique IPs (dnsx)
dnsx -l subdomains-all.txt -a -resp-only -silent | sort -u > resolved-ips.txt
# Fast host discovery and full-port SYN scan
nmap -sn -iL resolved-ips.txt -oG alive-hosts.gnmap
awk '/Status: Up/{print $2}' alive-hosts.gnmap > alive-ips.txt
nmap -sS -p- --min-rate 5000 -iL alive-ips.txt -oA full-port-scan
# Extract open ports as a comma-separated list for targeted scripting
grep -oE '[0-9]+/open' full-port-scan.gnmap | cut -d/ -f1 | \
  sort -nu | paste -sd, - > open-ports.txt
# Service and version detection on discovered ports
nmap -sV -p "$(cat open-ports.txt)" \
  --script=default,http-title,http-server-header,ssl-cert \
  -iL alive-ips.txt -oA service-scan
# HTTP probing for web services
httpx -l subdomains-all.txt -ports 80,443,8080,8443,8888,9090,3000,5000 \
  -title -status-code -tech-detect -o http-alive.txt
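The version data then feeds correlation. A bootstrap pass with nuclei is sketched below; the severity selection and rate limit are illustrative defaults, and the first step strips httpx's annotation columns since nuclei expects bare URLs:
# Strip httpx annotations, then scan new services against known-issue templates
awk '{print $1}' http-alive.txt > http-urls.txt
nuclei -l http-urls.txt -severity medium,high,critical \
  -rate-limit 50 -o nuclei-findings.txt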
Third-Party Vendor Risk Assessment
Modern organisations extend significant trust to third-party vendors — SaaS integrations with direct API access to customer data, managed service providers with persistent VPN tunnels into the internal network, and cloud platform vendors whose support staff can be socially engineered into account access. We assess vendor risk not through questionnaires but from an attacker's perspective: what is actually reachable, what trust has been delegated, and what is the blast radius of vendor compromise?
Evaluating Vendor Security Posture
We approach vendor assessment the same way a threat actor would — starting from the vendor's own external attack surface. A vendor that fails basic hygiene on their public infrastructure (expired certificates, exposed admin panels, unpatched services) is unlikely to have robust controls protecting the access they hold into your environment. Key indicators we examine include:
- The vendor's own subdomain footprint and how well it is maintained
- Whether vendor staff email addresses appear in data breach datasets with reused credentials (a query sketch follows this list)
- The scope of OAuth and API permissions granted to the vendor's application in your tenant
- Whether the vendor's integration uses a dedicated service account with scoped permissions or a privileged user account
- Network access scope — is the vendor VPN or peering limited to specific source IPs and destination services, or is it overly broad?
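The breach-exposure check above can be scripted against the Have I Been Pwned v3 API. In the sketch below, HIBP_API_KEY and vendor-emails.txt are placeholders; the API requires a paid key and a user-agent header, and the sleep is a conservative nod to its rate limits:
# Query HIBP v3 for vendor staff addresses (key and input file are placeholders)
while read email; do
  hits=$(curl -s -H "hibp-api-key: $HIBP_API_KEY" \
    -H "user-agent: easm-vendor-review" \
    "https://haveibeenpwned.com/api/v3/breachedaccount/$email?truncateResponse=true")
  [ -n "$hits" ] && echo "[BREACHED] $email: $hits"
  sleep 7  # stay under the API rate limit
done < vendor-emails.txt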
Cloud Workspace Misconfigurations Enabling Cross-Tenant Access
Cloud-native misconfigurations in vendor integrations create cross-tenant access paths that are invisible to standard perimeter controls. In Google Workspace environments, over-permissioned OAuth application grants can allow a compromised vendor to read all Drive files, access Gmail, and enumerate the entire directory — permissions granted by an administrator who clicked through an OAuth consent screen years ago and never reviewed it since.
In Microsoft 365 tenants, we frequently find vendor service principals with Application-level permissions (not delegated) and no certificate-based authentication requirement. An attacker who compromises the vendor's Azure AD tenant can leverage these service principals to access your tenant's data without any per-user authentication prompt — the access is invisible in the user sign-in logs that most SOC teams monitor.
# Enumerate OAuth applications and their permission scopes in Azure AD
az ad app list --all --query \
"[].{AppId:appId, DisplayName:displayName, SignInAudience:signInAudience}" \
-o table
# List service principals that do not require app role assignment (open access)
az ad sp list --all --query \
"[?appRoleAssignmentRequired==\`false\`].{SP:displayName, AppId:appId}" | \
grep -v "Microsoft"
# Check for over-permissioned delegated grants (Microsoft Graph PowerShell)
Connect-MgGraph -Scopes "Directory.Read.All"
Get-MgOauth2PermissionGrant -All |
  Where-Object { $_.Scope -match "Mail|Files|Directory" } |
  Select-Object ClientId, Scope, ConsentType
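The delegated-grant query misses app-only access, which is exactly what the Application-permission pattern described above uses. A complementary sketch, where <sp-object-id> is a placeholder for the vendor service principal's object ID:
# Enumerate app-only (Application) permission assignments for a vendor
# service principal; AppRoleId GUIDs resolve against the resource SP's AppRoles
Get-MgServicePrincipalAppRoleAssignment -ServicePrincipalId <sp-object-id> -All |
  Select-Object PrincipalDisplayName, AppRoleId, ResourceDisplayName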
Continuous Attack Surface Monitoring
Point-in-time assessments miss drift. An external penetration test conducted in Q1 captures the attack surface as it existed during that engagement window. By Q3, the organisation has deployed twelve new cloud workloads, a vendor has been granted a new integration scope, and a developer has pushed a test subdomain that was never cleaned up. The assessment result is already stale.
This is the fundamental limitation of annual or even quarterly penetration testing as the sole mechanism for external attack surface visibility. A new S3 bucket misconfigured on a Wednesday afternoon is exploitable by Friday — an annual test will not catch it.
The CART Approach
Continuous Automated Red Teaming (CART) addresses the drift problem by treating attack surface discovery as an ongoing process rather than a project. We implement CART programmes that run daily asset discovery sweeps, track changes against the previous baseline, and alert on high-severity delta findings — new exposed services, certificate changes indicating new infrastructure, or subdomains resolving to cloud providers that could indicate takeover risk.
The CART pipeline we build for clients typically combines the components below; a minimal daily sweep is sketched after the list:
- Scheduled subfinder and amass runs against the organisation's seed domains, with new subdomain alerts triggering immediate follow-on enumeration
- Certificate transparency monitoring via certstream, filtered to the organisation's domain set, for near-real-time detection of new certificate issuances
- Daily httpx probes against the confirmed asset inventory, with status code change alerts (a 200 where there was previously a 404 deserves immediate attention)
- Nuclei template scans against newly discovered assets within hours of detection, before they reach steady-state security review
- Integration with the organisation's ticketing system so findings create tracked work items with SLA-driven remediation timelines
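A minimal daily delta sweep, assuming a persisted baseline file from the previous run and the same tooling as the discovery phases (the domain and file paths are placeholders):
# Daily delta sweep sketch; seed subs-baseline.txt with `touch` on the first run
subfinder -d target.com -all -silent | sort -u > subs-today.txt
comm -13 subs-baseline.txt subs-today.txt > subs-new.txt
if [ -s subs-new.txt ]; then
  # Probe and scan new assets in the same run, before they age into the backlog
  httpx -l subs-new.txt -silent -o new-http.txt
  nuclei -l new-http.txt -severity high,critical -o new-findings.txt
  # alerting / ticket-creation hook goes here
fi
mv subs-today.txt subs-baseline.txt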
The value is not just in finding new issues faster — it is in proving, continuously, that the remediation actions the security team takes are actually closing the exposure. Point-in-time assessments create a verification gap between fix and confirmation. CART closes it.
Defensive Guidance
Based on findings across our external attack surface engagements, the controls that reduce supply chain attack surface most effectively are:
- Maintain a living asset register driven by discovery, not declaration. Do not rely on teams to self-report assets. Run weekly automated subdomain enumeration against all owned domains and ASN ranges. Reconcile results against your CMDB monthly.
- Implement subdomain takeover detection. Monitor CNAME and NS records for all known subdomains. Alert immediately when a record points at a cloud provider hostname that returns NXDOMAIN or at an unclaimed resource.
- Enforce certificate lifecycle management. Automate certificate issuance and renewal. Treat a certificate expiry notification as a signal to review whether the asset is still required — not just to renew the certificate.
- Restrict CI/CD pipeline permissions to the minimum required. Build agents should not have persistent production credentials. Use short-lived OIDC tokens scoped to the specific deployment target. Pin third-party GitHub Actions to full commit SHAs.
- Audit OAuth grants and service principal permissions quarterly. Remove vendor integrations that are no longer in use. Downscope surviving integrations to the minimum required API permissions. Require certificate-based authentication for all Application-permission service principals.
- Test your dependency confusion posture. Verify that all internal package names are either registered on public registries (to block squatting) or that your package manager is configured to prefer private registry sources exclusively.
- Treat staging environments with production-equivalent security controls. Apply the same credential policies, patch cadence, and network exposure rules to staging infrastructure. If a staging environment cannot meet those standards, it should not be internet-facing.
- Include vendor security posture in third-party risk reviews. A vendor questionnaire response is not evidence of security. Perform passive reconnaissance against your key vendors' external attack surfaces annually, and include attack surface metrics in vendor contract security requirements.