How Asana leverages AWS Inspector for total visibility over infrastructure vulnerabilities

Jordan Jamali
April 15th, 2026
5 min read
Asana Engineering Spotlight

Scanning for vulnerabilities across multiple AWS accounts, eliminating noise, and turning vulnerability findings into actionable work is a challenging but important undertaking. As infrastructure grows, we need a vulnerability management program that scales with it. 

Why We Needed Full Coverage and Automation

Like most companies operating at scale, we run dozens of AWS accounts across various production environments. Vulnerabilities don't respect service boundaries. A single CVE in a shared base image can affect hundreds of workloads simultaneously, and compliance frameworks like FedRAMP require documented evidence that vulnerabilities are being identified, tracked, and remediated within defined timelines. Manual processes like spreadsheets, periodic audits, and ad hoc scans aren’t scalable. 

How We Did It: AWS Inspector 

We use AWS Inspector as the scanning engine. It continuously monitors three classes of resources:

  • EC2 instances, where installed agents report package versions back to Inspector for analysis

  • ECR container images, which are scanned at push time and rescanned as new CVEs are published

  • Lambda functions, where dependencies and shared layers are continuously monitored for vulnerabilities

Inspector automatically cross-references findings against the National Vulnerability Database and other sources, and once a day we generate a full org-wide findings report. These scans happen in all AWS accounts where we’ve enabled Inspector, and all findings are consolidated into one single report. This property of Inspector makes full coverage of our AWS infrastructure relatively easy.
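To make this concrete, here is a minimal boto3 sketch of org-wide enablement. The account IDs are placeholders, and the delegation call must run from the org management account; this is an illustration of the API shape, not our exact tooling.

```python
# Sketch: enabling Inspector org-wide via a delegated admin account.
# Account IDs are placeholders; AWS calls run only under the __main__ guard.

# Resource types Inspector should monitor (the three classes above).
SCAN_RESOURCE_TYPES = ["EC2", "ECR", "LAMBDA"]

def enable_request(account_ids):
    """Build the inspector2.enable request body for a batch of member accounts."""
    return {"accountIds": account_ids, "resourceTypes": SCAN_RESOURCE_TYPES}

if __name__ == "__main__":
    import boto3  # deferred so the helper above has no dependencies

    inspector = boto3.client("inspector2")
    # Run once from the org management account: delegate administration
    # to a central security account (placeholder ID).
    inspector.enable_delegated_admin_account(delegatedAdminAccountId="111111111111")
    # Then, from the delegated admin, turn on scanning for member accounts.
    inspector.enable(**enable_request(["222222222222", "333333333333"]))
```

Once the delegated admin is set, new member accounts can be enabled with the same `enable` call as the org grows.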

Basic vs Enhanced ECR scans

AWS Inspector offers two scanning modes for ECR images: basic and enhanced. The differences matter. In a previous architecture, we used basic ECR scanning to find vulnerabilities in our ECR images. Basic scanning detects vulnerabilities only in OS-level packages: things like openssl, curl, or libc installed via a system package manager. If a vulnerability exists in an application-level dependency, such as a Python pip package, an npm module, or a Java JAR, basic scanning won't find it. It also runs only at image push time, so an image pushed on Monday won't be rescanned when a new CVE is published on Tuesday. We worked around this by building a Lambda that manually triggered ECR rescans on active images every 24 hours. That kept us current, but we were still completely blind to application-level dependencies. Basic scanning is also harder to scale across multiple AWS accounts, since there is no out-of-the-box way to initiate and read scans in one place (unlike Inspector, which lets you delegate one AWS account as an admin that manages every account in your org).

Enhanced scanning, powered by Inspector, addresses these gaps. It detects vulnerabilities in OS packages and programming language packages. It also continuously rescans - when a new CVE is published, Inspector re-evaluates images it has already scanned without anyone triggering anything. Furthermore, it is easy to configure Inspector to scan multiple accounts while still outputting its report in one place.
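Switching a registry from basic to enhanced scanning is a single configuration change on the ECR side. Here's a minimal sketch; the wildcard filter and continuous-scan frequency are illustrative choices, not necessarily our exact configuration.

```python
# Sketch: enabling Inspector-backed enhanced scanning for a whole ECR registry.

def enhanced_scanning_config():
    """Continuously rescan every repository in the registry (wildcard filter)."""
    return {
        "scanType": "ENHANCED",
        "rules": [
            {
                "scanFrequency": "CONTINUOUS_SCAN",
                "repositoryFilters": [{"filter": "*", "filterType": "WILDCARD"}],
            }
        ],
    }

if __name__ == "__main__":
    import boto3  # deferred so the helper above has no dependencies

    ecr = boto3.client("ecr")
    ecr.put_registry_scanning_configuration(**enhanced_scanning_config())
```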

Here are some metrics on how basic and enhanced scans differed in our production environment:

| Metric | Basic scans | Enhanced scans | Difference |
| --- | --- | --- | --- |
| Total finding records | 21,956 | 260,141 | +238,185 (+1,085%) |
| Unique CVEs | 1,920 | 7,310 | +5,390 (+281%) |
| Unique packages | 506 | 686 | +180 (+36%) |
| Unique (CVE, package) pairs | 3,924 | 11,416 | +7,492 (+191%) |
| Unique resources | 738 | 1,533 | +795 (+108%) |

A tremendous increase in coverage! We are now covering much more of our attack surface (1,085% more findings) and blast radius (108% more unique resources).

Enhanced scanning does come with tradeoffs, the main one being cost. Basic scanning is free, while enhanced scanning charges per image scanned and rescanned. At our scale of hundreds of thousands of container images, this is not an insignificant amount of money. However, the simplicity and increased scanning coverage of a fully Inspector-based approach is worth the price tag for Asana.

To help manage costs, we only enable scanning in the AWS accounts where it provides real security value; non-production accounts like staging and sandboxes are excluded.

Now that we’ve captured all these findings, let’s take a look at how we built the pipeline to process and act on them!

The Pipeline: Three Stages of Refinement

Raw Inspector output is noisy - it includes findings for container images that are still within Inspector's scanning window but no longer actively deployed, and it reports each affected resource individually rather than grouping by vulnerability. Turning this output into useful work requires filtration, aggregation, and syncing to a platform that makes vulnerabilities easily visible. To solve this problem, we built a three-stage Lambda pipeline. 

Let’s walk through it:

Vulnerability Management Pipeline Diagram

Generating the Report: JSON over CSV

The first step of our pipeline is to tell Inspector to generate a findings report. We optimized our pipeline by exporting Inspector findings as JSON instead of CSV. Unlike CSVs, which include every possible column regardless of data, JSON's sparse representation omits empty fields. This is crucial at our scale, where EC2 findings lack ECR fields and vice versa. In our production account, JSON reports, already around 15 GB, are roughly half the size of their CSV equivalents, leading to faster processing and reduced storage overhead. The report is generated daily into S3, giving us a consistent snapshot of our vulnerability landscape.
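As a sketch, this kind of export can be kicked off with inspector2's create_findings_report API. The bucket, prefix, KMS key, and status filter below are placeholders for illustration, not our actual configuration.

```python
# Sketch: requesting an org-wide Inspector findings export as sparse JSON.
# Bucket, prefix, KMS key, and filter values are placeholders.

def report_request(bucket, prefix, kms_key_arn):
    """Build the inspector2.create_findings_report request for a JSON export."""
    return {
        "reportFormat": "JSON",  # sparse: empty fields are omitted per finding
        "filterCriteria": {
            # Export only findings that still need attention (illustrative filter).
            "findingStatus": [{"comparison": "EQUALS", "value": "ACTIVE"}],
        },
        "s3Destination": {
            "bucketName": bucket,
            "keyPrefix": prefix,
            "kmsKeyArn": kms_key_arn,
        },
    }

if __name__ == "__main__":
    import boto3  # deferred so the helper above has no dependencies

    inspector = boto3.client("inspector2")
    resp = inspector.create_findings_report(
        **report_request(
            "example-findings-bucket",
            "inspector/daily/",
            "arn:aws:kms:us-east-1:111111111111:key/example",
        )
    )
    print(resp["reportId"])  # the export lands in S3 asynchronously
```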

Filtering the Noise: Inactive Images

The next step, performed in the “inspector_findings_processor” Lambda, filters and aggregates the report data so that the Asana sync Lambda receives meaningful, well-formatted input.

Inspector offers limited control over scanning. It indiscriminately scans any ECR image that is not archived and has been accessed (e.g. pushed to ECR) within a configurable window, even if that image is not deployed. We don't care to scan or report on inactive images. We've set that configurable window to the minimum of 14 days, both to manage costs and because we don't run images for longer than 14 days. Despite this, Inspector still captures numerous images across our thousands of repositories that are not actively deployed.

Inspector offers an “in use” filter that excludes findings from inactive ECR images. However, multi-architecture images, which make up a significant portion of our images, are not respected by this filter. While a multi-architecture image index and the images it points to are both scanned, the pointed-to images are never considered “in use”, rendering the feature ineffective for our needs. We hope this gets fixed in the future, allowing engineers to filter inactive images out of their findings reports directly.

Here’s what a multi-architecture image looks like:

Sample Multi-architecture Image

To filter out inactive images, we maintain a custom record of active images - sourced from the Datadog Containers API and an internal deployment registry which maps each deployed application to its ECR image. These combined data sources provide a comprehensive view of production workloads, allowing our pipeline to accurately filter AWS Inspector's raw findings. But where and how do we apply this filter?

While Inspector supports excluding EC2 instances via tags, it lacks a similar mechanism for ECR images. Another potential route is suppression rules, which exclude noisy findings from reports but do not prevent scanning or reduce billing. We decided to filter out inactive images after report generation rather than using suppression rules, because the overall pipeline design is simpler if we filter inactive images in the same Lambda that also aggregates findings (inspector_findings_processor can handle both, saving an entire Lambda function).
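Here's a simplified sketch of that filtering step. The finding shape and field names are assumptions for illustration; real Inspector JSON nests the image digest deeper inside each finding's resource details. The key idea is that a multi-arch child image counts as active whenever its image index is active.

```python
# Sketch: keeping only findings for images that are actually deployed.
# Finding dicts and field names below are simplified assumptions.

def expand_active_digests(active_digests, index_children):
    """A multi-arch child image is active whenever its image index is active."""
    expanded = set(active_digests)
    for index_digest, children in index_children.items():
        if index_digest in active_digests:
            expanded.update(children)
    return expanded

def filter_active(findings, active_digests, index_children):
    """Drop findings whose image digest is neither directly active (per the
    Datadog/deployment-registry record) nor a child of an active index."""
    active = expand_active_digests(active_digests, index_children)
    return [f for f in findings if f["imageDigest"] in active]
```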

Beyond filtering out inactive images, this Lambda also groups the remaining findings by CVE and package name. The result is one signal per vulnerability, not dozens of per-resource duplicates.
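A minimal sketch of that grouping step, assuming findings have already been flattened into simple dicts (the field names here are illustrative, not Inspector's actual schema):

```python
# Sketch: collapsing per-resource findings into one record per (CVE, package).
from collections import defaultdict

def aggregate(findings):
    """Return one record per (CVE, package) pair, listing every affected resource."""
    grouped = defaultdict(set)
    severities = {}
    for f in findings:
        key = (f["cve"], f["package"])
        grouped[key].add(f["resource"])
        severities[key] = f["severity"]  # same CVE+package => same severity
    return {
        key: {"severity": severities[key], "resources": sorted(resources)}
        for key, resources in grouped.items()
    }
```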

Using Asana to Secure Asana

The final Lambda reads the aggregated findings and creates, updates, or closes the relevant Asana tasks. This Lambda is idempotent, removing any need to persist previous state.
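One way to picture that idempotency: each run computes a diff between the desired state (today's aggregated findings) and the current state (open Asana tasks), rather than consulting stored history. A simplified sketch, with task fields reduced to plain dicts:

```python
# Sketch: an idempotent sync pass, computed as a diff rather than stored state.

def plan_sync(desired, existing):
    """desired: {cve: fields} from today's findings; existing: {cve: fields}
    from currently open Asana tasks. Returns the actions needed to converge."""
    create = [cve for cve in desired if cve not in existing]
    close = [cve for cve in existing if cve not in desired]
    update = [
        cve for cve in desired
        if cve in existing and existing[cve] != desired[cve]
    ]
    return {"create": create, "update": update, "close": close}
```

Running the same plan twice is a no-op: once the open tasks match the findings, every action list comes back empty.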

By integrating our vulnerability management pipeline into Asana, we treat every relevant CVE as a manageable work item with clear remediation plans and status tracking. Asana serves as our centralized vulnerability database, offering visibility into each finding. Tasks, which each represent a CVE, track a few things such as CVSS scores, affected packages, triage status, and more. Subtasks pinpoint specific finding locations across our infrastructure. Here’s what a vulnerability and a finding look like in Asana:

Totally Real Vulnerability As Seen In Asana

Totally Real Finding As Seen in Asana

While automated scanning handles scale and data aggregation, we've added Asana AI rules to our Asana project to automatically triage vulnerabilities alongside human triagers. These AI triagers analyze CVEs to set risk priorities, update task fields, and provide detailed reasoning via comments. As the nature of each vulnerability with respect to our infrastructure changes, so does its Asana representation.

This automation removes repetitive manual work, allowing engineers to focus on action. The overall result of this pipeline is full coverage over our infrastructure, auditability, and direct integration into existing engineering workflows. 

I hope this blog post is helpful to anyone using or considering AWS Inspector! I recommend this approach when scaling vulnerability management in a multi-account environment. Automation and noise reduction are key to building a vulnerability management program that enables action.


Author Biography

Jordan Jamali is a Software Engineer on the Security Development team, focused on least-privilege access, scalable detection and alerting, and automation to eliminate entire classes of risk.

Team Shout Outs

The architecting and implementation of this project have required collaboration from multiple teams: shoutout to Kyle Ip, Magni Thorbjornsson, Eleanor Mount, and the Security Development team!
