DevSecOps Governance (Part 3): Implementing Dynamic Application Security Testing (DAST) in Build Pipelines

Introduction

Following on from my original blog post, ‘When “Shift‑Left” Leaves the Back Door Open: Why Governance Matters More Than Ever’, this is part 3 of a three-part follow-up series in which I discuss examples of tools and strategies I’ve used to implement the security considerations I outlined in that post. In Part 1 I covered tooling, secret scanning, branch and security policies, and approval gates. In Part 2 I covered building SAST and SCA scanning into build pipelines. In this final post I’ll be diving deeper into DAST and how we can build active testing of our websites into the pipeline, post deployment to our hosting environments.

Implementing DAST scanning into build pipelines

So what is DAST and why do we need it? DAST is all about testing the integrity and security of web systems post deployment. It is essentially designed to attempt to hack your website after you’ve deployed new changes; it is typically triggered at the end of your build pipeline and will fail the build if any problems are discovered.

DAST is only effective when the application is running. It iterates over the application’s endpoints looking for issues such as weak authentication, injection attacks (e.g. SQL injection), cross-site scripting (XSS), input validation flaws, security header misconfigurations, server misconfigurations, and runtime security vulnerabilities.

DAST is typically built into the build pipeline and runs AFTER your new website has been deployed.

As DAST isn’t built into GitHub Advanced Security (I discussed GHAS in Parts 1 and 2), it requires a third-party open-source tool. In this example the tool of choice was OWASP ZAP, which is free, easy to configure, and highly reputable.

Whilst the DAST execution logic is configured in the main build pipeline, there is a dependency on a plan *.yaml file which acts as the DAST scan definition configuration.

The other key thing to remember is that if your website requires authentication, you will need to build this into the pipeline workstream. Playwright is really good for this. You just need a test account with a username and password (securely stored, of course!). You should also make sure this test account is non-privileged and is exempt from any additional security controls such as location-based conditional access or MFA, as these require user interaction, which we don’t want in this instance (this is recommended practice).

Below is an example of a DAST implementation I built into an application’s YAML build pipeline. The environment variables below represent the key dynamic configuration values:

  • WEBAPP_BASE_URL: The url of the website being tested
  • WEBAPP_LOGIN_ENDPOINT: The specific authentication endpoint for the url being tested
  • TEST_USER_NAME: The user name for the test login account
  • TEST_USER_PASSWORD: The password for the test user account

Example DAST snippet. Run this after your code has been deployed, at the end of your YAML pipeline:

# ============================================================
#   DAST Stage (Authenticated via Browser Automation)
# ============================================================

## This scan is configured NOT to run on PR commits

- stage: DAST_SCAN
  displayName: "DAST Scan (Authenticated)"
  condition: ne(variables['Build.Reason'], 'PullRequest')

  variables:
    - group: webAppAuthSecrets   # <-- Variable group containing credentials + app URLs for this environment

  jobs:
    - job: DAST_TEST
      displayName: "Run Authenticated DAST Scan"
      pool:
        vmImage: 'ubuntu-latest'

      steps:
        - checkout: self

        # ------------------------------------------------------
        # Install Node + Playwright Browser Automation Framework
        # ------------------------------------------------------
        - task: NodeTool@0
          inputs:
            versionSpec: '18.x'
          displayName: "Install Node.js"

        - script: |
            npm install -D playwright
            npx playwright install --with-deps chromium
          displayName: "Install Browser Automation Dependencies"

        # ------------------------------------------------------
        # STEP 1 — Authenticate using a scripted browser login
        # Produces an authenticated session cookie or token and runs in a headless browser session
        # Make sure you have a Playwright browser-login.js file containing the login logic in the 'security/auth/' directory
        # Make sure you have a DAST configuration file, dast-plan.yaml, in the 'security/dast/' directory
        # ------------------------------------------------------
        - script: |
            set -e
            echo "Running scripted login to obtain session..."

            node security/auth/browser-login.js

            AUTH_COOKIE=$(cat auth_cookie.txt)
            if [ -z "$AUTH_COOKIE" ]; then
              echo "❌ Authentication step produced no session token/cookie"
              exit 1
            fi

            echo "##vso[task.setvariable variable=AUTH_COOKIE;issecret=true]$AUTH_COOKIE"
            echo "Authentication complete."
          displayName: "Run Authenticated Browser Login"
          env:
            WEBAPP_BASE_URL: $(WEBAPP_BASE_URL)
            WEBAPP_LOGIN_ENDPOINT: $(WEBAPP_LOGIN_ENDPOINT)
            TEST_USER_NAME: $(TEST_USER_NAME)
            TEST_USER_PASSWORD: $(TEST_USER_PASSWORD)

        # ------------------------------------------------------
        # STEP 2 — Load DAST Scan Plan
        # ------------------------------------------------------
        - script: |
            set -e
            PLAN_SRC="security/dast/dast-plan.yaml"
            PLAN_DST="$(Pipeline.Workspace)/dast-plan.yaml"

            if [ ! -f "$PLAN_SRC" ]; then
              echo "❌ Missing scan plan configuration: $PLAN_SRC"
              exit 1
            fi

            cp "$PLAN_SRC" "$PLAN_DST"
            echo "----- DAST Scan Plan -----"
            cat "$PLAN_DST"
          displayName: "Load DAST Scan Plan"

        # ------------------------------------------------------
        # STEP 3 — Execute DAST Scan Container
        # Generic: replace with any DAST engine container.
        # Runs as a Docker container:
        #   -u: runs the process as root inside the container so that directories can be written to
        #   -e: passes the authentication context for the logged-in user via the cookie obtained from the earlier Playwright login step, and sets the target host
        #   -v: mounts the pipeline workspace into the container and sets where to output the results of the scan
        #   scan.sh: loads the scan plan, configures the authentication headers, runs spidering, and sets the attack level, i.e. active or passive (active is a more aggressive form of scanning than passive)
        # ------------------------------------------------------
        - script: |
            set -e
            mkdir -p "$(Pipeline.Workspace)/TestResults"

            echo "Running DAST scan against target: $(WEBAPP_BASE_URL)"
            TARGET_HOST=$(echo $(WEBAPP_BASE_URL) | sed -E 's#https?://([^/]+)/?.*#\1#')

            docker run --rm \
              -u 0:0 \
              -e AUTH_HEADER_NAME="Cookie" \
              -e AUTH_HEADER_VALUE="$(AUTH_COOKIE)" \
              -e AUTH_HEADER_DOMAIN="$TARGET_HOST" \
              -v "$(Pipeline.Workspace)":/scan/work \
              -v "$(Pipeline.Workspace)/TestResults":/scan/output \
              $(DAST_ENGINE_IMAGE) \
              scan.sh --plan /scan/work/dast-plan.yaml --loglevel DEBUG \
              2>&1 | tee "$(Pipeline.Workspace)/dast_output.log"

            echo "✔ DAST scan complete."
          displayName: "Run DAST Scan (Authenticated)"

        # ------------------------------------------------------
        # STEP 4 — Fail Only on High Severity Findings
        # Generic XML/JSON parsing to detect critical results.
        # ------------------------------------------------------
        - script: |
            set -e
            REPORT="$(Pipeline.Workspace)/TestResults/active-report.xml"

            if [ ! -f "$REPORT" ]; then
              echo "❌ Missing DAST report: $REPORT"
              tail -n 200 "$(Pipeline.Workspace)/dast_output.log" || true
              exit 1
            fi

            echo "Scanning DAST report for HIGH severity issues..."
            if grep -q "<riskcode>3</riskcode>" "$REPORT"; then
              echo "❌ High severity vulnerabilities detected!"
              exit 1
            fi

            echo "✔ No HIGH severity vulnerabilities found."
          displayName: "Fail Only on HIGH Severity"

        # ------------------------------------------------------
        # STEP 5 — Publish Results
        # ------------------------------------------------------
        - task: PublishBuildArtifacts@1
          inputs:
            PathtoPublish: "$(Pipeline.Workspace)/TestResults"
            ArtifactName: "DAST-Scan-Results"
            publishLocation: "Container"
          displayName: "Publish DAST Results"
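Step 4 above fails the build with a simple grep for `<riskcode>3</riskcode>` in the ZAP traditional-xml report. If you later want to report *which* high severity findings were hit, a slightly more structured sketch (the `highRiskAlerts` helper is hypothetical, not part of the pipeline; a production version would use a proper XML parser) could walk each `<alertitem>` block and collect the alert names:

```javascript
// Summarise HIGH severity findings (riskcode 3) from a ZAP traditional-xml report.
// Regex-based sketch only; swap in an XML parser for anything production-grade.
function highRiskAlerts(reportXml) {
  const alerts = [];
  const itemRegex = /<alertitem>([\s\S]*?)<\/alertitem>/g;
  let match;
  while ((match = itemRegex.exec(reportXml)) !== null) {
    const item = match[1];
    if (/<riskcode>3<\/riskcode>/.test(item)) {
      // <alert> holds the finding name in ZAP's traditional XML format.
      const nameMatch = item.match(/<alert>([\s\S]*?)<\/alert>/);
      alerts.push(nameMatch ? nameMatch[1] : 'unknown');
    }
  }
  return alerts;
}
```

Failing the build then becomes `if (highRiskAlerts(xml).length > 0) process.exit(1);`, with the names printed for the build log.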

An example of the Playwright login file (referred to as browser-login.js in the pipeline logic). This does the heavy lifting of logging into the website and generating the authentication cookie for DAST to reuse when authenticating:

const { chromium } = require('playwright');

function joinUrl(base, path) {
  // Trim trailing slash from base and leading slash from path, then join with one slash
  const b = (base || '').replace(/\/+$/, '');
  const p = (path || '').replace(/^\/+/, '');
  return `${b}/${p}`;
}

(async () => {
  const browser = await chromium.launch({ headless: true });
  const context = await browser.newContext();
  const page = await context.newPage();

  const siteUrl = process.env.WEBAPP_BASE_URL;                  // e.g., https://www.mytestsite.com
  const signinEndpoint = process.env.WEBAPP_LOGIN_ENDPOINT; // e.g., signin-aad-b2c_1_endpoint
  const username = process.env.TEST_USER_NAME;
  const password = process.env.TEST_USER_PASSWORD;

  if (!siteUrl || !signinEndpoint) {
    throw new Error('WEBAPP_BASE_URL and/or WEBAPP_LOGIN_ENDPOINT not provided.');
  }
  if (!username || !password) {
    throw new Error('TEST_USER_NAME and/or TEST_USER_PASSWORD not provided.');
  }

  const portalLoginUrl = joinUrl(siteUrl, signinEndpoint);
  console.log("Navigating to portal login:", portalLoginUrl);
  await page.goto(portalLoginUrl, { waitUntil: 'networkidle' });

  // --- Adjust selectors to your B2C page template if needed ---
  // Common default field names on B2C page templates:
  await page.fill('input[name="login"], input[name="Email"], input[type="email"]', username, { timeout: 15000 }).catch(() => {});
  await page.fill('input[name="password"], input[type="password"]', password, { timeout: 15000 }).catch(() => {});
  // A typical submit button
  const submitSelector = 'button[type="submit"], input[type="submit"], #next';
  await page.click(submitSelector);

  // Wait for redirect back to the portal after successful sign-in
  await page.waitForLoadState('networkidle');

  // Export cookies as a single Cookie header string
  const cookies = await context.cookies();
  const cookieHeader = cookies.map(c => `${c.name}=${c.value}`).join('; ');

  if (!cookieHeader || cookieHeader.trim().length === 0) {
    throw new Error('No cookies captured after login. Check the selectors or B2C policy behavior.');
  }

  // Avoid printing the raw cookie header — it is a live session credential
  // and would otherwise be persisted in the build logs.
  console.log(`Captured ${cookies.length} cookies for the authenticated session.`);

  // Persist for the pipeline
  const fs = require('fs');
  fs.writeFileSync('auth_cookie.txt', cookieHeader); // filename expected by the pipeline's login step

  await browser.close();
})();
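As an aside, the `sed` expression in Step 3 that derives `TARGET_HOST` from the base URL can be reproduced (and unit tested) with Node's built-in `URL` class. A small sketch, not part of the pipeline itself; the `targetHost` name is my own:

```javascript
// Derive the cookie domain for the DAST engine from the target base URL,
// mirroring the sed expression used in Step 3 of the pipeline.
function targetHost(baseUrl) {
  return new URL(baseUrl).host; // e.g. "www.mytestsite.com"; includes the port if one is set
}
```

Using the platform URL parser avoids edge cases (ports, trailing paths, query strings) that a hand-rolled regex can miss.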

Example of the DAST plan *.yaml file (referred to as dast-plan.yaml in the DAST pipeline snippet). It sets configuration values for the test URL, the authentication type, whether the testing is active or passive (or both), where to output the report, and what criteria to fail on:

env:
  contexts:
    - name: testmysite
      urls:
        - "https://www.mytestsite.com/"
      includePaths:
        - "https://www.mytestsite.com/.*"
      authentication:
        method: "manual"
      sessionManagement:
        method: "cookie"

jobs:
  # -------------------------------------------
  # Crawl the web site
  # -------------------------------------------
  - type: spider
    parameters:
      context: "testmysite"
      maxDuration: 3

  # -------------------------------------------
  # Wait for passive scanning to finish
  # -------------------------------------------
  - type: passiveScan-wait
    parameters:
      maxDuration: 180

  # -------------------------------------------
  # Active Scan (full attack)
  # -------------------------------------------
  - type: activeScan
    parameters:
      context: "testmysite"
      maxRuleDurationInMins: 5

  # -------------------------------------------
  # Report output
  # /zap/output is mounted from $(Pipeline.Workspace)/TestResults
  # -------------------------------------------
  - type: report
    parameters:
      template: "traditional-xml"
      reportDir: "/zap/output"
      reportFile: "active-report"

  # -------------------------------------------
  # Exit Status (Medium triggers warning)
  # FAIL pipeline on High (handled in YAML stage)
  # -------------------------------------------
  - type: exitStatus
    parameters:
      errorLevel: "High"
      warnLevel: "Medium"
      okExitValue: 0
      errorExitValue: 1
      warnExitValue: 2

Once the results of the scan have been generated, you can find them in the published artifacts as an active-report.xml file.

Summary

In summary, this was the final part of a three-part series in which I’ve discussed DevSecOps governance and the implementation of Dynamic Application Security Testing (DAST) within build pipelines, for the purpose of testing the integrity and security of newly deployed websites. I hope you find this helpful on your journey towards a shift-left mindset.

