Cloud Native Buildpacks Security Assessment

Security reviewers: Andres Vega, Adith Sudhakar, Cole Kennedy, Daniel Papandrea, Daniel Tobin, Magno Logan, Matthew Giasa, Matt Jarvis.

Buildpacks team: Stephen Levine, Sambhav Kothari.

This document details the design goals and security implications of Cloud Native Buildpacks to aid in the security assessment by the CNCF Security Technical Advisory Group.

Metadata

  • Software: All available code: https://github.com/buildpacks
    Core implementation: https://github.com/buildpacks/lifecycle
    Specification: https://github.com/buildpacks/spec
  • Security Provider: No. A primary function of the Cloud Native Buildpacks tooling is to build container images that are secure, compliant, and up-to-date. However, security is one of many goals for the project.
  • SBOM: Until SPDX SBOM is available:
    https://github.com/buildpacks/pack/blob/main/go.mod
    https://github.com/buildpacks/lifecycle/blob/main/go.mod
    https://github.com/buildpacks/libcnb/blob/main/go.mod
    https://github.com/buildpacks/imgutil/blob/main/go.mod
    https://github.com/buildpacks/registry-api/blob/main/Gemfile
  • Buildpack API Security Considerations: https://github.com/buildpacks/spec/blob/main/buildpack.md#security-considerations
  • Platform API Security Considerations: https://github.com/buildpacks/spec/blob/master/platform.md#security-considerations
  • Default and optional configs: The default guidance for platform maintainers, including security trade-offs, is covered by the specification: https://github.com/buildpacks/spec
  • Architectural Diagrams: https://docs.google.com/presentation/d/1G6slFtpHPjIx-JHzRXAjJcWnQx6a5mRgW6QO_3XqeQo/edit#slide=id.g6e655be570_3_2356

Overview

The Cloud Native Buildpacks project provides tooling to transform source code into container images using modular, reusable build functions called buildpacks. To accomplish this, the project takes advantage of advanced features in the OCI image standard that are underutilized by the Dockerfile model.

  • Background.

    The original buildpack concept was conceived by Heroku in 2011. The Cloud Native Buildpacks project was initiated by Pivotal (now VMware) and Heroku in January 2018 and joined the Cloud Native Sandbox in October 2018. The project aims to unify the buildpack ecosystems with a platform-to-buildpack contract that is well-defined and that incorporates learnings from maintaining production-grade buildpacks for years at both Pivotal and Heroku. Cloud Native Buildpacks brings the benefits of a managed dependency stack to Container Native standards (like OCI images and Docker registries), while taking advantage of cutting-edge features like content-addressable image layers and cross-repo blob mounting to achieve scalability and security outcomes that are difficult to achieve with Dockerfiles.

  • Goal.

    The goal of the Cloud Native Buildpacks (CNB) project is to transform source code into container images with a focus on developer productivity, container security, and day-2 operations involving container images at scale.

    The security guarantees imparted to users are:

  1. Container images generated by Cloud Native Buildpacks tooling meet a minimum standard of container security, for example:

    1. All processes must use a non-root UID/GID
    2. Build-time and runtime base images are always specified separately, so that build-time dependencies such as compilers are not included in the image
    3. Build-time and runtime environment variables are always specified separately, so that sensitive build-time configuration is not included in the image
  2. Container images generated by Cloud Native Buildpacks tooling must be bit-for-bit reproducible when the build tooling provided by the buildpacks supports reproducibility.

  3. Container images generated by Cloud Native Buildpacks tooling must be “rebasable,” so that ABI-compatible OS packages with critical security patches may be upgraded without rebuilding application-level or runtime-level layers.

  4. Container images generated by Cloud Native Buildpacks tooling must contain metadata about their dependencies for auditing purposes. (This is reliant on buildpack implementations to a degree, but metadata is mandatory to use certain API features.)

  5. The container build process must be usable with untrusted application code and buildpack code inside a controlled infrastructure environment. This implies, but is not limited to, the following:

    1. All containers running CNB tooling must run without any capabilities or privileges.
    2. All containers that may execute buildpack or app code must additionally run as a non-root user.
    3. Infrastructure credentials, such as VCS and registry credentials, must not be present in containers that execute buildpack or app code.
    4. CNB tooling must allow buildpacks to generate images without egress network traffic, i.e., buildpacks must be allowed to bundle language-specific runtimes and other dependencies so that egress traffic is unnecessary.

    See Security Considerations for more details.
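As a rough illustration of guarantees 1 and 2 above, a platform or auditor could inspect the configuration of an exported image. The following Go sketch is not part of the CNB tooling; the config path and the exact checks are illustrative assumptions.

```go
// Minimal sketch (not part of the CNB tooling): sanity-checking an exported
// image config against guarantees 1 and 2 above. The config path and the
// exact checks are illustrative assumptions.
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"strings"
	"time"
)

type ociConfig struct {
	Created time.Time `json:"created"`
	Config  struct {
		User string `json:"User"`
	} `json:"config"`
}

func main() {
	raw, err := os.ReadFile("config.json") // hypothetical path to the image's OCI config
	if err != nil {
		panic(err)
	}
	var cfg ociConfig
	if err := json.Unmarshal(raw, &cfg); err != nil {
		panic(err)
	}
	// Guarantee 1a: processes must not run as root.
	if cfg.Config.User == "" || cfg.Config.User == "root" || cfg.Config.User == "0" ||
		strings.HasPrefix(cfg.Config.User, "0:") {
		fmt.Println("FAIL: image is configured to run as root (or has no user set)")
	}
	// Guarantee 2: reproducible images pin the creation date to a fixed,
	// non-zero time (the document mentions a time in 1980).
	if cfg.Created.Year() != 1980 {
		fmt.Println("WARN: creation date is not the fixed timestamp; build may not be reproducible")
	}
}
```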

  • Non-goals.

    The Cloud Native Buildpacks project does not provide language-specific buildpacks. Instead, individual communities and vendors provide buildpacks, which they may publish to the Buildpack Registry. Some examples include Paketo Buildpacks, Heroku Buildpacks, and Google Cloud Buildpacks. The goal of the Cloud Native Buildpacks project is to provide only the specification and tooling that make it easy for platforms to implement building images using buildpacks.

Intended Use

Cloud Native Buildpacks may be used to build container images in any cloud environment that supports running OCI or Docker v2 images and has access to a Docker v2 registry. Platform Maintainers may use the Cloud Native Buildpacks tooling to achieve this outcome.

Cloud Native Buildpacks may be used to run buildpack builder images on source code to create application container images. Buildpack Maintainers may use the Cloud Native Buildpacks tooling to develop and deploy their buildpack builder images. Application Developers may use the buildpack builder images with Cloud Native Buildpacks tooling to create application container images.

The Cloud Native Buildpacks lifecycle, which allows platforms to implement the Buildpack API, executes in a series of containers. Platforms are encouraged to run these containers without any host privileges or capabilities, and to perform operations that require infrastructure access (i.e., access to Docker registries or VCS) only in containers that do not execute buildpack or application code. Additionally, containers that execute buildpack or app code are always run with a non-zero in-container UID and GID. The lifecycle is designed to facilitate this mode of operation by default. See the “Project Design” section below for a more complete description of these operational aspects.
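For example, a Kubernetes-based platform might declare the container that executes buildpack and application code along the lines of the following sketch. The image name, UID/GID values, and volume names are assumptions for illustration; this is not configuration shipped by the project.

```go
// Minimal sketch of how a Kubernetes-based platform might declare the container
// that executes buildpack and application code: non-root UID/GID, no added
// capabilities, no privilege escalation, and no registry credentials mounted.
// Image name, UID/GID, and volume names are assumptions for illustration.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func buildPhaseContainer() corev1.Container {
	nonRoot := true
	uid := int64(1000) // would come from the builder image labels in practice
	gid := int64(1000)
	noEscalation := false
	return corev1.Container{
		Name:  "build",
		Image: "example.com/acme/builder:latest", // hypothetical builder image
		SecurityContext: &corev1.SecurityContext{
			RunAsNonRoot:             &nonRoot,
			RunAsUser:                &uid,
			RunAsGroup:               &gid,
			AllowPrivilegeEscalation: &noEscalation,
			Capabilities:             &corev1.Capabilities{Drop: []corev1.Capability{"ALL"}},
		},
		// Only the layers volume is mounted; credentials are reserved for
		// containers that run trusted platform code (analyze, restore, export).
		VolumeMounts: []corev1.VolumeMount{{Name: "layers", MountPath: "/layers"}},
	}
}

func main() {
	c := buildPhaseContainer()
	fmt.Printf("%s runs as %d:%d with capabilities dropped: %v\n",
		c.Name, *c.SecurityContext.RunAsUser, *c.SecurityContext.RunAsGroup,
		c.SecurityContext.Capabilities.Drop)
}
```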

Project Design

[Architecture diagram]

Additional architectural diagrams are available in this presentation.

An ideal example of how a platform might implement an application build using lifecycle could be:

  1. Detection runs in a user-provided builder image using the non-zero UID and GID specified in the builder image labels. A structured list of buildpacks is produced in a TOML file that is left in a dedicated volume.
  2. Analysis runs in a platform-provided image using in-container UID zero and GID zero to restore layer metadata into a dedicated volume. Registry credentials are provided for this stage, but no user code is executed.
  3. Restore runs in a platform-provided image using in-container UID zero and GID zero to restore layers into a dedicated volume. Registry credentials are provided for this stage, but no user code is executed.
  4. Build runs in a user-provided builder image using the non-zero UID and GID specified in the builder image labels. A structured list of buildpacks is provided, and layer directories are produced in a volume owned by the aforementioned UID and GID. After buildpacks are executed, file and directory timestamps are set to a fixed, non-zero time in 1980 (for OS compatibility) so that the image is reproducible.
  5. Export runs in a platform-provided image using in-container UID zero and GID zero to transfer newly-generated layers from the dedicated volume to the Docker registry. Registry credentials are provided for this stage, but no user code is executed. The image creation date is set to a fixed, non-zero time in 1980 (for OS compatibility) so that it is reproducible.
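The phase sequence above can be summarized as a small table of per-phase privileges. The sketch below uses assumed field names and UID/GID values (not the lifecycle's own configuration format) and simply asserts the invariant that phases running untrusted code never receive registry credentials.

```go
// Minimal sketch (assumed field names and UID/GID values, not the lifecycle's
// own configuration format) of the isolation model described above: phases
// that execute untrusted buildpack or application code never receive registry
// credentials, and only trusted platform phases run as in-container root.
package main

import "fmt"

type phase struct {
	Name          string
	Image         string // "builder" = user-provided, "platform" = platform-provided
	User          string // in-container UID:GID
	RegistryCreds bool   // are registry credentials mounted into the container?
	Untrusted     bool   // does the phase execute buildpack or application code?
}

func main() {
	phases := []phase{
		{"detect", "builder", "1000:1000", false, true},
		{"analyze", "platform", "0:0", true, false},
		{"restore", "platform", "0:0", true, false},
		{"build", "builder", "1000:1000", false, true},
		{"export", "platform", "0:0", true, false},
	}
	for _, p := range phases {
		// Invariant from the design above: untrusted code never sees credentials.
		if p.Untrusted && p.RegistryCreds {
			panic("misconfigured phase: " + p.Name)
		}
		fmt.Printf("%-8s image=%-8s user=%-9s creds=%-5v untrusted=%v\n",
			p.Name, p.Image, p.User, p.RegistryCreds, p.Untrusted)
	}
}
```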

An ideal example of how a platform might implement security patching at scale using lifecycle could be:

  1. A new base runtime image is uploaded to the Docker registry by a platform operator. This new base runtime image contains only ABI-compatible security patches to LTS bits.
  2. Rebase executes inside or outside of a containerized environment to modify each application image manifest that uses an older copy of the base runtime image so that it points to the new base runtime.
  3. All modified images are deployed using their new image digests, which have changed due to the updated manifest pointer. During the deploy, only a single copy of the new base runtime image is transferred to each VM node. No other image layers are transferred during the deploy process.
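The rebase step can be sketched with simplified manifest structures. The types below are stand-ins for the OCI manifest rather than the lifecycle's actual data model; only the run-image layer references change, so application and buildpack layers are neither rebuilt nor re-uploaded.

```go
// Minimal sketch of the rebase idea using simplified stand-in types (not the
// lifecycle's actual data model): only the manifest's references to the run
// image layers are swapped. In practice the image config is updated alongside
// the manifest, and the image digest changes as a result.
package main

import "fmt"

type descriptor struct {
	Digest string
	Size   int64
}

type manifest struct {
	Config descriptor
	Layers []descriptor // run-image layers first, then buildpack/app layers
}

// rebase replaces the first len(oldRun) layers (the old run image) with the
// layers of the patched run image and keeps everything above them unchanged.
func rebase(app manifest, oldRun, newRun []descriptor) manifest {
	rebased := manifest{Config: app.Config}
	rebased.Layers = append(rebased.Layers, newRun...)
	rebased.Layers = append(rebased.Layers, app.Layers[len(oldRun):]...)
	return rebased
}

func main() {
	app := manifest{
		Layers: []descriptor{
			{Digest: "sha256:old-os-1"}, {Digest: "sha256:old-os-2"}, // old run image
			{Digest: "sha256:runtime"}, {Digest: "sha256:app"}, // untouched layers
		},
	}
	newRun := []descriptor{{Digest: "sha256:new-os-1"}, {Digest: "sha256:new-os-2"}}
	fmt.Printf("%+v\n", rebase(app, app.Layers[:2], newRun))
}
```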

Configuration and Set-Up

The default guidance for platform maintainers, including security trade-offs, is covered by the specification: https://github.com/buildpacks/spec

User-facing documentation for buildpack maintainers and application developers is available here: https://buildpacks.io/docs/

The current default recommendations for lifecycle configuration by platforms assume that all buildpack and application code is untrusted. These recommendations sacrifice performance to achieve complete isolation of credentials and privileges from untrusted code. This RFC paves the way to provide more flexibility for single-user platforms.

  • Secure

    More detailed security considerations are addressed in the specification:
    https://github.com/buildpacks/spec/blob/main/buildpack.md#security-considerations
    https://github.com/buildpacks/spec/blob/master/platform.md#security-considerations

Project Compliance

  • Compliance

    While the Cloud Native Buildpacks tooling itself is not documented to meet specific security standards, it facilitates implementing those standards in ways that are unique to CNB as a container build solution.

    For example:

  1. CNB tooling allows platform maintainers to enforce the use of a certified base image (e.g., meeting DISA STIG) that can be patched for many applications, including pre-built applications, without changing application behavior.
  2. CNB tooling allows buildpack maintainers to provide metadata that conforms to security standards. For example, dependencies installed in the image may be described in image metadata using NVD identifiers.
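As an illustration of item 2, an auditor could read dependency metadata from the labels recorded on a built image. The sketch below assumes the image config JSON has been saved locally (hypothetical path); io.buildpacks.build.metadata is the label CNB-built images use for build metadata, though its exact contents depend on the Platform API version in use.

```go
// Minimal sketch: reading dependency metadata from a built image's labels for
// auditing. Assumes the image config JSON has been saved locally (hypothetical
// path); the label contents depend on the Platform API version in use.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type imageConfig struct {
	Config struct {
		Labels map[string]string `json:"Labels"`
	} `json:"config"`
}

func main() {
	raw, err := os.ReadFile("config.json") // hypothetical path to the image config
	if err != nil {
		panic(err)
	}
	var cfg imageConfig
	if err := json.Unmarshal(raw, &cfg); err != nil {
		panic(err)
	}
	// The label value is itself JSON describing the buildpacks that ran and
	// the dependencies they contributed.
	fmt.Println(cfg.Config.Labels["io.buildpacks.build.metadata"])
}
```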

Security Analysis

  • Attacker Motivations

    An attacker might interact with CNB tooling or specification when motivated by these direct outcomes:

  1. Supply-chain compromise

    • Compromising the CNB project tooling or third-party buildpacks
    • Providing malicious buildpacks, builders, or build extensions
  2. System intrusion via build infrastructure

    • Taking advantage of flaws in CNB tooling, such as inadequate isolation of untrusted code, to compromise build infrastructure.
    • Taking advantage of indirect platform misconfigurations, such as lax egress policies, to access internal resources
  3. System intrusion via built applications

    • Taking advantage of flaws in running application images that are due to deficiencies in the way the CNB tooling or buildpacks built them.

    These direct outcomes may be used to achieve second-order outcomes such as data exfiltration, denial of service, etc.

  • Predisposing Conditions

    1. Platform maintainers may fail to isolate untrusted application and/or buildpack code from registry credentials by running certain build stages (such as detect or build) with credentials present in the environment.
    2. Platform maintainers may use privileged containers, or containers with unnecessary capabilities, to run untrusted buildpack or application code. This could lead to inappropriate changes to host configuration or container breakout when combined with additional vulnerabilities.
    3. Platform maintainers may use build-time containers with lax egress networking policies that allow access to internal subnets to run untrusted buildpack or app code. This could lead to compromise of internal systems.
    4. Buildpack maintainers may provide vulnerable dependencies to the application or misconfigure the application.
    5. A malicious actor may re-distribute safe, third-party buildpacks in a builder that contains a modified lifecycle. Application developers may not realize that certain non-buildpack components of the builder are exposed to registry credentials. This is actively being addressed.
  • Expected Attacker Capabilities

    When unmodified CNB tooling is properly configured by a platform maintainer, we assume that an attacker may be able to compromise an application by providing malicious buildpacks, stacks, or application extensions or by taking advantage of those bits when they are vulnerable or improperly configured. However, we assume that an attacker is unable to attain registry credentials to compromise other images.

    When CNB tooling is improperly configured or the tooling itself is compromised, we assume that an attacker may be able to compromise any number of applications on the registry, build infrastructure that is exposed to untrusted code, and supply chains involving compromised images.

  • Attack Risks and Effects

    Supply-chain attacks are incredibly risky. Many enterprises rate them as potential company-ending events. Not only could they lead to complete compromise of any data or infrastructure systems that application code has access to, but they could also lead to compromise of customer systems when compromised products are distributed to customers in the form of pre-built images.

    Application vulnerabilities introduced by outdated buildpacks or stacks present a level of risk that is quantified by the CVSS scale for a given vulnerability.

    Build system vulnerabilities could lead to risky supply-chain attacks, but they could also lead to less risky scenarios such as denial of service or improper use of resources.

  • Security Degradation

    If an attacker is able to obtain registry credentials, then all applications on the registry may be compromised. However, build infrastructure would not necessarily be compromised unless it executes images on the registry with privileges.

    If an attacker is able to provide malicious buildpacks or stack images, then all applications built using those artifacts may be compromised. However, build infrastructure would not necessarily be compromised unless it executes those images with privileges.

    If an attacker is able to compromise build infrastructure (e.g., via a container escape executed by malicious buildpacks, stack images, or application code; or by compromising images that comprise the build infrastructure itself), then all of the above mentioned degradations may apply.

  • Compensating Mechanisms

    A properly configured CNB build executes with complete isolation of registry / VCS credentials and untrusted buildpack or application code. This means that platforms building untrusted applications with untrusted buildpacks should not be vulnerable to VCS or registry compromise.

    Additionally, a CNB build may be executed in unprivileged containers with zero capabilities. This means that compromised buildpacks, stacks, application code, and/or CNB tooling cannot be used to compromise the host build system without a severe underlying kernel or hardware vulnerability.

    Both buildpack code and application code execute with a non-zero UID and GID. This means that many OS-level files cannot be modified by untrusted code.

Secure Development Practices

  • Development Pipeline

    • Automated testing is employed extensively throughout all code bases.
    • Automated testing is enforced via CI systems (mostly GitHub Actions).
    • All PRs require sub-team maintainer approval.
    • All changes to the specification or RFCs for project-wide changes require super majority approval of the core team.
    • Repositories use gosec and CodeQL for static analysis.
    • Repositories use Dependabot to keep dependencies up to date and secure.
    • More information: https://github.com/buildpacks/community
  • Communication Channels

    Team members use Slack (slack.buildpacks.io) and GitHub (github.com/buildpacks) for all internal and inbound asynchronous communication. Internal, inbound, and outbound communication happens synchronously at twice-weekly working group meetings, which are open. Delicate topics (such as code of conduct violations) are discussed in private, maintainer-only Slack channels. Some outbound communication happens over the CNCF CNB mailing list.

  • Ecosystem

    Cloud Native Buildpacks tooling builds container images that can be deployed on all platforms that support container standards (OCI), including Kubernetes. Additionally, Cloud Native Buildpacks builds can be securely configured to execute on those platforms. As far as we know, CNB is the only vendor-neutral API for creating OCI images. CNB is a true, language-agnostic alternative to Dockerfiles.

The following vendors provide Cloud Native Buildpacks:

Additionally, CNB is adopted by the following platforms/tools within the Cloud Native Ecosystem:

Security Issue Resolution

  • Responsible Disclosure Process

    • Vulnerability Response Process

    Documented here: https://github.com/buildpacks/.github/blob/master/SECURITY.md

  • Incident Response

    Designated core team members decrypt and respond to reports within 24 hours. Vulnerabilities are patched and announced following responsible disclosure best practices.

Roadmap

Appendix

  • Known Issues Over Time

Security-related design issues come up occasionally and are addressed:

Automated testing is employed extensively.

Given that CNB is developer tooling, many common classes of security vulnerability (such as those applicable to a service) do not apply.

  • CII Best Practices

Currently, Buildpacks meets the passing criteria of the Core Infrastructure Initiative (CII) Best Practices badging program.

  • Case Studies

Numerous real-world commercial offerings employ CNB:

In general, these are commercial on-prem, private cloud, or SaaS offerings that provide end-user build functionality.

  • Related Projects / Vendors

CNB compares closely to the following technologies:

  • Jib - Similar to CNB in that it constructs OCI images directly without a Dockerfile or Docker. Jib is Java-specific, but uses the same techniques as CNB without requiring build containers.
  • Ko - Similar to CNB in that it constructs OCI images directly without a Dockerfile or Docker. Ko is Go-specific, but uses the same techniques as CNB without requiring build containers.

The following technologies can be used to build container images using Dockerfiles. Like CNB, they don’t require Docker. Unlike CNB, they are limited to Dockerfile-based workflows:

  • Kaniko - Provides Dockerfile support using similar userspace techniques for container image generation.
  • Buildah, podman, img, etc. - Provide Dockerfile support, and unlike CNB, require nested containers with user-namespacing.

In general, CNB often competes with Dockerfiles for developer mindshare. Compared to CNB, Dockerfiles do not allow at-scale security patching for pre-built images or declarative build definitions. They also generally require build logic to be present in each application. For more information, see buildpacks.io and this deck.