Internet-Draft | Attestation Results | August 2023 |
Voit, et al. | Expires 2 March 2024 |
This document defines reusable Attestation Result information elements. When these elements are offered to Relying Parties as Evidence, different aspects of Attester trustworthiness can be evaluated. Additionally, where the Relying Party is interfacing with a heterogeneous mix of Attesting Environment and Verifier types, consistent policies can be applied to subsequent information exchange between each Attester and the Relying Party.¶
This note is to be removed before publishing as an RFC.¶
Source for this draft and an issue tracker can be found at https://github.com/ietf-rats-wg/draft-ietf-rats-ar4si.¶
This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.¶
Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.¶
Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."¶
This Internet-Draft will expire on 2 March 2024.¶
Copyright (c) 2023 IETF Trust and the persons identified as the document authors. All rights reserved.¶
This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (https://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Revised BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Revised BSD License.¶
The first paragraph of the May 2021 US Presidential Executive Order on Improving the Nation's Cybersecurity [US-Executive-Order] ends with the statement "the trust we place in our digital infrastructure should be proportional to how trustworthy and transparent that infrastructure is." Later this order explores aspects of trustworthiness such as an auditable trust relationship, which it defines as an "agreed-upon relationship between two or more system elements that is governed by criteria for secure interaction, behavior, and outcomes."¶
The Remote ATtestation procedureS (RATS) architecture [RFC9334] provides a useful context for programmatically establishing and maintaining such auditable trust relationships. Specifically, the architecture defines conceptual messages conveyed between architectural subsystems to support trustworthiness appraisal. The RATS conceptual message used to convey evidence of trustworthiness is the Attestation Results. Attestation Results include Verifier-generated appraisals of an Attester, covering such information as the identity of the Attester, the security mechanisms employed on the Attester, and the Attester's current state of trustworthiness.¶
Generated Attestation Results are ultimately conveyed to one or more Relying Parties. Reception of an Attestation Result enables a Relying Party to determine what action to take with regards to an Attester. Frequently, this action will be to choose whether to allow the Attester to securely interact with the Relying Party over some connection between the two.¶
When determining whether to allow secure interactions with an Attester, a Relying Party is challenged with a number of difficult problems which it must be able to handle successfully. These problems include:¶
To address these problems, it is important that specific Attestation Result information elements are framed independently of Attesting Environment specific constraints. If they are not, a Relying Party would be forced to adapt to the syntax and semantics of many vendor specific environments. This is not a reasonable ask as there can be many types of Attesters interacting with or connecting to a Relying Party.¶
The business need therefore is for common Attestation Result information element definitions. With these definitions, consistent interaction or connectivity decisions can be made by a Relying Party where there is a heterogenous mix of Attesting Environment types and Verifier types.¶
This document defines information elements for Attestation Results in a way which normalizes the trustworthiness assertions that can be made from a diverse set of Attesters.¶
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in BCP 14 [RFC2119] [RFC8174] when, and only when, they appear in all capitals, as shown here.¶
The following terms are imported from [RFC9334]: Appraisal Policy for Attestation Results, Attester, Attesting Environment, Claims, Evidence, Relying Party, Target Environment and Verifier.¶
[RFC9334] also describes topological patterns that illustrate the need for interoperable conceptual messages. The two patterns called "background-check model" and "passport model" are imported from the RATS architecture and used in this document as a reference to the architectural concepts: Background-Check Model and Passport Model.¶
Newly defined terms for this document:¶
a bundle of Evidence which includes at least the following:¶
Evidence which unambiguously identifies an identity. Identity Evidence could take different forms, such as a certificate, or a signature which can be appraised to have only been generated by a specific private/public key pair.¶
a specific quanta of trustworthiness which can be assigned by a Verifier based on its appraisal policy.¶
a categorization of the levels of trustworthiness which may be assigned by a Verifier to a specific Trustworthiness Claim. These enumerated categories are: Affirming, Warning, Contraindicated, and None.¶
a set of zero to many Trustworthiness Claims assigned during a single appraisal procedure by a Verifier using Evidence generated by an Attester. The vector is included within Attestation Results.¶
A Verifier generates the Attestation Results used by a Relying Party. When a Relying Party needs to determine whether to permit communications with an Attester, these Attestation Results must contain a specific set of information elements. This section defines those information elements, and in some cases encodings for information elements.¶
When the action is a communication establishment attempt with an Attester, there is only a limited set of actions which a Relying Party might take. These actions include:¶
There are three categories of information which must be conveyed to the Relying Party (which may also be integrated with a Verifier) before it determines which of these actions to take.¶
The following sections detail requirements for these three categories.¶
Identity Evidence must be conveyed during the establishment of any trust-based relationship. Specific use cases will define the minimum types of identities required by a particular Relying Party as it evaluates Attestation Results, and perhaps additional associated Evidence. At a bare minimum, a Relying Party MUST start with the ability to verify the identity of a Verifier it chooses to trust. Attester identities may then be acquired through signed or encrypted communications with the Verifier identity and/or by pre-provisioning Attester public keys in the Relying Party.¶
During the Remote Attestation process, the Verifier's identity must be established with a Relying Party, often via a Verifier signature across recent Attestation Results. This Verifier identity could only have come from a key pair maintained by a trusted developer or operator of the Verifier.¶
Additionally, each set of Attestation Results must be provably and non-repudiably bound to the identity of the original Attesting Environment which was evaluated by the Verifier. This is accomplished by satisfying two requirements. First, the Verifier-signed Attestation Results MUST include sufficient Identity Evidence to ensure that this Attesting Environment signature refers to the same Attesting Environment appraised by the Verifier. Second, where the passport model is used as a subsystem, an Attesting Environment signature which spans the Verifier signature MUST also be included. As the Verifier signature already spans the Attester Identity as well as the Attestation Results, this restricts the viability of spoofing attacks.¶
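The nested-signature binding described above can be sketched as follows. This is a minimal, non-normative illustration in Python: HMAC stands in for the asymmetric signatures, and all key names and payload fields are hypothetical.¶

```python
import hashlib
import hmac
import json

def sign(key: bytes, payload: bytes) -> bytes:
    """HMAC stands in for an asymmetric signature, purely for illustration."""
    return hmac.new(key, payload, hashlib.sha256).digest()

# Hypothetical keys; in practice these are private keys held by the
# Verifier and by the hardware-rooted Attesting Environment.
VERIFIER_KEY = b"verifier-signing-key"
ATTESTING_ENV_KEY = b"attesting-environment-key"

# First requirement: the Verifier-signed Attestation Results include
# Identity Evidence naming the appraised Attesting Environment.
ar_payload = json.dumps({
    "attesting-environment-id": "env-1234",
    "trustworthiness-claims": {"configuration": 2},
}, sort_keys=True).encode()
verifier_signature = sign(VERIFIER_KEY, ar_payload)

# Second requirement (passport model as a subsystem): an Attesting
# Environment signature spans the Verifier signature, binding the two.
env_signature = sign(ATTESTING_ENV_KEY, verifier_signature)

def spoof_check(ar: bytes, v_sig: bytes, e_sig: bytes) -> bool:
    """A Relying Party checks both layers before trusting the binding."""
    return (hmac.compare_digest(v_sig, sign(VERIFIER_KEY, ar)) and
            hmac.compare_digest(e_sig, sign(ATTESTING_ENV_KEY, v_sig)))

assert spoof_check(ar_payload, verifier_signature, env_signature)
```

Because the outer signature spans the inner one, substituting either layer invalidates the pair, which is what restricts the spoofing attacks mentioned above.¶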
In a subset of use cases, these two pieces of Identity Evidence may be sufficient for a Relying Party to successfully meet the criteria for its Appraisal Policy for Attestation Results. If the use case is a connection request, a Relying Party may simply then establish a transport session with an Attester after a successful appraisal. However an Appraisal Policy for Attestation Results will often be more nuanced, and the Relying Party may need additional information. Some Identity Evidence related policy questions which the Relying Party may consider include:¶
For any of these more nuanced appraisals, additional Identity Evidence or other policy related information must be conveyed or pre-provisioned during the formation of a trust context between the Relying Party, the Attester, the Attester's Attesting Environment, and the Verifier.¶
Per [RFC9334] Figure 2, an Attester and a corresponding Attesting Environment might not share common code or even hardware boundaries. Consequently, an Attester implementation MUST ensure that any Evidence which originates from outside the Attesting Environment has been collected and delivered securely before any Attesting Environment signing occurs. After the Verifier performs its appraisal, it will include sufficient information in the Attestation Results to enable a Relying Party to have confidence that the Attester's trustworthiness is represented via Trustworthiness Claims signed by the appropriate Attesting Environment.¶
This document recognizes three general categories of Attesters.¶
Each of these categories of Attesters above will be capable of generating Evidence which is protected using private keys / certificates which are not accessible outside of the corresponding Attesting Environment. The owner of these secrets is the owner of the identity which is bound within the Attesting Environment. Effectively this means that for any Attester identity, there will exist a chain of trust ultimately bound to a hardware-based root of trust in the Attesting Environment. It is upon this root of trust that unique, non-repudiable Attester identities may be founded.¶
There are several types of Attester identities defined in this document. This list is extensible:¶
Based on the category of the Attesting Environment, different types of identities might be exposed by an Attester.¶
Attester Identity type | Process-based | VM-based | HSM-based |
---|---|---|---|
chip-vendor | Mandatory | Mandatory | Mandatory |
chip-hardware | Mandatory | Mandatory | Mandatory |
target-environment | Mandatory | Mandatory | Optional |
target-developer | Mandatory | Optional | Optional |
instance | Optional | Optional | Optional |
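The table above can also be captured for machine checking. The following non-normative sketch encodes it as a lookup; the dictionary layout and category labels are illustrative, not defined by this document.¶

```python
# The identity-requirements table above, encoded for machine checking.
# Keys follow the Attester identity types listed in this document;
# category labels are illustrative shorthand.
IDENTITY_REQUIREMENTS = {
    "chip-vendor":        {"process-based": "mandatory", "vm-based": "mandatory", "hsm-based": "mandatory"},
    "chip-hardware":      {"process-based": "mandatory", "vm-based": "mandatory", "hsm-based": "mandatory"},
    "target-environment": {"process-based": "mandatory", "vm-based": "mandatory", "hsm-based": "optional"},
    "target-developer":   {"process-based": "mandatory", "vm-based": "optional",  "hsm-based": "optional"},
    "instance":           {"process-based": "optional",  "vm-based": "optional",  "hsm-based": "optional"},
}

def missing_mandatory(category: str, presented: set[str]) -> set[str]:
    """Identity types the given Attesting Environment category must
    present but did not."""
    return {t for t, req in IDENTITY_REQUIREMENTS.items()
            if req[category] == "mandatory" and t not in presented}

# Example: an HSM-based Attester presenting only a chip-vendor identity
# is still missing its mandatory chip-hardware identity.
assert missing_mandatory("hsm-based", {"chip-vendor"}) == {"chip-hardware"}
```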
It is expected that drafts subsequent to this specification will provide the definitions and value domains for specific identities, each of which falls within the Attester identity types listed above. In some cases the actual unique identities might be encoded as complex structures. An example complex structure might be a 'target-environment' encoded as a Software Bill of Materials (SBOM).¶
With the identity definitions and value domains, a Relying Party will have sufficient information to ensure that the Attester identities and Trustworthiness Claims asserted are actually capable of being supported by the underlying type of Attesting Environment. Consequently, the Relying Party SHOULD require Identity Evidence which indicates the type of Attesting Environment when it considers its Appraisal Policy for Attestation Results.¶
For the Verifier identity, it is critical for a Relying Party to review the certificate and chain of trust for that Verifier. Additionally, the Relying Party must have confidence that the Trustworthiness Claims being relied upon from the Verifier considered the chain of trust for the Attesting Environment.¶
There are two categorizations of Verifier identities defined in this document:¶
Within each category, communicating the identity can be accomplished via a variety of objects and encodings.¶
Any of the above identities used by the Appraisal Policy for Attestation Results need to be pre-established by the Relying Party before, or provided during, the exchange of Attestation Results. When provided during this exchange, the identity may be communicated either implicitly or explicitly.¶
An example of explicit communication would be to include the following Identity Evidence directly within the Attestation Results: a unique identifier for an Attesting Environment, the name of a key which can be provably associated with that unique identifier, and the set of Attestation Results which are signed using that key. As these Attestation Results are signed by the Verifier, it is the Verifier which is explicitly asserting the credentials it believes are trustworthy.¶
An example of implicit communication would be to include Identity Evidence in the form of a signature which has been placed over the Attestation Results asserted by a Verifier. It would be then up to the Relying Party's Appraisal Policy for Attestation Results to extract this signature and confirm that it only could have been generated by an Attesting Environment having access to a specific private key. This implicit identity communication is only viable if the Attesting Environment's public key is already known by the Relying Party.¶
One final step in communicating identity is proving the freshness of the Attestation Results to the degree needed by the Relying Party. A typical way to accomplish this is to embed an element of freshness within a signed portion of the Attestation Results. This element of freshness reduces the identity spoofing risks from a replay attack. For more on this, see Section 2.4.¶
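A minimal, non-normative sketch of embedding an element of freshness in the signed portion, assuming a nonce-based mechanism; HMAC stands in for the Verifier signature, and all names are hypothetical.¶

```python
import hashlib
import hmac
import json
import secrets

VERIFIER_KEY = b"verifier-signing-key"  # hypothetical key material

def sign_results_with_freshness(results: dict, nonce: bytes) -> dict:
    # The nonce sits inside the signed payload, so a replayed message
    # cannot satisfy a Relying Party that issued a different nonce.
    payload = json.dumps({"results": results, "nonce": nonce.hex()},
                         sort_keys=True).encode()
    signature = hmac.new(VERIFIER_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": signature}

# Relying Party side: issue a fresh nonce, then confirm it echoes back
# inside the signed portion of the Attestation Results.
nonce = secrets.token_bytes(16)
message = sign_results_with_freshness({"instance-identity": 2}, nonce)
assert json.loads(message["payload"])["nonce"] == nonce.hex()
```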
Trust is not absolute. Trust is a belief in some aspect of an entity (in this case an Attester), and a belief that this aspect is something which can be depended upon (in this case by a Relying Party). Within the context of Remote Attestation, believability of this aspect is facilitated by a Verifier. This facilitation depends on the Verifier's ability to parse detailed Evidence from an Attester and then to assert conclusions about this aspect in a way interpretable by a Relying Party.¶
Specific aspects for which a Verifier will assert trustworthiness are defined in this section. These are known as Trustworthiness Claims. These claims have been designed to enable a common understanding between a broad array of Attesters, Verifiers, and Relying Parties. The following set of design principles have been applied in the Trustworthiness Claim definitions:¶
Expose a small number of Trustworthiness Claims.¶
Reason: a plethora of similar Trustworthiness Claims will result in divergent choices made on which to support between different Verifiers. This would place a lot of complexity in the Relying Party as it would be up to the Relying Party (and its policy language) to enable normalization across rich but incompatible Verifier object definitions.¶
Each Trustworthiness Claim enumerates only the specific states that could viably result in a different outcome after the Appraisal Policy for Attestation Results has been applied.¶
Reason: by explicitly disallowing the standardization of enumerated states which cannot easily be connected to a use case, we keep implementers from making incompatible guesses about what these states might mean.¶
Verifier and Relying Party developers need explicit definitions of each state in order to accomplish the goals of (1) and (2).¶
Reason: without such guidance, a Verifier will simply append large amounts of raw supporting information, relieving itself of the hard decisions. Such raw information will be mostly non-interpretable, and therefore non-actionable, by the Relying Party.¶
Support standards-based and non-standards-based extensibility for (1) and (2).¶
Reason: standard types of Verifier generated Trustworthiness Claims should be vetted by the full RATS working group, rather than being maintained in a repository which doesn't follow the RFC process. This will keep a tight lid on extensions which must be considered by the Relying Party's policy language. Because this process takes time, non-standard extensions will be needed for implementation speed and flexibility.¶
These design principles are important to keep the number of Verifier generated claims low, and to retain the complexity in the Verifier rather than the Relying Party.¶
Per design principle (2), each Trustworthiness Claim will only expose specific encoded values. To simplify the processing of these enumerations by the Relying Party, each enumeration will be encoded as a single signed 8-bit integer. Value assignments for this integer fall into four Trustworthiness Tiers which follow these guidelines:¶
None: The Verifier makes no assertions regarding this aspect of trustworthiness.¶
Affirming: The Verifier affirms the Attester support for this aspect of trustworthiness.¶
Warning: The Verifier warns about this aspect of trustworthiness.¶
Contraindicated: The Verifier asserts the Attester is explicitly untrustworthy in regard to this aspect.¶
The enumerated encoding listed above will simplify the Appraisal Policy for Attestation Results. Such a policy may be as simple as saying that a specific Verifier has recently asserted Trustworthiness Claims, all of which are Affirming.¶
In order to simplify design, only a single encoded value is asserted by a Verifier for any Trustworthiness Claim within a Trustworthiness Vector.¶
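As one non-normative illustration, a Relying Party might map an encoded value back to its Trustworthiness Tier before applying policy. The numeric ranges below are assumptions made for this sketch, not definitions from this document.¶

```python
def trustworthiness_tier(value: int) -> str:
    """Map an encoded Trustworthiness Claim value to its Tier.

    The ranges below are illustrative assumptions for this sketch; the
    normative value assignments are made per Claim in the specification.
    """
    if not -128 <= value <= 127:
        raise ValueError("Claims are encoded as a signed 8-bit integer")
    if value in (-1, 0, 1):
        return "none"            # no assertion / unknown / malfunction
    if 2 <= value <= 31:
        return "affirming"
    if 32 <= value <= 95:        # e.g. the 'hw-integrity' value 32 below
        return "warning"
    return "contraindicated"

# A policy as simple as "all recent Claims are Affirming" reduces to:
assert all(trustworthiness_tier(v) == "affirming" for v in (2, 3))
```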
Following are the Trustworthiness Claims and their supported enumerations which may be asserted by a Verifier:¶
A Verifier has appraised an Attester's configuration, and is able to make conclusions regarding the exposure of known vulnerabilities.¶
No assertion¶
Evidence contains unknown elements which inhibit Verifier evaluation.¶
Verifier malfunction¶
The configuration is a known and approved config.¶
The configuration includes or exposes no known vulnerabilities.¶
The configuration includes or exposes known vulnerabilities.¶
Elements of the configuration relevant to security are unavailable to the Verifier.¶
The configuration is unsupportable as it exposes unacceptable security vulnerabilities.¶
Cryptographic validation of the Evidence has failed.¶
A Verifier has appraised and evaluated relevant runtime files, scripts, and/or other objects which have been loaded into the Target environment's memory.¶
No assertion¶
Evidence contains unknown elements which inhibit Verifier evaluation.¶
Verifier malfunction¶
Only a recognized genuine set of approved executables, scripts, files, and/or objects have been loaded during and after the boot process.¶
Only a recognized genuine set of approved executables have been loaded during the boot process.¶
Only a recognized genuine set of executables, scripts, files, and/or objects have been loaded. However the Verifier cannot vouch for a subset of these due to known bugs or other known vulnerabilities.¶
Runtime memory includes executables, scripts, files, and/or objects which are not recognized.¶
Runtime memory includes executables, scripts, files, and/or object which are contraindicated.¶
Cryptographic validation of the Evidence has failed.¶
A Verifier has evaluated a specific set of directories within the Attester's file system. (Note: the Verifier may or may not indicate what these directory and expected files are via an unspecified management interface.)¶
No assertion¶
Evidence contains unknown elements which inhibit Verifier evaluation.¶
Verifier malfunction¶
Only a recognized set of approved files are found.¶
The file system includes unrecognized executables, scripts, or files.¶
The file system includes contraindicated executables, scripts, or files.¶
Cryptographic validation of the Evidence has failed.¶
A Verifier has appraised any Attester hardware and firmware which are able to expose fingerprints of their identity and running code.¶
No assertion¶
Evidence contains unknown elements which inhibit Verifier evaluation.¶
Verifier malfunction¶
An Attester has passed its hardware and/or firmware verifications needed to demonstrate that these are genuine/supported.¶
32: An Attester contains only genuine/supported hardware and/or firmware, but there are known security vulnerabilities.¶
A Verifier has appraised an Attesting Environment's unique identity based upon private key signed Evidence which can be correlated to a unique instantiated instance of the Attester. (Note: this Trustworthiness Claim should only be generated if the Verifier actually expects to recognize the unique identity of the Attester.)¶
No assertion¶
Evidence contains unknown elements which inhibit Verifier evaluation.¶
Verifier malfunction¶
The Attesting Environment is recognized, and the associated instance of the Attester is not known to be compromised.¶
The Attesting Environment is recognized, but its unique private key indicates a device which is not trustworthy.¶
The Attesting Environment is not recognized; however the Verifier believes it should be.¶
Cryptographic validation of the Evidence has failed.¶
A Verifier has appraised the visibility of Attester objects in memory from perspectives outside the Attester.¶
No assertion¶
Evidence contains unknown elements which inhibit Verifier evaluation.¶
Verifier malfunction¶
The Attester's executing Target Environment and Attesting Environments are encrypted and within Trusted Execution Environment(s) opaque to the operating system, virtual machine manager, and peer applications. (Note: This value corresponds to the protections asserted by O.RUNTIME_CONFIDENTIALITY from [GP-TEE-PP])¶
The Attester's executing Target Environment and Attesting Environments are inaccessible from any other parallel application or Guest VM running on the Attester's physical device. (Note that unlike "1" these environments are not encrypted in a way which restricts the Attester's root operator visibility. See O.TA_ISOLATION from [GP-TEE-PP].)¶
The Verifier has concluded that in-memory objects are unacceptably visible within the physical host that supports the Attester.¶
Cryptographic validation of the Evidence has failed.¶
A Verifier has evaluated the integrity of data objects from external systems used by the Attester.¶
No assertion¶
Evidence contains unknown elements which inhibit Verifier evaluation.¶
Verifier malfunction¶
All essential Attester source data objects have been provided by other Attester(s) whose most recent appraisal(s) contained no "Warning" or "Contraindicated" Trustworthiness Claims, and no Trustworthiness Claims of "0" (no assertion) anywhere the current Trustworthiness Claim is "Affirming".¶
Attester source data objects come from unattested sources, or attested sources with "Warning" type Trustworthiness Claims.¶
Attester source data objects come from contraindicated sources.¶
Cryptographic validation of the Evidence has failed.¶
A Verifier has appraised that an Attester is capable of encrypting persistent storage. (Note: Protections must meet the capabilities of [OMTP-ATE] Section 5, but need not be hardware tamper resistant.)¶
No assertion¶
Evidence contains unknown elements which inhibit Verifier evaluation.¶
Verifier malfunction¶
The Attester encrypts all secrets in persistent storage using keys which are never visible outside an HSM or the Trusted Execution Environment hardware.¶
The Attester encrypts all persistently stored secrets, but without using hardware-backed keys.¶
There are persistent secrets which are stored unencrypted in an Attester.¶
Cryptographic validation of the Evidence has failed.¶
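The dependency rule for essential source data objects above can be sketched as a non-normative check. Upstream vectors are modeled here as mappings of Claim name to Tier label; the names and labels are illustrative only.¶

```python
def sources_acceptable(upstream_vectors: list[dict[str, str]],
                       affirming_here: set[str]) -> bool:
    """Sketch of the 'Affirming' rule for sourced data: every source
    Attester's most recent Trustworthiness Vector must carry no Warning
    or Contraindicated Claims, and no 'none' (value "0") Claim anywhere
    the current Claim is Affirming."""
    for vector in upstream_vectors:
        # Any Warning or Contraindicated Claim upstream disqualifies.
        if any(t in ("warning", "contraindicated") for t in vector.values()):
            return False
        # A missing or 'none' Claim upstream disqualifies wherever the
        # corresponding Claim here is Affirming.
        if any(vector.get(c, "none") == "none" for c in affirming_here):
            return False
    return True

# A single upstream Attester with an Affirming configuration Claim
# satisfies a current Affirming configuration Claim.
assert sources_acceptable([{"configuration": "affirming"}], {"configuration"})
```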
It is possible for additional Trustworthiness Claims and enumerated values to be defined in subsequent documents. The standardized Trustworthiness Claim values listed above have been designed so that there is no overlap within a Trustworthiness Tier. Even so, it is possible to imagine a future where overlapping Trustworthiness Claims within a single Trustworthiness Tier may be defined. Wherever possible, the Verifier SHOULD assign the best fitting standardized value.¶
Where a Relying Party doesn't know how to handle a particular Trustworthiness Claim, it MAY choose an appropriate action based on the Trustworthiness Tier under which the enumerated value fits.¶
It is up to the Verifier to publish the types of evaluations it performs when determining how Trustworthiness Claims are derived for any particular type of Attester. It is out of the scope of this document for the Verifier to provide proof or specific logic on how a particular Trustworthiness Claim which it is asserting was derived.¶
Multiple Trustworthiness Claims may be asserted about an Attesting Environment at a single point in time. The set of Trustworthiness Claims inserted into an instance of Attestation Results by a Verifier is known as a Trustworthiness Vector. The order of Claims in the vector is NOT meaningful. A Trustworthiness Vector with no Trustworthiness Claims (i.e., a null Trustworthiness Vector) is a valid construct. In this case, the Verifier is making no Trustworthiness Claims but is confirming that an appraisal has been made.¶
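As a non-normative illustration, a Trustworthiness Vector can be modeled as an unordered mapping; the Claim names and encoded values below are purely illustrative.¶

```python
# A Trustworthiness Vector modeled as an unordered mapping of Claim
# name to encoded value (names and values here are illustrative only).
vector_a = {"configuration": 2, "executables": 3}
vector_b = {"executables": 3, "configuration": 2}
assert vector_a == vector_b  # ordering of Claims is NOT meaningful

# A null Trustworthiness Vector is valid: the Verifier makes no
# Trustworthiness Claims but confirms an appraisal took place.
null_vector: dict[str, int] = {}
```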
Some Trustworthiness Claims are implicit based on the underlying type of Attesting Environment. For example, a validated MRSIGNER identity can be present where the underlying [SGX] hardware is 'hw-authentic'. Where such implicit Trustworthiness Claims exist, they do not have to be explicitly included in the Trustworthiness Vector. However, these implicit Trustworthiness Claims SHOULD be considered as being present by the Relying Party. Another way of saying this is if a Trustworthiness Claim is automatically supported as a result of coming from a specific type of TEE, that claim need not be redundantly articulated. Such implicit Trustworthiness Claims can be seen in the tables within Appendix B.2 and Appendix B.3.¶
Additionally, there are some Trustworthiness Claims which cannot be adequately supported by an Attesting Environment. For example, it would be difficult for an Attester that includes only a TPM (and no other TEE) to ever have a Verifier appraise support for 'runtime-opaque'. As such, a Relying Party would be acting properly if it rejects any non-supportable Trustworthiness Claims asserted by a Verifier.¶
As a result, the need for the ability to carry a specific Trustworthiness Claim will vary by the type of Attesting Environment. Example mappings can be seen in Appendix B.¶
A Relying Party will care about the recentness of the Attestation Results, and the specific Trustworthiness Claims which are embedded. All freshness mechanisms of [RFC9334], Section 10 are supportable by this specification.¶
Additionally, a Relying Party may track when a Verifier expires its confidence for the Trustworthiness Claims or the Trustworthiness Vector as a whole. Mechanisms for such expiry are not defined within this document.¶
There is a subset of secure interactions where the freshness of Trustworthiness Claims may need to be revisited asynchronously. This subset is when trustworthiness depends on the continuous availability of a transport session between the Attester and Relying Party. With such connectivity dependent Attestation Results, if there is a reboot which resets transport connectivity, all established Trustworthiness Claims should be cleared. Subsequent connection re-establishment will allow fresh new Trustworthiness Claims to be delivered.¶
There are multiple ways of providing a Trustworthiness Vector to a Relying Party. This section describes two alternatives.¶
It is possible for a Relying Party to follow the Background-Check Model defined in Section 5.2 of [RFC9334]. In this case, a Relying Party will receive Attestation Results containing the Trustworthiness Vector directly from a Verifier. These Attestation Results can then be used by the Relying Party in determining the appropriate treatment for interactions with the Attester.¶
While applicable in some cases, the utilization of the Background-Check Model without modification has potential drawbacks in other cases. These include:¶
An implementer should examine these potential drawbacks before selecting this alternative.¶
A simplified Background-Check Model may exist in a very specific case. This is where the Relying Party and Verifier functions are co-resident. This model is appropriate when:¶
Effectively this means that detailed forensic capabilities of a robust Verifier are unnecessary because it is accepted that the code and operational behavior of the Attester cannot be manipulated after TEE initialization.¶
An example of such a scenario may be when an SGX's MRENCLAVE and MRSIGNER values have been associated with a known QUOTE value. And the code running within the TEE is not modifiable after launch.¶
Zero Trust Architectures are referenced in [US-Executive-Order] eleven times. However, despite this high profile, there is an architectural gap with Zero Trust: the credentials used for authentication and admission control can be manipulated on the endpoint. Attestation can fill this gap through the generation of a compound credential called AR-augmented Evidence. This compound credential is rooted in the hardware-based Attesting Environment of an endpoint, plus the trustworthiness of a Verifier. The overall solution is known as "Below Zero Trust" as the compound credential cannot be manipulated or spoofed by an administrator of an endpoint with root access. This solution is not adversely impacted by the potential drawbacks with the pure background-check model described above.¶
To kick off the "Below Zero Trust" compound credential creation sequence, a Verifier evaluates an Attester and returns signed Attestation Results back to this original Attester no less frequently than a well-known interval. This interval may also be asynchronous, based on the changing of certain Evidence as described in [I-D.ietf-rats-network-device-subscription].¶
When a Relying Party is to receive information about the Attester's trustworthiness, the Attesting Environment assembles the minimal set of Evidence which can be used to confirm or refute whether the Attester remains in the state of trustworthiness represented by the AR. To this Evidence, the Attesting Environment appends the signature from the most recent AR as well as a Relying Party Proof-of-Freshness. The Attesting Environment then signs the combination.¶
The Attester then assembles AR Augmented Evidence by taking the signed combination and appending the full AR. The assembly now consists of two independent but semantically bound sets of signed Evidence.¶
The AR-augmented Evidence is then sent to the Relying Party. The Relying Party can then appraise these semantically bound sets of signed Evidence by applying an Appraisal Policy for Attestation Results as described below. This policy will consider both the AR as well as additional information about the Attester within the AR-augmented Evidence when determining what action to take.¶
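The assembly steps above can be sketched as follows. This is a non-normative illustration: field names are hypothetical, and HMAC stands in for the Attesting Environment's asymmetric signature.¶

```python
import hashlib
import hmac
import json

def env_sign(key: bytes, payload: bytes) -> bytes:
    # HMAC stands in for the Attesting Environment's asymmetric signature.
    return hmac.new(key, payload, hashlib.sha256).digest()

def build_ar_augmented_evidence(env_key: bytes, fresh_evidence: dict,
                                ar: dict, ar_signature: bytes,
                                rp_pof: bytes) -> dict:
    # Step 1: combine the minimal fresh Evidence with the most recent AR
    # signature and the Relying Party Proof-of-Freshness ...
    combination = json.dumps({
        "evidence": fresh_evidence,
        "ar-signature": ar_signature.hex(),
        "rp-proof-of-freshness": rp_pof.hex(),
    }, sort_keys=True).encode()
    # Step 2: ... and sign that combination in the Attesting Environment.
    signed = {"combination": combination,
              "env-signature": env_sign(env_key, combination).hex()}
    # Step 3: append the full AR, yielding two independent but
    # semantically bound sets of signed material.
    signed["attestation-results"] = ar
    return signed
```

Because the AR signature sits inside the environment-signed combination, the fresh Evidence and the earlier Attestation Results are bound together without re-signing the AR itself.¶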
This alternative combines the Passport Model (Section 5.1 of [RFC9334]) with the Background-Check Model (Section 5.2 of [RFC9334]). Figure 1 describes this flow of information. The flows within this combined model are mapped to [RFC9334] in the following way. "Verifier A" below corresponds to the "Verifier" in Figure 5 of [RFC9334]. And "Relying Party/Verifier B" below corresponds to the union of the "Relying Party" and "Verifier" boxes within Figure 6 of [RFC9334]. This union is possible because Verifier B can be implemented as a simple, self-contained process. The resulting combined process can appraise the AR-augmented Evidence to determine whether an Attester qualifies for secure interactions with the Relying Party. The specific steps of this process are defined later in this section.¶
The interaction model depicted above includes specific time-related events from Appendix A of [RFC9334]. With these time-related events identified, tracking of time durations and intervals becomes possible. Such tracking becomes important when the Relying Party cares whether too much time has elapsed between the Verifier PoF and the Relying Party PoF. If too much time has elapsed, the Attestation Results themselves may no longer be trustworthy.¶
Note that while time intervals will often be relevant, there is a simplified case that does not require a Relying Party PoF in step (3). In this simplified case, the Relying Party trusts that the Attester cannot be meaningfully changed from the outside during any reportable interval. When that assumption holds, the Relying Party PoF step can be safely omitted.¶
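As a minimal, non-normative sketch, the elapsed-time check described above might look like the following; the timestamps and the maximum tolerated interval are assumptions supplied by the Relying Party's policy:

```python
def ar_still_fresh(verifier_pof_time: float,
                   rp_pof_time: float,
                   max_interval_seconds: float) -> bool:
    """Return True when the time elapsed between the Verifier's
    Proof-of-Freshness and the Relying Party's Proof-of-Freshness
    falls within the Relying Party's tolerance.  A negative elapsed
    time (RP PoF predating the Verifier PoF) is also rejected."""
    elapsed = rp_pof_time - verifier_pof_time
    return 0 <= elapsed <= max_interval_seconds
```

If this check fails, the Relying Party should treat the Attestation Results as no longer trustworthy and decline the interaction until fresh results arrive.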
In all cases, appraisal policies define the conditions and prerequisites under which an Attester qualifies for secure interactions. To qualify, an Attester must be able to provide all of the mandatory affirming Trustworthiness Claims and identities needed by a Relying Party's Appraisal Policy for Attestation Results, and none of the disqualifying detracting Trustworthiness Claims.¶
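The qualification test above can be sketched non-normatively as follows. The predicates classifying an integer value as affirming or detracting are placeholders for the policy-defined tier boundaries, which this sketch does not assume:

```python
def attester_qualifies(vector: dict,
                       required_claims: set,
                       is_affirming,
                       is_detracting) -> bool:
    """vector maps Trustworthiness Claim names to appraised integer
    values.  The classification predicates are supplied by the
    Relying Party's Appraisal Policy for Attestation Results."""
    # Every mandatory claim must be present with an affirming value...
    for claim in required_claims:
        if claim not in vector or not is_affirming(vector[claim]):
            return False
    # ...and no claim in the vector may carry a detracting value.
    return not any(is_detracting(value) for value in vector.values())
```

Parameterizing the tier boundaries keeps the qualification logic stable even if a deployment tunes which value ranges count as affirming or detracting.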
More details on each interaction step of Below Zero Trust are as follows. The numbers used in this sequence match the numbered steps in Figure 1:¶
Verifier A appraises (1), then sends the following items back to that Attester within Attestation Results:¶
The Attester generates and sends AR-augmented Evidence to the Relying Party/Verifier B. This AR-augmented Evidence includes:¶
On receipt of (4), the Relying Party applies its Appraisal Policy for Attestation Results. At minimum, this appraisal policy process must include the following:¶
Assemble the Verifier B Trustworthiness Vector¶
The Relying Party takes action based on Verifier B's appraised Trustworthiness Vector, and applies the Appraisal Policy for Attestation Results. Following is a reasonable process for such evaluation:¶
As link layer protocols re-authenticate, steps (1) to (2) and steps (3) to (6) will refresh independently. This allows the trustworthiness of the Attester to be continuously re-appraised. Only specific event triggers will drive the refresh of Evidence generation (1), Attestation Result generation (2), or AR-augmented Evidence generation (4):¶
In the interaction models described above, each device on either side of a secure interaction may require remote attestation of its peer. This process is known as mutual attestation. To support mutual attestation, the interaction models listed above may be run independently on each side of the connection.¶
Either unidirectional attestation or mutual attestation may be supported within the protocol interactions needed for the establishment of a single transport session. While this document does not mandate specific transport protocols, messages containing the Attestation Results and AR Augmented Evidence can be passed within an authentication framework such as EAP [RFC5247] over TLS [RFC8446].¶
Privacy Considerations Text¶
Security Considerations Text¶
See Body.¶
Encoded into each Trustworthiness Claim is the domain of integer values likely to drive a different programmatic decision in the Relying Party's Appraisal Policy for Attestation Results. This will not be the only information a Relying Party's operations team might care to track for measurement or debugging purposes.¶
There is also the opportunity for the Verifier to include supplementary Evidence beyond a set of asserted Trustworthiness Claims. When the Verifier provides such supplementary Evidence within the Attestation Results, it is recommended that the supplementary Evidence include a reference to a specific Trustworthiness Claim. This allows a deeper understanding of the reasoning behind the integer value assigned.¶
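A minimal, non-normative sketch of the recommendation above follows; the "supplementary-evidence" field name and the dictionary layout are illustrative, not defined by this document:

```python
def attach_supplementary_evidence(attestation_results: dict,
                                  claim: str,
                                  evidence_ref: str) -> dict:
    """Record supplementary Evidence alongside the Trustworthiness
    Claim it explains, so a Relying Party can trace the reasoning
    behind the integer value assigned to that claim."""
    entry = {"claim": claim, "evidence": evidence_ref}
    attestation_results.setdefault("supplementary-evidence", []).append(entry)
    return attestation_results
```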
The following table shows which Claims are supportable by different Attesting Environment types. Note that Claims MAY be implicit to an Attesting Environment type, and therefore do not have to be included in the Trustworthiness Vector to be considered as set by the Relying Party.¶
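As a non-normative sketch of how a Relying Party might treat implicit Claims, the following merges an explicit Trustworthiness Vector with the Claims implied by the Attesting Environment type. The implicit-claim set shown is illustrative, mirroring the "Implicit in signature" rows of the process-based Confidential Computing table in this section:

```python
# Illustrative map from Attesting Environment type to the Claims that
# are implicit in a valid signature from that environment.
IMPLICIT_CLAIMS = {
    "process-based-cc": {"hardware", "runtime-opaque", "storage-opaque"},
}

def effective_claims(env_type: str, explicit_vector: dict) -> set:
    """Claims the Relying Party treats as set: those carried in the
    Trustworthiness Vector plus those implicit to the Attesting
    Environment type."""
    return set(explicit_vector) | IMPLICIT_CLAIMS.get(env_type, set())
```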
Following are Trustworthiness Claims which MAY be set for an HSM-based Attester (such as a TPM [TPM-ID]).¶
Trustworthiness Claim | Required? | Appraisal Method |
---|---|---|
configuration | Optional | Verifier evaluation of Attester reveals no configuration lines which expose the Attester to known security vulnerabilities. This may be done with or without the involvement of a TPM PCR. |
executables | Yes | Checks the TPM PCRs for the static operating system, and for any tracked files subsequently loaded |
file-system | No | Can be supported, but TPM tracking is unlikely |
hardware | Yes | TPM PCR checks succeed from the BIOS, through the Master Boot Record configuration |
instance-identity | Optional | Check IDevID |
runtime-opaque | n/a | TPMs do not provide a sufficient technology base for this Trustworthiness Claim. |
sourced-data | n/a | TPMs do not provide a sufficient technology base for this Trustworthiness Claim. |
storage-opaque | Minimal | With a TPM, secure storage space exists and is writeable by external applications. But the space is so limited that it is often used just to store keys. |
At Verifier A, setting the Trustworthiness Claims within step (2) of Figure 1 may follow this logic:¶
Start: Evidence received starts the generation of a new
       Trustworthiness Vector.  (e.g., TPM Quote received, log
       received, or appraisal timer expired)
Step 0: Set Trustworthiness Vector = Null
Step 1: Is there sufficient fresh signed Evidence to appraise?
        (yes) - No Action
        (no)  - Goto Step 6
Step 2: Appraise Hardware Integrity PCRs
        if (hardware NOT "0") - push onto vector
        if (hardware NOT affirming or warning), goto Step 6
Step 3: Appraise Attesting Environment identity
        if (instance-identity <> "0") - push onto vector
Step 4: Appraise executable loading and file system integrity
        if (executables NOT "0") - push onto vector
        if (executables NOT affirming or warning), goto Step 6
Step 5: Appraise all remaining Trustworthiness Claims
        independently, and set as appropriate.
Step 6: Assemble Attestation Results, and push to Attester
End¶
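The Verifier A logic above can be sketched non-normatively in code. Classifying an integer value as affirming or warning is delegated to a policy-supplied predicate (here called "acceptable"), since the tier boundaries are a policy matter; a value of 0 means the claim was not set:

```python
def generate_trustworthiness_vector(claims: dict,
                                    sufficient_fresh_evidence: bool,
                                    acceptable) -> dict:
    """claims maps Trustworthiness Claim names to appraised integer
    values (0 meaning no value set).  'acceptable' reports whether a
    value is affirming or warning."""
    vector = {}                                   # Step 0
    if not sufficient_fresh_evidence:             # Step 1
        return vector                             # goto Step 6
    # Steps 2-4: the gating claims, appraised in order.
    for gate in ("hardware", "instance-identity", "executables"):
        value = claims.get(gate, 0)
        if value != 0:
            vector[gate] = value                  # push onto vector
        # hardware and executables gate further appraisal: if either
        # is neither affirming nor warning, stop (goto Step 6).
        if gate != "instance-identity" and not acceptable(value):
            return vector
    # Step 5: appraise all remaining claims independently.
    for name, value in claims.items():
        if name not in vector and value != 0:
            vector[name] = value
    return vector                                 # Step 6: assemble AR
```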
Following are Trustworthiness Claims which MAY be set for a process-based Confidential Computing Attester (such as SGX Enclaves or TrustZone).¶
Trustworthiness Claim | Required? | Appraisal Method |
---|---|---|
instance-identity | Optional | Internally available in TEE. But keys might not be known/exposed to the Relying Party by the Attesting Environment. |
configuration | Optional | If done, this is at the application layer. Additionally, each process needs its own protection mechanism, as the protection is limited to the process itself. |
executables | Optional | Internally available in TEE. But keys might not be known/exposed to the Relying Party by the Attesting Environment. |
file-system | Optional | Can be supported by application, but process-based CC is not a sufficient technology base for this Trustworthiness Claim. |
hardware | Implicit in signature | At least the TEE is protected here. Other elements of the system outside of the TEE might need additional protections if used by the application process. |
runtime-opaque | Implicit in signature | From the TEE |
storage-opaque | Implicit in signature | Although the application must assert that this function is used by the code itself. |
sourced-data | Optional | Will need to be supported by application code |
Following are Trustworthiness Claims which MAY be set for a VM-based Confidential Computing Attester (such as SEV, TDX, ACCA, or SEV-SNP).¶
Trustworthiness Claim | Required? | Appraisal Method |
---|---|---|
instance-identity | Optional | Internally available in TEE. But keys might not be known/exposed to the Relying Party by the Attesting Environment. |
configuration | Optional | Requires application integration. Easier than with process-based solution, as the whole protected machine can be evaluated. |
executables | Optional | Internally available in TEE. But keys might not be known/exposed to the Relying Party by the Attesting Environment. |
file-system | Optional | Can be supported by application |
hardware | Chip dependent | At least the TEE is protected here. Other elements of the system outside of the TEE might need additional protections if used by the application process. |
runtime-opaque | Implicit in signature | From the TEE |
storage-opaque | Chip dependent | Although the application must assert that this function is used by the code itself. |
sourced-data | Optional | Will need to be supported by application code |
It is possible for a cluster/hierarchy of Verifiers to produce aggregate ARs which are perhaps signed/endorsed by a lead Verifier. What should be the Proof-of-Freshness or the Verifier associated with any member of the aggregate set of Trustworthiness Claims?¶
A subsequent document will be needed to describe how these objects are translated into a protocol on the wire (e.g., EAP over TLS). Some breakpoint between what is in this draft and what is in specific drafts for wire encoding will need to be determined. Questions like architecting the cluster/hierarchy of Verifiers fall into this breakdown.¶
For some Trustworthiness Claims, there could be value in identifying the specific Appraisal Policy for Attestation Results applied. One way this could be done is with a URI which identifies the policy used at Verifier A, with this URI referencing a specific Trustworthiness Claim. As the URI could also encode the version of the software, it might also act as a mechanism to signal the Relying Party to refresh/re-evaluate its view of Verifier A. Do we need this type of structure to be included here? Should it be in subsequent documents?¶
Expand the variant of Figure 1 which requires no Relying Party PoF into its own picture.¶
In what document (if any) do we attempt normalization of the identity claims between different types of TEE. E.g., does MRSIGNER plus extra loaded software = the sum of TrustZone Signer IDs for loaded components?¶
Guy Fedorkow¶
Email: gfedorkow@juniper.net¶
Dave Thaler¶
Email: dthaler@microsoft.com¶
Ned Smith¶
Email: ned.smith@intel.com¶
Lawrence Lundblade¶
Email: lgl@island-resort.com¶