Compare commits

..

202 commits

Author SHA1 Message Date
Harald Hoyer
8773078d5a
Merge pull request #328 from matter-labs/renovate/cachix-install-nix-action-31.x
chore(deps): update cachix/install-nix-action action to v31
2025-07-09 10:44:28 +02:00
renovate[bot]
ce9560cff0
chore(deps): update cachix/install-nix-action action to v31 2025-06-27 15:14:32 +00:00
Harald Hoyer
093d6c44ed
Merge pull request #339 from matter-labs/crates_io_readme
docs: add README files for teepot-related crates
2025-06-27 17:12:38 +02:00
Harald Hoyer
ddbf099e45
docs: add README files for teepot-related crates
- Added comprehensive README files for the following new crates:
  - `teepot`
  - `teepot-tdx-attest-rs`
  - `teepot-tdx-attest-sys`
  - `teepot-tee-quote-verification-rs`
  - `teepot-vault`
- Each includes an overview, usage examples, installation instructions, and licensing details.
2025-06-25 13:55:00 +02:00
Harald Hoyer
18ed1aa769
Merge pull request #338 from matter-labs/bv0.6.0-ver
chore(deps): update `teepot` crates to version 0.6.0
2025-06-25 11:52:31 +02:00
Harald Hoyer
49bb4a3bef
chore(deps): update teepot crates to version 0.6.0
- Set `teepot`, `teepot-tee-quote-verification-rs`, and `teepot-vault` crate versions to 0.6.0 in `Cargo.toml`.
- Ensures consistency with the planned 0.6.0 release preparation.
2025-06-25 09:31:29 +02:00
Harald Hoyer
c34ff7ad27
Merge pull request #336 from matter-labs/bv0.6.0
chore(deps): prepare release 0.6.0
2025-06-24 17:25:09 +02:00
Harald Hoyer
b4e0014e4e
chore(deps): prepare release 0.6.0
- vendor unpublished tdx-attest-rs and tdx-attest-sys crates
  to be able to publish to crates.io
- Updated package versions in `Cargo.toml` and `Cargo.lock` to 0.6.0.

Signed-off-by: Harald Hoyer <harald@matterlabs.dev>
2025-06-24 16:39:00 +02:00
Harald Hoyer
8d965aa388
Merge pull request #333 from matter-labs/feature/update-dependencies
chore(deps): Update all dependencies to latest
2025-06-23 12:24:19 +02:00
Lucille L. Blumire
412e3b1698
chore(deps): Update all dependencies to latest 2025-06-23 10:09:50 +01:00
Harald Hoyer
f7c3717241
Merge pull request #327 from matter-labs/renovate/docker-login-action-3.x
chore(deps): update docker/login-action action to v3.4.0
2025-06-04 17:20:04 +02:00
renovate[bot]
c42d692863
chore(deps): update docker/login-action action to v3.4.0 2025-06-04 15:07:28 +00:00
Harald Hoyer
626dbbf846
Merge pull request #325 from matter-labs/cargo_update
chore(deps): update crates and nix flakes
2025-06-04 17:05:10 +02:00
Harald Hoyer
8c7922ae39
Merge branch 'main' into cargo_update 2025-06-04 16:02:47 +02:00
Harald Hoyer
716c782e6f
chore(deps): update crates and nix flakes
- Updated multiple Rust dependencies, including `opentelemetry`, `const-oid`, and `webpki-roots` for enhanced features and bug fixes.
- Upgraded `nixpkgs` and `crane` in the nix flake configuration.
- Removed unused dependencies and introduced missing dependencies for improved build integrity.

Signed-off-by: Harald Hoyer <harald@matterlabs.dev>
2025-05-30 17:54:30 +02:00
Harald Hoyer
e78fb22f88
Merge pull request #311 from matter-labs/renovate/enarx-spdx-digest
chore(deps): update enarx/spdx digest to d4020ee
2025-05-30 09:13:27 +02:00
renovate[bot]
a7e2939a54
chore(deps): update enarx/spdx digest to d4020ee 2025-05-30 06:43:14 +00:00
Harald Hoyer
37e7f7f8e2
Merge pull request #323 from matter-labs/intel-dcap-api-impr
feat(intel-dcap-api): add automatic retry logic for 429 rate limiting
2025-05-30 08:41:24 +02:00
Harald Hoyer
7c133c4e4b
ci(nix): disable sandbox in nix-non-x86 workflow
otherwise the mockito tests fail, because they cannot bind to 127.0.0.1:0

- Updated `nix build` command to include `--no-sandbox` flag.
2025-05-28 13:31:15 +02:00
Harald Hoyer
bb9c5b195e
feat(intel-dcap-api): add automatic retry logic for 429 rate limiting
- Add `max_retries` field to ApiClient with default of 3 retries
- Implement `execute_with_retry()` helper method in helpers.rs
- Update all HTTP requests to use retry wrapper for automatic 429 handling
- Add `TooManyRequests` error variant with request_id and retry_after fields
- Respect Retry-After header duration before retrying requests
- Add `set_max_retries()` method to configure retry behavior (0 disables)
- Update documentation and add handle_rate_limit example
- Enhanced error handling in check_status() for 429 responses

The client now transparently handles Intel API rate limiting while remaining
configurable for users who need different retry behavior or manual handling.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Harald Hoyer <harald@matterlabs.dev>
2025-05-28 11:52:32 +02:00
Harald Hoyer
205113ecfa
feat(intel-dcap-api): add comprehensive testing infrastructure and examples
- Add mock tests using real Intel API response data (25 tests)
- Create fetch_test_data tool to retrieve real API responses for testing
- Add integration_test example covering 17 API endpoints
- Add common_usage example demonstrating attestation verification patterns
- Add issuer chain validation checks to ensure signature verification is possible
- Add comprehensive documentation in CLAUDE.md

The test suite now covers all major Intel DCAP API functionality including
TCB info, enclave identities, PCK CRLs, FMSPCs, and evaluation data numbers
for both SGX and TDX platforms across API v3 and v4.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Harald Hoyer <harald@matterlabs.dev>
2025-05-28 11:52:31 +02:00
renovate[bot]
aeff962224
chore(deps): update rust crate enumset to v1.1.6 (#313) 2025-05-23 14:33:54 +01:00
renovate[bot]
c8692df37a
fix(deps): update rust crate chrono to v0.4.41 (#320) 2025-05-23 14:01:22 +01:00
renovate[bot]
7c655d151c
chore(deps): update rust crate reqwest to v0.12.15 (#315) 2025-05-23 12:13:46 +00:00
renovate[bot]
426e22138e
chore(deps): update rust crate sha2 to v0.10.9 (#318) 2025-05-23 11:40:12 +00:00
renovate[bot]
b16592ec34
chore(deps): update rust crate async-trait to v0.1.88 (#286) 2025-05-23 11:06:13 +00:00
renovate[bot]
119c2abe09
chore(deps): update rust crate bytes to v1.10.1 (#312) 2025-05-23 11:32:13 +01:00
renovate[bot]
5789fdd433
chore(deps): update rust crate getrandom to v0.3.3 (#314) 2025-05-22 16:49:26 +00:00
renovate[bot]
de010fd093
chore(deps): update rust crate thiserror to v2.0.12 (#287) 2025-05-22 16:14:06 +00:00
renovate[bot]
e039adf158
chore(deps): update rust crate tracing-actix-web to v0.7.18 (#280) 2025-05-22 15:40:23 +00:00
renovate[bot]
f2718456ef
chore(deps): update rust crate serde_json to v1.0.140 (#274) 2025-05-22 15:05:49 +00:00
renovate[bot]
bef406c456
chore(deps): update rust crate anyhow to v1.0.98 (#273) 2025-05-22 14:28:51 +00:00
renovate[bot]
bfd895e8f7
chore(deps): update rust crate clap to v4.5.38 (#266)
This PR contains the following updates:

| Package | Type | Update | Change |
|---|---|---|---|
| [clap](https://redirect.github.com/clap-rs/clap) | workspace.dependencies | patch | `4.5.30` -> `4.5.38` |

---

### Release Notes

<details>
<summary>clap-rs/clap (clap)</summary>

### [`v4.5.38`](https://redirect.github.com/clap-rs/clap/blob/HEAD/CHANGELOG.md#4538---2025-05-11)

[Compare Source](https://redirect.github.com/clap-rs/clap/compare/v4.5.37...v4.5.38)

##### Fixes

- *(help)* When showing aliases, include leading `--` or `-`

### [`v4.5.37`](https://redirect.github.com/clap-rs/clap/blob/HEAD/CHANGELOG.md#4537---2025-04-18)

[Compare Source](https://redirect.github.com/clap-rs/clap/compare/v4.5.36...v4.5.37)

##### Features

- Added `ArgMatches::try_clear_id()`

### [`v4.5.36`](https://redirect.github.com/clap-rs/clap/blob/HEAD/CHANGELOG.md#4536---2025-04-11)

[Compare Source](https://redirect.github.com/clap-rs/clap/compare/v4.5.35...v4.5.36)

##### Fixes

- *(help)* Revert 4.5.35's "Don't leave space for shorts if there are none" for now

### [`v4.5.35`](https://redirect.github.com/clap-rs/clap/blob/HEAD/CHANGELOG.md#4535---2025-04-01)

[Compare Source](https://redirect.github.com/clap-rs/clap/compare/v4.5.34...v4.5.35)

##### Fixes

- *(help)* Align positionals and flags when put in the same `help_heading`
- *(help)* Don't leave space for shorts if there are none

### [`v4.5.34`](https://redirect.github.com/clap-rs/clap/blob/HEAD/CHANGELOG.md#4534---2025-03-27)

[Compare Source](https://redirect.github.com/clap-rs/clap/compare/v4.5.33...v4.5.34)

##### Fixes

- *(help)* Don't add extra blank lines with `flatten_help(true)` and subcommands without arguments

### [`v4.5.33`](https://redirect.github.com/clap-rs/clap/blob/HEAD/CHANGELOG.md#4533---2025-03-26)

[Compare Source](https://redirect.github.com/clap-rs/clap/compare/v4.5.32...v4.5.33)

##### Fixes

- *(error)* When showing the usage of a suggestion for an unknown argument, don't show the group

### [`v4.5.32`](https://redirect.github.com/clap-rs/clap/blob/HEAD/CHANGELOG.md#4532---2025-03-10)

[Compare Source](https://redirect.github.com/clap-rs/clap/compare/v4.5.31...v4.5.32)

##### Features

- Add `Error::remove`

##### Documentation

- *(cookbook)* Switch from `humantime` to `jiff`
- *(tutorial)* Better cover required vs optional

##### Internal

- Update `pulldown-cmark`

### [`v4.5.31`](https://redirect.github.com/clap-rs/clap/blob/HEAD/CHANGELOG.md#4531---2025-02-24)

[Compare Source](https://redirect.github.com/clap-rs/clap/compare/v4.5.30...v4.5.31)

##### Features

- Add `ValueParserFactory` for `Saturating<T>`

</details>

---

### Configuration

📅 **Schedule**: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).

🚦 **Automerge**: Disabled by config. Please merge this manually once you are satisfied.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about this update again.

---

- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check this box

---

This PR was generated by [Mend Renovate](https://mend.io/renovate/). View the [repository job log](https://developer.mend.io/github/matter-labs/teepot).

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-05-22 14:52:46 +01:00
Harald Hoyer
8b01d8d5b0
Merge pull request #267 from matter-labs/renovate/trufflesecurity-trufflehog-3.x
chore(deps): update trufflesecurity/trufflehog action to v3.88.30
2025-05-22 09:08:54 +02:00
renovate[bot]
ad26c5e9ae
chore(deps): update trufflesecurity/trufflehog action to v3.88.30 2025-05-16 21:21:53 +00:00
Harald Hoyer
336576d812
Merge pull request #310 from matter-labs/add-dcap-collateral-updater
feat(teepot): add `Quote::tee_type` method for TEE type determination
2025-05-06 13:46:58 +02:00
Harald Hoyer
6379e9aa9e
feat(teepot): add Quote::tee_type method for TEE type determination
- Introduced `tee_type` method to extract TEE type from the quote header.

Signed-off-by: Harald Hoyer <harald@matterlabs.dev>
2025-05-06 13:18:17 +02:00
Harald Hoyer
1536e00d63
Merge pull request #309 from matter-labs/platform
feat: add platform-specific implementations for quote verification
2025-05-06 13:08:45 +02:00
Harald Hoyer
2a8614c08f
feat: add platform-specific implementations for quote verification
- Introduced conditional compilation for Intel SGX/TDX quote verification based on target OS and architecture.
- Moved Intel-specific logic to a separate module and added a fallback for unsupported platforms.

This is done so we can pull in the `teepot` crate even on `linux-x86_64`
without the Intel SGX SDK lib dependency.

Signed-off-by: Harald Hoyer <harald@matterlabs.dev>
2025-05-06 12:36:01 +02:00
Harald Hoyer
905487dac8
Merge pull request #307 from matter-labs/fmspc
feat(quote): add FMSPC and CPUSVN extraction support
2025-05-06 12:31:15 +02:00
Harald Hoyer
2bbfb2415c
feat(quote): add FMSPC and CPUSVN extraction support
- Introduced new types `Fmspc`, `CpuSvn`, and `Svn` for SGX metadata.
- Added methods to extract raw certificate chains and FMSPC from SGX quotes.
- Created new test file for validating FMSPC extraction with example quotes.

Signed-off-by: Harald Hoyer <harald@matterlabs.dev>
2025-05-06 11:43:51 +02:00
Harald Hoyer
fca60adc1a
Merge pull request #306 from matter-labs/rm_dupl
refactor: replace custom Quote parsing with library version
2025-05-06 11:11:08 +02:00
Harald Hoyer
2118466a8a
refactor: replace custom Quote parsing with library version
- Removed custom `Quote` structure and parsing logic in `teepot/src/sgx/mod.rs`.
- Updated references to use the library-provided `Quote` methods, such as `Quote::parse` and `get_report_data`.
- Simplified code and reduced redundancy by leveraging existing library functionality.
2025-05-05 14:54:41 +02:00
Lucille Blumire
9bd0e9c36e
Merge pull request #305 from matter-labs/small-quality
refactor: many small code quality improvements
2025-04-17 17:43:22 +01:00
Lucille L. Blumire
d54f7b14ad
refactor: remove redundant continue 2025-04-17 16:53:01 +01:00
Lucille L. Blumire
2ca0b47169
refactor: improve punctuation readability 2025-04-17 16:52:59 +01:00
Lucille L. Blumire
6a9e035d19
refactor: combine equivalent match branches 2025-04-17 16:52:59 +01:00
Lucille L. Blumire
36afc85d38
refactor: prefer if let to single variant match 2025-04-17 16:52:57 +01:00
Lucille L. Blumire
2ff169da9f
refactor: improve type ergonomics 2025-04-17 16:52:56 +01:00
Lucille L. Blumire
0768b0ad67
refactor: prefer conversion methods to infallible casts 2025-04-17 16:52:54 +01:00
Lucille L. Blumire
2dea589c0e
refactor: prefer inline format args 2025-04-17 16:52:53 +01:00
Lucille L. Blumire
71a04ad4e2
refactor: bring items to top level of files 2025-04-17 16:52:49 +01:00
Harald Hoyer
b8398ad15f
Merge pull request #303 from matter-labs/ld_library_path
refactor(shells): simplify environment variable declarations
2025-04-14 18:02:07 +02:00
Harald Hoyer
8903c1dc62
Merge branch 'main' into ld_library_path 2025-04-14 17:52:44 +02:00
Harald Hoyer
2d9a7bd384
Merge pull request #304 from matter-labs/intel-dcap-api-descriptionj
feat: add description to intel-dcap-api package
2025-04-14 17:52:21 +02:00
Harald Hoyer
d03ed96bb8
feat: add description to intel-dcap-api package
- Added a description field to the Cargo.toml for the intel-dcap-api crate.
2025-04-14 17:26:21 +02:00
Harald Hoyer
7b1c386e14
refactor(shells): simplify environment variable declarations
Refactored the environment variable setup by consolidating into a single `env` map for better clarity.
- Removed `TEE_LD_LIBRARY_PATH` and inlined its logic directly within `LD_LIBRARY_PATH`.
- Improved structure and readability of configuration-specific variables like `QCNL_CONF_PATH`.

This lets us run directly on x86_64:
```
❯ cargo run --bin verify-era-proof-attestation -- \
            --rpc https://mainnet.era.zksync.io \
            --continuous 493220 \
            --attestation-policy-file bin/verify-era-proof-attestation/examples/attestation_policy.yaml \
            --tee-types sgx \
            --log-level info
```
2025-04-14 17:07:35 +02:00
Harald Hoyer
9b9acfc0c6
Merge pull request #302 from matter-labs/intel-dcap-api
feat(api): add Intel DCAP API client module
2025-04-11 20:12:31 +02:00
Harald Hoyer
1a392e800a
fixup! refactor(intel-dcap-api): split client.rs into smaller files
Signed-off-by: Harald Hoyer <harald@matterlabs.dev>
2025-04-11 12:34:09 +02:00
Harald Hoyer
4501b3421c
fixup! refactor(intel-dcap-api): split client.rs into smaller files
Signed-off-by: Harald Hoyer <harald@matterlabs.dev>
2025-04-11 12:23:53 +02:00
Harald Hoyer
0e69105a43
refactor(intel-dcap-api): split client.rs into smaller files
Signed-off-by: Harald Hoyer <harald@matterlabs.dev>
2025-04-11 11:06:13 +02:00
Harald Hoyer
ed84a424db
feat(api): add Intel DCAP API client module
Introduced a new `intel-dcap-api` crate for interacting with Intel's DCAP APIs.
- Implemented various API client functionalities for SGX/TDX attestation services.
- Added support for registration, certification, enclave identity, and FMSPC retrieval.

Signed-off-by: Harald Hoyer <harald@matterlabs.dev>
2025-04-10 14:51:51 +02:00
Harald Hoyer
93c35dad38
Merge pull request #300 from matter-labs/darwin
feat: compat code for non `x86_64-linux`
2025-04-10 13:25:33 +02:00
Harald Hoyer
0b8f1d54c7
feat: bump rust version to 1.86
fixes the hardcoded `/usr/bin/strip` issue on macos

see https://github.com/rust-lang/rust/issues/131206

Signed-off-by: Harald Hoyer <harald@matterlabs.dev>
2025-04-10 11:57:47 +02:00
Harald Hoyer
eb39705ff1
feat: compat code for non x86_64-linux
- do not build packages, which require `x86_64-linux`
- use Phala `dcap-qvl` crate for remote attestation, if possible
- nix: exclude `nixsgx` on non `x86_64-linux` platforms

Signed-off-by: Harald Hoyer <harald@matterlabs.dev>
2025-04-10 11:57:46 +02:00
Harald Hoyer
ed808efd03
Merge pull request #296 from matter-labs/verify-era-proof-attestation-tdx
refactor(verify-era-proof-attestation): modularize and restructure proof verification logic
2025-04-07 10:13:52 +02:00
Harald Hoyer
95b6a2d70a
refactor(verify-era-proof-attestation): replace watch channel with CancellationToken
Refactored stop signal handling in all components to use `tokio_util::sync::CancellationToken` instead of `tokio::sync::watch`.

- Improved cancellation logic by leveraging `CancellationToken` for cleaner and more efficient handling.
- Updated corresponding dependency to `tokio-util` version `0.7.14`.
2025-04-07 08:54:00 +02:00
Harald Hoyer
2605e2ae3a
refactor(verify-era-proof-attestation): modularize and restructure proof verification logic
- Split `verify-era-proof-attestation` into modular subcomponents for maintainability.
- Moved client, proof handling, and core types into dedicated modules.
2025-04-04 17:05:30 +02:00
Harald Hoyer
1e853f653a
refactor(quote): move TCB level logic to a dedicated module
- Extracted `TcbLevel` functionality from `sgx` module to `quote::tcblevel`.
- Updated all references to import `TcbLevel` and related utilities from `quote::tcblevel`.
- Updated copyright headers to reflect the new year range.

Signed-off-by: Harald Hoyer <harald@matterlabs.dev>
2025-04-04 17:05:23 +02:00
Harald Hoyer
2ba5c45d31
Merge pull request #299 from matter-labs/leftover
fix(teepot-vault): remove leftover `tdx` module
2025-04-04 16:04:09 +02:00
Harald Hoyer
8596e0dc6a
fix(teepot-vault): remove leftover tdx module
Signed-off-by: Harald Hoyer <harald@matterlabs.dev>
2025-04-04 14:40:43 +02:00
Harald Hoyer
fdad63e4b1
Merge pull request #298 from matter-labs/yaml
feat(ci): switch to GitHub Container Registry for images
2025-04-02 17:28:06 +02:00
Harald Hoyer
3257f316b5
feat(ci): switch to GitHub Container Registry for images
Updated the workflow to push container images to GitHub Container Registry instead of Docker Hub. Added a login step for GHCR and updated image tagging and pushing commands accordingly.

Signed-off-by: Harald Hoyer <harald@matterlabs.dev>
2025-04-02 17:10:20 +02:00
Harald Hoyer
542e3a9fcc
Merge pull request #297 from matter-labs/pre-exec-context
fix(tee-key-preexec): add context to file write operations
2025-04-02 16:43:25 +02:00
Harald Hoyer
e27b5da856
fix(tee-key-preexec): add context to file write operations
- Add context to `std::fs::write` calls to improve error tracing.
- Ensures better debugging by attaching filenames to potential errors.

Signed-off-by: Harald Hoyer <harald@matterlabs.dev>
2025-04-02 16:18:27 +02:00
Harald Hoyer
9114c47b90
Merge pull request #292 from matter-labs/teepot_vault
chore: split-out vault code from `teepot` in `teepot-vault`
2025-04-02 15:18:01 +02:00
Harald Hoyer
f03a8ba643
Merge branch 'main' into teepot_vault 2025-03-28 14:13:14 +01:00
Harald Hoyer
49568c66a7
Merge pull request #295 from matter-labs/sha384-extend
feat(bin): enhance SHA384 extend utility with padding and tests
2025-03-28 13:57:21 +01:00
Harald Hoyer
fa2ecee4bd
feat(sha384-extend): enhance SHA384 extend utility with padding and tests
- Refactor `sha384-extend` to include digest padding and validation.
- Add `extend_sha384` function for hex-string-based digest extension.
- Introduce comprehensive test coverage for edge cases and errors.

Signed-off-by: Harald Hoyer <harald@matterlabs.dev>
2025-03-28 12:55:13 +01:00
Harald Hoyer
7258452b79
Merge pull request #294 from matter-labs/proper_otlp_http_logging
feat(config): update OTLP endpoint and protocol handling
2025-03-26 16:21:07 +01:00
Harald Hoyer
982fcc363b
Merge branch 'main' into teepot_vault 2025-03-25 13:40:50 +01:00
Harald Hoyer
e62aff3511
feat(config): update OTLP endpoint and protocol handling
- Change default OTLP endpoint to match the HTTP/JSON spec.
- Add dynamic protocol-based exporter configuration.
- Support both gRPC and HTTP/JSON transports for logging.

Signed-off-by: Harald Hoyer <harald@matterlabs.dev>
2025-03-25 11:49:57 +01:00
Harald Hoyer
6c3bd96617
Merge pull request #293 from matter-labs/tdx_wait_for_vector
feat(tdx_google): add iproute2 and vector initialization wait
2025-03-21 13:26:34 +01:00
Harald Hoyer
3f90e4f80b
feat(tdx_google): add iproute2 and vector initialization wait
- Include iproute2 in the container path for required networking tools.
- Add a script to wait for vector to initialize before proceeding.
2025-03-21 13:11:23 +01:00
Harald Hoyer
f8bd9e6a08
chore: split-out vault code from teepot in teepot-vault
Signed-off-by: Harald Hoyer <harald@matterlabs.dev>
2025-03-06 09:47:51 +01:00
Harald Hoyer
63c16b1177
Merge pull request #291 from matter-labs/no_quote
fix(verify-attestation): bail out if no quote is provided
2025-03-06 09:44:32 +01:00
Harald Hoyer
7cb3af4b65
Merge branch 'main' into no_quote 2025-03-06 09:30:33 +01:00
Harald Hoyer
51dc68b12f
Merge pull request #290 from matter-labs/self-attestation-readme-podman
docs(tee-self-attestation-test): add podman example
2025-03-06 09:30:17 +01:00
Harald Hoyer
55ea2a6069
fix(verify-attestation): bail out if no quote is provided
Signed-off-by: Harald Hoyer <harald@matterlabs.dev>
2025-03-06 09:07:31 +01:00
Harald Hoyer
98ed802b75
docs(tee-self-attestation-test): add podman example
Signed-off-by: Harald Hoyer <harald@matterlabs.dev>
2025-03-06 08:57:55 +01:00
Harald Hoyer
89145514b0
Merge pull request #285 from matter-labs/missing_recoverid_two
fix(verify-era-proof-attestation): handle missing RecoveryId signatures
2025-03-03 10:47:59 +01:00
Harald Hoyer
bece17f7bf
Merge branch 'main' into missing_recoverid_two 2025-03-03 08:52:32 +01:00
Harald Hoyer
bce991f77c
Merge pull request #283 from matter-labs/rustls_ring_provider
fix(teepot-vault): use `ring` as `CryptoProvider` for `rustls`
2025-03-01 09:36:27 +01:00
Harald Hoyer
589e375d47
Merge branch 'main' into rustls_ring_provider 2025-03-01 09:11:21 +01:00
Harald Hoyer
a6ea98a096
fix(verify-era-proof-attestation): handle missing RecoveryId signatures
- add `RecoveryId::Two` and `RecoveryId::Three`

Signed-off-by: Harald Hoyer <harald@matterlabs.dev>
2025-03-01 09:07:29 +01:00
Harald Hoyer
736fe10200
Merge pull request #284 from matter-labs/missing_recoverid
fix(verify-era-proof-attestation): handle missing RecoveryId signatures
2025-02-28 19:31:49 +01:00
Harald Hoyer
c26b3db290
fix(verify-era-proof-attestation): handle missing RecoveryId signatures
- Add fallback for missing RecoveryId in 64-byte signatures.
- Improve error context for invalid signature length.
- Add debug and trace logs for better diagnosis during verification.

Signed-off-by: Harald Hoyer <harald@matterlabs.dev>
2025-02-28 17:31:59 +01:00
Harald Hoyer
d6061c35a8
fix(teepot-vault): use ring as CryptoProvider for rustls
Newer `rustls` requires installing a process-global default `CryptoProvider`.

Signed-off-by: Harald Hoyer <harald@matterlabs.dev>
2025-02-28 14:14:57 +01:00
Harald Hoyer
0a73ed5012
Merge pull request #279 from matter-labs/cargo_update
chore: remove unused `rand` dependency and update crates
2025-02-27 10:40:34 +01:00
Harald Hoyer
d3c17a7ace
Merge branch 'main' into cargo_update 2025-02-25 13:22:35 +01:00
Harald Hoyer
942091d3ae
Merge pull request #277 from matter-labs/rtmr3
feat(tdx): add TDX RTMR extension support with UEFI marker
2025-02-25 13:21:57 +01:00
Harald Hoyer
bd24825ece
Merge branch 'main' into cargo_update 2025-02-21 09:31:25 +01:00
Harald Hoyer
46b9269fc1
Merge branch 'main' into rtmr3 2025-02-21 09:31:18 +01:00
Harald Hoyer
d345c62db7
Merge pull request #278 from matter-labs/metadata-fail
feat(tdx_google): add onFailure action to reboot on metadata.service errors
2025-02-21 09:28:28 +01:00
Harald Hoyer
f822c70721
chore: remove unused rand dependency and update crates
- Removed `rand` dependency from multiple `.toml` files and updated relevant imports to use `rand_core::OsRng`.
- Updated OpenTelemetry dependencies to latest versions and refactored SDK initialization to use `SdkLoggerProvider`.
- Bumped versions of several dependencies including `clap`, `awc`, `ring`, and `smallvec` for compatibility and features.

Signed-off-by: Harald Hoyer <harald@matterlabs.dev>
2025-02-20 15:40:13 +01:00
Harald Hoyer
cf4a6cfb60
feat(tdx_google): add onFailure action to reboot on metadata.service errors
- Introduce `onFailure` handler to trigger reboot after 5 minutes.
- Enhances system reliability by automating recovery measures.

Signed-off-by: Harald Hoyer <harald@matterlabs.dev>
2025-02-20 15:32:51 +01:00
Harald Hoyer
049f1b3de8
feat(tdx): add TDX RTMR extension support with UEFI marker
- Added `UEFI_MARKER_DIGEST_BYTES` constant for TDX RTMR extension.
- Implemented RTMR3 extension in `tee-key-preexec` for TDX attestation flow.
- Updated `rtmr-calc` to use `UEFI_MARKER_DIGEST_BYTES` for RTMR1 extension.

Signed-off-by: Harald Hoyer <harald@matterlabs.dev>
2025-02-20 15:15:44 +01:00
Harald Hoyer
a430e2f93b
Merge pull request #276 from matter-labs/sys
feat(tdx_google): add support for attestation in container
2025-02-20 12:55:39 +01:00
Harald Hoyer
a5cf220c57
feat(tdx_google): add support for attestation in container
- Mount `/sys/kernel/config` to enable attestation for TDX containers.
- Ensures compatibility with TDX guest measurements during runtime.
2025-02-20 12:14:10 +01:00
Harald Hoyer
e936f5079d
Merge pull request #272 from matter-labs/refactor
refactor(tdx_google): modularize tdx_google configuration
2025-02-20 10:04:11 +01:00
Harald Hoyer
439574f22c
chore(tdx_google): remove unused teepot package from system environment
Signed-off-by: Harald Hoyer <harald@matterlabs.dev>
2025-02-19 15:01:02 +01:00
Harald Hoyer
760ff7eff1
refactor(tdx_google): simplify service configurations
- Replaced hardcoded metadata-fetching logic with shared metadata service.
- Removed custom pre-start scripts and refactored environment handling.
- Updated Vector configuration to include custom field transformations.
- Streamlined container startup process and ensured proper cleanup.

Signed-off-by: Harald Hoyer <harald@matterlabs.dev>
2025-02-19 15:00:43 +01:00
Harald Hoyer
5d2ad57cfd
refactor(tdx_google): modularize tdx_google configuration
- Split `tdx_google/configuration.nix` into smaller modules: `vector.nix`, and `container.nix`.
- Simplified the main configuration by leveraging modular imports for better readability and maintainability.

Signed-off-by: Harald Hoyer <harald@matterlabs.dev>

# Conflicts:
#	packages/tdx_google/configuration.nix
2025-02-19 15:00:42 +01:00
Harald Hoyer
4d273076ee
Merge pull request #271 from matter-labs/Metadata-Flavor
fix(teepot): add custom HTTP header for google metadata and update default endpoint
2025-02-19 14:59:09 +01:00
Harald Hoyer
98a71b3e3a
fix(teepot): add custom HTTP header for google metadata and update default endpoint
- Replace `reqwest::get` with a configured `reqwest::Client` to support custom headers (e.g., "Metadata-Flavor: Google").
- Update default OTLP endpoint to include the "http://" prefix for clarity.
2025-02-19 13:58:39 +01:00
Harald Hoyer
ee3061b2ec
Merge pull request #270 from matter-labs/serial
feat(configuration): update journald and serial settings
2025-02-19 11:30:28 +01:00
Harald Hoyer
bbbce81541
feat(configuration): update journald and serial settings
- Set journald console to `/dev/ttyS0` for improved logging.
- Disable `serial-getty@ttyS0` service to avoid conflicts.
2025-02-19 11:16:34 +01:00
Harald Hoyer
c4b1431221
Merge pull request #268 from matter-labs/tdx-test
feat: rewrite google-metadata test as tdx-test
2025-02-18 08:36:12 +01:00
Harald Hoyer
daf375836b
chore: remove unused deps
Signed-off-by: Harald Hoyer <harald@matterlabs.dev>
2025-02-14 16:47:45 +01:00
Harald Hoyer
fbbb37ca5a
tests(tdxtest): ramp up the testing
Signed-off-by: Harald Hoyer <harald@matterlabs.dev>
2025-02-14 16:47:44 +01:00
Harald Hoyer
a41460b7f0
feat(tdx-google): enhance container service setup
- Add `vector.service` and `chronyd.service` dependencies to `docker_start_container` service.
- Use `EnvironmentFile` and a pre-start script to dynamically generate environment variables for container setup.
- Improve error handling and clarity in container initialization.
2025-02-14 16:47:43 +01:00
Harald Hoyer
908579cd60
feat: rewrite google-metadata test as tdx-test
Signed-off-by: Harald Hoyer <harald@matterlabs.dev>
2025-02-14 16:47:42 +01:00
Harald Hoyer
3325312c0d
Merge pull request #255 from matter-labs/vector_kafka
feat(google-tdx): add vector pushing to kafka for logging
2025-02-13 10:00:59 +01:00
Harald Hoyer
9266a9f072
feat(google-tdx): add Vector service integration
- Enable Vector service and configure OpenTelemetry source.
- Add sinks for logs output to console and Kafka.
- Configure environment setup for Kafka using GCP metadata API.

Signed-off-by: Harald Hoyer <harald@matterlabs.dev>
2025-02-12 08:34:18 +01:00
Harald Hoyer
ff22db6054
chore(google-tdx): removed commented-out ssh debugging
Signed-off-by: Harald Hoyer <harald@matterlabs.dev>
2025-02-11 08:29:34 +01:00
Harald Hoyer
c5cdc1e4ab
feat(google-tdx): disable LLMNR and MulticastDNS
- Configured resolved service, disabling LLMNR and MulticastDNS
  for improved resolution settings.

- Removed commented-out Prometheus Node config

Signed-off-by: Harald Hoyer <harald@matterlabs.dev>
2025-02-11 08:29:29 +01:00
Harald Hoyer
fae9ad7f58
Merge pull request #264 from matter-labs/renovate/trufflesecurity-trufflehog-3.x
chore(deps): update trufflesecurity/trufflehog action to v3.88.6
2025-02-11 08:28:54 +01:00
renovate[bot]
f3f5147bb1
chore(deps): update trufflesecurity/trufflehog action to v3.88.6 2025-02-10 18:59:21 +00:00
Harald Hoyer
a65e25742c
Merge pull request #263 from matter-labs/cargo_update
chore: cargo deps update
2025-02-10 19:58:55 +01:00
Harald Hoyer
45309e58f4
chore: cargo deps update
with code fixes for the new versions.

Signed-off-by: Harald Hoyer <harald@matterlabs.dev>
2025-02-10 15:44:16 +01:00
Harald Hoyer
99ab2f2b76
Merge pull request #231 from matter-labs/renovate/enarx-spdx-digest
chore(deps): update enarx/spdx digest to b5bfdd4
2025-02-10 15:37:47 +01:00
renovate[bot]
49faaa984b
chore(deps): update enarx/spdx digest to b5bfdd4 2025-02-10 13:49:56 +00:00
Harald Hoyer
584a07defa
Merge pull request #243 from matter-labs/renovate/reqwest-0.x-lockfile
chore(deps): update rust crate reqwest to v0.12.12
2025-02-10 14:49:30 +01:00
renovate[bot]
7d01a240d4
chore(deps): update rust crate reqwest to v0.12.12 2025-02-10 13:37:44 +00:00
Harald Hoyer
7a33be4a68
Merge pull request #230 from matter-labs/renovate/actions-checkout-digest
chore(deps): update actions/checkout digest to 11bd719
2025-02-10 14:36:18 +01:00
renovate[bot]
01eac64182
chore(deps): update actions/checkout digest to 11bd719 2025-02-10 12:56:55 +00:00
Harald Hoyer
129afe25e6
Merge pull request #256 from matter-labs/renovate/rustls-0.x-lockfile
chore(deps): update rust crate rustls to v0.23.22
2025-02-10 13:56:28 +01:00
renovate[bot]
87dd281437
chore(deps): update rust crate rustls to v0.23.22 2025-02-10 12:23:22 +00:00
Harald Hoyer
f95b6c52d6
Merge pull request #257 from matter-labs/renovate/serde-monorepo
chore(deps): update rust crate serde to v1.0.217
2025-02-10 13:20:52 +01:00
renovate[bot]
decdc55a89
chore(deps): update rust crate serde to v1.0.217 2025-02-10 11:38:07 +00:00
Harald Hoyer
3bad44d38f
Merge pull request #200 from matter-labs/renovate/trufflesecurity-trufflehog-3.x
chore(deps): update trufflesecurity/trufflehog action to v3.88.5
2025-02-10 12:36:25 +01:00
renovate[bot]
129c3c1333
chore(deps): update trufflesecurity/trufflehog action to v3.88.5 2025-02-10 11:23:00 +00:00
Harald Hoyer
c5273f2cc9
Merge pull request #254 from matter-labs/renovate/clap-4.x-lockfile
chore(deps): update rust crate clap to v4.5.28
2025-02-10 12:20:53 +01:00
renovate[bot]
6b9984f4d6
chore(deps): update rust crate clap to v4.5.28 2025-02-04 03:12:07 +00:00
Patrick
c6e236cf46
Merge pull request #252 from matter-labs/tdx-test
feat: add Google Metadata support and TDX container test
2025-02-03 17:17:20 +01:00
Harald Hoyer
11a22c9e67
feat: add Google Metadata support and TDX container test
- Introduced `google-metadata` binary for reading GCP instance attributes.
- Added TDX container test with new `container-test-tdx` package.
- Updated Nix workflow and deployment scripts for Google Metadata integration.
- Bumped `anyhow` to 1.0.95 and updated Cargo.lock.

Signed-off-by: Harald Hoyer <harald@matterlabs.dev>
2025-01-27 16:18:58 +01:00
Harald Hoyer
e2c31919c9
Merge pull request #251 from matter-labs/pab/onchain-verification
feat(tee-key-preexec): support for onchain-compatible pubkey in report_data
2025-01-17 13:17:16 +01:00
Patryk Bęza
afa524c18c
Address code review comments 2025-01-17 12:41:07 +01:00
Patryk Bęza
2d04ba0508
feat(tee-key-preexec): add support for Solidity-compatible pubkey in report_data
This PR is part of the effort to implement on-chain TEE proof
verification. This PR goes hand in hand with https://github.com/matter-labs/zksync-era/pull/3414.
2025-01-16 20:46:16 +01:00
Patrick
e5cca31ac0
Merge pull request #250 from matter-labs/preexec-test
feat(tee-key-preexec): add test container for tee-key-preexec
2025-01-15 16:01:59 +01:00
Harald Hoyer
99037ceb6c
feat(tee-key-preexec): add test container for tee-key-preexec
Signed-off-by: Harald Hoyer <harald@matterlabs.dev>
2025-01-15 15:48:21 +01:00
Harald Hoyer
e649fdab87
Merge pull request #248 from matter-labs/tdx_nix
feat(tdx): add nix build for TDX google VMs
2025-01-14 16:10:31 +01:00
Harald Hoyer
dc1e756ec6
feat(tdx): add nix build for TDX google VMs
Signed-off-by: Harald Hoyer <harald@matterlabs.dev>
2025-01-14 14:50:43 +01:00
Harald Hoyer
8270c389e4
Merge pull request #247 from matter-labs/collateral_free_on_error
fix(teepot-tee-quote-verification-rs): free collateral on ffi error
2025-01-13 15:29:52 +01:00
Harald Hoyer
dc9263911f
fix(teepot-tee-quote-verification-rs): free collateral on ffi error
Free the FFI collateral even when the Rust-side checks fail, to prevent memory leaks.

Also remove the `TryFrom<&sgx_ql_qve_collateral_t>` as it is unsafe.

Signed-off-by: Harald Hoyer <harald@matterlabs.dev>
2025-01-13 13:50:04 +01:00
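The leak class fixed above is the classic "early error return skips the FFI free" pattern. One way to make that impossible is an RAII guard whose `Drop` does the freeing; this is a toy sketch with a counter standing in for the FFI call (the real code releases a DCAP-allocated `sgx_ql_qve_collateral_t`):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Toy stand-in for the FFI free function; the real fix releases the
// DCAP-allocated collateral via the quote-verification FFI.
static FREED: AtomicUsize = AtomicUsize::new(0);

fn ffi_free_collateral() {
    FREED.fetch_add(1, Ordering::SeqCst);
}

/// RAII guard: the collateral is freed when the guard goes out of scope,
/// including on every early error return.
struct CollateralGuard;

impl Drop for CollateralGuard {
    fn drop(&mut self) {
        ffi_free_collateral();
    }
}

fn verify(fail_checks: bool) -> Result<(), &'static str> {
    let _guard = CollateralGuard; // acquired from FFI in the real code
    if fail_checks {
        // Early return: the guard's Drop still runs, so nothing leaks.
        return Err("rust-side validation failed");
    }
    Ok(())
}

fn main() {
    let _ = verify(true);
    let _ = verify(false);
    // Freed exactly once per call, on both the error and the success path.
    assert_eq!(FREED.load(Ordering::SeqCst), 2);
}
```

With the guard in place, forgetting to free on a new error path is no longer possible by construction.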
Harald Hoyer
1f88d506a3
Merge pull request #246 from matter-labs/fix_leak
fix(teepot-tee-quote-verification-rs): memory leak
2025-01-13 10:53:50 +01:00
Harald Hoyer
584223dc93
fix(teepot-tee-quote-verification-rs): memory leak
Signed-off-by: Harald Hoyer <harald@matterlabs.dev>
2025-01-13 10:35:12 +01:00
Harald Hoyer
9de56d3adb
Merge pull request #234 from matter-labs/renovate/cachix-install-nix-action-30.x
chore(deps): update cachix/install-nix-action action to v30
2025-01-07 11:28:08 +01:00
renovate[bot]
102f73b1eb
chore(deps): update cachix/install-nix-action action to v30 2024-12-20 16:13:47 +00:00
Patrick
d2fbdb5bed
Merge pull request #236 from matter-labs/flake_update
chore(flake): update nixsgx flake input
2024-12-20 17:11:54 +01:00
Harald Hoyer
d11f63701f
chore: fix deny.toml
see https://github.com/EmbarkStudios/cargo-deny/pull/611

Signed-off-by: Harald Hoyer <harald@matterlabs.dev>
2024-12-20 15:32:55 +01:00
Harald Hoyer
c5373dfd8f
chore(flake): update nixsgx flake input
Signed-off-by: Harald Hoyer <harald@matterlabs.dev>
2024-12-20 14:29:04 +01:00
Harald Hoyer
cc46f8db77
Merge pull request #232 from matter-labs/tdx_extend
feat: add tdx-extend, sha384-extend and rtmr-calc
2024-12-20 14:08:52 +01:00
Harald Hoyer
5d32396966
feat: add tdx-extend, sha384-extend and rtmr-calc
This enables pre-calculating the TDX rtmr[1,2,3] values for an attested boot process.

Signed-off-by: Harald Hoyer <harald@matterlabs.dev>
2024-12-20 13:27:55 +01:00
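Pre-calculating RTMR values as described above rests on the TDX extend rule: RTMR_new = SHA-384(RTMR_old || digest), over 48-byte values. A sketch of that rule with the hash left pluggable so it stays dependency-free (the real tools use an actual SHA-384 implementation; the XOR-fold below is only a toy stand-in):

```rust
// TDX measurement-register extend rule: new = H(old || digest),
// where H is SHA-384 and all values are 48 bytes wide.
fn rtmr_extend<H>(rtmr: &mut [u8; 48], digest: &[u8; 48], hash: H)
where
    H: Fn(&[u8]) -> [u8; 48],
{
    // Concatenate the current register value and the new digest...
    let mut buf = [0u8; 96];
    buf[..48].copy_from_slice(rtmr);
    buf[48..].copy_from_slice(digest);
    // ...and replace the register with the hash of the concatenation.
    *rtmr = hash(&buf);
}

fn main() {
    // Toy stand-in hash: XOR-fold the 96-byte buffer into 48 bytes.
    let fold = |data: &[u8]| {
        let mut out = [0u8; 48];
        for (i, b) in data.iter().enumerate() {
            out[i % 48] ^= *b;
        }
        out
    };
    let mut rtmr = [0u8; 48]; // RTMRs start zeroed
    let digest = [0xabu8; 48];
    rtmr_extend(&mut rtmr, &digest, fold);
    // With a zeroed register, the XOR-fold of (zeros || digest) is the digest.
    assert_eq!(rtmr, [0xabu8; 48]);
}
```

Because the rule is a pure function of the ordered digests, replaying the same extends offline reproduces the values the TDX module will report, which is what makes pre-calculation for an attested boot possible.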
Harald Hoyer
fbc4897dad
Merge pull request #229 from matter-labs/cargo_update
chore: cargo update
2024-12-20 12:44:04 +01:00
Harald Hoyer
0b67a14cd1
chore: cargo update
Signed-off-by: Harald Hoyer <harald@matterlabs.dev>
2024-12-20 12:20:39 +01:00
Harald Hoyer
68805b10a8
Merge pull request #226 from matter-labs/TDX
feat: add TDX
2024-12-20 12:00:27 +01:00
Harald Hoyer
4610475fae
feat: add TDX support
Signed-off-by: Harald Hoyer <harald@matterlabs.dev>
2024-12-20 10:54:24 +01:00
Harald Hoyer
f4fba51e3e
chore: rustfmt 2024-12-20 09:31:03 +01:00
Harald Hoyer
c2e8bb6f94
chore(licensing): clarify licenses for TDX packages
- Added explicit license clarifications for `tdx-attest-sys` and `tdx-attest-rs` packages.
- Ensured compliance with BSD-3-Clause for both packages.
2024-12-20 09:31:02 +01:00
Harald Hoyer
a0f101acf1
feat(teepot-crate): add libtdx_attest to dependencies
- Included `nixsgx.sgx-dcap.libtdx_attest` in the dependencies list.
- Ensures support for TDX attestation in the build environment.
2024-12-20 09:31:01 +01:00
Harald Hoyer
34a00bc5bd
feat(shell): enhance teepot shell with Rust tools support
- Add rustfmt, clippy, and rust-src as extensions in the Rust toolchain.
- Include bindgenHook and pkg-config in nativeBuildInputs for improved build support.
- Set RUST_SRC_PATH for better Rust library integration.
2024-12-20 09:31:01 +01:00
Harald Hoyer
b066cdd15a
fix: update build process for teepot package
- Fix output format for propagated-user-env-packages.
- Remove empty bin directory after binaries are moved.
2024-12-20 09:31:00 +01:00
Harald Hoyer
f818ac61c2
chore(flake.nix): update crane to ref 8ff9c45
- Upgraded crane from v0.17.3 to v0.19.3 using a specific commit ref.
- Ensures compatibility with the latest improvements and fixes in crane.

Signed-off-by: Harald Hoyer <harald@matterlabs.dev>
2024-12-20 09:30:59 +01:00
Harald Hoyer
83d57bf354
chore: update Rust toolchain to version 1.83
- Upgraded the Rust version in rust-toolchain.toml to 1.83.
- Ensures compatibility and access to the latest features and fixes.

Signed-off-by: Harald Hoyer <harald@matterlabs.dev>
2024-12-20 09:29:43 +01:00
Patrick
e4629aee55
Merge pull request #225 from matter-labs/handle_old_proof_response
fix(proof-validation): handle optional proof status
2024-11-28 17:45:21 +01:00
Patrick
78471f5b64
Merge branch 'main' into handle_old_proof_response 2024-11-28 17:28:53 +01:00
Patrick
6e88e200da
Merge pull request #224 from matter-labs/fix_logging
refactor(logging): enhance logging setup and usage
2024-11-28 17:25:25 +01:00
Harald Hoyer
a7951f95bc
Merge branch 'main' into handle_old_proof_response 2024-11-28 16:49:04 +01:00
Harald Hoyer
4c2a096917
Merge branch 'main' into fix_logging 2024-11-28 16:48:28 +01:00
Harald Hoyer
ba7868c6b0
Merge pull request #223 from matter-labs/nix_flake_update
chore: update dependencies and enhance shell configuration
2024-11-28 16:48:13 +01:00
Harald Hoyer
f0fea5c122
refactor(logging): enhance logging setup and usage
- Modified the `setup_logging` function to return a `Subscriber`, improving flexibility and reuse.
- Integrated `tracing::subscriber::set_global_default` in the main functions to establish the logging subscriber globally.
- Added configurations for span events and control over file and line information display.

Signed-off-by: Harald Hoyer <harald@matterlabs.dev>
2024-11-28 15:49:15 +01:00
Harald Hoyer
4a0a4f6e5e
fix(proof-validation): handle optional proof status
Ensure proof status is treated as optional, preventing crashes when status is absent.
- Modify status field to `Option<String>` in `Proof` struct.
- Update validation logic to handle `None` values safely.
- Adjust main logic to check for "permanently_ignored" safely.
2024-11-28 15:48:23 +01:00
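The fix above boils down to modeling the status as `Option<String>` and treating an absent value as "retry", not as a crash. A minimal std-only sketch (the real `Proof` struct in the repository has more fields):

```rust
// Minimal sketch of the optional-status handling described above;
// the actual struct and field names in the repo may differ.
struct Proof {
    status: Option<String>,
}

/// A batch is skipped only when the server explicitly reports it as
/// permanently ignored; a missing status (older servers) means "retry".
fn is_permanently_ignored(proof: &Proof) -> bool {
    proof
        .status
        .as_deref()
        .map(|s| s.eq_ignore_ascii_case("permanently_ignored"))
        .unwrap_or(false)
}

fn main() {
    let ignored = Proof { status: Some("permanently_ignored".into()) };
    let missing = Proof { status: None };
    assert!(is_permanently_ignored(&ignored));
    assert!(!is_permanently_ignored(&missing)); // no crash on absent status
}
```

The key point is that `None` takes the safe default path instead of being unwrapped.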
Harald Hoyer
d8239dba2f
chore: update dependencies and enhance shell configuration
- Updated multiple dependencies in flake.lock to their latest revisions.
- Improved the teepot shell configuration with enhanced environment variable settings for SGX support.
- Reinstated OPENSSL_NO_VENDOR and added library paths to ensure compatibility and proper linking.
2024-11-28 15:45:39 +01:00
Harald Hoyer
5b7f7482e6
Merge pull request #221 from matter-labs/tee/pab/error-handling-get-tee-proofs-api
feat(verifier): don't retry verifying permanently ignored batches
2024-11-27 11:09:21 +01:00
Harald Hoyer
35db54779e
Merge branch 'main' into tee/pab/error-handling-get-tee-proofs-api 2024-11-27 10:48:35 +01:00
Patrick
73ce227070
Merge pull request #222 from matter-labs/license
chore: update lint workflow actions
2024-11-27 10:33:30 +01:00
Harald Hoyer
2c6a62a471
chore: update lint workflow actions
- Changed spdx action to reference a stable commit instead of master.
- Changed license list to conform to new action parameter format
2024-11-27 08:50:42 +01:00
Patryk Bęza
e63d0901fa
feat(verifier): don't retry verifying permanently ignored batches
Currently, the [TEE verifier][1] – the tool for continuous SGX
attestation and batch signature verification – is [stuck][2] on batches
that failed to be proven and are marked as `permanently_ignored`. The
tool should be able to distinguish between batches that are permanently
ignored (and should be skipped) and batches that have failed but will be
retried. This PR enables that distinction.

This commit goes hand in hand with the following PR:
https://github.com/matter-labs/zksync-era/pull/3321

[1]: https://github.com/matter-labs/teepot/blob/main/bin/verify-era-proof-attestation/src/main.rs
[2]: https://grafana.matterlabs.dev/goto/unFqf57Hg?orgId=1
2024-11-26 17:19:55 +01:00
Harald Hoyer
1a8a9f17fa
Merge pull request #212 from matter-labs/logging
feat(logging): centralize logging setup in teepot crate
2024-09-18 16:38:39 +02:00
Harald Hoyer
af3ab51320
feat(logging): centralize logging setup in teepot crate
- Added a new logging module in `teepot` crate.
- Removed redundant logging setup code from individual projects.
- Updated dependencies and references for logging setup.

Signed-off-by: Harald Hoyer <harald@matterlabs.dev>
2024-09-18 16:08:13 +02:00
Harald Hoyer
2ff3b1168d
Merge pull request #210 from matter-labs/crane
fix(flake.nix): remove redundant crane input follow
2024-09-18 15:46:41 +02:00
Harald Hoyer
b7f4828a6d
Merge branch 'main' into crane 2024-09-18 15:36:26 +02:00
Harald Hoyer
7c61f81137
Merge pull request #211 from matter-labs/magix_nix_cache
ci: remove magic nix cache
2024-09-18 15:36:15 +02:00
Harald Hoyer
69ae1d39e3
Merge branch 'main' into magix_nix_cache 2024-09-18 15:24:08 +02:00
Harald Hoyer
538782e1f9
Merge pull request #209 from matter-labs/feat/hex-serialization
feat(tee): use hex deserialization for RPC requests
2024-09-18 15:22:43 +02:00
Harald Hoyer
9bce6edfaa
ci: remove magic nix cache
Signed-off-by: Harald Hoyer <harald@matterlabs.dev>
2024-09-18 14:56:04 +02:00
Harald Hoyer
21a9ecdee1
fix(flake.nix): remove redundant crane input follow
- Removed the unnecessary crane input follow from flake.nix.

```
warning: input 'crane' has an override for a non-existent input 'nixpkgs'
```
2024-09-18 14:46:33 +02:00
Patryk Bęza
9bf40c9cb9
feat(tee): use hex deserialization for RPC requests
Following Anton's suggestion, we have switched to hex serialization for
API/RPC requests and responses. Previously, we used default JSON
serialization for Vec<u8>, which resulted in a lengthy comma-separated
list of integers.

This change standardizes serialization, making it more efficient and
reducing the size of the responses. The previous format, with a series
of comma-separated integers for pubkey-like fields, looked odd.

Then:
```
curl -X POST\
     -H "Content-Type: application/json" \
     --data '{"jsonrpc": "2.0", "id": 1, "method": "unstable_getTeeProofs", "params": [491882, "Sgx"] }' \
        https://mainnet.era.zksync.io
{"jsonrpc":"2.0","result":[{"attestation":[3,0,2,0,0,0,0,0,10,<dozens of comma-separated integers here>
```

Now:
```
$ curl -X POST \
       -H "Content-Type: application/json" \
       --data '{"jsonrpc": "2.0", "id": 1, "method": "unstable_getTeeProofs", "params": [1, "sgx"] }' \
          http://localhost:3050
{"jsonrpc":"2.0","result":[{"l1BatchNumber":1,"teeType":"sgx","pubkey":"0506070809","signature":"0001020304","proof":"0a0b0c0d0e","provedAt":"2024-09-16T11:53:38.253033Z","attestation":"0403020100"}],"id":1}
```

This change needs to be deployed in lockstep with:
https://github.com/matter-labs/zksync-era/pull/2887.
2024-09-18 14:10:21 +02:00
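The formatting change above is plain lowercase hex for byte vectors. A std-only sketch of the encoding (in the repository this is done via `serde_with`'s hex helpers rather than by hand):

```rust
// Std-only illustration of the hex encoding now used for the pubkey,
// signature, proof and attestation fields in RPC responses.
fn to_hex(bytes: &[u8]) -> String {
    bytes.iter().map(|b| format!("{b:02x}")).collect()
}

fn main() {
    // The pubkey from the "Now" example above: [5, 6, 7, 8, 9] -> "0506070809"
    assert_eq!(to_hex(&[5, 6, 7, 8, 9]), "0506070809");
    // Compact fixed-width output, versus "[5,6,7,8,9]" under the old
    // default JSON serialization of Vec<u8>.
}
```

Two hex characters per byte gives a compact, unambiguous wire format compared with the old comma-separated integer lists.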
Harald Hoyer
2c326f83bd
Merge pull request #207 from matter-labs/container-tag
chore: tag container with git tag
2024-09-17 15:09:02 +02:00
Harald Hoyer
e7b743b213
chore: tag container with git tag
Allow all git tags in the workflow trigger and tag the matterlabsrobot container image with them.

Signed-off-by: Harald Hoyer <harald@matterlabs.dev>
2024-09-17 14:48:49 +02:00
Harald Hoyer
3b7041b459
Merge pull request #206 from matter-labs/cargo-release
chore: Release
2024-09-16 17:38:14 +02:00
229 changed files with 21755 additions and 4391 deletions


@@ -15,15 +15,18 @@ jobs:
     runs-on: ubuntu-latest
     steps:
       - name: checkout
-        uses: actions/checkout@692973e3d937129bcbf40652eb9f2f61becf3332 # v4
+        uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
-      - uses: enarx/spdx@master
+      - uses: enarx/spdx@d4020ee98e3101dd487c5184f27c6a6fb4f88709
         with:
-          licenses: Apache-2.0 BSD-3-Clause MIT
+          licenses: |-
+            Apache-2.0
+            BSD-3-Clause
+            MIT
   taplo:
     name: taplo
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@692973e3d937129bcbf40652eb9f2f61becf3332 # v4
+      - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
-      - uses: cachix/install-nix-action@v27
+      - uses: cachix/install-nix-action@v31
       - run: nix run nixpkgs#taplo -- fmt --check

.github/workflows/nix-non-x86.yml (new file, 38 lines)

@@ -0,0 +1,38 @@
name: nix-non-x86
permissions:
contents: read
pull-requests: read
on:
pull_request:
branches: ["main"]
push:
branches: ["main"]
tags: ["*"]
jobs:
macos-latest:
runs-on: macos-latest
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
- uses: cachix/install-nix-action@v31
with:
extra_nix_config: |
access-tokens = github.com=${{ github.token }}
trusted-public-keys = cache.nixos.org-1:6NCHdD59X431o0gWypbMrAURkbJ16ZPMQFGspcDShjY= tee-pot:SS6HcrpG87S1M6HZGPsfo7d1xJccCGev7/tXc5+I4jg=
substituters = https://cache.nixos.org/ https://attic.teepot.org/tee-pot
sandbox = true
- name: Setup Attic cache
uses: ryanccn/attic-action@v0
with:
endpoint: https://attic.teepot.org/
cache: tee-pot
token: ${{ secrets.ATTIC_TOKEN }}
- name: nixci
# FIXME: this prevents it from running on macos
# https://github.com/NixOS/nix/pull/12570
# run: nix run github:nixos/nixpkgs/nixos-24.11#nixci -- build
run: nix build -L .#teepot --no-sandbox


@@ -2,10 +2,10 @@ name: nix
 on:
   pull_request:
-    branches: [ "main" ]
+    branches: ["main"]
   push:
-    branches: [ "main" ]
+    branches: ["main"]
-    tags: [ "*-sgx-*" ]
+    tags: ["*"]
 concurrency:
   group: ${{ github.workflow }}-${{ github.ref }}
@@ -15,9 +15,10 @@ jobs:
   check:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@692973e3d937129bcbf40652eb9f2f61becf3332 # v4
+      - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
-      - uses: cachix/install-nix-action@v27
+      - uses: cachix/install-nix-action@v31
         with:
+          install_url: https://releases.nixos.org/nix/nix-2.28.3/install
           extra_nix_config: |
             access-tokens = github.com=${{ github.token }}
             trusted-public-keys = cache.nixos.org-1:6NCHdD59X431o0gWypbMrAURkbJ16ZPMQFGspcDShjY= tee-pot:SS6HcrpG87S1M6HZGPsfo7d1xJccCGev7/tXc5+I4jg=
@@ -29,18 +30,17 @@ jobs:
           endpoint: https://attic.teepot.org/
           cache: tee-pot
           token: ${{ secrets.ATTIC_TOKEN }}
-      - name: Enable magic Nix cache
-        uses: DeterminateSystems/magic-nix-cache-action@main
       - run: nix flake check -L --show-trace --keep-going
   build:
     needs: check
-    runs-on: [ matterlabs-default-infra-runners ]
+    runs-on: [matterlabs-default-infra-runners]
     steps:
-      - uses: actions/checkout@692973e3d937129bcbf40652eb9f2f61becf3332 # v4
+      - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
-      - uses: cachix/install-nix-action@v27
+      - uses: cachix/install-nix-action@v31
         with:
+          install_url: https://releases.nixos.org/nix/nix-2.28.3/install
           extra_nix_config: |
             access-tokens = github.com=${{ github.token }}
             trusted-public-keys = cache.nixos.org-1:6NCHdD59X431o0gWypbMrAURkbJ16ZPMQFGspcDShjY= tee-pot:SS6HcrpG87S1M6HZGPsfo7d1xJccCGev7/tXc5+I4jg=
@@ -52,15 +52,13 @@ jobs:
           endpoint: https://attic.teepot.org/
           cache: tee-pot
           token: ${{ secrets.ATTIC_TOKEN }}
-      - name: Enable magic Nix cache
-        uses: DeterminateSystems/magic-nix-cache-action@main
       - name: nix build
         run: nix run github:nixos/nixpkgs/nixos-23.11#nixci
   push_to_docker:
     needs: build
-    runs-on: [ matterlabs-default-infra-runners ]
+    runs-on: [matterlabs-default-infra-runners]
     concurrency:
       group: ${{ github.workflow }}-${{ github.ref }}-${{ matrix.config.nixpackage }}
       cancel-in-progress: true
@@ -77,10 +75,12 @@ jobs:
         - { nixpackage: 'container-self-attestation-test-sgx-azure' }
         - { nixpackage: 'container-verify-attestation-sgx' }
         - { nixpackage: 'container-verify-era-proof-attestation-sgx' }
+        - { nixpackage: 'container-tdx-test' }
     steps:
       - uses: actions/checkout@v4
-      - uses: cachix/install-nix-action@v27
+      - uses: cachix/install-nix-action@v31
         with:
+          install_url: https://releases.nixos.org/nix/nix-2.28.3/install
           extra_nix_config: |
             access-tokens = github.com=${{ github.token }}
             trusted-public-keys = cache.nixos.org-1:6NCHdD59X431o0gWypbMrAURkbJ16ZPMQFGspcDShjY= tee-pot:SS6HcrpG87S1M6HZGPsfo7d1xJccCGev7/tXc5+I4jg=
@@ -92,14 +92,13 @@ jobs:
           endpoint: https://attic.teepot.org/
           cache: tee-pot
           token: ${{ secrets.ATTIC_TOKEN }}
-      - name: Enable magic Nix cache
-        uses: DeterminateSystems/magic-nix-cache-action@main
-      - name: Log in to Docker Hub
+      - name: Login to GitHub Container Registry
-        uses: docker/login-action@v3
+        uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
         with:
-          username: ${{ secrets.DOCKERHUB_USER }}
-          password: ${{ secrets.DOCKERHUB_TOKEN }}
+          registry: ghcr.io
+          username: ${{ github.actor }}
+          password: ${{ secrets.GITHUB_TOKEN }}
       - name: Load container
         id: build
@@ -111,15 +110,21 @@ jobs:
       - name: Push container
         run: |
-          echo "Pushing image ${{ steps.build.outputs.IMAGE_TAG }} to Docker Hub"
+          echo "Pushing image ${{ steps.build.outputs.IMAGE_TAG }} to GitHub Container Registry"
-          docker tag "${{ steps.build.outputs.IMAGE_TAG }}" matterlabsrobot/"${{ steps.build.outputs.IMAGE_TAG }}"
+          docker tag "${{ steps.build.outputs.IMAGE_TAG }}" "ghcr.io/${{ github.repository_owner }}"/"${{ steps.build.outputs.IMAGE_TAG }}"
-          docker push matterlabsrobot/"${{ steps.build.outputs.IMAGE_TAG }}"
+          docker push "ghcr.io/${{ github.repository_owner }}"/"${{ steps.build.outputs.IMAGE_TAG }}"
       - name: Tag container as latest
         if: ${{ github.event_name == 'push' }}
         run: |
-          docker tag "${{ steps.build.outputs.IMAGE_TAG }}" matterlabsrobot/"${{ steps.build.outputs.IMAGE_NAME }}:latest"
+          docker tag "${{ steps.build.outputs.IMAGE_TAG }}" "ghcr.io/${{ github.repository_owner }}"/"${{ steps.build.outputs.IMAGE_NAME }}:latest"
-          docker push matterlabsrobot/"${{ steps.build.outputs.IMAGE_NAME }}:latest"
+          docker push "ghcr.io/${{ github.repository_owner }}"/"${{ steps.build.outputs.IMAGE_NAME }}:latest"
+      - name: Tag container with tag
+        if: ${{ github.event_name == 'push' && github.ref_type == 'tag' }}
+        run: |
+          docker tag "${{ steps.build.outputs.IMAGE_TAG }}" "ghcr.io/${{ github.repository_owner }}"/"${{ steps.build.outputs.IMAGE_NAME }}:$GITHUB_REF_NAME"
+          docker push "ghcr.io/${{ github.repository_owner }}"/"${{ steps.build.outputs.IMAGE_NAME }}:$GITHUB_REF_NAME"
       - name: Generate build ID for Flux Image Automation
         id: flux

View file

@@ -5,11 +5,11 @@ jobs:
     runs-on: ubuntu-latest
     steps:
      - name: Checkout code
-        uses: actions/checkout@692973e3d937129bcbf40652eb9f2f61becf3332 # v4
+        uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
        with:
          fetch-depth: 0
      - name: TruffleHog OSS
-        uses: trufflesecurity/trufflehog@06bbd6fd493fcac4a6db0e4850a92bcf932fafed # v3.81.10
+        uses: trufflesecurity/trufflehog@c8921694a53d95ce424af6ae76dbebf3b6a83aef # v3.88.30
        with:
          path: ./
          base: ${{ github.event.repository.default_branch }}

Cargo.lock (generated, 4350 lines changed): file diff suppressed because it is too large.


@@ -1,12 +1,19 @@
 [workspace]
-members = ["crates/*", "bin/*"]
+members = ["crates/*", "bin/*", "crates/teepot-vault/bin/*"]
 resolver = "2"
+# exclude x86_64 only crates
+exclude = [
+    "crates/teepot-tee-quote-verification-rs",
+    "crates/teepot-tdx-attest-rs",
+    "crates/teepot-tdx-attest-sys",
+]
 [profile.release]
 strip = true
 [workspace.package]
-version = "0.3.0"
+version = "0.6.0"
 edition = "2021"
 authors = ["Harald Hoyer <hh@matterlabs.dev>"]
 # rest of the workspace, if not specified in the package section
@@ -17,56 +24,68 @@ homepage = "https://github.com/matter-labs/teepot"
 [workspace.dependencies]
 actix-http = "3"
-actix-tls = "3"
-actix-web = { version = "4.5", features = ["rustls-0_22"] }
+actix-web = { version = "4.5", features = ["rustls-0_23"] }
 anyhow = "1.0.82"
-awc = { version = "3.4", features = ["rustls-0_22-webpki-roots"] }
+asn1_der = { version = "0.7", default-features = false, features = ["native_types"] }
+async-trait = "0.1.86"
+awc = { version = "3.5", features = ["rustls-0_23-webpki-roots"] }
 base64 = "0.22.0"
-bitflags = "2.5"
 bytemuck = { version = "1.15.0", features = ["derive", "min_const_generics", "extern_crate_std"] }
 bytes = "1"
+chrono = "0.4.40"
 clap = { version = "4.5", features = ["std", "derive", "env", "error-context", "help", "usage", "wrap_help"], default-features = false }
-const-oid = { version = "0.9", default-features = false }
-ctrlc = "3.4"
-der = "0.7.9"
+config = { version = "0.15.8", default-features = false, features = ["yaml", "json", "toml", "async"] }
+const-oid = { version = "0.9.6", default-features = false }
+dcap-qvl = "0.2.3"
 enumset = { version = "1.1", features = ["serde"] }
-futures-core = { version = "0.3.30", features = ["alloc"], default-features = false }
-getrandom = "0.2.14"
+futures = "0.3.31"
+futures-core = { version = "0.3.30", default-features = false }
+getrandom = { version = "0.3.1", features = ["std"] }
+gpt = "4.0.0"
 hex = { version = "0.4.3", features = ["std"], default-features = false }
-intel-tee-quote-verification-rs = { package = "teepot-tee-quote-verification-rs", path = "crates/teepot-tee-quote-verification-rs", version = "0.3.0" }
-intel-tee-quote-verification-sys = { version = "0.2.1" }
-jsonrpsee-types = { version = "0.23", default-features = false }
-log = "0.4"
+intel-dcap-api = { path = "crates/intel-dcap-api" }
+jsonrpsee-types = "0.25.1"
+mockito = "1.4"
 num-integer = "0.1.46"
 num-traits = "0.2.18"
+opentelemetry = { version = "0.30", features = ["default", "logs"] }
+opentelemetry-appender-tracing = { version = "0.30", features = ["experimental_metadata_attributes", "log"] }
+opentelemetry-otlp = { version = "0.30", features = ["grpc-tonic", "logs"] }
+opentelemetry-semantic-conventions = { version = "0.30", features = ["semconv_experimental"] }
+opentelemetry_sdk = { version = "0.30", features = ["tokio", "rt-tokio"] }
 p256 = "0.13.2"
-pgp = "0.13"
+pe-sign = "0.1.10"
+percent-encoding = "2.3.1"
+pgp = { version = "0.16", default-features = false }
 pkcs8 = { version = "0.10" }
-rand = "0.8"
 reqwest = { version = "0.12", features = ["json"] }
-ring = { version = "0.17.8", features = ["std"], default-features = false }
 rsa = { version = "0.9.6", features = ["sha2", "pem"] }
-rustls = { version = "0.22" }
-rustls-pemfile = "2"
-sec1 = { version = "0.7.3", features = ["der"], default-features = false }
-secp256k1 = { version = "0.29", features = ["rand-std", "global-context"] }
+rustls = { version = "0.23.20", default-features = false, features = ["std", "logging", "tls12", "ring"] }
+secp256k1 = { version = "0.31", features = ["rand", "global-context"] }
 serde = { version = "1", features = ["derive", "rc"] }
 serde_json = "1"
 serde_with = { version = "3.8", features = ["base64", "hex"] }
-serde_yaml = "0.9.33"
 sha2 = "0.10.8"
+sha3 = "0.10.8"
 signature = "2.2.0"
-teepot = { path = "crates/teepot" }
+teepot = { version = "0.6.0", path = "crates/teepot" }
+teepot-tee-quote-verification-rs = { version = "0.6.0", path = "crates/teepot-tee-quote-verification-rs" }
+teepot-vault = { version = "0.6.0", path = "crates/teepot-vault" }
 testaso = "0.1.0"
-thiserror = "1.0.59"
+thiserror = "2.0.11"
-tokio = { version = "1", features = ["sync", "macros", "rt-multi-thread", "fs", "time"] }
+tokio = { version = "1", features = ["sync", "macros", "rt-multi-thread", "fs", "time", "signal"] }
+tokio-util = "0.7.14"
 tracing = "0.1"
 tracing-actix-web = "0.7"
+tracing-futures = { version = "0.2.5", features = ["std"] }
 tracing-log = "0.2"
-tracing-subscriber = { version = "0.3", features = ["env-filter"] }
+tracing-subscriber = { version = "0.3", features = ["env-filter", "json", "ansi"] }
+tracing-test = { version = "0.2.5", features = ["no-env-filter"] }
 url = "2.5.2"
-webpki-roots = "0.26.1"
+webpki-roots = "1.0.0"
-x509-cert = { version = "0.2", features = ["builder", "signature"] }
+x509-cert = { version = "0.2", features = ["builder", "signature", "default"] }
 zeroize = { version = "1.7.0", features = ["serde"] }
-zksync_basic_types = "=0.1.0"
+zksync_basic_types = "28.6.0-non-semver-compat"
-zksync_types = "=0.1.0"
+zksync_types = "28.6.0-non-semver-compat"
-zksync_web3_decl = "=0.1.0"
+zksync_web3_decl = "28.6.0-non-semver-compat"


@ -1,27 +1,37 @@
 # teepot
-Key Value store in a TEE with Remote Attestation for Authentication
-
-## Introduction
-
-This project is a key-value store that runs in a Trusted Execution Environment (TEE) and uses Remote Attestation for
-Authentication.
-The key-value store is implemented using Hashicorp Vault running in an Intel SGX enclave via the Gramine runtime.
 
 ## Parts of this project
 
-- `teepot`: The main rust crate that abstracts TEEs and key-value stores.
-- `tee-vault-unseal`: An enclave that uses the Vault API to unseal a vault as a proxy.
-- `vault-unseal`: A client utility that talks to `tee-vault-unseal` to unseal a vault.
-- `tee-vault-admin`: An enclave that uses the Vault API to administer a vault as a proxy.
-- `vault-admin`: A client utility that talks to `tee-vault-admin` to administer a vault.
-- `teepot-read`: A pre-exec utility that reads from the key-value store and passes the key-value pairs as environment
-  variables to the enclave.
-- `teepot-write`: A pre-exec utility that reads key-values from the environment variables and writes them to the
-  key-value store.
-- `verify-attestation`: A client utility that verifies the attestation of an enclave.
-- `tee-key-preexec`: A pre-exec utility that generates a p256 secret key and passes it as an environment variable to the
-  enclave along with the attestation quote containing the hash of the public key.
+### teepot - lib
+
+- `teepot`: The main rust crate that abstracts TEEs.
+- `verify-attestation`: A client utility that verifies the attestation of an enclave.
+- `tee-key-preexec`: A pre-exec utility that generates a p256 secret key and passes it as an environment variable to the
+  enclave along with the attestation quote containing the hash of the public key.
+- `tdx_google`: A base VM running on Google Cloud TDX. It receives a container URL via the instance metadata,
+  measures the sha384 of the URL into RTMR3 and launches the container.
+- `tdx-extend`: A utility to extend an RTMR register with a hash value.
+- `rtmr-calc`: A utility to calculate RTMR1 and RTMR2 from a GPT disk, the linux kernel, the linux initrd
+  and a UKI (unified kernel image).
+- `sha384-extend`: A utility to calculate RTMR registers after extending them with a digest.
+
+### Vault
+
+Part of this project is a key-value store that runs in a Trusted Execution Environment (TEE) and uses Remote Attestation
+for Authentication. The key-value store is implemented using Hashicorp Vault running in an Intel SGX enclave via the
+Gramine runtime.
+
+- `teepot-vault`: A library crate for the TEE key-value store components:
+  - `tee-vault-unseal`: An enclave that uses the Vault API to unseal a vault as a proxy.
+  - `vault-unseal`: A client utility that talks to `tee-vault-unseal` to unseal a vault.
+  - `tee-vault-admin`: An enclave that uses the Vault API to administer a vault as a proxy.
+  - `vault-admin`: A client utility that talks to `tee-vault-admin` to administer a vault.
+  - `teepot-read`: A pre-exec utility that reads from the key-value store and passes the key-value pairs as
+    environment variables to the enclave.
+  - `teepot-write`: A pre-exec utility that reads key-values from the environment variables and writes them to the
+    key-value store.
 ## Development
 
@ -73,7 +83,7 @@ $ nix run .#fmt
 ### Build as the CI would
 
 ```shell
-$ nix run github:nixos/nixpkgs/nixos-23.11#nixci
+$ nix run github:nixos/nixpkgs/nixos-24.11#nixci -- build
 ```
 
 ### Build and test individual container
 
@ -96,3 +106,18 @@ Attributes:
     isv_svn: 0
     debug_enclave: False
 ```
+
+### TDX VM testing
+
+```shell
+nixos-rebuild -L --flake .#tdxtest build-vm && ./result/bin/run-tdxtest-vm
+```
+
+## Release
+
+```shell
+$ cargo release 0.1.0 --manifest-path crates/teepot-tdx-attest-sys/Cargo.toml --sign
+$ cargo release 0.1.2 --manifest-path crates/teepot-tdx-attest-rs/Cargo.toml --sign
+$ cargo release 0.6.0 --manifest-path crates/teepot-tee-quote-verification-rs/Cargo.toml --sign
+$ cargo release 0.6.0 --sign
+```

assets/config.json Normal file

@ -0,0 +1,4 @@
{
"foo": "bar",
"bar": "baz"
}

assets/gcloud-deploy.sh Executable file

@ -0,0 +1,50 @@
#!/usr/bin/env bash
#
# SPDX-License-Identifier: Apache-2.0
# Copyright (c) 2025 Matter Labs
#
set -ex
BASE_DIR=${0%/*}
NO=${NO:-1}
ZONE=${ZONE:-us-central1-c}
nix build -L .#tdx_google
gsutil cp result/tdx_base_1.vmdk gs://tdx_vms/
gcloud migration vms image-imports create \
--location=us-central1 \
--target-project=tdx-pilot \
--project=tdx-pilot \
--skip-os-adaptation \
--source-file=gs://tdx_vms/tdx_base_1.vmdk \
tdx-img-pre-"${NO}"
gcloud compute instances stop tdx-pilot --zone "${ZONE}" --project tdx-pilot || :
gcloud compute instances delete tdx-pilot --zone "${ZONE}" --project tdx-pilot || :
while gcloud migration vms image-imports list --location=us-central1 --project=tdx-pilot | grep -F RUNNING; do
sleep 1
done
gcloud compute images create \
--project tdx-pilot \
--guest-os-features=UEFI_COMPATIBLE,TDX_CAPABLE,GVNIC,VIRTIO_SCSI_MULTIQUEUE \
--storage-location=us-central1 \
--source-image=tdx-img-pre-"${NO}" \
tdx-img-f-"${NO}"
gcloud compute instances create tdx-pilot \
--machine-type c3-standard-4 --zone "${ZONE}" \
--confidential-compute-type=TDX \
--maintenance-policy=TERMINATE \
--image-project=tdx-pilot \
--project tdx-pilot \
--metadata=container_hub="docker.io",container_image="ghcr.io/matter-labs/test-tdx:117p5y281limw0w7b03v802ij00c5gzw" \
--metadata-from-file=container_config="${BASE_DIR}/config.json" \
--image tdx-img-f-"${NO}"

bin/rtmr-calc/Cargo.toml Normal file

@ -0,0 +1,19 @@
[package]
name = "rtmr-calc"
publish = false
version.workspace = true
edition.workspace = true
authors.workspace = true
license.workspace = true
repository.workspace = true
homepage.workspace = true
[dependencies]
anyhow.workspace = true
clap.workspace = true
gpt.workspace = true
hex.workspace = true
pe-sign.workspace = true
sha2.workspace = true
teepot.workspace = true
tracing.workspace = true

bin/rtmr-calc/src/main.rs Normal file

@ -0,0 +1,241 @@
// SPDX-License-Identifier: Apache-2.0
// Copyright (c) 2024-2025 Matter Labs
use anyhow::{anyhow, Result};
use clap::Parser;
use pesign::PE;
use sha2::{Digest, Sha384};
use std::{
fmt::{Display, Formatter},
io::{Error, Read, Seek, SeekFrom},
path::PathBuf,
};
use teepot::{
log::{setup_logging, LogLevelParser},
tdx::UEFI_MARKER_DIGEST_BYTES,
};
use tracing::{debug, info, level_filters::LevelFilter};
/// Precalculate rtmr1 and rtmr2 values.
///
/// Currently tested with the Google confidential compute engines.
#[derive(Parser, Debug)]
#[command(author, version, about, long_about = None)]
struct Arguments {
/// disk image to measure the GPT table from
#[arg(long)]
image: PathBuf,
/// path to the used UKI EFI binary
#[arg(long)]
bootefi: PathBuf,
/// path to the used linux kernel EFI binary (contained in the UKI)
#[arg(long)]
kernel: PathBuf,
/// Log level for the log output.
/// Valid values are: `off`, `error`, `warn`, `info`, `debug`, `trace`
#[clap(long, default_value_t = LevelFilter::WARN, value_parser = LogLevelParser)]
pub log_level: LevelFilter,
}
struct Rtmr {
state: Vec<u8>,
}
impl Rtmr {
pub fn extend(&mut self, hash: &[u8]) -> &[u8] {
self.state.extend(hash);
let bytes = Sha384::digest(&self.state);
self.state.resize(48, 0);
self.state.copy_from_slice(&bytes);
&self.state
}
}
impl Default for Rtmr {
fn default() -> Self {
Self {
state: [0u8; 48].to_vec(),
}
}
}
impl Display for Rtmr {
fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
write!(f, "{}", hex::encode(&self.state))
}
}
const CHUNK_SIZE: u64 = 1024 * 128;
fn main() -> Result<()> {
let args = Arguments::parse();
tracing::subscriber::set_global_default(setup_logging(
env!("CARGO_CRATE_NAME"),
&args.log_level,
)?)?;
let mut rtmr1 = Rtmr::default();
let mut rtmr2 = Rtmr::default();
/*
- pcr_index: 1
event: efiaction
digests:
- method: sha384
digest: 77a0dab2312b4e1e57a84d865a21e5b2ee8d677a21012ada819d0a98988078d3d740f6346bfe0abaa938ca20439a8d71
digest_verification_status: verified
data: Q2FsbGluZyBFRkkgQXBwbGljYXRpb24gZnJvbSBCb290IE9wdGlvbg==
parsed_data:
Ok:
text: Calling EFI Application from Boot Option
*/
rtmr1.extend(&hex::decode("77a0dab2312b4e1e57a84d865a21e5b2ee8d677a21012ada819d0a98988078d3d740f6346bfe0abaa938ca20439a8d71")?);
/*
- pcr_index: 1
event: separator
digests:
- method: sha384
digest: 394341b7182cd227c5c6b07ef8000cdfd86136c4292b8e576573ad7ed9ae41019f5818b4b971c9effc60e1ad9f1289f0
digest_verification_status: verified
data: AAAAAA==
parsed_data:
Ok:
validseparator: UEFI
*/
rtmr1.extend(&UEFI_MARKER_DIGEST_BYTES);
// Open disk image.
let cfg = gpt::GptConfig::new().writable(false);
let disk = cfg.open(args.image)?;
// Print GPT layout.
info!("Disk (primary) header: {:#?}", disk.primary_header());
info!("Partition layout: {:#?}", disk.partitions());
let header = disk.primary_header()?;
let mut msr = Vec::<u8>::new();
let lb_size = disk.logical_block_size();
let mut device = disk.device_ref();
device.seek(SeekFrom::Start(lb_size.as_u64()))?;
let mut buf = [0u8; 92];
device.read_exact(&mut buf)?;
msr.extend_from_slice(&buf);
let pstart = header
.part_start
.checked_mul(lb_size.as_u64())
.ok_or_else(|| Error::other("partition overflow - start offset"))?;
let _ = device.seek(SeekFrom::Start(pstart))?;
assert_eq!(header.part_size, 128);
assert!(header.num_parts < u32::from(u8::MAX));
let empty_bytes = [0u8; 128];
msr.extend_from_slice(&disk.partitions().len().to_le_bytes());
for _ in 0..header.num_parts {
let mut bytes = empty_bytes;
device.read_exact(&mut bytes)?;
if bytes.eq(&empty_bytes) {
continue;
}
msr.extend_from_slice(&bytes);
}
let mut hasher = Sha384::new();
hasher.update(&msr);
let result = hasher.finalize();
info!("GPT hash: {:x}", result);
rtmr1.extend(&result);
let mut pe = PE::from_path(&args.bootefi)?;
let hash = pe.calc_authenticode(pesign::cert::Algorithm::Sha384)?;
info!("hash of {:?}: {hash}", args.bootefi);
rtmr1.extend(&hex::decode(&hash)?);
let section_table = pe.get_section_table()?;
for section in &section_table {
debug!(section_name = ?section.name()?);
}
for sect in [".linux", ".osrel", ".cmdline", ".initrd", ".uname", ".sbat"] {
let mut hasher = Sha384::new();
hasher.update(sect.as_bytes());
hasher.update([0u8]);
let out = hasher.finalize();
debug!(sect, "name: {out:x}");
rtmr2.extend(&out);
let s = section_table
.iter()
.find(|s| s.name().unwrap().eq(sect))
.ok_or(anyhow!("Failed to find section `{sect}`"))?;
let mut start = u64::from(s.pointer_to_raw_data);
let end = start + u64::from(s.virtual_size);
debug!(sect, start, end, len = (s.virtual_size));
let mut hasher = Sha384::new();
loop {
if start >= end {
break;
}
let mut buf = vec![0; CHUNK_SIZE.min(end - start) as _];
pe.read_exact_at(start, buf.as_mut_slice())?;
hasher.update(buf.as_slice());
start += CHUNK_SIZE;
}
let digest = hasher.finalize();
debug!(sect, "binary: {digest:x}");
rtmr2.extend(&digest);
}
let hash = PE::from_path(&args.kernel)?.calc_authenticode(pesign::cert::Algorithm::Sha384)?;
info!("hash of {:?}: {hash}", args.kernel);
rtmr1.extend(&hex::decode(&hash)?);
/*
- pcr_index: 1
event: efiaction
digests:
- method: sha384
digest: 214b0bef1379756011344877743fdc2a5382bac6e70362d624ccf3f654407c1b4badf7d8f9295dd3dabdef65b27677e0
digest_verification_status: verified
data: RXhpdCBCb290IFNlcnZpY2VzIEludm9jYXRpb24=
parsed_data:
Ok:
text: Exit Boot Services Invocation
*/
rtmr1.extend(&hex::decode("214b0bef1379756011344877743fdc2a5382bac6e70362d624ccf3f654407c1b4badf7d8f9295dd3dabdef65b27677e0")?);
/*
- pcr_index: 1
event: efiaction
digests:
- method: sha384
digest: 0a2e01c85deae718a530ad8c6d20a84009babe6c8989269e950d8cf440c6e997695e64d455c4174a652cd080f6230b74
digest_verification_status: verified
data: RXhpdCBCb290IFNlcnZpY2VzIFJldHVybmVkIHdpdGggU3VjY2Vzcw==
parsed_data:
Ok:
text: Exit Boot Services Returned with Success
*/
rtmr1.extend(&hex::decode("0a2e01c85deae718a530ad8c6d20a84009babe6c8989269e950d8cf440c6e997695e64d455c4174a652cd080f6230b74")?);
println!("{{");
println!("\t\"rtmr1\": \"{rtmr1}\",");
println!("\t\"rtmr2\": \"{rtmr2}\"");
println!("}}");
Ok(())
}
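The `Rtmr` type above implements the standard measurement-register fold: a register starts as 48 zero bytes, and each 48-byte event digest `d` is absorbed as `state = SHA384(state || d)`. A minimal Python sketch of that primitive, replayed over the first two RTMR1 events quoted in the comments (illustrative only, not part of this repository; the event digests are copied verbatim from the event log above):

```python
import hashlib

RTMR_SIZE = 48  # SHA-384 digest length in bytes

def rtmr_extend(state: bytes, digest: bytes) -> bytes:
    """One extend step: new_state = SHA384(state || digest)."""
    assert len(state) == RTMR_SIZE and len(digest) == RTMR_SIZE
    return hashlib.sha384(state + digest).digest()

# Registers start as 48 zero bytes.
state = bytes(RTMR_SIZE)

# Digests taken verbatim from the event log quoted in the source above.
efi_action = bytes.fromhex(
    "77a0dab2312b4e1e57a84d865a21e5b2ee8d677a21012ada"
    "819d0a98988078d3d740f6346bfe0abaa938ca20439a8d71"
)
uefi_separator = bytes.fromhex(
    "394341b7182cd227c5c6b07ef8000cdfd86136c4292b8e57"
    "6573ad7ed9ae41019f5818b4b971c9effc60e1ad9f1289f0"
)

state = rtmr_extend(state, efi_action)
state = rtmr_extend(state, uefi_separator)
print(state.hex())  # intermediate RTMR1 value after the first two events
```

The rest of `main` continues this chain with the GPT-table hash and the authenticode hashes of the boot binaries, so the final RTMR1/RTMR2 values depend on every measured component in order.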


@ -0,0 +1,16 @@
[package]
name = "sha384-extend"
publish = false
version.workspace = true
edition.workspace = true
authors.workspace = true
license.workspace = true
repository.workspace = true
homepage.workspace = true
[dependencies]
anyhow.workspace = true
clap.workspace = true
hex.workspace = true
sha2.workspace = true
teepot.workspace = true


@ -0,0 +1,164 @@
// SPDX-License-Identifier: Apache-2.0
// Copyright (c) 2024-2025 Matter Labs
//! A tool for extending SHA384 digests, commonly used in TPM and TDX operations
//!
//! # Overview
//! This utility implements the extend operation used in Trusted Platform Module (TPM)
//! Platform Configuration Registers (PCRs) and Intel Trust Domain Extensions (TDX)
//! Runtime Measurement Registers (RTMRs). The extend operation combines two SHA384
//! digests by concatenating and then hashing them.
//!
//! # Usage
//! ```shell
//! sha384-extend <extend-value> [--base <initial-value>]
//! ```
//! Where:
//! - `extend-value`: SHA384 digest in hex format to extend with
//! - `initial-value`: Optional initial SHA384 digest in hex format (defaults to "00")
//!
//! # Example
//! ```shell
//! sha384-extend --base 01 26bb0c
//! ```
#![deny(missing_docs)]
#![deny(clippy::all)]
use anyhow::{Context, Result};
use clap::Parser;
use sha2::Digest;
use teepot::util::pad;
/// Calculate e.g. a TDX RTMR or TPM PCR SHA384 digest by extending it with another
#[derive(Parser, Debug)]
#[command(author, version, about, long_about = None)]
struct Arguments {
/// The SHA384 digest (in hex format) to extend the base value with.
/// Must be a valid hex string that can be padded to 48 bytes (384 bits).
extend: String,
/// The initial SHA384 digest (in hex format) to extend from.
/// Must be a valid hex string that can be padded to 48 bytes (384 bits).
#[arg(long, default_value = "00", required = false)]
base: String,
}
/// Extends a base SHA384 digest with another digest
///
/// # Arguments
/// * `base` - Base hex string to extend from
/// * `extend` - Hex string to extend with
///
/// # Returns
/// * `Result<String>` - The resulting SHA384 digest as a hex string
///
/// # Examples
/// ```
/// let result = extend_sha384("00", "aa").unwrap();
/// ```
pub fn extend_sha384(base: &str, extend: &str) -> Result<String> {
let mut hasher = sha2::Sha384::new();
hasher.update(pad::<48>(&hex::decode(base).context(format!(
"Failed to decode base digest '{base}' - expected hex string",
))?)?);
hasher.update(pad::<48>(&hex::decode(extend).context(format!(
"Failed to decode extend digest '{extend}' - expected hex string",
))?)?);
Ok(hex::encode(hasher.finalize()))
}
fn main() -> Result<()> {
let args = Arguments::parse();
let hex = extend_sha384(&args.base, &args.extend)?;
println!("{hex}");
Ok(())
}
#[cfg(test)]
mod tests {
use super::*;
const TEST_BASE: &str = "00";
const TEST_EXTEND: &str = "d3a665eb2bf8a6c4e6cee0ccfa663ee4098fc4903725b1823d8d0316126bb0cb";
const EXPECTED_RESULT: &str = "971fb52f90ec98a234301ca9b8fc30b613c33e3dd9c0cc42dcb8003d4a95d8fb218b75baf028b70a3cabcb947e1ca453";
const EXPECTED_RESULT_00: &str = "f57bb7ed82c6ae4a29e6c9879338c592c7d42a39135583e8ccbe3940f2344b0eb6eb8503db0ffd6a39ddd00cd07d8317";
#[test]
fn test_extend_sha384_with_test_vectors() {
let result = extend_sha384(TEST_BASE, TEST_EXTEND).unwrap();
assert_eq!(
result, EXPECTED_RESULT,
"SHA384 extend result didn't match expected value"
);
// Test with empty base
let result = extend_sha384("", TEST_EXTEND).unwrap();
assert_eq!(
result, EXPECTED_RESULT,
"SHA384 extend result didn't match expected value"
);
        // Test with empty base and empty extend
let result = extend_sha384("", "").unwrap();
assert_eq!(
result, EXPECTED_RESULT_00,
"SHA384 extend result didn't match expected value"
);
}
#[test]
fn test_extend_sha384_with_invalid_base() {
// Test with invalid hex in base
let result = extend_sha384("not_hex", TEST_EXTEND);
assert!(result.is_err(), "Should fail with invalid base hex");
// Test with odd length hex string
let result = extend_sha384("0", TEST_EXTEND);
assert!(result.is_err(), "Should fail with odd-length hex string");
}
#[test]
fn test_extend_sha384_with_invalid_extend() {
// Test with invalid hex in extend
let result = extend_sha384(TEST_BASE, "not_hex");
assert!(result.is_err(), "Should fail with invalid extend hex");
// Test with odd length hex string
let result = extend_sha384(TEST_BASE, "0");
assert!(result.is_err(), "Should fail with odd-length hex string");
}
#[test]
fn test_extend_sha384_with_oversized_input() {
// Create a hex string that's too long (more than 48 bytes when decoded)
let oversized = "00".repeat(49); // 49 bytes when decoded
let result = extend_sha384(TEST_BASE, &oversized);
assert!(result.is_err(), "Should fail with oversized extend value");
let result = extend_sha384(&oversized, TEST_EXTEND);
assert!(result.is_err(), "Should fail with oversized base value");
}
#[test]
fn test_extend_sha384_idempotent() {
// Test that extending with the same values produces the same result
let result1 = extend_sha384(TEST_BASE, TEST_EXTEND).unwrap();
let result2 = extend_sha384(TEST_BASE, TEST_EXTEND).unwrap();
assert_eq!(result1, result2, "Same inputs should produce same output");
}
#[test]
fn test_extend_sha384_case_sensitivity() {
// Test that upper and lower case hex strings produce the same result
let upper_extend = TEST_EXTEND.to_uppercase();
let result1 = extend_sha384(TEST_BASE, TEST_EXTEND).unwrap();
let result2 = extend_sha384(TEST_BASE, &upper_extend).unwrap();
assert_eq!(result1, result2, "Case should not affect the result");
}
}
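As a cross-check of what `extend_sha384` computes, the same operation can be sketched in a few lines of Python. This is hedged: it assumes `teepot::util::pad` zero-pads the decoded bytes up to 48 bytes (on the right) and rejects longer input, which is consistent with the empty-base test cases above but is not verified against the crate itself:

```python
import hashlib

def pad48(data: bytes) -> bytes:
    """Zero-pad to 48 bytes; reject oversized input (assumed pad semantics)."""
    if len(data) > 48:
        raise ValueError("input longer than 48 bytes")
    return data + bytes(48 - len(data))

def extend_sha384(base_hex: str, extend_hex: str) -> str:
    """SHA384(pad(base) || pad(extend)), returned as lowercase hex."""
    h = hashlib.sha384()
    h.update(pad48(bytes.fromhex(base_hex)))
    h.update(pad48(bytes.fromhex(extend_hex)))
    return h.hexdigest()
```

Under this padding assumption an empty base and a base of `"00"` both pad to 48 zero bytes and therefore produce the same digest, mirroring `test_extend_sha384_with_test_vectors` above.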

bin/tdx-extend/Cargo.toml Normal file

@ -0,0 +1,16 @@
[package]
name = "tdx-extend"
publish = false
version.workspace = true
edition.workspace = true
authors.workspace = true
license.workspace = true
repository.workspace = true
homepage.workspace = true
[dependencies]
anyhow.workspace = true
clap.workspace = true
hex.workspace = true
teepot.workspace = true
tracing.workspace = true


@ -0,0 +1,71 @@
// SPDX-License-Identifier: Apache-2.0
// Copyright (c) 2024-2025 Matter Labs
//! Extend the TDX measurement
#![deny(missing_docs)]
#![deny(clippy::all)]
use tracing::error;
#[cfg(all(target_os = "linux", target_arch = "x86_64"))]
mod os {
use anyhow::{Context as _, Result};
use clap::Parser;
use teepot::{
log::{setup_logging, LogLevelParser},
tdx::rtmr::TdxRtmrEvent,
util::pad,
};
use tracing::level_filters::LevelFilter;
/// Extend a TDX rtmr with a hash digest for measured boot.
#[derive(Parser, Debug)]
#[command(author, version, about, long_about = None)]
struct Arguments {
/// digest in hex to extend the rtmr with
#[arg(long)]
digest: String,
/// the number or the rtmr
#[arg(long, default_value = "2")]
rtmr: u64,
/// Log level for the log output.
/// Valid values are: `off`, `error`, `warn`, `info`, `debug`, `trace`
#[clap(long, default_value_t = LevelFilter::WARN, value_parser = LogLevelParser)]
pub log_level: LevelFilter,
}
pub fn main_with_error() -> Result<()> {
let args = Arguments::parse();
tracing::subscriber::set_global_default(setup_logging(
env!("CARGO_CRATE_NAME"),
&args.log_level,
)?)?;
// Parse the digest string as a hex array
let digest_bytes = hex::decode(&args.digest).context("Invalid digest format")?;
let extend_data: [u8; 48] = pad(&digest_bytes).context("Invalid digest length")?;
// Extend the TDX measurement with the extend data
TdxRtmrEvent::default()
.with_extend_data(extend_data)
.with_rtmr_index(args.rtmr)
.extend()?;
Ok(())
}
}
#[cfg(not(all(target_os = "linux", target_arch = "x86_64")))]
mod os {
pub fn main_with_error() -> anyhow::Result<()> {
anyhow::bail!("OS or architecture not supported");
}
}
fn main() -> anyhow::Result<()> {
let ret = os::main_with_error();
if let Err(e) = &ret {
error!(error = %e, "Execution failed");
}
ret
}

bin/tdx-test/Cargo.toml Normal file

@ -0,0 +1,17 @@
[package]
name = "tdx-test"
version.workspace = true
edition.workspace = true
authors.workspace = true
license.workspace = true
repository.workspace = true
homepage.workspace = true
publish = false
[dependencies]
anyhow.workspace = true
serde.workspace = true
teepot.workspace = true
thiserror.workspace = true
tokio.workspace = true
tracing.workspace = true

bin/tdx-test/src/main.rs Normal file

@ -0,0 +1,60 @@
// SPDX-License-Identifier: Apache-2.0
// Copyright (c) 2025 Matter Labs
use anyhow::Result;
use serde::{Deserialize, Serialize};
use teepot::config::{load_config_with_telemetry, TelemetryConfig};
use thiserror::Error;
use tracing::{debug, error, info, trace, warn};
// Configuration struct
#[derive(Debug, Serialize, Deserialize)]
struct AppConfig {
server: ServerConfig,
telemetry: TelemetryConfig,
}
impl Default for AppConfig {
fn default() -> Self {
Self {
server: ServerConfig::default(),
telemetry: TelemetryConfig::new(
env!("CARGO_CRATE_NAME").into(),
env!("CARGO_PKG_VERSION").into(),
),
}
}
}
#[derive(Debug, Serialize, Deserialize)]
struct ServerConfig {
port: u16,
}
impl Default for ServerConfig {
fn default() -> Self {
Self { port: 8080 }
}
}
// Error handling
#[derive(Error, Debug)]
enum AppError {
#[error("Internal server error")]
Internal(#[from] anyhow::Error),
}
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
let config =
load_config_with_telemetry("APP".into(), |config: &AppConfig| &config.telemetry).await?;
loop {
error!(?config, "error test!");
warn!(?config, "warn test!");
info!(?config, "info test!");
debug!(?config, "debug test!");
trace!(?config, "trace test!");
tokio::time::sleep(std::time::Duration::from_secs(60)).await;
}
}


@ -12,7 +12,6 @@ repository.workspace = true
 [dependencies]
 anyhow.workspace = true
 clap.workspace = true
-rand.workspace = true
 secp256k1.workspace = true
 teepot.workspace = true
 tracing.workspace = true


@ -1,23 +1,15 @@
 // SPDX-License-Identifier: Apache-2.0
-// Copyright (c) 2024 Matter Labs
+// Copyright (c) 2024-2025 Matter Labs
 
 //! Pre-exec for binary running in a TEE needing attestation of a secret signing key
 
 #![deny(missing_docs)]
 #![deny(clippy::all)]
 
-use anyhow::{Context, Result};
+use anyhow::Result;
 use clap::Parser;
-use secp256k1::{rand, Keypair, PublicKey, Secp256k1, SecretKey};
 use std::ffi::OsString;
-use std::os::unix::process::CommandExt;
-use std::process::Command;
-use teepot::quote::get_quote;
 use tracing::error;
-use tracing_log::LogTracer;
-use tracing_subscriber::{fmt, prelude::*, EnvFilter, Registry};
-
-const TEE_QUOTE_FILE: &str = "/tmp/tee_quote";
 
 #[derive(Parser, Debug)]
 #[command(author, version, about, long_about = None)]
@ -30,7 +22,20 @@ struct Args {
     cmd_args: Vec<OsString>,
 }
 
+#[cfg(all(target_os = "linux", target_arch = "x86_64"))]
 fn main_with_error() -> Result<()> {
+    use anyhow::Context;
+    use secp256k1::{rand, Secp256k1};
+    use std::{os::unix::process::CommandExt, process::Command};
+    use teepot::{
+        ethereum::public_key_to_ethereum_address, prover::reportdata::ReportDataV1,
+        quote::get_quote, tdx::rtmr::TdxRtmrEvent,
+    };
+    use tracing_log::LogTracer;
+    use tracing_subscriber::{fmt, prelude::*, EnvFilter, Registry};
+
+    const TEE_QUOTE_FILE: &str = "/tmp/tee_quote";
+
     LogTracer::init().context("Failed to set logger")?;
 
     let subscriber = Registry::default()
@ -39,23 +44,34 @@ fn main_with_error() -> Result<()> {
     tracing::subscriber::set_global_default(subscriber).context("Failed to set logger")?;
 
     let args = Args::parse();
-    let mut rng = rand::thread_rng();
+    let mut rng = rand::rng();
     let secp = Secp256k1::new();
-    let keypair = Keypair::new(&secp, &mut rng);
-    let signing_key = SecretKey::from_keypair(&keypair);
-    let verifying_key = PublicKey::from_keypair(&keypair);
-    let verifying_key_bytes = verifying_key.serialize();
-    let tee_type = match get_quote(verifying_key_bytes.as_ref()) {
-        Ok(quote) => {
+    let (signing_key, verifying_key) = secp.generate_keypair(&mut rng);
+    let ethereum_address = public_key_to_ethereum_address(&verifying_key);
+    let report_data = ReportDataV1 { ethereum_address };
+    let report_data_bytes: [u8; 64] = report_data.into();
+    let tee_type = match get_quote(&report_data_bytes) {
+        Ok((teepot::quote::TEEType::TDX, quote)) => {
+            // In the case of TDX, we want to advance RTMR 3 after getting the quote,
+            // so that any breach can't generate a new attestation with the expected RTMRs
+            TdxRtmrEvent::default()
+                .with_rtmr_index(3)
+                .with_extend_data(teepot::tdx::UEFI_MARKER_DIGEST_BYTES)
+                .extend()?;
+
             // save quote to file
-            std::fs::write(TEE_QUOTE_FILE, quote)?;
-            "sgx"
+            std::fs::write(TEE_QUOTE_FILE, quote).context(TEE_QUOTE_FILE)?;
+            teepot::quote::TEEType::TDX.to_string()
+        }
+        Ok((tee_type, quote)) => {
+            // save quote to file
+            std::fs::write(TEE_QUOTE_FILE, quote).context(TEE_QUOTE_FILE)?;
+            tee_type.to_string()
         }
         Err(e) => {
             error!("Failed to get quote: {}", e);
-            std::fs::write(TEE_QUOTE_FILE, [])?;
-            "none"
+            std::fs::write(TEE_QUOTE_FILE, []).context(TEE_QUOTE_FILE)?;
+            "none".to_string()
         }
     };
 
@ -80,6 +96,11 @@ fn main_with_error() -> Result<()> {
     })
 }
 
+#[cfg(not(all(target_os = "linux", target_arch = "x86_64")))]
+fn main_with_error() -> Result<()> {
+    anyhow::bail!("OS or architecture not supported");
+}
+
 fn main() -> Result<()> {
     let ret = main_with_error();
     if let Err(e) = &ret {
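The change above stops putting raw public-key bytes into the quote and instead derives a `ReportDataV1` from the key's Ethereum address. As an illustration of the packing involved (hypothetical: this assumes the 20-byte address sits at the start of the 64-byte `report_data` field with zero padding; the actual `ReportDataV1` layout may differ, e.g. by carrying a version tag, and the address itself is keccak256(pubkey)[-20:], which needs a keccak implementation outside the Python stdlib, so a dummy address is used here):

```python
REPORT_DATA_LEN = 64  # SGX/TDX quotes carry a 64-byte user-controlled report_data field

def pack_report_data(ethereum_address: bytes) -> bytes:
    """Embed a 20-byte Ethereum address into 64-byte report data (assumed layout)."""
    if len(ethereum_address) != 20:
        raise ValueError("an Ethereum address is exactly 20 bytes")
    return ethereum_address + bytes(REPORT_DATA_LEN - len(ethereum_address))

# Dummy address standing in for keccak256(pubkey)[-20:].
report_data = pack_report_data(bytes(range(20)))
```

Binding the address rather than the full key keeps the report data well within the 64-byte limit while still letting a verifier match on-chain signatures against the attested enclave.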


@ -1,5 +1,5 @@
 // SPDX-License-Identifier: Apache-2.0
-// Copyright (c) 2024 Matter Labs
+// Copyright (c) 2024-2025 Matter Labs
 
 //! Pre-exec for binary running in a TEE needing attestation of a secret signing key
 
@ -17,7 +17,7 @@ use std::io::Write;
 use std::os::unix::process::CommandExt;
 use std::path::PathBuf;
 use std::process::Command;
-use teepot::server::pki::make_signed_cert;
+use teepot::pki::make_signed_cert;
 use tracing::error;
 use tracing_log::LogTracer;
 use tracing_subscriber::{fmt, prelude::*, EnvFilter, Registry};


@ -1,6 +1,6 @@
 # self-attestation-test
 
-Optionally build and load the containers (remove the `matterlabsrobot/` repo from the commands below then)
+Optionally build and load the containers (remove the `ghcr.io/matter-labs/` repo from the commands below then)
 
 ```bash
 $ nix build -L .#container-verify-attestation-sgx && docker load -i result
@ -12,9 +12,9 @@ $ nix build -L .#container-self-attestation-test-sgx-azure && docker load -i res
 ```bash
 docker run -i --init --rm --privileged --device /dev/sgx_enclave \
-    matterlabsrobot/teepot-self-attestation-test-sgx-azure:latest \
+    ghcr.io/matter-labs/teepot-self-attestation-test-sgx-azure:latest \
     | base64 -d --ignore-garbage \
-    | docker run -i --rm matterlabsrobot/verify-attestation-sgx:latest -
+    | docker run -i --rm ghcr.io/matter-labs/verify-attestation-sgx:latest -
 
 aesm_service: warning: Turn to daemon. Use "--no-daemon" option to execute in foreground.
 Gramine is starting. Parsing TOML manifest file, this may take some time...
@ -31,9 +31,9 @@ reportdata: 00000000000000000000000000000000000000000000000000000000000000000000
 ```bash
 docker run -i --init --rm --privileged --device /dev/sgx_enclave \
-    matterlabsrobot/teepot-self-attestation-test-sgx-dcap:latest \
+    ghcr.io/matter-labs/teepot-self-attestation-test-sgx-dcap:latest \
     | base64 -d --ignore-garbage \
-    | docker run -i --rm matterlabsrobot/verify-attestation-sgx:latest -
+    | docker run -i --rm ghcr.io/matter-labs/verify-attestation-sgx:latest -
 
 aesm_service: warning: Turn to daemon. Use "--no-daemon" option to execute in foreground.
 Gramine is starting. Parsing TOML manifest file, this may take some time...
@ -48,9 +48,9 @@ On an outdated machine, this might look like this:
 ```bash
 docker run -i --init --rm --privileged --device /dev/sgx_enclave \
-    matterlabsrobot/teepot-self-attestation-test-sgx-dcap:latest \
+    ghcr.io/matter-labs/teepot-self-attestation-test-sgx-dcap:latest \
     | base64 -d --ignore-garbage \
-    | docker run -i --rm matterlabsrobot/verify-attestation-sgx:latest -
+    | docker run -i --rm ghcr.io/matter-labs/verify-attestation-sgx:latest -
 
 aesm_service: warning: Turn to daemon. Use "--no-daemon" option to execute in foreground.
 Gramine is starting. Parsing TOML manifest file, this may take some time...
@ -68,3 +68,14 @@ mrsigner: c5591a72b8b86e0d8814d6e8750e3efe66aea2d102b8ba2405365559b858697d
 mrenclave: 7ffe70789261a51769f50e129bfafb2aafe91a4e17c3f0d52839006777c652f6
 reportdata: 00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
 ```
+
+## podman
+
+```bash
+podman run -i --rm --group-add=keep-groups -v /var/run/aesmd:/var/run/aesmd -v /dev/sgx_enclave:/dev/sgx_enclave \
+    ghcr.io/matter-labs/teepot-self-attestation-test-sgx-dcap:latest \
+    | base64 -d --ignore-garbage \
+    | podman run -i --rm --net host \
+        -v /etc/sgx_default_qcnl.conf:/etc/sgx_default_qcnl.conf \
+        ghcr.io/matter-labs/verify-attestation-sgx-dcap:latest
+```


@ -1,5 +1,5 @@
 // SPDX-License-Identifier: Apache-2.0
-// Copyright (c) 2023-2024 Matter Labs
+// Copyright (c) 2023-2025 Matter Labs
 
 //! Simple TEE self-attestation test
 
@ -8,7 +8,7 @@
 use anyhow::{Context, Result};
 use base64::{engine::general_purpose, Engine as _};
-use teepot::server::attestation::get_quote_and_collateral;
+use teepot::quote::attestation::get_quote_and_collateral;
 use tracing_log::LogTracer;
 use tracing_subscriber::{fmt, prelude::*, EnvFilter, Registry};
 
@ -26,7 +26,7 @@ async fn main() -> Result<()> {
         .context("failed to get quote and collateral")?;
 
     let base64_string = general_purpose::STANDARD.encode(report.quote.as_ref());
-    print!("{}", base64_string);
+    print!("{base64_string}");
     Ok(())
 }


@ -10,7 +10,4 @@ repository.workspace = true
 [dependencies]
 anyhow.workspace = true
 clap.workspace = true
-hex.workspace = true
-secp256k1.workspace = true
 teepot.workspace = true
-zksync_basic_types.workspace = true


@ -1,17 +1,13 @@
// SPDX-License-Identifier: Apache-2.0 // SPDX-License-Identifier: Apache-2.0
// Copyright (c) 2023-2024 Matter Labs // Copyright (c) 2023-2025 Matter Labs
//! Tool for SGX attestation and batch signature verification //! Tool for SGX attestation and batch signature verification
use anyhow::{Context, Result}; use anyhow::{bail, Context, Result};
use clap::{Args, Parser, Subcommand}; use clap::Parser;
use secp256k1::{ecdsa::Signature, Message, PublicKey};
use std::{fs, io::Read, path::PathBuf, str::FromStr, time::UNIX_EPOCH}; use std::{fs, io::Read, path::PathBuf, str::FromStr, time::UNIX_EPOCH};
use teepot::{ use teepot::quote::{get_collateral, verify_quote_with_collateral, QuoteVerificationResult};
client::TcbLevel,
sgx::{tee_qv_get_collateral, verify_quote_with_collateral, QuoteVerificationResult},
};
use zksync_basic_types::H256;
#[derive(Parser, Debug)] #[derive(Parser, Debug)]
#[command(author = "Matter Labs", version, about = "SGX attestation and batch signature verifier", long_about = None)] #[command(author = "Matter Labs", version, about = "SGX attestation and batch signature verifier", long_about = None)]
@ -19,9 +15,6 @@ struct Arguments {
/// Attestation quote proving the signature originated from a TEE enclave. /// Attestation quote proving the signature originated from a TEE enclave.
#[clap(name = "attestation_file", value_parser)] #[clap(name = "attestation_file", value_parser)]
attestation: ArgSource, attestation: ArgSource,
/// An optional subcommand, for instance, for optional signature verification.
#[clap(subcommand)]
command: Option<SubCommands>,
} }
#[derive(Debug, Clone)] #[derive(Debug, Clone)]
@@ -41,22 +34,6 @@ impl FromStr for ArgSource {
     }
 }
-#[derive(Args, Debug)]
-struct SignatureArgs {
-    /// File containing a batch signature signed within a TEE enclave.
-    #[arg(long)]
-    signature_file: PathBuf,
-    /// Batch root hash for signature verification.
-    #[arg(long)]
-    root_hash: H256,
-}
-#[derive(Subcommand, Debug)]
-enum SubCommands {
-    /// Verify a batch signature signed within a TEE enclave.
-    SignVerify(SignatureArgs),
-}
 fn main() -> Result<()> {
     let args = Arguments::parse();
     let attestation_quote_bytes = match args.attestation {
@@ -71,40 +48,18 @@ fn main() -> Result<()> {
     };
     let quote_verification_result = verify_attestation_quote(&attestation_quote_bytes)?;
     print_quote_verification_summary(&quote_verification_result);
-    match &args.command {
-        Some(SubCommands::SignVerify(signature_args)) => {
-            verify_signature(&quote_verification_result, signature_args)?;
-        }
-        None => {}
-    }
-    Ok(())
-}
-fn verify_signature(
-    quote_verification_result: &QuoteVerificationResult,
-    signature_args: &SignatureArgs,
-) -> Result<()> {
-    let reportdata = &quote_verification_result.quote.report_body.reportdata;
-    let public_key = PublicKey::from_slice(reportdata)?;
-    println!("Public key from attestation quote: {}", public_key);
-    let signature_bytes = fs::read(&signature_args.signature_file)?;
-    let signature = Signature::from_compact(&signature_bytes)?;
-    let root_hash_msg = Message::from_digest_slice(&signature_args.root_hash.0)?;
-    if signature.verify(&root_hash_msg, &public_key).is_ok() {
-        println!("Signature verified successfully");
-    } else {
-        println!("Failed to verify signature");
-    }
     Ok(())
 }
 fn verify_attestation_quote(attestation_quote_bytes: &[u8]) -> Result<QuoteVerificationResult> {
+    if attestation_quote_bytes.is_empty() {
+        bail!("Empty quote provided!");
+    }
     println!(
         "Verifying quote ({} bytes)...",
         attestation_quote_bytes.len()
     );
-    let collateral =
-        tee_qv_get_collateral(attestation_quote_bytes).context("Failed to get collateral")?;
+    let collateral = get_collateral(attestation_quote_bytes)?;
     let unix_time: i64 = std::time::SystemTime::now()
         .duration_since(UNIX_EPOCH)?
         .as_secs() as _;
@@ -115,7 +70,7 @@ fn verify_attestation_quote(attestation_quote_bytes: &[u8]) -> Result<QuoteVerif
 fn print_quote_verification_summary(quote_verification_result: &QuoteVerificationResult) {
     let QuoteVerificationResult {
         collateral_expired,
-        result,
+        result: tcblevel,
         quote,
         advisories,
         ..
@ -123,12 +78,10 @@ fn print_quote_verification_summary(quote_verification_result: &QuoteVerificatio
if *collateral_expired { if *collateral_expired {
println!("Freshly fetched collateral expired"); println!("Freshly fetched collateral expired");
} }
let tcblevel = TcbLevel::from(*result);
for advisory in advisories { for advisory in advisories {
println!("\tInfo: Advisory ID: {advisory}"); println!("\tInfo: Advisory ID: {advisory}");
} }
println!("Quote verification result: {}", tcblevel); println!("Quote verification result: {tcblevel}");
println!("mrsigner: {}", hex::encode(quote.report_body.mrsigner));
println!("mrenclave: {}", hex::encode(quote.report_body.mrenclave)); println!("{:#}", &quote.report);
println!("reportdata: {}", hex::encode(quote.report_body.reportdata));
} }


@@ -8,18 +8,22 @@ repository.workspace = true
 version.workspace = true
 [dependencies]
-anyhow.workspace = true
+bytes.workspace = true
 clap.workspace = true
-ctrlc.workspace = true
+enumset.workspace = true
 hex.workspace = true
 jsonrpsee-types.workspace = true
 reqwest.workspace = true
 secp256k1.workspace = true
 serde.workspace = true
+serde_json.workspace = true
+serde_with = { workspace = true, features = ["hex"] }
+serde_yaml.workspace = true
 teepot.workspace = true
+thiserror.workspace = true
 tokio.workspace = true
+tokio-util.workspace = true
 tracing.workspace = true
+tracing-log.workspace = true
 tracing-subscriber.workspace = true
 url.workspace = true
 zksync_basic_types.workspace = true


@ -0,0 +1,76 @@
# Era Proof Attestation Verifier
This tool verifies the SGX/TDX attestations and signatures for zkSync Era L1 batches.
## Usage
Basic usage with attestation policy provided from a YAML file:
```bash
verify-era-proof-attestation --rpc https://mainnet.era.zksync.io \
--continuous 493220 \
--attestation-policy-file examples/attestation_policy.yaml \
--log-level info
```
## Attestation Policy Configuration
You can specify the attestation policy either through command-line arguments or by providing a YAML configuration file.
### Command-line Arguments
The following command-line arguments are available:
- `--batch`, `-n <BATCH>`: The batch number or range of batch numbers to verify the attestation and signature
  (e.g., "42" or "42-45"). Mutually exclusive with `--continuous`.
- `--continuous <FIRST_BATCH>`: Continuous mode: keep verifying new batches starting from the specified batch number
until interrupted. Mutually exclusive with `--batch`.
- `--rpc <URL>`: URL of the RPC server to query for the batch attestation and signature.
- `--chain <CHAIN_ID>`: Chain ID of the network to query (default: L2ChainId::default()).
- `--rate-limit <MILLISECONDS>`: Rate limit between requests in milliseconds (default: 0).
- `--log-level <LEVEL>`: Log level for the log output. Valid values are: `off`, `error`, `warn`, `info`, `debug`,
`trace` (default: `warn`).
- `--attestation-policy-file <PATH>`: Path to a YAML file containing attestation policy configuration. This overrides
any attestation policy settings provided via command line options.
Either `--batch` or `--continuous` mode must be specified.
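The `--batch` argument accepts either a single number or an inclusive range. A minimal sketch of that parsing, using plain `u32` in place of the tool's `L1BatchNumber` and a `String` error in place of its error type:

```rust
// Parse "42" into (42, 42) and "42-45" into (42, 45); reject reversed
// ranges. Simplified stand-in for the verifier's `parse_batch_range`.
fn parse_batch_range(s: &str) -> Result<(u32, u32), String> {
    let parse = |s: &str| s.parse::<u32>().map_err(|e| e.to_string());
    match s.split_once('-') {
        Some((start, end)) => {
            let (start, end) = (parse(start)?, parse(end)?);
            if start > end {
                Err(format!("start ({start}) must not exceed end ({end})"))
            } else {
                Ok((start, end))
            }
        }
        None => {
            let n = parse(s)?;
            Ok((n, n))
        }
    }
}

fn main() {
    println!("{:?}", parse_batch_range("42").unwrap());
    println!("{:?}", parse_batch_range("42-45").unwrap());
}
```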
### YAML Configuration File
The attestation policy is loaded from a YAML file using the `--attestation-policy-file` option.
Example YAML configuration file:
```yaml
sgx:
mrenclaves:
- a2caa7055e333f69c3e46ca7ba65b135a86c90adfde2afb356e05075b7818b3c
- 36eeb64cc816f80a1cf5818b26710f360714b987d3799e757cbefba7697b9589
- 4a8b79e5123f4dbf23453d583cb8e5dcf4d19a6191a0be6dd85b7b3052c32faf
- 1498845b3f23667356cc49c38cae7b4ac234621a5b85fdd5c52b5f5d12703ec9
- 1b2374631bb2572a0e05b3be8b5cdd23c42e9d7551e1ef200351cae67c515a65
- 6fb19e47d72a381a9f3235c450f8c40f01428ce19a941f689389be3eac24f42a
- b610fd1d749775cc3de88beb84afe8bb79f55a19100db12d76f6a62ac576e35d
- a0b1b069b01bdcf3c1517ef8d4543794a27ed4103e464be7c4afdc6136b42d66
- 71e2a11a74b705082a7286b2008f812f340c0e4de19f8b151baa347eda32d057
- d5a0bf8932d9a3d7af6d9405d4c6de7dcb7b720bb5510666b4396fc58ee58bb2
allowed_tcb_levels:
- Ok
- SwHardeningNeeded
allowed_advisory_ids:
- INTEL-SA-00615
tdx:
mrs:
- - 2a90c8fa38672cafd791d994beb6836b99383b2563736858632284f0f760a6446efd1e7ec457cf08b629ea630f7b4525
- 3300980705adf09d28b707b79699d9874892164280832be2c386a715b6e204e0897fb564a064f810659207ba862b304f
- c08ab64725566bcc8a6fb1c79e2e64744fcff1594b8f1f02d716fb66592ecd5de94933b2bc54ffbbc43a52aab7eb1146
- 092a4866a9e6a1672d7439a5d106fbc6eb57b738d5bfea5276d41afa2551824365fdd66700c1ce9c0b20542b9f9d5945
- 971fb52f90ec98a234301ca9b8fc30b613c33e3dd9c0cc42dcb8003d4a95d8fb218b75baf028b70a3cabcb947e1ca453
- - 2a90c8fa38672cafd791d994beb6836b99383b2563736858632284f0f760a6446efd1e7ec457cf08b629ea630f7b4525
- 3300980705adf09d28b707b79699d9874892164280832be2c386a715b6e204e0897fb564a064f810659207ba862b304f
- c08ab64725566bcc8a6fb1c79e2e64744fcff1594b8f1f02d716fb66592ecd5de94933b2bc54ffbbc43a52aab7eb1146
- 092a4866a9e6a1672d7439a5d106fbc6eb57b738d5bfea5276d41afa2551824365fdd66700c1ce9c0b20542b9f9d5945
- f57bb7ed82c6ae4a29e6c9879338c592c7d42a39135583e8ccbe3940f2344b0eb6eb8503db0ffd6a39ddd00cd07d8317
allowed_tcb_levels:
- Ok
```
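Each measurement in the policy above is a hex string that must decode to a fixed hash size: 32 bytes for an SGX mrenclave, 48 bytes for each TDX measurement register. A dependency-free sketch of that decoding and length check (the verifier itself uses the `hex` crate and concatenates the five TDX registers into a single buffer; this variant requires an exact length):

```rust
// Decode a hex-encoded measurement and check it matches the expected
// hash size. Hand-rolled hex decoding to avoid the `hex` dependency.
fn decode_measurement(s: &str, expected_len: usize) -> Result<Vec<u8>, String> {
    if s.len() != expected_len * 2 {
        return Err(format!(
            "expected {} hex chars, got {}",
            expected_len * 2,
            s.len()
        ));
    }
    (0..s.len())
        .step_by(2)
        .map(|i| u8::from_str_radix(&s[i..i + 2], 16).map_err(|e| e.to_string()))
        .collect()
}

fn main() {
    let mrenclave = "a2caa7055e333f69c3e46ca7ba65b135a86c90adfde2afb356e05075b7818b3c";
    let bytes = decode_measurement(mrenclave, 32).unwrap();
    println!("decoded {} bytes", bytes.len());
}
```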


@ -0,0 +1,31 @@
sgx:
mrenclaves:
- a2caa7055e333f69c3e46ca7ba65b135a86c90adfde2afb356e05075b7818b3c
- 36eeb64cc816f80a1cf5818b26710f360714b987d3799e757cbefba7697b9589
- 4a8b79e5123f4dbf23453d583cb8e5dcf4d19a6191a0be6dd85b7b3052c32faf
- 1498845b3f23667356cc49c38cae7b4ac234621a5b85fdd5c52b5f5d12703ec9
- 1b2374631bb2572a0e05b3be8b5cdd23c42e9d7551e1ef200351cae67c515a65
- 6fb19e47d72a381a9f3235c450f8c40f01428ce19a941f689389be3eac24f42a
- b610fd1d749775cc3de88beb84afe8bb79f55a19100db12d76f6a62ac576e35d
- a0b1b069b01bdcf3c1517ef8d4543794a27ed4103e464be7c4afdc6136b42d66
- 71e2a11a74b705082a7286b2008f812f340c0e4de19f8b151baa347eda32d057
- d5a0bf8932d9a3d7af6d9405d4c6de7dcb7b720bb5510666b4396fc58ee58bb2
allowed_tcb_levels:
- Ok
- SwHardeningNeeded
allowed_advisory_ids:
- INTEL-SA-00615
tdx:
mrs:
- - 2a90c8fa38672cafd791d994beb6836b99383b2563736858632284f0f760a6446efd1e7ec457cf08b629ea630f7b4525
- 3300980705adf09d28b707b79699d9874892164280832be2c386a715b6e204e0897fb564a064f810659207ba862b304f
- c08ab64725566bcc8a6fb1c79e2e64744fcff1594b8f1f02d716fb66592ecd5de94933b2bc54ffbbc43a52aab7eb1146
- 092a4866a9e6a1672d7439a5d106fbc6eb57b738d5bfea5276d41afa2551824365fdd66700c1ce9c0b20542b9f9d5945
- 971fb52f90ec98a234301ca9b8fc30b613c33e3dd9c0cc42dcb8003d4a95d8fb218b75baf028b70a3cabcb947e1ca453
- - 2a90c8fa38672cafd791d994beb6836b99383b2563736858632284f0f760a6446efd1e7ec457cf08b629ea630f7b4525
- 3300980705adf09d28b707b79699d9874892164280832be2c386a715b6e204e0897fb564a064f810659207ba862b304f
- c08ab64725566bcc8a6fb1c79e2e64744fcff1594b8f1f02d716fb66592ecd5de94933b2bc54ffbbc43a52aab7eb1146
- 092a4866a9e6a1672d7439a5d106fbc6eb57b738d5bfea5276d41afa2551824365fdd66700c1ce9c0b20542b9f9d5945
- f57bb7ed82c6ae4a29e6c9879338c592c7d42a39135583e8ccbe3940f2344b0eb6eb8503db0ffd6a39ddd00cd07d8317
allowed_tcb_levels:
- Ok


@ -1,123 +0,0 @@
// SPDX-License-Identifier: Apache-2.0
// Copyright (c) 2023-2024 Matter Labs
use anyhow::{anyhow, Result};
use clap::{ArgGroup, Args, Parser};
use std::time::Duration;
use teepot::sgx::{parse_tcb_levels, EnumSet, TcbLevel};
use tracing_subscriber::filter::LevelFilter;
use url::Url;
use zksync_basic_types::L1BatchNumber;
use zksync_types::L2ChainId;
#[derive(Parser, Debug, Clone)]
#[command(author = "Matter Labs", version, about = "SGX attestation and batch signature verifier", long_about = None)]
#[clap(group(
ArgGroup::new("mode")
.required(true)
.args(&["batch_range", "continuous"]),
))]
pub struct Arguments {
#[clap(long, default_value_t = LevelFilter::WARN, value_parser = LogLevelParser)]
pub log_level: LevelFilter,
/// The batch number or range of batch numbers to verify the attestation and signature (e.g.,
/// "42" or "42-45"). This option is mutually exclusive with the `--continuous` mode.
#[clap(short = 'n', long = "batch", value_parser = parse_batch_range)]
pub batch_range: Option<(L1BatchNumber, L1BatchNumber)>,
/// Continuous mode: keep verifying new batches until interrupted. This option is mutually
/// exclusive with the `--batch` option.
#[clap(long, value_name = "FIRST_BATCH")]
pub continuous: Option<L1BatchNumber>,
/// URL of the RPC server to query for the batch attestation and signature.
#[clap(long = "rpc")]
pub rpc_url: Url,
/// Chain ID of the network to query.
#[clap(long = "chain", default_value_t = L2ChainId::default().as_u64())]
pub chain_id: u64,
/// Rate limit between requests in milliseconds.
#[clap(long, default_value = "0", value_parser = parse_duration)]
pub rate_limit: Duration,
/// Criteria for valid attestation policy. Invalid proofs will be rejected.
#[clap(flatten)]
pub attestation_policy: AttestationPolicyArgs,
}
/// Attestation policy implemented as a set of criteria that must be met by SGX attestation.
#[derive(Args, Debug, Clone)]
pub struct AttestationPolicyArgs {
/// Comma-separated list of allowed hex-encoded SGX mrsigners. Batch attestation must consist of
/// one of these mrsigners. If the list is empty, the mrsigner check is skipped.
#[arg(long = "mrsigners")]
pub sgx_mrsigners: Option<String>,
/// Comma-separated list of allowed hex-encoded SGX mrenclaves. Batch attestation must consist
/// of one of these mrenclaves. If the list is empty, the mrenclave check is skipped.
#[arg(long = "mrenclaves")]
pub sgx_mrenclaves: Option<String>,
/// Comma-separated list of allowed TCB levels. If the list is empty, the TCB level check is
/// skipped. Allowed values: Ok, ConfigNeeded, ConfigAndSwHardeningNeeded, SwHardeningNeeded,
/// OutOfDate, OutOfDateConfigNeeded.
#[arg(long, value_parser = parse_tcb_levels, default_value = "Ok")]
pub sgx_allowed_tcb_levels: EnumSet<TcbLevel>,
}
fn parse_batch_range(s: &str) -> Result<(L1BatchNumber, L1BatchNumber)> {
let parse = |s: &str| {
s.parse::<u32>()
.map(L1BatchNumber::from)
.map_err(|e| anyhow!(e))
};
match s.split_once('-') {
Some((start, end)) => {
let (start, end) = (parse(start)?, parse(end)?);
if start > end {
Err(anyhow!(
"Start batch number ({}) must be less than or equal to end batch number ({})",
start,
end
))
} else {
Ok((start, end))
}
}
None => {
let batch_number = parse(s)?;
Ok((batch_number, batch_number))
}
}
}
fn parse_duration(s: &str) -> Result<Duration> {
let millis = s.parse()?;
Ok(Duration::from_millis(millis))
}
#[derive(Clone)]
struct LogLevelParser;
impl clap::builder::TypedValueParser for LogLevelParser {
type Value = LevelFilter;
fn parse_ref(
&self,
cmd: &clap::Command,
arg: Option<&clap::Arg>,
value: &std::ffi::OsStr,
) -> Result<Self::Value, clap::Error> {
clap::builder::TypedValueParser::parse(self, cmd, arg, value.to_owned())
}
fn parse(
&self,
cmd: &clap::Command,
arg: Option<&clap::Arg>,
value: std::ffi::OsString,
) -> std::result::Result<Self::Value, clap::Error> {
use std::str::FromStr;
let p = clap::builder::PossibleValuesParser::new([
"off", "error", "warn", "info", "debug", "trace",
]);
let v = p.parse(cmd, arg, value)?;
Ok(LevelFilter::from_str(&v).unwrap())
}
}


@ -1,45 +0,0 @@
// SPDX-License-Identifier: Apache-2.0
// Copyright (c) 2023-2024 Matter Labs
use anyhow::{anyhow, Context, Result};
use url::Url;
use zksync_basic_types::{L1BatchNumber, H256};
use zksync_types::L2ChainId;
use zksync_web3_decl::{
client::{Client as NodeClient, L2},
error::ClientRpcContext,
namespaces::ZksNamespaceClient,
};
pub trait JsonRpcClient {
async fn get_root_hash(&self, batch_number: L1BatchNumber) -> Result<H256>;
// TODO implement get_tee_proofs(batch_number, tee_type) once https://crates.io/crates/zksync_web3_decl crate is updated
}
pub struct MainNodeClient(NodeClient<L2>);
impl MainNodeClient {
pub fn new(rpc_url: Url, chain_id: u64) -> Result<Self> {
let node_client = NodeClient::http(rpc_url.into())
.context("failed creating JSON-RPC client for main node")?
.for_network(
L2ChainId::try_from(chain_id)
.map_err(anyhow::Error::msg)?
.into(),
)
.build();
Ok(MainNodeClient(node_client))
}
}
impl JsonRpcClient for MainNodeClient {
async fn get_root_hash(&self, batch_number: L1BatchNumber) -> Result<H256> {
self.0
.get_l1_batch_details(batch_number)
.rpc_context("get_l1_batch_details")
.await?
.and_then(|res| res.base.root_hash)
.ok_or_else(|| anyhow!("No root hash found for batch #{}", batch_number))
}
}


@ -0,0 +1,66 @@
// SPDX-License-Identifier: Apache-2.0
// Copyright (c) 2023-2025 Matter Labs
//! HTTP client for making requests to external services
use reqwest::Client;
use serde::{de::DeserializeOwned, Serialize};
use std::time::Duration;
use url::Url;
use crate::{
core::DEFAULT_HTTP_REQUEST_TIMEOUT,
error::{Error, Result},
};
/// Client for making HTTP requests
#[derive(Clone)]
pub struct HttpClient {
client: Client,
}
impl HttpClient {
/// Create a new HTTP client with default configuration
pub fn new() -> Self {
let client = Client::builder()
.timeout(Duration::from_secs(DEFAULT_HTTP_REQUEST_TIMEOUT))
.build()
.expect("Failed to create HTTP client");
Self { client }
}
/// Make a POST request to the specified URL with the provided body
pub async fn post<T: Serialize>(&self, url: &Url, body: T) -> Result<String> {
let response = self.client.post(url.clone()).json(&body).send().await?;
self.handle_response(response).await
}
/// Send a JSON request and parse the response
pub async fn send_json<T: Serialize, R: DeserializeOwned>(
&self,
url: &Url,
body: T,
) -> Result<R> {
let response_text = self.post(url, body).await?;
let response: R = serde_json::from_str(&response_text)
.map_err(|e| Error::JsonRpcInvalidResponse(e.to_string()))?;
Ok(response)
}
/// Handle the HTTP response
async fn handle_response(&self, response: reqwest::Response) -> Result<String> {
let status = response.status();
let body = response.text().await?;
if status.is_success() {
Ok(body)
} else {
Err(Error::Http {
status_code: status.as_u16(),
message: body,
})
}
}
}


@ -0,0 +1,56 @@
// SPDX-License-Identifier: Apache-2.0
// Copyright (c) 2023-2025 Matter Labs
use url::Url;
use zksync_basic_types::{L1BatchNumber, H256};
use zksync_types::L2ChainId;
use zksync_web3_decl::{
client::{Client as NodeClient, L2},
error::ClientRpcContext,
namespaces::ZksNamespaceClient,
};
use crate::error;
/// Trait for interacting with the JSON-RPC API
pub trait JsonRpcClient {
/// Get the root hash for a specific batch
async fn get_root_hash(&self, batch_number: L1BatchNumber) -> error::Result<H256>;
// TODO implement get_tee_proofs(batch_number, tee_type) once https://crates.io/crates/zksync_web3_decl crate is updated
}
/// Client for interacting with the main node
pub struct MainNodeClient(NodeClient<L2>);
impl MainNodeClient {
/// Create a new client for the main node
pub fn new(rpc_url: Url, chain_id: u64) -> error::Result<Self> {
let chain_id = L2ChainId::try_from(chain_id)
.map_err(|e| error::Error::Internal(format!("Invalid chain ID: {e}")))?;
let node_client = NodeClient::http(rpc_url.into())
.map_err(|e| error::Error::Internal(format!("Failed to create JSON-RPC client: {e}")))?
.for_network(chain_id.into())
.build();
Ok(MainNodeClient(node_client))
}
}
impl JsonRpcClient for MainNodeClient {
async fn get_root_hash(&self, batch_number: L1BatchNumber) -> error::Result<H256> {
let batch_details = self
.0
.get_l1_batch_details(batch_number)
.rpc_context("get_l1_batch_details")
.await
.map_err(|e| error::Error::JsonRpc(format!("Failed to get batch details: {e}")))?
.ok_or_else(|| {
error::Error::JsonRpc(format!("No details found for batch #{batch_number}"))
})?;
batch_details.base.root_hash.ok_or_else(|| {
error::Error::JsonRpc(format!("No root hash found for batch #{batch_number}"))
})
}
}


@ -0,0 +1,12 @@
// SPDX-License-Identifier: Apache-2.0
// Copyright (c) 2023-2025 Matter Labs
//! Client modules for external API communication
mod http;
mod json_rpc;
mod retry;
pub use http::HttpClient;
pub use json_rpc::{JsonRpcClient, MainNodeClient};
pub use retry::{RetryConfig, RetryHelper};


@ -0,0 +1,107 @@
// SPDX-License-Identifier: Apache-2.0
// Copyright (c) 2023-2025 Matter Labs
//! Retry mechanism for handling transient failures
use std::time::Duration;
use tokio::time::sleep;
use crate::{
core::{DEFAULT_RETRY_DELAY_MS, MAX_PROOF_FETCH_RETRIES},
error::{Error, Result},
};
/// Configuration for retry behavior
#[derive(Debug, Clone)]
pub struct RetryConfig {
/// Maximum number of retry attempts
pub max_attempts: u32,
/// Delay between retry attempts
pub delay: Duration,
/// Whether to use exponential backoff
pub use_exponential_backoff: bool,
}
impl Default for RetryConfig {
fn default() -> Self {
Self {
max_attempts: MAX_PROOF_FETCH_RETRIES,
delay: Duration::from_millis(DEFAULT_RETRY_DELAY_MS),
use_exponential_backoff: true,
}
}
}
/// Helper for executing operations with retries
pub struct RetryHelper {
config: RetryConfig,
}
impl RetryHelper {
/// Create a new retry helper with the given configuration
pub fn new(config: RetryConfig) -> Self {
Self { config }
}
/// Execute an operation with retries
pub async fn execute<T, F, Fut>(&self, operation_name: &str, operation: F) -> Result<T>
where
F: Fn() -> Fut,
Fut: std::future::Future<Output = Result<T>>,
{
let mut attempt = 0;
let mut last_error;
loop {
attempt += 1;
tracing::debug!(
"Executing operation '{}' (attempt {}/{})",
operation_name,
attempt,
self.config.max_attempts
);
match operation().await {
Ok(result) => {
tracing::debug!(
"Operation '{}' succeeded on attempt {}",
operation_name,
attempt
);
return Ok(result);
}
Err(Error::Interrupted) => return Err(Error::Interrupted),
Err(e) => {
last_error = e;
if attempt >= self.config.max_attempts {
tracing::warn!(
"Operation '{}' failed after {} attempts. Giving up.",
operation_name,
attempt
);
break;
}
let delay = if self.config.use_exponential_backoff {
self.config.delay.mul_f32(2.0_f32.powi(attempt as i32 - 1))
} else {
self.config.delay
};
tracing::warn!(
"Operation '{}' failed on attempt {}: {}. Retrying in {:?}...",
operation_name,
attempt,
last_error,
delay
);
sleep(delay).await;
}
}
}
Err(last_error)
}
}
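The delay computation above can be sketched in isolation. This variant doubles with integer arithmetic where `RetryHelper` uses a float multiplier (`mul_f32`), and the 100 ms base is an illustrative value rather than the crate's actual `DEFAULT_RETRY_DELAY_MS`:

```rust
use std::time::Duration;

// Backoff schedule as used by `RetryHelper`: with exponential backoff
// enabled, attempt n sleeps base * 2^(n-1); otherwise the delay is flat.
fn backoff_delay(base: Duration, attempt: u32, exponential: bool) -> Duration {
    if exponential {
        base * 2u32.saturating_pow(attempt - 1)
    } else {
        base
    }
}

fn main() {
    let base = Duration::from_millis(100); // illustrative base delay
    for attempt in 1..=4 {
        println!("attempt {attempt}: {:?}", backoff_delay(base, attempt, true));
    }
}
```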


@ -0,0 +1,449 @@
// SPDX-License-Identifier: Apache-2.0
// Copyright (c) 2023-2025 Matter Labs
//! Configuration settings for the verification process
use crate::{
core::{SGX_HASH_SIZE, TDX_HASH_SIZE},
error,
};
use bytes::{Bytes, BytesMut};
use clap::{ArgGroup, Parser};
use enumset::EnumSet;
use serde::{Deserialize, Serialize};
use std::{collections::HashSet, fs, ops::Deref, path::PathBuf, str::FromStr, time::Duration};
use teepot::{log::LogLevelParser, quote::tcblevel::TcbLevel};
use tracing_subscriber::filter::LevelFilter;
use url::Url;
use zksync_basic_types::{tee_types::TeeType, L1BatchNumber};
use zksync_types::L2ChainId;
/// Primary configuration for the verification process
#[derive(Parser, Debug, Clone)]
#[command(author = "Matter Labs", version, about = "SGX attestation and batch signature verifier", long_about = None
)]
#[clap(group(
ArgGroup::new("mode")
.required(true)
.args(&["batch_range", "continuous"]),
))]
pub struct VerifierConfigArgs {
/// Log level for the log output.
/// Valid values are: `off`, `error`, `warn`, `info`, `debug`, `trace`
#[clap(long, default_value_t = LevelFilter::WARN, value_parser = LogLevelParser)]
pub log_level: LevelFilter,
/// The batch number or range of batch numbers to verify the attestation and signature (e.g.,
/// "42" or "42-45"). This option is mutually exclusive with the `--continuous` mode.
#[clap(short = 'n', long = "batch", value_parser = parse_batch_range)]
pub batch_range: Option<(L1BatchNumber, L1BatchNumber)>,
/// Continuous mode: keep verifying new batches until interrupted. This option is mutually
/// exclusive with the `--batch` option.
#[clap(long, value_name = "FIRST_BATCH")]
pub continuous: Option<L1BatchNumber>,
/// URL of the RPC server to query for the batch attestation and signature.
#[clap(long = "rpc")]
pub rpc_url: Url,
/// Chain ID of the network to query.
#[clap(long = "chain", default_value_t = L2ChainId::default().as_u64())]
pub chain_id: u64,
/// Rate limit between requests in milliseconds.
#[clap(long, default_value = "0", value_parser = parse_duration)]
pub rate_limit: Duration,
/// Path to a YAML file containing attestation policy configuration.
/// This overrides any attestation policy settings provided via command line options.
#[clap(long = "attestation-policy-file")]
pub attestation_policy_file: Option<PathBuf>,
/// Comma-separated list of TEE types to process
#[clap(long)]
pub tee_types: TeeTypes,
}
/// Attestation policy implemented as a set of criteria that must be met by SGX attestation.
#[derive(Default, Debug, Clone, Serialize, Deserialize)]
pub struct SgxAttestationPolicyConfig {
/// List of allowed hex-encoded SGX mrsigners. Batch attestation must consist of
/// one of these mrsigners. If the list is empty, the mrsigner check is skipped.
#[serde(default)]
pub mrsigners: Option<Vec<String>>,
/// List of allowed hex-encoded SGX mrenclaves. Batch attestation must consist
/// of one of these mrenclaves. If the list is empty, the mrenclave check is skipped.
#[serde(default)]
pub mrenclaves: Option<Vec<String>>,
/// List of allowed SGX TCB levels. If the list is empty, the TCB level check is
/// skipped. Allowed values: Ok, ConfigNeeded, ConfigAndSwHardeningNeeded, SwHardeningNeeded,
/// OutOfDate, OutOfDateConfigNeeded.
#[serde(default = "default_tcb_levels")]
pub allowed_tcb_levels: EnumSet<TcbLevel>,
    /// List of allowed SGX Advisories. If the list is empty, the Advisories check is skipped.
#[serde(default)]
pub allowed_advisory_ids: Option<Vec<String>>,
}
/// Attestation policy implemented as a set of criteria that must be met by TDX attestation.
#[derive(Default, Debug, Clone, Serialize, Deserialize)]
pub struct TdxAttestationPolicyConfig {
/// List of allowed hex-encoded TDX mrs. Batch attestation must consist
/// of one of these mrs. If the list is empty, the mrs check is skipped.
#[serde(default)]
pub mrs: Option<Vec<[String; 5]>>,
/// List of allowed SGX TCB levels. If the list is empty, the TCB level check is
/// skipped. Allowed values: Ok, ConfigNeeded, ConfigAndSwHardeningNeeded, SwHardeningNeeded,
/// OutOfDate, OutOfDateConfigNeeded.
#[serde(default = "default_tcb_levels")]
pub allowed_tcb_levels: EnumSet<TcbLevel>,
    /// List of allowed TDX Advisories. If the list is empty, the Advisories check is skipped.
#[serde(default)]
pub allowed_advisory_ids: Option<Vec<String>>,
}
/// Attestation policy implemented as a set of criteria that must be met by SGX or TDX attestation.
#[derive(Default, Debug, Clone, Serialize, Deserialize)]
pub struct AttestationPolicyConfig {
/// SGX attestation policy
pub sgx: SgxAttestationPolicyConfig,
/// TDX attestation policy
pub tdx: TdxAttestationPolicyConfig,
}
#[derive(Debug, Clone)]
pub struct AttestationPolicy {
pub sgx_mrsigners: Option<Vec<Bytes>>,
pub sgx_mrenclaves: Option<Vec<Bytes>>,
pub sgx_allowed_tcb_levels: EnumSet<TcbLevel>,
pub sgx_allowed_advisory_ids: Option<Vec<String>>,
pub tdx_allowed_tcb_levels: EnumSet<TcbLevel>,
pub tdx_mrs: Option<Vec<Bytes>>,
pub tdx_allowed_advisory_ids: Option<Vec<String>>,
}
/// Default TCB levels used for Serde deserialization
fn default_tcb_levels() -> EnumSet<TcbLevel> {
let mut set = EnumSet::new();
set.insert(TcbLevel::Ok);
set
}
// TODO:
// When moving this binary to the `zksync-era` repo, we
// should be using `EnumSet<TeeType>` but this requires
// #[derive(EnumSetType, Debug, Serialize, Deserialize)]
// #[enumset(serialize_repr = "list")]
// for `TeeType`
#[derive(Clone, Debug)]
pub struct TeeTypes(HashSet<TeeType>);
impl FromStr for TeeTypes {
type Err = error::Error;
fn from_str(s: &str) -> Result<Self, Self::Err> {
let mut hs = HashSet::new();
let tee_strs: Vec<&str> = s.split(',').collect();
for tee_str in tee_strs {
match tee_str.to_ascii_lowercase().as_str() {
"sgx" => {
hs.insert(TeeType::Sgx);
}
"tdx" => {
hs.insert(TeeType::Tdx);
}
_ => {
return Err(error::Error::internal("Unknown TEE type"));
}
}
}
Ok(Self(hs))
}
}
impl Default for TeeTypes {
fn default() -> Self {
Self(HashSet::from([TeeType::Sgx, TeeType::Tdx]))
}
}
impl Deref for TeeTypes {
type Target = HashSet<TeeType>;
fn deref(&self) -> &Self::Target {
&self.0
}
}
#[derive(Debug, Clone)]
pub struct VerifierConfig {
pub args: VerifierConfigArgs,
pub policy: AttestationPolicy,
}
impl VerifierConfig {
pub fn new(args: VerifierConfigArgs) -> error::Result<Self> {
let policy = if let Some(path) = &args.attestation_policy_file {
let policy_content = fs::read_to_string(path).map_err(|e| {
error::Error::internal(format!("Failed to read attestation policy file: {e}"))
})?;
let policy_config: AttestationPolicyConfig = serde_yaml::from_str(&policy_content)
.map_err(|e| {
error::Error::internal(format!("Failed to parse attestation policy file: {e}"))
})?;
tracing::info!("Loaded attestation policy from file: {:?}", path);
policy_config
} else {
AttestationPolicyConfig::default()
};
let policy = AttestationPolicy {
sgx_mrsigners: decode_hex_vec_option(policy.sgx.mrsigners, SGX_HASH_SIZE)?,
sgx_mrenclaves: decode_hex_vec_option(policy.sgx.mrenclaves, SGX_HASH_SIZE)?,
sgx_allowed_tcb_levels: policy.sgx.allowed_tcb_levels,
sgx_allowed_advisory_ids: policy.sgx.allowed_advisory_ids,
tdx_allowed_tcb_levels: policy.tdx.allowed_tcb_levels,
tdx_mrs: decode_tdx_mrs(policy.tdx.mrs, TDX_HASH_SIZE)?,
tdx_allowed_advisory_ids: policy.tdx.allowed_advisory_ids,
};
if policy.sgx_mrsigners.is_none() && policy.sgx_mrenclaves.is_none() {
tracing::error!(
"Neither `--sgx-mrenclaves` nor `--sgx-mrsigners` specified. Any code could have produced the SGX proof."
);
}
if policy.tdx_mrs.is_none() {
tracing::error!(
"`--tdxmrs` not specified. Any code could have produced the TDX proof."
);
}
Ok(Self { args, policy })
}
}
// Helper function to decode a vector of hex strings
fn decode_hex_vec_option(
hex_strings: Option<Vec<String>>,
bytes_length: usize,
) -> Result<Option<Vec<Bytes>>, hex::FromHexError> {
hex_strings
.map(|strings| {
strings
.into_iter()
.map(|s| {
if s.len() > (bytes_length * 2) {
return Err(hex::FromHexError::InvalidStringLength);
}
hex::decode(s).map(Bytes::from)
})
.collect::<Result<Vec<_>, _>>()
})
.transpose()
}
// Improved decode_tdx_mrs function
fn decode_tdx_mrs(
tdx_mrs_opt: Option<Vec<[String; 5]>>,
bytes_length: usize,
) -> Result<Option<Vec<Bytes>>, hex::FromHexError> {
match tdx_mrs_opt {
None => Ok(None),
Some(mrs_array) => {
let result = mrs_array
.into_iter()
.map(|strings| decode_and_combine_mrs(&strings, bytes_length))
.collect::<Result<Vec<_>, _>>()?;
Ok(Some(result))
}
}
}
// Helper function to decode and combine MRs
fn decode_and_combine_mrs(
strings: &[String; 5],
bytes_length: usize,
) -> Result<Bytes, hex::FromHexError> {
let mut buffer = BytesMut::with_capacity(bytes_length * 5);
for s in strings {
if s.len() > (bytes_length * 2) {
return Err(hex::FromHexError::InvalidStringLength);
}
let decoded = hex::decode(s)?;
buffer.extend(decoded);
}
Ok(buffer.freeze())
}
/// Parse a batch range from a string like "42" or "42-45"
fn parse_batch_range(s: &str) -> error::Result<(L1BatchNumber, L1BatchNumber)> {
let parse = |s: &str| {
s.parse::<u32>()
.map(L1BatchNumber::from)
.map_err(|e| error::Error::internal(format!("Can't convert batch {s} to number: {e}")))
};
if let Some((start, end)) = s.split_once('-') {
let (start, end) = (parse(start)?, parse(end)?);
if start > end {
Err(error::Error::InvalidBatchRange(s.into()))
} else {
Ok((start, end))
}
} else {
let batch_number = parse(s)?;
Ok((batch_number, batch_number))
}
}
/// Parse a duration from a millisecond string
fn parse_duration(s: &str) -> error::Result<Duration> {
let millis = s
.parse()
.map_err(|e| error::Error::internal(format!("Can't convert {s} to duration: {e}")))?;
Ok(Duration::from_millis(millis))
}
#[cfg(test)]
mod test {
use super::*;
use std::{env, fs, path::PathBuf};
use teepot::quote::tcblevel::TcbLevel;
#[test]
fn test_load_attestation_policy_from_yaml() {
// Create a temporary directory for the test
let temp_dir = env::temp_dir().join("test_attestation_policy");
fs::create_dir_all(&temp_dir).expect("Failed to create temp directory");
// Create a temporary YAML file
let yaml_path = temp_dir.join("policy.yaml");
let yaml_content = r#"
sgx:
mrenclaves:
- a2caa7055e333f69c3e46ca7ba65b135a86c90adfde2afb356e05075b7818b3c
- 36eeb64cc816f80a1cf5818b26710f360714b987d3799e757cbefba7697b9589
allowed_tcb_levels:
- Ok
- SwHardeningNeeded
tdx:
mrs:
- - 2a90c8fa38672cafd791d994beb6836b99383b2563736858632284f0f760a6446efd1e7ec457cf08b629ea630f7b4525
- 3300980705adf09d28b707b79699d9874892164280832be2c386a715b6e204e0897fb564a064f810659207ba862b304f
- c08ab64725566bcc8a6fb1c79e2e64744fcff1594b8f1f02d716fb66592ecd5de94933b2bc54ffbbc43a52aab7eb1146
- 092a4866a9e6a1672d7439a5d106fbc6eb57b738d5bfea5276d41afa2551824365fdd66700c1ce9c0b20542b9f9d5945
- 971fb52f90ec98a234301ca9b8fc30b613c33e3dd9c0cc42dcb8003d4a95d8fb218b75baf028b70a3cabcb947e1ca453
"#;
fs::write(&yaml_path, yaml_content).expect("Failed to write YAML file");
// Create a minimal config
let config = VerifierConfig::new(VerifierConfigArgs {
log_level: LevelFilter::INFO,
batch_range: Some((L1BatchNumber(1), L1BatchNumber(10))),
continuous: None,
rpc_url: Url::parse("http://localhost:8545").unwrap(),
chain_id: 270,
rate_limit: Duration::from_millis(0),
attestation_policy_file: Some(yaml_path.clone()),
tee_types: Default::default(),
})
.expect("Failed to load attestation policy");
// Verify that the attestation policy was loaded correctly
assert_eq!(config.policy.sgx_mrsigners, None);
assert_eq!(
config.policy.sgx_mrenclaves,
Some(vec![
Bytes::from(
hex::decode("a2caa7055e333f69c3e46ca7ba65b135a86c90adfde2afb356e05075b7818b3c")
.unwrap(),
),
Bytes::from(
hex::decode("36eeb64cc816f80a1cf5818b26710f360714b987d3799e757cbefba7697b9589")
.unwrap(),
),
])
);
assert!(config.policy.sgx_allowed_tcb_levels.contains(TcbLevel::Ok));
assert!(config
.policy
.sgx_allowed_tcb_levels
.contains(TcbLevel::SwHardeningNeeded));
assert_eq!(
config.policy.tdx_mrs,
Some(vec![Bytes::from(
hex::decode(concat!(
"2a90c8fa38672cafd791d994beb6836b99383b2563736858632284f0f760a6446efd1e7ec457cf08b629ea630f7b4525",
"3300980705adf09d28b707b79699d9874892164280832be2c386a715b6e204e0897fb564a064f810659207ba862b304f",
"c08ab64725566bcc8a6fb1c79e2e64744fcff1594b8f1f02d716fb66592ecd5de94933b2bc54ffbbc43a52aab7eb1146",
"092a4866a9e6a1672d7439a5d106fbc6eb57b738d5bfea5276d41afa2551824365fdd66700c1ce9c0b20542b9f9d5945",
"971fb52f90ec98a234301ca9b8fc30b613c33e3dd9c0cc42dcb8003d4a95d8fb218b75baf028b70a3cabcb947e1ca453"
)).unwrap()),
])
);
// Clean up
fs::remove_file(yaml_path).expect("Failed to remove temp YAML file");
fs::remove_dir_all(temp_dir).expect("Failed to remove temp directory");
}
#[test]
fn test_invalid_yaml_file_path() {
// Create a minimal config with a non-existent YAML file path
let result = VerifierConfig::new(VerifierConfigArgs {
log_level: LevelFilter::INFO,
batch_range: Some((L1BatchNumber(1), L1BatchNumber(10))),
continuous: None,
rpc_url: Url::parse("http://localhost:8545").unwrap(),
chain_id: 270,
rate_limit: Duration::from_millis(0),
attestation_policy_file: Some(PathBuf::from("/non/existent/path.yaml")),
tee_types: Default::default(),
});
assert!(result.is_err());
}
#[test]
fn test_invalid_yaml_content() {
// Create a temporary directory for the test
let temp_dir = env::temp_dir().join("test_invalid_yaml");
fs::create_dir_all(&temp_dir).expect("Failed to create temp directory");
// Create a temporary YAML file with invalid content
let yaml_path = temp_dir.join("invalid_policy.yaml");
let yaml_content = r#"
sgx_mrsigners: 1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef
invalid_key: "some value"
allowed_tcb_levels:
- Invalid
- ConfigNeeded
"#;
fs::write(&yaml_path, yaml_content).expect("Failed to write YAML file");
// Create a minimal config
let result = VerifierConfig::new(VerifierConfigArgs {
log_level: LevelFilter::INFO,
batch_range: Some((L1BatchNumber(1), L1BatchNumber(10))),
continuous: None,
rpc_url: Url::parse("http://localhost:8545").unwrap(),
chain_id: 270,
rate_limit: Duration::from_millis(0),
attestation_policy_file: Some(yaml_path.clone()),
tee_types: Default::default(),
});
assert!(result.is_err());
// Clean up
fs::remove_file(yaml_path).expect("Failed to remove temp YAML file");
fs::remove_dir_all(temp_dir).expect("Failed to remove temp directory");
}
}


@ -0,0 +1,19 @@
// SPDX-License-Identifier: Apache-2.0
// Copyright (c) 2023-2025 Matter Labs
//! Constants used throughout the application
/// Maximum number of retry attempts for fetching proofs
pub const MAX_PROOF_FETCH_RETRIES: u32 = 3;
/// Default delay between retries (in milliseconds)
pub const DEFAULT_RETRY_DELAY_MS: u64 = 1000;
/// Default timeout for HTTP requests (in seconds)
pub const DEFAULT_HTTP_REQUEST_TIMEOUT: u64 = 30;
/// SGX hash size in bytes
pub const SGX_HASH_SIZE: usize = 32;
/// TDX hash size in bytes
pub const TDX_HASH_SIZE: usize = 48;


@ -0,0 +1,12 @@
// SPDX-License-Identifier: Apache-2.0
// Copyright (c) 2023-2025 Matter Labs
//! Core components for Era proof attestation verification
mod config;
mod constants;
mod types;
pub use config::*;
pub use constants::*;
pub use types::*;


@ -0,0 +1,100 @@
// SPDX-License-Identifier: Apache-2.0
// Copyright (c) 2023-2025 Matter Labs
//! Common type definitions used throughout the application
use std::fmt;
use zksync_basic_types::L1BatchNumber;
/// Represents the operating mode of the verifier
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum VerifierMode {
/// Run on a single batch or range of batches and then exit
OneShot {
/// Starting batch number
start_batch: L1BatchNumber,
/// Ending batch number
end_batch: L1BatchNumber,
},
/// Run continuously starting from a specific batch, until interrupted
Continuous {
/// Starting batch number
start_batch: L1BatchNumber,
},
}
impl fmt::Display for VerifierMode {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
match self {
VerifierMode::OneShot {
start_batch,
end_batch,
} => {
if start_batch == end_batch {
write!(f, "one-shot mode (batch {start_batch})")
} else {
write!(f, "one-shot mode (batches {start_batch}-{end_batch})")
}
}
VerifierMode::Continuous { start_batch } => {
write!(f, "continuous mode (starting from batch {start_batch})")
}
}
}
}
/// Result of proof verification for a single batch
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum VerificationResult {
/// All proofs for the batch were verified successfully
Success,
/// Some proofs for the batch failed verification
PartialSuccess {
/// Number of successfully verified proofs
verified_count: u32,
/// Number of proofs that failed verification
unverified_count: u32,
},
/// No proofs for the batch were verified successfully
Failure,
/// Verification was interrupted before completion
Interrupted,
/// No proofs were found for the batch
NoProofsFound,
}
impl VerificationResult {
/// Check if the majority of the proofs was verified successfully
pub fn is_successful(&self) -> bool {
match self {
VerificationResult::Success => true,
VerificationResult::PartialSuccess {
verified_count,
unverified_count,
} => verified_count > unverified_count,
VerificationResult::Failure
| VerificationResult::Interrupted
| VerificationResult::NoProofsFound => false,
}
}
}
impl fmt::Display for VerificationResult {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
match self {
VerificationResult::Success => write!(f, "Success"),
VerificationResult::PartialSuccess {
verified_count,
unverified_count,
} => {
write!(
f,
"Partial Success ({verified_count} verified, {unverified_count} failed)"
)
}
VerificationResult::Failure => write!(f, "Failure"),
VerificationResult::Interrupted => write!(f, "Interrupted"),
VerificationResult::NoProofsFound => write!(f, "No Proofs Found"),
}
}
}


@ -0,0 +1,103 @@
// SPDX-License-Identifier: Apache-2.0
// Copyright (c) 2023-2025 Matter Labs
//! Error types for the verification process
use teepot::sgx::QuoteError;
use thiserror::Error;
use zksync_basic_types::L1BatchNumber;
/// Result type used throughout the application
pub type Result<T> = std::result::Result<T, Error>;
/// Error types that can occur during verification
#[derive(Error, Debug)]
pub enum Error {
/// Error fetching proof
#[error("Failed to fetch proof for batch {batch_number}: {reason}")]
ProofFetch {
/// Batch number that caused the error
batch_number: L1BatchNumber,
/// Reason for the error
reason: String,
},
/// Error communicating with the HTTP server
#[error("HTTP request failed with status {status_code}: {message}")]
Http {
/// HTTP status code
status_code: u16,
/// Error message
message: String,
},
/// Error communicating with the JSON-RPC server
#[error("JSON-RPC error: {0}")]
JsonRpc(String),
/// JSON-RPC response has an invalid format
#[error("JSON-RPC response has an invalid format: {0}")]
JsonRpcInvalidResponse(String),
/// Invalid batch range
#[error("Invalid batch range: {0}")]
InvalidBatchRange(String),
/// Error verifying attestation
#[error(transparent)]
AttestationVerification(#[from] QuoteError),
/// Error verifying signature
#[error("Signature verification failed: {0}")]
SignatureVerification(String),
/// Attestation policy violation
#[error("Attestation policy violation: {0}")]
PolicyViolation(String),
/// Operation interrupted
#[error("Operation interrupted")]
Interrupted,
/// Error decoding a hex string
#[error(transparent)]
FromHex(#[from] hex::FromHexError),
/// Internal error
#[error("Internal error: {0}")]
Internal(String),
}
/// Utility functions for working with errors
impl Error {
/// Create a new proof fetch error
pub fn proof_fetch(batch_number: L1BatchNumber, reason: impl Into<String>) -> Self {
Self::ProofFetch {
batch_number,
reason: reason.into(),
}
}
/// Create a new policy violation error
pub fn policy_violation(reason: impl Into<String>) -> Self {
Self::PolicyViolation(reason.into())
}
/// Create a new signature verification error
pub fn signature_verification(reason: impl Into<String>) -> Self {
Self::SignatureVerification(reason.into())
}
/// Create a new internal error
pub fn internal(reason: impl Into<String>) -> Self {
Self::Internal(reason.into())
}
}
impl From<reqwest::Error> for Error {
fn from(value: reqwest::Error) -> Self {
Self::Http {
status_code: value.status().map_or(0, |v| v.as_u16()),
message: value.to_string(),
}
}
}


@ -1,231 +1,94 @@
Before:

// SPDX-License-Identifier: Apache-2.0
// Copyright (c) 2023-2024 Matter Labs
//! Tool for SGX attestation and batch signature verification, both continuous and one-shot
mod args;
mod client;
mod proof;
mod verification;
use anyhow::{Context, Result};
use args::{Arguments, AttestationPolicyArgs};
use clap::Parser;
use client::MainNodeClient;
use proof::get_proofs;
use reqwest::Client;
use tokio::{signal, sync::watch};
use tracing::{debug, error, info, trace, warn};
use tracing_log::LogTracer;
use tracing_subscriber::{filter::LevelFilter, fmt, prelude::*, EnvFilter, Registry};
use url::Url;
use zksync_basic_types::L1BatchNumber;
use crate::verification::{
    log_quote_verification_summary, verify_attestation_quote, verify_batch_proof,
};
#[tokio::main]
async fn main() -> Result<()> {
    let args = Arguments::parse();
    setup_logging(&args.log_level)?;
    validate_arguments(&args)?;
    let (stop_sender, stop_receiver) = watch::channel(false);
    let mut process_handle = tokio::spawn(verify_batches_proofs(stop_receiver, args));
    tokio::select! {
        ret = &mut process_handle => { return ret?; },
        _ = signal::ctrl_c() => {
            tracing::info!("Stop signal received, shutting down");
            stop_sender.send(true).ok();
            // Wait for process_batches to complete gracefully
            process_handle.await??;
        }
    }
    Ok(())
}
fn setup_logging(log_level: &LevelFilter) -> Result<()> {
    LogTracer::init().context("Failed to set logger")?;
    let filter = EnvFilter::builder()
        .try_from_env()
        .unwrap_or(match *log_level {
            LevelFilter::OFF => EnvFilter::new("off"),
            _ => EnvFilter::new(format!(
                "warn,{crate_name}={log_level},teepot={log_level}",
                crate_name = env!("CARGO_CRATE_NAME"),
                log_level = log_level
            )),
        });
    let subscriber = Registry::default()
        .with(filter)
        .with(fmt::layer().with_writer(std::io::stderr));
    tracing::subscriber::set_global_default(subscriber)?;
    Ok(())
}
fn validate_arguments(args: &Arguments) -> Result<()> {
    if args.attestation_policy.sgx_mrsigners.is_none()
        && args.attestation_policy.sgx_mrenclaves.is_none()
    {
        error!("Neither `--sgx-mrenclaves` nor `--sgx-mrsigners` specified. Any code could have produced the proof.");
    }
    Ok(())
}
/// Verify all TEE proofs for all batches starting from the given batch number up to the specified
/// batch number, if a range is provided. Otherwise, continue verifying batches until the stop
/// signal is received.
async fn verify_batches_proofs(
    mut stop_receiver: watch::Receiver<bool>,
    args: Arguments,
) -> Result<()> {
    let node_client = MainNodeClient::new(args.rpc_url.clone(), args.chain_id)?;
    let http_client = Client::new();
    let first_batch_number = match args.batch_range {
        Some((first_batch_number, _)) => first_batch_number,
        None => args
            .continuous
            .expect("clap::ArgGroup should guarantee batch range or continuous option is set"),
    };
    let end_batch_number = args
        .batch_range
        .map_or(u32::MAX, |(_, end_batch_number)| end_batch_number.0);
    let mut unverified_batches_count: u32 = 0;
    let mut last_processed_batch_number = first_batch_number.0;
    for current_batch_number in first_batch_number.0..=end_batch_number {
        if *stop_receiver.borrow() {
            tracing::warn!("Stop signal received, shutting down");
            break;
        }
        trace!("Verifying TEE proofs for batch #{}", current_batch_number);
        let all_verified = verify_batch_proofs(
            &mut stop_receiver,
            current_batch_number.into(),
            &args.rpc_url,
            &http_client,
            &node_client,
            &args.attestation_policy,
        )
        .await?;
        if !all_verified {
            unverified_batches_count += 1;
        }
        if current_batch_number < end_batch_number {
            tokio::time::timeout(args.rate_limit, stop_receiver.changed())
                .await
                .ok();
        }
        last_processed_batch_number = current_batch_number;
    }
    let verified_batches_count =
        last_processed_batch_number + 1 - first_batch_number.0 - unverified_batches_count;
    if unverified_batches_count > 0 {
        if verified_batches_count == 0 {
            error!(
                "All {} batches failed verification!",
                unverified_batches_count
            );
        } else {
            error!(
                "Some batches failed verification! Unverified batches: {}. Verified batches: {}.",
                unverified_batches_count, verified_batches_count
            );
        }
    } else {
        info!(
            "All {} batches verified successfully!",
            verified_batches_count
        );
    }
    Ok(())
}
/// Verify all TEE proofs for the given batch number. Note that each batch number can potentially
/// have multiple proofs of the same TEE type.
async fn verify_batch_proofs(
    stop_receiver: &mut watch::Receiver<bool>,
    batch_number: L1BatchNumber,
    rpc_url: &Url,
    http_client: &Client,
    node_client: &MainNodeClient,
    attestation_policy: &AttestationPolicyArgs,
) -> Result<bool> {
    let proofs = get_proofs(stop_receiver, batch_number, http_client, rpc_url).await?;
    let batch_no = batch_number.0;
    let mut total_proofs_count: u32 = 0;
    let mut unverified_proofs_count: u32 = 0;
    for proof in proofs
        .into_iter()
        // only support SGX proofs for now
        .filter(|proof| proof.tee_type.eq_ignore_ascii_case("sgx"))
    {
        let batch_no = proof.l1_batch_number;
        total_proofs_count += 1;
        let tee_type = proof.tee_type.to_uppercase();
        trace!(batch_no, tee_type, proof.proved_at, "Verifying proof.");
        debug!(
            batch_no,
            "Verifying quote ({} bytes)...",
            proof.attestation.len()
        );
        let quote_verification_result = verify_attestation_quote(&proof.attestation)?;
        let verified_successfully = verify_batch_proof(
            &quote_verification_result,
            attestation_policy,
            node_client,
            &proof.signature,
            L1BatchNumber(proof.l1_batch_number),
        )
        .await?;
        log_quote_verification_summary(&quote_verification_result);
        if verified_successfully {
            info!(
                batch_no,
                proof.proved_at, tee_type, "Verification succeeded.",
            );
        } else {
            unverified_proofs_count += 1;
            warn!(batch_no, proof.proved_at, tee_type, "Verification failed!",);
        }
    }
    let verified_proofs_count = total_proofs_count - unverified_proofs_count;
    if unverified_proofs_count > 0 {
        if verified_proofs_count == 0 {
            error!(
                batch_no,
                "All {} proofs failed verification!", unverified_proofs_count
            );
        } else {
            warn!(
                batch_no,
                "Some proofs failed verification. Unverified proofs: {}. Verified proofs: {}.",
                unverified_proofs_count,
                verified_proofs_count
            );
        }
    }
    // if at least one proof is verified, consider the batch verified
    let is_batch_verified = verified_proofs_count > 0;
    Ok(is_batch_verified)
}

After:

// SPDX-License-Identifier: Apache-2.0
// Copyright (c) 2023-2025 Matter Labs
//! Tool for SGX attestation and batch signature verification, both continuous and one-shot
mod client;
mod core;
mod error;
mod processor;
mod proof;
mod verification;
use crate::{
    core::{VerifierConfig, VerifierConfigArgs},
    error::Error,
    processor::ProcessorFactory,
};
use clap::Parser;
use error::Result;
use tokio::signal;
use tokio_util::sync::CancellationToken;
#[tokio::main]
async fn main() -> Result<()> {
    // Parse command-line arguments
    let config = VerifierConfig::new(VerifierConfigArgs::parse())?;
    // Initialize logging
    tracing::subscriber::set_global_default(
        teepot::log::setup_logging(env!("CARGO_CRATE_NAME"), &config.args.log_level)
            .map_err(|e| Error::internal(e.to_string()))?,
    )
    .map_err(|e| Error::internal(e.to_string()))?;
    // Create processor based on config
    let (processor, mode) = ProcessorFactory::create(config.clone())?;
    // Set up a cancellation token
    let token = CancellationToken::new();
    // Log startup information
    tracing::info!("Starting verification in {}", mode);
    // Spawn processing task
    let mut process_handle = {
        let token = token.clone();
        tokio::spawn(async move { processor.run(token).await })
    };
    // Wait for processing to complete or for stop signal
    tokio::select! {
        result = &mut process_handle => {
            match result {
                Ok(Ok(verification_results)) => {
                    tracing::info!("Verification completed successfully");
                    let total_batches = verification_results.len();
                    let successful_batches = verification_results.iter()
                        .filter(|(_, result)| result.is_successful())
                        .count();
                    tracing::info!(
                        "Verified {} batches: {} succeeded, {} failed",
                        total_batches,
                        successful_batches,
                        total_batches - successful_batches
                    );
                    Ok(())
                },
                Ok(Err(e)) => {
                    tracing::error!("Verification failed: {}", e);
                    Err(e)
                },
                Err(e) => {
                    tracing::error!("Task panicked: {}", e);
                    Err(Error::internal(format!("Task panicked: {e}")))
                }
            }
        },
        _ = signal::ctrl_c() => {
            tracing::info!("Stop signal received, shutting down gracefully...");
            token.cancel();
            // Wait for processor to complete gracefully
            match process_handle.await {
                Ok(_) => tracing::info!("Processor stopped gracefully"),
                Err(e) => tracing::error!("Error stopping processor: {}", e),
            }
            Ok(())
        }
    }
}


@ -0,0 +1,117 @@
// SPDX-License-Identifier: Apache-2.0
// Copyright (c) 2023-2025 Matter Labs
//! Core functionality for processing individual batches
use crate::error;
use tokio_util::sync::CancellationToken;
use zksync_basic_types::L1BatchNumber;
use crate::{
client::{HttpClient, MainNodeClient, RetryConfig},
core::{VerificationResult, VerifierConfig},
proof::ProofFetcher,
verification::{BatchVerifier, VerificationReporter},
};
/// Responsible for processing individual batches
pub struct BatchProcessor {
config: VerifierConfig,
proof_fetcher: ProofFetcher,
batch_verifier: BatchVerifier<MainNodeClient>,
}
impl BatchProcessor {
/// Create a new batch processor with the given configuration
pub fn new(config: VerifierConfig) -> error::Result<Self> {
// Initialize clients and fetchers
let node_client = MainNodeClient::new(config.args.rpc_url.clone(), config.args.chain_id)?;
let http_client = HttpClient::new();
let retry_config = RetryConfig::default();
let proof_fetcher =
ProofFetcher::new(http_client, config.args.rpc_url.clone(), retry_config);
let batch_verifier = BatchVerifier::new(node_client, config.policy.clone());
Ok(Self {
config,
proof_fetcher,
batch_verifier,
})
}
/// Process a single batch and return the verification result
pub async fn process_batch(
&self,
token: &CancellationToken,
batch_number: L1BatchNumber,
) -> error::Result<VerificationResult> {
if token.is_cancelled() {
tracing::info!("Stop signal received, shutting down");
return Ok(VerificationResult::Interrupted);
}
tracing::trace!("Verifying TEE proofs for batch #{}", batch_number.0);
// Fetch proofs for the current batch across different TEE types
let mut proofs = Vec::new();
for tee_type in self.config.args.tee_types.iter().copied() {
match self
.proof_fetcher
.get_proofs(token, batch_number, tee_type)
.await
{
Ok(batch_proofs) => proofs.extend(batch_proofs),
Err(error::Error::Interrupted) => return Err(error::Error::Interrupted),
Err(e) => {
tracing::error!(
"Failed to fetch proofs for TEE type {:?} at batch {}: {:#}",
tee_type,
batch_number.0,
e
);
}
}
}
if proofs.is_empty() {
tracing::warn!("No proofs found for batch #{}", batch_number.0);
return Ok(VerificationResult::NoProofsFound);
}
// Verify proofs for the current batch
let verification_result = self
.batch_verifier
.verify_batch_proofs(token, batch_number, proofs)
.await?;
let result = if verification_result.total_count == 0 {
VerificationResult::NoProofsFound
} else if verification_result.verified_count == verification_result.total_count {
VerificationResult::Success
} else if verification_result.verified_count > 0 {
VerificationResult::PartialSuccess {
verified_count: verification_result.verified_count,
unverified_count: verification_result.unverified_count,
}
} else {
VerificationResult::Failure
};
tracing::debug!("Batch #{} verification result: {}", batch_number.0, result);
// Apply rate limiting between batches if needed
if !matches!(result, VerificationResult::Interrupted)
&& self.config.args.rate_limit.as_millis() > 0
{
tokio::time::timeout(self.config.args.rate_limit, token.cancelled())
.await
.ok();
}
Ok(result)
}
/// Log the overall verification results
pub fn log_overall_results(success_count: u32, failure_count: u32) {
VerificationReporter::log_overall_verification_results(success_count, failure_count);
}
}


@ -0,0 +1,92 @@
// SPDX-License-Identifier: Apache-2.0
// Copyright (c) 2023-2025 Matter Labs
//! Continuous batch processor for ongoing verification of new batches
use tokio_util::sync::CancellationToken;
use zksync_basic_types::L1BatchNumber;
use crate::{
core::{VerificationResult, VerifierConfig},
error,
processor::BatchProcessor,
};
/// Processes batches continuously until stopped
pub struct ContinuousProcessor {
batch_processor: BatchProcessor,
start_batch: L1BatchNumber,
}
impl ContinuousProcessor {
/// Create a new continuous processor that starts from the given batch
pub fn new(config: VerifierConfig, start_batch: L1BatchNumber) -> error::Result<Self> {
let batch_processor = BatchProcessor::new(config)?;
Ok(Self {
batch_processor,
start_batch,
})
}
/// Run the processor until stopped
pub async fn run(
&self,
token: &CancellationToken,
) -> error::Result<Vec<(u32, VerificationResult)>> {
tracing::info!(
"Starting continuous verification from batch {}",
self.start_batch.0
);
let mut results = Vec::new();
let mut success_count = 0;
let mut failure_count = 0;
let mut current_batch = self.start_batch.0;
// Continue processing batches until stopped or reaching maximum batch number
while !token.is_cancelled() {
let batch = L1BatchNumber(current_batch);
match self.batch_processor.process_batch(token, batch).await {
Ok(result) => {
match result {
VerificationResult::Success | VerificationResult::PartialSuccess { .. } => {
success_count += 1;
}
VerificationResult::Failure => failure_count += 1,
VerificationResult::Interrupted => {
results.push((current_batch, result));
break;
}
VerificationResult::NoProofsFound => {
// In continuous mode, we might hit batches that don't have proofs yet
// Wait a bit longer before retrying
if !token.is_cancelled() {
tokio::time::sleep(tokio::time::Duration::from_secs(5)).await;
// Don't increment batch number, try again
continue;
}
}
}
results.push((current_batch, result));
}
Err(e) => {
tracing::error!("Error processing batch {}: {}", current_batch, e);
results.push((current_batch, VerificationResult::Failure));
failure_count += 1;
}
}
// Move to the next batch
current_batch = current_batch
.checked_add(1)
.ok_or(error::Error::internal("Maximum batch number reached"))?;
}
// Log overall results
BatchProcessor::log_overall_results(success_count, failure_count);
Ok(results)
}
}


@ -0,0 +1,62 @@
// SPDX-License-Identifier: Apache-2.0
// Copyright (c) 2023-2025 Matter Labs
//! Processing logic for batch verification
mod batch_processor;
mod continuous_processor;
mod one_shot_processor;
pub use batch_processor::BatchProcessor;
pub use continuous_processor::ContinuousProcessor;
pub use one_shot_processor::OneShotProcessor;
use crate::{
core::{VerificationResult, VerifierConfig, VerifierMode},
error::Result,
};
use tokio_util::sync::CancellationToken;
// Using an enum instead of a trait because async functions in traits can't be used in trait objects
/// Processor variants for different verification modes
pub enum ProcessorType {
/// One-shot processor for processing a specific range of batches
OneShot(OneShotProcessor),
/// Continuous processor for monitoring new batches
Continuous(ContinuousProcessor),
}
impl ProcessorType {
/// Run the processor until completion or interruption
pub async fn run(&self, token: CancellationToken) -> Result<Vec<(u32, VerificationResult)>> {
match self {
ProcessorType::OneShot(processor) => processor.run(&token).await,
ProcessorType::Continuous(processor) => processor.run(&token).await,
}
}
}
/// Factory for creating the appropriate processor based on configuration
pub struct ProcessorFactory;
impl ProcessorFactory {
/// Create a new processor based on the provided configuration
pub fn create(config: VerifierConfig) -> Result<(ProcessorType, VerifierMode)> {
let result = if let Some((start, end)) = config.args.batch_range {
let processor = OneShotProcessor::new(config, start, end)?;
let mode = VerifierMode::OneShot {
start_batch: start,
end_batch: end,
};
(ProcessorType::OneShot(processor), mode)
} else if let Some(start) = config.args.continuous {
let processor = ContinuousProcessor::new(config, start)?;
let mode = VerifierMode::Continuous { start_batch: start };
(ProcessorType::Continuous(processor), mode)
} else {
unreachable!("Clap ArgGroup should ensure either batch_range or continuous is set")
};
Ok(result)
}
}


@ -0,0 +1,77 @@
// SPDX-License-Identifier: Apache-2.0
// Copyright (c) 2023-2025 Matter Labs
//! One-shot batch processor for verifying a single batch or a range of batches
use crate::error;
use tokio_util::sync::CancellationToken;
use zksync_basic_types::L1BatchNumber;
use crate::{
core::{VerificationResult, VerifierConfig},
processor::BatchProcessor,
};
/// Processes a specific range of batches and then exits
pub struct OneShotProcessor {
batch_processor: BatchProcessor,
start_batch: L1BatchNumber,
end_batch: L1BatchNumber,
}
impl OneShotProcessor {
/// Create a new one-shot processor for the given batch range
pub fn new(
config: VerifierConfig,
start_batch: L1BatchNumber,
end_batch: L1BatchNumber,
) -> error::Result<Self> {
let batch_processor = BatchProcessor::new(config)?;
Ok(Self {
batch_processor,
start_batch,
end_batch,
})
}
/// Run the processor until completion or interruption
pub async fn run(
&self,
token: &CancellationToken,
) -> error::Result<Vec<(u32, VerificationResult)>> {
tracing::info!(
"Starting one-shot verification of batches {} to {}",
self.start_batch.0,
self.end_batch.0
);
let mut results = Vec::new();
let mut success_count = 0;
let mut failure_count = 0;
for batch_number in self.start_batch.0..=self.end_batch.0 {
let batch = L1BatchNumber(batch_number);
let result = self.batch_processor.process_batch(token, batch).await?;
match result {
VerificationResult::Success | VerificationResult::PartialSuccess { .. } => {
success_count += 1;
}
VerificationResult::Failure => failure_count += 1,
VerificationResult::Interrupted => {
results.push((batch_number, result));
break;
}
VerificationResult::NoProofsFound => {}
}
results.push((batch_number, result));
}
// Log overall results
BatchProcessor::log_overall_results(success_count, failure_count);
Ok(results)
}
}


@ -1,159 +0,0 @@
// SPDX-License-Identifier: Apache-2.0
// Copyright (c) 2023-2024 Matter Labs
use anyhow::{bail, Result};
use jsonrpsee_types::error::ErrorObject;
use reqwest::Client;
use serde::{Deserialize, Serialize};
use std::time::Duration;
use tokio::sync::watch;
use tracing::{error, warn};
use url::Url;
use zksync_basic_types::L1BatchNumber;
#[derive(Debug, Serialize, Deserialize)]
pub struct GetProofsRequest {
pub jsonrpc: String,
pub id: u32,
pub method: String,
pub params: (L1BatchNumber, String),
}
pub async fn get_proofs(
stop_receiver: &mut watch::Receiver<bool>,
batch_number: L1BatchNumber,
http_client: &Client,
rpc_url: &Url,
) -> Result<Vec<Proof>> {
let mut proofs_request = GetProofsRequest::new(batch_number);
let mut retries = 0;
let mut backoff = Duration::from_secs(1);
let max_backoff = Duration::from_secs(128);
let retry_backoff_multiplier: f32 = 2.0;
while !*stop_receiver.borrow() {
let proofs = proofs_request
.send(stop_receiver, http_client, rpc_url)
.await?;
if !proofs.is_empty() {
return Ok(proofs);
}
retries += 1;
warn!(
batch_no = batch_number.0, retries,
"No TEE proofs found for batch #{}. They may not be ready yet. Retrying in {} milliseconds.",
batch_number, backoff.as_millis(),
);
tokio::time::timeout(backoff, stop_receiver.changed())
.await
.ok();
backoff = std::cmp::min(backoff.mul_f32(retry_backoff_multiplier), max_backoff);
}
Ok(vec![])
}
impl GetProofsRequest {
pub fn new(batch_number: L1BatchNumber) -> Self {
GetProofsRequest {
jsonrpc: "2.0".to_string(),
id: 1,
method: "unstable_getTeeProofs".to_string(),
params: (batch_number, "sgx".to_string()),
}
}
pub async fn send(
&mut self,
stop_receiver: &mut watch::Receiver<bool>,
http_client: &Client,
rpc_url: &Url,
) -> Result<Vec<Proof>> {
let mut retries = 0;
let max_retries = 5;
let mut backoff = Duration::from_secs(1);
let max_backoff = Duration::from_secs(128);
let retry_backoff_multiplier: f32 = 2.0;
let mut response = None;
while !*stop_receiver.borrow() {
let result = http_client
.post(rpc_url.clone())
.json(self)
.send()
.await?
.error_for_status()?
.json::<GetProofsResponse>()
.await;
match result {
Ok(res) => match res.error {
None => {
response = Some(res);
break;
}
Some(error) => {
// Handle corner case, where the old RPC interface expects 'Sgx'
if let Some(data) = error.data() {
if data.get().contains("unknown variant `sgx`, expected `Sgx`") {
self.params.1 = "Sgx".to_string();
continue;
}
}
error!(?error, "received JSONRPC error {error:?}");
bail!("JSONRPC error {error:?}");
}
},
Err(err) => {
retries += 1;
if retries >= max_retries {
return Err(anyhow::anyhow!(
"Failed to send request to {} after {} retries: {}. Request details: {:?}",
rpc_url,
max_retries,
err,
self
));
}
warn!(
%err,
"Failed to send request to {rpc_url}. {retries}/{max_retries}, retrying in {} milliseconds. Request details: {:?}",
backoff.as_millis(),
self
);
tokio::time::timeout(backoff, stop_receiver.changed())
.await
.ok();
backoff = std::cmp::min(backoff.mul_f32(retry_backoff_multiplier), max_backoff);
}
};
}
Ok(response.map_or_else(Vec::new, |res| res.result.unwrap_or_default()))
}
}
#[derive(Debug, Serialize, Deserialize)]
pub struct GetProofsResponse {
pub jsonrpc: String,
pub result: Option<Vec<Proof>>,
pub id: u32,
#[serde(skip_serializing_if = "Option::is_none")]
pub error: Option<ErrorObject<'static>>,
}
#[derive(Debug, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct Proof {
pub l1_batch_number: u32,
pub tee_type: String,
pub pubkey: Vec<u8>,
pub signature: Vec<u8>,
pub proof: Vec<u8>,
pub proved_at: String,
pub attestation: Vec<u8>,
}


@ -0,0 +1,137 @@
// SPDX-License-Identifier: Apache-2.0
// Copyright (c) 2023-2025 Matter Labs
use crate::{
client::{HttpClient, RetryConfig, RetryHelper},
error::{Error, Result},
proof::{
parsing::ProofResponseParser,
types::{GetProofsRequest, GetProofsResponse, Proof},
},
};
use std::time::Duration;
use tokio_util::sync::CancellationToken;
use url::Url;
use zksync_basic_types::{tee_types::TeeType, L1BatchNumber};
/// Handles fetching proofs from the server with retry logic
pub struct ProofFetcher {
http_client: HttpClient,
rpc_url: Url,
retry_config: RetryConfig,
}
impl ProofFetcher {
/// Create a new proof fetcher
pub fn new(http_client: HttpClient, rpc_url: Url, retry_config: RetryConfig) -> Self {
Self {
http_client,
rpc_url,
retry_config,
}
}
/// Get proofs for a batch number with retry logic
pub async fn get_proofs(
&self,
token: &CancellationToken,
batch_number: L1BatchNumber,
tee_type: TeeType,
) -> Result<Vec<Proof>> {
let mut proofs_request = GetProofsRequest::new(batch_number, tee_type);
let mut backoff = Duration::from_secs(1);
let max_backoff = Duration::from_secs(128);
let retry_backoff_multiplier: f32 = 2.0;
while !token.is_cancelled() {
match self.send_request(&proofs_request, token).await {
Ok(response) => {
// Parse the response using the ProofResponseParser
match ProofResponseParser::parse_response(response) {
Ok(proofs) => {
// Filter valid proofs
let valid_proofs = ProofResponseParser::filter_valid_proofs(&proofs);
if !valid_proofs.is_empty() {
return Ok(valid_proofs);
}
// No valid proofs found, retry
let error_msg = format!(
"No valid TEE proofs found for batch #{}. They may not be ready yet. Retrying in {} milliseconds.",
batch_number.0,
backoff.as_millis()
);
tracing::warn!(batch_no = batch_number.0, "{}", error_msg);
// Here we could use the ProofFetching error if we needed to return immediately
// return Err(Error::ProofFetching(error_msg));
}
Err(e) => {
// Handle specific error for Sgx variant
if let Error::JsonRpc(msg) = &e {
if msg.contains("RPC requires 'Sgx' variant") {
tracing::debug!("Switching to 'Sgx' variant for RPC");
proofs_request.params.1 = "Sgx".to_string();
continue;
}
}
return Err(e);
}
}
}
Err(e) => {
return Err(e);
}
}
tokio::time::timeout(backoff, token.cancelled()).await.ok();
backoff = std::cmp::min(
Duration::from_millis(
(backoff.as_millis() as f32 * retry_backoff_multiplier) as u64,
),
max_backoff,
);
if token.is_cancelled() {
break;
}
}
// If we've reached this point, we've either been stopped or exhausted retries
if token.is_cancelled() {
// Return empty vector if stopped
Ok(vec![])
} else {
// Use the ProofFetching error variant if we've exhausted retries
Err(Error::proof_fetch(batch_number, "exhausted retries"))
}
}
/// Send a request to the server with retry logic
async fn send_request(
&self,
request: &GetProofsRequest,
token: &CancellationToken,
) -> Result<GetProofsResponse> {
let retry_helper = RetryHelper::new(self.retry_config.clone());
let request_clone = request.clone();
let http_client = self.http_client.clone();
let rpc_url = self.rpc_url.clone();
retry_helper
.execute(&format!("get_proofs_{}", request.params.0), || async {
let result = http_client
.send_json::<_, GetProofsResponse>(&rpc_url, &request_clone)
.await;
// Check if we need to abort due to stop signal
if token.is_cancelled() {
return Err(Error::Interrupted);
}
result
})
.await
}
}
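The retry loop above doubles the wait after each failed attempt and pins it at `max_backoff`. A minimal stdlib-only sketch of that schedule (the `next_backoff` helper is illustrative, not part of the crate):

```rust
/// Next backoff delay: scale by `factor`, but never exceed `max`.
/// Mirrors the `backoff = min(backoff * multiplier, max_backoff)` step above.
fn next_backoff(current: std::time::Duration, factor: f32, max: std::time::Duration) -> std::time::Duration {
    let scaled = std::time::Duration::from_millis((current.as_millis() as f32 * factor) as u64);
    std::cmp::min(scaled, max)
}

fn main() {
    let mut backoff = std::time::Duration::from_secs(1);
    let max = std::time::Duration::from_secs(128);
    for _ in 0..10 {
        backoff = next_backoff(backoff, 2.0, max);
    }
    // After enough doublings the delay stays pinned at the cap.
    assert_eq!(backoff, max);
}
```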


@@ -0,0 +1,9 @@
// SPDX-License-Identifier: Apache-2.0
// Copyright (c) 2023-2025 Matter Labs
mod fetcher;
mod parsing;
mod types;
pub use fetcher::ProofFetcher;
pub use types::Proof;


@@ -0,0 +1,277 @@
// SPDX-License-Identifier: Apache-2.0
// Copyright (c) 2023-2025 Matter Labs
use super::types::{GetProofsResponse, Proof};
use crate::error;
/// Handles parsing of proof responses and error handling
pub struct ProofResponseParser;
impl ProofResponseParser {
/// Parse a response and extract the proofs
pub fn parse_response(response: GetProofsResponse) -> error::Result<Vec<Proof>> {
// Handle JSON-RPC errors
if let Some(error) = response.error {
// Special case for handling the old RPC interface
if let Some(data) = error.data() {
if data.get().contains("unknown variant `sgx`, expected `Sgx`") {
return Err(error::Error::JsonRpc(
"RPC requires 'Sgx' variant instead of 'sgx'".to_string(),
));
}
}
return Err(error::Error::JsonRpc(format!("JSONRPC error: {error:?}")));
}
// Extract proofs from the result
Ok(response.result.unwrap_or_default())
}
/// Filter proofs to find valid ones
pub fn filter_valid_proofs(proofs: &[Proof]) -> Vec<Proof> {
proofs
.iter()
.filter(|proof| !proof.is_failed_or_picked())
.cloned()
.collect()
}
}
#[cfg(test)]
mod tests {
use super::*;
use jsonrpsee_types::error::ErrorObject;
#[test]
fn test_proof_is_permanently_ignored() {
let proof = Proof {
l1_batch_number: 123,
tee_type: "TDX".to_string(),
pubkey: None,
signature: None,
proof: None,
proved_at: "2023-01-01T00:00:00Z".to_string(),
status: Some("permanently_ignored".to_string()),
attestation: None,
};
assert!(proof.is_permanently_ignored());
let proof = Proof {
l1_batch_number: 123,
tee_type: "TDX".to_string(),
pubkey: None,
signature: None,
proof: None,
proved_at: "2023-01-01T00:00:00Z".to_string(),
status: Some("PERMANENTLY_IGNORED".to_string()),
attestation: None,
};
assert!(proof.is_permanently_ignored());
let proof = Proof {
l1_batch_number: 123,
tee_type: "TDX".to_string(),
pubkey: None,
signature: None,
proof: None,
proved_at: "2023-01-01T00:00:00Z".to_string(),
status: Some("other".to_string()),
attestation: None,
};
assert!(!proof.is_permanently_ignored());
let proof = Proof {
l1_batch_number: 123,
tee_type: "TDX".to_string(),
pubkey: None,
signature: None,
proof: None,
proved_at: "2023-01-01T00:00:00Z".to_string(),
status: None,
attestation: None,
};
assert!(!proof.is_permanently_ignored());
}
#[test]
fn test_proof_is_failed_or_picked() {
let proof = Proof {
l1_batch_number: 123,
tee_type: "TDX".to_string(),
pubkey: None,
signature: None,
proof: None,
proved_at: "2023-01-01T00:00:00Z".to_string(),
status: Some("failed".to_string()),
attestation: None,
};
assert!(proof.is_failed_or_picked());
let proof = Proof {
l1_batch_number: 123,
tee_type: "TDX".to_string(),
pubkey: None,
signature: None,
proof: None,
proved_at: "2023-01-01T00:00:00Z".to_string(),
status: Some("picked_by_prover".to_string()),
attestation: None,
};
assert!(proof.is_failed_or_picked());
let proof = Proof {
l1_batch_number: 123,
tee_type: "TDX".to_string(),
pubkey: None,
signature: None,
proof: None,
proved_at: "2023-01-01T00:00:00Z".to_string(),
status: Some("FAILED".to_string()),
attestation: None,
};
assert!(proof.is_failed_or_picked());
let proof = Proof {
l1_batch_number: 123,
tee_type: "TDX".to_string(),
pubkey: None,
signature: None,
proof: None,
proved_at: "2023-01-01T00:00:00Z".to_string(),
status: Some("other".to_string()),
attestation: None,
};
assert!(!proof.is_failed_or_picked());
let proof = Proof {
l1_batch_number: 123,
tee_type: "TDX".to_string(),
pubkey: None,
signature: None,
proof: None,
proved_at: "2023-01-01T00:00:00Z".to_string(),
status: None,
attestation: None,
};
assert!(!proof.is_failed_or_picked());
}
#[test]
fn test_parse_response_success() {
let response = GetProofsResponse {
jsonrpc: "2.0".to_string(),
result: Some(vec![Proof {
l1_batch_number: 123,
tee_type: "TDX".to_string(),
pubkey: None,
signature: None,
proof: None,
proved_at: "2023-01-01T00:00:00Z".to_string(),
status: None,
attestation: None,
}]),
id: 1,
error: None,
};
let proofs = ProofResponseParser::parse_response(response).unwrap();
assert_eq!(proofs.len(), 1);
assert_eq!(proofs[0].l1_batch_number, 123);
}
#[test]
fn test_parse_response_error() {
let response = GetProofsResponse {
jsonrpc: "2.0".to_string(),
result: None,
id: 1,
error: Some(ErrorObject::owned(1, "Error", None::<()>)),
};
let error = ProofResponseParser::parse_response(response).unwrap_err();
match error {
error::Error::JsonRpc(msg) => {
assert!(msg.contains("JSONRPC error"));
}
_ => panic!("Expected JsonRpc error"),
}
}
#[test]
fn test_parse_response_sgx_variant_error() {
let error_obj = ErrorObject::owned(
1,
"Error",
Some(
serde_json::to_value("unknown variant `sgx`, expected `Sgx`")
.unwrap()
.to_string(),
),
);
let response = GetProofsResponse {
jsonrpc: "2.0".to_string(),
result: None,
id: 1,
error: Some(error_obj),
};
let error = ProofResponseParser::parse_response(response).unwrap_err();
match error {
error::Error::JsonRpc(msg) => {
assert!(msg.contains("RPC requires 'Sgx' variant"));
}
_ => panic!("Expected JsonRpc error about Sgx variant"),
}
}
#[test]
fn test_filter_valid_proofs() {
let proofs = vec![
Proof {
l1_batch_number: 123,
tee_type: "TDX".to_string(),
pubkey: None,
signature: None,
proof: None,
proved_at: "2023-01-01T00:00:00Z".to_string(),
status: None,
attestation: None,
},
Proof {
l1_batch_number: 124,
tee_type: "TDX".to_string(),
pubkey: None,
signature: None,
proof: None,
proved_at: "2023-01-01T00:00:00Z".to_string(),
status: Some("failed".to_string()),
attestation: None,
},
Proof {
l1_batch_number: 125,
tee_type: "TDX".to_string(),
pubkey: None,
signature: None,
proof: None,
proved_at: "2023-01-01T00:00:00Z".to_string(),
status: Some("picked_by_prover".to_string()),
attestation: None,
},
];
let valid_proofs = ProofResponseParser::filter_valid_proofs(&proofs);
assert_eq!(valid_proofs.len(), 1);
assert_eq!(valid_proofs[0].l1_batch_number, 123);
}
}
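The `sgx`/`Sgx` fallback above hinges on matching a specific serde error message from the legacy RPC server. A stdlib-only sketch of that detection (the `sgx_retry_variant` function is hypothetical; the message text is the one checked in `ProofResponseParser`):

```rust
/// Returns the TEE-type string to retry with when the server speaks the
/// old RPC dialect that expects `Sgx` instead of `sgx`.
fn sgx_retry_variant(error_data: &str) -> Option<&'static str> {
    error_data
        .contains("unknown variant `sgx`, expected `Sgx`")
        .then_some("Sgx")
}

fn main() {
    let old_server = "data did not match any variant: unknown variant `sgx`, expected `Sgx`";
    // Old servers trigger a retry with the capitalized variant; other errors do not.
    assert_eq!(sgx_retry_variant(old_server), Some("Sgx"));
    assert_eq!(sgx_retry_variant("connection reset"), None);
}
```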


@@ -0,0 +1,83 @@
// SPDX-License-Identifier: Apache-2.0
// Copyright (c) 2023-2025 Matter Labs
use jsonrpsee_types::error::ErrorObject;
use serde::{Deserialize, Serialize};
use serde_with::{hex::Hex, serde_as};
use zksync_basic_types::{tee_types::TeeType, L1BatchNumber};
/// Request structure for fetching proofs
#[derive(Debug, Serialize, Deserialize, Clone)]
pub struct GetProofsRequest {
pub jsonrpc: String,
pub id: u32,
pub method: String,
pub params: (L1BatchNumber, String),
}
impl GetProofsRequest {
/// Create a new request for the given batch number
pub fn new(batch_number: L1BatchNumber, tee_type: TeeType) -> Self {
GetProofsRequest {
jsonrpc: "2.0".to_string(),
id: 1,
method: "unstable_getTeeProofs".to_string(),
params: (batch_number, tee_type.to_string()),
}
}
}
/// Response structure for proof requests
#[derive(Debug, Serialize, Deserialize)]
pub struct GetProofsResponse {
pub jsonrpc: String,
pub result: Option<Vec<Proof>>,
pub id: u32,
#[serde(skip_serializing_if = "Option::is_none")]
pub error: Option<ErrorObject<'static>>,
}
/// Proof structure containing attestation and signature data
#[serde_as]
#[derive(Debug, Serialize, Deserialize, Clone)]
#[serde(rename_all = "camelCase")]
pub struct Proof {
pub l1_batch_number: u32,
pub tee_type: String,
#[serde_as(as = "Option<Hex>")]
pub pubkey: Option<Vec<u8>>,
#[serde_as(as = "Option<Hex>")]
pub signature: Option<Vec<u8>>,
#[serde_as(as = "Option<Hex>")]
pub proof: Option<Vec<u8>>,
pub proved_at: String,
pub status: Option<String>,
#[serde_as(as = "Option<Hex>")]
pub attestation: Option<Vec<u8>>,
}
impl Proof {
/// Check if the proof is marked as permanently ignored
pub fn is_permanently_ignored(&self) -> bool {
self.status
.as_ref()
.is_some_and(|s| s.eq_ignore_ascii_case("permanently_ignored"))
}
/// Check if the proof is failed or picked by a prover
pub fn is_failed_or_picked(&self) -> bool {
self.status.as_ref().is_some_and(|s| {
s.eq_ignore_ascii_case("failed") || s.eq_ignore_ascii_case("picked_by_prover")
})
}
/// Get the attestation bytes or an empty vector if not present
pub fn attestation_bytes(&self) -> Vec<u8> {
self.attestation.clone().unwrap_or_default()
}
/// Get the signature bytes or an empty vector if not present
pub fn signature_bytes(&self) -> Vec<u8> {
self.signature.clone().unwrap_or_default()
}
}
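The status helpers above compare case-insensitively, so `failed`, `FAILED`, and `Failed` are all filtered out. A stdlib-only sketch of the same predicate (a free function rather than a method, for illustration):

```rust
/// Mirrors the case-insensitive check in `Proof::is_failed_or_picked`:
/// a proof with no status at all counts as still valid.
fn is_failed_or_picked(status: Option<&str>) -> bool {
    status.is_some_and(|s| {
        s.eq_ignore_ascii_case("failed") || s.eq_ignore_ascii_case("picked_by_prover")
    })
}

fn main() {
    assert!(is_failed_or_picked(Some("FAILED")));
    assert!(is_failed_or_picked(Some("picked_by_prover")));
    assert!(!is_failed_or_picked(Some("other")));
    assert!(!is_failed_or_picked(None));
}
```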


@@ -1,129 +0,0 @@
// SPDX-License-Identifier: Apache-2.0
// Copyright (c) 2023-2024 Matter Labs
use anyhow::{Context, Result};
use hex::encode;
use secp256k1::{constants::PUBLIC_KEY_SIZE, ecdsa::Signature, Message, PublicKey};
use teepot::{
client::TcbLevel,
sgx::{tee_qv_get_collateral, verify_quote_with_collateral, QuoteVerificationResult},
};
use tracing::{debug, info, warn};
use zksync_basic_types::{L1BatchNumber, H256};
use crate::args::AttestationPolicyArgs;
use crate::client::JsonRpcClient;
pub async fn verify_batch_proof(
quote_verification_result: &QuoteVerificationResult<'_>,
attestation_policy: &AttestationPolicyArgs,
node_client: &impl JsonRpcClient,
signature: &[u8],
batch_number: L1BatchNumber,
) -> Result<bool> {
if !is_quote_matching_policy(attestation_policy, quote_verification_result) {
return Ok(false);
}
let batch_no = batch_number.0;
let public_key = PublicKey::from_slice(
&quote_verification_result.quote.report_body.reportdata[..PUBLIC_KEY_SIZE],
)?;
debug!(batch_no, "public key: {}", public_key);
let root_hash = node_client.get_root_hash(batch_number).await?;
debug!(batch_no, "root hash: {}", root_hash);
let is_verified = verify_signature(signature, public_key, root_hash)?;
if is_verified {
info!(batch_no, signature = %encode(signature), "Signature verified successfully.");
} else {
warn!(batch_no, signature = %encode(signature), "Failed to verify signature!");
}
Ok(is_verified)
}
pub fn verify_attestation_quote(attestation_quote_bytes: &[u8]) -> Result<QuoteVerificationResult> {
let collateral =
tee_qv_get_collateral(attestation_quote_bytes).context("Failed to get collateral!")?;
let unix_time: i64 = std::time::SystemTime::now()
.duration_since(std::time::UNIX_EPOCH)?
.as_secs() as _;
verify_quote_with_collateral(attestation_quote_bytes, Some(&collateral), unix_time)
.context("Failed to verify quote with collateral!")
}
pub fn log_quote_verification_summary(quote_verification_result: &QuoteVerificationResult) {
let QuoteVerificationResult {
collateral_expired,
result,
quote,
advisories,
..
} = quote_verification_result;
if *collateral_expired {
warn!("Freshly fetched collateral expired!");
}
let tcblevel = TcbLevel::from(*result);
info!(
"Quote verification result: {}. mrsigner: {}, mrenclave: {}, reportdata: {}. Advisory IDs: {}.",
tcblevel,
hex::encode(quote.report_body.mrsigner),
hex::encode(quote.report_body.mrenclave),
hex::encode(quote.report_body.reportdata),
if advisories.is_empty() {
"None".to_string()
} else {
advisories.iter().map(ToString::to_string).collect::<Vec<_>>().join(", ")
}
);
}
fn verify_signature(signature: &[u8], public_key: PublicKey, root_hash: H256) -> Result<bool> {
let signature = Signature::from_compact(signature)?;
let root_hash_msg = Message::from_digest_slice(&root_hash.0)?;
Ok(signature.verify(&root_hash_msg, &public_key).is_ok())
}
fn is_quote_matching_policy(
attestation_policy: &AttestationPolicyArgs,
quote_verification_result: &QuoteVerificationResult<'_>,
) -> bool {
let quote = &quote_verification_result.quote;
let tcblevel = TcbLevel::from(quote_verification_result.result);
if !attestation_policy.sgx_allowed_tcb_levels.contains(tcblevel) {
warn!(
"Quote verification failed: TCB level mismatch (expected one of: {:?}, actual: {})",
attestation_policy.sgx_allowed_tcb_levels, tcblevel
);
return false;
}
check_policy(
attestation_policy.sgx_mrsigners.as_deref(),
&quote.report_body.mrsigner,
"mrsigner",
) && check_policy(
attestation_policy.sgx_mrenclaves.as_deref(),
&quote.report_body.mrenclave,
"mrenclave",
)
}
fn check_policy(policy: Option<&str>, actual_value: &[u8], field_name: &str) -> bool {
if let Some(valid_values) = policy {
let valid_values: Vec<&str> = valid_values.split(',').collect();
let actual_value = hex::encode(actual_value);
if !valid_values.contains(&actual_value.as_str()) {
warn!(
"Quote verification failed: {} mismatch (expected one of: {:?}, actual: {})",
field_name, valid_values, actual_value
);
return false;
}
debug!(field_name, actual_value, "Attestation policy check passed");
}
true
}


@@ -0,0 +1,29 @@
// SPDX-License-Identifier: Apache-2.0
// Copyright (c) 2023-2025 Matter Labs
use teepot::quote::{get_collateral, verify_quote_with_collateral, QuoteVerificationResult};
use crate::error;
/// Handles verification of attestation quotes
pub struct AttestationVerifier;
impl AttestationVerifier {
/// Verify an attestation quote
pub fn verify_quote(attestation_quote_bytes: &[u8]) -> error::Result<QuoteVerificationResult> {
// Get collateral for the quote
let collateral = get_collateral(attestation_quote_bytes)?;
// Get current time for verification
let unix_time: i64 = std::time::SystemTime::now()
.duration_since(std::time::UNIX_EPOCH)
.map_err(|e| error::Error::internal(format!("Failed to get system time: {e}")))?
.as_secs() as _;
// Verify the quote with the collateral
let res =
verify_quote_with_collateral(attestation_quote_bytes, Some(&collateral), unix_time)?;
Ok(res)
}
}


@@ -0,0 +1,141 @@
// SPDX-License-Identifier: Apache-2.0
// Copyright (c) 2025 Matter Labs
use crate::{
client::JsonRpcClient,
core::AttestationPolicy,
error,
proof::Proof,
verification::{AttestationVerifier, PolicyEnforcer, SignatureVerifier, VerificationReporter},
};
use tokio_util::sync::CancellationToken;
use zksync_basic_types::L1BatchNumber;
/// Result of a batch verification
#[derive(Debug, Clone, Copy)]
pub struct BatchVerificationResult {
/// Total number of proofs processed
pub total_count: u32,
/// Number of proofs that were verified successfully
pub verified_count: u32,
/// Number of proofs that failed verification
pub unverified_count: u32,
}
/// Handles the batch verification process
pub struct BatchVerifier<C: JsonRpcClient> {
node_client: C,
attestation_policy: AttestationPolicy,
}
impl<C: JsonRpcClient> BatchVerifier<C> {
/// Create a new batch verifier
pub fn new(node_client: C, attestation_policy: AttestationPolicy) -> Self {
Self {
node_client,
attestation_policy,
}
}
/// Verify proofs for a batch
pub async fn verify_batch_proofs(
&self,
token: &CancellationToken,
batch_number: L1BatchNumber,
proofs: Vec<Proof>,
) -> error::Result<BatchVerificationResult> {
let batch_no = batch_number.0;
let mut total_proofs_count: u32 = 0;
let mut verified_proofs_count: u32 = 0;
for proof in proofs {
if token.is_cancelled() {
tracing::warn!("Stop signal received during batch verification");
return Ok(BatchVerificationResult {
total_count: total_proofs_count,
verified_count: verified_proofs_count,
unverified_count: total_proofs_count - verified_proofs_count,
});
}
total_proofs_count += 1;
let tee_type = proof.tee_type.to_uppercase();
if proof.is_permanently_ignored() {
tracing::debug!(
batch_no,
tee_type,
"Proof is marked as permanently ignored. Skipping."
);
continue;
}
tracing::debug!(batch_no, tee_type, proof.proved_at, "Verifying proof.");
let attestation_bytes = proof.attestation_bytes();
let signature_bytes = proof.signature_bytes();
tracing::debug!(
batch_no,
"Verifying quote ({} bytes)...",
attestation_bytes.len()
);
// Verify attestation
let quote_verification_result = AttestationVerifier::verify_quote(&attestation_bytes)?;
// Log verification results
VerificationReporter::log_quote_verification_summary(&quote_verification_result);
// Check if attestation matches policy
let policy_matches = PolicyEnforcer::validate_policy(
&self.attestation_policy,
&quote_verification_result,
);
if let Err(e) = policy_matches {
tracing::error!(batch_no, tee_type, "Attestation policy check failed: {e}");
continue;
}
// Verify signature
let root_hash = self
.node_client
.get_root_hash(L1BatchNumber(proof.l1_batch_number))
.await?;
let signature_verified = SignatureVerifier::verify_batch_proof(
&quote_verification_result,
root_hash,
&signature_bytes,
)?;
if signature_verified {
tracing::info!(
batch_no,
proof.proved_at,
tee_type,
"Verification succeeded.",
);
verified_proofs_count += 1;
} else {
tracing::warn!(batch_no, proof.proved_at, tee_type, "Verification failed!");
}
}
let unverified_proofs_count = total_proofs_count.saturating_sub(verified_proofs_count);
// Log batch verification results
VerificationReporter::log_batch_verification_results(
batch_no,
verified_proofs_count,
unverified_proofs_count,
);
Ok(BatchVerificationResult {
total_count: total_proofs_count,
verified_count: verified_proofs_count,
unverified_count: unverified_proofs_count,
})
}
}
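The loop above keeps running totals and derives the unverified count with `saturating_sub`. A stdlib-only sketch of how per-proof outcomes fold into the summary struct (the `tally` helper is illustrative, not part of the crate):

```rust
/// Summary counters as returned by `BatchVerifier::verify_batch_proofs`.
#[derive(Debug, PartialEq)]
struct BatchVerificationResult {
    total_count: u32,
    verified_count: u32,
    unverified_count: u32,
}

/// Folds per-proof pass/fail outcomes into the batch summary.
fn tally(outcomes: &[bool]) -> BatchVerificationResult {
    let total_count = outcomes.len() as u32;
    let verified_count = outcomes.iter().filter(|&&ok| ok).count() as u32;
    BatchVerificationResult {
        total_count,
        verified_count,
        // saturating_sub guards against underflow even if counts drift.
        unverified_count: total_count.saturating_sub(verified_count),
    }
}

fn main() {
    let result = tally(&[true, false, true]);
    assert_eq!(
        result,
        BatchVerificationResult { total_count: 3, verified_count: 2, unverified_count: 1 }
    );
}
```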


@@ -0,0 +1,14 @@
// SPDX-License-Identifier: Apache-2.0
// Copyright (c) 2023-2025 Matter Labs
mod attestation;
mod batch;
mod policy;
mod reporting;
mod signature;
pub use attestation::AttestationVerifier;
pub use batch::BatchVerifier;
pub use policy::PolicyEnforcer;
pub use reporting::VerificationReporter;
pub use signature::SignatureVerifier;


@@ -0,0 +1,207 @@
// SPDX-License-Identifier: Apache-2.0
// Copyright (c) 2023-2025 Matter Labs
use crate::{
core::AttestationPolicy,
error::{Error, Result},
};
use bytes::Bytes;
use enumset::EnumSet;
use teepot::quote::{tcblevel::TcbLevel, QuoteVerificationResult, Report};
/// Enforces policy requirements on attestation quotes
pub struct PolicyEnforcer;
impl PolicyEnforcer {
/// Check if a quote matches the attestation policy
pub fn validate_policy(
attestation_policy: &AttestationPolicy,
quote_verification_result: &QuoteVerificationResult,
) -> Result<()> {
let quote = &quote_verification_result.quote;
let tcblevel = quote_verification_result.result;
match &quote.report {
Report::SgxEnclave(report_body) => {
// Validate TCB level
Self::validate_tcb_level(attestation_policy.sgx_allowed_tcb_levels, tcblevel)?;
// Validate SGX Advisories
for advisory in &quote_verification_result.advisories {
Self::check_policy(
attestation_policy.sgx_allowed_advisory_ids.as_deref(),
advisory,
"advisories",
)?;
}
// Validate SGX policies
Self::check_policy_hash(
attestation_policy.sgx_mrsigners.as_deref(),
&report_body.mr_signer,
"mrsigner",
)?;
Self::check_policy_hash(
attestation_policy.sgx_mrenclaves.as_deref(),
&report_body.mr_enclave,
"mrenclave",
)
}
Report::TD10(report_body) => {
// Validate TCB level
Self::validate_tcb_level(attestation_policy.tdx_allowed_tcb_levels, tcblevel)?;
// Validate TDX Advisories
for advisory in &quote_verification_result.advisories {
Self::check_policy(
attestation_policy.tdx_allowed_advisory_ids.as_deref(),
advisory,
"advisories",
)?;
}
// Build combined TDX MR and validate
let tdx_mr = Self::build_tdx_mr([
&report_body.mr_td,
&report_body.rt_mr0,
&report_body.rt_mr1,
&report_body.rt_mr2,
&report_body.rt_mr3,
]);
Self::check_policy_hash(attestation_policy.tdx_mrs.as_deref(), &tdx_mr, "tdxmr")
}
Report::TD15(report_body) => {
// Validate TCB level
Self::validate_tcb_level(attestation_policy.tdx_allowed_tcb_levels, tcblevel)?;
// Validate TDX Advisories
for advisory in &quote_verification_result.advisories {
Self::check_policy(
attestation_policy.tdx_allowed_advisory_ids.as_deref(),
advisory,
"advisories",
)?;
}
// Build combined TDX MR and validate
let tdx_mr = Self::build_tdx_mr([
&report_body.base.mr_td,
&report_body.base.rt_mr0,
&report_body.base.rt_mr1,
&report_body.base.rt_mr2,
&report_body.base.rt_mr3,
]);
Self::check_policy_hash(attestation_policy.tdx_mrs.as_deref(), &tdx_mr, "tdxmr")
}
_ => Err(Error::policy_violation("Unknown quote report format")),
}
}
/// Helper method to validate TCB levels
fn validate_tcb_level(allowed_levels: EnumSet<TcbLevel>, actual_level: TcbLevel) -> Result<()> {
if !allowed_levels.contains(actual_level) {
let error_msg = format!(
"Quote verification failed: TCB level mismatch (expected one of: {allowed_levels:?}, actual: {actual_level})",
);
return Err(Error::policy_violation(error_msg));
}
Ok(())
}
/// Helper method to build combined TDX measurement register
fn build_tdx_mr<const N: usize>(parts: [&[u8]; N]) -> Vec<u8> {
parts.into_iter().flatten().copied().collect()
}
/// Check if a policy value matches the actual value
fn check_policy(policy: Option<&[String]>, actual_value: &str, field_name: &str) -> Result<()> {
if let Some(valid_values) = policy {
if !valid_values.iter().any(|value| value == actual_value) {
let error_msg =
format!(
"Quote verification failed: {} mismatch (expected one of: [ {} ], actual: {})",
field_name, valid_values.join(", "), actual_value
);
return Err(Error::policy_violation(error_msg));
}
tracing::debug!(field_name, actual_value, "Attestation policy check passed");
}
Ok(())
}
fn check_policy_hash(
policy: Option<&[Bytes]>,
actual_value: &[u8],
field_name: &str,
) -> Result<()> {
if let Some(valid_values) = policy {
let actual_value = Bytes::copy_from_slice(actual_value);
if !valid_values.contains(&actual_value) {
let valid_values = valid_values
.iter()
.map(hex::encode)
.collect::<Vec<_>>()
.join(", ");
let error_msg = format!(
"Quote verification failed: {field_name} mismatch (expected one of: [ {valid_values} ], actual: {actual_value:x})"
);
return Err(Error::policy_violation(error_msg));
}
tracing::debug!(
field_name,
actual_value = format!("{actual_value:x}"),
"Attestation policy check passed"
);
}
Ok(())
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_check_policy_hash() {
// Test with no policy (should pass)
PolicyEnforcer::check_policy_hash(None, &[1, 2, 3], "test").unwrap();
// Test with matching policy
let actual_value: Bytes = hex::decode("01020304").unwrap().into();
PolicyEnforcer::check_policy_hash(
Some(vec![actual_value.clone()]).as_deref(),
&actual_value,
"test",
)
.unwrap();
// Test with matching policy (multiple values)
PolicyEnforcer::check_policy_hash(
Some(vec![
"aabbcc".into(),
"01020304".into(),
"ddeeff".into(),
actual_value.clone(),
])
.as_deref(),
&actual_value,
"test",
)
.unwrap();
// Test with non-matching policy
PolicyEnforcer::check_policy_hash(
Some(vec!["aabbcc".into(), "ddeeff".into()]).as_deref(),
&actual_value,
"test",
)
.unwrap_err();
}
}
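`build_tdx_mr` above concatenates the five TDX measurement registers into one byte string, which is then compared against the `tdx_mrs` policy entries as a whole. A stdlib-only sketch of that concatenation (same shape as the private helper, shown standalone for illustration):

```rust
/// Concatenate measurement registers into a single byte string,
/// as `PolicyEnforcer::build_tdx_mr` does for mr_td and rt_mr0..rt_mr3.
fn build_tdx_mr<const N: usize>(parts: [&[u8]; N]) -> Vec<u8> {
    parts.into_iter().flatten().copied().collect()
}

fn main() {
    let mr_td = [0x11u8, 0x22];
    let rt_mr0 = [0x33u8];
    // The combined value preserves order: mr_td bytes first, then rt_mr0.
    assert_eq!(build_tdx_mr([mr_td.as_slice(), rt_mr0.as_slice()]), vec![0x11, 0x22, 0x33]);
}
```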


@@ -0,0 +1,92 @@
// SPDX-License-Identifier: Apache-2.0
// Copyright (c) 2023-2025 Matter Labs
use teepot::quote::QuoteVerificationResult;
/// Handles reporting and logging of verification results
pub struct VerificationReporter;
impl VerificationReporter {
/// Log summary of a quote verification
pub fn log_quote_verification_summary(quote_verification_result: &QuoteVerificationResult) {
let QuoteVerificationResult {
collateral_expired,
result: tcblevel,
quote,
advisories,
..
} = quote_verification_result;
if *collateral_expired {
tracing::warn!("Freshly fetched collateral expired!");
}
let advisories = if advisories.is_empty() {
"None".to_string()
} else {
advisories
.iter()
.map(ToString::to_string)
.collect::<Vec<_>>()
.join(", ")
};
tracing::debug!(
"Quote verification result: {tcblevel}. {report}. Advisory IDs: {advisories}.",
report = &quote.report
);
}
/// Log the results of batch verification
pub fn log_batch_verification_results(
batch_no: u32,
verified_proofs_count: u32,
unverified_proofs_count: u32,
) {
if unverified_proofs_count > 0 {
if verified_proofs_count == 0 {
tracing::error!(
batch_no,
"All {} proofs failed verification!",
unverified_proofs_count
);
} else {
tracing::warn!(
batch_no,
"Some proofs failed verification. Unverified proofs: {}. Verified proofs: {}.",
unverified_proofs_count,
verified_proofs_count
);
}
} else if verified_proofs_count > 0 {
tracing::info!(
batch_no,
"All {} proofs verified successfully!",
verified_proofs_count
);
}
}
/// Log overall verification results for multiple batches
pub fn log_overall_verification_results(
verified_batches_count: u32,
unverified_batches_count: u32,
) {
if unverified_batches_count > 0 {
if verified_batches_count == 0 {
tracing::error!(
"All {} batches failed verification!",
unverified_batches_count
);
} else {
tracing::error!(
"Some batches failed verification! Unverified batches: {}. Verified batches: {}.",
unverified_batches_count,
verified_batches_count
);
}
} else {
tracing::info!("{} batches verified successfully!", verified_batches_count);
}
}
}


@@ -0,0 +1,156 @@
// SPDX-License-Identifier: Apache-2.0
// Copyright (c) 2023-2025 Matter Labs
use secp256k1::{
ecdsa::{RecoverableSignature, RecoveryId, Signature},
Message, SECP256K1,
};
use teepot::{
ethereum::{public_key_to_ethereum_address, recover_signer},
prover::reportdata::ReportData,
quote::QuoteVerificationResult,
};
use zksync_basic_types::H256;
use crate::error;
const SIGNATURE_LENGTH_WITH_RECOVERY_ID: usize = 65;
const SIGNATURE_LENGTH_WITHOUT_RECOVERY_ID: usize = 64;
/// Handles verification of signatures in proofs
pub struct SignatureVerifier;
impl SignatureVerifier {
/// Verify a batch proof signature
pub fn verify_batch_proof(
quote_verification_result: &QuoteVerificationResult,
root_hash: H256,
signature: &[u8],
) -> error::Result<bool> {
let report_data_bytes = quote_verification_result.quote.get_report_data();
tracing::trace!(?report_data_bytes);
let report_data = ReportData::try_from(report_data_bytes)
.map_err(|e| error::Error::internal(format!("Could not convert to ReportData: {e}")))?;
Self::verify(&report_data, &root_hash, signature)
}
/// Verify signature against report data and root hash
pub fn verify(
report_data: &ReportData,
root_hash: &H256,
signature: &[u8],
) -> error::Result<bool> {
match report_data {
ReportData::V0(report) => Self::verify_v0(report, root_hash, signature),
ReportData::V1(report) => Self::verify_v1(report, root_hash, signature),
ReportData::Unknown(_) => Ok(false),
}
}
/// Verify a V0 report
fn verify_v0(
report: &teepot::prover::reportdata::ReportDataV0,
root_hash: &H256,
signature: &[u8],
) -> error::Result<bool> {
tracing::debug!("ReportData::V0");
let signature = Signature::from_compact(signature)
.map_err(|e| error::Error::signature_verification(e.to_string()))?;
let root_hash_msg = Message::from_digest(root_hash.0);
Ok(signature.verify(root_hash_msg, &report.pubkey).is_ok())
}
/// Verify a V1 report
fn verify_v1(
report: &teepot::prover::reportdata::ReportDataV1,
root_hash: &H256,
signature: &[u8],
) -> error::Result<bool> {
tracing::debug!("ReportData::V1");
let ethereum_address_from_report = report.ethereum_address;
let root_hash_msg = Message::from_digest(
root_hash
.as_bytes()
.try_into()
.map_err(|_| error::Error::signature_verification("root hash not 32 bytes"))?,
);
tracing::trace!("sig len = {}", signature.len());
// Try to recover Ethereum address from signature
let ethereum_address_from_signature = match signature.len() {
// Handle 64-byte signature case (missing recovery ID)
SIGNATURE_LENGTH_WITHOUT_RECOVERY_ID => {
SignatureVerifier::recover_address_with_missing_recovery_id(
signature,
&root_hash_msg,
)?
}
// Standard 65-byte signature case
SIGNATURE_LENGTH_WITH_RECOVERY_ID => {
let signature_bytes: [u8; SIGNATURE_LENGTH_WITH_RECOVERY_ID] =
signature.try_into().map_err(|_| {
error::Error::signature_verification(
"Expected 65-byte signature but got a different length",
)
})?;
recover_signer(&signature_bytes, &root_hash_msg).map_err(|e| {
error::Error::signature_verification(format!("Failed to recover signer: {e}"))
})?
}
// Any other length is invalid
len => {
return Err(error::Error::signature_verification(format!(
"Invalid signature length: {len} bytes"
)))
}
};
// Log verification details
tracing::debug!(
"Root hash: {}. Ethereum address from the attestation quote: {}. Ethereum address from the signature: {}.",
root_hash,
hex::encode(ethereum_address_from_report),
hex::encode(ethereum_address_from_signature),
);
Ok(ethereum_address_from_signature == ethereum_address_from_report)
}
/// Helper function to recover Ethereum address when recovery ID is missing
fn recover_address_with_missing_recovery_id(
signature: &[u8],
message: &Message,
) -> error::Result<[u8; 20]> {
tracing::info!("Signature is missing RecoveryId!");
// Try all possible recovery IDs
for rec_id in [
RecoveryId::Zero,
RecoveryId::One,
RecoveryId::Two,
RecoveryId::Three,
] {
let Ok(rec_sig) = RecoverableSignature::from_compact(signature, rec_id) else {
continue;
};
let Ok(public) = SECP256K1.recover_ecdsa(*message, &rec_sig) else {
continue;
};
let ethereum_address = public_key_to_ethereum_address(&public);
tracing::info!("Had to use RecoveryId::{rec_id:?}");
return Ok(ethereum_address);
}
// No valid recovery ID found
Err(error::Error::signature_verification(
"Could not find valid recovery ID",
))
}
}
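`verify_v1` above dispatches on the raw signature length: 64 bytes means the recovery id is missing and must be brute-forced, 65 bytes means it is appended, and anything else is rejected. A stdlib-only sketch of that dispatch (the `classify_signature` function is hypothetical, for illustration):

```rust
const SIGNATURE_LENGTH_WITHOUT_RECOVERY_ID: usize = 64;
const SIGNATURE_LENGTH_WITH_RECOVERY_ID: usize = 65;

/// Classify a compact ECDSA signature by length, mirroring the
/// match in `SignatureVerifier::verify_v1`.
fn classify_signature(sig: &[u8]) -> Result<&'static str, String> {
    match sig.len() {
        SIGNATURE_LENGTH_WITHOUT_RECOVERY_ID => Ok("compact, recovery id must be brute-forced"),
        SIGNATURE_LENGTH_WITH_RECOVERY_ID => Ok("compact with trailing recovery id byte"),
        len => Err(format!("Invalid signature length: {len} bytes")),
    }
}

fn main() {
    assert!(classify_signature(&[0u8; 64]).is_ok());
    assert!(classify_signature(&[0u8; 65]).is_ok());
    // Any other length is invalid.
    assert!(classify_signature(&[0u8; 10]).is_err());
}
```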


@@ -0,0 +1,8 @@
# SPDX-License-Identifier: Apache-2.0
# Copyright (c) 2024 Matter Labs
{ teepot }: teepot.teepot.passthru.craneLib.cargoClippy (
teepot.teepot.passthru.commonArgs // {
pname = "teepot";
inherit (teepot.teepot.passthru) cargoArtifacts;
}
)


@@ -1,7 +1,7 @@
 # SPDX-License-Identifier: Apache-2.0
 # Copyright (c) 2024 Matter Labs
-{ teepotCrate }: teepotCrate.craneLib.cargoFmt (
-  teepotCrate.commonArgs // {
+{ teepot }: teepot.teepot.passthru.craneLib.cargoDeny (
+  teepot.teepot.passthru.commonArgs // {
   pname = "teepot";
 }
)

View file

@@ -1,7 +1,7 @@
 # SPDX-License-Identifier: Apache-2.0
 # Copyright (c) 2024 Matter Labs
-{ teepotCrate }: teepotCrate.craneLib.cargoDeny (
-  teepotCrate.commonArgs // {
+{ teepot }: teepot.teepot.passthru.craneLib.cargoFmt (
+  teepot.teepot.passthru.commonArgs // {
   pname = "teepot";
 }
 )

View file

@@ -0,0 +1,143 @@
# CLAUDE.md
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## Overview
This crate (`intel-dcap-api`) is a Rust client library for Intel's Data Center Attestation Primitives (DCAP) API. It
provides access to Intel's Trusted Services API for SGX and TDX attestation, including TCB info, PCK certificates, CRLs,
and enclave identity verification.
## Features
- Support for both API v3 and v4
- Async/await API using tokio
- Comprehensive error handling with Intel-specific error codes
- Type-safe request/response structures
- Support for SGX and TDX platforms
- Real data integration tests
- **Automatic rate limit handling with configurable retries**
## Development Commands
```bash
# Build
cargo build
cargo build --no-default-features --features rustls # Use rustls instead of default TLS
# Test
cargo test
# Lint
cargo clippy
# Examples
cargo run --example example # Basic usage example
cargo run --example get_pck_crl # Fetch certificate revocation lists
cargo run --example common_usage # Common attestation verification patterns
cargo run --example integration_test # Comprehensive test of most API endpoints
cargo run --example fetch_test_data # Fetch real data from Intel API for tests
cargo run --example handle_rate_limit # Demonstrate automatic rate limiting handling
```
## Architecture
### Client Structure
- **ApiClient** (`src/client/mod.rs`): Main entry point supporting API v3/v4
- Base URL: https://api.trustedservices.intel.com
- Manages HTTP client and API version selection
- Automatic retry logic for 429 (Too Many Requests) responses
- Default: 3 retries, configurable via `set_max_retries()`
### Key Modules
- **client/**: API endpoint implementations
- `tcb_info`: SGX/TDX TCB information retrieval
- `get_sgx_tcb_info()`, `get_tdx_tcb_info()`
- `pck_cert`: PCK certificate operations
- `get_pck_certificate_by_ppid()`, `get_pck_certificate_by_manifest()`
- `get_pck_certificates_by_ppid()`, `get_pck_certificates_by_manifest()`
- `get_pck_certificates_config_by_ppid()`, `get_pck_certificates_config_by_manifest()`
- `pck_crl`: Certificate revocation lists
- `get_pck_crl()` - supports PEM and DER encoding
- `enclave_identity`: SGX QE/QVE/QAE/TDQE identity
- `get_sgx_qe_identity()`, `get_sgx_qve_identity()`, `get_sgx_qae_identity()`, `get_tdx_qe_identity()`
- `fmspc`: FMSPC-related operations (V4 only)
- `get_fmspcs()` - with optional platform filter
- `get_sgx_tcb_evaluation_data_numbers()`, `get_tdx_tcb_evaluation_data_numbers()`
- `registration`: Platform registration
- `register_platform()`, `add_package()`
### Core Types
- **error.rs**: `IntelApiError` for comprehensive error handling
- Extracts error details from Error-Code and Error-Message headers
- **`TooManyRequests` variant for rate limiting (429) after retry exhaustion**
- **types.rs**: Enums (CaType, ApiVersion, UpdateType, etc.)
- **requests.rs**: Request structures
- **responses.rs**: Response structures with JSON and certificate data
### API Pattern
All client methods follow this pattern:
1. Build request with query parameters
2. Send HTTP request with proper headers (with automatic retry on 429)
3. Parse response (JSON + certificate chains)
4. Return typed response or error
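The four steps above can be sketched with plain std types. The URL layout follows Intel's public endpoint scheme and the header names come from this document, but `tcb_info_url` and `issuer_chain_header` are illustrative helpers, not the crate's actual API:

```rust
// Illustrative sketch only: the real client builds requests with reqwest
// and parses typed responses. ApiVersion mirrors the crate's enum.
#[derive(Clone, Copy, Debug)]
enum ApiVersion {
    V3,
    V4,
}

const BASE_URL: &str = "https://api.trustedservices.intel.com";

/// Step 1: build the request URL with query parameters.
fn tcb_info_url(version: ApiVersion, fmspc: &str) -> String {
    let v = match version {
        ApiVersion::V3 => "v3",
        ApiVersion::V4 => "v4",
    };
    format!("{BASE_URL}/sgx/certification/{v}/tcb?fmspc={fmspc}")
}

/// Step 3 (sketch): the issuer chain arrives in a response header whose
/// name differs between V3 and V4.
fn issuer_chain_header(version: ApiVersion) -> &'static str {
    match version {
        ApiVersion::V3 => "SGX-TCB-Info-Issuer-Chain",
        ApiVersion::V4 => "TCB-Info-Issuer-Chain",
    }
}

fn main() {
    println!("{}", tcb_info_url(ApiVersion::V4, "00606A000000"));
    println!("{}", issuer_chain_header(ApiVersion::V3));
}
```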
### Rate Limiting & Retry Logic
- **Automatic Retries**: All HTTP requests automatically retry on 429 (Too Many Requests) responses
- **Retry Configuration**: Default 3 retries, configurable via `ApiClient::set_max_retries()`
- **Retry-After Handling**: Waits for duration specified in Retry-After header before retrying
- **Error Handling**: `IntelApiError::TooManyRequests` returned only after all retries exhausted
- **Implementation**: `execute_with_retry()` in `src/client/helpers.rs` handles retry logic
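A minimal, synchronous sketch of that retry loop follows. The real `execute_with_retry()` is async and reads the Retry-After header from the HTTP response; `FetchError` and the closure-based `send` here are stand-ins:

```rust
use std::{thread, time::Duration};

// Stand-in error type: RateLimited models a 429 with its Retry-After value.
#[allow(dead_code)]
enum FetchError {
    RateLimited { retry_after_secs: u64 },
    Other(String),
}

// Retry on rate limiting up to `max_retries` times, honouring Retry-After;
// any other outcome (success or a different error) is returned immediately.
fn execute_with_retry<T>(
    max_retries: u32,
    mut send: impl FnMut() -> Result<T, FetchError>,
) -> Result<T, FetchError> {
    let mut attempts_left = max_retries;
    loop {
        match send() {
            Err(FetchError::RateLimited { retry_after_secs }) if attempts_left > 0 => {
                attempts_left -= 1;
                // Wait for the duration the server asked for before retrying.
                thread::sleep(Duration::from_secs(retry_after_secs));
            }
            // Success, a non-429 error, or 429 with retries exhausted.
            other => return other,
        }
    }
}

fn main() {
    let mut calls = 0;
    let result = execute_with_retry(3, || {
        calls += 1;
        if calls < 3 {
            Err(FetchError::RateLimited { retry_after_secs: 0 })
        } else {
            Ok("tcb info")
        }
    });
    assert!(result.is_ok());
    println!("succeeded after {calls} calls");
}
```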
### Testing Strategy
- **Mock Tests**: Two test suites using mockito for HTTP mocking
- `tests/mock_api_tests.rs`: Basic API functionality tests with simple data (11 tests)
- `tests/real_data_mock_tests.rs`: Tests using real Intel API responses (25 tests)
- **Test Data**: Real responses stored in `tests/test_data/` (JSON format)
- Fetched using `cargo run --example fetch_test_data`
- Includes TCB info, CRLs, enclave identities for both SGX and TDX
- Covers V3 and V4 API variations, different update types, and evaluation data numbers
- **Key Testing Considerations**:
- Headers with newlines must be URL-encoded for mockito (use `percent_encode` with `NON_ALPHANUMERIC`)
- V3 vs V4 API use different header names:
- V3: `SGX-TCB-Info-Issuer-Chain`
- V4: `TCB-Info-Issuer-Chain`
- Error responses include Error-Code and Error-Message headers
- Examples use real Intel API endpoints
- Test data (FMSPC, PPID) from Intel documentation
- Async tests require tokio runtime
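To illustrate the header-encoding point: the tests themselves use the percent-encoding crate's `percent_encode` with `NON_ALPHANUMERIC`; a std-only stand-in showing the same effect on a multi-line header value looks like this:

```rust
// Std-only stand-in for percent_encode(.., NON_ALPHANUMERIC): every byte
// that is not an ASCII letter or digit becomes %XX, so newlines in issuer
// chain headers survive the trip through mockito as %0A.
fn percent_encode_non_alphanumeric(input: &str) -> String {
    input
        .bytes()
        .map(|b| {
            if b.is_ascii_alphanumeric() {
                (b as char).to_string()
            } else {
                format!("%{:02X}", b)
            }
        })
        .collect()
}

fn main() {
    println!("{}", percent_encode_non_alphanumeric("Issuer\nChain"));
}
```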
## API Version Differences
### V4-Only Features
- FMSPC listing (`get_fmspcs()`)
- TCB Evaluation Data Numbers endpoints
- PPID encryption key type parameter
- TDX QE identity endpoint
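One way to think about the V4-only surface is failing fast before any network call; `require_v4` below is an illustrative helper, not the crate's API:

```rust
#[derive(Clone, Copy, PartialEq, Debug)]
enum ApiVersion {
    V3,
    V4,
}

// Reject V4-only endpoints (FMSPC listing, TCB evaluation data numbers,
// TDX QE identity, ...) on a V3 client before sending a request.
fn require_v4(version: ApiVersion, endpoint: &str) -> Result<(), String> {
    if version == ApiVersion::V3 {
        Err(format!("{endpoint} is only available in API v4"))
    } else {
        Ok(())
    }
}

fn main() {
    println!("{:?}", require_v4(ApiVersion::V3, "get_fmspcs"));
    println!("{:?}", require_v4(ApiVersion::V4, "get_fmspcs"));
}
```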
## Common Pitfalls
1. **Mockito Header Encoding**: Always URL-encode headers containing newlines/special characters
2. **API Version Selection**: Some endpoints are V4-only and will return errors on V3
3. **Rate Limiting**: Client automatically retries 429 responses; disable with `set_max_retries(0)` if manual handling is needed
4. **Platform Filters**: Only certain values are valid (All, Client, E3, E5)
5. **Test Data**: PCK certificate endpoints require valid platform data and often need subscription keys
6. **Issuer Chain Validation**: Always check that `issuer_chain` is non-empty - it's critical for signature verification
## Security Considerations
- **Certificate Chain Verification**: The `issuer_chain` field contains the certificates needed to verify the signature
of the response data
- **Signature Validation**: All JSON responses (TCB info, enclave identities) should have their signatures verified
using the issuer chain
- **CRL Verification**: PCK CRLs must be signature-verified before being used for certificate revocation checking
- **Empty Issuer Chains**: Always validate that issuer chains are present and non-empty before trusting response data
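The empty-chain check recommended above can be sketched as follows. `validate_issuer_chain` is an illustrative helper: it only rejects missing or empty chains and counts PEM certificates; real verification must additionally check each response signature against the chain:

```rust
// Reject responses whose issuer chain cannot possibly support signature
// verification. Returns the number of PEM certificates found.
fn validate_issuer_chain(issuer_chain: &str) -> Result<usize, String> {
    if issuer_chain.trim().is_empty() {
        return Err("empty issuer chain: cannot verify response signature".into());
    }
    let certs = issuer_chain.matches("-----BEGIN CERTIFICATE-----").count();
    if certs == 0 {
        return Err("issuer chain contains no PEM certificates".into());
    }
    Ok(certs)
}

fn main() {
    let chain = "-----BEGIN CERTIFICATE-----\nMIIB...\n-----END CERTIFICATE-----";
    println!("{:?}", validate_issuer_chain(chain));
    assert!(validate_issuer_chain("").is_err());
}
```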

View file

@@ -0,0 +1,35 @@
[package]
name = "intel-dcap-api"
description = "Intel DCAP API Client"
version.workspace = true
edition.workspace = true
authors.workspace = true
license.workspace = true
repository.workspace = true
homepage.workspace = true
keywords = ["sgx", "tdx", "intel", "attestation", "confidential"]
categories = ["api-bindings", "cryptography", "authentication"]
[dependencies]
base64.workspace = true
percent-encoding.workspace = true
reqwest = { workspace = true, features = ["json"] }
serde.workspace = true
serde_json.workspace = true
thiserror.workspace = true
tokio.workspace = true
url.workspace = true
[dev-dependencies]
base64.workspace = true
hex.workspace = true
mockito.workspace = true
x509-cert.workspace = true
[[example]]
name = "integration_test"
required-features = ["default"]
[features]
default = ["reqwest/default-tls"]
rustls = ["reqwest/rustls-tls"]

View file

@@ -0,0 +1,182 @@
// SPDX-License-Identifier: Apache-2.0
// Copyright (c) 2025 Matter Labs
use intel_dcap_api::{ApiClient, CaType, IntelApiError, UpdateType};
/// Common usage patterns for the Intel DCAP API client
///
/// This example demonstrates typical use cases for attestation verification.
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// Create a client (defaults to V4 API)
let client = ApiClient::new()?;
// Example 1: Get TCB info for quote verification
println!("Example 1: Getting TCB info for SGX quote verification");
println!("======================================================");
let fmspc = "00906ED50000"; // From SGX quote
match client.get_sgx_tcb_info(fmspc, None, None).await {
Ok(response) => {
// Critical: Check that issuer chain is present for signature verification
if response.issuer_chain.is_empty() {
println!("✗ Error: Empty issuer chain - cannot verify TCB info signature!");
return Ok(());
}
println!("✓ Retrieved TCB info for FMSPC: {}", fmspc);
// Parse the TCB info
let tcb_info: serde_json::Value = serde_json::from_str(&response.tcb_info_json)?;
// Extract useful information
if let Some(tcb_levels) = tcb_info["tcbInfo"]["tcbLevels"].as_array() {
println!(" Found {} TCB levels", tcb_levels.len());
// Show the latest TCB level
if let Some(latest) = tcb_levels.first() {
println!(" Latest TCB level:");
if let Some(status) = latest["tcbStatus"].as_str() {
println!(" Status: {}", status);
}
if let Some(date) = latest["tcbDate"].as_str() {
println!(" Date: {}", date);
}
}
}
// The issuer chain is needed to verify the signature
println!(
" Issuer chain length: {} bytes",
response.issuer_chain.len()
);
// Verify we have certificate chain for signature verification
let cert_count = response.issuer_chain.matches("BEGIN CERTIFICATE").count();
println!(" Certificate chain contains {} certificates", cert_count);
}
Err(IntelApiError::ApiError {
status,
error_message,
..
}) => {
println!(
"✗ API Error {}: {}",
status,
error_message.unwrap_or_default()
);
}
Err(e) => {
println!("✗ Error: {:?}", e);
}
}
println!();
// Example 2: Get QE identity for enclave verification
println!("Example 2: Getting QE identity for enclave verification");
println!("======================================================");
match client.get_sgx_qe_identity(None, None).await {
Ok(response) => {
// Critical: Check that issuer chain is present for signature verification
if response.issuer_chain.is_empty() {
println!("✗ Error: Empty issuer chain - cannot verify QE identity signature!");
return Ok(());
}
println!("✓ Retrieved QE identity");
println!(
" Issuer chain length: {} bytes",
response.issuer_chain.len()
);
let identity: serde_json::Value =
serde_json::from_str(&response.enclave_identity_json)?;
if let Some(enclave_id) = identity["enclaveIdentity"]["id"].as_str() {
println!(" Enclave ID: {}", enclave_id);
}
if let Some(version) = identity["enclaveIdentity"]["version"].as_u64() {
println!(" Version: {}", version);
}
if let Some(mrsigner) = identity["enclaveIdentity"]["mrsigner"].as_str() {
println!(" MRSIGNER: {}...", &mrsigner[..16]);
}
}
Err(e) => {
println!("✗ Failed to get QE identity: {:?}", e);
}
}
println!();
// Example 3: Check certificate revocation
println!("Example 3: Checking certificate revocation status");
println!("================================================");
match client.get_pck_crl(CaType::Processor, None).await {
Ok(response) => {
// Critical: Check that issuer chain is present for CRL verification
if response.issuer_chain.is_empty() {
println!("✗ Error: Empty issuer chain - cannot verify CRL signature!");
return Ok(());
}
println!("✓ Retrieved PCK CRL");
println!(
" Issuer chain length: {} bytes",
response.issuer_chain.len()
);
let crl_pem = String::from_utf8_lossy(&response.crl_data);
// In real usage, you would parse this CRL and check if a certificate is revoked
if crl_pem.contains("BEGIN X509 CRL") {
println!(" CRL format: PEM");
println!(" CRL size: {} bytes", crl_pem.len());
// Count the revoked certificates (naive approach)
let revoked_count = crl_pem.matches("Serial Number:").count();
println!(" Approximate revoked certificates: {}", revoked_count);
}
}
Err(e) => {
println!("✗ Failed to get CRL: {:?}", e);
}
}
println!();
// Example 4: Early update for testing
println!("Example 4: Getting early TCB update (for testing)");
println!("================================================");
match client
.get_sgx_tcb_info(fmspc, Some(UpdateType::Early), None)
.await
{
Ok(response) => {
println!("✓ Retrieved early TCB update");
let tcb_info: serde_json::Value = serde_json::from_str(&response.tcb_info_json)?;
if let Some(next_update) = tcb_info["tcbInfo"]["nextUpdate"].as_str() {
println!(" Next update: {}", next_update);
}
}
Err(IntelApiError::ApiError { status, .. }) if status.as_u16() == 404 => {
println!(" No early update available (this is normal)");
}
Err(e) => {
println!("✗ Error: {:?}", e);
}
}
println!();
println!("Done! These examples show common patterns for attestation verification.");
Ok(())
}

View file

@@ -0,0 +1,78 @@
// SPDX-License-Identifier: Apache-2.0
// Copyright (c) 2025 Matter Labs
use intel_dcap_api::{
ApiClient, ApiVersion, CaType, CrlEncoding, EnclaveIdentityResponse, IntelApiError,
PckCrlResponse, PlatformFilter, TcbInfoResponse,
};
#[tokio::main]
async fn main() -> Result<(), IntelApiError> {
for api_version in [ApiVersion::V3, ApiVersion::V4] {
println!("Using API version: {}", api_version);
let client = ApiClient::new_with_version(api_version)?;
// Example: Get SGX TCB Info
let fmspc_example = "00606A000000"; // Example FMSPC from docs
match client.get_sgx_tcb_info(fmspc_example, None, None).await {
Ok(TcbInfoResponse {
tcb_info_json,
issuer_chain,
}) => println!(
"SGX TCB Info for {}:\n{}\nIssuer Chain: {}",
fmspc_example, tcb_info_json, issuer_chain
),
Err(e) => eprintln!("Error getting SGX TCB info: {}", e),
}
// Example: Get FMSPCs
match client.get_fmspcs(Some(PlatformFilter::E3)).await {
// Filter for E3 platform type
Ok(fmspc_list) => println!("\nE3 FMSPCs:\n{}", fmspc_list),
Err(e) => eprintln!("Error getting FMSPCs: {}", e),
}
// Example: Get SGX QE Identity
match client.get_sgx_qe_identity(None, None).await {
Ok(EnclaveIdentityResponse {
enclave_identity_json,
issuer_chain,
}) => {
println!(
"\nSGX QE Identity:\n{}\nIssuer Chain: {}",
enclave_identity_json, issuer_chain
)
}
Err(e) => eprintln!("Error getting SGX QE Identity: {}", e),
}
// Example: Get PCK CRL (Platform CA, PEM encoding)
match client
.get_pck_crl(CaType::Platform, Some(CrlEncoding::Pem))
.await
{
Ok(PckCrlResponse {
crl_data,
issuer_chain,
}) => {
// Attempt to decode PEM for display, otherwise show byte count
match String::from_utf8(crl_data.clone()) {
Ok(pem_string) => println!(
"\nPlatform PCK CRL (PEM):\n{}\nIssuer Chain: {}",
pem_string, issuer_chain
),
Err(_) => println!(
"\nPlatform PCK CRL ({} bytes, likely DER):\nIssuer Chain: {}",
crl_data.len(),
issuer_chain
),
}
}
Err(e) => eprintln!("Error getting PCK CRL: {}", e),
}
}
Ok(())
}

View file

@@ -0,0 +1,515 @@
// SPDX-License-Identifier: Apache-2.0
// Copyright (c) 2025 Matter Labs
use base64::{engine::general_purpose, Engine as _};
use intel_dcap_api::{ApiClient, ApiVersion, CaType, CrlEncoding, PlatformFilter, UpdateType};
use std::{fs, path::Path};
/// Fetch real data from Intel API and save it as JSON files
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// Create test data directory
let test_data_dir = Path::new("tests/test_data");
fs::create_dir_all(test_data_dir)?;
let client = ApiClient::new()?;
println!("Fetching real test data from Intel API...");
// Keep track of successful fetches
let mut successes: Vec<String> = Vec::new();
let mut failures: Vec<String> = Vec::new();
// 1. Fetch SGX TCB info
println!("\n1. Fetching SGX TCB info...");
match client
.get_sgx_tcb_info("00606A6A0000", Some(UpdateType::Standard), None)
.await
{
Ok(response) => {
let data = serde_json::json!({
"tcb_info_json": response.tcb_info_json,
"issuer_chain": response.issuer_chain,
});
fs::write(
test_data_dir.join("sgx_tcb_info.json"),
serde_json::to_string_pretty(&data)?,
)?;
successes.push("SGX TCB info".to_string());
}
Err(e) => {
failures.push(format!("SGX TCB info: {}", e));
}
}
// 2. Fetch TDX TCB info
println!("\n2. Fetching TDX TCB info...");
match client
.get_tdx_tcb_info("00806F050000", Some(UpdateType::Standard), None)
.await
{
Ok(response) => {
let data = serde_json::json!({
"tcb_info_json": response.tcb_info_json,
"issuer_chain": response.issuer_chain,
});
fs::write(
test_data_dir.join("tdx_tcb_info.json"),
serde_json::to_string_pretty(&data)?,
)?;
successes.push("TDX TCB info".to_string());
}
Err(e) => {
failures.push(format!("TDX TCB info: {}", e));
}
}
// 3. Fetch PCK CRL for processor
println!("\n3. Fetching PCK CRL (processor)...");
match client.get_pck_crl(CaType::Processor, None).await {
Ok(response) => {
let crl_string = String::from_utf8_lossy(&response.crl_data);
let data = serde_json::json!({
"crl_data": crl_string,
"issuer_chain": response.issuer_chain,
});
fs::write(
test_data_dir.join("pck_crl_processor.json"),
serde_json::to_string_pretty(&data)?,
)?;
successes.push("PCK CRL (processor)".to_string());
}
Err(e) => {
failures.push(format!("PCK CRL (processor): {}", e));
}
}
// 4. Fetch PCK CRL for platform
println!("\n4. Fetching PCK CRL (platform)...");
match client.get_pck_crl(CaType::Platform, None).await {
Ok(response) => {
let crl_string = String::from_utf8_lossy(&response.crl_data);
let data = serde_json::json!({
"crl_data": crl_string,
"issuer_chain": response.issuer_chain,
});
fs::write(
test_data_dir.join("pck_crl_platform.json"),
serde_json::to_string_pretty(&data)?,
)?;
successes.push("PCK CRL (platform)".to_string());
}
Err(e) => {
failures.push(format!("PCK CRL (platform): {}", e));
}
}
// 5. Fetch SGX QE identity
println!("\n5. Fetching SGX QE identity...");
match client.get_sgx_qe_identity(None, None).await {
Ok(response) => {
let data = serde_json::json!({
"enclave_identity_json": response.enclave_identity_json,
"issuer_chain": response.issuer_chain,
});
fs::write(
test_data_dir.join("sgx_qe_identity.json"),
serde_json::to_string_pretty(&data)?,
)?;
successes.push("SGX QE identity".to_string());
}
Err(e) => {
failures.push(format!("SGX QE identity: {}", e));
}
}
// 6. Fetch SGX QVE identity
println!("\n6. Fetching SGX QVE identity...");
match client.get_sgx_qve_identity(None, None).await {
Ok(response) => {
let data = serde_json::json!({
"enclave_identity_json": response.enclave_identity_json,
"issuer_chain": response.issuer_chain,
});
fs::write(
test_data_dir.join("sgx_qve_identity.json"),
serde_json::to_string_pretty(&data)?,
)?;
successes.push("SGX QVE identity".to_string());
}
Err(e) => {
failures.push(format!("SGX QVE identity: {}", e));
}
}
// 7. Fetch TDX QE identity
println!("\n7. Fetching TDX QE identity...");
match client.get_tdx_qe_identity(None, None).await {
Ok(response) => {
let data = serde_json::json!({
"enclave_identity_json": response.enclave_identity_json,
"issuer_chain": response.issuer_chain,
});
fs::write(
test_data_dir.join("tdx_qe_identity.json"),
serde_json::to_string_pretty(&data)?,
)?;
successes.push("TDX QE identity".to_string());
}
Err(e) => {
failures.push(format!("TDX QE identity: {}", e));
}
}
// 8. Try an alternative FMSPC
println!("\n8. Fetching alternative SGX TCB info...");
match client.get_sgx_tcb_info("00906ED50000", None, None).await {
Ok(response) => {
let data = serde_json::json!({
"tcb_info_json": response.tcb_info_json,
"issuer_chain": response.issuer_chain,
});
fs::write(
test_data_dir.join("sgx_tcb_info_alt.json"),
serde_json::to_string_pretty(&data)?,
)?;
successes.push("Alternative SGX TCB info".to_string());
}
Err(e) => {
failures.push(format!("Alternative SGX TCB info: {}", e));
}
}
// 9. Fetch PCK certificate
println!("\n9. Attempting to fetch PCK certificate...");
let ppid = "3d6dd97e96f84536a2267e727dd860e4fdd3ffa3e319db41e8f69c9a43399e7b7ce97d7eb3bd05b0a58bdb5b90a0e218";
let cpusvn = "0606060606060606060606060606060606060606060606060606060606060606";
let pcesvn = "0a00";
let pceid = "0000";
match client
.get_pck_certificate_by_ppid(ppid, cpusvn, pcesvn, pceid, None, None)
.await
{
Ok(response) => {
let data = serde_json::json!({
"pck_cert_pem": response.pck_cert_pem,
"issuer_chain": response.issuer_chain,
"tcbm": response.tcbm,
"fmspc": response.fmspc,
});
fs::write(
test_data_dir.join("pck_cert.json"),
serde_json::to_string_pretty(&data)?,
)?;
successes.push("PCK certificate".to_string());
}
Err(e) => {
failures.push(format!("PCK certificate: {}", e));
}
}
// 10. Fetch SGX QAE identity
println!("\n10. Fetching SGX QAE identity...");
match client.get_sgx_qae_identity(None, None).await {
Ok(response) => {
let data = serde_json::json!({
"enclave_identity_json": response.enclave_identity_json,
"issuer_chain": response.issuer_chain,
});
fs::write(
test_data_dir.join("sgx_qae_identity.json"),
serde_json::to_string_pretty(&data)?,
)?;
successes.push("SGX QAE identity".to_string());
}
Err(e) => {
failures.push(format!("SGX QAE identity: {}", e));
}
}
// 11. Fetch FMSPCs
println!("\n11. Fetching FMSPCs...");
match client.get_fmspcs(Some(PlatformFilter::All)).await {
Ok(fmspcs_json) => {
let data = serde_json::json!({
"fmspcs_json": fmspcs_json,
});
fs::write(
test_data_dir.join("fmspcs.json"),
serde_json::to_string_pretty(&data)?,
)?;
successes.push("FMSPCs".to_string());
}
Err(e) => {
failures.push(format!("FMSPCs: {}", e));
}
}
// 12. Fetch SGX TCB evaluation data numbers
println!("\n12. Fetching SGX TCB evaluation data numbers...");
match client.get_sgx_tcb_evaluation_data_numbers().await {
Ok(response) => {
let data = serde_json::json!({
"tcb_evaluation_data_numbers_json": response.tcb_evaluation_data_numbers_json,
"issuer_chain": response.issuer_chain,
});
fs::write(
test_data_dir.join("sgx_tcb_eval_nums.json"),
serde_json::to_string_pretty(&data)?,
)?;
successes.push("SGX TCB evaluation data numbers".to_string());
}
Err(e) => {
failures.push(format!("SGX TCB evaluation data numbers: {}", e));
}
}
// 13. Fetch TDX TCB evaluation data numbers
println!("\n13. Fetching TDX TCB evaluation data numbers...");
match client.get_tdx_tcb_evaluation_data_numbers().await {
Ok(response) => {
let data = serde_json::json!({
"tcb_evaluation_data_numbers_json": response.tcb_evaluation_data_numbers_json,
"issuer_chain": response.issuer_chain,
});
fs::write(
test_data_dir.join("tdx_tcb_eval_nums.json"),
serde_json::to_string_pretty(&data)?,
)?;
successes.push("TDX TCB evaluation data numbers".to_string());
}
Err(e) => {
failures.push(format!("TDX TCB evaluation data numbers: {}", e));
}
}
// 14. Fetch PCK CRL with DER encoding
println!("\n14. Fetching PCK CRL (processor, DER encoding)...");
match client
.get_pck_crl(CaType::Processor, Some(CrlEncoding::Der))
.await
{
Ok(response) => {
// For DER, save as base64
let crl_base64 = general_purpose::STANDARD.encode(&response.crl_data);
let data = serde_json::json!({
"crl_data_base64": crl_base64,
"issuer_chain": response.issuer_chain,
});
fs::write(
test_data_dir.join("pck_crl_processor_der.json"),
serde_json::to_string_pretty(&data)?,
)?;
successes.push("PCK CRL (processor, DER)".to_string());
}
Err(e) => {
failures.push(format!("PCK CRL (processor, DER): {}", e));
}
}
// 15. Try different update types
println!("\n15. Fetching SGX TCB info with Early update...");
match client
.get_sgx_tcb_info("00906ED50000", Some(UpdateType::Early), None)
.await
{
Ok(response) => {
let data = serde_json::json!({
"tcb_info_json": response.tcb_info_json,
"issuer_chain": response.issuer_chain,
});
fs::write(
test_data_dir.join("sgx_tcb_info_early.json"),
serde_json::to_string_pretty(&data)?,
)?;
successes.push("SGX TCB info (Early update)".to_string());
}
Err(e) => {
failures.push(format!("SGX TCB info (Early update): {}", e));
}
}
// 16. Try with specific TCB evaluation data number
println!("\n16. Fetching TDX TCB info with specific evaluation number...");
match client
.get_tdx_tcb_info("00806F050000", None, Some(17))
.await
{
Ok(response) => {
let data = serde_json::json!({
"tcb_info_json": response.tcb_info_json,
"issuer_chain": response.issuer_chain,
});
fs::write(
test_data_dir.join("tdx_tcb_info_eval17.json"),
serde_json::to_string_pretty(&data)?,
)?;
successes.push("TDX TCB info (eval number 17)".to_string());
}
Err(e) => {
failures.push(format!("TDX TCB info (eval number 17): {}", e));
}
}
// 17. Try different FMSPCs
println!("\n17. Fetching more SGX TCB info variations...");
let test_fmspcs = vec!["00906ED50000", "00906C0F0000", "00A06F050000"];
for fmspc in test_fmspcs {
match client.get_sgx_tcb_info(fmspc, None, None).await {
Ok(response) => {
let data = serde_json::json!({
"tcb_info_json": response.tcb_info_json,
"issuer_chain": response.issuer_chain,
});
fs::write(
test_data_dir.join(format!("sgx_tcb_info_{}.json", fmspc)),
serde_json::to_string_pretty(&data)?,
)?;
successes.push(format!("SGX TCB info (FMSPC: {})", fmspc));
}
Err(e) => {
failures.push(format!("SGX TCB info (FMSPC: {}): {}", fmspc, e));
}
}
}
// 18. Try FMSPCs with different platform filters
println!("\n18. Fetching FMSPCs with different platform filters...");
match client.get_fmspcs(None).await {
Ok(fmspcs_json) => {
let data = serde_json::json!({
"fmspcs_json": fmspcs_json,
});
fs::write(
test_data_dir.join("fmspcs_no_filter.json"),
serde_json::to_string_pretty(&data)?,
)?;
successes.push("FMSPCs (no filter)".to_string());
}
Err(e) => {
failures.push(format!("FMSPCs (no filter): {}", e));
}
}
match client.get_fmspcs(Some(PlatformFilter::All)).await {
Ok(fmspcs_json) => {
let data = serde_json::json!({
"fmspcs_json": fmspcs_json,
});
fs::write(
test_data_dir.join("fmspcs_all_platforms.json"),
serde_json::to_string_pretty(&data)?,
)?;
successes.push("FMSPCs (all platforms)".to_string());
}
Err(e) => {
failures.push(format!("FMSPCs (all platforms): {}", e));
}
}
// 19. Try PCK certificates with different parameters (encrypted PPID)
println!("\n19. Attempting to fetch PCK certificates with different params...");
// Try with a different encrypted PPID format
let encrypted_ppid = "0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000";
let pceid = "0000";
match client
.get_pck_certificates_by_ppid(encrypted_ppid, pceid, None, None)
.await
{
Ok(response) => {
let data = serde_json::json!({
"pck_certificates_json": response.pck_certs_json,
"issuer_chain": response.issuer_chain,
"fmspc": response.fmspc,
});
fs::write(
test_data_dir.join("pck_certificates.json"),
serde_json::to_string_pretty(&data)?,
)?;
successes.push("PCK certificates (by PPID)".to_string());
}
Err(e) => {
failures.push(format!("PCK certificates (by PPID): {}", e));
}
}
// 20. Try TDX TCB info with different FMSPCs
println!("\n20. Fetching TDX TCB info variations...");
let tdx_fmspcs = vec!["00806F050000", "00A06F050000", "00606A000000"];
for fmspc in tdx_fmspcs {
match client.get_tdx_tcb_info(fmspc, None, None).await {
Ok(response) => {
let data = serde_json::json!({
"tcb_info_json": response.tcb_info_json,
"issuer_chain": response.issuer_chain,
});
fs::write(
test_data_dir.join(format!("tdx_tcb_info_{}.json", fmspc)),
serde_json::to_string_pretty(&data)?,
)?;
successes.push(format!("TDX TCB info (FMSPC: {})", fmspc));
}
Err(e) => {
failures.push(format!("TDX TCB info (FMSPC: {}): {}", fmspc, e));
}
}
}
// 21. Try with V3 API for some endpoints
println!("\n21. Testing V3 API endpoints...");
let v3_client =
ApiClient::new_with_options("https://api.trustedservices.intel.com", ApiVersion::V3)?;
match v3_client.get_sgx_tcb_info("00906ED50000", None, None).await {
Ok(response) => {
let data = serde_json::json!({
"tcb_info_json": response.tcb_info_json,
"issuer_chain": response.issuer_chain,
});
fs::write(
test_data_dir.join("sgx_tcb_info_v3.json"),
serde_json::to_string_pretty(&data)?,
)?;
successes.push("SGX TCB info (V3 API)".to_string());
}
Err(e) => {
failures.push(format!("SGX TCB info (V3 API): {}", e));
}
}
match v3_client.get_sgx_qe_identity(None, None).await {
Ok(response) => {
let data = serde_json::json!({
"enclave_identity_json": response.enclave_identity_json,
"issuer_chain": response.issuer_chain,
});
fs::write(
test_data_dir.join("sgx_qe_identity_v3.json"),
serde_json::to_string_pretty(&data)?,
)?;
successes.push("SGX QE identity (V3 API)".to_string());
}
Err(e) => {
failures.push(format!("SGX QE identity (V3 API): {}", e));
}
}
println!("\n\nTest data fetching complete!");
println!("\nSuccessful fetches:");
for s in &successes {
println!("{}", s);
}
if !failures.is_empty() {
println!("\nFailed fetches:");
for f in &failures {
println!("{}", f);
}
}
println!("\nData saved in: {}", test_data_dir.display());
Ok(())
}

View file

@@ -0,0 +1,75 @@
// SPDX-License-Identifier: Apache-2.0
// Copyright (c) 2025 Matter Labs
use intel_dcap_api::{ApiClient, CaType, CrlEncoding, IntelApiError, PckCrlResponse};
use x509_cert::{
der::{oid::AssociatedOid, Decode, SliceReader},
ext::pkix::{
crl::dp::DistributionPoint,
name::{DistributionPointName, GeneralName},
CrlDistributionPoints,
},
};
#[tokio::main]
async fn main() -> Result<(), IntelApiError> {
let client = ApiClient::new()?;
let PckCrlResponse {
crl_data,
issuer_chain,
} = client
.get_pck_crl(CaType::Platform, Some(CrlEncoding::Der))
.await?;
let certs = x509_cert::certificate::CertificateInner::<
x509_cert::certificate::Rfc5280
>::load_pem_chain(issuer_chain.as_bytes()).map_err(
|_| IntelApiError::InvalidParameter("Could not load a PEM chain")
)?;
for cert in certs {
println!("Issuer: {}", cert.tbs_certificate.issuer);
println!("Subject: {}", cert.tbs_certificate.subject);
println!("Serial Number: {}", cert.tbs_certificate.serial_number);
println!("Not Before: {}", cert.tbs_certificate.validity.not_before);
println!("Not After: {}", cert.tbs_certificate.validity.not_after);
// Extract and print CRL distribution points
if let Some(extensions) = &cert.tbs_certificate.extensions {
for ext in extensions.iter() {
if ext.extn_id == CrlDistributionPoints::OID {
// Create a SliceReader from the byte slice
let mut reader = SliceReader::new(ext.extn_value.as_bytes()).map_err(|_| {
IntelApiError::InvalidParameter(
"Could not create reader from extension value",
)
})?;
// Now pass the reader to decode_value
if let Ok(dist_points) = Vec::<DistributionPoint>::decode(&mut reader) {
for point in dist_points {
if let Some(DistributionPointName::FullName(names)) =
point.distribution_point
{
for name in names {
if let GeneralName::UniformResourceIdentifier(uri) = name {
let uri = uri.as_str();
let crl_bytes = reqwest::get(uri).await?.bytes().await?;
println!("CRL bytes (hex): {}", hex::encode(&crl_bytes));
}
}
}
}
} else {
println!("Could not decode CRL distribution points");
}
}
}
}
}
println!("CRL bytes (hex): {}", hex::encode(&crl_data));
Ok(())
}

View file

@@ -0,0 +1,91 @@
// SPDX-License-Identifier: Apache-2.0
// Copyright (c) 2025 Matter Labs

//! Example demonstrating automatic rate limit handling
//!
//! The Intel DCAP API client now automatically handles 429 Too Many Requests responses
//! by retrying up to 3 times by default. This example shows how to configure the retry
//! behavior and handle cases where all retries are exhausted.

use intel_dcap_api::{ApiClient, IntelApiError};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Create API client with default settings (3 retries)
    let mut client = ApiClient::new()?;

    println!("Example 1: Default behavior (automatic retries)");
    println!("================================================");

    // Example FMSPC value
    let fmspc = "00606A000000";

    // The client will automatically retry up to 3 times if rate limited
    match client.get_sgx_tcb_info(fmspc, None, None).await {
        Ok(tcb_info) => {
            println!("✓ Successfully retrieved TCB info");
            println!(
                " TCB Info JSON length: {} bytes",
                tcb_info.tcb_info_json.len()
            );
            println!(
                " Issuer Chain length: {} bytes",
                tcb_info.issuer_chain.len()
            );
        }
        Err(IntelApiError::TooManyRequests {
            request_id,
            retry_after,
        }) => {
            println!("✗ Rate limited even after 3 automatic retries");
            println!(" Request ID: {}", request_id);
            println!(" Last retry-after was: {} seconds", retry_after);
        }
        Err(e) => {
            eprintln!("✗ Other error: {}", e);
        }
    }

    println!("\nExample 2: Custom retry configuration");
    println!("=====================================");

    // Configure client to retry up to 5 times
    client.set_max_retries(5);
    println!("Set max retries to 5");

    match client.get_sgx_tcb_info(fmspc, None, None).await {
        Ok(_) => println!("✓ Request succeeded"),
        Err(IntelApiError::TooManyRequests { .. }) => {
            println!("✗ Still rate limited after 5 retries")
        }
        Err(e) => eprintln!("✗ Error: {}", e),
    }

    println!("\nExample 3: Disable automatic retries");
    println!("====================================");

    // Disable automatic retries
    client.set_max_retries(0);
    println!("Disabled automatic retries");

    match client.get_sgx_tcb_info(fmspc, None, None).await {
        Ok(_) => println!("✓ Request succeeded on first attempt"),
        Err(IntelApiError::TooManyRequests {
            request_id,
            retry_after,
        }) => {
            println!("✗ Rate limited (no automatic retry)");
            println!(" Request ID: {}", request_id);
            println!(" Retry after: {} seconds", retry_after);
            println!(" You would need to implement manual retry logic here");
        }
        Err(e) => eprintln!("✗ Error: {}", e),
    }

    println!("\nNote: The client handles rate limiting automatically!");
    println!("You only need to handle TooManyRequests errors if:");
    println!("- You disable automatic retries (set_max_retries(0))");
    println!("- All automatic retries are exhausted");

    Ok(())
}

@ -0,0 +1,495 @@
// SPDX-License-Identifier: Apache-2.0
// Copyright (c) 2025 Matter Labs

use intel_dcap_api::{
    ApiClient, ApiVersion, CaType, CrlEncoding, IntelApiError, PlatformFilter, UpdateType,
};
use std::time::Duration;
use tokio::time::sleep;

/// Comprehensive integration test example demonstrating most Intel DCAP API client functions
///
/// This example shows how to use various endpoints of the Intel Trusted Services API.
/// Note: Some operations may fail with 404 or 400 errors if the data doesn't exist on Intel's servers.
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    println!("=== Intel DCAP API Integration Test Example ===\n");

    // Create clients for both V3 and V4 APIs
    let v4_client = ApiClient::new()?;
    let v3_client =
        ApiClient::new_with_options("https://api.trustedservices.intel.com", ApiVersion::V3)?;

    // Track successes and failures
    let mut results = Vec::new();

    // Test FMSPC - commonly used for TCB lookups
    let test_fmspc = "00906ED50000";
    let test_fmspc_tdx = "00806F050000";

    println!("1. Testing TCB Info Endpoints...");
    println!("================================");

    // 1.1 SGX TCB Info (V4)
    print!(" - SGX TCB Info (V4): ");
    match v4_client.get_sgx_tcb_info(test_fmspc, None, None).await {
        Ok(response) => {
            if response.issuer_chain.is_empty() {
                println!("✗ Failed: Empty issuer chain");
                results.push(("SGX TCB Info (V4)", false));
            } else {
                println!("✓ Success");
                println!(" FMSPC: {}", test_fmspc);
                println!(" Issuer chain: {} bytes", response.issuer_chain.len());
                let tcb_info: serde_json::Value = serde_json::from_str(&response.tcb_info_json)?;
                if let Some(version) = tcb_info["tcbInfo"]["version"].as_u64() {
                    println!(" TCB Info Version: {}", version);
                }
                results.push(("SGX TCB Info (V4)", true));
            }
        }
        Err(e) => {
            println!("✗ Failed: {:?}", e);
            results.push(("SGX TCB Info (V4)", false));
        }
    }

    // Add small delay between requests to be nice to the API
    sleep(Duration::from_millis(100)).await;
    // 1.2 SGX TCB Info (V3)
    print!(" - SGX TCB Info (V3): ");
    match v3_client.get_sgx_tcb_info(test_fmspc, None, None).await {
        Ok(response) => {
            if response.issuer_chain.is_empty() {
                println!("✗ Failed: Empty issuer chain");
                results.push(("SGX TCB Info (V3)", false));
            } else {
                println!("✓ Success");
                println!(" Issuer chain: {} bytes", response.issuer_chain.len());
                results.push(("SGX TCB Info (V3)", true));
            }
        }
        Err(e) => {
            println!("✗ Failed: {:?}", e);
            results.push(("SGX TCB Info (V3)", false));
        }
    }

    sleep(Duration::from_millis(100)).await;

    // 1.3 TDX TCB Info
    print!(" - TDX TCB Info: ");
    match v4_client.get_tdx_tcb_info(test_fmspc_tdx, None, None).await {
        Ok(response) => {
            if response.issuer_chain.is_empty() {
                println!("✗ Failed: Empty issuer chain");
                results.push(("TDX TCB Info", false));
            } else {
                println!("✓ Success");
                println!(" Issuer chain: {} bytes", response.issuer_chain.len());
                let tcb_info: serde_json::Value = serde_json::from_str(&response.tcb_info_json)?;
                if let Some(id) = tcb_info["tcbInfo"]["id"].as_str() {
                    println!(" Platform: {}", id);
                }
                results.push(("TDX TCB Info", true));
            }
        }
        Err(e) => {
            println!("✗ Failed: {:?}", e);
            results.push(("TDX TCB Info", false));
        }
    }

    sleep(Duration::from_millis(100)).await;

    // 1.4 SGX TCB Info with Early Update
    print!(" - SGX TCB Info (Early Update): ");
    match v4_client
        .get_sgx_tcb_info(test_fmspc, Some(UpdateType::Early), None)
        .await
    {
        Ok(response) => {
            if response.issuer_chain.is_empty() {
                println!("✗ Failed: Empty issuer chain");
                results.push(("SGX TCB Info (Early)", false));
            } else {
                println!("✓ Success");
                println!(" Issuer chain: {} bytes", response.issuer_chain.len());
                results.push(("SGX TCB Info (Early)", true));
            }
        }
        Err(e) => {
            println!("✗ Failed: {:?}", e);
            results.push(("SGX TCB Info (Early)", false));
        }
    }

    sleep(Duration::from_millis(100)).await;
    println!("\n2. Testing Enclave Identity Endpoints...");
    println!("========================================");

    // 2.1 SGX QE Identity
    print!(" - SGX QE Identity: ");
    match v4_client.get_sgx_qe_identity(None, None).await {
        Ok(response) => {
            if response.issuer_chain.is_empty() {
                println!("✗ Failed: Empty issuer chain");
                results.push(("SGX QE Identity", false));
            } else {
                println!("✓ Success");
                println!(" Issuer chain: {} bytes", response.issuer_chain.len());
                let identity: serde_json::Value =
                    serde_json::from_str(&response.enclave_identity_json)?;
                if let Some(id) = identity["enclaveIdentity"]["id"].as_str() {
                    println!(" Enclave ID: {}", id);
                }
                results.push(("SGX QE Identity", true));
            }
        }
        Err(e) => {
            println!("✗ Failed: {:?}", e);
            results.push(("SGX QE Identity", false));
        }
    }

    sleep(Duration::from_millis(100)).await;

    // 2.2 SGX QVE Identity
    print!(" - SGX QVE Identity: ");
    match v4_client.get_sgx_qve_identity(None, None).await {
        Ok(response) => {
            if response.issuer_chain.is_empty() {
                println!("✗ Failed: Empty issuer chain");
                results.push(("SGX QVE Identity", false));
            } else {
                println!("✓ Success");
                println!(" Issuer chain: {} bytes", response.issuer_chain.len());
                results.push(("SGX QVE Identity", true));
            }
        }
        Err(e) => {
            println!("✗ Failed: {:?}", e);
            results.push(("SGX QVE Identity", false));
        }
    }

    sleep(Duration::from_millis(100)).await;

    // 2.3 SGX QAE Identity
    print!(" - SGX QAE Identity: ");
    match v4_client.get_sgx_qae_identity(None, None).await {
        Ok(response) => {
            if response.issuer_chain.is_empty() {
                println!("✗ Failed: Empty issuer chain");
                results.push(("SGX QAE Identity", false));
            } else {
                println!("✓ Success");
                println!(" Issuer chain: {} bytes", response.issuer_chain.len());
                results.push(("SGX QAE Identity", true));
            }
        }
        Err(e) => {
            println!("✗ Failed: {:?}", e);
            results.push(("SGX QAE Identity", false));
        }
    }

    sleep(Duration::from_millis(100)).await;
    // 2.4 TDX QE Identity (V4 only)
    print!(" - TDX QE Identity: ");
    match v4_client.get_tdx_qe_identity(None, None).await {
        Ok(response) => {
            if response.issuer_chain.is_empty() {
                println!("✗ Failed: Empty issuer chain");
                results.push(("TDX QE Identity", false));
            } else {
                println!("✓ Success");
                println!(" Issuer chain: {} bytes", response.issuer_chain.len());
                results.push(("TDX QE Identity", true));
            }
        }
        Err(e) => {
            println!("✗ Failed: {:?}", e);
            results.push(("TDX QE Identity", false));
        }
    }

    sleep(Duration::from_millis(100)).await;

    println!("\n3. Testing PCK CRL Endpoints...");
    println!("================================");

    // 3.1 PCK CRL - Processor (PEM)
    print!(" - PCK CRL (Processor, PEM): ");
    match v4_client.get_pck_crl(CaType::Processor, None).await {
        Ok(response) => {
            if response.issuer_chain.is_empty() {
                println!("✗ Failed: Empty issuer chain");
                results.push(("PCK CRL (Processor)", false));
            } else {
                println!("✓ Success");
                println!(" Issuer chain: {} bytes", response.issuer_chain.len());
                let crl_str = String::from_utf8_lossy(&response.crl_data);
                if crl_str.contains("BEGIN X509 CRL") {
                    println!(" Format: PEM");
                }
                results.push(("PCK CRL (Processor)", true));
            }
        }
        Err(e) => {
            println!("✗ Failed: {:?}", e);
            results.push(("PCK CRL (Processor)", false));
        }
    }

    sleep(Duration::from_millis(100)).await;

    // 3.2 PCK CRL - Platform (DER)
    print!(" - PCK CRL (Platform, DER): ");
    match v4_client
        .get_pck_crl(CaType::Platform, Some(CrlEncoding::Der))
        .await
    {
        Ok(response) => {
            if response.issuer_chain.is_empty() {
                println!("✗ Failed: Empty issuer chain");
                results.push(("PCK CRL (Platform, DER)", false));
            } else {
                println!("✓ Success");
                println!(" Issuer chain: {} bytes", response.issuer_chain.len());
                println!(" CRL size: {} bytes", response.crl_data.len());
                results.push(("PCK CRL (Platform, DER)", true));
            }
        }
        Err(e) => {
            println!("✗ Failed: {:?}", e);
            results.push(("PCK CRL (Platform, DER)", false));
        }
    }

    sleep(Duration::from_millis(100)).await;
    println!("\n4. Testing FMSPC Endpoints (V4 only)...");
    println!("=======================================");

    // 4.1 Get FMSPCs (no filter)
    print!(" - Get FMSPCs (no filter): ");
    match v4_client.get_fmspcs(None).await {
        Ok(fmspcs_json) => {
            println!("✓ Success");
            let fmspcs: serde_json::Value = serde_json::from_str(&fmspcs_json)?;
            if let Some(arr) = fmspcs.as_array() {
                println!(" Total FMSPCs: {}", arr.len());
                // Show first few FMSPCs
                for (i, fmspc) in arr.iter().take(3).enumerate() {
                    if let (Some(fmspc_val), Some(platform)) =
                        (fmspc["fmspc"].as_str(), fmspc["platform"].as_str())
                    {
                        println!(" [{}] {} - {}", i + 1, fmspc_val, platform);
                    }
                }
                if arr.len() > 3 {
                    println!(" ... and {} more", arr.len() - 3);
                }
            }
            results.push(("Get FMSPCs", true));
        }
        Err(e) => {
            println!("✗ Failed: {:?}", e);
            results.push(("Get FMSPCs", false));
        }
    }

    sleep(Duration::from_millis(100)).await;

    // 4.2 Get FMSPCs with platform filter
    print!(" - Get FMSPCs (All platforms): ");
    match v4_client.get_fmspcs(Some(PlatformFilter::All)).await {
        Ok(_) => {
            println!("✓ Success");
            results.push(("Get FMSPCs (filtered)", true));
        }
        Err(e) => {
            println!("✗ Failed: {:?}", e);
            results.push(("Get FMSPCs (filtered)", false));
        }
    }

    sleep(Duration::from_millis(100)).await;
    println!("\n5. Testing TCB Evaluation Data Numbers (V4 only)...");
    println!("===================================================");

    // 5.1 SGX TCB Evaluation Data Numbers
    print!(" - SGX TCB Evaluation Data Numbers: ");
    match v4_client.get_sgx_tcb_evaluation_data_numbers().await {
        Ok(response) => {
            if response.issuer_chain.is_empty() {
                println!("✗ Failed: Empty issuer chain");
                results.push(("SGX TCB Eval Numbers", false));
            } else {
                println!("✓ Success");
                println!(" Issuer chain: {} bytes", response.issuer_chain.len());
                let data: serde_json::Value =
                    serde_json::from_str(&response.tcb_evaluation_data_numbers_json)?;
                if let Some(sgx_data) = data.get("sgx") {
                    println!(
                        " SGX entries: {}",
                        sgx_data.as_array().map(|a| a.len()).unwrap_or(0)
                    );
                }
                results.push(("SGX TCB Eval Numbers", true));
            }
        }
        Err(e) => {
            println!("✗ Failed: {:?}", e);
            results.push(("SGX TCB Eval Numbers", false));
        }
    }

    sleep(Duration::from_millis(100)).await;

    // 5.2 TDX TCB Evaluation Data Numbers
    print!(" - TDX TCB Evaluation Data Numbers: ");
    match v4_client.get_tdx_tcb_evaluation_data_numbers().await {
        Ok(response) => {
            if response.issuer_chain.is_empty() {
                println!("✗ Failed: Empty issuer chain");
                results.push(("TDX TCB Eval Numbers", false));
            } else {
                println!("✓ Success");
                println!(" Issuer chain: {} bytes", response.issuer_chain.len());
                let data: serde_json::Value =
                    serde_json::from_str(&response.tcb_evaluation_data_numbers_json)?;
                if let Some(tdx_data) = data.get("tdx") {
                    println!(
                        " TDX entries: {}",
                        tdx_data.as_array().map(|a| a.len()).unwrap_or(0)
                    );
                }
                results.push(("TDX TCB Eval Numbers", true));
            }
        }
        Err(e) => {
            println!("✗ Failed: {:?}", e);
            results.push(("TDX TCB Eval Numbers", false));
        }
    }

    sleep(Duration::from_millis(100)).await;
    println!("\n6. Testing PCK Certificate Endpoints...");
    println!("=======================================");

    /* // 6.1 PCK Certificate by PPID (usually requires valid data)
    print!(" - PCK Certificate by PPID: ");
    let test_ppid = "0000000000000000000000000000000000000000000000000000000000000000";
    let test_cpusvn = "00000000000000000000000000000000";
    let test_pcesvn = "0000";
    let test_pceid = "0000";
    match v4_client
        .get_pck_certificate_by_ppid(test_ppid, test_cpusvn, test_pcesvn, test_pceid, None, None)
        .await
    {
        Ok(_) => {
            println!("✓ Success");
            results.push(("PCK Certificate", true));
        }
        Err(e) => {
            // Expected to fail with test data
            match &e {
                IntelApiError::ApiError { status, .. } => {
                    println!("✗ Failed (Expected): HTTP {}", status);
                }
                _ => println!("✗ Failed: {:?}", e),
            }
            results.push(("PCK Certificate", false));
        }
    }
    sleep(Duration::from_millis(100)).await;
    */
    println!("\n7. Testing API Version Compatibility...");
    println!("=======================================");

    // 7.1 Try V4-only endpoint on V3
    print!(" - V4-only endpoint on V3 (should fail): ");
    match v3_client.get_fmspcs(None).await {
        Ok(_) => {
            println!("✗ Unexpected success!");
            results.push(("V3/V4 compatibility check", false));
        }
        Err(IntelApiError::UnsupportedApiVersion(_)) => {
            println!("✓ Correctly rejected");
            results.push(("V3/V4 compatibility check", true));
        }
        Err(e) => {
            println!("✗ Wrong error: {:?}", e);
            results.push(("V3/V4 compatibility check", false));
        }
    }

    println!("\n8. Testing Error Handling...");
    println!("============================");

    // 8.1 Invalid FMSPC
    print!(" - Invalid FMSPC format: ");
    match v4_client.get_sgx_tcb_info("invalid", None, None).await {
        Ok(_) => {
            println!("✗ Unexpected success!");
            results.push(("Error handling", false));
        }
        Err(IntelApiError::ApiError {
            status,
            error_code,
            error_message,
            ..
        }) => {
            println!("✓ Correctly handled");
            println!(" Status: {}", status);
            if let Some(code) = error_code {
                println!(" Error Code: {}", code);
            }
            if let Some(msg) = error_message {
                println!(" Error Message: {}", msg);
            }
            results.push(("Error handling", true));
        }
        Err(e) => {
            println!("✗ Unexpected error: {:?}", e);
            results.push(("Error handling", false));
        }
    }
    // Summary
    println!("\n\n=== Summary ===");
    println!("===============");

    let total = results.len();
    let successful = results.iter().filter(|(_, success)| *success).count();
    let failed = total - successful;

    println!("Total tests: {}", total);
    println!(
        "Successful: {} ({}%)",
        successful,
        (successful * 100) / total
    );
    println!("Failed: {} ({}%)", failed, (failed * 100) / total);

    println!("\nDetailed Results:");
    for (test, success) in &results {
        println!(" {} {}", if *success { "✓" } else { "✗" }, test);
    }

    println!("\nNote: Some failures are expected due to:");
    println!("- Test data not existing on Intel servers");
    println!("- PCK operations requiring valid platform data");
    println!("- Subscription key requirements for certain endpoints");

    Ok(())
}

@ -0,0 +1,694 @@
# Intel® SGX and Intel® TDX services - V3 API Documentation
## Intel® SGX and Intel® TDX Registration Service for Scalable Platforms
The API exposed by the Intel SGX registration service allows registering an Intel® SGX platform with multiple processor
packages as a single platform instance, which can later be remotely attested as a single entity. The minimum
version of the TLS protocol supported by the service is 1.2; connection attempts with earlier versions of TLS/SSL
will be dropped by the server.
### Register Platform
This API allows registering a multi-package SGX platform, covering both initial registration and TCB Recovery.
During registration, the platform manifest is authenticated by the Registration Service to verify that it originates
from a genuine, non-revoked SGX platform. If the platform configuration is successfully verified, platform
provisioning root keys are stored in the backend.

Stored platform provisioning root keys are later used to derive the public parts of Provisioning Certification Keys
(PCKs). These PCKs are distributed as x.509 certificates by the Provisioning Certification Service for Intel SGX
and are used during the remote attestation of the platform.
#### POST `https://api.trustedservices.intel.com/sgx/registration/v1/platform`
**Request**
**Headers**
Besides the headers explicitly mentioned below, the HTTP request may contain standard HTTP headers (e.g.,
Content-Length).
| Name | Required | Value | Description |
|:-------------|:---------|:---------------------------|:----------------------------------------|
| Content-Type | True | `application/octet-stream` | MIME type of the request body. |
**Body**
The body is a binary representation of the Platform Manifest structure: an opaque blob representing a registration
manifest for a multi-package platform. It contains platform provisioning root keys established by the platform
instance and data required to authenticate the platform as genuine and non-revoked.
**Example Request**
```bash
curl -X POST -H "Content-Type: application/octet-stream" --data-binary @platform_manifest "https://api.trustedservices.intel.com/sgx/registration/v1/platform"
```
**Response**
**Model**
The response is a Hex-encoded representation of the PPID for the registered platform instance (only if the HTTP Status
Code is 201; otherwise, the body is empty).
**Example Response**
```
001122334455667788AABBCCDDEEFF
```
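On a 201 response the body is just this hex string, which decodes to raw bytes with no external crates. A minimal std-only Rust sketch (the helper name `decode_hex_ppid` is ours, not part of any Intel SDK):

```rust
/// Decode a hex-encoded PPID response body into raw bytes.
/// Returns None for odd-length or non-hex input.
fn decode_hex_ppid(body: &str) -> Option<Vec<u8>> {
    if body.len() % 2 != 0 || !body.chars().all(|c| c.is_ascii_hexdigit()) {
        return None;
    }
    // Parse each two-character pair as one byte.
    (0..body.len())
        .step_by(2)
        .map(|i| u8::from_str_radix(&body[i..i + 2], 16).ok())
        .collect()
}
```

Since a PPID is 16 bytes, callers will typically also want to check the decoded length before using it.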
**Status Codes**
| Code | Headers | Body | Description |
|:-----|:--------|:-----|:------------|
| 201 | Request-ID: Randomly generated identifier for each request (for troubleshooting purposes). | Hex-encoded representation of the PPID. | Operation successful; a new platform instance has been registered. |
| 400 | Request-ID: Randomly generated identifier. <br> Error-Code and Error-Message: Additional details about the error. | | Invalid Platform Manifest. The request might be malformed, intended for a different server, contain an invalid, revoked, unrecognized, or incompatible package, contain an invalid manifest, or violate a key caching policy. The client should not repeat the request without modifications. |
| 415 | Request-ID: Randomly generated identifier. | | MIME type specified in the request is not supported. |
| 500 | Request-ID: Randomly generated identifier. | | Internal server error occurred. |
| 503 | Request-ID: Randomly generated identifier. | | Server is currently unable to process the request. The client should try again later. |
-----
### Add Package
This API adds new package(s) to an already registered platform instance. A subscription is required.
If successful, a Platform Membership Certificate is generated for each processor package in the Add Request.
#### POST `https://api.trustedservices.intel.com/sgx/registration/v1/package`
**Request**
**Headers**
| Name | Required | Value | Description |
|:--------------------------|:---------|:---------------------------|:--------------------------------------------------------------------------------|
| Ocp-Apim-Subscription-Key | True | | Subscription key providing access to this API, found in your Profile. |
| Content-Type | True | `application/octet-stream` | MIME type of the request body. |
**Body**
Binary representation of the Add Request structure: an opaque blob for adding new processor packages to an existing
platform instance.
**Example Request**
```bash
curl -X POST -H "Content-Type: application/octet-stream" -H "Ocp-Apim-Subscription-Key: {subscription_key}" --data-binary @add_package "https://api.trustedservices.intel.com/sgx/registration/v1/package"
```
**Response**
**Model**
For a 200 HTTP Status Code, the response is a fixed-size array (8 elements) containing binary representations of
Platform Membership Certificate structures. Certificates are populated sequentially, starting at index 0, with
the remaining elements zeroed.
**Example Response (hex-encoded)**
```
E4B0E8B80F8B49184488F77273550840984816854488B7CFRP...
```
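To make the 8-slot layout concrete, the sketch below splits a response body into equal slots and keeps the populated (non-zero) ones. The slot size is a stand-in parameter; the real Platform Membership Certificate size comes from Intel's structure definitions, and the `CertificateCount` response header is the authoritative count:

```rust
/// Split the fixed 8-slot response body into per-certificate chunks and
/// return only the populated (non-zero) slots. `cert_size` is the size of
/// one Platform Membership Certificate structure in bytes (a placeholder
/// here; see Intel's structure definitions for the real value).
fn populated_certs(body: &[u8], cert_size: usize) -> Vec<Vec<u8>> {
    body.chunks(cert_size)
        .take(8) // the response holds at most 8 slots
        .filter(|slot| slot.iter().any(|&b| b != 0))
        .map(|slot| slot.to_vec())
        .collect()
}
```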
**Status Codes**
| Code | Headers | Body | Description |
|:-----|:--------|:-----|:------------|
| 200 | Content-Type: `application/octet-stream`. <br> Request-ID: Random identifier. <br> CertificateCount: Number of certificates returned. | Fixed-size array of Platform Membership Certificates. | Operation successful. Packages added. |
| 400 | Request-ID: Random identifier. <br> Error-Code and Error-Message: Details on the error. | | Invalid Add Request payload. Can be due to malformed syntax, platform not found, an invalid, revoked, or unrecognized package, or an invalid AddRequest. |
| 401 | Request-ID: Random identifier. | | Failed to authenticate or authorize the request. |
| 415 | Request-ID: Random identifier. | | MIME type specified is not supported. |
| 500 | Request-ID: Random identifier. | | Internal server error occurred. |
| 503 | Request-ID: Random identifier. | | Server is currently unable to process the request. |
-----
## Intel® SGX Provisioning Certification Service for ECDSA Attestation
Download the Provisioning Certification Root CA Certificate (API v3) here:

* [DER](https://certificates.trustedservices.intel.com/Intel_SGX_Provisioning_Certification_RootCA.cer)
* [PEM](https://certificates.trustedservices.intel.com/intel_SGX_Provisioning_Certification_RootCA.pem)
### Get PCK Certificate V3
This API allows requesting a single PCK certificate by specifying PPID and SVNs or Platform Manifest and SVNs.
A subscription is required.

* **Using PPID and SVNs**:
    * Single-socket platforms: No prerequisites.
    * Multi-socket platforms: Requires previous registration via the `Register Platform` API. Platform root keys
      must be persistently stored, and the `Keys Caching Policy` must be set to `true`. The service
      uses a PCK public key derived from the stored keys.
* **Using Platform Manifest and SVNs**:
    * Multi-socket platforms: Does not require previous registration and does not require keys to be
      persistently stored. The service uses a PCK public key derived from the provided manifest.
      Depending on the `Keys Caching Policy`, keys might be stored.
        * **Direct Registration** (`Register Platform` first): Sets the policy to always store keys. Keys are
          stored when the manifest is sent. The `CachedKeys` flag in PCK Certificates is set to `true`.
        * **Indirect Registration** (`Get PCK Certificate(s)` first): Sets the policy to never store keys. Keys
          are discarded after use. Standard metadata is stored, but `Register Platform` cannot be used
          anymore. The `CachedKeys` flag is set to `false`.

The PCS returns the PCK Certificate representing the TCB level with the highest security posture based on CPUSVN and
PCE ISVSVN.
#### GET `https://api.trustedservices.intel.com/sgx/certification/v3/pckcert`
**Request**
| Name | Type | Request Type | Required | Pattern | Description |
|:-----|:-----|:-------------|:---------|:--------|:------------|
| Ocp-Apim-Subscription-Key | String | Header | True | | Subscription key. |
| PPID-Encryption-Key | String | Header | False | | Type of key used to encrypt the PPID (Default: `RSA-3072`). |
| encrypted_ppid | String | Query | True | `[0-9a-fA-F]{768}$` | Base16-encoded PPID (encrypted with PPIDEK). |
| cpusvn | String | Query | True | `[0-9a-fA-F]{32}$` | Base16-encoded CPUSVN (16 bytes). |
| pcesvn | String | Query | True | `[0-9a-fA-F]{4}$` | Base16-encoded PCESVN (2 bytes, little endian). |
| pceid | String | Query | True | `[0-9a-fA-F]{4}$` | Base16-encoded PCE-ID (2 bytes, little endian). |
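The patterns above are simple length-plus-hex checks, so parameters can be validated client-side before the request goes out. A std-only Rust sketch (`pckcert_query` is an illustrative helper, not part of any Intel client):

```rust
/// True if `s` is exactly `len` Base16 characters, matching the
/// `[0-9a-fA-F]{len}$` patterns from the parameter table.
fn is_hex_of_len(s: &str, len: usize) -> bool {
    s.len() == len && s.chars().all(|c| c.is_ascii_hexdigit())
}

/// Build the query string for GET /pckcert, or None if any parameter
/// fails its pattern.
fn pckcert_query(encrypted_ppid: &str, cpusvn: &str, pcesvn: &str, pceid: &str) -> Option<String> {
    (is_hex_of_len(encrypted_ppid, 768)
        && is_hex_of_len(cpusvn, 32)
        && is_hex_of_len(pcesvn, 4)
        && is_hex_of_len(pceid, 4))
    .then(|| {
        format!("encrypted_ppid={encrypted_ppid}&cpusvn={cpusvn}&pcesvn={pcesvn}&pceid={pceid}")
    })
}
```

Rejecting malformed values locally avoids burning a request (and a possible 429) on an input the server would answer with a 400 anyway.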
**Example Request**
```bash
curl -X GET "https://api.trustedservices.intel.com/sgx/certification/v3/pckcert?encrypted_ppid=...&cpusvn=...&pcesvn=...&pceid=..." -H "Ocp-Apim-Subscription-Key: {subscription_key}"
```
**Response**: Response description can be found [here](#response-get-and-post-1).
#### POST `https://api.trustedservices.intel.com/sgx/certification/v3/pckcert`
**Request**
| Name | Type | Request Type | Required | Pattern | Description |
|:--------------------------|:-------|:-------------|:---------|:-----------------------------|:---------------------------------------------|
| Ocp-Apim-Subscription-Key | String | Header | True | | Subscription key. |
| Content-Type | String | Header | True | | Content type (`application/json`). |
| platformManifest | String | Body Field | True | `[0-9a-fA-F]{16882,112884}$` | Base16-encoded Platform Manifest. |
| cpusvn | String | Body Field | True | `[0-9a-fA-F]{32}$` | Base16-encoded CPUSVN. |
| pcesvn | String | Body Field | True | `[0-9a-fA-F]{4}$` | Base16-encoded PCESVN. |
| pceid | String | Body Field | True | `[0-9a-fA-F]{4}$` | Base16-encoded PCE-ID. |
**Body**
```json
{
"platformManifest": "...",
"cpusvn": "...",
"pcesvn": "...",
"pceid": "..."
}
```
**Example Request**
```bash
curl -X POST -d '{"platformManifest": "...", "cpusvn": "...", "pcesvn": "...", "pceid": "..."}' -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: {subscription_key}" "https://api.trustedservices.intel.com/sgx/certification/v3/pckcert"
```
**Response (GET and POST)**
**Model**: PckCert (X-PEM-FILE) - PEM-encoded SGX PCK Certificate.
**Example Response**
```pem
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
```
**Status Codes**
| Code | Model | Headers | Description |
|:-----|:------|:--------|:------------|
| 200 | PckCert | Content-Type: `application/x-pem-file`. <br> Request-ID. <br> SGX-PCK-Certificate-Issuer-Chain: URL-encoded issuer chain. <br> SGX-TCBm: Hex-encoded CPUSVN and PCESVN. <br> SGX-FMSPC: Hex-encoded FMSPC. <br> SGX-PCK-Certificate-CA-Type: 'processor' or 'platform'. <br> Warning: Optional message. | Operation successful. |
| 400 | | Request-ID. <br> Warning. | Invalid request parameters. |
| 401 | | Request-ID. <br> Warning. | Failed to authenticate or authorize the request. |
| 404 | | Request-ID. <br> Warning. | PCK Certificate not found. Reasons: unsupported PPID/PCE-ID, TCB level too low, or Platform Manifest not registered/updated. |
| 500 | | Request-ID. <br> Warning. | Internal server error occurred. |
| 503 | | Request-ID. <br> Warning. | Server is currently unable to process the request. |
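The `SGX-TCBm` header packs the hex-encoded CPUSVN (16 bytes) followed by the PCESVN (2 bytes) into one 36-character value; splitting it back out is straightforward. A minimal sketch, assuming that concatenated layout:

```rust
/// Split an SGX-TCBm header value into (cpusvn, pcesvn) hex substrings.
/// Expects 32 hex characters of CPUSVN followed by 4 of PCESVN.
fn parse_tcbm(tcbm: &str) -> Option<(&str, &str)> {
    if tcbm.len() != 36 || !tcbm.chars().all(|c| c.is_ascii_hexdigit()) {
        return None;
    }
    Some((&tcbm[..32], &tcbm[32..]))
}
```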
-----
### Get PCK Certificates V3
This API retrieves PCK certificates for all configured TCB levels using PPID or Platform Manifest.
A subscription is required.

* **Using PPID**:
    * Single-socket platforms: No prerequisites.
    * Multi-socket platforms: Requires prior registration via the `Register Platform` API. Keys must be
      persistently stored, and the `Keys Caching Policy` must be `true`. The PCS uses the stored keys.
* **Using Platform Manifest**:
    * Multi-socket platforms: Does not require prior registration or persistent
      storage. The PCS uses keys from the manifest; the caching policy determines whether they are stored.
        * **Direct Registration**: Always stores keys; `CachedKeys` is set to `true`.
        * **Indirect Registration**: Never stores keys; `CachedKeys` is set to `false`.
#### GET `https://api.trustedservices.intel.com/sgx/certification/v3/pckcerts`
Retrieves certificates based on encrypted PPID and PCE-ID.
**Request**
| Name | Type | Request Type | Required | Pattern | Description |
|:-----|:-----|:-------------|:---------|:--------|:------------|
| Ocp-Apim-Subscription-Key | String | Header | True | | Subscription key. |
| PPID-Encryption-Key | String | Header | False | | Key type used to encrypt the PPID (Default: `RSA-3072`). |
| encrypted_ppid | String | Query | True | `[0-9a-fA-F]{768}$` | Base16-encoded PPID. |
| pceid | String | Query | True | `[0-9a-fA-F]{4}$` | Base16-encoded PCE-ID. |
**Example Request**
```bash
curl -X GET "https://api.trustedservices.intel.com/sgx/certification/v3/pckcerts?encrypted_ppid=...&pceid=..." -H "Ocp-Apim-Subscription-Key: {subscription_key}"
```
**Response**: Response description can be found [here](#response-get-and-post-2).
#### GET `https://api.trustedservices.intel.com/sgx/certification/v3/pckcerts/config`
Retrieves certificates for a specific CPUSVN (multi-package platforms only).
**Request**
| Name | Type | Request Type | Required | Pattern | Description |
|:-----|:-----|:-------------|:---------|:--------|:------------|
| Ocp-Apim-Subscription-Key | String | Header | True | | Subscription key. |
| PPID-Encryption-Key | String | Header | False | | Key type used to encrypt the PPID. |
| encrypted_ppid | String | Query | True | `[0-9a-fA-F]{768}$` | Base16-encoded PPID. |
| pceid | String | Query | True | `[0-9a-fA-F]{4}$` | Base16-encoded PCE-ID. |
| cpusvn | String | Query | True | `[0-9a-fA-F]{32}$` | Base16-encoded CPUSVN. |
**Example Request**
```bash
curl -X GET "[https://api.trustedservices.intel.com/sgx/certification/v3/pckcerts/config?encrypted_ppid=...&pceid=...&cpusvn=](https://api.trustedservices.intel.com/sgx/certification/v3/pckcerts/config?encrypted_ppid=...&pceid=...&cpusvn=)..." -H "Ocp-Apim-Subscription-Key: {subscription_key}"
```
**Response**: Response description can be
found [here](https://www.google.com/search?q=%23response-get-and-post-2)[cite: 57].
#### POST `https://api.trustedservices.intel.com/sgx/certification/v3/pckcerts`
Retrieves certificates based on Platform Manifest and PCE-ID (multi-package platforms only).
**Request**
| Name | Type | Request Type | Required | Pattern | Description |
|:--------------------------|:-------|:-------------|:---------|:-----------------------------|:----------------------------------|
| Ocp-Apim-Subscription-Key | String | Header | True | | Subscription key. |
| Content-Type | String | Header | True | `application/json` | Content Type. |
| platformManifest | String | Body Field | True | `[0-9a-fA-F]{16882,112884}$` | Base16-encoded Platform Manifest. |
| pceid | String | Body Field | True | `[0-9a-fA-F]{4}$` | Base16-encoded PCE-ID. |
**Body**
```json
{
"platformManifest": "...",
"pceid": "..."
}
```
**Example Request**
```bash
curl -X POST -d '{"platformManifest": "...", "pceid": "..."}' -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: {subscription_key}" "https://api.trustedservices.intel.com/sgx/certification/v3/pckcerts"
```
**Response**: See **Response (GET and POST)** below.
#### POST `https://api.trustedservices.intel.com/sgx/certification/v3/pckcerts/config`
Retrieves certificates for a specific CPUSVN using the Platform Manifest (multi-package platforms only).
**Request**
| Name | Type | Request Type | Required | Pattern | Description |
|:--------------------------|:-------|:-------------|:---------|:-----------------------------|:----------------------------------|
| Ocp-Apim-Subscription-Key | String | Header | True | | Subscription key. |
| Content-Type | String | Header | True | `application/json` | Content Type. |
| platformManifest | String | Body Field | True | `[0-9a-fA-F]{16882,112884}$` | Base16-encoded Platform Manifest. |
| cpusvn | String | Body Field | True | `[0-9a-fA-F]{32}$` | Base16-encoded CPUSVN. |
| pceid | String | Body Field | True | `[0-9a-fA-F]{4}$` | Base16-encoded PCE-ID. |
**Body**
```json
{
"platformManifest": "...",
"cpusvn": "...",
"pceid": "..."
}
```
**Example Request**
```bash
curl -X POST -d '{"platformManifest": "...", "cpusvn": "...", "pceid": "..."}' -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: {subscription_key}" "https://api.trustedservices.intel.com/sgx/certification/v3/pckcerts/config"
```
**Response (GET and POST)**
**Model**: PckCerts (JSON) - Array of data structures with `tcb`, `tcbm`, and `cert` fields.
**PckCerts Structure**
```json
[
  {
    "tcb": {
      "sgxtcbcomp01svn": 0,  // Integer
      "sgxtcbcomp02svn": 0,  // Integer
      // ... (03 to 16)
      "pcesvn": 0            // Integer
    },
    "tcbm": "...",           // String, hex-encoded TCBm
    "cert": "..."            // String, PEM-encoded certificate, or "Not available"
  }
]
```
**Example Response**
```json
[
{
"tcb": {
"sgxtcbcomp01svn": 0,
"sgxtcbcomp02svn": 0,
"sgxtcbcomp03svn": 0,
"sgxtcbcomp04svn": 0,
"sgxtcbcomp05svn": 0,
"sgxtcbcomp06svn": 0,
"sgxtcbcomp07svn": 0,
"sgxtcbcomp08svn": 0,
"sgxtcbcomp09svn": 0,
"sgxtcbcomp10svn": 0,
"sgxtcbcomp11svn": 0,
"sgxtcbcomp12svn": 0,
"sgxtcbcomp13svn": 0,
"sgxtcbcomp14svn": 0,
"sgxtcbcomp15svn": 0,
"sgxtcbcomp16svn": 0,
"pcesvn": 0
},
"tcm": "...",
"cert": "-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----"
}
]
```
**Status Codes**
| Code | Model | Headers | Description |
|:-----|:---------|:--------|:------------|
| 200 | PckCerts | Content-Type: `application/json`. <br> Request-ID. <br> SGX-PCK-Certificate-Issuer-Chain: Issuer chain. <br> SGX-FMSPC. <br> SGX-PCK-Certificate-CA-Type. <br> Warning. | Operation successful. |
| 400 | | Request-ID. <br> Warning. | Invalid request parameters. |
| 401 | | Request-ID. <br> Warning. | Failed to authenticate or authorize the request. |
| 404 | | Request-ID. <br> Warning. | PCK Certificate not found. Reasons: PPID/PCE-ID not supported, or Platform Manifest not registered. |
| 500 | | Request-ID. <br> Warning. | Internal server error occurred. |
| 503 | | Request-ID. <br> Warning. | Server is currently unable to process the request. |
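Since the PckCerts array may contain placeholder entries whose `cert` field is the literal string `"Not available"`, a client typically filters these out before use. A minimal Python sketch (hypothetical helper, not part of any Intel SDK):

```python
# Hypothetical helper: filter a parsed PckCerts response down to the entries
# whose certificate was actually issued. The PCS uses the literal string
# "Not available" as a placeholder for certificates it could not provide.
def available_certs(pckcerts):
    return [entry for entry in pckcerts if entry["cert"] != "Not available"]
```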
-----
### Get Revocation List V3
Retrieves the X.509 Certificate Revocation List (CRL) for revoked SGX PCK Certificates. CRLs are issued by the
Intel SGX Processor CA or the Intel SGX Platform CA.
#### GET `https://api.trustedservices.intel.com/sgx/certification/v3/pckcrl`
**Request**
| Name | Type | Request Type | Required | Pattern | Description |
|:---------|:-------|:-------------|:---------|:------------------------|:-------------------------|
| ca | String | Query | True | `(processor\|platform)` | CA that issued the CRL. |
| encoding | String | Query | False | `(pem\|der)` | Encoding (Default: PEM). |
**Example Request**
```bash
curl -X GET "[https://api.trustedservices.intel.com/sgx/certification/v3/pckcrl?ca=platform&encoding=der](https://api.trustedservices.intel.com/sgx/certification/v3/pckcrl?ca=platform&encoding=der)"
```
**Response**
**Model**: PckCrl (X-PEM-FILE or PKIX-CRL) - PEM- or DER-encoded CRL.
**Example Response**
```
-----BEGIN X509 CRL-----
...
-----END X509 CRL-----
```
**Status Codes**
| Code | Model | Headers | Description |
|:-----|:-------|:--------|:------------|
| 200 | PckCrl | Content-Type: `application/x-pem-file` (PEM) or `application/pkix-crl` (DER). <br> Request-ID. <br> SGX-PCK-CRL-Issuer-Chain: Issuer chain. <br> Warning. | Operation successful. |
| 400 | | Request-ID. <br> Warning. | Invalid request parameters. |
| 401 | | Request-ID. <br> Warning. | Failed to authenticate or authorize. |
| 500 | | Request-ID. <br> Warning. | Internal server error occurred. |
| 503 | | Request-ID. <br> Warning. | Server is currently unable to process the request. |
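For illustration, a small Python helper (hypothetical, standard library only) that assembles a valid Get Revocation List V3 URL from the parameters in the table above:

```python
from urllib.parse import urlencode

PCKCRL_BASE = "https://api.trustedservices.intel.com/sgx/certification/v3/pckcrl"

def pckcrl_url(ca, encoding=None):
    # "ca" is mandatory and restricted to the two issuing CAs; "encoding"
    # is optional and defaults to PEM on the server side.
    if ca not in ("processor", "platform"):
        raise ValueError("ca must be 'processor' or 'platform'")
    params = {"ca": ca}
    if encoding is not None:
        if encoding not in ("pem", "der"):
            raise ValueError("encoding must be 'pem' or 'der'")
        params["encoding"] = encoding
    return PCKCRL_BASE + "?" + urlencode(params)
```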
-----
### Get TCB Info V3
Retrieves SGX TCB information for a given FMSPC.
**Algorithm for TCB Status:**
1. Retrieve the FMSPC from the SGX PCK Certificate.
2. Retrieve the TCB Info matching the FMSPC.
3. Iterate through the sorted TCB Levels:
    * Compare all SGX TCB Comp SVNs (01-16) from the certificate with the corresponding TCB Level values. If all are >=, proceed. Otherwise, move to the next item.
    * Compare the PCESVN from the certificate with the TCB Level value. If >=, read the status. Otherwise, move to the next item.
4. If no match is found, the TCB Level is not supported.
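The steps above can be sketched in Python (illustrative only; `cert_tcb` stands for the SVN values already extracted from the PCK certificate, and `tcb_levels` for the sorted `tcbLevels` array from TCB Info):

```python
# Sketch of the TCB status lookup. `cert_tcb` maps "sgxtcbcompXXsvn" (01-16)
# and "pcesvn" to integers; `tcb_levels` is sorted from highest to lowest TCB.
def tcb_status(cert_tcb, tcb_levels):
    for level in tcb_levels:
        comps_ok = all(
            cert_tcb[f"sgxtcbcomp{i:02d}svn"] >= level["tcb"][f"sgxtcbcomp{i:02d}svn"]
            for i in range(1, 17)
        )
        if comps_ok and cert_tcb["pcesvn"] >= level["tcb"]["pcesvn"]:
            return level["tcbStatus"]
    return None  # no matching level: this TCB is not supported
```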
#### GET `https://api.trustedservices.intel.com/sgx/certification/v3/tcb`
**Request**
| Name | Type | Request Type | Required | Pattern | Description |
|:------------------------|:-------|:-------------|:---------|:--------------------|:------------|
| fmspc | String | Query | True | `[0-9a-fA-F]{12}$` | Base16-encoded FMSPC (6 bytes). |
| update | String | Query | False | `(early\|standard)` | Update type (Default: `standard`). `early` provides early access to updated TCB Info; `standard` provides standard access. Cannot be used with `tcbEvaluationDataNumber`. |
| tcbEvaluationDataNumber | Number | Query | False | `\d+$` | Specifies a TCB Evaluation Data Number, allowing a specific version of TCB Info to be fetched; returns 410 if that version is no longer available, 404 if it is not available. Cannot be used with `update`. |
**Example Requests**
```bash
curl -X GET "[https://api.trustedservices.intel.com/sgx/certification/v3/tcb?fmspc=...&update=early](https://api.trustedservices.intel.com/sgx/certification/v3/tcb?fmspc=...&update=early)"
curl -X GET "[https://api.trustedservices.intel.com/sgx/certification/v3/tcb?fmspc=...&tcbEvaluationDataNumber=](https://api.trustedservices.intel.com/sgx/certification/v3/tcb?fmspc=...&tcbEvaluationDataNumber=)..."
```
**Response**
**Model**: TcbInfoV2 (JSON) - SGX TCB Info.
**TcbInfoV2 Structure**
* `version`: Integer.
* `issueDate`: String (date-time, ISO 8601 UTC).
* `nextUpdate`: String (date-time, ISO 8601 UTC).
* `fmspc`: String (Base16-encoded FMSPC).
* `pceId`: String (Base16-encoded PCE-ID).
* `tcbType`: Integer.
* `tcbEvaluationDataNumber`: Integer; a monotonically increasing sequence number that tracks updates to the TCB
  evaluation data set. It is synchronized across TCB Info and Enclave Identities and determines which data supersedes
  another.
* `tcbLevels`: Array of TCB level objects.
    * `tcb`: Object with `sgxtcbcompXXsvn` (Integer) and `pcesvn` (Integer) fields.
    * `tcbDate`: String (date-time, ISO 8601 UTC). If security advisories with enforced mitigations exist after this
      date, the status will not be `UpToDate`.
    * `tcbStatus`: String (`UpToDate`, `HardeningNeeded`, `ConfigurationNeeded`, `ConfigurationAndHardeningNeeded`,
      `OutOfDate`, `OutOfDateConfigurationNeeded`, `Revoked`).
    * `advisoryIDs`: Array of strings (e.g., `INTEL-SA-XXXXX`, `INTEL-DOC-XXXXX`).
* `signature`: String (Base16-encoded).
**Example Response**
```json
{
"tcbInfo": {
"version": 2,
"issueDate": "2018-07-30T12:00:00Z",
"nextUpdate": "2018-08-30T12:00:00Z",
"fmspc": "...",
"pceId": "0000",
"tcbType": 1,
"tcbEvaluationDataNumber": 7,
"tcbLevels": [
{
"tcb": {
"sgxtcbcomp01svn": 0,
/* ... */
"pcesvn": 0
},
"tcbDate": "2018-07-11T12:00:00Z",
"tcbStatus": "UpToDate",
"advisoryIDs": [
"INTEL-SA-00070",
"INTEL-SA-00076"
]
}
]
},
"signature": "..."
}
```
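Because `tcbEvaluationDataNumber` is monotonically increasing and synchronized across collateral, deciding which of two TCB Info structures supersedes the other reduces to an integer comparison (hypothetical helper, operating on the parsed `tcbInfo` object shown above):

```python
# Hypothetical helper: the collateral with the larger tcbEvaluationDataNumber
# supersedes the other; equal values describe the same evaluation data set.
def newer_tcb_info(a, b):
    return a if a["tcbEvaluationDataNumber"] >= b["tcbEvaluationDataNumber"] else b
```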
**Status Codes**
| Code | Model | Headers | Description |
|:-----|:----------|:--------|:------------|
| 200 | TcbInfoV2 | Content-Type: `application/json`. <br> Request-ID. <br> SGX-TCB-Info-Issuer-Chain: Issuer chain. <br> Warning. | Operation successful. |
| 400 | | Request-ID. <br> Warning. | Invalid request (bad FMSPC or conflicting `update`/`tcbEvaluationDataNumber`). |
| 401 | | Request-ID. <br> Warning. | Failed to authenticate or authorize. |
| 404 | | Request-ID. <br> Warning. | TCB info not found for the given FMSPC or `tcbEvaluationDataNumber`. |
| 410 | | Request-ID. <br> Warning. | TCB Information for the given `tcbEvaluationDataNumber` is no longer available. |
| 500 | | Request-ID. <br> Warning. | Internal server error. |
| 503 | | Request-ID. <br> Warning. | Server is currently unable to process the request. |
-----
### Get Quoting Enclave Identity V3
Verifies whether an SGX Enclave Report matches a valid Quoting Enclave (QE) identity.
**Algorithm:**
1. Retrieve and validate the QE Identity.
2. Compare the SGX Enclave Report against the QE Identity:
    * Verify that `MRSIGNER` equals `mrsigner`.
    * Verify that `ISVPRODID` equals `isvprodid`.
    * Verify that `(miscselectMask & MISCSELECT)` equals `miscselect`.
    * Verify that `(attributesMask & ATTRIBUTES)` equals `attributes`.
3. If any check fails, the identity does not match.
4. Determine the TCB status:
    * Retrieve the TCB Levels.
    * Find the first TCB Level (iterating in descending ISVSVN order) whose ISVSVN is <= the Enclave Report ISVSVN.
    * Read its `tcbStatus`; if no level matches, the TCB is not supported.
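A Python sketch of the identity comparison and ISVSVN-based status lookup described above (illustrative only; byte fields are assumed to be already decoded from their Base16 form):

```python
# Sketch of the QE identity checks: field equality plus masked comparison
# of MISCSELECT and ATTRIBUTES, then an ISVSVN-based TCB status lookup.
def matches_qe_identity(report, ident):
    def masked(value, mask):
        return bytes(v & m for v, m in zip(value, mask))
    return (
        report["mrsigner"] == ident["mrsigner"]
        and report["isvprodid"] == ident["isvprodid"]
        and masked(report["miscselect"], ident["miscselectMask"]) == ident["miscselect"]
        and masked(report["attributes"], ident["attributesMask"]) == ident["attributes"]
    )

def qe_tcb_status(isvsvn, tcb_levels):
    # tcb_levels is assumed sorted by isvsvn, descending
    for level in tcb_levels:
        if isvsvn >= level["tcb"]["isvsvn"]:
            return level["tcbStatus"]
    return None  # no matching level: unsupported
```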
#### GET `https://api.trustedservices.intel.com/sgx/certification/v3/qe/identity`
**Request**
| Name | Type | Request Type | Required | Pattern | Description |
|:------------------------|:-------|:-------------|:---------|:--------------------|:------------|
| update | String | Query | False | `(early\|standard)` | Update type (Default: `standard`). Cannot be used with `tcbEvaluationDataNumber`. |
| tcbEvaluationDataNumber | Number | Query | False | `\d+` | Specifies a TCB Evaluation Data Number. Cannot be used with `update`. |
**Example Requests**
```bash
curl -X GET "[https://api.trustedservices.intel.com/sgx/certification/v3/qe/identity?update=early](https://api.trustedservices.intel.com/sgx/certification/v3/qe/identity?update=early)"
curl -X GET "[https://api.trustedservices.intel.com/sgx/certification/v3/qe/identity?tcbEvaluationDataNumber=](https://api.trustedservices.intel.com/sgx/certification/v3/qe/identity?tcbEvaluationDataNumber=)..."
```
**Response**
**Model**: QEIdentityV2 (JSON) - QE Identity data.
**QEIdentityV2 Structure**
* `enclaveIdentity`:
    * `id`: String (`QE`, `QVE`, or `QAE`).
    * `version`: Integer.
    * `issueDate`, `nextUpdate`: String (date-time, ISO 8601 UTC).
    * `tcbEvaluationDataNumber`: Integer.
    * `miscselect`, `miscselectMask`: String (Base16-encoded).
    * `attributes`, `attributesMask`: String (Base16-encoded).
    * `mrsigner`: String (Base16-encoded).
    * `isvprodid`: Integer.
    * `tcbLevels`: Array of TCB level objects.
        * `tcb`: Object with an `isvsvn` (Integer) field.
        * `tcbDate`: String (date-time, ISO 8601 UTC).
        * `tcbStatus`: String (`UpToDate`, `OutOfDate`, `Revoked`).
        * `advisoryIDs`: Array of strings.
* `signature`: String (Base16-encoded).
**Status Codes**
| Code | Model | Headers | Description |
|:-----|:-------------|:--------|:------------|
| 200 | QEIdentityV2 | Content-Type: `application/json`. <br> Request-ID. <br> SGX-Enclave-Identity-Issuer-Chain: Issuer chain. <br> Warning. | Operation successful. |
| 400 | | Request-ID. <br> Warning. | Invalid request (bad parameters or conflicting `update`/`tcbEvaluationDataNumber`). |
| 401 | | Request-ID. <br> Warning. | Failed to authenticate or authorize. |
| 404 | | Request-ID. <br> Warning. | QE identity not found for the given `tcbEvaluationDataNumber`. |
| 410 | | Request-ID. <br> Warning. | QE identity for the given `tcbEvaluationDataNumber` is no longer available. |
| 500 | | Request-ID. <br> Warning. | Internal server error. |
| 503 | | Request-ID. <br> Warning. | Server is currently unable to process the request. |
-----
### Get Quote Verification Enclave Identity V3
Verifies whether an SGX Enclave Report matches a valid Quote Verification Enclave (QVE) identity.
**Algorithm:**
1. Retrieve and validate the QVE Identity.
2. Compare the Enclave Report fields: `MRSIGNER`, `ISVPRODID`, `MISCSELECT` (with mask), and `ATTRIBUTES` (with mask).
3. If any check fails, the identity does not match.
4. Determine the TCB status via ISVSVN comparison.
#### GET `https://api.trustedservices.intel.com/sgx/certification/v3/qve/identity`
**Request**: Same parameters as `Get Quoting Enclave Identity V3` (`update` and `tcbEvaluationDataNumber`).
**Response**: QVEIdentityV2 (JSON) - QVE Identity data. The structure is analogous to the QE Identity.
**Status Codes**: Same as `Get Quoting Enclave Identity V3`.
-----
### Get Quote Appraisal Enclave Identity V3
Verifies whether an SGX Enclave Report matches a valid Quote Appraisal Enclave (QAE) identity.
**Algorithm:**
1. Retrieve and validate the QAE Identity.
2. Compare the Enclave Report fields: `MRSIGNER`, `ISVPRODID`, `MISCSELECT` (with mask), and `ATTRIBUTES` (with mask).
3. If any check fails, the identity does not match.
4. Determine the TCB status via ISVSVN comparison.
#### GET `https://api.trustedservices.intel.com/sgx/certification/v3/qae/identity`
**Request**: Same parameters as `Get Quoting Enclave Identity V3` (`update` and `tcbEvaluationDataNumber`).
**Response**: QAEIdentityV2 (JSON) - QAE Identity data. The structure is analogous to the QE Identity.
**Status Codes**: Same as `Get Quoting Enclave Identity V3`.
-----
### PCK Certificate and CRL Specification
This document specifies the hierarchy and format of the X.509 v3 certificates and v2 CRLs for Provisioning
Certification Keys.
Enforcement of a mitigation means the attestation process can detect its presence and the attestation result will
differ accordingly. Intel offers `standard` (default) and `early` update parameters, which affect when enforcement
occurs. The attestation result is an objective assessment; relying parties can take additional factors into account and
may choose to trust an `OutOfDate` platform, accepting the associated risks. Intel will strive to communicate any
schedule deviations.

-----
This document outlines the API for Intel® SGX and Intel® TDX services, focusing on platform registration and
provisioning certification using ECDSA attestation.
## Intel® SGX and Intel® TDX Registration Service for Scalable Platforms
The Intel® SGX and Intel® TDX Registration Service API enables the registration of Intel® SGX platforms with multiple
processor packages as a unified platform instance. This allows these platforms to be remotely attested as a single
entity. Note that the service enforces a minimum TLS protocol version of 1.2; connection attempts with older TLS/SSL
versions will be rejected.
### Register Platform
This API facilitates the registration of multi-package SGX platforms, covering both initial registration and TCB
(Trusted Computing Base) recovery. During this process, the Registration Service authenticates the platform manifest to
confirm that it originates from a genuine, non-revoked SGX platform. If the platform configuration passes verification,
its provisioning root keys are securely stored. These stored keys are subsequently used to derive the public components
of the Provisioning Certification Keys (PCKs), which are distributed as X.509 certificates by the Provisioning
Certification Service and are integral to the platform's remote attestation.
**POST** `https://api.trustedservices.intel.com/sgx/registration/v1/platform`
**Request**
* **Headers**: In addition to standard HTTP headers (like `Content-Length`), the following is required:
| Name | Required | Value | Description |
|:-------------|:---------|:-------------------------|:-------------------------------|
| Content-Type | True | application/octet-stream | MIME type of the request body. |
* **Body**: The request body must be a binary representation of the Platform Manifest structure. This is an opaque blob
containing the registration manifest for a multi-package platform. It includes the platform provisioning root keys
established by the platform instance and the data necessary to authenticate it as a genuine, non-revoked SGX platform.
* **Example Request**:
```bash
curl -v -X POST "Content-Type: application/octet-stream" --data-binary @platform_manifest.bin "https://api.trustedservices.intel.com/sgx/registration/v1/platform" [cite: 1]
```
**Response**
* **Model**: The response body contains the hex-encoded PPID (Platform Provisioning ID) of the registered platform
instance, but only when the HTTP status code is 201. Otherwise, the body is empty.
* **Example Response**:
```
00112233445566778899AABBCCDDEEFF
```
* **Status Codes**:
| Code | Headers | Body | Description |
|:-----|:--------|:-----|:------------|
| 201 | `Request-ID`: Randomly generated identifier for troubleshooting. | Hex-encoded PPID | Operation successful; a new platform instance has been registered. |
| 400 | `Request-ID`: Randomly generated identifier. <br> `Error-Code` & `Error-Message`: Details on the error (e.g., `InvalidRequestSyntax`, `InvalidRegistrationServer`, `InvalidOrRevokedPackage`, `PackageNotFound`, `IncompatiblePackage`, `InvalidPlatformManifest`, `CachedKeysPolicyViolation`). | | Invalid Platform Manifest. The client should not retry without modifications. |
| 415 | `Request-ID`: Randomly generated identifier. | | The MIME type specified in the request is not supported. |
| 500 | `Request-ID`: Randomly generated identifier. | | An internal server error occurred. |
| 503 | `Request-ID`: Randomly generated identifier. | | The server is currently unable to process the request; try again later. |
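A minimal Python sketch of assembling the Register Platform request with only the standard library (the manifest bytes are a placeholder, not a real Platform Manifest):

```python
import urllib.request

def register_platform_request(manifest: bytes) -> urllib.request.Request:
    # The body is the raw binary Platform Manifest; this endpoint does not
    # require an authentication header.
    return urllib.request.Request(
        "https://api.trustedservices.intel.com/sgx/registration/v1/platform",
        data=manifest,
        headers={"Content-Type": "application/octet-stream"},
        method="POST",
    )
```

Sending the request (e.g., with `urllib.request.urlopen`) yields the hex-encoded PPID in the body on HTTP 201.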
### Add Package
This API adds new processor packages to an already registered platform instance. Upon successful execution, a Platform
Membership Certificate is generated for each processor package included in the Add Request. This endpoint requires a
registration subscription.
**POST** `https://api.trustedservices.intel.com/sgx/registration/v1/package`
**Request**
* **Headers**: Besides standard headers like `Content-Length`, the following are needed:
| Name | Required | Value | Description |
|:--------------------------|:---------|:-------------------------|:--------------------------------------------------------|
| Ocp-Apim-Subscription-Key | True | *Your Subscription Key* | Subscription key for API access, found in your profile. |
| Content-Type | True | application/octet-stream | MIME type of the request body. |
* **Body**: A binary representation of the Add Request structure, an opaque blob for adding new packages to an existing
platform.
* **Example Request**:
```bash
curl -v -X POST "Content-Type: application/octet-stream" --data-binary @add_package_request.bin "https://api.trustedservices.intel.com/sgx/registration/v1/package" -H "Ocp-Apim-Subscription-Key: {subscription_key}" [cite: 14]
```
**Response**
* **Model**: For a 200 HTTP status code, the response is a fixed-size array (8 elements) containing binary Platform
Membership Certificate structures appended together. Certificates fill the array sequentially, starting from index 0,
with the remaining elements zeroed out.
* **Example Response (hex-encoded)**:
```
E8BDBECFEF9040184488777267355084...00000000
```
* **Status Codes**:
| Code | Headers | Body | Description |
|:-----|:--------|:-----|:------------|
| 200 | `Content-Type`: application/octet-stream. <br> `Request-ID`: Randomly generated identifier. <br> `Certificate-Count`: Number of certificates returned. | Fixed-size array (8 elements) with binary Platform Membership Certificates. | Operation successful; packages added to the platform. |
| 400 | `Request-ID`: Randomly generated identifier. <br> `Error-Code` & `Error-Message`: Details on the error (e.g., `InvalidRequestSyntax`, `PlatformNotFound`, `InvalidOrRevokedPackage`, `PackageNotFound`, `InvalidAddRequest`). | | Invalid Add Request payload. Do not retry without modifications. |
| 401 | `Request-ID`: Randomly generated identifier. | | Failed to authenticate or authorize the request. |
| 415 | `Request-ID`: Randomly generated identifier. | | The MIME type specified is not supported. |
| 500 | `Request-ID`: Randomly generated identifier. | | Internal server error occurred. |
| 503 | `Request-ID`: Randomly generated identifier. | | Server is currently unable to process the request. |
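Since the 200 response is eight equally sized slots with certificates packed from index 0, the body can be split using the `Certificate-Count` response header. A hypothetical helper (the slot size is derived from the body length, an assumption on my part):

```python
# Hypothetical helper: slice the Add Package response body into its 8
# fixed-size slots and keep the first `certificate_count` entries; the
# remaining slots are zero padding.
def split_membership_certs(body: bytes, certificate_count: int) -> list[bytes]:
    if len(body) % 8 != 0:
        raise ValueError("body must contain 8 equally sized slots")
    slot = len(body) // 8
    return [body[i * slot:(i + 1) * slot] for i in range(certificate_count)]
```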
## Intel® SGX and Intel® TDX Provisioning Certification Service for ECDSA Attestation
This service provides PCK certificates. The Provisioning Certification Root CA Certificate (v4) can be downloaded in
both DER and PEM formats.
### Get/Post PCK Certificate V4
This API allows requesting a single PCK certificate. It offers two primary methods:
1. **Using PPID and SVNs**:
    * **Single-socket platforms**: No prerequisites.
    * **Multi-socket platforms**: Requires prior platform registration via the Register Platform API. This flow
      requires the platform root keys to be persistently stored in the backend, and the Keys Caching Policy must be
      `true`.
2. **Using Platform Manifest and SVNs**:
    * **Multi-socket platforms**: Does *not* require prior registration, and the platform root keys are *not* required
      to be persistently stored. The Keys Caching Policy determines whether keys are stored:
        * **Direct Registration (via Register Platform API)**: Keys are always stored; the `CachedKeys` flag in PCK
          certificates is `true`.
        * **Indirect Registration (via Get PCK Certificate(s) API)**: Keys are never stored; the `CachedKeys` flag is
          `false`. The Register Platform API cannot be used afterward.
**Note**: The PCS returns the PCK Certificate representing the highest TCB security level based on the CPUSVN and PCE
ISVSVN inputs.
**GET** `https://api.trustedservices.intel.com/sgx/certification/v4/pckcert`
* **Request**:
| Name | Type | Request Type | Required | Pattern | Description |
|:--------------------------|:-------|:-------------|:---------|:-------------------|:--------------------------------------------------------------|
| Ocp-Apim-Subscription-Key | String | Header | False | | Subscription key. |
| PPID-Encryption-Key | String | Header | False | | Key type for PPID encryption (default: `RSA-3072`). |
| encrypted_ppid | String | Query | True | `[0-9a-fA-F]{768}` | Base16-encoded encrypted PPID. |
| cpusvn | String | Query | True | `[0-9a-fA-F]{32}` | Base16-encoded CPUSVN. |
| pcesvn | String | Query | True | `[0-9a-fA-F]{4}` | Base16-encoded PCESVN (little endian). |
| pceid | String | Query | True | `[0-9a-fA-F]{4}` | Base16-encoded PCE-ID (little endian). |
* **Example Request**:
```bash
curl -v -X GET "https://api.trustedservices.intel.com/sgx/certification/v4/pckcert?encrypted_ppid={encrypted_ppid}&cpusvn={cpusvn}&pcesvn={pcesvn}&pceid={pceid}" -H "Ocp-Apim-Subscription-Key: {subscription_key}" [cite: 33]
```
**POST** `https://api.trustedservices.intel.com/sgx/certification/v4/pckcert`
* **Request**:
| Name | Type | Request Type | Required | Pattern | Description |
|:--------------------------|:-------|:-------------|:---------|:----------------------------|:-------------------------------------------------|
| Ocp-Apim-Subscription-Key | String | Header | False | | Subscription key. |
| Content-Type | String | Header | True | `application/json` | Content type. |
| platformManifest | String | Body Field | True | `[0-9a-fA-F]{16862,112884}` | Base16-encoded Platform Manifest. |
| cpusvn | String | Body Field | True | `[0-9a-fA-F]{32}` | Base16-encoded CPUSVN. |
| pcesvn | String | Body Field | True | `[0-9a-fA-F]{4}` | Base16-encoded PCESVN (little endian). |
| pceid | String | Body Field | True | `[0-9a-fA-F]{4}` | Base16-encoded PCE-ID (little endian). |
* **Body**:
```json
{
"platformManifest": "...", [cite: 36]
"cpusvn": "...", [cite: 36]
"pcesvn": "...", [cite: 36]
"pceid": "..." [cite: 36]
}
```
* **Example Request**:
```bash
curl -v -X POST --data '{"platformManifest":"...","cpusvn":"...","pcesvn":"...","pceid":"..."}' "https://api.trustedservices.intel.com/sgx/certification/v4/pckcert" -H "Ocp-Apim-Subscription-Key: {subscription_key}" -H "Content-Type: application/json"
```
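Client-side, the body fields can be checked against the patterns from the request table before sending. An illustrative Python sketch (the validation is my own addition, not part of the API):

```python
import json
import re

# Patterns copied from the POST pckcert request table above.
PATTERNS = {
    "platformManifest": r"[0-9a-fA-F]{16862,112884}",
    "cpusvn": r"[0-9a-fA-F]{32}",
    "pcesvn": r"[0-9a-fA-F]{4}",
    "pceid": r"[0-9a-fA-F]{4}",
}

def pckcert_body(**fields):
    # Validate every required field against its pattern, then serialize.
    for name, pattern in PATTERNS.items():
        value = fields.get(name, "")
        if not re.fullmatch(pattern, value):
            raise ValueError(f"{name} does not match {pattern}")
    return json.dumps({name: fields[name] for name in PATTERNS})
```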
**Response (Both GET & POST)**
* **Model**: `PckCert (X-PEM-FILE)` - PEM-encoded SGX PCK Certificate.
* **Example Response**:
```pem
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
```
* **Status Codes**:
| Code | Model | Headers | Description |
|:-----|:--------|:--------|:------------|
| 200 | PckCert | `Content-Type`: application/x-pem-file. <br> `Request-ID`: Identifier. <br> `SGX-PCK-Certificate-Issuer-Chain`: PEM-encoded Issuer Chain. <br> `SGX-TCBm`: Hex-encoded CPUSVN & PCESVN. <br> `SGX-FMSPC`: Hex-encoded FMSPC. <br> `SGX-PCK-Certificate-CA-Type`: "processor" or "platform". <br> `Warning` (Optional). | Operation successful. |
| 400 | | `Request-ID`: Identifier. <br> `Warning` (Optional). <br> `Error-Code` & `Error-Message` (e.g., `InvalidRequestSyntax`, `InvalidRegistrationServer`, `InvalidOrRevokedPackage`, `PackageNotFound`, `IncompatiblePackage`, `InvalidPlatformManifest`). | Invalid request parameters. |
| 401 | | `Request-ID`: Identifier. <br> `Warning` (Optional). | Failed to authenticate or authorize. |
| 404 | | `Request-ID`: Identifier. <br> `Warning` (Optional). | PCK Certificate not found (e.g., unsupported PPID/PCE-ID, TCB below minimum, Platform Manifest not registered/updated). |
| 429 | | `Retry-After`: Wait time in seconds. <br> `Warning` (Optional). | Too many requests. |
| 500 | | `Request-ID`: Identifier. <br> `Warning` (Optional). | Internal server error. |
| 503 | | `Request-ID`: Identifier. <br> `Warning` (Optional). | Server is currently unable to process the request. |
### Get PCK Certificates V4
This API retrieves PCK certificates for *all* configured TCB levels of a platform. The usage conditions (single-socket
vs. multi-socket, PPID vs. Platform Manifest, key caching) are the same as for the single PCK certificate API.
**GET** `https://api.trustedservices.intel.com/sgx/certification/v4/pckcerts` (Using PPID & PCE-ID)
* **Request**:
| Name | Type | Request Type | Required | Pattern | Description |
|:--------------------------|:-------|:-------------|:---------|:-------------------|:--------------------------------|
| Ocp-Apim-Subscription-Key | String | Header | False | | Subscription key. |
| PPID-Encryption-Key | String | Header | False | | Key type (default: "RSA-3072"). |
| encrypted_ppid | String | Query | True | `[0-9a-fA-F]{768}` | Encrypted PPID. |
| pceid | String | Query | True | `[0-9a-fA-F]{4}` | PCE-ID. |
* **Example Request**:
```bash
curl -v -X GET "https://api.trustedservices.intel.com/sgx/certification/v4/pckcerts?encrypted_ppid={...}&pceid={...}" -H "Ocp-Apim-Subscription-Key: {subscription_key}"
```
**GET** `https://api.trustedservices.intel.com/sgx/certification/v4/pckcerts/config` (Using PPID, PCE-ID & CPUSVN)
* **Request**:
| Name | Type | Request Type | Required | Pattern | Description |
|:--------------------------|:-------|:-------------|:---------|:-------------------|:--------------------------------|
| Ocp-Apim-Subscription-Key | String | Header | False | | Subscription key. |
| PPID-Encryption-Key | String | Header | False | | Key type (default: "RSA-3072"). |
| encrypted_ppid | String | Query | True | `[0-9a-fA-F]{768}` | Encrypted PPID. |
| pceid | String | Query | True | `[0-9a-fA-F]{4}` | PCE-ID. |
| cpusvn | String | Query | True | `[0-9a-fA-F]{32}` | CPUSVN. |
* **Example Request**:
```bash
curl -v -X GET "https://api.trustedservices.intel.com/sgx/certification/v4/pckcerts/config?encrypted_ppid={...}&pceid={...}&cpusvn={...}" -H "Ocp-Apim-Subscription-Key: {subscription_key}"
```
**POST** `https://api.trustedservices.intel.com/sgx/certification/v4/pckcerts` (Using Platform Manifest & PCE-ID)
* **Request**:
| Name | Type | Request Type | Required | Pattern | Description |
|:--------------------------|:-------|:-------------|:---------|:----------------------------|:-------------------|
| Ocp-Apim-Subscription-Key | String | Header | False | | Subscription key. |
| Content-Type | String | Header | True | `application/json` | Content type. |
| platformManifest | String | Body Field | True | `[0-9a-fA-F]{16862,112884}` | Platform Manifest. |
| pceid | String | Body Field | True | `[0-9a-fA-F]{4}` | PCE-ID. |
* **Body**:
```json
{
  "platformManifest": "...",
  "pceid": "..."
}
```
* **Example Request**:
```bash
curl -v -X POST --data '{"platformManifest":"...","pceid":"..."}' "https://api.trustedservices.intel.com/sgx/certification/v4/pckcerts" -H "Ocp-Apim-Subscription-Key: {subscription_key}" -H "Content-Type: application/json"
```
**POST** `https://api.trustedservices.intel.com/sgx/certification/v4/pckcerts/config` (Using Platform Manifest, PCE-ID &
CPUSVN)
* **Request**:
| Name | Type | Request Type | Required | Pattern | Description |
|:--------------------------|:-------|:-------------|:---------|:----------------------------|:-------------------|
| Ocp-Apim-Subscription-Key | String | Header | False | | Subscription key. |
| Content-Type | String | Header | True | `application/json` | Content type. |
| platformManifest | String | Body Field | True | `[0-9a-fA-F]{16862,112884}` | Platform Manifest. |
| cpusvn | String | Body Field | True | `[0-9a-fA-F]{32}` | CPUSVN. |
| pceid | String | Body Field | True | `[0-9a-fA-F]{4}` | PCE-ID. |
* **Body**:
```json
{
  "platformManifest": "...",
  "cpusvn": "...",
  "pceid": "..."
}
```
* **Example Request**:
```bash
curl -v -X POST --data '{"platformManifest":"...","cpusvn":"...","pceid":"..."}' "https://api.trustedservices.intel.com/sgx/certification/v4/pckcerts/config" -H "Ocp-Apim-Subscription-Key: {subscription_key}" -H "Content-Type: application/json"
```
**Response (All GET & POST for multiple certs)**
* **Model**: `PckCerts` (JSON array of objects, each containing `tcb`, `tcbm`, and `cert`).
* `tcb`: Object with 16 `sgxtcbcompXXsvn` fields (integer 0-255) and `pcesvn` (integer 0-65535).
* `tcbm`: Hex-encoded string of CPUSVN (16 bytes) and PCESVN (2 bytes).
* `cert`: URL-encoded PEM PCK Certificate, or the string "Not available".
* **Example Response**:
```json
[
  {
    "tcb": {
      "sgxtcbcomp01svn": 3,
      "sgxtcbcomp02svn": 1,
      ...
      "pcesvn": 11
    },
    "tcbm": "...",
    "cert": "-----BEGIN%20CERTIFICATE-----%0A...%0A-----END%20CERTIFICATE-----"
  },
  ...
]
```
* **Status Codes**:
| Code | Model | Headers | Description |
|:-----|:---------|:--------|:------------|
| 200 | PckCerts | `Content-Type`: application/json. `Request-ID`: Identifier. `SGX-PCK-Certificate-Issuer-Chain`: Issuer Chain. `SGX-FMSPC`: FMSPC. `SGX-PCK-Certificate-CA-Type`: "processor" or "platform". `Warning` (Optional). | Operation successful. |
| 400 | | `Request-ID`: Identifier. `Warning` (Optional). `Error-Code` & `Error-Message` (e.g., `InvalidRequestSyntax`, `InvalidRegistrationServer`, `InvalidOrRevokedPackage`, `PackageNotFound`, `IncompatiblePackage`, `InvalidPlatformManifest`). | Invalid request parameters. |
| 401 | | `Request-ID`: Identifier. `Warning` (Optional). | Failed to authenticate or authorize. |
| 404 | | `Request-ID`: Identifier. `Warning` (Optional). | PCK Certificate not found (e.g., unsupported PPID/PCE-ID, Platform Manifest not registered). |
| 429 | | `Retry-After`: Wait time. `Warning` (Optional). | Too many requests. |
| 500 | | `Request-ID`: Identifier. `Warning` (Optional). | Internal server error. |
| 503 | | `Request-ID`: Identifier. `Warning` (Optional). | Server is currently unable to process. |
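Each entry's `tcbm` and `cert` fields can be unpacked with the standard library alone. The sketch below splits `tcbm` into its CPUSVN and PCESVN halves and percent-decodes the PEM certificate; the function names are illustrative, not part of any Intel SDK, and the input is assumed to be ASCII as the API returns it.

```rust
/// Split the 36-hex-char `tcbm` field into CPUSVN (16 bytes = 32 hex chars)
/// and PCESVN (2 bytes = 4 hex chars). Illustrative helper, not an Intel API.
fn split_tcbm(tcbm: &str) -> Option<(&str, &str)> {
    (tcbm.len() == 36).then(|| tcbm.split_at(32))
}

/// Minimal percent-decoder for the URL-encoded `cert` field (ASCII input assumed).
fn percent_decode(s: &str) -> String {
    let bytes = s.as_bytes();
    let mut out = Vec::with_capacity(bytes.len());
    let mut i = 0;
    while i < bytes.len() {
        if bytes[i] == b'%' && i + 2 < bytes.len() {
            if let Ok(b) = u8::from_str_radix(&s[i + 1..i + 3], 16) {
                out.push(b);
                i += 3;
                continue;
            }
        }
        out.push(bytes[i]);
        i += 1;
    }
    String::from_utf8_lossy(&out).into_owned()
}

fn main() {
    let (cpusvn, pcesvn) = split_tcbm("0F0F0303FFFF000000000000000000000B00").unwrap();
    assert_eq!(cpusvn, "0F0F0303FFFF00000000000000000000");
    assert_eq!(pcesvn, "0B00"); // little-endian PCESVN = 11
    assert_eq!(
        percent_decode("-----BEGIN%20CERTIFICATE-----%0A"),
        "-----BEGIN CERTIFICATE-----\n"
    );
    println!("ok");
}
```

In production code a dedicated crate such as `percent-encoding` is the more robust choice; the hand-rolled decoder here only illustrates the encoding used by the `cert` field.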
### Get Revocation List V4
This API retrieves the X.509 Certificate Revocation List (CRL) for revoked SGX PCK Certificates, issued by either the
Intel SGX Processor CA or Platform CA.
**GET** `https://api.trustedservices.intel.com/sgx/certification/v4/pckcrl`
* **Request**:
| Name | Type | Request Type | Required | Pattern | Description |
|:---------|:-------|:-------------|:---------|:------------------------|:-------------------------------------------|
| ca | String | Query | True | `(processor\|platform)` | CA identifier ("processor" or "platform"). |
| encoding | String | Query | False | `(pem\|der)` | CRL encoding (default: PEM). |
* **Example Request**:
```bash
curl -v -X GET "https://api.trustedservices.intel.com/sgx/certification/v4/pckcrl?ca=platform&encoding=pem"
```
**Response**
* **Model**: `PckCrl` (X-PEM-FILE or PKIX-CRL) - PEM or DER encoded CRL.
* **Example Response**:
```pem
-----BEGIN X509 CRL-----
...
-----END X509 CRL-----
```
* **Status Codes**:
| Code | Model | Headers | Description |
|:-----|:-------|:--------|:------------|
| 200 | PckCrl | `Content-Type`: "application/x-pem-file" or "application/pkix-crl". `Request-ID`: Identifier. `SGX-PCK-CRL-Issuer-Chain`: Issuer Chain. `Warning` (Optional). | Operation successful. |
| 400 | | `Request-ID`: Identifier. `Warning` (Optional). | Invalid request parameters. |
| 401 | | `Request-ID`: Identifier. `Warning` (Optional). | Failed to authenticate or authorize. |
| 500 | | `Request-ID`: Identifier. `Warning` (Optional). | Internal server error. |
| 503 | | `Request-ID`: Identifier. `Warning` (Optional). | Server is currently unable to process. |
### Get SGX TCB Info V4
This API retrieves SGX TCB information for a specific FMSPC, which is used to determine the TCB status of a
platform. The process involves:
1. Retrieving the FMSPC from the SGX PCK Certificate.
2. Fetching the corresponding SGX TCB info.
3. Iterating through the TCB Levels:
   * Comparing all 16 SGX TCB Comp SVNs from the certificate against the TCB Level; each must be >=.
   * Comparing the PCESVN from the certificate against the TCB Level; it must be >=. If both checks pass, the
     TCB level's status is found.
4. If no match is found, the TCB level is unsupported.
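The matching loop in steps 3-4 can be sketched as follows. This is a minimal sketch: `TcbLevel` is an illustrative stand-in for the entries of the `tcbLevels` array defined in Appendix A, reduced to the fields the comparison needs.

```rust
/// Illustrative stand-in for one `tcbLevels` entry (see Appendix A).
struct TcbLevel {
    sgxtcbcomponents: [u8; 16], // the 16 sgxtcbcompXXsvn values
    pcesvn: u16,
    tcb_status: &'static str,
}

/// Walk the TCB levels (sorted highest first, as returned by the API) and
/// return the status of the first level dominated by the platform's SVNs.
/// `None` means no level matched, i.e. the TCB level is unsupported.
fn tcb_status(
    cert_comps: &[u8; 16],
    cert_pcesvn: u16,
    levels: &[TcbLevel],
) -> Option<&'static str> {
    levels.iter().find_map(|lvl| {
        let comps_ok = cert_comps
            .iter()
            .zip(lvl.sgxtcbcomponents.iter())
            .all(|(cert, level)| cert >= level);
        (comps_ok && cert_pcesvn >= lvl.pcesvn).then_some(lvl.tcb_status)
    })
}

fn main() {
    let levels = [
        TcbLevel { sgxtcbcomponents: [2; 16], pcesvn: 11, tcb_status: "UpToDate" },
        TcbLevel { sgxtcbcomponents: [1; 16], pcesvn: 9, tcb_status: "OutOfDate" },
    ];
    // Platform dominates only the second (older) level.
    assert_eq!(tcb_status(&[1; 16], 10, &levels), Some("OutOfDate"));
    // Platform below every level: unsupported.
    assert_eq!(tcb_status(&[0; 16], 10, &levels), None);
    println!("ok");
}
```

Because the levels are sorted from newest to oldest, the first match is automatically the best (most up-to-date) status the platform qualifies for.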
**GET** `https://api.trustedservices.intel.com/sgx/certification/v4/tcb`
* **Request**:
| Name | Type | Request Type | Required | Pattern | Description |
|:------------------------|:-------|:-------------|:---------|:--------------------|:------------|
| fmspc | String | Query | True | `[0-9a-fA-F]{12}` | Base16-encoded FMSPC. |
| update | String | Query | False | `(early\|standard)` | TCB Info update type (default: standard). `early` provides access sooner than `standard`. Cannot be used with `tcbEvaluationDataNumber`. |
| tcbEvaluationDataNumber | Number | Query | False | `\d+` | Retrieves TCB info for a specific evaluation number. Cannot be used with `update`. |
* **Example Requests**:
```bash
curl -v -X GET "https://api.trustedservices.intel.com/sgx/certification/v4/tcb?fmspc={fmspc_value}&update=early"
curl -v -X GET "https://api.trustedservices.intel.com/sgx/certification/v4/tcb?fmspc={fmspc_value}&tcbEvaluationDataNumber={number}"
```
**Response**
* **Model**: `Appendix A: TCB Info V3`. (See Appendix A below.)
* **Example Response**: (JSON structure as shown in the document.)
* **Status Codes**:
| Code | Model | Headers | Description |
|:-----|:----------|:--------|:------------|
| 200 | TcbInfoV3 | `Content-Type`: application/json. `Request-ID`: Identifier. `TCB-Info-Issuer-Chain`: Issuer Chain. `Warning` (Optional). | Operation successful. |
| 400 | | `Request-ID`: Identifier. `Warning` (Optional). | Invalid request (bad `fmspc`, invalid params, or `update` & `tcbEvaluationDataNumber` used together). |
| 401 | | `Request-ID`: Identifier. `Warning` (Optional). | Failed to authenticate or authorize. |
| 404 | | `Request-ID`: Identifier. `Warning` (Optional). | TCB info not found for the given `fmspc` or `tcbEvaluationDataNumber`. |
| 410 | | `Request-ID`: Identifier. `Warning` (Optional). | TCB info for the provided `tcbEvaluationDataNumber` is no longer available. |
| 500 | | `Request-ID`: Identifier. `Warning` (Optional). | Internal server error. |
| 503 | | `Request-ID`: Identifier. `Warning` (Optional). | Server currently unable to process. |
### Get TDX TCB Info V4
This API retrieves TDX TCB information. The TCB status determination follows a similar process to SGX but
includes additional steps for the TDX TEE TCB SVNs and the TDX Module Identity.
**GET** `https://api.trustedservices.intel.com/tdx/certification/v4/tcb`
* **Request**:
| Name | Type | Request Type | Required | Pattern | Description |
|:------------------------|:-------|:-------------|:---------|:--------------------|:------------|
| fmspc | String | Query | True | `[0-9a-fA-F]{12}` | Base16-encoded FMSPC. |
| update | String | Query | False | `(early\|standard)` | TCB Info update type (default: standard). Cannot be used with `tcbEvaluationDataNumber`. |
| tcbEvaluationDataNumber | Number | Query | False | `\d+` | Retrieves TCB info for a specific evaluation number. Cannot be used with `update`. |
* **Example Requests**:
```bash
curl -v -X GET "https://api.trustedservices.intel.com/tdx/certification/v4/tcb?fmspc={fmspc_value}&update=early"
curl -v -X GET "https://api.trustedservices.intel.com/tdx/certification/v4/tcb?fmspc={fmspc_value}&tcbEvaluationDataNumber={number}"
```
**Response**
* **Model**: `Appendix A: TCB Info V3`. (See Appendix A below.)
* **Example Response**: (JSON structure including `tdxModule` and `tdxtcbcomponents` as shown in the document.)
* **Status Codes**: Similar to Get SGX TCB Info V4.
### Enclave Identity V4
This set of APIs determines whether an SGX Enclave's identity matches Intel's published identity. The process
involves:
1. Retrieving the Enclave Identity (SGX QE, TDX QE, QVE, or QAE).
2. Comparing the `MRSIGNER` and `ISVPRODID` fields.
3. Applying `miscselectMask` and `attributesMask` to the report's values and comparing the results against the
   published `miscselect` and `attributes`.
4. If the checks pass, determining the TCB status by finding the highest TCB Level (sorted by ISVSVN) whose ISVSVN is <= the
   Enclave Report's ISVSVN.
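Steps 3-4 can be sketched as below. This is a minimal sketch: the struct and field names are illustrative stand-ins for the Appendix B structure, and the masks are shown as integers rather than the Base16 strings the API returns.

```rust
/// Illustrative stand-in for one entry of `tcbLevels` in Appendix B.
struct EnclaveTcbLevel {
    isvsvn: u16,
    tcb_status: &'static str,
}

/// Step 3: mask the report's value and compare it against the "golden" value.
/// The same check applies to both MISCSELECT and ATTRIBUTES.
fn identity_matches(report_value: u32, golden_value: u32, mask: u32) -> bool {
    report_value & mask == golden_value
}

/// Step 4: `levels` must be sorted by `isvsvn`, highest first, as returned by
/// the API; the first level at or below the report's ISVSVN wins.
fn enclave_tcb_status(report_isvsvn: u16, levels: &[EnclaveTcbLevel]) -> Option<&'static str> {
    levels
        .iter()
        .find(|lvl| lvl.isvsvn <= report_isvsvn)
        .map(|lvl| lvl.tcb_status)
}

fn main() {
    // Only the masked bits are compared; the report may set extra bits.
    assert!(identity_matches(0b1011, 0b0001, 0b0001));
    let levels = [
        EnclaveTcbLevel { isvsvn: 8, tcb_status: "UpToDate" },
        EnclaveTcbLevel { isvsvn: 2, tcb_status: "OutOfDate" },
    ];
    assert_eq!(enclave_tcb_status(8, &levels), Some("UpToDate"));
    assert_eq!(enclave_tcb_status(5, &levels), Some("OutOfDate"));
    assert_eq!(enclave_tcb_status(1, &levels), None);
    println!("ok");
}
```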
**GET** `https://api.trustedservices.intel.com/sgx/certification/v4/qe/identity`
**GET** `https://api.trustedservices.intel.com/tdx/certification/v4/qe/identity`
**GET** `https://api.trustedservices.intel.com/sgx/certification/v4/qve/identity`
**GET** `https://api.trustedservices.intel.com/sgx/certification/v4/qae/identity`
* **Request**:
| Name | Type | Request Type | Required | Pattern | Description |
|:------------------------|:-------|:-------------|:---------|:--------------------|:------------|
| update | String | Query | False | `(early\|standard)` | Identity update type (default: standard). Cannot be used with `tcbEvaluationDataNumber`. |
| tcbEvaluationDataNumber | Number | Query | False | `\d+` | Retrieves Identity for a specific evaluation number. Cannot be used with `update`. |
* **Example Requests** (SGX QE shown):
```bash
curl -v -X GET "https://api.trustedservices.intel.com/sgx/certification/v4/qe/identity?update=early"
curl -v -X GET "https://api.trustedservices.intel.com/sgx/certification/v4/qe/identity?tcbEvaluationDataNumber={number}"
```
**Response**
* **Model**: `Appendix B: Enclave Identity V2`. (See Appendix B below.)
* **Example Response**: (JSON structure as shown in the document for QE, TDX QE, QVE, and QAE.)
* **Status Codes** (SGX QE shown, others are similar):
| Code | Model | Headers | Description |
|:-----|:------------|:--------|:------------|
| 200 | EIdentityV2 | `Content-Type`: application/json. `Request-ID`: Identifier. `SGX-Enclave-Identity-Issuer-Chain`: Issuer Chain. `Warning` (Optional). | Operation successful. |
| 400 | | `Request-ID`: Identifier. `Warning` (Optional). | Invalid request (params or `update` & `tcbEvaluationDataNumber` conflict). |
| 401 | | `Request-ID`: Identifier. `Warning` (Optional). | Failed to authenticate or authorize. |
| 404 | | `Request-ID`: Identifier. `Warning` (Optional). | Identity info not found. |
| 410 | | `Request-ID`: Identifier. `Warning` (Optional). | Identity info no longer available. |
| 500 | | `Request-ID`: Identifier. `Warning` (Optional). | Internal server error. |
| 503 | | `Request-ID`: Identifier. `Warning` (Optional). | Server currently unable to process. |
### Retrieve FMSPCs V4
Retrieves a list of FMSPC values for SGX and TDX platforms that support DCAP attestation.
**GET** `https://api.trustedservices.intel.com/sgx/certification/v4/fmspcs`
* **Request**:
| Name | Type | Request Type | Required | Description |
|:---------|:-------|:-------------|:---------|:-----------------------------------------------------------------|
| platform | String | Query | False | Optional platform filter: `all` (default), `client`, `E3`, `E5`. |
* **Example Request**:
```bash
curl -v -X GET "https://api.trustedservices.intel.com/sgx/certification/v4/fmspcs?platform=E5"
```
**Response**
* **Example Response**:
```json
[
  {"platform": "E3", "fmspc": "123456789000"},
  {"platform": "E5", "fmspc": "987654321000"},
  {"platform": "client", "fmspc": "ABCDEF123456"}
]
```
* **Status Codes**:
| Code | Headers | Description |
|:-----|:--------|:------------|
| 200 | `Content-Type`: application/json. `Request-ID`: Identifier. `Warning` (Optional). | Operation successful. |
| 400 | `Request-ID`: Identifier. `Warning` (Optional). | Invalid request parameters. |
| 500 | `Request-ID`: Identifier. `Warning` (Optional). | Internal server error. |
| 503 | `Request-ID`: Identifier. `Warning` (Optional). | Server currently unable to process. |
### Retrieve TCB Evaluation Data Numbers V4
Retrieves the list of currently supported TCB Evaluation Data Numbers and their associated TCB-R event states.
**GET** `https://api.trustedservices.intel.com/{sgx|tdx}/certification/v4/tcbevaluationdatanumbers`
* **Example Requests**:
```bash
curl -v -X GET "https://api.trustedservices.intel.com/sgx/certification/v4/tcbevaluationdatanumbers"
curl -v -X GET "https://api.trustedservices.intel.com/tdx/certification/v4/tcbevaluationdatanumbers"
```
**Response**
* **Model**: `Appendix C: TCB Evaluation Data Numbers V1`. (See Appendix C below.)
* **Example Response**:
```json
{
  "tcbEvaluationDataNumbers": {
    "version": 1,
    "issueDate": "2023-04-13T09:38:17Z",
    "nextUpdate": "2023-05-13T09:38:17Z",
    "tcbNumbers": [
      {"tcbEvaluationDataNumber": 12, "tcbRecoveryEventDate": "2023-04-13T00:00:00Z", "tcbDate": "2023-04-13T00:00:00Z"},
      {"tcbEvaluationDataNumber": 11, "tcbRecoveryEventDate": "2023-01-14T00:00:00Z", "tcbDate": "2023-01-14T00:00:00Z"}
    ],
    "signature": "..."
  }
}
```
* **Status Codes**:
| Code | Headers | Description |
|:-----|:--------|:------------|
| 200 | `Content-Type`: application/json. `Request-ID`: Identifier. `TCB-Evaluation-Data-Numbers-Issuer-Chain`: Issuer Chain. `Warning` (Optional). | Operation successful. |
| 500 | `Request-ID`: Identifier. `Warning` (Optional). | Internal server error. |
| 503 | `Request-ID`: Identifier. `Warning` (Optional). | Server currently unable to process. |
---
## Appendix A: TCB Info V3
This defines the structure of the TCB Info V3 JSON response.
* `tcbInfo`: (Object)
  * `id`: (String) Identifier (e.g., "SGX", "TDX").
  * `version`: (Integer) Structure version.
  * `issueDate`: (String - datetime) Creation timestamp (ISO 8601 UTC).
  * `nextUpdate`: (String - datetime) Next update timestamp (ISO 8601 UTC).
  * `fmspc`: (String) Base16-encoded FMSPC.
  * `pceId`: (String) Base16-encoded PCE ID.
  * `tcbType`: (Integer) TCB level composition type.
  * `tcbEvaluationDataNumber`: (Integer) Monotonically increasing sequence number, synchronized across TCB Info and
    Enclave Identities, indicating updates.
  * `tdxModule`: (Object - Optional, only for TDX TCB Info)
    * `mrsigner`: (String) Base16-encoded TDX SEAM module's signer measurement.
    * `attributes`: (String) Base16-encoded "golden" attributes.
    * `attributesMask`: (String) Base16-encoded attributes mask.
  * `tdxModuleIdentities`: (Array - Optional, for multiple TDX SEAM Modules)
    * `id`: (String) Module identifier.
    * `mrsigner`: (String) Base16-encoded signer measurement.
    * `attributes`: (String) Base16-encoded "golden" attributes.
    * `attributesMask`: (String) Base16-encoded attributes mask.
    * `tcbLevels`: (Array) Sorted list of TCB levels for this module.
      * `tcb`: (Object)
        * `isvsvn`: (Integer) ISV SVN.
      * `tcbDate`: (String - datetime) TCB date (ISO 8601 UTC).
      * `tcbStatus`: (String) "UpToDate", "OutOfDate", or "Revoked".
      * `advisoryIDs`: (Array - Optional) List of relevant `INTEL-SA-XXXXX` or `INTEL-DOC-XXXXX` identifiers.
  * `tcbLevels`: (Array) Sorted list of TCB levels for the FMSPC.
    * `tcb`: (Object)
      * `sgxtcbcomponents`: (Array - Optional) 16 SGX TCB Components (SVN, Category, Type).
      * `tdxtcbcomponents`: (Array - Optional, only for TDX TCB Info) 16 TDX TCB Components (SVN, Category, Type).
      * `pcesvn`: (Integer) PCE SVN.
    * `tcbDate`: (String - datetime) TCB date (ISO 8601 UTC).
    * `tcbStatus`: (String) "UpToDate", "HardeningNeeded", "ConfigurationNeeded", "ConfigurationAndHardeningNeeded",
      "OutOfDate", "OutOfDateConfigurationNeeded", or "Revoked".
    * `advisoryIDs`: (Array - Optional) List of relevant `INTEL-SA-XXXXX` or `INTEL-DOC-XXXXX` identifiers.
* `signature`: (String) Base16-encoded signature over the `tcbInfo` body.
---
## Appendix B: Enclave Identity V2
This defines the structure of the Enclave Identity V2 JSON response.
* `enclaveIdentity`: (Object)
  * `id`: (String) Identifier ("QE", "QVE", "QAE", "TD_QE").
  * `version`: (Integer) Structure version.
  * `issueDate`: (String - datetime) Creation timestamp (ISO 8601 UTC).
  * `nextUpdate`: (String - datetime) Next update timestamp (ISO 8601 UTC).
  * `tcbEvaluationDataNumber`: (Integer) Monotonically increasing sequence number, synchronized across TCB Info and
    Enclave Identities.
  * `miscselect`: (String) Base16-encoded "golden" miscselect value.
  * `miscselectMask`: (String) Base16-encoded miscselect mask.
  * `attributes`: (String) Base16-encoded "golden" attributes value.
  * `attributesMask`: (String) Base16-encoded attributes mask.
  * `mrsigner`: (String) Base16-encoded mrsigner hash.
  * `isvprodid`: (Integer) Enclave Product ID.
  * `tcbLevels`: (Array) Sorted list of Enclave TCB levels.
    * `tcb`: (Object)
      * `isvsvn`: (Integer) Enclave's ISV SVN.
    * `tcbDate`: (String - datetime) TCB date (ISO 8601 UTC).
    * `tcbStatus`: (String) "UpToDate", "OutOfDate", or "Revoked".
    * `advisoryIDs`: (Array - Optional) List of relevant `INTEL-SA-XXXXX` or `INTEL-DOC-XXXXX` identifiers.
* `signature`: (String) Base16-encoded signature over the `enclaveIdentity` body.
---
## Appendix C: TCB Evaluation Data Numbers V1
This defines the structure of the TCB Evaluation Data Numbers V1 JSON response.
* `tcbEvaluationDataNumbers`: (Object)
  * `id`: (String) Identifier ("SGX" or "TDX").
  * `version`: (Integer) Structure version.
  * `issueDate`: (String - datetime) Creation timestamp (ISO 8601 UTC).
  * `nextUpdate`: (String - datetime) Suggested next call timestamp (ISO 8601 UTC).
  * `tcbNumbers`: (Array) List of TCB Evaluation Data Number objects.
    * `tcbEvaluationDataNumber`: (Integer) The number itself.
    * `tcbRecoveryEventDate`: (String - datetime) The date Intel first publishes related collateral (ISO 8601 UTC).
    * `tcbDate`: (String - datetime) TCB date (ISO 8601 UTC).
* `signature`: (String) Base16-encoded signature over the structure's body.
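A relying party typically pins the newest supported number from `tcbNumbers`. A minimal sketch (the struct mirrors only the fields used here; names are illustrative):

```rust
/// Illustrative stand-in for one entry of `tcbNumbers`.
struct TcbNumber {
    tcb_evaluation_data_number: u32,
    tcb_recovery_event_date: &'static str,
}

/// Pick the most recent TCB Evaluation Data Number from the list.
fn latest(numbers: &[TcbNumber]) -> Option<&TcbNumber> {
    numbers.iter().max_by_key(|n| n.tcb_evaluation_data_number)
}

fn main() {
    let numbers = [
        TcbNumber { tcb_evaluation_data_number: 12, tcb_recovery_event_date: "2023-04-13T00:00:00Z" },
        TcbNumber { tcb_evaluation_data_number: 11, tcb_recovery_event_date: "2023-01-14T00:00:00Z" },
    ];
    let newest = latest(&numbers).unwrap();
    assert_eq!(newest.tcb_evaluation_data_number, 12);
    assert_eq!(newest.tcb_recovery_event_date, "2023-04-13T00:00:00Z");
    println!("ok");
}
```

The chosen number can then be passed as the `tcbEvaluationDataNumber` query parameter of the TCB Info and Enclave Identity endpoints to fetch a mutually consistent collateral set.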
---
## Appendix D: PCK Certificate and CRL Specification
This section refers to an external document that specifies the hierarchy and format of X.509 v3 certificates and X.509
v2 CRLs issued by Intel for Provisioning Certification Keys.
---
**Notes on TCB Status and Enforcement:**
* **Enforcement Grace Periods**: Intel provides "early" and "standard" update parameters, offering different enforcement
  grace periods. The attestation result depends on which parameter is used.
* **Relying Party Trust Decisions**: Relying parties can use additional factors beyond the attestation result to make
  trust decisions. They might accept risks even if a platform is technically "OutOfDate" due to low-severity issues.
* **Communication**: Intel aims to communicate planned deviations via email to registered API subscribers.

// SPDX-License-Identifier: Apache-2.0
// Copyright (c) 2025 Matter Labs

//! Enclave Identity

use super::ApiClient; // Import from parent module
use crate::{
    error::IntelApiError,
    responses::EnclaveIdentityResponse,
    types::{ApiVersion, UpdateType},
};

impl ApiClient {
    /// Retrieves the SGX QE Identity from the Intel API.
    ///
    /// Returns the Enclave Identity JSON (Appendix B) and the issuer chain header.
    /// Supports both API v3 and v4; the `update` and `tcb_evaluation_data_number`
    /// parameters are only valid in API v4.
    ///
    /// # Arguments
    ///
    /// * `update` - Optional [`UpdateType`] (v4 only).
    /// * `tcb_evaluation_data_number` - Optional TCB Evaluation Data Number (v4 only).
    ///
    /// # Returns
    ///
    /// An [`EnclaveIdentityResponse`] containing the JSON identity and issuer chain.
    ///
    /// # Errors
    ///
    /// Returns an `IntelApiError` if the request fails, if conflicting v4 parameters are used,
    /// or if the desired identity resource is not found.
    pub async fn get_sgx_qe_identity(
        &self,
        update: Option<UpdateType>,
        tcb_evaluation_data_number: Option<u64>,
    ) -> Result<EnclaveIdentityResponse, IntelApiError> {
        self.get_sgx_enclave_identity("qe", update, tcb_evaluation_data_number)
            .await
    }
    /// Retrieves the TDX QE Identity from the Intel API (API v4 only).
    ///
    /// `GET /tdx/certification/v4/qe/identity`
    ///
    /// # Arguments
    ///
    /// * `update` - Optional [`UpdateType`] (v4 only).
    /// * `tcb_evaluation_data_number` - Optional TCB Evaluation Data Number (v4 only).
    ///
    /// # Returns
    ///
    /// An [`EnclaveIdentityResponse`] containing the JSON identity and issuer chain.
    ///
    /// # Errors
    ///
    /// Returns an `IntelApiError` if an unsupported API version is used,
    /// if conflicting parameters are provided, or if the identity resource is not found.
    pub async fn get_tdx_qe_identity(
        &self,
        update: Option<UpdateType>,
        tcb_evaluation_data_number: Option<u64>,
    ) -> Result<EnclaveIdentityResponse, IntelApiError> {
        // Ensure V4 API
        self.ensure_v4_api("get_tdx_qe_identity")?;
        // Check conflicting parameters (only relevant for V4, checked inside helper)
        self.check_conflicting_update_params(update, tcb_evaluation_data_number)?;

        let path = self.build_api_path("tdx", "qe", "identity")?;
        let mut url = self.base_url.join(&path)?;
        if let Some(upd) = update {
            url.query_pairs_mut().append_pair("update", &upd.to_string());
        }
        if let Some(tedn) = tcb_evaluation_data_number {
            url.query_pairs_mut()
                .append_pair("tcbEvaluationDataNumber", &tedn.to_string());
        }

        let request_builder = self.client.get(url);

        // Special handling for 404/410 when tcbEvaluationDataNumber is specified
        if let Some(tedn_val) = tcb_evaluation_data_number {
            // Use the helper function to check the status before proceeding; if it
            // does not return Err, fall through to fetch_json_with_issuer_chain.
            self.check_tcb_evaluation_status(&request_builder, tedn_val, "TDX QE Identity")
                .await?;
        }

        // Fetch JSON and the issuer chain header (the TDX endpoint only exists in V4)
        let (enclave_identity_json, issuer_chain) = self
            .fetch_json_with_issuer_chain(
                request_builder,
                "SGX-Enclave-Identity-Issuer-Chain",
                None,
            )
            .await?;

        Ok(EnclaveIdentityResponse {
            enclave_identity_json,
            issuer_chain,
        })
    }
    /// Retrieves the SGX QVE Identity from the Intel API.
    ///
    /// `GET /sgx/certification/{v3,v4}/qve/identity`
    ///
    /// Supports API v3 and v4. The `update` and `tcb_evaluation_data_number` parameters
    /// are v4 only. Returns the QVE identity JSON and issuer chain.
    ///
    /// # Arguments
    ///
    /// * `update` - Optional [`UpdateType`] (v4 only).
    /// * `tcb_evaluation_data_number` - Optional TCB Evaluation Data Number (v4 only).
    ///
    /// # Returns
    ///
    /// An [`EnclaveIdentityResponse`] containing the QVE identity JSON and issuer chain.
    ///
    /// # Errors
    ///
    /// Returns an `IntelApiError` if the request fails, if conflicting parameters are used,
    /// or if the identity resource is not found.
    pub async fn get_sgx_qve_identity(
        &self,
        update: Option<UpdateType>,
        tcb_evaluation_data_number: Option<u64>,
    ) -> Result<EnclaveIdentityResponse, IntelApiError> {
        self.get_sgx_enclave_identity("qve", update, tcb_evaluation_data_number)
            .await
    }
/// GET /sgx/certification/v4/qae/identity - V4 ONLY
/// Retrieves the SGX QAE Identity from the Intel API (API v4 only).
///
/// # Arguments
///
/// * `update` - Optional [`UpdateType`] (v4 only).
/// * `tcb_evaluation_data_number` - Optional TCB Evaluation Data Number (v4 only).
///
/// # Returns
///
/// An [`EnclaveIdentityResponse`] containing the QAE identity JSON and issuer chain.
///
/// # Errors
///
/// Returns an `IntelApiError` if an unsupported API version is used,
/// if conflicting parameters are provided, or if the QAE identity is not found.
pub async fn get_sgx_qae_identity(
&self,
update: Option<UpdateType>,
tcb_evaluation_data_number: Option<u64>,
) -> Result<EnclaveIdentityResponse, IntelApiError> {
// QAE endpoint requires V4
if self.api_version != ApiVersion::V4 {
return Err(IntelApiError::UnsupportedApiVersion(
"QAE Identity endpoint requires API v4".to_string(),
));
}
// Call the generic helper, it will handle V4 params and 404/410 checks
self.get_sgx_enclave_identity("qae", update, tcb_evaluation_data_number)
.await
}
/// Retrieves generic SGX enclave identity (QE, QVE, QAE) data.
///
/// # Arguments
///
/// * `identity_path_segment` - String slice representing the identity path segment (e.g., "qe", "qve", "qae").
/// * `update` - Optional [`UpdateType`] for API v4.
/// * `tcb_evaluation_data_number` - Optional TCB Evaluation Data Number for API v4.
///
/// # Returns
///
/// An [`EnclaveIdentityResponse`] containing the JSON identity data and issuer chain.
///
/// # Errors
///
/// Returns an `IntelApiError` if the request fails or the specified resource
/// is unavailable.
async fn get_sgx_enclave_identity(
&self,
identity_path_segment: &str,
update: Option<UpdateType>,
tcb_evaluation_data_number: Option<u64>,
) -> Result<EnclaveIdentityResponse, IntelApiError> {
self.check_v4_only_param(update, "update")?;
self.check_v4_only_param(tcb_evaluation_data_number, "tcbEvaluationDataNumber")?;
self.check_conflicting_update_params(update, tcb_evaluation_data_number)?;
let path = self.build_api_path("sgx", identity_path_segment, "identity")?;
let mut url = self.base_url.join(&path)?;
if self.api_version == ApiVersion::V4 {
if let Some(upd) = update {
url.query_pairs_mut()
.append_pair("update", &upd.to_string());
}
if let Some(tedn) = tcb_evaluation_data_number {
url.query_pairs_mut()
.append_pair("tcbEvaluationDataNumber", &tedn.to_string());
}
}
let request_builder = self.client.get(url);
if self.api_version == ApiVersion::V4 {
if let Some(tedn_val) = tcb_evaluation_data_number {
let description = format!("SGX {} Identity", identity_path_segment.to_uppercase());
self.check_tcb_evaluation_status(&request_builder, tedn_val, &description)
.await?;
}
}
let (enclave_identity_json, issuer_chain) = self
.fetch_json_with_issuer_chain(
request_builder,
"SGX-Enclave-Identity-Issuer-Chain",
Some("SGX-Enclave-Identity-Issuer-Chain"),
)
.await?;
Ok(EnclaveIdentityResponse {
enclave_identity_json,
issuer_chain,
})
}
}
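The parameter guards used by these identity endpoints (reject v4-only parameters on a v3 client, and reject `update` together with `tcbEvaluationDataNumber`) can be sketched as a standalone function. This is an illustrative simplification: `ApiVersion` and `check_params` here are stand-ins, not the crate's `IntelApiError`/`UpdateType` types.

```rust
// Hypothetical, simplified mirror of the client's parameter guards.
#[derive(PartialEq, Clone, Copy)]
enum ApiVersion {
    V3,
    V4,
}

fn check_params(
    version: ApiVersion,
    update: Option<&str>,
    tcb_evaluation_data_number: Option<u64>,
) -> Result<(), String> {
    // V4-only parameters must not appear on a V3 client.
    if version == ApiVersion::V3 && (update.is_some() || tcb_evaluation_data_number.is_some()) {
        return Err("parameter requires API v4".to_string());
    }
    // On V4, `update` and `tcbEvaluationDataNumber` are mutually exclusive.
    if update.is_some() && tcb_evaluation_data_number.is_some() {
        return Err("'update' and 'tcbEvaluationDataNumber' conflict".to_string());
    }
    Ok(())
}

fn main() {
    assert!(check_params(ApiVersion::V4, Some("early"), None).is_ok());
    assert!(check_params(ApiVersion::V4, Some("early"), Some(17)).is_err());
    assert!(check_params(ApiVersion::V3, None, Some(17)).is_err());
}
```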


@ -0,0 +1,134 @@
// SPDX-License-Identifier: Apache-2.0
// Copyright (c) 2025 Matter Labs
//! FMSPCs & TCB Evaluation Data Numbers
use super::ApiClient; // Import from parent module
use crate::{
error::{check_status, IntelApiError},
responses::TcbEvaluationDataNumbersResponse,
types::{ApiVersion, PlatformFilter},
FmspcJsonResponse,
};
use reqwest::StatusCode;
impl ApiClient {
/// GET /sgx/certification/v4/fmspcs - V4 ONLY
/// Retrieves a list of FMSPC values for SGX and TDX platforms (API v4 only).
///
/// # Arguments
///
/// * `platform_filter` - An optional filter specifying SGX or TDX platforms.
///
/// # Returns
///
/// A `String` containing a JSON array of objects, each with `fmspc` and `platform` fields.
///
/// # Errors
///
/// Returns an `IntelApiError` if an unsupported API version is used or if the request fails.
pub async fn get_fmspcs(
&self,
platform_filter: Option<PlatformFilter>,
) -> Result<FmspcJsonResponse, IntelApiError> {
if self.api_version == ApiVersion::V3 {
return Err(IntelApiError::UnsupportedApiVersion(
"API v4 only function".to_string(),
));
}
let path = self.build_api_path("sgx", "", "fmspcs")?;
let mut url = self.base_url.join(&path)?;
if let Some(pf) = platform_filter {
url.query_pairs_mut()
.append_pair("platform", &pf.to_string());
}
let request_builder = self.client.get(url);
let response = self.execute_with_retry(request_builder).await?;
let response = check_status(response, &[StatusCode::OK]).await?;
let fmspcs_json = response.text().await?;
Ok(fmspcs_json)
}
/// GET /sgx/certification/v4/tcbevaluationdatanumbers - V4 ONLY
/// Retrieves the currently supported SGX TCB Evaluation Data Numbers (API v4 only).
///
/// # Returns
///
/// A [`TcbEvaluationDataNumbersResponse`] containing the JSON structure of TCB Evaluation
/// Data Numbers and an issuer chain header.
///
/// # Errors
///
/// Returns an `IntelApiError` if an unsupported API version is used or if the request fails.
pub async fn get_sgx_tcb_evaluation_data_numbers(
&self,
) -> Result<TcbEvaluationDataNumbersResponse, IntelApiError> {
// Endpoint requires V4
if self.api_version != ApiVersion::V4 {
return Err(IntelApiError::UnsupportedApiVersion(
"SGX TCB Evaluation Data Numbers endpoint requires API v4".to_string(),
));
}
let path = self.build_api_path("sgx", "", "tcbevaluationdatanumbers")?;
let url = self.base_url.join(&path)?;
let request_builder = self.client.get(url);
let (tcb_evaluation_data_numbers_json, issuer_chain) = self
.fetch_json_with_issuer_chain(
request_builder,
"TCB-Evaluation-Data-Numbers-Issuer-Chain",
None,
)
.await?;
Ok(TcbEvaluationDataNumbersResponse {
tcb_evaluation_data_numbers_json,
issuer_chain,
})
}
/// GET /tdx/certification/v4/tcbevaluationdatanumbers - V4 ONLY
/// Retrieves the currently supported TDX TCB Evaluation Data Numbers (API v4 only).
///
/// # Returns
///
/// A [`TcbEvaluationDataNumbersResponse`] containing the JSON structure of TCB Evaluation
/// Data Numbers and an issuer chain header.
///
/// # Errors
///
/// Returns an `IntelApiError` if an unsupported API version is used or if the request fails.
pub async fn get_tdx_tcb_evaluation_data_numbers(
&self,
) -> Result<TcbEvaluationDataNumbersResponse, IntelApiError> {
// Endpoint requires V4
if self.api_version != ApiVersion::V4 {
return Err(IntelApiError::UnsupportedApiVersion(
"TDX TCB Evaluation Data Numbers endpoint requires API v4".to_string(),
));
}
let path = self.build_api_path("tdx", "", "tcbevaluationdatanumbers")?;
let url = self.base_url.join(&path)?;
let request_builder = self.client.get(url);
let (tcb_evaluation_data_numbers_json, issuer_chain) = self
.fetch_json_with_issuer_chain(
request_builder,
"TCB-Evaluation-Data-Numbers-Issuer-Chain",
None,
)
.await?;
Ok(TcbEvaluationDataNumbersResponse {
tcb_evaluation_data_numbers_json,
issuer_chain,
})
}
}
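Both `tcbevaluationdatanumbers` endpoints above pass an empty `service` segment to `build_api_path`, relying on its double-slash cleanup. A minimal sketch of that path construction (version hard-coded to `"v4"`, error cases omitted for brevity):

```rust
// Illustrative stand-in for ApiClient::build_api_path.
fn build_api_path(technology: &str, service: &str, endpoint: &str) -> String {
    // An empty `service` produces "//", which the replace collapses,
    // yielding e.g. "/sgx/certification/v4/fmspcs".
    format!("/{technology}/certification/v4/{service}/{endpoint}").replace("//", "/")
}

fn main() {
    assert_eq!(build_api_path("sgx", "", "fmspcs"), "/sgx/certification/v4/fmspcs");
    assert_eq!(
        build_api_path("tdx", "", "tcbevaluationdatanumbers"),
        "/tdx/certification/v4/tcbevaluationdatanumbers"
    );
    assert_eq!(
        build_api_path("sgx", "qe", "identity"),
        "/sgx/certification/v4/qe/identity"
    );
}
```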


@ -0,0 +1,291 @@
// SPDX-License-Identifier: Apache-2.0
// Copyright (c) 2025 Matter Labs
//! Internal helper methods
use super::ApiClient; // Import from parent module
use crate::{
error::{check_status, extract_api_error_details, IntelApiError},
responses::{PckCertificateResponse, PckCertificatesResponse},
types::{ApiVersion, UpdateType},
};
use percent_encoding::percent_decode_str;
use reqwest::{RequestBuilder, Response, StatusCode};
use std::{io, time::Duration};
use tokio::time::sleep;
impl ApiClient {
/// Helper to construct API paths dynamically based on version and technology (SGX/TDX).
pub(super) fn build_api_path(
&self,
technology: &str,
service: &str,
endpoint: &str,
) -> Result<String, IntelApiError> {
let api_segment = self.api_version.path_segment();
if technology == "tdx" && self.api_version == ApiVersion::V3 {
return Err(IntelApiError::UnsupportedApiVersion(format!(
"TDX endpoint /{technology}/{service}/{endpoint} requires API v4",
)));
}
if technology == "sgx" && service == "registration" {
// Registration paths are fixed at v1 regardless of client's api_version
return Ok(format!("/sgx/registration/v1/{endpoint}").replace("//", "/"));
}
Ok(
format!("/{technology}/certification/{api_segment}/{service}/{endpoint}")
.replace("//", "/"),
)
}
/// Helper to add an optional header if the string is non-empty.
pub(super) fn maybe_add_header(
builder: RequestBuilder,
header_name: &'static str,
header_value: Option<&str>,
) -> RequestBuilder {
match header_value {
Some(value) if !value.is_empty() => builder.header(header_name, value),
_ => builder,
}
}
/// Helper to extract a required header string value, handling potential v3/v4 differences.
pub(super) fn get_required_header(
&self,
response: &Response,
v4_header_name: &'static str,
v3_header_name: Option<&'static str>,
) -> Result<String, IntelApiError> {
let header_name = match self.api_version {
ApiVersion::V4 => v4_header_name,
ApiVersion::V3 => v3_header_name.unwrap_or(v4_header_name),
};
let value = response
.headers()
.get(header_name)
.ok_or(IntelApiError::MissingOrInvalidHeader(header_name))?
.to_str()
.map_err(|e| IntelApiError::HeaderValueParse(header_name, e.to_string()))?;
if value.contains('%') {
percent_decode_str(value)
.decode_utf8()
.map_err(|e| IntelApiError::HeaderValueParse(header_name, e.to_string()))
.map(|s| s.to_string())
} else {
Ok(value.to_string())
}
}
/// Helper to execute a request that returns a single PCK certificate and associated headers.
pub(super) async fn fetch_pck_certificate(
&self,
request_builder: RequestBuilder,
) -> Result<PckCertificateResponse, IntelApiError> {
let response = self.execute_with_retry(request_builder).await?;
let response = check_status(response, &[StatusCode::OK]).await?;
let issuer_chain = self.get_required_header(
&response,
"SGX-PCK-Certificate-Issuer-Chain",
Some("SGX-PCK-Certificate-Issuer-Chain"),
)?;
let tcbm = self.get_required_header(&response, "SGX-TCBm", Some("SGX-TCBm"))?;
let fmspc = self.get_required_header(&response, "SGX-FMSPC", Some("SGX-FMSPC"))?;
let pck_cert_pem = response.text().await?;
Ok(PckCertificateResponse {
pck_cert_pem,
issuer_chain,
tcbm,
fmspc,
})
}
/// Helper to execute a request that returns a PCK certificates JSON array and associated headers.
pub(super) async fn fetch_pck_certificates(
&self,
request_builder: RequestBuilder,
) -> Result<PckCertificatesResponse, IntelApiError> {
let response = self.execute_with_retry(request_builder).await?;
let response = check_status(response, &[StatusCode::OK]).await?;
let issuer_chain = self.get_required_header(
&response,
"SGX-PCK-Certificate-Issuer-Chain",
Some("SGX-PCK-Certificate-Issuer-Chain"),
)?;
let fmspc = self.get_required_header(&response, "SGX-FMSPC", Some("SGX-FMSPC"))?;
let pck_certs_json = response.text().await?;
Ok(PckCertificatesResponse {
pck_certs_json,
issuer_chain,
fmspc,
})
}
/// Helper to execute a request expected to return JSON plus an Issuer-Chain header.
pub(super) async fn fetch_json_with_issuer_chain(
&self,
request_builder: RequestBuilder,
v4_issuer_chain_header: &'static str,
v3_issuer_chain_header: Option<&'static str>,
) -> Result<(String, String), IntelApiError> {
let response = self.execute_with_retry(request_builder).await?;
let response = check_status(response, &[StatusCode::OK]).await?;
let issuer_chain =
self.get_required_header(&response, v4_issuer_chain_header, v3_issuer_chain_header)?;
let json_body = response.text().await?;
Ok((json_body, issuer_chain))
}
/// Checks for HTTP 404 or 410 status when querying TCB Evaluation Data Number based resources.
pub(super) async fn check_tcb_evaluation_status(
&self,
request_builder: &RequestBuilder,
tcb_evaluation_data_number_val: u64,
resource_description: &str,
) -> Result<(), IntelApiError> {
let builder_clone = request_builder.try_clone().ok_or_else(|| {
IntelApiError::Io(io::Error::other(
"Failed to clone request builder for status check",
))
})?;
let response = self.execute_with_retry(builder_clone).await?;
let status = response.status();
if status == StatusCode::NOT_FOUND || status == StatusCode::GONE {
let (request_id, _, _) = extract_api_error_details(&response);
return Err(IntelApiError::ApiError {
status,
request_id,
error_code: None,
error_message: Some(format!(
"{} for TCB Evaluation Data Number {} {}",
resource_description,
tcb_evaluation_data_number_val,
if status == StatusCode::NOT_FOUND {
"not found"
} else {
"is no longer available"
}
)),
});
}
Ok(())
}
/// Ensures the client is configured for API v4, otherwise returns an error.
pub(super) fn ensure_v4_api(&self, function_name: &str) -> Result<(), IntelApiError> {
if self.api_version != ApiVersion::V4 {
return Err(IntelApiError::UnsupportedApiVersion(format!(
"{function_name} requires API v4",
)));
}
Ok(())
}
/// Checks if a V4-only parameter is provided with a V3 API version.
pub(super) fn check_v4_only_param<T: Copy>(
&self,
param_value: Option<T>,
param_name: &str,
) -> Result<(), IntelApiError> {
if self.api_version == ApiVersion::V3 && param_value.is_some() {
Err(IntelApiError::UnsupportedApiVersion(format!(
"'{param_name}' parameter requires API v4",
)))
} else {
Ok(())
}
}
/// Checks for conflicting `update` and `tcb_evaluation_data_number` parameters when using V4.
pub(super) fn check_conflicting_update_params(
&self,
update: Option<UpdateType>,
tcb_evaluation_data_number: Option<u64>,
) -> Result<(), IntelApiError> {
if self.api_version == ApiVersion::V4
&& update.is_some()
&& tcb_evaluation_data_number.is_some()
{
Err(IntelApiError::ConflictingParameters(
"'update' and 'tcbEvaluationDataNumber'",
))
} else {
Ok(())
}
}
/// Executes a request with automatic retry logic for rate limiting (429 responses).
///
/// This method will automatically retry the request up to `max_retries` times
/// when receiving a 429 Too Many Requests response, waiting for the duration
/// specified in the Retry-After header.
pub(super) async fn execute_with_retry(
&self,
request_builder: RequestBuilder,
) -> Result<Response, IntelApiError> {
let mut retries = 0;
loop {
// Clone the request builder for retry attempts
let builder = request_builder.try_clone().ok_or_else(|| {
IntelApiError::Io(io::Error::other(
"Failed to clone request builder for retry",
))
})?;
let response = builder.send().await?;
let status = response.status();
if status != StatusCode::TOO_MANY_REQUESTS {
// Not a rate limit error, return the response
return Ok(response);
}
// Handle 429 Too Many Requests
if retries >= self.max_retries {
// No more retries, return the error
let request_id = response
.headers()
.get("Request-ID")
.and_then(|v| v.to_str().ok())
.unwrap_or("Unknown")
.to_string();
let retry_after = response
.headers()
.get("Retry-After")
.and_then(|v| v.to_str().ok())
.and_then(|v| v.parse::<u64>().ok())
.unwrap_or(60);
return Err(IntelApiError::TooManyRequests {
request_id,
retry_after,
});
}
// Parse Retry-After header
let retry_after_secs = response
.headers()
.get("Retry-After")
.and_then(|v| v.to_str().ok())
.and_then(|v| v.parse::<u64>().ok())
.unwrap_or(60); // Default to 60 seconds
// Wait before retrying
sleep(Duration::from_secs(retry_after_secs)).await;
retries += 1;
}
}
}
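The 429 handling in `execute_with_retry` boils down to two pure decisions: parse the `Retry-After` header (seconds form, falling back to 60) and check the retry budget. A stdlib-only sketch with illustrative names:

```rust
// Parse a Retry-After header value in its seconds form; default to 60 s,
// matching the client's fallback.
fn retry_after_secs(header: Option<&str>) -> u64 {
    header.and_then(|v| v.parse::<u64>().ok()).unwrap_or(60)
}

// Retry only on 429, and only while the retry budget is not exhausted.
fn should_retry(status: u16, retries_so_far: u32, max_retries: u32) -> bool {
    status == 429 && retries_so_far < max_retries
}

fn main() {
    assert_eq!(retry_after_secs(Some("30")), 30);
    assert_eq!(retry_after_secs(Some("not-a-number")), 60); // fallback
    assert_eq!(retry_after_secs(None), 60);
    assert!(should_retry(429, 0, 3));
    assert!(!should_retry(429, 3, 3)); // budget exhausted
    assert!(!should_retry(200, 0, 3)); // only 429 triggers a retry
}
```

Note the real client also reads `Retry-After` a second time on the final failed attempt so the value can be surfaced in `IntelApiError::TooManyRequests`.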


@ -0,0 +1,135 @@
// SPDX-License-Identifier: Apache-2.0
// Copyright (c) 2025 Matter Labs
mod enclave_identity;
mod fmspc;
mod helpers;
mod pck_cert;
mod pck_crl;
mod registration;
mod tcb_info;
use crate::{error::IntelApiError, types::ApiVersion};
use reqwest::Client;
use url::Url;
// Base URL for the Intel Trusted Services API
const BASE_URL: &str = "https://api.trustedservices.intel.com";
/// Client for interacting with Intel Trusted Services API.
///
/// Provides methods to access both SGX and TDX certification services,
/// supporting API versions V3 and V4. This client offers functionality
/// to register platforms, retrieve PCK certificates and CRLs, fetch TCB
/// information, enclave identities, as well as TCB evaluation data numbers.
///
/// # Examples
///
/// ```rust,no_run
/// use intel_dcap_api::ApiClient;
///
/// #[tokio::main]
/// async fn main() -> Result<(), Box<dyn std::error::Error>> {
/// // Create a client with default settings (V4 API)
/// let client = ApiClient::new()?;
///
/// // Retrieve TCB info for a specific FMSPC
/// let tcb_info = client.get_sgx_tcb_info("00606A000000", None, None).await?;
/// println!("TCB Info: {}", tcb_info.tcb_info_json);
///
/// Ok(())
/// }
/// ```
#[derive(Clone)]
pub struct ApiClient {
client: Client,
base_url: Url,
api_version: ApiVersion,
/// Maximum number of automatic retries for rate-limited requests (429 responses)
max_retries: u32,
}
impl ApiClient {
/// Creates a new client targeting the latest supported API version (V4).
///
/// # Returns
///
/// A result containing the newly created `ApiClient` or an `IntelApiError` if there
/// was an issue building the underlying HTTP client.
///
/// # Errors
///
/// This function may fail if the provided TLS version or base URL
/// cannot be used to build a `reqwest` client.
pub fn new() -> Result<Self, IntelApiError> {
// Default to V4
Self::new_with_options(BASE_URL, ApiVersion::V4)
}
/// Creates a new client targeting a specific API version.
///
/// # Arguments
///
/// * `api_version` - The desired API version to use (V3 or V4).
///
/// # Errors
///
/// Returns an `IntelApiError` if the `reqwest` client cannot be built
/// with the specified options.
pub fn new_with_version(api_version: ApiVersion) -> Result<Self, IntelApiError> {
Self::new_with_options(BASE_URL, api_version)
}
/// Creates a new client with a custom base URL, targeting the latest supported API version (V4).
///
/// # Arguments
///
/// * `base_url` - The custom base URL for the Intel Trusted Services API.
///
/// # Errors
///
/// Returns an `IntelApiError` if the `reqwest` client cannot be built
/// or if the provided base URL is invalid.
pub fn new_with_base_url(base_url: impl reqwest::IntoUrl) -> Result<Self, IntelApiError> {
// Default to V4
Self::new_with_options(base_url, ApiVersion::V4)
}
/// Creates a new client with a custom base URL and specific API version.
///
/// # Arguments
///
/// * `base_url` - The custom base URL for the Intel Trusted Services API.
/// * `api_version` - The desired API version (V3 or V4).
///
/// # Errors
///
/// Returns an `IntelApiError` if the `reqwest` client cannot be built
/// or if the provided base URL is invalid.
pub fn new_with_options(
base_url: impl reqwest::IntoUrl,
api_version: ApiVersion,
) -> Result<Self, IntelApiError> {
Ok(ApiClient {
client: Client::builder()
.min_tls_version(reqwest::tls::Version::TLS_1_2)
.build()?,
base_url: base_url.into_url()?,
api_version,
max_retries: 3, // Default to 3 retries
})
}
/// Sets the maximum number of automatic retries for rate-limited requests.
///
/// When the API returns a 429 (Too Many Requests) response, the client will
/// automatically wait for the duration specified in the Retry-After header
/// and retry the request up to this many times.
///
/// # Arguments
///
/// * `max_retries` - Maximum number of retries (0 disables automatic retries)
pub fn set_max_retries(&mut self, max_retries: u32) {
self.max_retries = max_retries;
}
}


@ -0,0 +1,353 @@
// SPDX-License-Identifier: Apache-2.0
// Copyright (c) 2025 Matter Labs
//! Provisioning Certification Service
use super::ApiClient; // Import from parent module
use crate::{
error::IntelApiError,
requests::{PckCertRequest, PckCertsConfigRequest, PckCertsRequest},
responses::{PckCertificateResponse, PckCertificatesResponse},
types::ApiVersion,
};
use reqwest::header;
impl ApiClient {
/// GET /sgx/certification/{v3,v4}/pckcert
/// Retrieves a single SGX PCK certificate using encrypted PPID and SVNs.
///
/// Optionally requires a subscription key. The `ppid_encryption_key_type` parameter
/// is only valid for API v4 and allows specifying the PPID encryption key type (e.g. "RSA-3072").
///
/// # Arguments
///
/// * `encrypted_ppid` - Hex-encoded encrypted PPID.
/// * `cpusvn` - Hex-encoded CPUSVN value.
/// * `pcesvn` - Hex-encoded PCESVN value.
/// * `pceid` - Hex-encoded PCEID value.
/// * `subscription_key` - Optional subscription key if the Intel API requires it.
/// * `ppid_encryption_key_type` - Optional PPID encryption key type (V4 only).
///
/// # Returns
///
/// A [`PckCertificateResponse`] containing the PEM-encoded certificate, issuer chain,
/// TCBm, and FMSPC.
///
/// # Errors
///
/// Returns an `IntelApiError` if the API call fails or the response contains an invalid status.
pub async fn get_pck_certificate_by_ppid(
&self,
encrypted_ppid: &str,
cpusvn: &str,
pcesvn: &str,
pceid: &str,
subscription_key: Option<&str>,
ppid_encryption_key_type: Option<&str>,
) -> Result<PckCertificateResponse, IntelApiError> {
// Check V4-only parameter
self.check_v4_only_param(ppid_encryption_key_type, "PPID-Encryption-Key")?;
let path = self.build_api_path("sgx", "", "pckcert")?; // service is empty
let mut url = self.base_url.join(&path)?;
url.query_pairs_mut()
.append_pair("encrypted_ppid", encrypted_ppid)
.append_pair("cpusvn", cpusvn)
.append_pair("pcesvn", pcesvn)
.append_pair("pceid", pceid);
let mut request_builder = self.client.get(url);
request_builder = Self::maybe_add_header(
request_builder,
"Ocp-Apim-Subscription-Key",
subscription_key,
);
// Only add for V4
if self.api_version == ApiVersion::V4 {
request_builder = Self::maybe_add_header(
request_builder,
"PPID-Encryption-Key",
ppid_encryption_key_type,
);
}
self.fetch_pck_certificate(request_builder).await
}
/// POST /sgx/certification/{v3,v4}/pckcert
/// Retrieves a single SGX PCK certificate using a platform manifest and SVNs.
///
/// Optionally requires a subscription key.
///
/// # Arguments
///
/// * `platform_manifest` - Hex-encoded platform manifest.
/// * `cpusvn` - Hex-encoded CPUSVN value.
/// * `pcesvn` - Hex-encoded PCESVN value.
/// * `pceid` - Hex-encoded PCEID value.
/// * `subscription_key` - Optional subscription key if the Intel API requires it.
///
/// # Returns
///
/// A [`PckCertificateResponse`] containing the PEM-encoded certificate, issuer chain,
/// TCBm, and FMSPC.
///
/// # Errors
///
/// Returns an `IntelApiError` if the request fails or if the response is invalid.
pub async fn get_pck_certificate_by_manifest(
&self,
platform_manifest: &str,
cpusvn: &str,
pcesvn: &str,
pceid: &str,
subscription_key: Option<&str>,
) -> Result<PckCertificateResponse, IntelApiError> {
let path = self.build_api_path("sgx", "", "pckcert")?;
let url = self.base_url.join(&path)?;
let request_body = PckCertRequest {
platform_manifest,
cpusvn,
pcesvn,
pceid,
};
let mut request_builder = self
.client
.post(url)
.header(header::CONTENT_TYPE, "application/json")
.json(&request_body);
request_builder = Self::maybe_add_header(
request_builder,
"Ocp-Apim-Subscription-Key",
subscription_key,
);
self.fetch_pck_certificate(request_builder).await
}
/// GET /sgx/certification/{v3,v4}/pckcerts
/// Retrieves all SGX PCK certificates for a platform using encrypted PPID.
///
/// Optionally requires a subscription key. The `ppid_encryption_key_type` parameter
/// is only valid for API v4.
///
/// # Arguments
///
/// * `encrypted_ppid` - Hex-encoded encrypted PPID.
/// * `pceid` - Hex-encoded PCEID value.
/// * `subscription_key` - Optional subscription key if the Intel API requires it.
/// * `ppid_encryption_key_type` - Optional PPID encryption key type (V4 only).
///
/// # Returns
///
/// A [`PckCertificatesResponse`] containing JSON with `{tcb, tcbm, cert}` entries,
/// as well as the issuer chain and FMSPC headers.
///
/// # Errors
///
/// Returns an `IntelApiError` if the API call fails or the response status is invalid.
pub async fn get_pck_certificates_by_ppid(
&self,
encrypted_ppid: &str,
pceid: &str,
subscription_key: Option<&str>,
ppid_encryption_key_type: Option<&str>,
) -> Result<PckCertificatesResponse, IntelApiError> {
// Check V4-only parameter
self.check_v4_only_param(ppid_encryption_key_type, "PPID-Encryption-Key")?;
let path = self.build_api_path("sgx", "", "pckcerts")?;
let mut url = self.base_url.join(&path)?;
url.query_pairs_mut()
.append_pair("encrypted_ppid", encrypted_ppid)
.append_pair("pceid", pceid);
let mut request_builder = self.client.get(url);
request_builder = Self::maybe_add_header(
request_builder,
"Ocp-Apim-Subscription-Key",
subscription_key,
);
// Only add for V4
if self.api_version == ApiVersion::V4 {
request_builder = Self::maybe_add_header(
request_builder,
"PPID-Encryption-Key",
ppid_encryption_key_type,
);
}
self.fetch_pck_certificates(request_builder).await
}
/// POST /sgx/certification/{v3,v4}/pckcerts
/// Retrieves all SGX PCK certificates for a platform using a platform manifest.
///
/// Optionally requires a subscription key.
///
/// # Arguments
///
/// * `platform_manifest` - Hex-encoded platform manifest.
/// * `pceid` - Hex-encoded PCEID value.
/// * `subscription_key` - Optional subscription key if the Intel API requires it.
///
/// # Returns
///
/// A [`PckCertificatesResponse`] containing JSON with `{tcb, tcbm, cert}` entries,
/// as well as the issuer chain and FMSPC headers.
///
/// # Errors
///
/// Returns an `IntelApiError` if the API call fails or the response status is invalid.
pub async fn get_pck_certificates_by_manifest(
&self,
platform_manifest: &str,
pceid: &str,
subscription_key: Option<&str>,
) -> Result<PckCertificatesResponse, IntelApiError> {
let path = self.build_api_path("sgx", "", "pckcerts")?;
let url = self.base_url.join(&path)?;
let request_body = PckCertsRequest {
platform_manifest,
pceid,
};
let mut request_builder = self
.client
.post(url)
.header(header::CONTENT_TYPE, "application/json")
.json(&request_body);
request_builder = Self::maybe_add_header(
request_builder,
"Ocp-Apim-Subscription-Key",
subscription_key,
);
self.fetch_pck_certificates(request_builder).await
}
/// GET /sgx/certification/{v3,v4}/pckcerts/config (using PPID)
/// Retrieves SGX PCK certificates for a specific configuration (CPUSVN) using encrypted PPID.
///
/// Optionally requires a subscription key. The `ppid_encryption_key_type` parameter
/// is only valid for API v4. Returns JSON with `{tcb, tcbm, cert}` entries,
/// as well as the issuer chain and FMSPC headers.
///
/// # Arguments
///
/// * `encrypted_ppid` - Hex-encoded encrypted PPID.
/// * `pceid` - Hex-encoded PCEID value.
/// * `cpusvn` - Hex-encoded CPUSVN value for the requested configuration.
/// * `subscription_key` - Optional subscription key if the Intel API requires it.
/// * `ppid_encryption_key_type` - Optional PPID encryption key type (V4 only).
///
/// # Returns
///
/// A [`PckCertificatesResponse`] with the requested config's certificate data.
///
/// # Errors
///
/// Returns an `IntelApiError` if the request fails or if the response status
/// is not `200 OK`.
pub async fn get_pck_certificates_config_by_ppid(
&self,
encrypted_ppid: &str,
pceid: &str,
cpusvn: &str,
subscription_key: Option<&str>,
ppid_encryption_key_type: Option<&str>,
) -> Result<PckCertificatesResponse, IntelApiError> {
// Check V4-only parameter
self.check_v4_only_param(ppid_encryption_key_type, "PPID-Encryption-Key")?;
let path = self.build_api_path("sgx", "", "pckcerts/config")?;
let mut url = self.base_url.join(&path)?;
url.query_pairs_mut()
.append_pair("encrypted_ppid", encrypted_ppid)
.append_pair("pceid", pceid)
.append_pair("cpusvn", cpusvn);
let mut request_builder = self.client.get(url);
request_builder = Self::maybe_add_header(
request_builder,
"Ocp-Apim-Subscription-Key",
subscription_key,
);
// Only add for V4
if self.api_version == ApiVersion::V4 {
request_builder = Self::maybe_add_header(
request_builder,
"PPID-Encryption-Key",
ppid_encryption_key_type,
);
}
self.fetch_pck_certificates(request_builder).await
}
/// POST /sgx/certification/{v3,v4}/pckcerts/config (using Manifest)
/// Retrieves SGX PCK certificates for a specific configuration (CPUSVN) using a platform manifest.
///
/// Optionally requires a subscription key. Returns JSON with `{tcb, tcbm, cert}` entries,
/// as well as the issuer chain and FMSPC headers.
///
/// # Arguments
///
/// * `platform_manifest` - Hex-encoded platform manifest.
/// * `pceid` - Hex-encoded PCEID value.
/// * `cpusvn` - Hex-encoded CPUSVN value for the requested configuration.
/// * `subscription_key` - Optional subscription key if needed by the Intel API.
///
/// # Returns
///
/// A [`PckCertificatesResponse`] with the requested config's certificate data.
///
/// # Errors
///
/// Returns an `IntelApiError` if the request fails or if the response status
/// is not `200 OK`.
pub async fn get_pck_certificates_config_by_manifest(
&self,
platform_manifest: &str,
pceid: &str,
cpusvn: &str,
subscription_key: Option<&str>,
) -> Result<PckCertificatesResponse, IntelApiError> {
let path = self.build_api_path("sgx", "", "pckcerts/config")?;
let url = self.base_url.join(&path)?;
let request_body = PckCertsConfigRequest {
platform_manifest,
pceid,
cpusvn,
};
let mut request_builder = self
.client
.post(url)
.header(header::CONTENT_TYPE, "application/json")
.json(&request_body);
request_builder = Self::maybe_add_header(
request_builder,
"Ocp-Apim-Subscription-Key",
subscription_key,
);
self.fetch_pck_certificates(request_builder).await
}
}
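Every method above funnels the optional subscription key through `maybe_add_header`, which attaches a header only when the value is present and non-empty. A stdlib-only sketch of that pattern, using a `HashMap` as an illustrative stand-in for reqwest's `RequestBuilder`:

```rust
use std::collections::HashMap;

// Attach a header only for Some(non-empty) values; None and "" are both
// treated as "no header", mirroring ApiClient::maybe_add_header.
fn maybe_add_header<'a>(
    mut headers: HashMap<&'static str, &'a str>,
    name: &'static str,
    value: Option<&'a str>,
) -> HashMap<&'static str, &'a str> {
    if let Some(v) = value {
        if !v.is_empty() {
            headers.insert(name, v);
        }
    }
    headers
}

fn main() {
    let h = maybe_add_header(HashMap::new(), "Ocp-Apim-Subscription-Key", Some("abc123"));
    assert_eq!(h.get("Ocp-Apim-Subscription-Key"), Some(&"abc123"));

    // Empty string and None both skip the header.
    assert!(maybe_add_header(HashMap::new(), "PPID-Encryption-Key", Some("")).is_empty());
    assert!(maybe_add_header(HashMap::new(), "PPID-Encryption-Key", None).is_empty());
}
```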


@ -0,0 +1,69 @@
// SPDX-License-Identifier: Apache-2.0
// Copyright (c) 2025 Matter Labs
//! PCK Certificate Revocation List
use super::ApiClient; // Import from parent module
use crate::{
error::{check_status, IntelApiError},
responses::PckCrlResponse,
types::{CaType, CrlEncoding},
};
use reqwest::StatusCode;
impl ApiClient {
/// GET /sgx/certification/{v3,v4}/pckcrl
/// Retrieves the PCK Certificate Revocation List (CRL) for a specified CA type.
///
/// Optionally takes an `encoding` parameter indicating whether the CRL should be
/// returned as PEM or DER. Defaults to PEM if not specified.
///
/// # Arguments
///
/// * `ca_type` - The type of CA to retrieve the CRL for (e.g., "processor" or "platform").
/// * `encoding` - An optional [`CrlEncoding`] (PEM or DER).
///
/// # Returns
///
/// A [`PckCrlResponse`] containing the CRL data and the issuer chain.
///
/// # Errors
///
/// Returns an `IntelApiError` if the request fails or if the response status
/// is not `200 OK`.
pub async fn get_pck_crl(
&self,
ca_type: CaType,
encoding: Option<CrlEncoding>,
) -> Result<PckCrlResponse, IntelApiError> {
let path = self.build_api_path("sgx", "", "pckcrl")?;
let mut url = self.base_url.join(&path)?;
url.query_pairs_mut()
.append_pair("ca", &ca_type.to_string());
if let Some(enc) = encoding {
url.query_pairs_mut()
.append_pair("encoding", &enc.to_string());
}
let request_builder = self.client.get(url);
let response = self.execute_with_retry(request_builder).await?;
let response = check_status(response, &[StatusCode::OK]).await?;
let issuer_chain = self.get_required_header(
&response,
"SGX-PCK-CRL-Issuer-Chain",
Some("SGX-PCK-CRL-Issuer-Chain"),
)?;
// Response body is PEM or DER CRL
let crl_data = response.bytes().await?.to_vec();
Ok(PckCrlResponse {
crl_data,
issuer_chain,
})
}
}
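The resulting query string pairs a mandatory `ca` parameter with an optional `encoding`. A minimal sketch of that assembly (the real client delegates percent-encoding and joining to `url::Url::query_pairs_mut`; this stand-in assumes values need no escaping):

```rust
// Illustrative stand-in for the pckcrl query construction.
fn pckcrl_query(ca: &str, encoding: Option<&str>) -> String {
    match encoding {
        Some(enc) => format!("ca={ca}&encoding={enc}"),
        None => format!("ca={ca}"),
    }
}

fn main() {
    assert_eq!(pckcrl_query("processor", None), "ca=processor");
    assert_eq!(pckcrl_query("platform", Some("der")), "ca=platform&encoding=der");
}
```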


@ -0,0 +1,108 @@
// SPDX-License-Identifier: Apache-2.0
// Copyright (c) 2025 Matter Labs
//! Registration
use super::ApiClient; // Import from parent module
use crate::{
error::{check_status, IntelApiError},
responses::AddPackageResponse,
};
use reqwest::{header, StatusCode};
use std::num::ParseIntError;
impl ApiClient {
/// POST /sgx/registration/v1/platform
/// Registers a multi-package SGX platform with the Intel Trusted Services API.
///
/// # Arguments
///
/// * `platform_manifest` - Binary data representing the platform manifest.
///
/// # Returns
///
/// Returns the hex-encoded PPID as a `String` upon success.
///
/// # Errors
///
/// Returns an `IntelApiError` if the request fails or if the response status
/// is not HTTP `201 CREATED`.
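///
/// # Example
///
/// Illustrative usage; the manifest bytes are a placeholder and a live Intel API
/// endpoint is required:
///
/// ```rust,no_run
/// use intel_dcap_api::ApiClient;
///
/// # async fn example() -> Result<(), Box<dyn std::error::Error>> {
/// let client = ApiClient::new()?;
/// // `manifest` must be the binary Platform Manifest produced by the
/// // platform's registration flow; empty here as a placeholder.
/// let manifest: Vec<u8> = Vec::new();
/// let ppid_hex = client.register_platform(manifest).await?;
/// println!("Registered platform, PPID: {}", ppid_hex);
/// # Ok(())
/// # }
/// ```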
pub async fn register_platform(
&self,
platform_manifest: Vec<u8>,
) -> Result<String, IntelApiError> {
// Registration paths are fixed, use the helper with "registration" service
let path = self.build_api_path("sgx", "registration", "platform")?;
let url = self.base_url.join(&path)?;
let request_builder = self
.client
.post(url)
.header(header::CONTENT_TYPE, "application/octet-stream")
.body(platform_manifest);
let response = self.execute_with_retry(request_builder).await?;
let response = check_status(response, &[StatusCode::CREATED]).await?;
// Response body is hex-encoded PPID
let ppid_hex = response.text().await?;
Ok(ppid_hex)
}
/// POST /sgx/registration/v1/package
/// Adds new package(s) to an already registered SGX platform instance.
///
/// # Arguments
///
/// * `add_package_request` - Binary data for the "Add Package" request body.
/// * `subscription_key` - The subscription key required by the Intel API.
///
/// # Returns
///
/// An [`AddPackageResponse`] containing the Platform Membership Certificates and
/// the count of them extracted from the response header.
///
/// # Errors
///
/// Returns an `IntelApiError` if the request fails, if the subscription key is invalid,
/// or if the response status is not HTTP `200 OK`.
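///
/// # Example
///
/// Illustrative usage; the request body and subscription key are placeholders:
///
/// ```rust,no_run
/// use intel_dcap_api::ApiClient;
///
/// # async fn example() -> Result<(), Box<dyn std::error::Error>> {
/// let client = ApiClient::new()?;
/// // Binary "Add Package" request body from the platform; empty placeholder here.
/// let request_body: Vec<u8> = Vec::new();
/// let resp = client.add_package(request_body, "my-subscription-key").await?;
/// println!("Received {} certificate(s)", resp.pck_cert_count);
/// # Ok(())
/// # }
/// ```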
pub async fn add_package(
&self,
add_package_request: Vec<u8>,
subscription_key: &str,
) -> Result<AddPackageResponse, IntelApiError> {
if subscription_key.is_empty() {
return Err(IntelApiError::InvalidSubscriptionKey);
}
// Registration paths are fixed
let path = self.build_api_path("sgx", "registration", "package")?;
let url = self.base_url.join(&path)?;
let request_builder = self
.client
.post(url)
.header("Ocp-Apim-Subscription-Key", subscription_key)
.header(header::CONTENT_TYPE, "application/octet-stream")
.body(add_package_request);
let response = self.execute_with_retry(request_builder).await?;
let response = check_status(response, &[StatusCode::OK]).await?;
// Use the generic header helper, assuming header name is stable across reg versions
let cert_count_str = self.get_required_header(&response, "Certificate-Count", None)?;
let pck_cert_count: usize = cert_count_str.parse().map_err(|e: ParseIntError| {
IntelApiError::HeaderValueParse("Certificate-Count", e.to_string())
})?;
// Response body is a binary array of certificates
let pck_certs = response.bytes().await?.to_vec();
Ok(AddPackageResponse {
pck_certs,
pck_cert_count,
})
}
}


@@ -0,0 +1,167 @@
// SPDX-License-Identifier: Apache-2.0
// Copyright (c) 2025 Matter Labs
//! TCB Info
use super::ApiClient; // Import from parent module
use crate::{
error::IntelApiError,
responses::TcbInfoResponse,
types::{ApiVersion, UpdateType},
};
impl ApiClient {
/// GET /sgx/certification/{v3,v4}/tcb
/// Retrieves SGX TCB information for a given FMSPC.
///
/// Returns the TCB Info JSON string (Appendix A of the API spec) together with the
/// Issuer Chain header. This function supports both API v3 and v4. The `update` and
/// `tcbEvaluationDataNumber` parameters are only supported by API v4; if both are
/// provided at the same time (for v4), a conflict error is returned.
///
/// # Arguments
///
/// * `fmspc` - Hex-encoded FMSPC value.
/// * `update` - Optional [`UpdateType`] for API v4.
/// * `tcb_evaluation_data_number` - Optional TCB Evaluation Data Number (v4 only).
///
/// # Returns
///
/// A [`TcbInfoResponse`] containing the TCB info JSON and the issuer chain.
///
/// # Errors
///
/// Returns an `IntelApiError` if the API request fails, if conflicting parameters are used,
/// or if the requested TCB data is not found.
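///
/// # Example
///
/// Illustrative usage; the FMSPC value is an example and a live Intel API
/// endpoint is required:
///
/// ```rust,no_run
/// use intel_dcap_api::{ApiClient, UpdateType};
///
/// # async fn example() -> Result<(), Box<dyn std::error::Error>> {
/// let client = ApiClient::new()?;
/// // `update` requires API v4 (the default).
/// let tcb = client.get_sgx_tcb_info("00606A000000", Some(UpdateType::Early), None).await?;
/// println!("Issuer chain: {}", tcb.issuer_chain);
/// # Ok(())
/// # }
/// ```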
pub async fn get_sgx_tcb_info(
&self,
fmspc: &str,
update: Option<UpdateType>,
tcb_evaluation_data_number: Option<u64>,
) -> Result<TcbInfoResponse, IntelApiError> {
// V3 does not support 'update' or 'tcbEvaluationDataNumber'
if self.api_version == ApiVersion::V3 && update.is_some() {
return Err(IntelApiError::UnsupportedApiVersion(
"'update' parameter requires API v4".to_string(),
));
}
if self.api_version == ApiVersion::V3 && tcb_evaluation_data_number.is_some() {
return Err(IntelApiError::UnsupportedApiVersion(
"'tcbEvaluationDataNumber' parameter requires API v4".to_string(),
));
}
if self.api_version == ApiVersion::V4
&& update.is_some()
&& tcb_evaluation_data_number.is_some()
{
return Err(IntelApiError::ConflictingParameters(
"'update' and 'tcbEvaluationDataNumber'",
));
}
let path = self.build_api_path("sgx", "", "tcb")?;
let mut url = self.base_url.join(&path)?;
url.query_pairs_mut().append_pair("fmspc", fmspc);
// Add V4-specific parameters
if self.api_version == ApiVersion::V4 {
if let Some(upd) = update {
url.query_pairs_mut()
.append_pair("update", &upd.to_string());
}
if let Some(tedn) = tcb_evaluation_data_number {
url.query_pairs_mut()
.append_pair("tcbEvaluationDataNumber", &tedn.to_string());
}
}
let request_builder = self.client.get(url);
// Special handling for 404/410 when tcbEvaluationDataNumber is specified (V4 only)
if self.api_version == ApiVersion::V4 {
if let Some(tedn_val) = tcb_evaluation_data_number {
// Use the helper function to check status before proceeding
self.check_tcb_evaluation_status(&request_builder, tedn_val, "SGX TCB Info")
.await?;
// If the check passes (doesn't return Err), continue to fetch_json_with_issuer_chain
}
}
// Fetch JSON and header (header name seems same for v3/v4)
let (tcb_info_json, issuer_chain) = self
.fetch_json_with_issuer_chain(
request_builder,
"TCB-Info-Issuer-Chain",
Some("SGX-TCB-Info-Issuer-Chain"),
)
.await?;
Ok(TcbInfoResponse {
tcb_info_json,
issuer_chain,
})
}
/// GET /tdx/certification/v4/tcb
/// Retrieves TDX TCB information for a given FMSPC (API v4 only).
///
/// # Arguments
///
/// * `fmspc` - Hex-encoded FMSPC value.
/// * `update` - An optional [`UpdateType`] (v4 only).
/// * `tcb_evaluation_data_number` - An optional TCB Evaluation Data Number (v4 only).
///
/// # Returns
///
/// A [`TcbInfoResponse`] containing TDX TCB info JSON and the issuer chain.
///
/// # Errors
///
/// Returns an `IntelApiError` if an unsupported API version is used,
/// if there are conflicting parameters, or if the TDX TCB data is not found.
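///
/// # Example
///
/// Illustrative usage; the FMSPC value is an example and a live Intel API
/// endpoint is required:
///
/// ```rust,no_run
/// use intel_dcap_api::ApiClient;
///
/// # async fn example() -> Result<(), Box<dyn std::error::Error>> {
/// let client = ApiClient::new()?;
/// let tcb = client.get_tdx_tcb_info("00806F050000", None, None).await?;
/// println!("TDX TCB info: {}", tcb.tcb_info_json);
/// # Ok(())
/// # }
/// ```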
pub async fn get_tdx_tcb_info(
&self,
fmspc: &str,
update: Option<UpdateType>,
tcb_evaluation_data_number: Option<u64>,
) -> Result<TcbInfoResponse, IntelApiError> {
// Ensure V4 API
self.ensure_v4_api("get_tdx_tcb_info")?;
// Check conflicting parameters (only relevant for V4, checked inside helper)
self.check_conflicting_update_params(update, tcb_evaluation_data_number)?;
let path = self.build_api_path("tdx", "", "tcb")?;
let mut url = self.base_url.join(&path)?;
url.query_pairs_mut().append_pair("fmspc", fmspc);
if let Some(upd) = update {
url.query_pairs_mut()
.append_pair("update", &upd.to_string());
}
if let Some(tedn) = tcb_evaluation_data_number {
url.query_pairs_mut()
.append_pair("tcbEvaluationDataNumber", &tedn.to_string());
}
let request_builder = self.client.get(url);
// Special handling for 404/410 when tcbEvaluationDataNumber is specified
if let Some(tedn_val) = tcb_evaluation_data_number {
// Use the helper function to check status before proceeding
self.check_tcb_evaluation_status(&request_builder, tedn_val, "TDX TCB Info")
.await?;
// If the check passes (doesn't return Err), continue to fetch_json_with_issuer_chain
}
// Fetch JSON and header (TDX only exists in V4)
let (tcb_info_json, issuer_chain) = self
.fetch_json_with_issuer_chain(request_builder, "TCB-Info-Issuer-Chain", None)
.await?;
Ok(TcbInfoResponse {
tcb_info_json,
issuer_chain,
})
}
}


@@ -0,0 +1,159 @@
// SPDX-License-Identifier: Apache-2.0
// Copyright (c) 2025 Matter Labs
use reqwest::{Response, StatusCode};
use thiserror::Error;
/// Represents all possible errors that can occur when interacting with Intel's DCAP API.
#[derive(Error, Debug)]
pub enum IntelApiError {
/// Indicates that the requested API version or feature is unsupported.
#[error("Unsupported API version or feature: {0}")]
UnsupportedApiVersion(String),
/// Wraps an underlying reqwest error.
#[error("Reqwest error: {0}")]
Reqwest(#[from] reqwest::Error),
/// Wraps a URL parsing error.
#[error("URL parsing error: {0}")]
UrlParse(#[from] url::ParseError),
/// Wraps a Serde JSON error.
#[error("Serde JSON error: {0}")]
JsonError(#[from] serde_json::Error),
/// Represents a general API error, capturing the HTTP status and optional error details.
#[error("API Error: Status={status}, Request-ID={request_id}, Code={error_code:?}, Message={error_message:?}")]
ApiError {
/// HTTP status code returned by the API.
status: StatusCode,
/// The unique request identifier for tracing errors.
request_id: String,
/// An optional server-provided error code.
error_code: Option<String>,
/// An optional server-provided error message.
error_message: Option<String>,
},
/// Indicates that a header is missing or invalid.
#[error("Header missing or invalid: {0}")]
MissingOrInvalidHeader(&'static str),
/// Represents an invalid subscription key.
#[error("Invalid Subscription Key format")]
InvalidSubscriptionKey,
/// Indicates that conflicting parameters were supplied.
#[error("Cannot provide conflicting parameters: {0}")]
ConflictingParameters(&'static str),
/// Wraps a standard I/O error.
#[error("I/O Error: {0}")]
Io(#[from] std::io::Error),
/// Represents an error while parsing a header's value.
#[error("Header value parse error for '{0}': {1}")]
HeaderValueParse(&'static str, String),
/// Indicates an invalid parameter was provided.
#[error("Invalid parameter value: {0}")]
InvalidParameter(&'static str),
/// Indicates that the API rate limit has been exceeded (HTTP 429).
///
/// This error is returned after the client has exhausted all automatic retry attempts
/// for a rate-limited request. The `retry_after` field contains the number of seconds
/// that was specified in the last Retry-After header. By default, the client automatically
/// retries rate-limited requests up to 3 times.
///
/// # Example
///
/// ```rust,no_run
/// use intel_dcap_api::{ApiClient, IntelApiError};
///
/// # async fn example() -> Result<(), Box<dyn std::error::Error>> {
/// let mut client = ApiClient::new()?;
/// client.set_max_retries(0); // Disable automatic retries
///
/// match client.get_sgx_tcb_info("00606A000000", None, None).await {
/// Ok(tcb_info) => println!("Success"),
/// Err(IntelApiError::TooManyRequests { request_id, retry_after }) => {
/// println!("Rate limited after all retries. Last retry-after was {} seconds.", retry_after);
/// }
/// Err(e) => eprintln!("Other error: {}", e),
/// }
/// # Ok(())
/// # }
/// ```
#[error("Too many requests. Retry after {retry_after} seconds")]
TooManyRequests {
/// The unique request identifier for tracing.
request_id: String,
/// Number of seconds to wait before retrying, from Retry-After header.
retry_after: u64,
},
}
/// Extracts common API error details from response headers.
pub(crate) fn extract_api_error_details(
response: &Response,
) -> (String, Option<String>, Option<String>) {
let request_id = response
.headers()
.get("Request-ID")
.and_then(|v| v.to_str().ok())
.unwrap_or("Unknown")
.to_string();
let error_code = response
.headers()
.get("Error-Code")
.and_then(|v| v.to_str().ok())
.map(String::from);
let error_message = response
.headers()
.get("Error-Message")
.and_then(|v| v.to_str().ok())
.map(String::from);
(request_id, error_code, error_message)
}
/// Checks the response status and returns an ApiError if it's not one of the expected statuses.
pub(crate) async fn check_status(
response: Response,
expected_statuses: &[StatusCode],
) -> Result<Response, IntelApiError> {
let status = response.status();
if expected_statuses.contains(&status) {
Ok(response)
} else if status == StatusCode::TOO_MANY_REQUESTS {
// Handle 429 Too Many Requests with Retry-After header
let request_id = response
.headers()
.get("Request-ID")
.and_then(|v| v.to_str().ok())
.unwrap_or("Unknown")
.to_string();
// Parse Retry-After header; only the delta-seconds form is handled here,
// HTTP-date values fall through to the default below
let retry_after = response
.headers()
.get("Retry-After")
.and_then(|v| v.to_str().ok())
.and_then(|v| v.parse::<u64>().ok())
.unwrap_or(60); // Default to 60 seconds if header is missing or invalid
Err(IntelApiError::TooManyRequests {
request_id,
retry_after,
})
} else {
let (request_id, error_code, error_message) = extract_api_error_details(&response);
Err(IntelApiError::ApiError {
status,
request_id,
error_code,
error_message,
})
}
}


@@ -0,0 +1,61 @@
// SPDX-License-Identifier: Apache-2.0
// Copyright (c) 2025 Matter Labs
//! Intel API Client
//!
//! This module provides an API client for interacting with the Intel API for Trusted Services.
//! The API follows the documentation found at [Intel API Documentation](https://api.portal.trustedservices.intel.com/content/documentation.html).
//!
//! Create an [`ApiClient`] to interface with the Intel API.
//!
//! # Rate Limiting
//!
//! The Intel API implements rate limiting and may return HTTP 429 (Too Many Requests) responses.
//! This client automatically handles rate limiting by retrying requests up to 3 times by default,
//! waiting for the duration specified in the `Retry-After` header. You can configure the retry
//! behavior using [`ApiClient::set_max_retries`]. If all retries are exhausted, the client
//! returns an [`IntelApiError::TooManyRequests`] error.
//!
//! # Example
//! ```rust,no_run
//! use intel_dcap_api::{ApiClient, IntelApiError, TcbInfoResponse};
//!
//! #[tokio::main]
//! async fn main() -> Result<(), IntelApiError> {
//! let client = ApiClient::new()?;
//!
//! // Example: Get SGX TCB Info
//! let fmspc_example = "00606A000000"; // Example FMSPC from docs
//! match client.get_sgx_tcb_info(fmspc_example, None, None).await {
//! Ok(TcbInfoResponse {
//! tcb_info_json,
//! issuer_chain,
//! }) => println!(
//! "SGX TCB Info for {}:\n{}\nIssuer Chain: {}",
//! fmspc_example, tcb_info_json, issuer_chain
//! ),
//! Err(e) => eprintln!("Error getting SGX TCB info: {}", e),
//! }
//!
//! Ok(())
//! }
//! ```
#![deny(missing_docs)]
#![deny(clippy::all)]
mod client;
mod error;
mod requests;
mod responses;
mod types;
// Re-export public items
pub use client::ApiClient;
pub use error::IntelApiError;
pub use responses::{
AddPackageResponse, EnclaveIdentityJson, EnclaveIdentityResponse, FmspcJsonResponse,
PckCertificateResponse, PckCertificatesResponse, PckCrlResponse, TcbEvaluationDataNumbersJson,
TcbEvaluationDataNumbersResponse, TcbInfoJson, TcbInfoResponse,
};
pub use types::{ApiVersion, CaType, CrlEncoding, PlatformFilter, UpdateType};


@@ -0,0 +1,28 @@
// SPDX-License-Identifier: Apache-2.0
// Copyright (c) 2025 Matter Labs
use serde::Serialize;
#[derive(Serialize)]
pub(crate) struct PckCertRequest<'a> {
#[serde(rename = "platformManifest")]
pub(crate) platform_manifest: &'a str,
pub(crate) cpusvn: &'a str,
pub(crate) pcesvn: &'a str,
pub(crate) pceid: &'a str,
}
#[derive(Serialize)]
pub(crate) struct PckCertsRequest<'a> {
#[serde(rename = "platformManifest")]
pub(crate) platform_manifest: &'a str,
pub(crate) pceid: &'a str,
}
#[derive(Serialize)]
pub(crate) struct PckCertsConfigRequest<'a> {
#[serde(rename = "platformManifest")]
pub(crate) platform_manifest: &'a str,
pub(crate) cpusvn: &'a str,
pub(crate) pceid: &'a str,
}


@@ -0,0 +1,111 @@
// SPDX-License-Identifier: Apache-2.0
// Copyright (c) 2025 Matter Labs
/// JSON structure as defined in Appendix A of the API spec.
/// Content may vary slightly between API v3 and v4.
pub type TcbInfoJson = String;
/// JSON structure as defined in Appendix B of the API spec.
/// Content may vary slightly between API v3 and v4.
pub type EnclaveIdentityJson = String;
/// JSON Array of {tcb, tcbm, cert}.
/// Content structure expected to be consistent between v3 and v4.
pub type PckCertsJsonResponse = String;
/// JSON Array of {fmspc, platform}.
/// Content structure expected to be consistent between v3 and v4.
pub type FmspcJsonResponse = String;
/// JSON structure as defined in Appendix C of the API spec (V4 ONLY).
pub type TcbEvaluationDataNumbersJson = String;
/// Response structure for a PCK (Provisioning Certification Key) Certificate.
///
/// Contains the PCK certificate, its issuer chain, TCB measurement, and FMSPC value.
#[derive(Debug, Clone)]
pub struct PckCertificateResponse {
/// PEM-encoded PCK certificate.
pub pck_cert_pem: String,
/// PEM-encoded certificate chain for the PCK certificate issuer.
/// Header name differs between v3 ("PCS-Certificate-Issuer-Chain") and v4 ("SGX-PCK-Certificate-Issuer-Chain").
pub issuer_chain: String,
/// TCBm value associated with the certificate (Hex-encoded).
pub tcbm: String,
/// FMSPC value associated with the certificate (Hex-encoded).
pub fmspc: String,
}
/// Response structure for multiple PCK (Provisioning Certification Key) Certificates.
///
/// Contains a JSON array of PCK certificates, their issuer chain, and the associated FMSPC value.
/// This struct represents the response for retrieving multiple PCK certificates from the Intel SGX API.
#[derive(Debug, Clone)]
pub struct PckCertificatesResponse {
/// JSON array containing PCK certificates and their associated TCB levels.
pub pck_certs_json: PckCertsJsonResponse, // String alias for now
/// PEM-encoded certificate chain for the PCK certificate issuer.
/// Header name differs between v3 ("PCS-Certificate-Issuer-Chain") and v4 ("SGX-PCK-Certificate-Issuer-Chain").
pub issuer_chain: String,
/// FMSPC value associated with the certificates (Hex-encoded).
pub fmspc: String,
}
/// Response structure for TCB (Trusted Computing Base) Information.
///
/// Contains the JSON representation of TCB information for a specific platform,
/// along with the certificate chain of the TCB Info signer.
#[derive(Debug, Clone)]
pub struct TcbInfoResponse {
/// JSON containing TCB information for a specific platform (FMSPC).
pub tcb_info_json: TcbInfoJson, // String alias for now
/// PEM-encoded certificate chain for the TCB Info signer.
/// Header name differs slightly between v3 ("SGX-TCB-Info-Issuer-Chain") and v4 ("TCB-Info-Issuer-Chain").
pub issuer_chain: String,
}
/// Response structure for Enclave Identity Information.
///
/// Contains the JSON representation of enclave identity details for QE, QvE, or QAE,
/// along with its issuer chain.
#[derive(Debug, Clone)]
pub struct EnclaveIdentityResponse {
/// JSON containing information about the QE, QvE, or QAE.
pub enclave_identity_json: EnclaveIdentityJson, // String alias for now
/// PEM-encoded certificate chain for the Enclave Identity signer.
/// Header name seems consistent ("SGX-Enclave-Identity-Issuer-Chain").
pub issuer_chain: String,
}
/// Response structure for TCB Evaluation Data Numbers (V4 ONLY).
///
/// Contains the JSON representation of supported TCB Evaluation Data Numbers
/// and its corresponding issuer chain.
#[derive(Debug, Clone)]
pub struct TcbEvaluationDataNumbersResponse {
/// JSON containing the list of supported TCB Evaluation Data Numbers (V4 ONLY).
pub tcb_evaluation_data_numbers_json: TcbEvaluationDataNumbersJson, // String alias for now
/// PEM-encoded certificate chain for the TCB Evaluation Data Numbers signer (V4 ONLY).
/// Header: "TCB-Evaluation-Data-Numbers-Issuer-Chain".
pub issuer_chain: String,
}
/// Response structure for a Provisioning Certification Key Certificate Revocation List (PCK CRL).
///
/// Contains the CRL data and its issuer chain for validating PCK certificates.
#[derive(Debug, Clone)]
pub struct PckCrlResponse {
/// CRL data (PEM or DER encoded).
pub crl_data: Vec<u8>,
/// PEM-encoded certificate chain for the CRL issuer.
/// Header name differs between v3 ("PCS-CRL-Issuer-Chain") and v4 ("SGX-PCK-CRL-Issuer-Chain").
pub issuer_chain: String,
}
/// Response structure for the request to add a package.
pub struct AddPackageResponse {
/// Platform Membership Certificates
pub pck_certs: Vec<u8>,
/// The certificate count extracted from the response header.
pub pck_cert_count: usize,
}


@@ -0,0 +1,122 @@
// SPDX-License-Identifier: Apache-2.0
// Copyright (c) 2025 Matter Labs
use std::fmt;
/// Represents the type of Certificate Authority (CA) for Intel Trusted Services.
///
/// This enum defines the different types of Certificate Authorities used in the Intel DCAP API,
/// specifically distinguishing between processor and platform CAs.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum CaType {
/// Represents a processor-specific Certificate Authority.
Processor,
/// Represents a platform-wide Certificate Authority.
Platform,
}
impl fmt::Display for CaType {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
match self {
CaType::Processor => write!(f, "processor"),
CaType::Platform => write!(f, "platform"),
}
}
}
/// Represents the encoding format for Certificate Revocation Lists (CRLs).
///
/// This enum defines the supported encoding formats for CRLs in the Intel DCAP API,
/// distinguishing between PEM (Privacy Enhanced Mail) and DER (Distinguished Encoding Rules) formats.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum CrlEncoding {
/// Represents the PEM (Privacy Enhanced Mail) encoding format.
Pem,
/// Represents the DER (Distinguished Encoding Rules) encoding format.
Der,
}
impl fmt::Display for CrlEncoding {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
match self {
CrlEncoding::Pem => write!(f, "pem"),
CrlEncoding::Der => write!(f, "der"),
}
}
}
/// Represents the type of update for Intel Trusted Services.
///
/// This enum defines different update types, distinguishing between early and standard updates
/// in the Intel DCAP (Data Center Attestation Primitives) API.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum UpdateType {
/// Represents early updates, typically used for preview or beta releases.
Early,
/// Represents standard updates, which are the regular release cycle.
Standard,
}
impl fmt::Display for UpdateType {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
match self {
UpdateType::Early => write!(f, "early"),
UpdateType::Standard => write!(f, "standard"),
}
}
}
/// Represents the platform filter options for Intel DCAP (Data Center Attestation Primitives) API.
///
/// This enum allows filtering platforms based on different criteria,
/// such as selecting all platforms, client-specific platforms, or specific Intel processor generations.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum PlatformFilter {
/// Represents a selection of all available platforms.
All,
/// Represents a selection of client-specific platforms.
Client,
/// Represents platforms with Intel E3 processors.
E3,
/// Represents platforms with Intel E5 processors.
E5,
}
impl fmt::Display for PlatformFilter {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
match self {
PlatformFilter::All => write!(f, "all"),
PlatformFilter::Client => write!(f, "client"),
PlatformFilter::E3 => write!(f, "E3"),
PlatformFilter::E5 => write!(f, "E5"),
}
}
}
/// Represents the version of the Intel Trusted Services API to target.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum ApiVersion {
/// Represents version 3 of the Intel Trusted Services API.
V3,
/// Represents version 4 of the Intel Trusted Services API.
V4,
}
impl ApiVersion {
/// Returns the string representation of the version for URL paths.
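    ///
    /// ```
    /// use intel_dcap_api::ApiVersion;
    ///
    /// assert_eq!(ApiVersion::V4.path_segment(), "v4");
    /// ```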
pub fn path_segment(&self) -> &'static str {
match self {
ApiVersion::V3 => "v3",
ApiVersion::V4 => "v4",
}
}
}
impl fmt::Display for ApiVersion {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
match self {
ApiVersion::V3 => write!(f, "v3"),
ApiVersion::V4 => write!(f, "v4"),
}
}
}


@@ -0,0 +1,312 @@
// SPDX-License-Identifier: Apache-2.0
// Copyright (c) 2025 Matter Labs
use intel_dcap_api::{ApiClient, ApiVersion, CaType, IntelApiError, UpdateType};
use mockito::Server;
use reqwest::Client;
// Create a test client without TLS requirements
async fn create_test_client(base_url: &str) -> ApiClient {
ApiClient::new_with_base_url(base_url).expect("Failed to create client")
}
#[tokio::test]
async fn test_simple_request() {
let mut server = Server::new_async().await;
// First, test with plain reqwest to ensure mock works
let _m = server
.mock("GET", "/test")
.with_status(200)
.with_body("test")
.create_async()
.await;
let client = Client::new();
let resp = client
.get(format!("{}/test", server.url()))
.send()
.await
.unwrap();
assert_eq!(resp.status(), 200);
assert_eq!(resp.text().await.unwrap(), "test");
}
#[tokio::test]
async fn test_tdx_tcb_minimal() {
let mut server = Server::new_async().await;
// Use minimal response
let _m = server
.mock("GET", "/tdx/certification/v4/tcb")
.match_query(mockito::Matcher::UrlEncoded(
"fmspc".into(),
"test123".into(),
))
.with_status(200)
.with_header("TCB-Info-Issuer-Chain", "test-cert")
.with_body("{}")
.create_async()
.await;
let client = create_test_client(&server.url()).await;
let result = client.get_tdx_tcb_info("test123", None, None).await;
match &result {
Ok(resp) => {
assert_eq!(resp.tcb_info_json, "{}");
assert_eq!(resp.issuer_chain, "test-cert");
}
Err(e) => {
eprintln!("Error: {:?}", e);
panic!("Request failed");
}
}
}
#[tokio::test]
async fn test_sgx_qe_identity_minimal() {
let mut server = Server::new_async().await;
let _m = server
.mock("GET", "/sgx/certification/v4/qe/identity")
.with_status(200)
.with_header("SGX-Enclave-Identity-Issuer-Chain", "test-cert")
.with_body("{}")
.create_async()
.await;
let client = create_test_client(&server.url()).await;
let result = client.get_sgx_qe_identity(None, None).await;
assert!(result.is_ok());
let resp = result.unwrap();
assert_eq!(resp.enclave_identity_json, "{}");
assert_eq!(resp.issuer_chain, "test-cert");
}
#[tokio::test]
async fn test_pck_crl_minimal() {
let mut server = Server::new_async().await;
let _m = server
.mock("GET", "/sgx/certification/v4/pckcrl")
.match_query(mockito::Matcher::UrlEncoded(
"ca".into(),
"processor".into(),
))
.with_status(200)
.with_header("SGX-PCK-CRL-Issuer-Chain", "test-cert")
.with_body("test-crl")
.create_async()
.await;
let client = create_test_client(&server.url()).await;
let result = client.get_pck_crl(CaType::Processor, None).await;
assert!(result.is_ok());
let resp = result.unwrap();
assert_eq!(String::from_utf8_lossy(&resp.crl_data), "test-crl");
assert_eq!(resp.issuer_chain, "test-cert");
}
#[tokio::test]
async fn test_error_handling() {
let mut server = Server::new_async().await;
let _m = server
.mock("GET", "/sgx/certification/v4/tcb")
.match_query(mockito::Matcher::UrlEncoded("fmspc".into(), "bad".into()))
.with_status(404)
.with_header("Request-ID", "test-123")
.create_async()
.await;
let client = create_test_client(&server.url()).await;
let result = client.get_sgx_tcb_info("bad", None, None).await;
assert!(result.is_err());
match result.unwrap_err() {
IntelApiError::ApiError {
status, request_id, ..
} => {
assert_eq!(status.as_u16(), 404);
assert_eq!(request_id, "test-123");
}
_ => panic!("Wrong error type"),
}
}
#[tokio::test]
async fn test_update_types() {
let mut server = Server::new_async().await;
// Test Early update type
let _m1 = server
.mock("GET", "/tdx/certification/v4/tcb")
.match_query(mockito::Matcher::AllOf(vec![
mockito::Matcher::UrlEncoded("fmspc".into(), "test".into()),
mockito::Matcher::UrlEncoded("update".into(), "early".into()),
]))
.with_status(200)
.with_header("TCB-Info-Issuer-Chain", "cert")
.with_body("{\"early\":true}")
.create_async()
.await;
let client = create_test_client(&server.url()).await;
let result = client
.get_tdx_tcb_info("test", Some(UpdateType::Early), None)
.await;
assert!(result.is_ok());
assert_eq!(result.unwrap().tcb_info_json, "{\"early\":true}");
// Test Standard update type
let _m2 = server
.mock("GET", "/tdx/certification/v4/tcb")
.match_query(mockito::Matcher::AllOf(vec![
mockito::Matcher::UrlEncoded("fmspc".into(), "test".into()),
mockito::Matcher::UrlEncoded("update".into(), "standard".into()),
]))
.with_status(200)
.with_header("TCB-Info-Issuer-Chain", "cert")
.with_body("{\"standard\":true}")
.create_async()
.await;
let result2 = client
.get_tdx_tcb_info("test", Some(UpdateType::Standard), None)
.await;
assert!(result2.is_ok());
assert_eq!(result2.unwrap().tcb_info_json, "{\"standard\":true}");
}
#[tokio::test]
async fn test_v3_api_headers() {
let mut server = Server::new_async().await;
// V3 uses different header names for CRL
let _m = server
.mock("GET", "/sgx/certification/v3/pckcrl")
.match_query(mockito::Matcher::UrlEncoded("ca".into(), "platform".into()))
.with_status(200)
.with_header("SGX-PCK-CRL-Issuer-Chain", "v3-cert")
.with_body("v3-crl-data")
.create_async()
.await;
let client = ApiClient::new_with_options(server.url(), ApiVersion::V3).unwrap();
let result = client.get_pck_crl(CaType::Platform, None).await;
assert!(result.is_ok());
let resp = result.unwrap();
assert_eq!(String::from_utf8_lossy(&resp.crl_data), "v3-crl-data");
assert_eq!(resp.issuer_chain, "v3-cert");
}
#[tokio::test]
async fn test_sgx_qve_identity() {
let mut server = Server::new_async().await;
let _m = server
.mock("GET", "/sgx/certification/v4/qve/identity")
.with_status(200)
.with_header("SGX-Enclave-Identity-Issuer-Chain", "qve-cert")
.with_body("{\"id\":\"QVE\"}")
.create_async()
.await;
let client = create_test_client(&server.url()).await;
let result = client.get_sgx_qve_identity(None, None).await;
assert!(result.is_ok());
let resp = result.unwrap();
assert_eq!(resp.enclave_identity_json, "{\"id\":\"QVE\"}");
assert_eq!(resp.issuer_chain, "qve-cert");
}
#[tokio::test]
async fn test_tdx_qe_identity() {
let mut server = Server::new_async().await;
let _m = server
.mock("GET", "/tdx/certification/v4/qe/identity")
.with_status(200)
.with_header("SGX-Enclave-Identity-Issuer-Chain", "tdx-qe-cert")
.with_body("{\"id\":\"TDX-QE\"}")
.create_async()
.await;
let client = create_test_client(&server.url()).await;
let result = client.get_tdx_qe_identity(None, None).await;
assert!(result.is_ok());
let resp = result.unwrap();
assert_eq!(resp.enclave_identity_json, "{\"id\":\"TDX-QE\"}");
assert_eq!(resp.issuer_chain, "tdx-qe-cert");
}
#[tokio::test]
async fn test_error_with_details() {
let mut server = Server::new_async().await;
let _m = server
.mock("GET", "/sgx/certification/v4/pckcert")
.match_query(mockito::Matcher::Any)
.with_status(400)
.with_header("Request-ID", "error-req-123")
.with_header("Error-Code", "InvalidParameter")
.with_header("Error-Message", "PPID format is invalid")
.create_async()
.await;
let client = create_test_client(&server.url()).await;
let result = client
.get_pck_certificate_by_ppid("bad", "bad", "bad", "bad", None, None)
.await;
assert!(result.is_err());
match result.unwrap_err() {
IntelApiError::ApiError {
status,
request_id,
error_code,
error_message,
} => {
assert_eq!(status.as_u16(), 400);
assert_eq!(request_id, "error-req-123");
assert_eq!(error_code.as_deref(), Some("InvalidParameter"));
assert_eq!(error_message.as_deref(), Some("PPID format is invalid"));
}
_ => panic!("Wrong error type"),
}
}
#[tokio::test]
async fn test_sgx_tcb_info() {
let mut server = Server::new_async().await;
let _m = server
.mock("GET", "/sgx/certification/v4/tcb")
.match_query(mockito::Matcher::UrlEncoded(
"fmspc".into(),
"00606A6A0000".into(),
))
.with_status(200)
.with_header("TCB-Info-Issuer-Chain", "sgx-tcb-cert")
.with_body("{\"tcbInfo\":{\"fmspc\":\"00606A6A0000\"}}")
.create_async()
.await;
let client = create_test_client(&server.url()).await;
let result = client.get_sgx_tcb_info("00606A6A0000", None, None).await;
assert!(result.is_ok());
let resp = result.unwrap();
assert_eq!(
resp.tcb_info_json,
"{\"tcbInfo\":{\"fmspc\":\"00606A6A0000\"}}"
);
assert_eq!(resp.issuer_chain, "sgx-tcb-cert");
}


@@ -0,0 +1,901 @@
// SPDX-License-Identifier: Apache-2.0
// Copyright (c) 2025 Matter Labs
use intel_dcap_api::{
ApiClient, ApiVersion, CaType, CrlEncoding, IntelApiError, PlatformFilter, UpdateType,
};
use mockito::Server;
use percent_encoding::{percent_encode, NON_ALPHANUMERIC};
use serde_json::Value;
// Include real test data
const TDX_TCB_INFO_DATA: &[u8] = include_bytes!("test_data/tdx_tcb_info.json");
const PCK_CRL_PROCESSOR_DATA: &[u8] = include_bytes!("test_data/pck_crl_processor.json");
const PCK_CRL_PLATFORM_DATA: &[u8] = include_bytes!("test_data/pck_crl_platform.json");
const SGX_QE_IDENTITY_DATA: &[u8] = include_bytes!("test_data/sgx_qe_identity.json");
const SGX_QVE_IDENTITY_DATA: &[u8] = include_bytes!("test_data/sgx_qve_identity.json");
const TDX_QE_IDENTITY_DATA: &[u8] = include_bytes!("test_data/tdx_qe_identity.json");
const SGX_TCB_INFO_ALT_DATA: &[u8] = include_bytes!("test_data/sgx_tcb_info_alt.json");
const SGX_QAE_IDENTITY_DATA: &[u8] = include_bytes!("test_data/sgx_qae_identity.json");
const FMSPCS_DATA: &[u8] = include_bytes!("test_data/fmspcs.json");
const SGX_TCB_EVAL_NUMS_DATA: &[u8] = include_bytes!("test_data/sgx_tcb_eval_nums.json");
const TDX_TCB_EVAL_NUMS_DATA: &[u8] = include_bytes!("test_data/tdx_tcb_eval_nums.json");
const PCK_CRL_PROCESSOR_DER_DATA: &[u8] = include_bytes!("test_data/pck_crl_processor_der.json");
const SGX_TCB_INFO_EARLY_DATA: &[u8] = include_bytes!("test_data/sgx_tcb_info_early.json");
const TDX_TCB_INFO_EVAL17_DATA: &[u8] = include_bytes!("test_data/tdx_tcb_info_eval17.json");
const FMSPCS_NO_FILTER_DATA: &[u8] = include_bytes!("test_data/fmspcs_no_filter.json");
// const FMSPCS_ALL_PLATFORMS_DATA: &[u8] = include_bytes!("test_data/fmspcs_all_platforms.json"); // Reserved for future use
const SGX_QE_IDENTITY_V3_DATA: &[u8] = include_bytes!("test_data/sgx_qe_identity_v3.json");
const SGX_TCB_INFO_V3_DATA: &[u8] = include_bytes!("test_data/sgx_tcb_info_v3.json");
const TDX_TCB_INFO_ALT_DATA: &[u8] = include_bytes!("test_data/tdx_tcb_info_00806F050000.json");
fn parse_test_data(data: &[u8]) -> Value {
serde_json::from_slice(data).expect("Failed to parse test data")
}
#[tokio::test]
async fn test_tdx_tcb_info_with_real_data() {
let mut server = Server::new_async().await;
let test_data = parse_test_data(TDX_TCB_INFO_DATA);
// URL encode the issuer chain header value
let issuer_chain = test_data["issuer_chain"].as_str().unwrap();
let encoded_issuer_chain =
percent_encode(issuer_chain.as_bytes(), NON_ALPHANUMERIC).to_string();
let _m = server
.mock("GET", "/tdx/certification/v4/tcb")
.match_query(mockito::Matcher::UrlEncoded(
"fmspc".into(),
"00806F050000".into(),
))
.with_status(200)
.with_header("Content-Type", "application/json")
.with_header("TCB-Info-Issuer-Chain", &encoded_issuer_chain)
.with_body(test_data["tcb_info_json"].as_str().unwrap())
.create_async()
.await;
let client = ApiClient::new_with_base_url(server.url()).unwrap();
let result = client.get_tdx_tcb_info("00806F050000", None, None).await;
if let Err(e) = &result {
eprintln!("Error: {:?}", e);
eprintln!("Server URL: {}", server.url());
}
assert!(result.is_ok());
let response = result.unwrap();
assert_eq!(
response.tcb_info_json,
test_data["tcb_info_json"].as_str().unwrap()
);
assert_eq!(
response.issuer_chain,
test_data["issuer_chain"].as_str().unwrap()
);
// Verify the JSON can be parsed
let tcb_info: Value = serde_json::from_str(&response.tcb_info_json).unwrap();
assert_eq!(tcb_info["tcbInfo"]["fmspc"], "00806F050000");
assert_eq!(tcb_info["tcbInfo"]["id"], "TDX");
}
#[tokio::test]
async fn test_sgx_qe_identity_with_real_data() {
let mut server = Server::new_async().await;
let test_data = parse_test_data(SGX_QE_IDENTITY_DATA);
let issuer_chain = test_data["issuer_chain"].as_str().unwrap();
let encoded_issuer_chain =
percent_encode(issuer_chain.as_bytes(), NON_ALPHANUMERIC).to_string();
let _m = server
.mock("GET", "/sgx/certification/v4/qe/identity")
.with_status(200)
.with_header("Content-Type", "application/json")
.with_header("SGX-Enclave-Identity-Issuer-Chain", &encoded_issuer_chain)
.with_body(test_data["enclave_identity_json"].as_str().unwrap())
.create_async()
.await;
let client = ApiClient::new_with_base_url(server.url()).unwrap();
let result = client.get_sgx_qe_identity(None, None).await;
assert!(result.is_ok());
let response = result.unwrap();
assert_eq!(
response.enclave_identity_json,
test_data["enclave_identity_json"].as_str().unwrap()
);
assert_eq!(
response.issuer_chain,
test_data["issuer_chain"].as_str().unwrap()
);
// Verify the JSON structure
let identity: Value = serde_json::from_str(&response.enclave_identity_json).unwrap();
assert_eq!(identity["enclaveIdentity"]["id"], "QE");
}
#[tokio::test]
async fn test_sgx_qve_identity_with_real_data() {
let mut server = Server::new_async().await;
let test_data = parse_test_data(SGX_QVE_IDENTITY_DATA);
let issuer_chain = test_data["issuer_chain"].as_str().unwrap();
let encoded_issuer_chain =
percent_encode(issuer_chain.as_bytes(), NON_ALPHANUMERIC).to_string();
let _m = server
.mock("GET", "/sgx/certification/v4/qve/identity")
.with_status(200)
.with_header("Content-Type", "application/json")
.with_header("SGX-Enclave-Identity-Issuer-Chain", &encoded_issuer_chain)
.with_body(test_data["enclave_identity_json"].as_str().unwrap())
.create_async()
.await;
let client = ApiClient::new_with_base_url(server.url()).unwrap();
let result = client.get_sgx_qve_identity(None, None).await;
assert!(result.is_ok());
let response = result.unwrap();
// Verify the JSON structure
let identity: Value = serde_json::from_str(&response.enclave_identity_json).unwrap();
assert_eq!(identity["enclaveIdentity"]["id"], "QVE");
}
#[tokio::test]
async fn test_tdx_qe_identity_with_real_data() {
let mut server = Server::new_async().await;
let test_data = parse_test_data(TDX_QE_IDENTITY_DATA);
let issuer_chain = test_data["issuer_chain"].as_str().unwrap();
let encoded_issuer_chain =
percent_encode(issuer_chain.as_bytes(), NON_ALPHANUMERIC).to_string();
let _m = server
.mock("GET", "/tdx/certification/v4/qe/identity")
.with_status(200)
.with_header("Content-Type", "application/json")
.with_header("SGX-Enclave-Identity-Issuer-Chain", &encoded_issuer_chain)
.with_body(test_data["enclave_identity_json"].as_str().unwrap())
.create_async()
.await;
let client = ApiClient::new_with_base_url(server.url()).unwrap();
let result = client.get_tdx_qe_identity(None, None).await;
assert!(result.is_ok());
let response = result.unwrap();
// Verify the JSON structure
let identity: Value = serde_json::from_str(&response.enclave_identity_json).unwrap();
assert_eq!(identity["enclaveIdentity"]["id"], "TD_QE");
}
#[tokio::test]
async fn test_pck_crl_processor_with_real_data() {
let mut server = Server::new_async().await;
let test_data = parse_test_data(PCK_CRL_PROCESSOR_DATA);
let issuer_chain = test_data["issuer_chain"].as_str().unwrap();
let encoded_issuer_chain =
percent_encode(issuer_chain.as_bytes(), NON_ALPHANUMERIC).to_string();
let _m = server
.mock("GET", "/sgx/certification/v4/pckcrl")
.match_query(mockito::Matcher::UrlEncoded(
"ca".into(),
"processor".into(),
))
.with_status(200)
.with_header("SGX-PCK-CRL-Issuer-Chain", &encoded_issuer_chain)
.with_body(test_data["crl_data"].as_str().unwrap())
.create_async()
.await;
let client = ApiClient::new_with_base_url(server.url()).unwrap();
let result = client.get_pck_crl(CaType::Processor, None).await;
assert!(result.is_ok());
let response = result.unwrap();
assert_eq!(
String::from_utf8_lossy(&response.crl_data),
test_data["crl_data"].as_str().unwrap()
);
assert_eq!(
response.issuer_chain,
test_data["issuer_chain"].as_str().unwrap()
);
// Verify it's a valid CRL format
let crl_str = String::from_utf8_lossy(&response.crl_data);
assert!(crl_str.contains("BEGIN X509 CRL"));
assert!(crl_str.contains("END X509 CRL"));
}
#[tokio::test]
async fn test_pck_crl_platform_with_real_data() {
let mut server = Server::new_async().await;
let test_data = parse_test_data(PCK_CRL_PLATFORM_DATA);
let issuer_chain = test_data["issuer_chain"].as_str().unwrap();
let encoded_issuer_chain =
percent_encode(issuer_chain.as_bytes(), NON_ALPHANUMERIC).to_string();
let _m = server
.mock("GET", "/sgx/certification/v4/pckcrl")
.match_query(mockito::Matcher::UrlEncoded("ca".into(), "platform".into()))
.with_status(200)
.with_header("SGX-PCK-CRL-Issuer-Chain", &encoded_issuer_chain)
.with_body(test_data["crl_data"].as_str().unwrap())
.create_async()
.await;
let client = ApiClient::new_with_base_url(server.url()).unwrap();
let result = client.get_pck_crl(CaType::Platform, None).await;
assert!(result.is_ok());
let response = result.unwrap();
// Verify issuer chain contains multiple certificates
assert!(response.issuer_chain.contains("BEGIN CERTIFICATE"));
assert!(response.issuer_chain.contains("END CERTIFICATE"));
}
#[tokio::test]
async fn test_sgx_tcb_info_alt_with_real_data() {
let mut server = Server::new_async().await;
let test_data = parse_test_data(SGX_TCB_INFO_ALT_DATA);
let issuer_chain = test_data["issuer_chain"].as_str().unwrap();
let encoded_issuer_chain =
percent_encode(issuer_chain.as_bytes(), NON_ALPHANUMERIC).to_string();
let _m = server
.mock("GET", "/sgx/certification/v4/tcb")
.match_query(mockito::Matcher::UrlEncoded(
"fmspc".into(),
"00906ED50000".into(),
))
.with_status(200)
.with_header("Content-Type", "application/json")
.with_header("TCB-Info-Issuer-Chain", &encoded_issuer_chain)
.with_body(test_data["tcb_info_json"].as_str().unwrap())
.create_async()
.await;
let client = ApiClient::new_with_base_url(server.url()).unwrap();
let result = client.get_sgx_tcb_info("00906ED50000", None, None).await;
assert!(result.is_ok());
let response = result.unwrap();
// Verify the JSON structure
let tcb_info: Value = serde_json::from_str(&response.tcb_info_json).unwrap();
assert_eq!(tcb_info["tcbInfo"]["fmspc"], "00906ED50000");
assert_eq!(tcb_info["tcbInfo"]["id"], "SGX");
}
#[tokio::test]
async fn test_tdx_tcb_with_update_type() {
let mut server = Server::new_async().await;
let test_data = parse_test_data(TDX_TCB_INFO_DATA);
// Test with Early update type
let issuer_chain = test_data["issuer_chain"].as_str().unwrap();
let encoded_issuer_chain =
percent_encode(issuer_chain.as_bytes(), NON_ALPHANUMERIC).to_string();
let _m1 = server
.mock("GET", "/tdx/certification/v4/tcb")
.match_query(mockito::Matcher::AllOf(vec![
mockito::Matcher::UrlEncoded("fmspc".into(), "00806F050000".into()),
mockito::Matcher::UrlEncoded("update".into(), "early".into()),
]))
.with_status(200)
.with_header("TCB-Info-Issuer-Chain", &encoded_issuer_chain)
.with_body(test_data["tcb_info_json"].as_str().unwrap())
.create_async()
.await;
let client = ApiClient::new_with_base_url(server.url()).unwrap();
let result = client
.get_tdx_tcb_info("00806F050000", Some(UpdateType::Early), None)
.await;
assert!(result.is_ok());
}
#[tokio::test]
async fn test_error_handling_with_intel_headers() {
let mut server = Server::new_async().await;
// Real error response from Intel API
let _m = server
.mock("GET", "/sgx/certification/v4/tcb")
.match_query(mockito::Matcher::UrlEncoded(
"fmspc".into(),
"invalid".into(),
))
.with_status(404)
.with_header("Request-ID", "abc123def456")
.create_async()
.await;
let client = ApiClient::new_with_base_url(server.url()).unwrap();
let result = client.get_sgx_tcb_info("invalid", None, None).await;
assert!(result.is_err());
match result.unwrap_err() {
IntelApiError::ApiError {
status, request_id, ..
} => {
assert_eq!(status.as_u16(), 404);
assert_eq!(request_id, "abc123def456");
}
_ => panic!("Expected ApiError"),
}
}
#[tokio::test]
async fn test_v3_api_with_real_data() {
let mut server = Server::new_async().await;
let test_data = parse_test_data(PCK_CRL_PROCESSOR_DATA);
// V3 uses different header names
let issuer_chain = test_data["issuer_chain"].as_str().unwrap();
let encoded_issuer_chain =
percent_encode(issuer_chain.as_bytes(), NON_ALPHANUMERIC).to_string();
let _m = server
.mock("GET", "/sgx/certification/v3/pckcrl")
.match_query(mockito::Matcher::UrlEncoded(
"ca".into(),
"processor".into(),
))
.with_status(200)
.with_header("SGX-PCK-CRL-Issuer-Chain", &encoded_issuer_chain)
.with_body(test_data["crl_data"].as_str().unwrap())
.create_async()
.await;
let client = ApiClient::new_with_options(server.url(), ApiVersion::V3).unwrap();
let result = client.get_pck_crl(CaType::Processor, None).await;
assert!(result.is_ok());
let response = result.unwrap();
assert_eq!(
String::from_utf8_lossy(&response.crl_data),
test_data["crl_data"].as_str().unwrap()
);
}
#[tokio::test]
async fn test_sgx_qae_identity_with_real_data() {
let mut server = Server::new_async().await;
let test_data = parse_test_data(SGX_QAE_IDENTITY_DATA);
let issuer_chain = test_data["issuer_chain"].as_str().unwrap();
let encoded_issuer_chain =
percent_encode(issuer_chain.as_bytes(), NON_ALPHANUMERIC).to_string();
let _m = server
.mock("GET", "/sgx/certification/v4/qae/identity")
.with_status(200)
.with_header("Content-Type", "application/json")
.with_header("SGX-Enclave-Identity-Issuer-Chain", &encoded_issuer_chain)
.with_body(test_data["enclave_identity_json"].as_str().unwrap())
.create_async()
.await;
let client = ApiClient::new_with_base_url(server.url()).unwrap();
let result = client.get_sgx_qae_identity(None, None).await;
assert!(result.is_ok());
let response = result.unwrap();
assert_eq!(
response.enclave_identity_json,
test_data["enclave_identity_json"].as_str().unwrap()
);
assert_eq!(
response.issuer_chain,
test_data["issuer_chain"].as_str().unwrap()
);
// Verify the JSON structure
let identity: Value = serde_json::from_str(&response.enclave_identity_json).unwrap();
assert_eq!(identity["enclaveIdentity"]["id"], "QAE");
}
#[tokio::test]
async fn test_get_fmspcs_with_real_data() {
let mut server = Server::new_async().await;
let test_data = parse_test_data(FMSPCS_DATA);
let _m = server
.mock("GET", "/sgx/certification/v4/fmspcs")
.match_query(mockito::Matcher::UrlEncoded(
"platform".into(),
"all".into(),
))
.with_status(200)
.with_header("Content-Type", "application/json")
.with_body(test_data["fmspcs_json"].as_str().unwrap())
.create_async()
.await;
let client = ApiClient::new_with_base_url(server.url()).unwrap();
let result = client.get_fmspcs(Some(PlatformFilter::All)).await;
assert!(result.is_ok());
let fmspcs_json = result.unwrap();
assert_eq!(fmspcs_json, test_data["fmspcs_json"].as_str().unwrap());
// Verify the JSON structure
let fmspcs: Value = serde_json::from_str(&fmspcs_json).unwrap();
assert!(fmspcs.is_array());
assert!(!fmspcs.as_array().unwrap().is_empty());
}
#[tokio::test]
async fn test_sgx_tcb_evaluation_data_numbers_with_real_data() {
let mut server = Server::new_async().await;
let test_data = parse_test_data(SGX_TCB_EVAL_NUMS_DATA);
let issuer_chain = test_data["issuer_chain"].as_str().unwrap();
let encoded_issuer_chain =
percent_encode(issuer_chain.as_bytes(), NON_ALPHANUMERIC).to_string();
let _m = server
.mock("GET", "/sgx/certification/v4/tcbevaluationdatanumbers")
.with_status(200)
.with_header("Content-Type", "application/json")
.with_header(
"TCB-Evaluation-Data-Numbers-Issuer-Chain",
&encoded_issuer_chain,
)
.with_body(
test_data["tcb_evaluation_data_numbers_json"]
.as_str()
.unwrap(),
)
.create_async()
.await;
let client = ApiClient::new_with_base_url(server.url()).unwrap();
let result = client.get_sgx_tcb_evaluation_data_numbers().await;
assert!(result.is_ok());
let response = result.unwrap();
assert_eq!(
response.tcb_evaluation_data_numbers_json,
test_data["tcb_evaluation_data_numbers_json"]
.as_str()
.unwrap()
);
assert_eq!(
response.issuer_chain,
test_data["issuer_chain"].as_str().unwrap()
);
// Verify the JSON structure
let eval_nums: Value =
serde_json::from_str(&response.tcb_evaluation_data_numbers_json).unwrap();
assert!(eval_nums.is_object());
}
#[tokio::test]
async fn test_tdx_tcb_evaluation_data_numbers_with_real_data() {
let mut server = Server::new_async().await;
let test_data = parse_test_data(TDX_TCB_EVAL_NUMS_DATA);
let issuer_chain = test_data["issuer_chain"].as_str().unwrap();
let encoded_issuer_chain =
percent_encode(issuer_chain.as_bytes(), NON_ALPHANUMERIC).to_string();
let _m = server
.mock("GET", "/tdx/certification/v4/tcbevaluationdatanumbers")
.with_status(200)
.with_header("Content-Type", "application/json")
.with_header(
"TCB-Evaluation-Data-Numbers-Issuer-Chain",
&encoded_issuer_chain,
)
.with_body(
test_data["tcb_evaluation_data_numbers_json"]
.as_str()
.unwrap(),
)
.create_async()
.await;
let client = ApiClient::new_with_base_url(server.url()).unwrap();
let result = client.get_tdx_tcb_evaluation_data_numbers().await;
assert!(result.is_ok());
let response = result.unwrap();
assert_eq!(
response.tcb_evaluation_data_numbers_json,
test_data["tcb_evaluation_data_numbers_json"]
.as_str()
.unwrap()
);
assert_eq!(
response.issuer_chain,
test_data["issuer_chain"].as_str().unwrap()
);
}
#[tokio::test]
async fn test_pck_crl_der_encoding_with_real_data() {
let mut server = Server::new_async().await;
let test_data = parse_test_data(PCK_CRL_PROCESSOR_DER_DATA);
let issuer_chain = test_data["issuer_chain"].as_str().unwrap();
let encoded_issuer_chain =
percent_encode(issuer_chain.as_bytes(), NON_ALPHANUMERIC).to_string();
// The DER data is stored as base64 in our test data
let crl_base64 = test_data["crl_data_base64"].as_str().unwrap();
use base64::{engine::general_purpose, Engine as _};
let crl_der = general_purpose::STANDARD.decode(crl_base64).unwrap();
let _m = server
.mock("GET", "/sgx/certification/v4/pckcrl")
.match_query(mockito::Matcher::AllOf(vec![
mockito::Matcher::UrlEncoded("ca".into(), "processor".into()),
mockito::Matcher::UrlEncoded("encoding".into(), "der".into()),
]))
.with_status(200)
.with_header("SGX-PCK-CRL-Issuer-Chain", &encoded_issuer_chain)
.with_body(crl_der)
.create_async()
.await;
let client = ApiClient::new_with_base_url(server.url()).unwrap();
let result = client
.get_pck_crl(CaType::Processor, Some(CrlEncoding::Der))
.await;
assert!(result.is_ok());
let response = result.unwrap();
// Verify the response data matches
let response_base64 = general_purpose::STANDARD.encode(&response.crl_data);
assert_eq!(response_base64, crl_base64);
assert_eq!(
response.issuer_chain,
test_data["issuer_chain"].as_str().unwrap()
);
}
#[tokio::test]
async fn test_sgx_tcb_info_early_update_with_real_data() {
let mut server = Server::new_async().await;
let test_data = parse_test_data(SGX_TCB_INFO_EARLY_DATA);
let issuer_chain = test_data["issuer_chain"].as_str().unwrap();
let encoded_issuer_chain =
percent_encode(issuer_chain.as_bytes(), NON_ALPHANUMERIC).to_string();
let _m = server
.mock("GET", "/sgx/certification/v4/tcb")
.match_query(mockito::Matcher::AllOf(vec![
mockito::Matcher::UrlEncoded("fmspc".into(), "00906ED50000".into()),
mockito::Matcher::UrlEncoded("update".into(), "early".into()),
]))
.with_status(200)
.with_header("Content-Type", "application/json")
.with_header("TCB-Info-Issuer-Chain", &encoded_issuer_chain)
.with_body(test_data["tcb_info_json"].as_str().unwrap())
.create_async()
.await;
let client = ApiClient::new_with_base_url(server.url()).unwrap();
let result = client
.get_sgx_tcb_info("00906ED50000", Some(UpdateType::Early), None)
.await;
assert!(result.is_ok());
let response = result.unwrap();
assert_eq!(
response.tcb_info_json,
test_data["tcb_info_json"].as_str().unwrap()
);
// Verify the JSON structure
let tcb_info: Value = serde_json::from_str(&response.tcb_info_json).unwrap();
assert_eq!(tcb_info["tcbInfo"]["fmspc"], "00906ED50000");
}
#[tokio::test]
async fn test_tdx_tcb_info_with_eval_number_with_real_data() {
let mut server = Server::new_async().await;
let test_data = parse_test_data(TDX_TCB_INFO_EVAL17_DATA);
let issuer_chain = test_data["issuer_chain"].as_str().unwrap();
let encoded_issuer_chain =
percent_encode(issuer_chain.as_bytes(), NON_ALPHANUMERIC).to_string();
let _m = server
.mock("GET", "/tdx/certification/v4/tcb")
.match_query(mockito::Matcher::AllOf(vec![
mockito::Matcher::UrlEncoded("fmspc".into(), "00806F050000".into()),
mockito::Matcher::UrlEncoded("tcbEvaluationDataNumber".into(), "17".into()),
]))
.with_status(200)
.with_header("Content-Type", "application/json")
.with_header("TCB-Info-Issuer-Chain", &encoded_issuer_chain)
.with_body(test_data["tcb_info_json"].as_str().unwrap())
.create_async()
.await;
let client = ApiClient::new_with_base_url(server.url()).unwrap();
let result = client
.get_tdx_tcb_info("00806F050000", None, Some(17))
.await;
assert!(result.is_ok());
let response = result.unwrap();
// Verify the response
let tcb_info: Value = serde_json::from_str(&response.tcb_info_json).unwrap();
assert_eq!(tcb_info["tcbInfo"]["fmspc"], "00806F050000");
assert_eq!(tcb_info["tcbInfo"]["id"], "TDX");
}
#[tokio::test]
async fn test_get_fmspcs_v3_should_fail() {
let server = Server::new_async().await;
// FMSPCs is V4 only
let client = ApiClient::new_with_options(server.url(), ApiVersion::V3).unwrap();
let result = client.get_fmspcs(None).await;
assert!(result.is_err());
match result.unwrap_err() {
IntelApiError::UnsupportedApiVersion(msg) => {
assert!(msg.contains("API v4 only"));
}
_ => panic!("Expected UnsupportedApiVersion error"),
}
}
#[tokio::test]
async fn test_tcb_evaluation_data_numbers_v3_should_fail() {
let server = Server::new_async().await;
// TCB evaluation data numbers is V4 only
let client = ApiClient::new_with_options(server.url(), ApiVersion::V3).unwrap();
let sgx_result = client.get_sgx_tcb_evaluation_data_numbers().await;
assert!(sgx_result.is_err());
match sgx_result.unwrap_err() {
IntelApiError::UnsupportedApiVersion(msg) => {
assert!(msg.contains("requires API v4"));
}
_ => panic!("Expected UnsupportedApiVersion error"),
}
let tdx_result = client.get_tdx_tcb_evaluation_data_numbers().await;
assert!(tdx_result.is_err());
match tdx_result.unwrap_err() {
IntelApiError::UnsupportedApiVersion(msg) => {
assert!(msg.contains("requires API v4"));
}
_ => panic!("Expected UnsupportedApiVersion error"),
}
}
#[tokio::test]
async fn test_get_fmspcs_no_filter_with_real_data() {
let mut server = Server::new_async().await;
let test_data = parse_test_data(FMSPCS_NO_FILTER_DATA);
let _m = server
.mock("GET", "/sgx/certification/v4/fmspcs")
.with_status(200)
.with_header("Content-Type", "application/json")
.with_body(test_data["fmspcs_json"].as_str().unwrap())
.create_async()
.await;
let client = ApiClient::new_with_base_url(server.url()).unwrap();
let result = client.get_fmspcs(None).await;
assert!(result.is_ok());
let fmspcs_json = result.unwrap();
assert_eq!(fmspcs_json, test_data["fmspcs_json"].as_str().unwrap());
}
#[tokio::test]
async fn test_sgx_qe_identity_v3_with_real_data() {
let mut server = Server::new_async().await;
let test_data = parse_test_data(SGX_QE_IDENTITY_V3_DATA);
let issuer_chain = test_data["issuer_chain"].as_str().unwrap();
let encoded_issuer_chain =
percent_encode(issuer_chain.as_bytes(), NON_ALPHANUMERIC).to_string();
// V3 uses different header names
let _m = server
.mock("GET", "/sgx/certification/v3/qe/identity")
.with_status(200)
.with_header("Content-Type", "application/json")
.with_header("SGX-Enclave-Identity-Issuer-Chain", &encoded_issuer_chain)
.with_body(test_data["enclave_identity_json"].as_str().unwrap())
.create_async()
.await;
let client = ApiClient::new_with_options(server.url(), ApiVersion::V3).unwrap();
let result = client.get_sgx_qe_identity(None, None).await;
if let Err(e) = &result {
eprintln!("Error in V3 test: {:?}", e);
}
assert!(result.is_ok());
let response = result.unwrap();
assert_eq!(
response.enclave_identity_json,
test_data["enclave_identity_json"].as_str().unwrap()
);
assert_eq!(
response.issuer_chain,
test_data["issuer_chain"].as_str().unwrap()
);
}
#[tokio::test]
async fn test_sgx_tcb_info_v3_with_real_data() {
let mut server = Server::new_async().await;
let test_data = parse_test_data(SGX_TCB_INFO_V3_DATA);
let issuer_chain = test_data["issuer_chain"].as_str().unwrap();
let encoded_issuer_chain =
percent_encode(issuer_chain.as_bytes(), NON_ALPHANUMERIC).to_string();
// V3 uses different header names
let _m = server
.mock("GET", "/sgx/certification/v3/tcb")
.match_query(mockito::Matcher::UrlEncoded(
"fmspc".into(),
"00906ED50000".into(),
))
.with_status(200)
.with_header("Content-Type", "application/json")
.with_header("SGX-TCB-Info-Issuer-Chain", &encoded_issuer_chain)
.with_body(test_data["tcb_info_json"].as_str().unwrap())
.create_async()
.await;
let client = ApiClient::new_with_options(server.url(), ApiVersion::V3).unwrap();
let result = client.get_sgx_tcb_info("00906ED50000", None, None).await;
assert!(result.is_ok());
let response = result.unwrap();
assert_eq!(
response.tcb_info_json,
test_data["tcb_info_json"].as_str().unwrap()
);
// Verify the JSON structure
let tcb_info: Value = serde_json::from_str(&response.tcb_info_json).unwrap();
assert_eq!(tcb_info["tcbInfo"]["fmspc"], "00906ED50000");
}
#[tokio::test]
async fn test_tdx_tcb_info_alternate_fmspc_with_real_data() {
let mut server = Server::new_async().await;
let test_data = parse_test_data(TDX_TCB_INFO_ALT_DATA);
let issuer_chain = test_data["issuer_chain"].as_str().unwrap();
let encoded_issuer_chain =
percent_encode(issuer_chain.as_bytes(), NON_ALPHANUMERIC).to_string();
let _m = server
.mock("GET", "/tdx/certification/v4/tcb")
.match_query(mockito::Matcher::UrlEncoded(
"fmspc".into(),
"00806F050000".into(),
))
.with_status(200)
.with_header("Content-Type", "application/json")
.with_header("TCB-Info-Issuer-Chain", &encoded_issuer_chain)
.with_body(test_data["tcb_info_json"].as_str().unwrap())
.create_async()
.await;
let client = ApiClient::new_with_base_url(server.url()).unwrap();
let result = client.get_tdx_tcb_info("00806F050000", None, None).await;
assert!(result.is_ok());
let response = result.unwrap();
// Verify we got the same data as the first TDX TCB info test
let tcb_info: Value = serde_json::from_str(&response.tcb_info_json).unwrap();
assert_eq!(tcb_info["tcbInfo"]["fmspc"], "00806F050000");
assert_eq!(tcb_info["tcbInfo"]["id"], "TDX");
}
#[tokio::test]
async fn test_platform_filter_combinations() {
let mut server = Server::new_async().await;
// Test with different platform filters
let filters = vec![
(Some(PlatformFilter::All), "all"),
(Some(PlatformFilter::Client), "client"),
(Some(PlatformFilter::E3), "E3"),
(Some(PlatformFilter::E5), "E5"),
(None, ""),
];
for (filter, query_value) in filters {
let mock_response = r#"[{"fmspc": "00906ED50000", "platform": "SGX"}]"#;
let mut mock = server.mock("GET", "/sgx/certification/v4/fmspcs");
if !query_value.is_empty() {
mock = mock.match_query(mockito::Matcher::UrlEncoded(
"platform".into(),
query_value.into(),
));
}
let _m = mock
.with_status(200)
.with_header("Content-Type", "application/json")
.with_body(mock_response)
.create_async()
.await;
let client = ApiClient::new_with_base_url(server.url()).unwrap();
let result = client.get_fmspcs(filter).await;
assert!(result.is_ok());
let response = result.unwrap();
assert!(response.contains("00906ED50000"));
}
}
#[tokio::test]
async fn test_error_scenarios() {
let mut server = Server::new_async().await;
// Test 404 with Error headers
let _m = server
.mock("GET", "/sgx/certification/v4/tcb")
.match_query(mockito::Matcher::UrlEncoded(
"fmspc".into(),
"invalid".into(),
))
.with_status(404)
.with_header("Request-ID", "test123")
.with_header("Error-Code", "InvalidParameter")
.with_header("Error-Message", "Invalid FMSPC format")
.create_async()
.await;
let client = ApiClient::new_with_base_url(server.url()).unwrap();
let result = client.get_sgx_tcb_info("invalid", None, None).await;
assert!(result.is_err());
match result.unwrap_err() {
IntelApiError::ApiError {
status,
request_id,
error_code,
error_message,
} => {
assert_eq!(status.as_u16(), 404);
assert_eq!(request_id, "test123");
assert_eq!(error_code.as_deref(), Some("InvalidParameter"));
assert_eq!(error_message.as_deref(), Some("Invalid FMSPC format"));
}
_ => panic!("Expected ApiError"),
}
}


@ -0,0 +1,3 @@
{
"fmspcs_json": "[{\"fmspc\":\"00A06D080000\",\"platform\":\"E5\"},{\"fmspc\":\"10A06D070000\",\"platform\":\"E5\"},{\"fmspc\":\"70A06D070000\",\"platform\":\"E5\"},{\"fmspc\":\"00A06E050000\",\"platform\":\"E5\"},{\"fmspc\":\"00906EA50000\",\"platform\":\"client\"},{\"fmspc\":\"20606C040000\",\"platform\":\"E5\"},{\"fmspc\":\"50806F000000\",\"platform\":\"E5\"},{\"fmspc\":\"00A067110000\",\"platform\":\"E3\"},{\"fmspc\":\"00606C040000\",\"platform\":\"E5\"},{\"fmspc\":\"00706E470000\",\"platform\":\"client\"},{\"fmspc\":\"00806EA60000\",\"platform\":\"client\"},{\"fmspc\":\"00706A800000\",\"platform\":\"client\"},{\"fmspc\":\"00706A100000\",\"platform\":\"client\"},{\"fmspc\":\"F0806F000000\",\"platform\":\"E5\"},{\"fmspc\":\"00806EB70000\",\"platform\":\"client\"},{\"fmspc\":\"00906EC50000\",\"platform\":\"client\"},{\"fmspc\":\"90806F000000\",\"platform\":\"E5\"},{\"fmspc\":\"10A06F010000\",\"platform\":\"E5\"},{\"fmspc\":\"00906EC10000\",\"platform\":\"client\"},{\"fmspc\":\"B0C06F000000\",\"platform\":\"E5\"},{\"fmspc\":\"20A06F000000\",\"platform\":\"E5\"},{\"fmspc\":\"00906ED50000\",\"platform\":\"E3\"},{\"fmspc\":\"40A06F000000\",\"platform\":\"E5\"},{\"fmspc\":\"60A06F000000\",\"platform\":\"E5\"},{\"fmspc\":\"D0806F000000\",\"platform\":\"E5\"},{\"fmspc\":\"00A065510000\",\"platform\":\"client\"},{\"fmspc\":\"10A06D080000\",\"platform\":\"E5\"},{\"fmspc\":\"30606A000000\",\"platform\":\"E5\"},{\"fmspc\":\"20806EB70000\",\"platform\":\"client\"},{\"fmspc\":\"00906EA10000\",\"platform\":\"E3\"},{\"fmspc\":\"30806F040000\",\"platform\":\"E5\"},{\"fmspc\":\"C0806F000000\",\"platform\":\"E5\"},{\"fmspc\":\"30A06D050000\",\"platform\":\"E5\"},{\"fmspc\":\"60C06F000000\",\"platform\":\"E5\"},{\"fmspc\":\"20A06D080000\",\"platform\":\"E5\"},{\"fmspc\":\"10A06D000000\",\"platform\":\"E5\"},{\"fmspc\":\"00806F050000\",\"platform\":\"E5\"},{\"fmspc\":\"60A06D070000\",\"platform\":\"E5\"},{\"fmspc\":\"20906EC10000\",\"platform\":\"client\"},{\"fmspc\":\"90C06F000000\",
\"platform\":\"E5\"},{\"fmspc\":\"80C06F000000\",\"platform\":\"E5\"},{\"fmspc\":\"00806F000000\",\"platform\":\"E5\"},{\"fmspc\":\"00906EB10000\",\"platform\":\"client\"},{\"fmspc\":\"00606A000000\",\"platform\":\"E5\"}]"
}


@ -0,0 +1,3 @@
{
"fmspcs_json": "[{\"fmspc\":\"00A06D080000\",\"platform\":\"E5\"},{\"fmspc\":\"10A06D070000\",\"platform\":\"E5\"},{\"fmspc\":\"70A06D070000\",\"platform\":\"E5\"},{\"fmspc\":\"00A06E050000\",\"platform\":\"E5\"},{\"fmspc\":\"00906EA50000\",\"platform\":\"client\"},{\"fmspc\":\"20606C040000\",\"platform\":\"E5\"},{\"fmspc\":\"50806F000000\",\"platform\":\"E5\"},{\"fmspc\":\"00A067110000\",\"platform\":\"E3\"},{\"fmspc\":\"00606C040000\",\"platform\":\"E5\"},{\"fmspc\":\"00706E470000\",\"platform\":\"client\"},{\"fmspc\":\"00806EA60000\",\"platform\":\"client\"},{\"fmspc\":\"00706A800000\",\"platform\":\"client\"},{\"fmspc\":\"00706A100000\",\"platform\":\"client\"},{\"fmspc\":\"F0806F000000\",\"platform\":\"E5\"},{\"fmspc\":\"00806EB70000\",\"platform\":\"client\"},{\"fmspc\":\"00906EC50000\",\"platform\":\"client\"},{\"fmspc\":\"90806F000000\",\"platform\":\"E5\"},{\"fmspc\":\"10A06F010000\",\"platform\":\"E5\"},{\"fmspc\":\"00906EC10000\",\"platform\":\"client\"},{\"fmspc\":\"B0C06F000000\",\"platform\":\"E5\"},{\"fmspc\":\"20A06F000000\",\"platform\":\"E5\"},{\"fmspc\":\"00906ED50000\",\"platform\":\"E3\"},{\"fmspc\":\"40A06F000000\",\"platform\":\"E5\"},{\"fmspc\":\"60A06F000000\",\"platform\":\"E5\"},{\"fmspc\":\"D0806F000000\",\"platform\":\"E5\"},{\"fmspc\":\"00A065510000\",\"platform\":\"client\"},{\"fmspc\":\"10A06D080000\",\"platform\":\"E5\"},{\"fmspc\":\"30606A000000\",\"platform\":\"E5\"},{\"fmspc\":\"20806EB70000\",\"platform\":\"client\"},{\"fmspc\":\"00906EA10000\",\"platform\":\"E3\"},{\"fmspc\":\"30806F040000\",\"platform\":\"E5\"},{\"fmspc\":\"C0806F000000\",\"platform\":\"E5\"},{\"fmspc\":\"30A06D050000\",\"platform\":\"E5\"},{\"fmspc\":\"60C06F000000\",\"platform\":\"E5\"},{\"fmspc\":\"20A06D080000\",\"platform\":\"E5\"},{\"fmspc\":\"10A06D000000\",\"platform\":\"E5\"},{\"fmspc\":\"00806F050000\",\"platform\":\"E5\"},{\"fmspc\":\"60A06D070000\",\"platform\":\"E5\"},{\"fmspc\":\"20906EC10000\",\"platform\":\"client\"},{\"fmspc\":\"90C06F000000\",
\"platform\":\"E5\"},{\"fmspc\":\"80C06F000000\",\"platform\":\"E5\"},{\"fmspc\":\"00806F000000\",\"platform\":\"E5\"},{\"fmspc\":\"00906EB10000\",\"platform\":\"client\"},{\"fmspc\":\"00606A000000\",\"platform\":\"E5\"}]"
}

@@ -0,0 +1,3 @@
{
"fmspcs_json": "[{\"fmspc\":\"00A06D080000\",\"platform\":\"E5\"},{\"fmspc\":\"10A06D070000\",\"platform\":\"E5\"},{\"fmspc\":\"70A06D070000\",\"platform\":\"E5\"},{\"fmspc\":\"00A06E050000\",\"platform\":\"E5\"},{\"fmspc\":\"00906EA50000\",\"platform\":\"client\"},{\"fmspc\":\"20606C040000\",\"platform\":\"E5\"},{\"fmspc\":\"50806F000000\",\"platform\":\"E5\"},{\"fmspc\":\"00A067110000\",\"platform\":\"E3\"},{\"fmspc\":\"00606C040000\",\"platform\":\"E5\"},{\"fmspc\":\"00706E470000\",\"platform\":\"client\"},{\"fmspc\":\"00806EA60000\",\"platform\":\"client\"},{\"fmspc\":\"00706A800000\",\"platform\":\"client\"},{\"fmspc\":\"00706A100000\",\"platform\":\"client\"},{\"fmspc\":\"F0806F000000\",\"platform\":\"E5\"},{\"fmspc\":\"00806EB70000\",\"platform\":\"client\"},{\"fmspc\":\"00906EC50000\",\"platform\":\"client\"},{\"fmspc\":\"90806F000000\",\"platform\":\"E5\"},{\"fmspc\":\"10A06F010000\",\"platform\":\"E5\"},{\"fmspc\":\"00906EC10000\",\"platform\":\"client\"},{\"fmspc\":\"B0C06F000000\",\"platform\":\"E5\"},{\"fmspc\":\"20A06F000000\",\"platform\":\"E5\"},{\"fmspc\":\"00906ED50000\",\"platform\":\"E3\"},{\"fmspc\":\"40A06F000000\",\"platform\":\"E5\"},{\"fmspc\":\"60A06F000000\",\"platform\":\"E5\"},{\"fmspc\":\"D0806F000000\",\"platform\":\"E5\"},{\"fmspc\":\"00A065510000\",\"platform\":\"client\"},{\"fmspc\":\"10A06D080000\",\"platform\":\"E5\"},{\"fmspc\":\"30606A000000\",\"platform\":\"E5\"},{\"fmspc\":\"20806EB70000\",\"platform\":\"client\"},{\"fmspc\":\"00906EA10000\",\"platform\":\"E3\"},{\"fmspc\":\"30806F040000\",\"platform\":\"E5\"},{\"fmspc\":\"C0806F000000\",\"platform\":\"E5\"},{\"fmspc\":\"30A06D050000\",\"platform\":\"E5\"},{\"fmspc\":\"60C06F000000\",\"platform\":\"E5\"},{\"fmspc\":\"20A06D080000\",\"platform\":\"E5\"},{\"fmspc\":\"10A06D000000\",\"platform\":\"E5\"},{\"fmspc\":\"00806F050000\",\"platform\":\"E5\"},{\"fmspc\":\"60A06D070000\",\"platform\":\"E5\"},{\"fmspc\":\"20906EC10000\",\"platform\":\"client\"},{\"fmspc\":\"90C06F000000\",
\"platform\":\"E5\"},{\"fmspc\":\"80C06F000000\",\"platform\":\"E5\"},{\"fmspc\":\"00806F000000\",\"platform\":\"E5\"},{\"fmspc\":\"00906EB10000\",\"platform\":\"client\"},{\"fmspc\":\"00606A000000\",\"platform\":\"E5\"}]"
}

@@ -0,0 +1,4 @@
{
"crl_data": "-----BEGIN X509 CRL-----\nMIIKYTCCCggCAQEwCgYIKoZIzj0EAwIwcDEiMCAGA1UEAwwZSW50ZWwgU0dYIFBD\nSyBQbGF0Zm9ybSBDQTEaMBgGA1UECgwRSW50ZWwgQ29ycG9yYXRpb24xFDASBgNV\nBAcMC1NhbnRhIENsYXJhMQswCQYDVQQIDAJDQTELMAkGA1UEBhMCVVMXDTI1MDUy\nNzE5MjUwNVoXDTI1MDYyNjE5MjUwNVowggk0MDMCFG/DTlAj5yiSNDXWGqS4PGGB\nZq01Fw0yNTA1MjcxOTI1MDVaMAwwCgYDVR0VBAMKAQEwNAIVAO+ubpcV/KE7h+Mz\n6CYe1tmQqSatFw0yNTA1MjcxOTI1MDVaMAwwCgYDVR0VBAMKAQEwNAIVAP1ghkhi\nnLpzB4tNSS9LPqdBrQjNFw0yNTA1MjcxOTI1MDVaMAwwCgYDVR0VBAMKAQEwNAIV\nAIr5JBhOHVr93XPD1joS9ei1c35WFw0yNTA1MjcxOTI1MDVaMAwwCgYDVR0VBAMK\nAQEwNAIVALEleXjPqczdB1mr+MXKcvrjp4qbFw0yNTA1MjcxOTI1MDVaMAwwCgYD\nVR0VBAMKAQEwMwIUdP6mFKlyvg4oQ/IFmDWBHthy+bMXDTI1MDUyNzE5MjUwNVow\nDDAKBgNVHRUEAwoBATA0AhUA+cTvVrOrSNV34Qi67fS/iAFCFLkXDTI1MDUyNzE5\nMjUwNVowDDAKBgNVHRUEAwoBATAzAhQHHeB3j55fxPKHjzDWsHyaMOazCxcNMjUw\nNTI3MTkyNTA1WjAMMAoGA1UdFQQDCgEBMDQCFQDN4kJPlyzqlP8jmTf02AwlAp3W\nCxcNMjUwNTI3MTkyNTA1WjAMMAoGA1UdFQQDCgEBMDMCFGwzGeUQm2RQfTzxEyzg\nA0nvUnMZFw0yNTA1MjcxOTI1MDVaMAwwCgYDVR0VBAMKAQEwNAIVAN8I11a2anSX\n9DtbtYraBNP096k3Fw0yNTA1MjcxOTI1MDVaMAwwCgYDVR0VBAMKAQEwMwIUKK9I\nW2z2fkCaOdXLWu5FmPeo+nsXDTI1MDUyNzE5MjUwNVowDDAKBgNVHRUEAwoBATA0\nAhUA+4strsCSytqKqbxP8vHCDQNGZowXDTI1MDUyNzE5MjUwNVowDDAKBgNVHRUE\nAwoBATA0AhUAzUhQrFK9zGmmpvBYyLxXu9C1+GQXDTI1MDUyNzE5MjUwNVowDDAK\nBgNVHRUEAwoBATA0AhUAmU3TZm9SdfuAX5XdAr1QyyZ52K0XDTI1MDUyNzE5MjUw\nNVowDDAKBgNVHRUEAwoBATAzAhQHAhNpACUidNkDXu31RXRi+tDvTBcNMjUwNTI3\nMTkyNTA1WjAMMAoGA1UdFQQDCgEBMDMCFGHyv3Pjm04EqifYAb1z0kMZtb+AFw0y\nNTA1MjcxOTI1MDVaMAwwCgYDVR0VBAMKAQEwMwIUOZK+hRuWkC7/OJWebC7/GwZR\npLUXDTI1MDUyNzE5MjUwNVowDDAKBgNVHRUEAwoBATAzAhQP2kOgC2jqebfC3q6s\nC0mL37KvkBcNMjUwNTI3MTkyNTA1WjAMMAoGA1UdFQQDCgEBMDMCFGOfE5pQQP3P\n8ZHopPsb8IbtYDlxFw0yNTA1MjcxOTI1MDVaMAwwCgYDVR0VBAMKAQEwNAIVAJWd\nUz+SSdweUTVEzcgwvxm38fMBFw0yNTA1MjcxOTI1MDVaMAwwCgYDVR0VBAMKAQEw\nMwIUeuN3SKn5EvTGO6erB8WTzh0dEYEXDTI1MDUyNzE5MjUwNVowDDAKBgNVHRUE\nAwoBATAzAhQTiEszJpk4wZWqFw/KddoXdTjfCxcNMjUwNTI3MTkyNTA1WjAMMAoG\nA1UdFQQDCgEBMDQCFQCF08k4G3en4E0RnJ5a1nSf8/+rhxc
NMjUwNTI3MTkyNTA1\nWjAMMAoGA1UdFQQDCgEBMDQCFQCTiHykQR56kjvR/tKBmylJ8gG1tBcNMjUwNTI3\nMTkyNTA1WjAMMAoGA1UdFQQDCgEBMDMCFCSY3GKDkwmW/YvyOjesviajvtRXFw0y\nNTA1MjcxOTI1MDVaMAwwCgYDVR0VBAMKAQEwNAIVAIpm8adJSIZnaJzDkDrFTGYr\ncS5zFw0yNTA1MjcxOTI1MDVaMAwwCgYDVR0VBAMKAQEwNAIVAK/BNhC902y3mF0Q\nZIGogNOgH9oHFw0yNTA1MjcxOTI1MDVaMAwwCgYDVR0VBAMKAQEwNAIVAO/gSywz\n0DaqyWymc78emke2TVy7Fw0yNTA1MjcxOTI1MDVaMAwwCgYDVR0VBAMKAQEwNAIV\nAIPZrI2LtQnRxsgJrXEuhDBVntfzFw0yNTA1MjcxOTI1MDVaMAwwCgYDVR0VBAMK\nAQEwMwIUeTH9ULUHHBu/xbe23ti0W52LhSkXDTI1MDUyNzE5MjUwNVowDDAKBgNV\nHRUEAwoBATAzAhQfog4pcL3l1X97jd+DOUhOHx0IIxcNMjUwNTI3MTkyNTA1WjAM\nMAoGA1UdFQQDCgEBMDMCFB6HssOzLY0j5BHO80GXuVrwyK31Fw0yNTA1MjcxOTI1\nMDVaMAwwCgYDVR0VBAMKAQEwNAIVAJr9LukKRzVQoWfZlpEUN8dQLR8JFw0yNTA1\nMjcxOTI1MDVaMAwwCgYDVR0VBAMKAQEwMwIURIGw8RcooTtpbT6px3CgsV7FjdoX\nDTI1MDUyNzE5MjUwNVowDDAKBgNVHRUEAwoBATA0AhUAp4WfV5gu8OZ9N7yO8u9a\nyDX/GqkXDTI1MDUyNzE5MjUwNVowDDAKBgNVHRUEAwoBATA0AhUAnWd1O4HkcJCu\np2P77ExFSbzbmTMXDTI1MDUyNzE5MjUwNVowDDAKBgNVHRUEAwoBATAzAhQ0v7t6\nHZxWgUfhGLYU97du0+9o3xcNMjUwNTI3MTkyNTA1WjAMMAoGA1UdFQQDCgEBMDMC\nFCw8xv6SedsVFtXOOfKomM2loXXhFw0yNTA1MjcxOTI1MDVaMAwwCgYDVR0VBAMK\nAQEwMwIUcXlIaHUJI0vpeeS33ObzG+9ktowXDTI1MDUyNzE5MjUwNVowDDAKBgNV\nHRUEAwoBATA0AhUAnXbvLDnBNuhli25zlrHXRFonYx8XDTI1MDUyNzE5MjUwNVow\nDDAKBgNVHRUEAwoBATA0AhUAw+Al/KmV829ZtIRnk54+NOY2Gm8XDTI1MDUyNzE5\nMjUwNVowDDAKBgNVHRUEAwoBATA0AhUAjF9rMlfaBbF0KeLmG6ll1nMwYGoXDTI1\nMDUyNzE5MjUwNVowDDAKBgNVHRUEAwoBATA0AhUAoXxRci7B4MMnj+i98FIFnL7E\n5kgXDTI1MDUyNzE5MjUwNVowDDAKBgNVHRUEAwoBAaAvMC0wCgYDVR0UBAMCAQEw\nHwYDVR0jBBgwFoAUlW9dzb0b4elAScnU9DPOAVcL3lQwCgYIKoZIzj0EAwIDRwAw\nRAIgUpcU4PTB0Bc3qvMCWYHx5EEDXqxSLgCoYKp4C/GgxpkCIE/xDOudQg2ldK1m\nABQqvvzE8ibtGcDyaq1WI56Wv1bl\n-----END X509 CRL-----\n",
"issuer_chain": "-----BEGIN CERTIFICATE-----\nMIICljCCAj2gAwIBAgIVAJVvXc29G+HpQEnJ1PQzzgFXC95UMAoGCCqGSM49BAMC\nMGgxGjAYBgNVBAMMEUludGVsIFNHWCBSb290IENBMRowGAYDVQQKDBFJbnRlbCBD\nb3Jwb3JhdGlvbjEUMBIGA1UEBwwLU2FudGEgQ2xhcmExCzAJBgNVBAgMAkNBMQsw\nCQYDVQQGEwJVUzAeFw0xODA1MjExMDUwMTBaFw0zMzA1MjExMDUwMTBaMHAxIjAg\nBgNVBAMMGUludGVsIFNHWCBQQ0sgUGxhdGZvcm0gQ0ExGjAYBgNVBAoMEUludGVs\nIENvcnBvcmF0aW9uMRQwEgYDVQQHDAtTYW50YSBDbGFyYTELMAkGA1UECAwCQ0Ex\nCzAJBgNVBAYTAlVTMFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAENSB/7t21lXSO\n2Cuzpxw74eJB72EyDGgW5rXCtx2tVTLq6hKk6z+UiRZCnqR7psOvgqFeSxlmTlJl\neTmi2WYz3qOBuzCBuDAfBgNVHSMEGDAWgBQiZQzWWp00ifODtJVSv1AbOScGrDBS\nBgNVHR8ESzBJMEegRaBDhkFodHRwczovL2NlcnRpZmljYXRlcy50cnVzdGVkc2Vy\ndmljZXMuaW50ZWwuY29tL0ludGVsU0dYUm9vdENBLmRlcjAdBgNVHQ4EFgQUlW9d\nzb0b4elAScnU9DPOAVcL3lQwDgYDVR0PAQH/BAQDAgEGMBIGA1UdEwEB/wQIMAYB\nAf8CAQAwCgYIKoZIzj0EAwIDRwAwRAIgXsVki0w+i6VYGW3UF/22uaXe0YJDj1Ue\nnA+TjD1ai5cCICYb1SAmD5xkfTVpvo4UoyiSYxrDWLmUR4CI9NKyfPN+\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIICjzCCAjSgAwIBAgIUImUM1lqdNInzg7SVUr9QGzknBqwwCgYIKoZIzj0EAwIw\naDEaMBgGA1UEAwwRSW50ZWwgU0dYIFJvb3QgQ0ExGjAYBgNVBAoMEUludGVsIENv\ncnBvcmF0aW9uMRQwEgYDVQQHDAtTYW50YSBDbGFyYTELMAkGA1UECAwCQ0ExCzAJ\nBgNVBAYTAlVTMB4XDTE4MDUyMTEwNDUxMFoXDTQ5MTIzMTIzNTk1OVowaDEaMBgG\nA1UEAwwRSW50ZWwgU0dYIFJvb3QgQ0ExGjAYBgNVBAoMEUludGVsIENvcnBvcmF0\naW9uMRQwEgYDVQQHDAtTYW50YSBDbGFyYTELMAkGA1UECAwCQ0ExCzAJBgNVBAYT\nAlVTMFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEC6nEwMDIYZOj/iPWsCzaEKi7\n1OiOSLRFhWGjbnBVJfVnkY4u3IjkDYYL0MxO4mqsyYjlBalTVYxFP2sJBK5zlKOB\nuzCBuDAfBgNVHSMEGDAWgBQiZQzWWp00ifODtJVSv1AbOScGrDBSBgNVHR8ESzBJ\nMEegRaBDhkFodHRwczovL2NlcnRpZmljYXRlcy50cnVzdGVkc2VydmljZXMuaW50\nZWwuY29tL0ludGVsU0dYUm9vdENBLmRlcjAdBgNVHQ4EFgQUImUM1lqdNInzg7SV\nUr9QGzknBqwwDgYDVR0PAQH/BAQDAgEGMBIGA1UdEwEB/wQIMAYBAf8CAQEwCgYI\nKoZIzj0EAwIDSQAwRgIhAOW/5QkR+S9CiSDcNoowLuPRLsWGf/Yi7GSX94BgwTwg\nAiEA4J0lrHoMs+Xo5o/sX6O9QWxHRAvZUGOdRQ7cvqRXaqI=\n-----END CERTIFICATE-----\n"
}

@@ -0,0 +1,4 @@
{
"crl_data": "-----BEGIN X509 CRL-----\nMIIBKjCB0QIBATAKBggqhkjOPQQDAjBxMSMwIQYDVQQDDBpJbnRlbCBTR1ggUENL\nIFByb2Nlc3NvciBDQTEaMBgGA1UECgwRSW50ZWwgQ29ycG9yYXRpb24xFDASBgNV\nBAcMC1NhbnRhIENsYXJhMQswCQYDVQQIDAJDQTELMAkGA1UEBhMCVVMXDTI1MDUy\nNzE4NDYyNVoXDTI1MDYyNjE4NDYyNVqgLzAtMAoGA1UdFAQDAgEBMB8GA1UdIwQY\nMBaAFNDoqtp11/kuSReYPHsUZdDV8llNMAoGCCqGSM49BAMCA0gAMEUCIQDtYSVu\nju3asUsAGZ2Hbe9uvZmk5zvLtwDk38KrWfb5zAIgSfk6Dmqhc4+moiRuRz0wQqLj\nckwO2BEUviI+nZfN75I=\n-----END X509 CRL-----\n",
"issuer_chain": "-----BEGIN CERTIFICATE-----\nMIICmDCCAj6gAwIBAgIVANDoqtp11/kuSReYPHsUZdDV8llNMAoGCCqGSM49BAMC\nMGgxGjAYBgNVBAMMEUludGVsIFNHWCBSb290IENBMRowGAYDVQQKDBFJbnRlbCBD\nb3Jwb3JhdGlvbjEUMBIGA1UEBwwLU2FudGEgQ2xhcmExCzAJBgNVBAgMAkNBMQsw\nCQYDVQQGEwJVUzAeFw0xODA1MjExMDUwMTBaFw0zMzA1MjExMDUwMTBaMHExIzAh\nBgNVBAMMGkludGVsIFNHWCBQQ0sgUHJvY2Vzc29yIENBMRowGAYDVQQKDBFJbnRl\nbCBDb3Jwb3JhdGlvbjEUMBIGA1UEBwwLU2FudGEgQ2xhcmExCzAJBgNVBAgMAkNB\nMQswCQYDVQQGEwJVUzBZMBMGByqGSM49AgEGCCqGSM49AwEHA0IABL9q+NMp2IOg\ntdl1bk/uWZ5+TGQm8aCi8z78fs+fKCQ3d+uDzXnVTAT2ZhDCifyIuJwvN3wNBp9i\nHBSSMJMJrBOjgbswgbgwHwYDVR0jBBgwFoAUImUM1lqdNInzg7SVUr9QGzknBqww\nUgYDVR0fBEswSTBHoEWgQ4ZBaHR0cHM6Ly9jZXJ0aWZpY2F0ZXMudHJ1c3RlZHNl\ncnZpY2VzLmludGVsLmNvbS9JbnRlbFNHWFJvb3RDQS5kZXIwHQYDVR0OBBYEFNDo\nqtp11/kuSReYPHsUZdDV8llNMA4GA1UdDwEB/wQEAwIBBjASBgNVHRMBAf8ECDAG\nAQH/AgEAMAoGCCqGSM49BAMCA0gAMEUCIQCJgTbtVqOyZ1m3jqiAXM6QYa6r5sWS\n4y/G7y8uIJGxdwIgRqPvBSKzzQagBLQq5s5A70pdoiaRJ8z/0uDz4NgV91k=\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIICjzCCAjSgAwIBAgIUImUM1lqdNInzg7SVUr9QGzknBqwwCgYIKoZIzj0EAwIw\naDEaMBgGA1UEAwwRSW50ZWwgU0dYIFJvb3QgQ0ExGjAYBgNVBAoMEUludGVsIENv\ncnBvcmF0aW9uMRQwEgYDVQQHDAtTYW50YSBDbGFyYTELMAkGA1UECAwCQ0ExCzAJ\nBgNVBAYTAlVTMB4XDTE4MDUyMTEwNDUxMFoXDTQ5MTIzMTIzNTk1OVowaDEaMBgG\nA1UEAwwRSW50ZWwgU0dYIFJvb3QgQ0ExGjAYBgNVBAoMEUludGVsIENvcnBvcmF0\naW9uMRQwEgYDVQQHDAtTYW50YSBDbGFyYTELMAkGA1UECAwCQ0ExCzAJBgNVBAYT\nAlVTMFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEC6nEwMDIYZOj/iPWsCzaEKi7\n1OiOSLRFhWGjbnBVJfVnkY4u3IjkDYYL0MxO4mqsyYjlBalTVYxFP2sJBK5zlKOB\nuzCBuDAfBgNVHSMEGDAWgBQiZQzWWp00ifODtJVSv1AbOScGrDBSBgNVHR8ESzBJ\nMEegRaBDhkFodHRwczovL2NlcnRpZmljYXRlcy50cnVzdGVkc2VydmljZXMuaW50\nZWwuY29tL0ludGVsU0dYUm9vdENBLmRlcjAdBgNVHQ4EFgQUImUM1lqdNInzg7SV\nUr9QGzknBqwwDgYDVR0PAQH/BAQDAgEGMBIGA1UdEwEB/wQIMAYBAf8CAQEwCgYI\nKoZIzj0EAwIDSQAwRgIhAOW/5QkR+S9CiSDcNoowLuPRLsWGf/Yi7GSX94BgwTwg\nAiEA4J0lrHoMs+Xo5o/sX6O9QWxHRAvZUGOdRQ7cvqRXaqI=\n-----END CERTIFICATE-----\n"
}

@@ -0,0 +1,4 @@
{
"crl_data_base64": "MIIBKjCB0QIBATAKBggqhkjOPQQDAjBxMSMwIQYDVQQDDBpJbnRlbCBTR1ggUENLIFByb2Nlc3NvciBDQTEaMBgGA1UECgwRSW50ZWwgQ29ycG9yYXRpb24xFDASBgNVBAcMC1NhbnRhIENsYXJhMQswCQYDVQQIDAJDQTELMAkGA1UEBhMCVVMXDTI1MDUyNzE5MjMwOVoXDTI1MDYyNjE5MjMwOVqgLzAtMAoGA1UdFAQDAgEBMB8GA1UdIwQYMBaAFNDoqtp11/kuSReYPHsUZdDV8llNMAoGCCqGSM49BAMCA0gAMEUCIQC2Q0kz4IioOr5HsdYUY8b0m3XSS6FwuKVUAIvroURNHgIgIo5mAP1gCBeW719AqdBaxnoNuUypHQ/X+1zfDiY69ec=",
"issuer_chain": "-----BEGIN CERTIFICATE-----\nMIICmDCCAj6gAwIBAgIVANDoqtp11/kuSReYPHsUZdDV8llNMAoGCCqGSM49BAMC\nMGgxGjAYBgNVBAMMEUludGVsIFNHWCBSb290IENBMRowGAYDVQQKDBFJbnRlbCBD\nb3Jwb3JhdGlvbjEUMBIGA1UEBwwLU2FudGEgQ2xhcmExCzAJBgNVBAgMAkNBMQsw\nCQYDVQQGEwJVUzAeFw0xODA1MjExMDUwMTBaFw0zMzA1MjExMDUwMTBaMHExIzAh\nBgNVBAMMGkludGVsIFNHWCBQQ0sgUHJvY2Vzc29yIENBMRowGAYDVQQKDBFJbnRl\nbCBDb3Jwb3JhdGlvbjEUMBIGA1UEBwwLU2FudGEgQ2xhcmExCzAJBgNVBAgMAkNB\nMQswCQYDVQQGEwJVUzBZMBMGByqGSM49AgEGCCqGSM49AwEHA0IABL9q+NMp2IOg\ntdl1bk/uWZ5+TGQm8aCi8z78fs+fKCQ3d+uDzXnVTAT2ZhDCifyIuJwvN3wNBp9i\nHBSSMJMJrBOjgbswgbgwHwYDVR0jBBgwFoAUImUM1lqdNInzg7SVUr9QGzknBqww\nUgYDVR0fBEswSTBHoEWgQ4ZBaHR0cHM6Ly9jZXJ0aWZpY2F0ZXMudHJ1c3RlZHNl\ncnZpY2VzLmludGVsLmNvbS9JbnRlbFNHWFJvb3RDQS5kZXIwHQYDVR0OBBYEFNDo\nqtp11/kuSReYPHsUZdDV8llNMA4GA1UdDwEB/wQEAwIBBjASBgNVHRMBAf8ECDAG\nAQH/AgEAMAoGCCqGSM49BAMCA0gAMEUCIQCJgTbtVqOyZ1m3jqiAXM6QYa6r5sWS\n4y/G7y8uIJGxdwIgRqPvBSKzzQagBLQq5s5A70pdoiaRJ8z/0uDz4NgV91k=\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIICjzCCAjSgAwIBAgIUImUM1lqdNInzg7SVUr9QGzknBqwwCgYIKoZIzj0EAwIw\naDEaMBgGA1UEAwwRSW50ZWwgU0dYIFJvb3QgQ0ExGjAYBgNVBAoMEUludGVsIENv\ncnBvcmF0aW9uMRQwEgYDVQQHDAtTYW50YSBDbGFyYTELMAkGA1UECAwCQ0ExCzAJ\nBgNVBAYTAlVTMB4XDTE4MDUyMTEwNDUxMFoXDTQ5MTIzMTIzNTk1OVowaDEaMBgG\nA1UEAwwRSW50ZWwgU0dYIFJvb3QgQ0ExGjAYBgNVBAoMEUludGVsIENvcnBvcmF0\naW9uMRQwEgYDVQQHDAtTYW50YSBDbGFyYTELMAkGA1UECAwCQ0ExCzAJBgNVBAYT\nAlVTMFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEC6nEwMDIYZOj/iPWsCzaEKi7\n1OiOSLRFhWGjbnBVJfVnkY4u3IjkDYYL0MxO4mqsyYjlBalTVYxFP2sJBK5zlKOB\nuzCBuDAfBgNVHSMEGDAWgBQiZQzWWp00ifODtJVSv1AbOScGrDBSBgNVHR8ESzBJ\nMEegRaBDhkFodHRwczovL2NlcnRpZmljYXRlcy50cnVzdGVkc2VydmljZXMuaW50\nZWwuY29tL0ludGVsU0dYUm9vdENBLmRlcjAdBgNVHQ4EFgQUImUM1lqdNInzg7SV\nUr9QGzknBqwwDgYDVR0PAQH/BAQDAgEGMBIGA1UdEwEB/wQIMAYBAf8CAQEwCgYI\nKoZIzj0EAwIDSQAwRgIhAOW/5QkR+S9CiSDcNoowLuPRLsWGf/Yi7GSX94BgwTwg\nAiEA4J0lrHoMs+Xo5o/sX6O9QWxHRAvZUGOdRQ7cvqRXaqI=\n-----END CERTIFICATE-----\n"
}

@@ -0,0 +1,4 @@
{
"enclave_identity_json": "{\"enclaveIdentity\":{\"id\":\"QAE\",\"version\":2,\"issueDate\":\"2025-05-27T19:31:54Z\",\"nextUpdate\":\"2025-06-26T19:31:54Z\",\"tcbEvaluationDataNumber\":17,\"miscselect\":\"00000000\",\"miscselectMask\":\"FFFFFFFF\",\"attributes\":\"01000000000000000000000000000000\",\"attributesMask\":\"FBFFFFFFFFFFFFFF0000000000000000\",\"mrsigner\":\"8C4F5775D796503E96137F77C68A829A0056AC8DED70140B081B094490C57BFF\",\"isvprodid\":3,\"tcbLevels\":[{\"tcb\":{\"isvsvn\":3},\"tcbDate\":\"2024-03-13T00:00:00Z\",\"tcbStatus\":\"UpToDate\"}]},\"signature\":\"a5dfb799f78ea3d32f7760f2b529fc80fe7efa3236c9888e8ece69379e206880f0b67b9407a9b139feb5007b785601f09050d4963116c1bd2cd5def4e3a11da8\"}",
"issuer_chain": "-----BEGIN CERTIFICATE-----\nMIICjTCCAjKgAwIBAgIUfjiC1ftVKUpASY5FhAPpFJG99FUwCgYIKoZIzj0EAwIw\naDEaMBgGA1UEAwwRSW50ZWwgU0dYIFJvb3QgQ0ExGjAYBgNVBAoMEUludGVsIENv\ncnBvcmF0aW9uMRQwEgYDVQQHDAtTYW50YSBDbGFyYTELMAkGA1UECAwCQ0ExCzAJ\nBgNVBAYTAlVTMB4XDTI1MDUwNjA5MjUwMFoXDTMyMDUwNjA5MjUwMFowbDEeMBwG\nA1UEAwwVSW50ZWwgU0dYIFRDQiBTaWduaW5nMRowGAYDVQQKDBFJbnRlbCBDb3Jw\nb3JhdGlvbjEUMBIGA1UEBwwLU2FudGEgQ2xhcmExCzAJBgNVBAgMAkNBMQswCQYD\nVQQGEwJVUzBZMBMGByqGSM49AgEGCCqGSM49AwEHA0IABENFG8xzydWRfK92bmGv\nP+mAh91PEyV7Jh6FGJd5ndE9aBH7R3E4A7ubrlh/zN3C4xvpoouGlirMba+W2lju\nypajgbUwgbIwHwYDVR0jBBgwFoAUImUM1lqdNInzg7SVUr9QGzknBqwwUgYDVR0f\nBEswSTBHoEWgQ4ZBaHR0cHM6Ly9jZXJ0aWZpY2F0ZXMudHJ1c3RlZHNlcnZpY2Vz\nLmludGVsLmNvbS9JbnRlbFNHWFJvb3RDQS5kZXIwHQYDVR0OBBYEFH44gtX7VSlK\nQEmORYQD6RSRvfRVMA4GA1UdDwEB/wQEAwIGwDAMBgNVHRMBAf8EAjAAMAoGCCqG\nSM49BAMCA0kAMEYCIQDdmmRuAo3qCO8TC1IoJMITAoOEw4dlgEBHzSz1TuMSTAIh\nAKVTqOkt59+co0O3m3hC+v5Fb00FjYWcgeu3EijOULo5\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIICjzCCAjSgAwIBAgIUImUM1lqdNInzg7SVUr9QGzknBqwwCgYIKoZIzj0EAwIw\naDEaMBgGA1UEAwwRSW50ZWwgU0dYIFJvb3QgQ0ExGjAYBgNVBAoMEUludGVsIENv\ncnBvcmF0aW9uMRQwEgYDVQQHDAtTYW50YSBDbGFyYTELMAkGA1UECAwCQ0ExCzAJ\nBgNVBAYTAlVTMB4XDTE4MDUyMTEwNDUxMFoXDTQ5MTIzMTIzNTk1OVowaDEaMBgG\nA1UEAwwRSW50ZWwgU0dYIFJvb3QgQ0ExGjAYBgNVBAoMEUludGVsIENvcnBvcmF0\naW9uMRQwEgYDVQQHDAtTYW50YSBDbGFyYTELMAkGA1UECAwCQ0ExCzAJBgNVBAYT\nAlVTMFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEC6nEwMDIYZOj/iPWsCzaEKi7\n1OiOSLRFhWGjbnBVJfVnkY4u3IjkDYYL0MxO4mqsyYjlBalTVYxFP2sJBK5zlKOB\nuzCBuDAfBgNVHSMEGDAWgBQiZQzWWp00ifODtJVSv1AbOScGrDBSBgNVHR8ESzBJ\nMEegRaBDhkFodHRwczovL2NlcnRpZmljYXRlcy50cnVzdGVkc2VydmljZXMuaW50\nZWwuY29tL0ludGVsU0dYUm9vdENBLmRlcjAdBgNVHQ4EFgQUImUM1lqdNInzg7SV\nUr9QGzknBqwwDgYDVR0PAQH/BAQDAgEGMBIGA1UdEwEB/wQIMAYBAf8CAQEwCgYI\nKoZIzj0EAwIDSQAwRgIhAOW/5QkR+S9CiSDcNoowLuPRLsWGf/Yi7GSX94BgwTwg\nAiEA4J0lrHoMs+Xo5o/sX6O9QWxHRAvZUGOdRQ7cvqRXaqI=\n-----END CERTIFICATE-----\n"
}

@@ -0,0 +1,4 @@
{
"enclave_identity_json": "{\"enclaveIdentity\":{\"id\":\"QE\",\"version\":2,\"issueDate\":\"2025-05-27T19:05:27Z\",\"nextUpdate\":\"2025-06-26T19:05:27Z\",\"tcbEvaluationDataNumber\":17,\"miscselect\":\"00000000\",\"miscselectMask\":\"FFFFFFFF\",\"attributes\":\"11000000000000000000000000000000\",\"attributesMask\":\"FBFFFFFFFFFFFFFF0000000000000000\",\"mrsigner\":\"8C4F5775D796503E96137F77C68A829A0056AC8DED70140B081B094490C57BFF\",\"isvprodid\":1,\"tcbLevels\":[{\"tcb\":{\"isvsvn\":8},\"tcbDate\":\"2024-03-13T00:00:00Z\",\"tcbStatus\":\"UpToDate\"},{\"tcb\":{\"isvsvn\":6},\"tcbDate\":\"2021-11-10T00:00:00Z\",\"tcbStatus\":\"OutOfDate\",\"advisoryIDs\":[\"INTEL-SA-00615\"]},{\"tcb\":{\"isvsvn\":5},\"tcbDate\":\"2020-11-11T00:00:00Z\",\"tcbStatus\":\"OutOfDate\",\"advisoryIDs\":[\"INTEL-SA-00477\",\"INTEL-SA-00615\"]},{\"tcb\":{\"isvsvn\":4},\"tcbDate\":\"2019-11-13T00:00:00Z\",\"tcbStatus\":\"OutOfDate\",\"advisoryIDs\":[\"INTEL-SA-00334\",\"INTEL-SA-00477\",\"INTEL-SA-00615\"]},{\"tcb\":{\"isvsvn\":2},\"tcbDate\":\"2019-05-15T00:00:00Z\",\"tcbStatus\":\"OutOfDate\",\"advisoryIDs\":[\"INTEL-SA-00219\",\"INTEL-SA-00293\",\"INTEL-SA-00334\",\"INTEL-SA-00477\",\"INTEL-SA-00615\"]},{\"tcb\":{\"isvsvn\":1},\"tcbDate\":\"2018-08-15T00:00:00Z\",\"tcbStatus\":\"OutOfDate\",\"advisoryIDs\":[\"INTEL-SA-00202\",\"INTEL-SA-00219\",\"INTEL-SA-00293\",\"INTEL-SA-00334\",\"INTEL-SA-00477\",\"INTEL-SA-00615\"]}]},\"signature\":\"5ecc03899589b58e8216c69c3439d1a9310d8af9ebfb37e61518a2a3cb801e0019a5fc955e38e6becc1c75a8a05bb337c93c1a61009a34cc8291fdd82f67ae19\"}",
"issuer_chain": "-----BEGIN CERTIFICATE-----\nMIICjTCCAjKgAwIBAgIUfjiC1ftVKUpASY5FhAPpFJG99FUwCgYIKoZIzj0EAwIw\naDEaMBgGA1UEAwwRSW50ZWwgU0dYIFJvb3QgQ0ExGjAYBgNVBAoMEUludGVsIENv\ncnBvcmF0aW9uMRQwEgYDVQQHDAtTYW50YSBDbGFyYTELMAkGA1UECAwCQ0ExCzAJ\nBgNVBAYTAlVTMB4XDTI1MDUwNjA5MjUwMFoXDTMyMDUwNjA5MjUwMFowbDEeMBwG\nA1UEAwwVSW50ZWwgU0dYIFRDQiBTaWduaW5nMRowGAYDVQQKDBFJbnRlbCBDb3Jw\nb3JhdGlvbjEUMBIGA1UEBwwLU2FudGEgQ2xhcmExCzAJBgNVBAgMAkNBMQswCQYD\nVQQGEwJVUzBZMBMGByqGSM49AgEGCCqGSM49AwEHA0IABENFG8xzydWRfK92bmGv\nP+mAh91PEyV7Jh6FGJd5ndE9aBH7R3E4A7ubrlh/zN3C4xvpoouGlirMba+W2lju\nypajgbUwgbIwHwYDVR0jBBgwFoAUImUM1lqdNInzg7SVUr9QGzknBqwwUgYDVR0f\nBEswSTBHoEWgQ4ZBaHR0cHM6Ly9jZXJ0aWZpY2F0ZXMudHJ1c3RlZHNlcnZpY2Vz\nLmludGVsLmNvbS9JbnRlbFNHWFJvb3RDQS5kZXIwHQYDVR0OBBYEFH44gtX7VSlK\nQEmORYQD6RSRvfRVMA4GA1UdDwEB/wQEAwIGwDAMBgNVHRMBAf8EAjAAMAoGCCqG\nSM49BAMCA0kAMEYCIQDdmmRuAo3qCO8TC1IoJMITAoOEw4dlgEBHzSz1TuMSTAIh\nAKVTqOkt59+co0O3m3hC+v5Fb00FjYWcgeu3EijOULo5\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIICjzCCAjSgAwIBAgIUImUM1lqdNInzg7SVUr9QGzknBqwwCgYIKoZIzj0EAwIw\naDEaMBgGA1UEAwwRSW50ZWwgU0dYIFJvb3QgQ0ExGjAYBgNVBAoMEUludGVsIENv\ncnBvcmF0aW9uMRQwEgYDVQQHDAtTYW50YSBDbGFyYTELMAkGA1UECAwCQ0ExCzAJ\nBgNVBAYTAlVTMB4XDTE4MDUyMTEwNDUxMFoXDTQ5MTIzMTIzNTk1OVowaDEaMBgG\nA1UEAwwRSW50ZWwgU0dYIFJvb3QgQ0ExGjAYBgNVBAoMEUludGVsIENvcnBvcmF0\naW9uMRQwEgYDVQQHDAtTYW50YSBDbGFyYTELMAkGA1UECAwCQ0ExCzAJBgNVBAYT\nAlVTMFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEC6nEwMDIYZOj/iPWsCzaEKi7\n1OiOSLRFhWGjbnBVJfVnkY4u3IjkDYYL0MxO4mqsyYjlBalTVYxFP2sJBK5zlKOB\nuzCBuDAfBgNVHSMEGDAWgBQiZQzWWp00ifODtJVSv1AbOScGrDBSBgNVHR8ESzBJ\nMEegRaBDhkFodHRwczovL2NlcnRpZmljYXRlcy50cnVzdGVkc2VydmljZXMuaW50\nZWwuY29tL0ludGVsU0dYUm9vdENBLmRlcjAdBgNVHQ4EFgQUImUM1lqdNInzg7SV\nUr9QGzknBqwwDgYDVR0PAQH/BAQDAgEGMBIGA1UdEwEB/wQIMAYBAf8CAQEwCgYI\nKoZIzj0EAwIDSQAwRgIhAOW/5QkR+S9CiSDcNoowLuPRLsWGf/Yi7GSX94BgwTwg\nAiEA4J0lrHoMs+Xo5o/sX6O9QWxHRAvZUGOdRQ7cvqRXaqI=\n-----END CERTIFICATE-----\n"
}

@@ -0,0 +1,4 @@
{
"enclave_identity_json": "{\"enclaveIdentity\":{\"id\":\"QE\",\"version\":2,\"issueDate\":\"2025-05-27T18:38:43Z\",\"nextUpdate\":\"2025-06-26T18:38:43Z\",\"tcbEvaluationDataNumber\":17,\"miscselect\":\"00000000\",\"miscselectMask\":\"FFFFFFFF\",\"attributes\":\"11000000000000000000000000000000\",\"attributesMask\":\"FBFFFFFFFFFFFFFF0000000000000000\",\"mrsigner\":\"8C4F5775D796503E96137F77C68A829A0056AC8DED70140B081B094490C57BFF\",\"isvprodid\":1,\"tcbLevels\":[{\"tcb\":{\"isvsvn\":8},\"tcbDate\":\"2024-03-13T00:00:00Z\",\"tcbStatus\":\"UpToDate\"},{\"tcb\":{\"isvsvn\":6},\"tcbDate\":\"2021-11-10T00:00:00Z\",\"tcbStatus\":\"OutOfDate\"},{\"tcb\":{\"isvsvn\":5},\"tcbDate\":\"2020-11-11T00:00:00Z\",\"tcbStatus\":\"OutOfDate\"},{\"tcb\":{\"isvsvn\":4},\"tcbDate\":\"2019-11-13T00:00:00Z\",\"tcbStatus\":\"OutOfDate\"},{\"tcb\":{\"isvsvn\":2},\"tcbDate\":\"2019-05-15T00:00:00Z\",\"tcbStatus\":\"OutOfDate\"},{\"tcb\":{\"isvsvn\":1},\"tcbDate\":\"2018-08-15T00:00:00Z\",\"tcbStatus\":\"OutOfDate\"}]},\"signature\":\"b4cc7bd5ee712a62cf6fbad0a052bd44194a25a5313b4bfff241a3c08ff00bcf0d15f1feb3a369bd9b362a6e5104c82f06d827ef676e70fdccf947566b77f6e8\"}",
"issuer_chain": "-----BEGIN CERTIFICATE-----\nMIICjTCCAjKgAwIBAgIUfjiC1ftVKUpASY5FhAPpFJG99FUwCgYIKoZIzj0EAwIw\naDEaMBgGA1UEAwwRSW50ZWwgU0dYIFJvb3QgQ0ExGjAYBgNVBAoMEUludGVsIENv\ncnBvcmF0aW9uMRQwEgYDVQQHDAtTYW50YSBDbGFyYTELMAkGA1UECAwCQ0ExCzAJ\nBgNVBAYTAlVTMB4XDTI1MDUwNjA5MjUwMFoXDTMyMDUwNjA5MjUwMFowbDEeMBwG\nA1UEAwwVSW50ZWwgU0dYIFRDQiBTaWduaW5nMRowGAYDVQQKDBFJbnRlbCBDb3Jw\nb3JhdGlvbjEUMBIGA1UEBwwLU2FudGEgQ2xhcmExCzAJBgNVBAgMAkNBMQswCQYD\nVQQGEwJVUzBZMBMGByqGSM49AgEGCCqGSM49AwEHA0IABENFG8xzydWRfK92bmGv\nP+mAh91PEyV7Jh6FGJd5ndE9aBH7R3E4A7ubrlh/zN3C4xvpoouGlirMba+W2lju\nypajgbUwgbIwHwYDVR0jBBgwFoAUImUM1lqdNInzg7SVUr9QGzknBqwwUgYDVR0f\nBEswSTBHoEWgQ4ZBaHR0cHM6Ly9jZXJ0aWZpY2F0ZXMudHJ1c3RlZHNlcnZpY2Vz\nLmludGVsLmNvbS9JbnRlbFNHWFJvb3RDQS5kZXIwHQYDVR0OBBYEFH44gtX7VSlK\nQEmORYQD6RSRvfRVMA4GA1UdDwEB/wQEAwIGwDAMBgNVHRMBAf8EAjAAMAoGCCqG\nSM49BAMCA0kAMEYCIQDdmmRuAo3qCO8TC1IoJMITAoOEw4dlgEBHzSz1TuMSTAIh\nAKVTqOkt59+co0O3m3hC+v5Fb00FjYWcgeu3EijOULo5\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIICjzCCAjSgAwIBAgIUImUM1lqdNInzg7SVUr9QGzknBqwwCgYIKoZIzj0EAwIw\naDEaMBgGA1UEAwwRSW50ZWwgU0dYIFJvb3QgQ0ExGjAYBgNVBAoMEUludGVsIENv\ncnBvcmF0aW9uMRQwEgYDVQQHDAtTYW50YSBDbGFyYTELMAkGA1UECAwCQ0ExCzAJ\nBgNVBAYTAlVTMB4XDTE4MDUyMTEwNDUxMFoXDTQ5MTIzMTIzNTk1OVowaDEaMBgG\nA1UEAwwRSW50ZWwgU0dYIFJvb3QgQ0ExGjAYBgNVBAoMEUludGVsIENvcnBvcmF0\naW9uMRQwEgYDVQQHDAtTYW50YSBDbGFyYTELMAkGA1UECAwCQ0ExCzAJBgNVBAYT\nAlVTMFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEC6nEwMDIYZOj/iPWsCzaEKi7\n1OiOSLRFhWGjbnBVJfVnkY4u3IjkDYYL0MxO4mqsyYjlBalTVYxFP2sJBK5zlKOB\nuzCBuDAfBgNVHSMEGDAWgBQiZQzWWp00ifODtJVSv1AbOScGrDBSBgNVHR8ESzBJ\nMEegRaBDhkFodHRwczovL2NlcnRpZmljYXRlcy50cnVzdGVkc2VydmljZXMuaW50\nZWwuY29tL0ludGVsU0dYUm9vdENBLmRlcjAdBgNVHQ4EFgQUImUM1lqdNInzg7SV\nUr9QGzknBqwwDgYDVR0PAQH/BAQDAgEGMBIGA1UdEwEB/wQIMAYBAf8CAQEwCgYI\nKoZIzj0EAwIDSQAwRgIhAOW/5QkR+S9CiSDcNoowLuPRLsWGf/Yi7GSX94BgwTwg\nAiEA4J0lrHoMs+Xo5o/sX6O9QWxHRAvZUGOdRQ7cvqRXaqI=\n-----END CERTIFICATE-----\n"
}

@@ -0,0 +1,4 @@
{
"enclave_identity_json": "{\"enclaveIdentity\":{\"id\":\"QVE\",\"version\":2,\"issueDate\":\"2025-05-27T19:31:54Z\",\"nextUpdate\":\"2025-06-26T19:31:54Z\",\"tcbEvaluationDataNumber\":17,\"miscselect\":\"00000000\",\"miscselectMask\":\"FFFFFFFF\",\"attributes\":\"01000000000000000000000000000000\",\"attributesMask\":\"FBFFFFFFFFFFFFFF0000000000000000\",\"mrsigner\":\"8C4F5775D796503E96137F77C68A829A0056AC8DED70140B081B094490C57BFF\",\"isvprodid\":2,\"tcbLevels\":[{\"tcb\":{\"isvsvn\":3},\"tcbDate\":\"2024-03-13T00:00:00Z\",\"tcbStatus\":\"UpToDate\"}]},\"signature\":\"3bb26b16155b207f884ef10fad705129bf566ccc9e6bd4e9907c99bc0ccd6deb6b6451b103b495926c582ece9d22c491f05a627806e09ca07e1063de898460e7\"}",
"issuer_chain": "-----BEGIN CERTIFICATE-----\nMIICjTCCAjKgAwIBAgIUfjiC1ftVKUpASY5FhAPpFJG99FUwCgYIKoZIzj0EAwIw\naDEaMBgGA1UEAwwRSW50ZWwgU0dYIFJvb3QgQ0ExGjAYBgNVBAoMEUludGVsIENv\ncnBvcmF0aW9uMRQwEgYDVQQHDAtTYW50YSBDbGFyYTELMAkGA1UECAwCQ0ExCzAJ\nBgNVBAYTAlVTMB4XDTI1MDUwNjA5MjUwMFoXDTMyMDUwNjA5MjUwMFowbDEeMBwG\nA1UEAwwVSW50ZWwgU0dYIFRDQiBTaWduaW5nMRowGAYDVQQKDBFJbnRlbCBDb3Jw\nb3JhdGlvbjEUMBIGA1UEBwwLU2FudGEgQ2xhcmExCzAJBgNVBAgMAkNBMQswCQYD\nVQQGEwJVUzBZMBMGByqGSM49AgEGCCqGSM49AwEHA0IABENFG8xzydWRfK92bmGv\nP+mAh91PEyV7Jh6FGJd5ndE9aBH7R3E4A7ubrlh/zN3C4xvpoouGlirMba+W2lju\nypajgbUwgbIwHwYDVR0jBBgwFoAUImUM1lqdNInzg7SVUr9QGzknBqwwUgYDVR0f\nBEswSTBHoEWgQ4ZBaHR0cHM6Ly9jZXJ0aWZpY2F0ZXMudHJ1c3RlZHNlcnZpY2Vz\nLmludGVsLmNvbS9JbnRlbFNHWFJvb3RDQS5kZXIwHQYDVR0OBBYEFH44gtX7VSlK\nQEmORYQD6RSRvfRVMA4GA1UdDwEB/wQEAwIGwDAMBgNVHRMBAf8EAjAAMAoGCCqG\nSM49BAMCA0kAMEYCIQDdmmRuAo3qCO8TC1IoJMITAoOEw4dlgEBHzSz1TuMSTAIh\nAKVTqOkt59+co0O3m3hC+v5Fb00FjYWcgeu3EijOULo5\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIICjzCCAjSgAwIBAgIUImUM1lqdNInzg7SVUr9QGzknBqwwCgYIKoZIzj0EAwIw\naDEaMBgGA1UEAwwRSW50ZWwgU0dYIFJvb3QgQ0ExGjAYBgNVBAoMEUludGVsIENv\ncnBvcmF0aW9uMRQwEgYDVQQHDAtTYW50YSBDbGFyYTELMAkGA1UECAwCQ0ExCzAJ\nBgNVBAYTAlVTMB4XDTE4MDUyMTEwNDUxMFoXDTQ5MTIzMTIzNTk1OVowaDEaMBgG\nA1UEAwwRSW50ZWwgU0dYIFJvb3QgQ0ExGjAYBgNVBAoMEUludGVsIENvcnBvcmF0\naW9uMRQwEgYDVQQHDAtTYW50YSBDbGFyYTELMAkGA1UECAwCQ0ExCzAJBgNVBAYT\nAlVTMFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEC6nEwMDIYZOj/iPWsCzaEKi7\n1OiOSLRFhWGjbnBVJfVnkY4u3IjkDYYL0MxO4mqsyYjlBalTVYxFP2sJBK5zlKOB\nuzCBuDAfBgNVHSMEGDAWgBQiZQzWWp00ifODtJVSv1AbOScGrDBSBgNVHR8ESzBJ\nMEegRaBDhkFodHRwczovL2NlcnRpZmljYXRlcy50cnVzdGVkc2VydmljZXMuaW50\nZWwuY29tL0ludGVsU0dYUm9vdENBLmRlcjAdBgNVHQ4EFgQUImUM1lqdNInzg7SV\nUr9QGzknBqwwDgYDVR0PAQH/BAQDAgEGMBIGA1UdEwEB/wQIMAYBAf8CAQEwCgYI\nKoZIzj0EAwIDSQAwRgIhAOW/5QkR+S9CiSDcNoowLuPRLsWGf/Yi7GSX94BgwTwg\nAiEA4J0lrHoMs+Xo5o/sX6O9QWxHRAvZUGOdRQ7cvqRXaqI=\n-----END CERTIFICATE-----\n"
}

@@ -0,0 +1,4 @@
{
"issuer_chain": "-----BEGIN CERTIFICATE-----\nMIICjTCCAjKgAwIBAgIUfjiC1ftVKUpASY5FhAPpFJG99FUwCgYIKoZIzj0EAwIw\naDEaMBgGA1UEAwwRSW50ZWwgU0dYIFJvb3QgQ0ExGjAYBgNVBAoMEUludGVsIENv\ncnBvcmF0aW9uMRQwEgYDVQQHDAtTYW50YSBDbGFyYTELMAkGA1UECAwCQ0ExCzAJ\nBgNVBAYTAlVTMB4XDTI1MDUwNjA5MjUwMFoXDTMyMDUwNjA5MjUwMFowbDEeMBwG\nA1UEAwwVSW50ZWwgU0dYIFRDQiBTaWduaW5nMRowGAYDVQQKDBFJbnRlbCBDb3Jw\nb3JhdGlvbjEUMBIGA1UEBwwLU2FudGEgQ2xhcmExCzAJBgNVBAgMAkNBMQswCQYD\nVQQGEwJVUzBZMBMGByqGSM49AgEGCCqGSM49AwEHA0IABENFG8xzydWRfK92bmGv\nP+mAh91PEyV7Jh6FGJd5ndE9aBH7R3E4A7ubrlh/zN3C4xvpoouGlirMba+W2lju\nypajgbUwgbIwHwYDVR0jBBgwFoAUImUM1lqdNInzg7SVUr9QGzknBqwwUgYDVR0f\nBEswSTBHoEWgQ4ZBaHR0cHM6Ly9jZXJ0aWZpY2F0ZXMudHJ1c3RlZHNlcnZpY2Vz\nLmludGVsLmNvbS9JbnRlbFNHWFJvb3RDQS5kZXIwHQYDVR0OBBYEFH44gtX7VSlK\nQEmORYQD6RSRvfRVMA4GA1UdDwEB/wQEAwIGwDAMBgNVHRMBAf8EAjAAMAoGCCqG\nSM49BAMCA0kAMEYCIQDdmmRuAo3qCO8TC1IoJMITAoOEw4dlgEBHzSz1TuMSTAIh\nAKVTqOkt59+co0O3m3hC+v5Fb00FjYWcgeu3EijOULo5\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIICjzCCAjSgAwIBAgIUImUM1lqdNInzg7SVUr9QGzknBqwwCgYIKoZIzj0EAwIw\naDEaMBgGA1UEAwwRSW50ZWwgU0dYIFJvb3QgQ0ExGjAYBgNVBAoMEUludGVsIENv\ncnBvcmF0aW9uMRQwEgYDVQQHDAtTYW50YSBDbGFyYTELMAkGA1UECAwCQ0ExCzAJ\nBgNVBAYTAlVTMB4XDTE4MDUyMTEwNDUxMFoXDTQ5MTIzMTIzNTk1OVowaDEaMBgG\nA1UEAwwRSW50ZWwgU0dYIFJvb3QgQ0ExGjAYBgNVBAoMEUludGVsIENvcnBvcmF0\naW9uMRQwEgYDVQQHDAtTYW50YSBDbGFyYTELMAkGA1UECAwCQ0ExCzAJBgNVBAYT\nAlVTMFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEC6nEwMDIYZOj/iPWsCzaEKi7\n1OiOSLRFhWGjbnBVJfVnkY4u3IjkDYYL0MxO4mqsyYjlBalTVYxFP2sJBK5zlKOB\nuzCBuDAfBgNVHSMEGDAWgBQiZQzWWp00ifODtJVSv1AbOScGrDBSBgNVHR8ESzBJ\nMEegRaBDhkFodHRwczovL2NlcnRpZmljYXRlcy50cnVzdGVkc2VydmljZXMuaW50\nZWwuY29tL0ludGVsU0dYUm9vdENBLmRlcjAdBgNVHQ4EFgQUImUM1lqdNInzg7SV\nUr9QGzknBqwwDgYDVR0PAQH/BAQDAgEGMBIGA1UdEwEB/wQIMAYBAf8CAQEwCgYI\nKoZIzj0EAwIDSQAwRgIhAOW/5QkR+S9CiSDcNoowLuPRLsWGf/Yi7GSX94BgwTwg\nAiEA4J0lrHoMs+Xo5o/sX6O9QWxHRAvZUGOdRQ7cvqRXaqI=\n-----END CERTIFICATE-----\n",
"tcb_evaluation_data_numbers_json": "{\"tcbEvaluationDataNumbers\":{\"id\":\"SGX\",\"version\":1,\"issueDate\":\"2025-05-27T19:04:23Z\",\"nextUpdate\":\"2025-06-26T19:04:23Z\",\"tcbEvalNumbers\":[{\"tcbEvaluationDataNumber\":19,\"tcbRecoveryEventDate\":\"2025-05-13T00:00:00Z\",\"tcbDate\":\"2025-05-14T00:00:00Z\"},{\"tcbEvaluationDataNumber\":18,\"tcbRecoveryEventDate\":\"2024-11-12T00:00:00Z\",\"tcbDate\":\"2024-11-13T00:00:00Z\"},{\"tcbEvaluationDataNumber\":17,\"tcbRecoveryEventDate\":\"2024-03-12T00:00:00Z\",\"tcbDate\":\"2024-03-13T00:00:00Z\"}]},\"signature\":\"19799ae10942dc046340aa279123fe743e2ab51c862ab6a04abdaab86083013ca81ac1963aa08f1a3b44f0c12e9c6d094cb98aa5ca51bc40439833ada6f0e9e1\"}"
}

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

Some files were not shown because too many files have changed in this diff.