Documentation
Welcome to the EDC, a framework for building globally-scalable data sharing services.
Many organizations face the challenge of securely sharing data with their partners or other trusted third parties. In
the past, this has been the realm of proprietary EDI solutions. EDC is an alternative to these systems built on the
concept of dataspaces. EDC is a set of components that enable developers to create
dataspaces using the following building blocks:
- Identity service for managing and verifying organizational credentials using DIDs and W3C Verifiable Credentials or OAuth2 tokens.
- Catalog service for publishing and securing assets that can be shared with other organizations.
- Control plane services for the automated creation and processing of data usage agreements that grant access to
data.
- Data plane and monitoring services for initiating and managing data transfers using off-the-shelf protocols such as HTTP, Kafka, cloud object storage, or virtually any other technology.
EDC is designed to serve a range of use cases, including large AI data sets, API access, supply-chain data processing,
and research data sharing.
EDC components are standards-based and implement the Dataspace Protocol Specification and the Decentralized Claims Protocol Specification.
What EDC is not
EDC is not a data processing platform, integration framework, or messaging bus. EDC is also not a prepackaged system or
application. Rather, it is a toolbox for building customized distributions. As a generic toolbox, EDC:
- Does not ship an installable distribution; those are provided by downstream projects that customize EDC to their
needs.
- Does not contain use case-specific features; those are added through EDC’s modularity and extension system.
- Does not provide infrastructure for storing, processing, or moving data; EDC integrates with third-party data planes
to provide these services.
What Next?
If you are new to EDC, start with the Adopters Manual. If you are an experienced EDC developer and
want to take a deep-dive into the codebase, see the Contributors Manual.
1 - Adopters Manual
The Samples
The quickest way to get started building with EDC is to work through
the samples. The samples cover everything from basic scenarios involving
sharing files to advanced streaming and large data use cases.
The MVD
The EDC Minimal Viable Dataspace (MVD) sets up and runs a
complete demonstration dataspace between two organizations. The MVD includes automated setup of a complete dataspace
environment in a few minutes.
Overview: Key Components
EDC is architected as modules called extensions that can be combined and customized to create components that perform specific tasks. These components (the “C” in EDC) are not what is commonly referred to as "microservices." Rather, EDC components may be deployed as separate services or collocated in a runtime process. This section provides a quick overview of the key EDC components.
The Connector
The Connector is a pair of components that control data sharing and execute data transfer. These components are the
Control Plane and Data Plane, respectively. In keeping with EDC’s modular design philosophy, connector
components may be deployed in a single monolith (for simple use cases) or provisioned as clusters of individual
services. It is recommended to separate the Control Plane and Data Plane so they can be individually managed and scaled.
The Control Plane
The Control Plane is responsible for creating contract agreements that grant access to data, managing data transfers, and monitoring usage policy compliance. For example, a data consumer’s control plane initiates a contract negotiation with a data provider’s connector. The negotiation is an asynchronous process that results in a contract agreement if approved. The consumer connector then uses the contract agreement to initiate a data transfer with the provider connector. A data transfer can be a one-shot (finite) transfer, such as a discrete set of data, or an ongoing (non-finite) data stream. The provider control plane can pause, resume, or terminate transfers in response to certain conditions, for example, if a contract agreement expires.
The Data Plane
The Data Plane is responsible for executing data transfers, which are managed by the Control Plane. A Data Plane sends
data using specialized technology such as a messaging system or data integration platform. EDC includes the Data Plane
Framework (DPF) for building custom Data Planes. Alternatively, a Data Plane can be built using other languages or
technologies and integrated with the EDC Control Plane by implementing
the Data Plane Signaling API.
Federated Catalog
The Federated Catalog (FC) is responsible for crawling and caching data catalogs from other participants. The FC builds
a local cache that can be queried or processed without resorting to complex distributed queries across multiple
participants.
Identity Hub
The Identity Hub securely stores and manages W3C Verifiable Credentials, including the presentation of VCs and the
issuance and re-issuance process.
The Big Picture: The Dataspace Context
EDC components are deployed to create a dataspace ecosystem. It is important to understand that there is no such thing
as “dataspace software.” At its most basic level, a dataspace is simply a context between two participants:
The Federated Catalog fetches data catalogs from other participants. A Connector negotiates a contract agreement
for data access between two participants and manages data transfers using a data plane technology. The Identity Hub
presents verifiable credentials that a participant connector uses to determine whether it trusts and should grant data
access to a counterparty.
The above EDC components can be deployed in a single runtime process (e.g., K8S ReplicaSet) or a distributed topology (multiple ReplicaSets or clusters). The connector components can be further decomposed. For example, multiple control plane components can be deployed within an organization in a federated manner where departments or subdivisions manage specific instances termed Management Domains.
Customizing the EDC
EDC was designed with the philosophy that one size does not fit all. Before deploying an EDC-powered data sharing
ecosystem, you’ll need to build customizations and bundle them into one or more distributions. Specifically:
- Policies - Create a set of policies for data access and usage control. EDC adopts a code-first approach, which
involves writing policy functions.
- Verifiable Credentials - Define a set of W3C Verifiable Credentials for your use cases that your policy functions
can process. For example, a credential that identifies a particular partner type.
- Data transfer types - Define a set of data transfer technologies or types that must be supported. For example,
choose out-of-the-box support for HTTP, S3-based transfers, or Kafka. Alternatively, you can select your preferred
wire protocol and implement a custom data plane.
- Backend connectivity - You may need to integrate EDC components with back-office systems. This is done by writing
custom extensions.
Third parties and other open source projects distribute EDC extensions that can be included in a distribution. These
will typically be hosted on Maven Central.
1.1 - Dataspaces
A brief introduction to what a dataspace is and how it relates to EDC.
The concept of a dataspace is the starting point for learning about the EDC. A dataspace is a context between one or more participants that share data. A participant is typically an organization, but it could be any entity, such as a service or machine.
Dataspace Protocol (DSP): The Lingua Franca for Data Sharing
The messages exchanged in a dataspace are defined by the Dataspace Protocol Specification (DSP). EDC implements and builds on these asynchronous messaging patterns, so it will help to become acquainted with the specification. DSP defines how to retrieve data catalogs, conduct negotiations to create contract agreements that grant access to data, and send data over various lower-level wire protocols. While DSP focuses on the messaging layer for controlling data access, it does not specify how “trust” is established between participants. By trust, we mean on what basis a provider makes the decision to grant access to data, for example, by requiring the presentation of verifiable credentials issued by a third-party. This is specified by the Decentralized Claims Protocol (DCP), which layers on DSP. We won’t cover the two specifications here, other than to highlight a few key points that are essential to understanding how EDC works.
After reading this document, we recommend consulting the DSP and DCP specifications for further information.
The Question of Identity
One of the most important things to understand is how identities work in a dataspace and EDC. A participant has a single identity, which is a URI. EDC supports multiple identity systems, including OAuth2 and the Decentralized Claims Protocol (DCP). If DCP is used, the identity will be a Web DID.
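To make this concrete, here is a minimal sketch of what a Web DID identity involves. The DID itself (for example, did:web:example.com) is the participant ID; it resolves to a DID document that the participant hosts (for did:web, at a well-known HTTPS location) and that publishes public keys and service endpoints. The key and service entries below are illustrative, not a normative layout:
{
  "@context": ["https://www.w3.org/ns/did/v1"],
  "id": "did:web:example.com",
  "verificationMethod": [
    {
      "id": "did:web:example.com#key-1",
      "type": "JsonWebKey2020",
      "controller": "did:web:example.com",
      "publicKeyJwk": { "kty": "EC", "crv": "P-256", "x": "...", "y": "..." }
    }
  ],
  "service": [
    {
      "id": "did:web:example.com#credential-service",
      "type": "CredentialService",
      "serviceEndpoint": "https://identityhub.example.com/api/presentation"
    }
  ]
}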
An EDC component, such as a control plane, acts as a participant agent; in other words, it is a system that runs on behalf of a participant. Therefore, each component will use a single identity. This concept is important and nuanced. Let’s consider several scenarios.
Simple Scenarios
Single Deployment
An organization deploys a single-instance control plane. This is the simplest possible setup, although it is not very reliable or scalable. In this scenario, the connector has exactly one identity. Now take the case where an organization decides on a more robust deployment with multiple control plane instances hosted as a Kubernetes ReplicaSet.
The control plane instances still share the same identity.
Distributed Deployment
EDC supports the concept of management domains, which are realms of control. If different departments want to manage EDC components independently, the organization can define management domains where those components are deployed. Each management domain can be hosted on distinct Kubernetes clusters and potentially run in different cloud environments. Externally, the organization’s EDC infrastructure appears as a unified whole, with a single top-level catalog containing multiple sub-catalogs and data sharing endpoints.
In this scenario, departments deploy their own control plane clusters. Again, each instance is configured with the same identity across all management domains.
Multiple Operating Units
In some dataspaces, a single legal entity may have multiple subdivisions operating independently. For example, a multinational may have autonomous operating units in different geographic regions with different data access rights. In this case, each operating unit is a dataspace participant with a distinct identity. EDC components deployed by each operating unit will be configured with different identities. From a dataspace perspective, each operating unit is a distinct entity.
Common Misconceptions
Data transfers are only about sending static files
Data can be in a variety of forms. While the EDC can share static files, it also supports open-ended transfers such as streaming and API access. For example, many EDC use cases involve providing automated access to event streams or API endpoints, including pausing or terminating access based on continual evaluation of data use policies.
Dataspace software has to be installed
There is no such thing as dataspace “software” or a dataspace “application.” A dataspace is a decentralized context. Participants deploy the EDC and communicate with other participant systems using DSP and DCP.
EDC adds a lot of overhead
EDC is designed as a lightweight, non-resource-intensive engine. EDC adds no overhead to data transmission since specialized wire protocols handle the latter. For example, EDC can be used to grant access to an API endpoint or data stream. Once access is obtained, the consumer can invoke the API directly or subscribe to a stream without requiring the request to be proxied through EDC components.
Cross-dataspace communication vs. interoperability
There is no such thing as cross-dataspace communication. All data sharing takes place within a dataspace. However, that does not mean there is no such thing as dataspace interoperability. Let’s unpack this.
Consider two dataspaces, DS-1 and DS-2. It’s possible for a participant P-A, a member of DS-1, to share data with P-B, a member of DS-2, under one of the following conditions:
- P-A is also a member of DS-2, or
- P-B is also a member of DS-1
P-A shares data with P-B in the context of DS-1 or DS-2. Data does not flow between DS-1 and DS-2. It’s possible for one EDC instance to operate within multiple dataspaces as long as its identity remains the same (if not, different EDC deployments will be needed).
Interoperability is different. Two dataspaces are interoperable if:
- They have compatible identity systems. For example, if both dataspaces use DCP and Web DIDs, or a form of OAuth2 with federation between the Identity Providers.
- They have a common set of verifiable credentials (or claims) and credential issuers.
- They have an agreed set of data sharing policies.
If these conditions are met, it is possible for a single connector deployment to participate in two dataspaces.
1.2 - Modules, Runtimes, and Components
An overview of the EDC modularity system.
EDC is built on a module system that contributes features as extensions to a runtime. Runtimes are assembled to create a
component such as a control plane, a data plane, or an identity hub. A component may be composed of a single runtime
or a set of clustered runtimes:
The EDC module system provides a great deal of flexibility as it allows you to easily add customizations and target
diverse deployment topologies from small-footprint single-instance components to highly reliable, multi-cluster setups.
The documentation and samples cover in detail how EDC extensions are implemented and configured. At this point, it’s
important to remember that extensions are combined into one or more runtimes, which are then assembled into components.
A Note on Identifiers
The EDC uses identifiers based on this architecture. There are three identifier types: participant IDs, component IDs,
and runtime IDs. A participant ID corresponds to the organization’s identifier in a dataspace. This will vary by dataspace
but is often a Web DID. All runtimes of all components operated by an organization - regardless of where they are deployed
- use the same participant ID.
A component ID is associated with a particular component, for example, a control plane or data plane deployment. If an
organization deploys two data planes across separate clusters, they will be configured with two distinct component IDs.
All runtimes within a component deployment will share the same component ID. Component IDs are permanent and survive runtime
restarts.
A runtime ID is unique to each runtime instance. Runtime IDs are ephemeral and do not survive restarts. EDC uses runtime IDs to acquire cluster locks and for tracing, among other things.
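As an illustration, these identifiers typically surface as runtime configuration. The sketch below assumes the property names edc.participant.id, edc.component.id, and edc.runtime.id; the exact names and defaults may differ between EDC versions (the runtime ID, in particular, is usually generated automatically if not set):
# Dataspace-wide identity of the organization (often a Web DID)
edc.participant.id=did:web:example.com
# Stable identifier of this component deployment (e.g., a control plane cluster)
edc.component.id=control-plane-eu-1
# Ephemeral, per-instance identifier; typically generated at startup if omitted
edc.runtime.id=control-plane-eu-1-instance-42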
1.3 - Control Plane
Explains how data, policies, access control, and transfers are managed.
The control plane is responsible for assembling catalogs, creating contract agreements that grant access to data, managing data transfers, and monitoring usage policy compliance. Control plane operations are performed by interacting with the Management API. Consumer and provider control planes communicate using the Dataspace Protocol (DSP). This section provides an overview of how the control plane works and its key concepts.
The main control plane operations are depicted below:
The consumer control plane requests catalogs containing data offers, which are then used to negotiate contract agreements. A contract agreement is an artifact that acts as a token granting access to a data set. It encodes a set of usage policies (as ODRL) and is bound to the consumer via its Participant ID. Every control plane must be configured with a Participant ID, which is the unique identifier of the dataspace participant operating it. The exact type of identifier is dataspace-specific but will often be a Web DID
if the Decentralized Claims Protocol (DCP) is used as the identity system.
After obtaining a contract agreement, the consumer can initiate a data transfer. A data transfer controls the flow of data, but it does not send it. That task is performed by the consumer and provider data planes using a separate wire protocol. Data planes are typically specialized technology, such as a messaging system or data integration platform, deployed separately from the control plane. A control plane may use multiple data planes and communicate with them via a RESTful interface called the Data Plane Signaling API.
EDC is designed to handle all general forms of data. It’s important to note that a data transfer does not need to be file-based. It can be a stream, such as a market feed or an API that a client queries. Moreover, a data transfer does not need to be completed. It can exist indefinitely and be paused and resumed by the control plane at intervals. Now, let’s jump into the specifics of how the control plane works, starting briefly with the Management API and proceeding to catalogs.
Management API
The Management API is a RESTful interface for client applications to interact with the control plane. All client operations described in this section use the Management API. We won’t cover the API in detail here since there is an OpenAPI definition. The API can be secured using an authentication key or third-party OAuth2 identity provider, but it is important to note that it should never be exposed over the Internet or other non-trusted networks.
[TODO: Management API Link]
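As a rough illustration, a catalog request through the Management API might look like the following. The example assumes the default token-based authentication extension (which reads an x-api-key header) and a v3 endpoint path; both depend on how your distribution is configured and may differ between EDC versions:
POST /management/v3/catalog/request
Content-Type: application/json
x-api-key: <management api key>

{
  "@context": { "edc": "https://w3id.org/edc/v0.0.1/ns/" },
  "@type": "CatalogRequest",
  "counterPartyAddress": "https://provider.example.com/api/dsp",
  "protocol": "dataspace-protocol-http",
  "querySpec": { "offset": 0, "limit": 50 }
}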
Catalogs, Datasets, and Offers
A data provider uses its control plane to publish a data catalog that other dataspace participants access. Catalog requests are made using DSP (HTTP POST). The control plane will return a response containing a DCAT Catalog. The following is an example response with some sections omitted for brevity:
{
"@context": {...},
"dspace:participantId": "did:web:example.com",
"@id": "567bf428-81d0-442b-bdc8-437ed46592c9",
"@type": "dcat:Catalog",
"dcat:dataset": [
{
"@id": "asset-1",
"@type": "dcat:Dataset",
"description": "...",
"odrl:hasPolicy": {...},
"dcat:distribution": [{...}]
}
]
}
Catalogs contain Datasets, which represent data the provider wishes to make available to the requesting client. A Dataset has two important properties: odrl:hasPolicy, which is an ODRL usage policy, and one or more dcat:distribution entries that describe how to obtain the data. The catalog is serialized as JSON-LD. It is highly recommended that you become familiar with JSON-LD, and in particular, the JSON-LD Playground, since EDC makes heavy use of it.
Why does EDC use JSON-LD instead of plain JSON? There are two reasons. First, DSP is based on DCAT and ODRL, which rely on JSON-LD. Second, as you will see, many EDC entities can be extended with custom attributes added by end-users, and EDC needed a way to avoid property name clashes; JSON-LD provides the closest thing to a namespace feature for plain JSON.
Catalogs are not static documents. When a data consumer requests a catalog from a provider, the provider’s control plane dynamically generates a response based on the consumer’s identity and credentials. For example, a provider may offer specific datasets to a consumer or category of consumer (for example, if it is a tier-1 or tier-2 partner).
You will learn more about restricting access to datasets in the next section, but one way to do so is through the offer associated with a dataset. The following odrl:hasPolicy contains an Offer that specifies a dataset can only be used by an accredited manufacturer:
"odrl:hasPolicy": {
"@id": "...",
"@type": "odrl:Offer",
"odrl:obligation": {
"odrl:action": {
"@id": "use"
},
"odrl:constraint": {
"odrl:leftOperand": {
"@id": "ManufacturerAccredidation"
},
"odrl:operator": {
"@id": "odrl:eq"
},
"odrl:rightOperand": "active"
}
}
},
An offer defines a usage policy. Usage policies are the requirements and permissions - or, more precisely, the duties, rights, and obligations - a provider imposes on a consumer to grant access to data. In the example above, the provider requires the consumer to be an accredited manufacturer. In practice, policies translate down into checks and verifications at runtime. When a consumer issues a catalog request, it will supply its identity (e.g., a Web DID) and potentially a set of Verifiable Presentations (VPs). The provider control plane could check for a valid VP, or perform a back-office system lookup based on the client identity. Assuming the check passes, the dataset will be included in the catalog response.
A dataset will also be associated with one or more dcat:distributions:
"dcat:distribution": [
{
"@type": "dcat:Distribution",
"dct:format": {
"@id": "HttpData-PULL"
},
"dcat:accessService": {
"@id": "a6c7f3a3-8340-41a7-8154-95c6b5585532",
"@type": "dcat:DataService",
"dcat:endpointDescription": "dspace:connector",
"dcat:endpointUrl": "http://localhost:8192/api/dsp",
"dct:terms": "dspace:connector",
"dct:endpointUrl": "http://localhost:8192/api/dsp"
}
},
{
"@type": "dcat:Distribution",
"dct:format": {
"@id": "S3-PUSH"
},
"dcat:accessService": {
"@id": "a6c7f3a3-8340-41a7-8154-95c6b5585532",
"@type": "dcat:DataService",
"dcat:endpointDescription": "dspace:connector",
"dcat:endpointUrl": "http://localhost:8192/api/dsp",
"dct:terms": "dspace:connector",
"dct:endpointUrl": "http://localhost:8192/api/dsp"
}
}
]
A distribution describes the wire protocol a dataset is available over. In the above example, the dataset is available using HTTP Pull and S3 Push protocols (specified by the dct:format property). You will learn more about the differences between these protocols later. A distribution will be associated with a dcat:accessService, which is the endpoint where a contract granting access can be negotiated.
If you would like to understand the structure of DSP messages in more depth, we recommend looking at the JSON schemas and examples provided by the Dataspace Protocol Specification (DSP).
EDC Entities
So far, we have examined catalogs, datasets, and offers from the perspective of DSP messages. We will now shift focus to the primary EDC entities used to create them. EDC entities do not have a one-to-one correspondence with DSP concepts, and the reason for this will become apparent as we proceed.
Assets
An Asset is the primary building block for data sharing. An asset represents any data that can be shared. An asset is not limited to a single file or group of files. An asset could be a continual stream of data or an API endpoint. An asset does not even have to be physical data. It could be a set of computations performed at a later date. Assets are data descriptors loaded into EDC via its Management API (more on that later). Notice the emphasis on “descriptors”: assets are not the actual data to be shared but describe the data. The following excerpt shows an asset:
{
"@context": {
"edc": "https://w3id.org/edc/v0.0.1/ns/"
},
"@id": "899d1ad0-532a-47e8-2245-1aa3b2a4eac6",
"properties": {
"somePublicProp": "a very interesting value"
},
"privateProperties": {
"secretKey": "..."
},
"dataAddress": {
"type": "HttpData",
"baseUrl": "http://localhost:8080/test"
}
}
When a client requests a catalog, the control plane processes its asset entries to create datasets in a DSP catalog. An asset must have a globally unique ID. We strongly recommend using the JDK UUID implementation. Entries under the properties attribute will be used to populate dataset properties. The properties attribute is open-ended and can be used to add custom fields to datasets. Note that several well-known properties are included in the edc namespace: id, description, version, name, and contenttype (more on this in the next section on asset expansion).
In contrast, the privateProperties attribute contains properties that are not visible to clients (i.e., they will not be serialized in DSP messages). They can be used to internally tag and categorize assets. As you will see, tags are useful to select groups of assets in a query.
Why is the term Asset used and not Dataset? This is mostly for historical reasons since the EDC was originally designed before the writing of the DSP specification. However, it was decided to keep the two distinct since it provides a level of decoupling between the DSP and internal layers of EDC.
Remember that assets are just descriptors - they do not contain actual data. How does EDC know where the actual data is stored? The dataAddress object acts as a pointer to where the actual data resides. The DataAddress type is open-ended. It could point to an HTTP address (HttpDataAddress), S3 bucket (S3DataAddress), messaging topic, or some other form of storage. EDC supports a defined set of storage types. These can be extended to include support for virtually any custom storage. While data addresses can contain custom data, it’s important not to include secrets since data addresses are persisted. Instead, use a secure store for secrets and include a reference to it in the DataAddress.
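For example, a sketch of an HttpData address that keeps its credentials in a vault rather than inline might look like the following. The secretName property and its exact semantics are assumptions that depend on the data plane extension in use; the point is that the address references a vault entry instead of embedding the token itself:
"dataAddress": {
  "type": "HttpData",
  "baseUrl": "https://backend.example.com/orders",
  "authKey": "Authorization",
  "secretName": "orders-api-token"
}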
Understanding Expanded Assets
The @context property on an asset indicates that it is a JSON-LD type. JSON-LD (more precisely, JSON-LD terms) is used by EDC to enable namespaces for custom properties. The following excerpt shows an asset with a custom property, dataFeed:
{
"@context": {
"edc": "https://w3id.org/edc/v0.0.1/ns/",
"market-systems": "http://w3id.org/market-systems/v0.0.1/ns/"
},
"@id": "...",
"properties": {
"dataFeed": {
"feedName": "Market Data",
"feedType": "PRICING",
"feedFrequency": "DAILY"
}
}
}
Notice a reference to the market-systems context has been added to @context in the above example. This context defines the terms dataFeed, feedName, feedType, and feedFrequency. When the asset is added to the control plane via the EDC’s Management API, it is expanded according to the JSON-LD expansion algorithm. This is essentially a process of inlining the full term URIs into the JSON structure. The resulting JSON will look like this:
{
"@id": "...",
"https://w3id.org/edc/v0.0.1/ns/properties": [
{
"http://w3id.org/market-systems/v0.0.1/ns/dataFeed": [
{
"http://w3id.org/market-systems/v0.0.1/ns/feedName": [
{
"@value": "Market Data"
}
],
"http://w3id.org/market-systems/v0.0.1/ns/feedType": [
{
"@value": "PRICING"
}
],
"http://w3id.org/market-systems/v0.0.1/ns/feedFrequency": [
{
"@value": "DAILY"
}
]
}
]
}
]
}
Be careful when defining custom properties. If you forget to include a custom context and use simple property names (i.e., names that are not prefixed or a URI), they will be expanded using the EDC default context, https://w3id.org/edc/v0.0.1/ns/.
EDC persists the asset in expanded form. As will be shown later, queries for assets must reference property names in their expanded form.
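For example, a query for assets whose feedType is PRICING (from the earlier custom-property example) would reference the expanded name. The following Management API sketch assumes a v3 assets query endpoint and the QuerySpec format; the exact operand path syntax depends on the store implementation and EDC version, so treat it as indicative:
POST /management/v3/assets/request
Content-Type: application/json
x-api-key: <management api key>

{
  "@context": { "edc": "https://w3id.org/edc/v0.0.1/ns/" },
  "@type": "QuerySpec",
  "filterExpression": [
    {
      "operandLeft": "http://w3id.org/market-systems/v0.0.1/ns/feedType",
      "operator": "=",
      "operandRight": "PRICING"
    }
  ]
}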
Policies and Policy Definitions
Policies are a generic way of defining a set of duties, rights, or obligations. EDC and DSP express policies with ODRL. EDC uses policies for the following:
- As a dataset offer in a catalog to define the requirements to access data
- As a contract agreement that grants access to data
- To enable access control
Policies are loaded into EDC via the Management API using a policy definition, which contains an ODRL policy type:
{
"@context": {
"edc": "https://w3id.org/edc/v0.0.1/ns/"
},
"@type": "PolicyDefinition",
"policy": {
"@context": "http://www.w3.org/ns/odrl.jsonld",
"@id": "8c2ff88a-74bf-41dd-9b35-9587a3b95adf",
"duty": [
{
"target": "http://example.com/asset:12345",
"action": "use",
"constraint": {
"leftOperand": "headquarter_location",
"operator": "eq",
"rightOperand": "EU"
}
}
]
}
}
A policy definition allows the policy to be referenced by its @id when specifying the usage requirements for a set of assets or access control. Decoupling policies in this way allows for a great deal of flexibility. For example, specialists can create a set of corporate policies that are reused across an organization.
Contract Definitions
Contract definitions link assets and policies by declaring which policies apply to a set of assets. Contract definitions contain two types of policy:
- Contract policy
- Access policy
Contract policy determines what requirements a data consumer must fulfill and what rights it has for an asset. Contract policy corresponds directly to a dataset offer. In the previous example, a contract policy is used to require a consumer to be an accredited manufacturer. Access policy determines whether a data consumer can access an asset. For example, if a data consumer is a valid partner. The difference between contract and access policy is visibility: contract policy is communicated to a consumer via a dataset offer in a catalog, while access policy remains “hidden” and is only evaluated by the data provider’s runtime.
Now, let’s examine a contract definition:
{
"@context": {
"edc": "https://w3id.org/edc/v0.0.1/ns/"
},
"@type": "https://w3id.org/edc/v0.0.1/ns/ContractDefinition",
"@id": "test-id",
"edc:accessPolicyId": "access-policy-1234",
"edc:contractPolicyId": "contract-policy-5678",
"edc:assetsSelector": [
{
"@type": "https://w3id.org/edc/v0.0.1/ns/Criterion",
"edc:operandLeft": "id",
"edc:operator": "in",
"edc:operandRight": ["id1", "id2", "id3"]
},
{
"@type": "https://w3id.org/edc/v0.0.1/ns/Criterion",
"edc:operandLeft": "productCategory",
"edc:operator": "=",
"edc:operandRight": "gold"
}
]
}
The accessPolicyId and contractPolicyId properties refer to policy definitions. The assetsSelector property is a query (similar to a SQL SELECT statement) that returns a set of assets the contract definition applies to. This allows users to associate policies with specific assets or types of assets.
Since assetsSelectors are late-bound and evaluated at runtime, contract definitions can be created before assets exist. This is a particularly important feature since it allows data security to be put in place prior to loading a set of assets. It also enables existing policies to be applied to new assets.
Catalog Generation
We’re now in a position to understand how catalog generation in EDC works. When a data consumer requests a catalog from a provider, the latter will return a catalog result with datasets that the former can access. Catalogs are specific to the consumer and dynamically generated at runtime based on client credentials.
When a data consumer makes a catalog request via DSP, it will send an access token that provides access to the consumer’s verifiable credentials in the form of a verifiable presentation (VP). We won’t go into the mechanics of how the provider obtains a VP - that is covered by DCP and the EDC IdentityHub. When the provider receives the request, it generates a catalog containing datasets using the following steps:
The control plane first retrieves contract definitions and evaluates their access and contract policies against the consumer’s set of claims. These claims are populated from the consumer’s verifiable credentials and any additional data provided by custom EDC extensions. A custom EDC extension could look up claims such as partner tier in a back-office system. Next, the assetsSelector queries from each passing contract definition are evaluated to return a list of assets. These assets are iterated, and a dataset is created by combining the asset with the contract policy specified by the contract definition. The datasets are then collected into a catalog and returned to the client. Note that a single asset may result in multiple datasets if more than one contract definition selects it.
Careful consideration needs to be taken when designing contract definitions, particularly the level of granularity at which they operate. When a catalog request is made, the access and contract policies of all contract definitions are evaluated, and the passing ones are selected. The asset selector queries are then run for the resulting set. To optimize catalog generation, contract definitions should select groups of assets rather than correspond in a 1:1 relationship with an asset. In other words, limit contract definitions to a reasonable number and use them as a mechanism to filter groups of assets. Adding custom asset properties that serve as selection labels is an easy way to do this.
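For example, instead of enumerating asset IDs, a contract definition could select every asset that carries a particular label property. The following criterion sketch reuses the hypothetical market-systems feedType property from the asset example above; exact operand naming rules depend on the asset store and EDC version:
{
  "@type": "https://w3id.org/edc/v0.0.1/ns/Criterion",
  "edc:operandLeft": "http://w3id.org/market-systems/v0.0.1/ns/feedType",
  "edc:operator": "=",
  "edc:operandRight": "PRICING"
}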
Contract Negotiations
Once a consumer has received a catalog, it can request access to a dataset by initiating a contract negotiation via the Management API, which causes its control plane to send a DSP contract request to the provider. The contract negotiation takes the dataset offer as a parameter. When the request is received, the provider will respond with an acknowledgment. Contract negotiations are asynchronous, which means they are not completed immediately but sometime in the future. A contract negotiation progresses through a series of states defined by the DSP specification (which we will not cover). Both the consumer and provider can transition the negotiation. When a transition is attempted, the initiating control plane sends a DSP message to the counterparty.
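A sketch of the Management API request that initiates a negotiation is shown below. It assumes a v3 endpoint and a ContractRequest shape in which the policy is the offer copied from the provider’s catalog (including its @id); field names have changed across EDC versions, so treat this as illustrative:
POST /management/v3/contractnegotiations
Content-Type: application/json
x-api-key: <management api key>

{
  "@context": { "edc": "https://w3id.org/edc/v0.0.1/ns/" },
  "@type": "ContractRequest",
  "counterPartyAddress": "https://provider.example.com/api/dsp",
  "protocol": "dataspace-protocol-http",
  "policy": {
    "@context": "http://www.w3.org/ns/odrl.jsonld",
    "@type": "Offer",
    "@id": "<offer id from the catalog>",
    "assigner": "<provider participant id>",
    "target": "asset-1"
  }
}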
If a negotiation is successfully completed (termed finalized), a DSP contract agreement message is sent to the consumer. The message contains a contract agreement that can be used to access data by opening a transfer process:
{
"@context": "https://w3id.org/dspace/2024/1/context.json",
"@type": "dspace:ContractAgreementMessage",
"dspace:providerPid": "urn:uuid:a343fcbf-99fc-4ce8-8e9b-148c97605aab",
"dspace:consumerPid": "urn:uuid:32541fe6-c580-409e-85a8-8a9a32fbe833",
"dspace:agreement": {
"@id": "urn:uuid:e8dc8655-44c2-46ef-b701-4cffdc2faa44",
"@type": "odrl:Agreement",
"odrl:target": "urn:uuid:3dd1add4-4d2d-569e-d634-8394a8836d23",
"dspace:timestamp": "2023-01-01T01:00:00Z",
"odrl:permission": [{
"odrl:action": "odrl:use" ,
"odrl:constraint": [{
"odrl:leftOperand": "odrl:dateTime",
"odrl:operand": "odrl:lteq",
"odrl:rightOperand": { "@value": "2023-12-31T06:00Z", "@type": "xsd:dateTime" }
}]
}]
},
"dspace:callbackAddress": "https://example.com/callback"
}
EDC implements DSP message exchanges using a reliable quality of service. That is, all message operations and state machine transitions are performed reliably in a transaction context. EDC will only commit a state machine transition if a message is successfully acknowledged by the counterparty. If a send operation fails, the associated transition will be rolled back, and the message will be resent. As with all reliable messaging systems, EDC operations are idempotent.
Working with Asynchronous Messaging and Events
DSP and EDC are based on asynchronous messaging, and it is important to understand that and design your systems appropriately. One anti-pattern is to try to “simplify” EDC by creating a synchronous API that wraps the underlying messaging and blocks clients until a contract negotiation is complete. Put simply, don’t do that, as it will result in complex, inefficient, and incorrect code that will break EDC’s reliability guarantees. The correct way to interact with EDC and the control plane is expressed in the following sequence diagram:
EDC has an eventing system that code can plug into to receive callbacks when something happens, for example, when a contract negotiation is finalized. Extension code uses the EventRouter to subscribe to events. Two dispatch modes are supported: asynchronous notification and synchronous transactional notification. The latter mode can be used to reliably deliver the event to an external destination such as a message queue, database, or remote endpoint. Integrations will often take advantage of this feature by dispatching contract negotiation finalized events to another system that initiates a data transfer.
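The following is a minimal sketch of an extension that subscribes to negotiation-finalized events using the synchronous (transactional) mode. Type names, package locations, and event payload accessors have shifted between EDC versions, so treat the specifics as assumptions rather than a definitive API reference:
// SPI types such as EventRouter, EventSubscriber, and the event classes live in
// version-dependent packages; imports are omitted here for that reason.
public class NegotiationFinalizedExtension implements ServiceExtension {

    @Inject
    private EventRouter eventRouter;

    @Override
    public void initialize(ServiceExtensionContext context) {
        var monitor = context.getMonitor();
        // registerSync delivers the event within the state machine transaction;
        // register(...) would deliver it asynchronously instead.
        eventRouter.registerSync(ContractNegotiationFinalized.class, new EventSubscriber() {
            @Override
            public <E extends Event> void on(EventEnvelope<E> envelope) {
                if (envelope.getPayload() instanceof ContractNegotiationFinalized finalized) {
                    // Hand off to a back-office system that can, for example, open a transfer process
                    monitor.info("Contract negotiation finalized: " + finalized.getContractNegotiationId());
                }
            }
        });
    }
}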
Reliable Messaging
EDC implements reliable messaging for all interactions, so it is important to understand how this quality of service works. First, all messages have a unique ID and are idempotent. If a particular message is not acknowledged, it will be resent. Therefore, it is expected the receiving endpoint will perform de-duplication (which all EDC components do). Second, reliable messaging works across restarts. For example, if a runtime crashes before it can send a response, the response will be sent either by another instance (if running in a cluster) or by the runtime when it comes back online. Reliability is achieved by recording the state of all interactions using state machines to a transactional store such as Postgres. State transitions are initiated in the context of a transaction by sending a message to the counterparty, which is only committed after an acknowledgment is received.
Transfer Processes
After a contract negotiation has been finalized, a consumer can request data associated with an asset by opening a transfer process via the Management API. A finite transfer process completes after the data, such as a file, has been transferred. Other types of data transfers, such as a data stream or access to an API endpoint, may be ongoing. These types of transfer processes are termed non-finite because there is no specified completion point. They continue until they are explicitly terminated or canceled.
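A sketch of the Management API call that opens a transfer process is shown below. It assumes a v3 endpoint, a transferType taken from the catalog distribution (HttpData-PULL here), and the agreement ID obtained from the finalized negotiation; exact field names vary across EDC versions:
POST /management/v3/transferprocesses
Content-Type: application/json
x-api-key: <management api key>

{
  "@context": { "edc": "https://w3id.org/edc/v0.0.1/ns/" },
  "@type": "TransferRequest",
  "counterPartyAddress": "https://provider.example.com/api/dsp",
  "protocol": "dataspace-protocol-http",
  "contractId": "<contract agreement id>",
  "transferType": "HttpData-PULL"
}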
Pay careful attention to how data is modeled. In particular, model your assets in a way that minimizes the number of contract negotiations and transfer processes that need to be created. For large data sets such as machine-learning data, this is relatively straightforward: an asset can represent each individual data set. Consumers will typically need to transfer the data once or infrequently, so the number of contract negotiations and transfer processes will remain small, typically one contract negotiation and a few transfers.
Now, let’s take as an example a supplier that wishes to expose parts data to their partners. Do not model each part as a separate asset, as that would require at least one contract negotiation and transfer process per part. If there are millions of parts, the number of contract negotiations and transfer processes will quickly grow out of control. Instead, have a single asset represent aggregate data, such as all parts, or a significant subset, such as a part type. Only one contract negotiation will be needed, and if the transfer process is non-finite and kept open, consumers can make multiple parts data requests (over the course of hours, days, months, etc.) without incurring additional overhead.
Consumer Pull and Provider Push Transfers
We’ll explain how to open a transfer process in the next section. First, it is important to understand the two modes for sending data from a provider to a consumer that EDC supports. Consumer pull transfers require the consumer to initiate the data send operation. A common example of this is when a consumer makes an HTTP request to an endpoint and receives a response or pulls a message off a queue:
The second type, provider push transfers, involves the provider pushing data to the consumer:
An example of the latter is when a consumer wishes to receive a dataset at an object storage endpoint that it controls. This data may take the provider some time to create and process, so the consumer sends an access token to the provider when it opens a transfer process, which the provider then uses to push the data to the consumer when it is ready.
The Role of the Data Plane
Once a transfer process is initiated on the control planes of the consumer and provider, the respective data planes handle data send and receive operations. In the provider push scenario, the consumer control plane will signal to its data plane to be ready to receive data at an endpoint. The provider control plane will then signal to its data plane to begin the push operation. In the consumer pull scenario, the provider control plane will first signal to its data plane to make data available at an endpoint. The consumer control plane will then signal to its data plane to begin pulling the data from the provider endpoint.
Transfer Process States
Now that we have covered how transfer processes work at a high level, let’s look at the specifics. A transfer process is a shared state machine between the consumer and provider control planes. A transfer process will transition between states in response to a message received from the counterparty or as the result of a Management API operation. For example, a consumer will create a transfer process request via its Management API and send a request message to the provider. If the provider acknowledges the request with an OK, the transfer process state machine will be set to the REQUESTED state on both the consumer and provider. When the provider control plane is ready, it will send a message to the consumer, and the state machine will be transitioned to STARTED on both control planes.
The following are the most important transfer process states:
- REQUESTED - The consumer has requested a data transfer from the provider.
- STARTED - The consumer has received a start message from the provider. The data is available and can be pulled by the consumer or will be pushed by the provider.
- SUSPENDED - The consumer or provider has received a suspend message from the counterparty. All in-process data send operations will be paused.
- RESUMED - The consumer or provider has received a resume message from the counterparty. All in-process data send operations will be restarted.
- COMPLETED - The data transfer has been completed.
- TERMINATED - The consumer or provider has received a termination message from the counterparty. All in-process data send operations will be stopped.
There are a number of internal states that the consumer or provider can transition into without notifying the other party. The two most important are:
- PROVISIONED - When a data transfer request is made through the Management API on the consumer, its state machine will first transition to the PROVISIONED state to perform any required setup. After this is completed, the consumer control plane will dispatch a request to the provider and transition to the REQUESTED state. The state machine on the provider will transition to the PROVISIONED state after receiving a request and asynchronously completing any required data pre-processing.
- DEPROVISIONED - After a transfer has completed, the provider state machine will transition to the deprovisioned state to clean up any remaining resources.
As with the contract negotiation state machine, custom code can react to transition events using the EventRouter. There are also two further options for executing operations during the provisioning step on the consumer or provider. First, a Provisioner extension can be used to perform a task. EDC also includes the HttpProviderProvisioner, which invokes a configured HTTP endpoint when a provider control plane enters the provisioning step. The endpoint can front code that performs a task and asynchronously invoke a callback on the control plane when it is finished.
Policy Monitor
It may be desirable to conduct ongoing policy checks for non-finite transfer processes. Streaming data is a typical example where such checks may be needed. If a stream is active for a long duration (such as a manufacturing data feed), the provider may want to check if the consumer is still a partner in good standing or has maintained an industry certification. The EDC PolicyMonitor can be embedded in the control plane or run in a standalone runtime to periodically check consumer credentials.
1.3.1 - Policy Engine
EDC includes a policy engine for evaluating policy expressions. It’s important to understand its design center, which
takes a code-first approach. Unlike other policy engines that use a declarative language, the EDC policy engine executes
code that is contributed as extensions called policy functions. If you are familiar with compiler design and visitors,
you will quickly understand how the policy engine works. Internally, policy expressed as ODRL is deserialized into a
POJO-based object tree (similar to an AST) and walked by the policy engine.
Let’s take one of the previous policy examples:
{
"@context": {
"edc": "https://w3id.org/edc/v0.0.1/ns/"
},
"@type": "PolicyDefinition",
"policy": {
"@context": "http://www.w3.org/ns/odrl.jsonld",
"@id": "8c2ff88a-74bf-41dd-9b35-9587a3b95adf",
"duty": [
{
"target": "http://example.com/asset:12345",
"action": "use",
"constraint": {
"leftOperand": "headquarter_location",
"operator": "eq",
"rightOperand": "EU"
}
}
]
}
}
When the policy constraint is reached during evaluation, the policy engine will dispatch to a function registered under the key headquarter_location. Policy functions implement the AtomicConstraintRuleFunction interface:
@FunctionalInterface
public interface AtomicConstraintRuleFunction<R extends Rule, C extends PolicyContext> {
/**
* Performs the evaluation.
*
* @param operator the operation
* @param rightValue the right-side expression for the constraint
* @param rule the rule associated with the constraint
* @param context the policy context
*/
boolean evaluate(Operator operator, Object rightValue, R rule, C context);
}
A function that evaluates the previous policy will look like the following snippet:
public class TestPolicy implements AtomicConstraintRuleFunction<Duty, ParticipantAgentPolicyContext> {
public static final String HEADQUARTERS = "headquarters";
@Override
public boolean evaluate(Operator operator, Object rightValue, Duty rule, ParticipantAgentPolicyContext context) {
if (!(rightValue instanceof String headquarterLocation)) {
context.reportProblem("Right-value expected to be String but was " + rightValue.getClass());
return false;
}
var participantAgent = context.participantAgent();
var claim = participantAgent.getClaims().get(HEADQUARTERS);
if (claim == null) {
return false;
}
// ... evaluate claim and if the headquarters are in the EU, return true
return true;
}
}
Note that PolicyContext has its own hierarchy that is tightly bound to the policy scope.
Policy Scopes and Bindings
In EDC, policy rules are bound to a specific context termed a scope. EDC defines numerous scopes, such as one for
contract negotiations and provisioning of resources. To understand how scopes work, consider the following case,
“to access data, a consumer must be a business partner in good standing”:
{
"constraint": {
"leftOperand": "BusinessPartner",
"operator": "eq",
"rightOperand": "active"
}
}
In the above scenario, the provider EDC’s policy engine should verify a partner credential when a request is made to
initiate a contract negotiation. The business partner rule must be bound to the contract negotiation scope since
policy rules are only evaluated for each scope they are bound to. However, validating a business partner credential
may not be needed when data is provisioned if it has already been checked when starting a transfer process. To avoid an
unnecessary check, do not bind the business partner rule to the provision scope. This will result in the rule being
filtered and ignored during policy evaluation for that scope.
The relationship between scopes, rules, and functions is shown in the following diagram:
Rules are bound to scopes, and unbound rules are filtered when the policy engine evaluates a particular scope. Scopes are bound to contexts, and functions are bound to rules for a particular scope/context. This means that separate functions can be associated with the same rule in different scopes. Furthermore, both scopes and contexts are hierarchical and denoted with a DOT notation. A rule bound to a parent context will be evaluated in child scopes.
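To make the binding model concrete, the following sketch registers the TestPolicy function from the previous section and binds its constraint key to a contract negotiation scope. The PolicyEngine and RuleBindingRegistry method signatures, as well as the scope name, are indicative and differ between EDC versions:
// Inside a ServiceExtension; signatures and the scope constant are assumptions.
private static final String NEGOTIATION_SCOPE = "contract.negotiation";

@Inject
private PolicyEngine policyEngine;

@Inject
private RuleBindingRegistry ruleBindingRegistry;

@Override
public void initialize(ServiceExtensionContext context) {
    // Bind the action and the constraint key so the rule is not filtered out in this scope
    ruleBindingRegistry.bind("use", NEGOTIATION_SCOPE);
    ruleBindingRegistry.bind("headquarter_location", NEGOTIATION_SCOPE);

    // Register the function that evaluates the constraint for Duty rules in this context
    policyEngine.registerFunction(ParticipantAgentPolicyContext.class, Duty.class, "headquarter_location", new TestPolicy());
}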
Be careful when implementing policy functions, particularly those bound to the catalog request scope (request.catalog), which may involve evaluating a large set of policies in the course of a synchronous request. Policy functions should be efficient and avoid unnecessary remote communication. When a policy function makes a database call or invokes a back-office system (e.g., for a security check), consider introducing a caching layer to improve performance if testing indicates the function may be a bottleneck. This is less of a concern for policy scopes associated with asynchronous requests where latency is generally not an issue.
In Force Policy
The InForce policy is an interoperable policy for specifying in-force periods for contract agreements. An in-force period can be defined as a duration or a fixed date.
All dates must be expressed as UTC.
Duration
A duration is a period of time starting from an offset. EDC defines a simple expression language for specifying the
offset and duration in time units:
<offset> + <numeric value>ms|s|m|h|d
The following values are supported for <offset>:

Value | Description
---|---
contractAgreement | The start of the contract agreement, defined as the timestamp when the provider enters the AGREED state, expressed in UTC epoch seconds
The following values are supported for the time unit:
Value | Description
---|---
ms | milliseconds
s | seconds
m | minutes
h | hours
d | days
A duration is defined in a policy (referenced from a ContractDefinition) using the left-hand operand https://w3id.org/edc/v0.0.1/ns/inForceDate:
{
"@context": {
"cx": "https://w3id.org/cx/v0.8/",
"@vocab": "http://www.w3.org/ns/odrl.jsonld"
},
"@type": "Offer",
"@id": "a343fcbf-99fc-4ce8-8e9b-148c97605aab",
"permission": [
{
"action": "use",
"constraint": {
"and": [
{
"leftOperand": "https://w3id.org/edc/v0.0.1/ns/inForceDate",
"operator": "gte",
"rightOperand": {
"@value": "contractAgreement",
"@type": "https://w3id.org/edc/v0.0.1/ns/inForceDate:dateExpression"
}
},
{
"leftOperand": "https://w3id.org/edc/v0.0.1/ns/inForceDate:inForceDate",
"operator": "lte",
"rightOperand": {
"@value": "contractAgreement + 100d",
"@type": "https://w3id.org/edc/v0.0.1/ns/inForceDate:dateExpression"
}
}
]
}
}
]
}
Fixed Date
Fixed dates may also be specified as follows using https://w3id.org/edc/v0.0.1/ns/inForceDate operands:
{
"@context": {
"edc": "https://w3id.org/edc/v0.0.1/ns/inForceDate",
"@vocab": "http://www.w3.org/ns/odrl.jsonld"
},
"@type": "Offer",
"@id": "a343fcbf-99fc-4ce8-8e9b-148c97605aab",
"permission": [
{
"action": "use",
"constraint": {
"and": [
{
"leftOperand": "https://w3id.org/edc/v0.0.1/ns/inForceDate",
"operator": "gte",
"rightOperand": {
"@value": "2023-01-01T00:00:01Z",
"@type": "xsd:datetime"
}
},
{
"leftOperand": "https://w3id.org/edc/v0.0.1/ns/inForceDate",
"operator": "lte",
"rightOperand": {
"@value": "2024-01-01T00:00:01Z",
"@type": "xsd:datetime"
}
}
]
}
}
]
}
Although xsd:datetime supports specifying timezones, UTC should be used. It is an error to use an xsd:datetime without specifying the timezone.
No Period
If no period is specified, the contract agreement is interpreted as having an indefinite in-force period and will remain valid until its other constraints evaluate to false.
Not Before and Until
Not Before and Until semantics can be defined by specifying a single https://w3id.org/edc/v0.0.1/ns/inForceDate fixed date constraint and an appropriate operator. For example, the following policy defines a contract that is not in force before January 1, 2023:
{
"@context": {
"edc": "https://w3id.org/edc/v0.0.1/ns/",
"@vocab": "http://www.w3.org/ns/odrl.jsonld"
},
"@type": "Offer",
"@id": "a343fcbf-99fc-4ce8-8e9b-148c97605aab",
"permission": [
{
"action": "use",
"constraint": {
"leftOperand": "edc:inForceDate",
"operator": "gte",
"rightOperand": {
"@value": "2023-01-01T00:00:01Z",
"@type": "xsd:datetime"
}
}
}
]
}
Examples
Please note that the samples use the abbreviated prefix notation "edc:inForceDate" instead of the full namespace "https://w3id.org/edc/v0.0.1/ns/inForceDate".
1.4 - Data Plane
Describes how the EDC integrates with off-the-shelf protocols such as HTTP, Kafka, cloud object storage, and other technologies to transfer data between parties.
A data plane is responsible for transmitting data using a wire protocol at the direction of the control plane. Data planes can vary greatly, from a simple serverless function to a data streaming platform or an API that clients access. One control plane may manage multiple data planes that specialize in the type of data sent or the wire protocol requested by the data consumer. This section provides an overview of how data planes work and the role they play in a dataspace.
Separation of Concerns
Although a data plane can be collocated in the same process as a control plane, this is not a recommended setup. Typically, a data plane component is deployed as a separate set of instances to an independent environment such as a Kubernetes cluster. This allows the data plane to be operated and scaled independently from the control plane. At runtime, a data plane must register with a control plane, which in turn directs the data plane using the Data Plane Signaling API. EDC does not ship with an out-of-the-box data plane. Rather, it provides the Data Plane Framework (DPF), a platform for building custom data planes. You can choose to start with the DPF or build your own data plane using your programming language of choice. In either case, understanding the data plane registration process and Signaling API are the first steps.
Data Plane Registration
In the EDC model, control planes and data planes are dynamically associated. At startup, a data plane registers itself with a control plane using its component ID. Registration is idempotent and persistent and made available to all clustered control plane runtimes via persistent storage. After a data plane is registered, the control plane periodically sends a heartbeat and culls the registration if the data plane is unavailable.
The data plane registration includes metadata about its capabilities, including:
- The supported wire protocols and supported transfer types. For example, “HTTP-based consumer pull” or “S3-based provider push”
- The supported data source types.
The control plane uses data plane metadata for two purposes. First, it is used to determine which data transfer types are available for an asset when generating a catalog. Second, the metadata is used to select a data plane when a transfer process is requested.
Data Plane Signaling
A control plane communicates with a data plane through a RESTful interface called the Data Plane Signaling API. Custom data planes can be written that integrate with the EDC control plane by implementing the registration protocol and the signaling API.
The Data Plane Signaling flow is shown below:
When a transfer process is started, and a data plane is selected, a start message will be sent. If the transfer process is a consumer-pull type where data is accessed by the consumer, the response will contain an Endpoint Data Reference (EDR) that contains the coordinates to the data and an access token if one is required. The control plane may send additional signals, such as SUSPEND and RESUME, or TERMINATE, in response to events. For example, the control plane policy monitor could send a SUSPEND or TERMINATE message if a policy violation is encountered.
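For illustration, the EDR returned for an HTTP pull transfer is essentially a data address carrying the endpoint plus an access token. The property names below are assumptions and vary by data plane implementation and EDC version:
{
  "@context": { "edc": "https://w3id.org/edc/v0.0.1/ns/" },
  "@type": "DataAddress",
  "type": "https://w3id.org/idsa/v4.1/HTTP",
  "endpoint": "https://provider-data-plane.example.com/public",
  "authorization": "<short-lived access token>"
}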
The Data Plane Framework (DPF)
EDC includes a framework for building custom data planes called the DPF. DPF supports end-to-end streaming transfers (i.e., data content is streamed rather than materialized in memory) for scalability and both pull- and push-style transfers. The framework has extensibility points for supporting different data sources and sinks (e.g., S3, HTTP, Kafka) and can perform direct streaming between different source and sink types.
The EDC samples contain examples of how to use the DPF.
1.5 - Identity Hub
Identity Hub (IH) manages organization identity resources such as credentials for a dataspace participant. It is designed for machine-to-machine interactions and does not manage personal verifiable credentials. Identity Hub implements the Decentralized Claims Protocol (DCP) and is based on key decentralized identity standards, including W3C DIDs, the W3C did:web Method, and the W3C Verifiable Credentials Data Model v1.1 specifications, so we recommend familiarizing yourself with those technologies first.
One question that frequently comes up is whether Identity Hub supports OpenID for Verifiable Credentials (OID4VC). The short answer is No. That’s because OID4VC mandates human (end-user) interactions, while Identity Hub is designed for machine-to-machine interactions where humans are not in the loop. Identity Hub is built on many of the same decentralized identity standards as OID4VC but implements DCP, a protocol specifically designed for non-human flows.
Identity Hub securely stores and manages W3C Verifiable Credentials, including handling presentation and issuance. But Identity Hub is more than an enterprise “wallet” since it handles key material and DID documents. Identity Hub manages the following identity resources:
- Verifiable Credentials. Receiving and managing issued credentials and generating Verifiable Presentations (VPs).
- Key Pairs. Generating, rotating, and revoking signing keys.
- DID Documents. Generating and publishing DID documents.
The EDC MVD Project provides a full test dataspace setup with Identity Hub. It’s an excellent tool to experiment with Identity Hub and decentralized identity technologies.
As we will see, Identity Hub can be deployed to diverse topologies, from embedded in a small footprint edge connector to an organization-wide clustered system. Before getting into these details, let’s review the role of Identity Hub.
Identities and Credentials in a Dataspace: The Role of Identity Hub
Note this section assumes a solid understanding of security protocols, DIDs, verifiable credentials, and modern cryptography concepts.
Identity Hub is built on the Decentralized Claims Protocol (DCP). This protocol overlays the Dataspace Protocol (DSP) by adding security and trust based on a decentralized identity model. To see how a decentralized identity system works, we will contrast it with a centralized approach.
Protocols such as traditional OAuth2 grants adopt a centralized model where a single identity provider or set of federated providers issue tokens on behalf of a party. Data consumers request a token from an identity provider, which, in turn, generates and signs one along with a set of claims. The data consumer passes the signed token to the data provider, which verifies the token using public key material from the identity provider:
The centralized model is problematic for many dataspaces:
- It is prone to network outages. If the identity provider goes down, the entire dataspace is rendered inoperable. Using federated providers only partially mitigates this risk while increasing complexity since large sections of a dataspace will still be subject to outage.
- It does not preserve privacy. Since an identity provider issues and verifies tokens, it is privy to communications between data consumers and providers. While the provider may not know the content of the communications, it is aware of who is communicating with whom.
- Participants are not in control of their identity and credentials. The identity provider creates identity tokens and manages credentials, not the actual dataspace participants.
Identity Hub and the Decentralized Claims Protocol are designed to address these limitations by introducing a model where there is no single point of failure, privacy is maintained, and dataspace participants control their identities and credentials. This approach is termed decentralized identity and builds on foundational standards from the W3C and Decentralized Identity Foundation.
The Presentation Flow
To understand the role of Identity Hub in a dataspace that uses a decentralized identity system, let’s start with a basic example. A consumer wants to access data from a provider that requires proof the consumer is certified by a third-party auditor. The certification proof is a W3C Verifiable Credential issued by the auditor. For now, we’ll assume the consumer’s Identity Hub already manages the VC (issuance will be described later).
When the consumer’s control plane makes a contract negotiation request to the provider, it must include a declaration of which participant it is associated with (the participant ID) and a way for the provider to access the required certification VC. From the provider’s perspective, it needs a mechanism to verify the consumer control plane is operating on behalf of the participant and that the VC is valid. Once this is done, the provider can trust the consumer control plane and grant it access to the data by issuing a contract agreement.
Instead of obtaining a token from a third-party identity provider, DCP mandates self-issued tokens. Self-issued tokens are generated and signed by the requesting party, which in the current example is the data consumer. As we will see, these self-issued tokens identify the data consumer and include a way for the provider to resolve the consumer’s credentials. This solves the issues of centralized identity systems highlighted above. By removing the central identity provider, DCP mitigates the risk of a network outage. Privacy is preserved since all communication is between the data consumer and the data provider. Finally, dataspace members remain in control of their identities and credentials.
Let’s look at how this works in practice. Identity and claims are transmitted as part of the transport header in DSP messages. The HTTP bindings for DSP do this using an Authorization
token. DCP further specifies the header contents to be a self-signed JWT. The JWT sub
claim contains the sender’s Web DID, and the JWT is signed with a public key contained in the associated DID document (as a verification method). The data provider verifies the sending control plane’s identity by resolving the DID document and checking the signed JWT against the public key.
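EDC performs this verification internally. Purely as an illustrative sketch of the check (not EDC code), the provider-side logic can be approximated with the Nimbus JOSE library, assuming the DID document has already been resolved and an EC verification key extracted from it as a JWK; token expiry, audience, and access-token handling are omitted:

```java
import com.nimbusds.jose.JOSEException;
import com.nimbusds.jose.crypto.ECDSAVerifier;
import com.nimbusds.jose.jwk.ECKey;
import com.nimbusds.jwt.SignedJWT;

import java.text.ParseException;

public class SelfIssuedTokenCheck {

    // Returns the sender's DID (the "sub" claim) if the token was signed by the key
    // published as a verification method in the sender's DID document.
    public static String verify(String authorizationToken, String verificationMethodJwk)
            throws ParseException, JOSEException {
        var jwt = SignedJWT.parse(authorizationToken);
        var senderDid = jwt.getJWTClaimsSet().getSubject();   // the sender's Web DID
        var key = ECKey.parse(verificationMethodJwk);          // assumes an EC key; other key types need other verifiers
        if (!jwt.verify(new ECDSAVerifier(key))) {
            throw new JOSEException("Signature does not match the DID document key for " + senderDid);
        }
        return senderDid;
    }
}
```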
This step only proves that the requesting control plane is operating on behalf of a participant. However, the control plane cannot yet be trusted since it must present the VC issued by the third-party auditor. DCP also specifies the JWT contains an access token in the token
claim. The data provider uses the access token to query the data consumer’s Identity Hub for a Verifiable Presentation with one or more required credentials. It obtains the endpoint of the consumer’s Identity Hub from a service
entry of type CredentialService
in the resolved DID document. At that point, the provider connector can query the Identity Hub using the access token to obtain a Verifiable Presentation containing the required VC:
Once the VP is obtained, the provider can verify the VC to establish trust with the consumer control plane.
Why not just include the VP in the token or another HTTP header and avoid the call to Identity Hub? There’s a practical reason: VPs often exceed the header size limit imposed by HTTP infrastructure such as proxies. DSP and DCP could have devised the concept of a message envelope (remember WS-* and SOAP?) but chose not to because it ties credentials to outbound client requests. To see why this is limiting, consider the scenario where a consumer requests access to an ongoing data stream. The provider control plane may set up a policy monitor to periodically check the consumer’s credentials while the stream is active. In the DCP model, the policy monitor can query the consumer’s Identity Hub using the mechanism we described without the flow being initiated by the consumer.
Verifiable Presentation Generation
When the data provider issues a presentation request, the consumer Identity Hub generates a Verifiable Presentation based on the query received in the request. DCP defines two ways to specify a query: using a list of string-based scopes or a DIF Presentation Exchange presentation definition. Identity Hub does not yet support DIF Presentation Exchange (this feature is in development), so scopes are currently the only supported mechanism for requesting that a set of credentials be included.
The default setup for Identity Hub translates a scope string to a Verifiable Credential type. For example, the following presentation query includes the AuditCertificationCredential
:
{
"@context": [
"https://w3id.org/tractusx-trust/v0.8",
"https://identity.foundation/presentation-exchange/submission/v1"
],
"@type": "PresentationQueryMessage",
"scope": ["AuditCertificationCredential"]
}
Identity Hub will process this as a request for the AuditCertificationCredential
type. If the access token submitted along with the request permits the AuditCertificationCredential
, Identity Hub will generate a Verifiable Presentation containing the AuditCertificationCredential
. The generated VP will contain multiple credentials if more than one scope is present.
The default scope mapping behavior can be overridden by creating a custom extension that provides an implementation of the ScopeToCriterionTransformer
interface.
Two VP formats are supported: JWT-based and Linked-Data Proof. The JWT-based format is the default and recommended format because, in testing, it exhibited an order of magnitude better performance than the Linked-Data Proof format. It’s possible to override the default JWT format by either implementing VerifiablePresentationService
or providing a configuration of VerifiablePresentationServiceImpl
.
When DIF Presentation Exchange is supported, client requests will be able to specify the presentation format to generate.
Issuance Flow
Note: Identity Hub issuance support is currently a work in progress.
W3C Verifiable Credentials enable a holder to present claims directly to another party without the involvement or knowledge of the credential issuer. This is essential to preserve privacy and to mitigate network outages in a dataspace. DCP defines the way Identity Hub obtains credentials from an issuer. In DCP, issuance is an asynchronous process. The Identity Hub sends a request to the issuer endpoint, including a self-signed identity token. Similar to the presentation flow described above, the identity token contains an access token the issuer can use to send the VC to the requester’s Identity Hub. This is done asynchronously. The VC could be issued immediately or after an approval process:
Issuance can use the same claims verification as the presentation flow. For example, the auditor issuer in the previous example may require the presentation of a dataspace membership credential issued by another organization. In this case, the issuer would use the access token sent in the outbound request to query for the required credential from the Identity Hub before issuing its VC.
Using the Identity Hub
Identity Hub is built using the EDC modularity and extensibility system. It relies on core EDC features, including
cryptographic primitives, Json-Ld processing, and DID resolution. This architecture affords a great deal of deployment flexibility. Let’s break down the different supported deployment scenarios.
Organizational Component
Many organizations prefer to manage identity resources centrally, as strict security and control can be enforced over these sensitive resources. Identity Hub can be deployed as a centrally managed component in an organization that other EDC components use. In this scenario, Identity Hub will manage all identity resources for an organization for all dataspaces it participates in. For example, if an organization is a member of two dataspaces, DS1 and DS2, that issue membership credentials, both credentials will be managed by the central deployment. Connectors deployed for DS1 and DS2 will use their respective membership credentials from the central Identity Hub.
Per Dataspace Component
Some organizations may prefer to manage their identity resources at the dataspace level. For example, a multinational may participate in multiple regional dataspaces. Each dataspace may be geographically restricted, requiring all data and resources to be regionally fenced. In this case, an Identity Hub can be deployed for each regional dataspace, allowing for separate management and isolation.
Embedded
Identity Hub is designed to scale down for edge-style deployments where footprint and latency are primary concerns. In these scenarios, Identity Hub can be deployed embedded in the same runtime process as other connector components, providing a simple, fast, and efficient deployment unit.
Identity Hub APIs and Resources
Identity Hub supports two main APIs: the Identity API for managing resources and the DCP API, which implements the wire protocol defined by the Decentralized Claims Protocol Specification. End-users generally do not interact with the DCP API, so we won’t cover it here. The Identity API is the primary way operators and third-party applications interact with the Identity Hub. Since the API provides access to highly sensitive resources, it’s essential to secure it. Above all, the API should never be exposed over a public network such as the Internet.
The best way to understand the Identity API is to start with the resources it is designed to manage. This will give you a solid grounding for reviewing the OpenAPI documentation and using its RESTful interface. It’s also important to note that since the Identity Hub is extensible, additional resource types may be added by third parties to enable custom use cases.
The Participant Context
The Identity API includes CRUD operations for managing participant contexts. This API requires elevated administrative privileges.
A participant context is a unit of control for resources in Identity Hub. A participant context is tied to a dataspace participant identity. Most of the time, an organization will have a single identity and use the same Web DID in multiple dataspaces. Its Identity Hub, therefore, will be configured with exactly one participant context to manage identity and credential resources.
If an organization uses different identities in multiple dataspaces, its Identity Hub will contain one participant context per identity. All resources are contained and accessed through a participant context. The participant context acts as both a scope and security boundary. Access control for public client API endpoints is scoped to a specific participant context. For example, the JWT access token sent to data providers described above is associated with a specific context and may not be used to access resources in another context. Furthermore, the lifecycle of participant resources is bound to their containing context; if a participant context is removed, the operation will cascade to all contained resources.
A participant context can be in one of three states:
- CREATED - The participant context is initialized but not operational. Resources may be added and updated, but they are not publicly accessible.
- ACTIVATED - The participant context is operational, and resources are publicly accessible.
- DEACTIVATED - The participant context is not operational. Resources may be added and updated, but they are not publicly accessible.
The participant context can transition from CREATED to ACTIVATED and between the ACTIVATED and DEACTIVATED states.
It’s useful to note that Identity Hub relies on the core EDC eventing system to enable custom extensions. Services may register to receive participant context events, for example, when a context is created or deleted, to implement custom workflows.
DID Documents
When a participant context is created, it is associated with a DID. After a participant context is activated, a corresponding DID document will be generated and published. Currently, Identity Hub only supports Web DIDs, so publishing the document will make it available at the URL specified by the DID. Identity Hub can support other DID methods through custom extensions.
In addition, custom publishers can be created by implementing the DidDocumentPublisher
interface and adding it via an extension to the Identity Hub runtime. For example, a publisher could deploy Web DID documents to a web server. Identity Hub includes an extension for locally publishing Web DID documents. The extension serves Web DID documents using a public API registered under the /did
path. Note that this extension is not designed to handle high-volume requests, as DID documents are served directly from storage and are not cached. For these scenarios, publishing to a web server is recommended.
Key Pair Resources
Key pair resources are used to sign and verify credentials, presentations, and other resources managed by Identity Hub. The public and private keys associated with a key pair resource can be generated by Identity Hub or provided when the resource is created. Identity Hub persists all private keys in a secure store and supports using HashiCorp Vault as the store.
A key pair resource can be in one of the following states:
- CREATED
- ACTIVATED
- ROTATED
- REVOKED
Let’s walk through these lifecycle states.
Key Activation
When a key pair is created, it is not yet used to sign resources. When a key pair is activated, Identity Hub makes the public key material available as a verification method in the participant context’s DID document so that other parties can verify resources such as verifiable presentations signed by the private key. This is done by publishing an updated DID document for the participant context during the activation step.
Key Rotation
For security reasons, key pair resources should be periodically rotated and replaced by new ones. Identity Hub supports a staged rotation process to avoid service disruptions and ensure that existing signed resources can still be validated for a specified period.
For example, let’s assume private key A is used to sign Credential CA and public key A’ is used to verify CA. If the key pair A-A’ is immediately revoked, CA can no longer be validated, which may cause a service disruption. Key rotation can be used to avoid this. When the key pair A-A’ is rotated, a new key pair, B-B’, is created and used to sign resources. The private key A is immediately destroyed. A’, however, will remain as a verification method in the DID document associated with the participant context. CA validation will continue to work. When CA and all other resources signed by A expire, A’ can safely be removed from the DID document.
It’s important to perform key rotation periodically to enhance overall system security. This implies that signed resources should have a validity period shorter than the rotation period of the key used to sign them and should be reissued regularly. For example, if keys are rotated every 90 days, credentials signed with them should expire and be reissued within that 90-day window.
Key Revocation
If a private key is compromised, it must be immediately revoked. Revocation involves removing the verification method entry in the DID document and publishing the updated version. This will invalidate all resources signed with the revoked key pair.
Verifiable Credentials
Support for storing verifiable credentials using the DCP issuance flow is currently in development. In the meantime, adopters must develop custom extensions for storing verifiable credential resources or create them through the Identity API.
Resource Operations
Identity Hub implements transactional guarantees when resource operations are performed through the Identity API. The purpose of transactional behavior is to ensure the Identity Hub maintains a consistent state. This section catalogs those operations and guarantees.
Participant Context Operations
Create
When a participant context is created, the following sequence is performed:
- A transaction is opened.
- An API key is generated to access the context via the Identity API.
- A DID document is created and added to storage.
- A default key pair is created and added to storage.
- The DID document is published if the participant context is set to active when created.
- The transaction commits on success, or a rollback is performed.
Delete
When a participant context is deleted, the following sequence is performed:
- A transaction is opened.
- The DID document is unpublished if the resource is in the PUBLISHED state.
- The DID document resource is removed from storage.
- All associated key pair resources are removed from storage except for private keys.
- The participant context is removed from storage.
- The transaction commits on success, or a rollback is performed.
- All private keys associated with the context are removed after the transaction is committed since Vaults are not transactional resources.
If destroying private keys fails, manual intervention will be required to clean them up. Note that the Identity Hub will remain in a consistent state.
Activate
A participant context cannot be activated without a default key pair.
When a participant context is activated, the following sequence is performed:
- A transaction is opened.
- The context is updated in storage.
- The DID document is published.
- The transaction commits on success, or a rollback is performed.
Deactivate
When a participant context is deactivated, the following sequence is performed:
- A transaction is opened.
- The context is updated in storage.
- The DID document is unpublished.
- The transaction commits on success, or a rollback is performed.
There is a force option that will commit the transaction even if the DID document unpublish operation is not successful.
Key Pair Operations
Activate
This operation can only be performed when the participant context is in the CREATED
or ACTIVATED
state.
When a key pair is activated, the following sequence is performed:
- A transaction is opened.
- The new key pair is added to storage.
- If the DID document resource is in the PUBLISHED state, the DID document is published with all verification methods for public keys in the ACTIVATED state.
- The transaction commits on success, or a rollback is performed.
If the transaction commit fails, the DID document must be manually repaired. This can be done by republishing the DID document.
Rotate
When a key pair is rotated and a new one is added, the following sequence is performed:
- A transaction is opened.
- The new key pair is added to storage.
- If the DID document resource is in the PUBLISHED state, the DID document is published with a verification method for the new public key.
- The transaction commits on success, or a rollback is performed.
- The old private key is destroyed (note, not the old public key) after the transaction is committed since Vaults are not transactional resources.
Revoke
When a key pair is revoked, the following sequence is performed:
- A transaction is opened.
- The key pair state is updated.
- If the DID document resource is in the PUBLISHED state, the DID document is published with the verification method for the revoked public key removed.
- The transaction commits on success, or a rollback is performed.
1.6 - Federated Catalog
Covers how publishing and retrieving federated data catalogs works.
TBD
1.7 - Distributions, Deployment, and Operations
Explains how to create distributions and design deployment architectures. This chapter also provides an overview of Management Domains and system configuration.
Using Bills-of-Material (BOMs)
In the Maven/Gradle world, bills-of-material are meta-modules with the sole purpose of declaring dependencies on other modules. This greatly reduces the number of dependencies a developer needs to declare. By simply referencing the BOM module, all transitive dependencies are also referenced. The Eclipse Dataspace Components project declares several BOMs. The most important ones are listed here:
- controlplane-base-bom: base BOM for an EDC control plane without an IdentityService implementation. Attempting to run this directly will result in an exception.
- controlplane-dcp-bom: a control plane that uses DCP as the identity system.
- dataplane-base-bom: a runnable data plane image that contains HTTP transfer pipelines.
FederatedCatalog:
- federatedcatalog-base-bom: base BOM for FederatedCatalog modules. Does not contain any IdentityService implementation.
- federatedcatalog-dcp-bom: adds DCP to the FederatedCatalog base BOM.
IdentityHub:
- identityhub-base-bom: base BOM for IdentityHub. No DCP modules included.
- identityhub-bom: default IdentityHub runtime image including DCP. Does not include/embed the SecureTokenService (STS).
- identityhub-with-sts-bom: IdentityHub runtime that has a SecureTokenService (STS) embedded.
In addition, most components also provide a *-feature-sql-bom BOM, which adds SQL persistence for all related entities (for example, Assets and ContractDefinitions in the case of the control plane BOM).
Using the Basic Template Repository to Create Distributions
The Modules, Runtimes, and Components chapter explained how EDC is built on a module system. Runtimes are assembled to create a component such as a control plane, a data plane, or an identity hub. EDC itself does not ship runtime distributions since it is the job of downstream projects to bundle features and capabilities that address the specific requirements of a dataspace or organization. However, EDC provides the Basic Template Repository to facilitate creating extensions and runtime distributions.
The EDC Basic Template Repository can be forked and used as a starting point for building an EDC distribution. You will need to be familiar with Maven Repositories and Gradle. Once the repository is forked, custom extensions can be added and included in a runtime. The template is configured to create two runtime Docker images: a control plane and a data plane. These images are designed to be deployed as separate processes, such as two Kubernetes ReplicaSets.
EDC distributions can be created using other build systems if they support Maven dependencies since EDC modules are released to Maven Central. Using Gradle as the build system for your distribution has several advantages. One is that the distribution project can incorporate EDC Gradle Plugins such as the Autodoc and Build plugins to automate and remove boilerplate tasks.
Note: the template repository also leverages the BOM system.
Deployment Architectures and Operations
EDC does not dictate a specific deployment architecture. Components may be deployed to an edge device as a single low-footprint runtime or across multiple high-availability clusters. When deciding on an appropriate deployment architecture and operations setup, three considerations should be taken into account:
- How is your organization structured to manage data sharing?
- How should scaling be done?
- What components need to be highly available?
The answers to these questions will help define the required deployment architecture. We recommend starting with the simplest solution possible and only adding complexity when required. For example, a data plane must often be highly available and scalable, but a control plane often does not need to be. In this case, the deployment architecture should split the components and allocate different cluster resources to the data plane. We will now examine each question and how they impact deployment architectures.
Management Domains
The first question that needs to be assessed is: How is your organization structured to manage data sharing? In the simplest scenario, an organization may have a single IT department responsible for data sharing and dataspace membership company-wide. This is relatively easy to solve. EDC components can be deployed and managed by the same operations team. The deployment architecture will be influenced more by scaling and high-availability requirements.
Now, let’s look at a more complex case in which a multinational organization delegates data-sharing responsibilities to individual units. Each unit has its own IT department and data centers. In this setup, EDC components must be deployed and managed separately. This will impact control planes, data planes, and Identity Hubs. For example, the company could operate a central Identity Hub to manage organizational credentials and delegate control plane and data plane operations to the units. This requires a more complex deployment architecture where control and data planes may be hosted in separate environments, such as multiple cloud tenants.
To accommodate diverse deployment requirements, EDC supports management domains. A management domain is a realm of control over a set of EDC components. Management domains enable the operational responsibility of EDC components to be delegated throughout an organization. The following components may be included in a single management domain or spread across multiple domains:
- Catalog Server
- Control Plane
- Data Plane
- Identity Hub
To simplify things, we will focus on how a catalog server, control plane, and data plane can be deployed before discussing the Identity Hub. Management domains may be constructed to support the following deployment topologies.
Type 1: Single Management Domain
A single management domain deploys EDC components under one unified operations setup. In this topology, EDC components
can be deployed to a single, collocated process (management domains are represented by the black bounding box):
Type 1: One management domain controlling a single instance
More complex operational environments may deploy EDC components as separate clustered instances under the operational
control of a single management domain. For example, a Kubernetes cluster could be deployed with separate ReplicaSets
running pods of catalog servers, control planes, and data planes:
Type 1: One management domain controlling a cluster of individual ReplicaSets
Type 2: Distributed Management Domains
Single management domain topologies are not practical in organizations with independent subdivisions. Often, each subdivision is responsible for all or part of the data-sharing process. To accommodate these use cases, EDC components deployed to separate operational contexts (and hence separate management domains) must function together.
Consider the example of a large multinational conglomerate, Foo Industries, which supplies parts for widget production. Foo Industries has separate geographic divisions for production. Each division is responsible for sharing its supply chain data with Foo’s partners as part of the Widget-X Dataspace. Foo Industries participates under a single corporate identity in the dataspace, in this case using the Web DID did:web:widget-x.foo.com
. Some partners may have access to only one of Foo’s divisions.
Foo Industries can support this scenario by adopting a distributed management domain topology. There are several different ways to distribute management domains.
Type 2A: DSP Catalog Referencing EDC Stacks
Let’s take the simplest to start: each division deploys an EDC component stack. Externally, Foo Industries presents a unified DSP Catalog obtained by resolving the catalog endpoint from Foo’s Web DID, did:web:widget-x.foo.com
. The returned catalog will contain entries for the Foo Industries divisions a client can access (the mechanics of how this is done are explained below). Specifically, the component serving the DSP catalog would not be an EDC component, and thus would not be subject to any management domain. To support this setup, Foo Industries could deploy the following management domains:
Type 2A: Distributed Management Domains containing an EDC stack
Here, two primary management domains contain a full EDC stack each. A root catalog (explained below) serves as the main entry point for client requests.
Type 2B: EDC Catalog Server and Control/Data Plane Runtimes
Foo Industries could also choose to deploy EDC components in separate management domains. For example, a central catalog server that runs in its own management domain and that fronts two other management domains consisting of control/data plane runtimes:
Type 2B: Distributed Management Domains containing a Catalog Server and separate Control/Data Plane runtimes
Type 2C: Catalog Server/Control Plane with Data Plane Runtime
Or, Foo Industries could elect to run a centralized catalog server/control plane:
Type 2C: Distributed Management Domains containing a Catalog Server/Control Plane and separate Data Plane runtimes
Identity Hub
The primary deployment scenario for Identity Hub is to run it as a central component under its own management domain. While Identity Hub instances could be distributed and included in the same management domains as a control/data plane pair, this would entail a much more complex setup, including credential, key, and DID document replication that is not supported out-of-the-box.
Setting Up Management Domains
Management domains are straightforward to configure as they mainly involve catalog setup. Recall how catalogs are structured as described in the chapter on [Control Plane concepts](Control Plane Concepts.md). Catalogs contain datasets, distributions, and data services. Distributions define the wire protocol used to transfer data and refer to a data service endpoint where a contract agreement can be negotiated to access the data. What was not mentioned is that a catalog is a dataset, which means catalogs can contain sub-catalogs. EDC takes advantage of this to implement management domains using linked catalogs. Here’s an example of a catalog with a linked sub-catalog:
{
"@context": "https://w3id.org/dspace/v0.8/context.json",
"@id": "urn:uuid:3afeadd8-ed2d-569e-d634-8394a8836d57",
"@type": "dcat:Catalog",
"dct:title": "Foo Industries Provider Root Catalog",
"dct:description": [
"A catalog of catalogs"
],
"dcat:catalog": {
"@type": "dcat:Catalog",
"dct:description": [
"Foo Industries Sub-Catalog"
],
"dcat:distribution": {
"@type": "dcat:Distribution",
"dcat:accessService": "urn:uuid:4aa2dcc8-4d2d-569e-d634-8394a8834d77"
},
"dcat:service": [
{
"@id": "urn:uuid:4aa2dcc8-4d2d-569e-d634-8394a8834d77",
"@type": "dcat:DataService",
"dcat:endpointURL": "https://foo-industries.com/subcatalog"
}
]
}
}
In this case, the data service entry contains an endpointURL
that resolves the contents of the sub-catalog. EDC deployments can consist of multiple sub-catalogs and nested sub-catalogs to reflect a desired management structure. For example, Foo Industries could include a sub-catalog for each division in its root catalog, where sub-catalogs are served from separate management domains. This setup would correspond to Type 2A shown above.
Configuring Linked Catalogs
Datasets are created from assets. The same is true for linked catalogs. Adding the following asset with the root catalog server’s Management API is the first step to creating a sub-catalog entry:
{
"@context": {
"@vocab": "https://w3id.org/edc/v0.0.1/ns/"
},
"@id": "subcatalog-id",
"@type": "CatalogAsset",
"properties": {...},
"dataAddress": {
"type": "HttpData",
"baseUrl": "https://foo-industries.com/subcatalog"
}
}
There are two things to note. First, the @type
is set to CatalogAsset (which Json-Ld expands to https://w3id.org/edc/v0.0.1/ns/CatalogAsset
). Second, the baseUrl
of the data address is set to the sub-catalog’s publicly accessible URL.
The next step in creating a sub-catalog is to decide on access control, that is, which clients can see the sub-catalog. Recall that this is done with a contract definition. A contract definition can have an empty policy (“allow all”) or require specific credentials. It can also apply to (select) all sub-catalogs, sub-catalogs containing a specified property value, or a specific sub-catalog. The following contract definition applies an access policy and selects the previous sub-catalog:
{
"@context": {
"edc": "https://w3id.org/edc/v0.0.1/ns/"
},
"@type": "https://w3id.org/edc/v0.0.1/ns/ContractDefinition",
"@id": "test-id",
"edc:accessPolicyId": "access-policy-1234",
"edc:contractPolicyId": "contract-policy-5678",
"edc:assetsSelector": [
{
"@type": "https://w3id.org/edc/v0.0.1/ns/Criterion",
"edc:operandLeft": "id",
"edc:operator": "in",
"edc:operandRight": ["subcatalog-id"]
}
]
}
Alternatively, the following contract definition example selects a group of sub-catalogs in the “EU” region:
{
"@context": {
"edc": "https://w3id.org/edc/v0.0.1/ns/"
},
"@type": "https://w3id.org/edc/v0.0.1/ns/ContractDefinition",
"@id": "test-id",
"edc:accessPolicyId": "group-access-policy-1234",
"edc:contractPolicyId": "contract-policy-5678",
"edc:assetsSelector": [
{
"@type": "https://w3id.org/edc/v0.0.1/ns/Criterion",
"edc:operandLeft": "region",
"edc:operator": "=",
"edc:operandRight": "EU"
}
]
}
Once the catalog asset and a corresponding contract definition are loaded, a sub-catalog will be included in a catalog response for matching clients. Clients can then resolve the sub-catalogs by following the appropriate data service link.
Management Domain Considerations
If connector components are deployed to more than one management domain, it’s important to keep in mind that contract agreements, negotiations, and transfer processes will be isolated to a particular domain. In most cases, that is the desired behavior. If you need to track contract agreements across management domains, one way to do this is to build an EDC extension that replicates this information to a central store that can be queried. EDC’s eventing system can be used to implement a listener that receives contract and transfer process events and forwards the information to a target destination.
Component Scaling
Management domains help align a deployment architecture with an organization’s governance requirements for data sharing. Another factor that impacts deployment architecture is potential scalability bottlenecks. While measurements are always better than assumptions, the most likely potential bottleneck is moving data from a provider to a consumer, in other words, the data plane.
Two design considerations are important here. First, as explained in the chapter on [Control Plane concepts](Control Plane Concepts.md), do not model assets in a granular fashion. For example, if data consists of a series of small JSON objects, don’t model those as individual assets requiring separate contract negotiations and transfer processes. Instead, model the data as a single asset that can be requested using a single contract agreement through an API.
The second consideration entails how best to optimize data plane performance. In the previous example, the data plane will likely need to be much more performant than the control plane since the request rate will be significantly greater. This means that the data plane will also need to be scaled independently. Consequently, the data plane should be deployed separately from the control plane, for example, as a Kubernetes ReplicaSet running on a dedicated cluster.
Component High Availability
Another consideration that will impact deployment architecture is availability requirements. Consider this carefully. High availability is different from reliability. High availability measures uptime, while reliability measures correctness, i.e., did the system handle an operation in the expected manner? All EDC components are designed to be reliable. For example, remote messages are de-duplicated and handled transactionally.
High availability is instead a function of an organization’s requirements. A data plane must often be highly available, particularly if a shared data stream should not be subject to outages. However, a control plane may not need the same guarantees. For example, it may be acceptable for contract negotiations to be temporarily offline as long as data plane operations continue uninterrupted. It may be better to minimize costs by deploying a control plane to less robust infrastructure than a data plane. There is no hard-and-fast rule here, so you will need to decide on the approach that best addresses your organization’s requirements.
1.8 - Extensions
Details how to add customizations, features, and new capabilities to EDC components.
This chapter covers adding custom features and capabilities to an EDC runtime by creating extensions. Features can be wide-ranging, from a specific data validation or policy function to integration with an identity system. We will focus on common extension use cases, for example, implementing specific dataspace requirements. For more complex features and in-depth treatment, refer to the Contributor documentation.
This chapter requires a thorough knowledge of Java and modern build systems such as Gradle and Maven. As you read through this chapter, it will be helpful to consult the extensions contained in the EDC Samples repository.
The EDC Module System
EDC is built on a module system that contributes features as extensions to a runtime. It’s accurate to say that EDC, at its core, is just a module system. Runtimes are assembled to create components such as a control plane, a data plane, or an identity hub. The EDC module system provides a great deal of flexibility as it allows you to easily add customizations and target diverse deployment topologies, from small-footprint single-instance components to highly reliable, multi-cluster setups.
When designing an extension, it’s important to consider all the possible target deployment topologies. For example, features should typically scale up to work in a cluster and scale down to low-overhead and test environments. In addition to good architectural planning (e.g., using proper concurrency strategies in a cluster), we will cover techniques such as default services that facilitate support for diverse deployment environments.
To understand the EDC module system, we will start with three of its most important characteristics: static modules defined at build time, design-time encapsulation as opposed to runtime encapsulation, and a focus on extensions, not applications.
The EDC module system is based on a static design. Unlike dynamic systems such as OSGi, EDC modules are defined at build time and are not cycled at runtime. EDC’s static module system delegates the task of loading and unloading runtime images to deployment infrastructure, whether the JUnit platform or Kubernetes. A new runtime image must be deployed if a particular module needs to be loaded. In practice, this is easy to do, leverages the strengths of modern deployment infrastructure, and greatly reduces the module system’s complexity.
The EDC module system also does not support classloader isolation between modules like OSGi or the Java Platform Module System. While some use cases require strong runtime encapsulation, the EDC module system made the trade-off for simplicity. Instead, it relies on design-time encapsulation enforced by modern build systems such as Gradle or Maven, which support multi-project layouts that enforce class visibility constraints.
Finally, the EDC module system is not a framework like Spring. Its design is centered on managing and assembling extensions, not making applications easier to write by providing API abstractions and managing individual services and their dependencies.
Extension Basics
If you are unfamiliar with bundling EDC runtimes, please read the chapter on Distributions, Deployment, and Operations. Let’s assume we have already enabled a runtime build that packages all EDC classes into a single executable JAR deployed in a Docker container.
An EDC extension can be created by implementing the ServiceExtension
interface:
public class SampleExtension implements ServiceExtension {
@Override
public void initialize(ServiceExtensionContext context) {
// do something
}
}
To load the extension, the SampleExtension
must be on the runtime classpath (e.g., in the runtime JAR) and configured using a Java ServiceLoader provider file. The latter is done by including an entry for the implementation class in the META-INF/services/org.eclipse.edc.spi.system.ServiceExtension
file, which lists the fully qualified class name of each extension implementation.
SPI: Service Provider Interface
In the previous example, the extension did nothing. Generally, an extension provides a service to the runtime. It’s often the case that an extension also requires a service contributed by another extension. The EDC module system uses the Service Provider Interface (SPI) pattern to enable cross-extension dependencies:
An SPI module containing the shared service interface is created. The service implementation is packaged in a separate module that depends on the SPI module. The extension that requires the service then depends on the SPI module, not the implementation module. We will see in the next section how the EDC module system wires the service implementation to the extension that requires it. At this point, it is important to note that the build system maintains encapsulation since the two extension modules do not have a dependency relationship.
The SPI pattern is further used to define extension points. An extension point is an interface that can be implemented to provide a defined set of functionality. For example, there are extension points for persisting entities to a store and managing secrets in a vault. The EDC codebase is replete with SPI modules, which enables diverse runtimes to be assembled with just the required features, thereby limiting their footprint and startup overhead.
Providing and Injecting Services
The EDC module system assembles extensions into a runtime by wiring services to ServiceExtensions
that require them and initializing the latter. An extension can provide services that are used by other extensions. This is done by annotating a factory method with the org.eclipse.edc.runtime.metamodel.annotation.Provider
annotation:
public class SampleExtension implements ServiceExtension {
@Provider
public CustomService initializeService(ServiceExtensionContext context) {
return new CustomServiceImpl();
}
}
In the above example, initializeService
will be invoked when the extension is loaded to supply the CustomService
, which will be registered so other extensions can access it. The initializeService
method takes a ServiceExtensionContext
, which is optional (no-param methods can also be used with @Provider
). Provider methods must also be public and not return void
.
Provided services are singletons, so remember that they must be thread-safe.
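For instance, an implementation of the hypothetical CustomService above that holds mutable state should rely on thread-safe constructs. A minimal sketch, assuming CustomService declares a register method (imports omitted, as in the other samples):

```java
public class CustomServiceImpl implements CustomService {
    // the provided instance is a shared singleton and may be called from multiple threads,
    // so mutable state uses a thread-safe collection
    private final List<Object> delegates = new CopyOnWriteArrayList<>();

    @Override
    public void register(Object delegate) {
        delegates.add(delegate);
    }
}
```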
The CustomService
can be accessed by injecting it into a ServiceExtension
using the org.eclipse.edc.runtime.metamodel.annotation.Inject
annotation:
public class SampleExtension implements ServiceExtension {
@Inject
private CustomService customService;
@Override
public void initialize(ServiceExtensionContext context) {
var extensionDelegate = ... // create and register a delegate with the CustomService
customService.register(extensionDelegate);
}
}
When the EDC module system starts, it scans all ServiceExtension
implementations and builds a dependency graph from the provided and injected services. The graph is then sorted (topologically) to order extension startup based on dependencies. Each extension is instantiated, injected, and initialized in order.
The EDC module system does not support assigning extensions to runlevels by design. Instead, it automatically orders extensions based on their dependencies. If you find the need to control the startup order of extensions that do not have a dependency, reconsider your approach. It’s often a sign of a hidden coupling that should be explicitly declared.
Service Registries
Service Registries are often used in situations where multiple implementations are required. For example, entities may need to be validated by multiple rules that are contributed as services. The recommended way to handle this is to create a registry that accepts extension services and delegates to them when performing an operation. The following is an example of a registry used to validate DataAddresses
:
public interface DataAddressValidatorRegistry {
/**
* Register a source DataAddress object validator for a specific DataAddress type
*
* @param type the DataAddress type string.
* @param validator the validator to be executed.
*/
void registerSourceValidator(String type, Validator<DataAddress> validator);
/**
* Register a destination DataAddress object validator for a specific DataAddress type
*
* @param type the DataAddress type string.
* @param validator the validator to be executed.
*/
void registerDestinationValidator(String type, Validator<DataAddress> validator);
/**
* Validate a source data address
*
* @param dataAddress the source data address.
* @return the validation result.
*/
ValidationResult validateSource(DataAddress dataAddress);
/**
* Validate a destination data address
*
* @param dataAddress the destination data address.
* @return the validation result.
*/
ValidationResult validateDestination(DataAddress dataAddress);
}
Validator
instances can be registered by other extensions, which will then be dispatched to when one of the validation methods is called:
public class SampleExtension implements ServiceExtension {
@Inject
private DataAddressValidatorRegistry registry;
@Override
public void initialize(ServiceExtensionContext context) {
var validator = ... // create the validator
registry.registerSourceValidator(TYPE, validator);
}
}
Configuration
Extensions will typically need to access configuration. The ServiceExtensionContext
provides several methods for reading configuration data. Configuration values are resolved in the following order:
- From a ConfigurationExtension contributed in the runtime. EDC includes a configuration extension that reads values from a file.
- From environment variables; capitalized names are made lowercase, and underscores are converted to dot notation. For example, “HTTP_PORT” is transformed to “http.port”.
- From Java command line properties.
The recommended approach to reading configuration is through one of the two config methods: ServiceExtensionContext.getConfig()
or ServiceExtensionContext.getConfig(path)
. The returned Config
object can navigate a configuration hierarchy based on the dot notation used by keys. To understand how this works, let’s start with the following configuration values:
group.subgroup.key1=value1
group.subgroup.key2=value2
Invoking context.getConfig("group")
will return a config object that can be used for typed access to group
values or to navigate the hierarchy further:
var groupConfig = context.getConfig("group");
var groupValue1 = groupConfig.getString("subgroup.key1"); // equals "value1"
var subGroupValue1 = groupConfig.getConfig("subgroup").getString("key1"); // equals "value1"
The Config
class contains other useful methods, so it is worth looking at it in detail.
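The typed accessors also accept default values, which is useful for optional settings. A small sketch, assuming the getString/getInteger overloads with defaults available in current EDC versions:

```java
var groupConfig = context.getConfig("group");
var port = groupConfig.getInteger("subgroup.port", 8080);       // falls back to 8080 if the key is not set
var label = groupConfig.getString("subgroup.label", "default"); // falls back to "default" if the key is not set
```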
Extension Loading
Service extensions have the following lifecycle that is managed by the EDC module system:
| Runtime Phase | Extension Phase | Description |
|---|---|---|
| LOAD | | Resolves and introspects ServiceExtension implementations on the classpath, builds a dependency graph, and orders extensions. |
| BOOT | | For each extension, cycle through the INJECT, INITIALIZE, and PROVIDE phases. |
| | INJECT | Instantiate the service extension class and inject it with dependencies. |
| | INITIALIZE | Invoke the ServiceExtension.initialize() method. |
| | PROVIDE | Invoke all @Provider factory methods on the extension instance and register returned services. |
| PREPARE | | For each extension, ServiceExtension.prepare() is invoked. |
| START | | For each extension, ServiceExtension.start() is invoked. The runtime is in normal operating mode. |
| SHUTDOWN | | For each extension in reverse order, ServiceExtension.shutdown() is invoked. |

Most extensions will implement the ServiceExtension.initialize() and ServiceExtension.shutdown() callbacks.
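As a simple illustration of these callbacks (the method names are those listed in the table above; imports omitted, as in the other samples):

```java
public class LifecycleAwareExtension implements ServiceExtension {
    @Override
    public void initialize(ServiceExtensionContext context) {
        // resolve configuration and register services and delegates
    }

    @Override
    public void start() {
        // the runtime is entering normal operation: open connections, start background work
    }

    @Override
    public void shutdown() {
        // called in reverse dependency order: release resources, stop background work
    }
}
```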
Extension Services
Default Services
Sometimes, it is desirable to provide a default service if no other implementation is available. For example, in an integration test setup, a runtime may provide an in-memory store implementation when a persistent storage implementation is not configured. Default services alleviate the need to explicitly configure extensions since they are not created if an alternative exists. Creating a default service is straightforward - set the isDefault
attribute on @Provider
to true:
public class SampleExtension implements ServiceExtension {
@Provider(isDefault = true)
public CustomService initializeDefaultService(ServiceExtensionContext context) {
return new DefaultCustomService();
}
}
If another extension implements CustomService, SampleExtension.initializeDefaultService()
will not be invoked.
Creating Custom APIs and Controllers
Extensions may create custom APIs or ingress points with JAX-RS controllers. This is done by creating a web context and registering JAX-RS resources under that context. A web context is a port and path mapping under which the controller will be registered. For example, a context with the port and path set to 9191 and custom-api respectively may expose a controller annotated with @Path("custom-resources") at:
https://localhost:9191/custom-api/custom-resources
Web contexts enable deployments to segment where APIs are exposed. Operational infrastructure may restrict management APIs to an internal network while another API may be available over the public internet.
EDC includes convenience classes for configuring a web context:
public class SampleExtension implements ServiceExtension {
@Inject
private WebServer webServer;
@Inject
private WebServiceConfigurer configurer;
@Inject
private WebService webService;
public void initialize(ServiceExtensionContext context) {
var settings = WebServiceSettings.Builder.newInstance()
.contextAlias("custom-context")
.defaultPath("/custom-context-path")
.defaultPort(9191)
.name("Custom API")
.apiConfigKey("web.http.custom-context")
.build();
var config = context.getConfig("web.http.custom-context");
configurer.configure(config, webServer, settings);
webService.registerResource("custom-context", new CustomResourceController();
webService.registerResource("custom-context", new CustomExceptionMapper());
}
}
Let’s break down the above sample. The WebServer
is responsible for creating and managing HTTP/S
contexts. The WebServiceConfigurer
takes a settings object and applies it to the WebServer
to create a web context. In the above example, the context alias is custom-context
, which will be used later to register JAX-RS controllers. The default path and port are also set and will be used if the deployment does not provide override values as part of the runtime configuration. The settings, runtime configuration, and web server instance are then passed to the configurer
, which registers the HTTP/S
context.
The default port and path can be overridden by configuration settings using the web.http.custom-context
config key:
web.http.custom-context.path=/override-path
web.http.custom-context.port=9292
Note that the web.http
prefix is used as a convention but is not strictly required.
Once a web context is created, JAX-RS controllers, interceptors, and other resources can be registered with the WebService
under the web context alias. EDC uses Eclipse Jersey and supports its standard features:
webService.registerResource("custom-context", new CustomResourceController();
webService.registerResource("custom-context", new CustomExceptionMapper());
Authentication
To enable custom authentication for a web context, you must:
- Implement org.eclipse.edc.api.auth.spi.AuthenticationService and register an instance with the ApiAuthenticationRegistry.
- Create an instance of the SPI class org.eclipse.edc.api.auth.spi.AuthenticationRequestFilter and register it as a resource for the web context.
The following code shows how to do this:
public class SampleExtension implements ServiceExtension {
@Inject
private ApiAuthenticationRegistry authenticationRegistry;
@Inject
private WebService webService;
@Override
public void initialize(ServiceExtensionContext context) {
authenticationRegistry.register("custom-auth", new CustomAuthService());
var authenticationFilter = new AuthenticationRequestFilter(authenticationRegistry, "custom-auth");
webService.registerResource("custom-context", authenticationFilter);
}
}
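For completeness, here is a rough sketch of what CustomAuthService could look like. It assumes the AuthenticationService contract is a single isAuthenticated check over the request headers, as in current EDC versions; the header name and expected value are placeholders (imports omitted, as in the other samples):

```java
public class CustomAuthService implements AuthenticationService {
    @Override
    public boolean isAuthenticated(Map<String, List<String>> headers) {
        // placeholder check: a real implementation should load the expected value from
        // configuration or a vault and use a constant-time comparison
        return headers.getOrDefault("x-custom-auth", List.of()).contains("expected-token");
    }
}
```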
Events
The EDC eventing system is a powerful way to add capabilities to a runtime. All event types derive from org.eclipse.edc.spi.event.Event
and cover a variety of create, update, and delete operations, including those for:
- Assets
- Policies
- Contract definitions
- Contract negotiations
- Transfer processes
To receive an event, register an EventSubscriber
with the org.eclipse.edc.spi.event.EventRouter
. Events can be received either synchronously or asynchronously. Synchronous listeners are useful when executed transactionally in combination with the event operation. For example, a listener may wish to record audit information when an AssetUpdated
event is emitted. The transaction and asset update should be rolled back if the record operation fails. Asynchronous listeners are invoked in the context of a different thread. They are useful when a listener takes a long time to complete and is fire-and-forget.
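The following sketch registers a synchronous subscriber. It assumes the register/registerSync methods that accept an event class plus an EventSubscriber and the EventEnvelope wrapper found in current EDC versions; the subscriber here listens for all events, while a real extension would typically register for a specific event class such as the ones listed above (imports omitted, as in the other samples):

```java
public class AuditExtension implements ServiceExtension {
    @Inject
    private EventRouter eventRouter;

    @Override
    public void initialize(ServiceExtensionContext context) {
        // registerSync delivers events on the emitting thread, so the subscriber takes part in the
        // surrounding transaction; register(...) would deliver them asynchronously instead
        eventRouter.registerSync(Event.class, new AuditSubscriber());
    }

    private static class AuditSubscriber implements EventSubscriber {
        @Override
        public <E extends Event> void on(EventEnvelope<E> envelope) {
            var eventName = envelope.getPayload().getClass().getSimpleName();
            // forward eventName and any relevant payload details to an audit sink
        }
    }
}
```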
Monitor
EDC does not directly use a logging framework. Log output should instead be sent to the Monitor
, which will forward it
to a configured sink.
Default Console Monitor
By default, the Monitor sends output to the console, which can be piped to another destination in a production environment.
The default console monitor can be configured through command line args:
- --log-level=<DEBUG|INFO|WARNING|SEVERE>: only log entries at the selected level or above are shown. Default: INFO.
- --no-color: disables colored log output. Default: colors enabled.
Custom Monitor
Alternatively, a custom Monitor implementation can be provided through a MonitorExtension, which must be registered at runtime.
Using the Monitor
The provided Monitor instance is available on the ServiceExtensionContext via the getMonitor() method, or it can be injected:
public class SampleExtension implements ServiceExtension {
@Inject
private Monitor monitor;
@Provider
public CustomService initializeService(ServiceExtensionContext context) {
return new CustomServiceImpl(monitor);
}
}
If you would like to have output prefixed for a specific service, use Monitor.withPrefix()
:
public class SampleExtension implements ServiceExtension {
@Inject
private Monitor monitor;
@Override
public void initialize(ServiceExtensionContext context) {
var prefixedMonitor = monitor.withPrefix("Sample Extension"); // this will prefix all output with [Sample Extension]
new CustomServiceImpl(prefixedMonitor);
}
}
Transactions and DataSources
EDC uses transactional operations when persisting data to stores that support them such as the Postgres-backed implementations. Transaction code blocks are written using the TransactionContext
, which can be injected:
public class SampleExtension implements ServiceExtension {
@Inject
private TransactionContext transactionContext;
@Override
public void initialize(ServiceExtensionContext context) {
new CustomServiceImpl(transactionContext);
}
}
and then:
return transactionContext.execute(() -> {
// perform transactional work
var result = ... // get the result
return result;
});
The TransactionContext
supports creating a new transaction or joining an existing transaction associated with the current thread:
transactionContext.execute(() -> {
// perform work
// in another service, execute additional work in a transactional context and they will be part of the same transaction
return transactionContext.execute(() -> {
// more work
return result;
});
}
);
EDC also provides a DataSourceRegistry
for obtaining JDBC DataSource
instances that coordinate with the TransactionContext
:
public class SampleExtension implements ServiceExtension {
@Inject
private DataSourceRegistry dataSourceRegistry;
@Override
public void initialize(ServiceExtensionContext context) {
new CustomServiceImpl(dataSourceRegistry);
}
}
The registry can then be used in a transactional context to obtain a DataSource
:
transactionContext.execute(() -> {
var datasource = dataSourceRegistry.resolve(DATASOURCE_NAME);
try (var connection = datasource.getConnection()) {
// do work
return result;
}
});
EDC provides datasource connection pooling based on Apache Commons Pool. As long as the DataSource
is accessed in the same transactional context, it will automatically return the same pooled connection, as EDC manages the association of connections with transactional contexts.
Validation
Extensions may provide custom validation for entities using the JsonObjectValidatorRegistry
. For example, to register an asset validator:
public class SampleExtension implements ServiceExtension {
@Inject
private JsonObjectValidatorRegistry validatorRegistry;
@Override
public void initialize(ServiceExtensionContext context) {
validatorRegistry.register(Asset.EDC_ASSET_TYPE, (asset) -> {
return ValidationResult.success();
});
}
}
Note that all entities are in Json-Ld expanded form, so you’ll need to understand the intricacies of working with the JSON-P API and Json-Ld.
Serialization
EDC provides several services related to JSON serialization. The TypeManager
manages ObjectMapper
instances in a runtime associated with specific serialization contexts. A serialization context provides ObjectMapper
instances configured based on specific requirements. Generally speaking, never create an ObjectMapper
directly since it is a heavyweight object. Promote reuse by obtaining the default mapper or creating one from a serialization context with the TypeManager
.
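For illustration, here is a sketch of obtaining mappers from the TypeManager; the "json-ld" context name, the registerTypes call, and MyCustomDto are assumptions, so check your EDC version:
public class SampleExtension implements ServiceExtension {

    @Inject
    private TypeManager typeManager;

    @Override
    public void initialize(ServiceExtensionContext context) {
        var defaultMapper = typeManager.getMapper();         // reuse the default ObjectMapper
        var jsonLdMapper = typeManager.getMapper("json-ld");  // mapper of a named serialization context (name assumed)
        typeManager.registerTypes(MyCustomDto.class);         // make custom types known to the managed mappers
    }
}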
If an extension is required to work with Json-Ld
, use the JsonLd
service, which includes facilities for caching Json-Ld contexts and performing expansion.
HTTP Dispatching
Extensions should use the EdcHttpClient
to make remote HTTP/S
calls. The client is based on the OkHttp library, includes retry logic, and can be obtained through injection.
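A minimal sketch, assuming EdcHttpClient exposes an execute(Request) method that returns an OkHttp Response (the endpoint URL is hypothetical):
public class SampleExtension implements ServiceExtension {

    @Inject
    private EdcHttpClient httpClient;

    @Override
    public void initialize(ServiceExtensionContext context) {
        var request = new Request.Builder()
                .url("https://partner.example.com/api/health") // hypothetical endpoint
                .get()
                .build();

        try (var response = httpClient.execute(request)) { // retries are handled by the client
            context.getMonitor().info("Remote endpoint returned " + response.code());
        } catch (IOException e) {
            context.getMonitor().severe("Call failed", e);
        }
    }
}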
Secrets Handling and the Vault
All secrets should be stored in the Vault
. EDC supports several implementations, including one backed by Hashicorp Vault.
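For example, assuming the Vault service offers resolveSecret and storeSecret methods (the alias shown is hypothetical):
public class SampleExtension implements ServiceExtension {

    @Inject
    private Vault vault;

    @Override
    public void initialize(ServiceExtensionContext context) {
        // secrets are referenced by alias and resolved where they are needed
        var apiToken = vault.resolveSecret("partner-api-token"); // returns null if the alias is unknown
        // store a secret under an alias
        vault.storeSecret("partner-api-token", "changeme");
    }
}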
Documenting Extensions
Remember to document your extensions! EDC AutoDoc is a Gradle plugin that automates this process and helps ensure documentation remains in sync with code.
1.9 - Testing
Covers how to use EDC test runtimes.
EDC provides a JUnit test fixture for running automated integration tests. The EDC JUnit runtime offers a number of advantages:
- Fast build time since container images do not need to be built and deployed
- Launch and debug tests directly within an IDE
- Easily write asynchronous tests using libraries such as Awaitility
The JUnit runtime can be configured to include custom extensions. Running multiple instances as part of a single test setup is also possible. The following demonstrates how to set up and launch a basic test using JUnit’s RegisterExtension
annotation and the RuntimePerClassExtension
:
@EndToEndTest
class Basic01basicConnectorTest {
@RegisterExtension
static RuntimeExtension connector = new RuntimePerClassExtension(new EmbeddedRuntime(
"connector",
emptyMap(),
":basic:basic-01-basic-connector"
));
@Test
void shouldStartConnector() {
assertThat(connector.getService(Clock.class)).isNotNull();
}
}
For more details and examples, check out the EDC Samples system tests.
2 - Contributors Manual
0. Intended audience
This document is aimed at software developers who have already read the adopter documentation and
want to contribute code to the Eclipse Dataspace Components project.
Its purpose is to explain in greater detail the core concepts of EDC. After reading through it, readers should have a
good understanding of EDC's inner workings, implementation details and some of the advanced concepts.
So if you are a solution architect looking for a high-level description on how to integrate EDC, or a software engineer
who wants to use EDC in their project, then this guide is not for you. More suitable resources can be found
here and here respectively.
1. Getting started
1.1 Prerequisites
This document presumes a good understanding and proficiency in the following technical areas:
- JSON and JSON-LD
- HTTP/REST
- relational databases (PostgreSQL) and transaction management
- git and git workflows
Further, the following tools are required:
- Java Development Kit 17+
- Gradle 8+
- a POSIX compliant shell (bash, zsh,…)
- a text editor
- CLI tools like
curl
and git
This guide will use CLI tools as common denominator, but in many cases graphical alternatives exist (e.g. Postman,
Insomnia, some database client, etc.), and most developers will likely use IDEs like IntelliJ or VSCode. We are of
course aware of them and absolutely recommend their use, but we simply cannot cover and explain every possible
combination of OS, tool and tool version.
Note that Windows is not a supported OS at the moment. If Windows is a must, we recommend using WSL2 or setting up a
Linux VM.
1.2 Terminology
- runtime: a Java process executing code written in the EDC programming model (e.g. a control plane)
- distribution: a specific combination of modules, compiled into a runnable form, e.g. a fat JAR file, a Docker image
etc.
- launcher: a runnable Java module that pulls in other modules to form a distribution. “Launcher” and “distribution”
are sometimes used synonymously
- connector: a control plane runtime and 1…N data plane runtimes. Sometimes used interchangeably with “distribution”.
- consumer: a dataspace participant who wants to ingest data under the access rules imposed by the provider
- provider: a dataspace participant who offers data assets under a set of access rules
1.3 Architectural and coding principles
When EDC was originally created, there were a few fundamental architectural principles around which we designed and
implemented all dataspace components. These include:
- asynchrony: all external mutations of internal data structures happen in an asynchronous fashion. While the REST
requests to trigger the mutations may still be synchronous, the actual state changes happen in an asynchronous and
persistent way. For example starting a contract negotiation through the API will only return the negotiation’s ID, and
the control plane will cyclically advance the negotiation’s state.
- single-thread processing: the control plane is designed around a set of sequential state
machines that employ pessimistic locking to guard
against race conditions and other problems.
- idempotency: requests that do not trigger a mutation are idempotent. The same is true when provisioning external
resources.
- error-tolerance: the design goal of the control plane was to favor correctness and reliability over (low) latency.
That means, even if a communication partner may not be reachable due to a transient error, it is designed to cope with
that error and attempt to overcome it.
Prospective contributors to the Eclipse Dataspace Components are well-advised to follow these principles and build their
applications around them.
There are other, less technical principles of EDC such as simplicity and self-contained-ness. We are extremely careful
when adding third-party libraries or technologies to maintain a simple, fast and un-opinionated platform.
Take a look at our coding principles and our
styleguide.
2. The control plane
Simply put, the control plane is the brains of a connector. Its tasks include handling protocol and API requests,
managing various internal asynchronous processes, validating policies, performing participant authentication and
delegating the data transfer to a data plane. Its job is to handle (almost) all business logic. For that, it is designed
to favor reliability over low latency. It does not directly transfer data from source to destination.
The primary way to interact with a connector’s control plane is through the Management API, all relevant Java modules
are located at extensions/control-plane/api/management-api
.
2.1 Entities
Detailed documentation about entities can be found here
2.2 Programming Primitives
This chapter describes the fundamental architectural and programming paradigms that are used in EDC. Typically, they
are not related to one single extension or feature area, they are of overarching character.
Detailed documentation about programming primitives can be found here
2.3 Serialization via JSON-LD
JSON-LD is a JSON-based format for serializing Linked Data, and allows adding
specific “context” to the data expressed as JSON format.
It has been a W3C standard since 2014.
Detailed information about how JSON-LD is used in EDC can be found here
2.4 Extension model
One of the principles EDC is built around is extensibility. This means that by simply putting a Java module on the
classpath, the code in it will be used to enrich and influence the runtime behaviour of EDC. For instance, contributing
additional data persistence implementations can be achieved this way. This is sometimes also referred to as a “plugin”.
Detailed documentation about the EDC extension model can be found here
2.5 Dependency injection deep dive
In EDC, dependency injection is available to inject services into extension classes (implementors of the
ServiceExtension
interface). The ServiceExtensionContext
acts as a service registry, and since it’s not quite an IoC
container, we’ll refer to it simply as the “context” in this chapter.
Detailed documentation about the EDC dependency injection mechanism can be
found here
2.6 Service layers
Like many other applications and application frameworks, EDC is built upon a vertically oriented set of different layers
that we call “service layers”.
Detailed documentation about the EDC service layers can be found here
2.7 Policy Monitor
The policy monitor is a component that watches over on-going transfers and ensures that the policies associated with the
transfer are still valid.
Detailed documentation about the policy monitor can be found here
2.8 Protocol extensions (DSP)
This chapter describes how EDC abstracts the interaction between connectors in a Dataspace through protocol extensions
and introduces the current default implementation which follows the Dataspace
protocol specification.
Detailed documentation about protocol extensions can be found here
3. (Postgre-)SQL persistence
PostgreSQL is a very popular open-source database and it has a large community and vendor adoption. It is also EDC's data
persistence technology of choice.
Every store in EDC that is intended to persist state comes out of
the box with two implementations:
- in-memory
- sql (PostgreSQL dialect)
By default, the in-memory stores are provided via dependency
injection; the SQL variants can be used by simply adding the relevant extensions (e.g. asset-index-sql
,
contract-negotiation-store-sql
, …) to the classpath.
Detailed documentation about EDC's PostgreSQL implementations can be found here
4. The data plane
4.1 Data plane signaling
Data Plane Signaling (DPS) is the communication protocol that is used between control planes and data planes. Detailed
information about it and other topics such as data plane self-registration and public API authentication can be found
here.
4.2 Writing a custom data plane extension (sink/source)
The EDC Data Plane is built on top of the Data Plane Framework (DPF), which can be used for building custom data planes.
The framework has extensibility points for supporting different data sources and sinks (e.g., S3, HTTP, Kafka) and can
perform direct streaming between different source and sink types.
Detailed documentation about writing a custom data plane extension can be found here.
4.3 Writing a custom data plane (using only DPS)
Since the communication between control plane and data plane is well-defined in the DPS protocol, it’s possible
to write a data plane from scratch (without using EDC and DPF) and make it work with the EDC control plane.
Detailed documentation about writing a custom data plane can be found here.
5. Development best practices
5.1 Writing Unit-, Component-, Integration-, Api-, EndToEnd-Tests
Like any other project, EDC has established a set of recommendations and rules that contributors must
adhere to in order to guarantee smooth collaboration with the project. Note that familiarity with our formal
contribution guidelines is assumed. There are additional recommendations we have compiled that
are relevant when deploying and administering EDC instances.
5.2 Coding best practices
Code should be written to conform with the EDC style guide.
A frequent subject of critique in pull requests is logging. Spurious and very verbose log lines like “Entering/Leaving
method X” or “Performing action Z” should be avoided because they pollute the log output and don’t contribute any value.
Please find detailed information about logging here.
5.3 Testing best practices
Every class in the EDC code base should have a test class that verifies the correct functionality of the code.
Detailed information about testing can be found here.
5.4 Other best practices
Please find general best practices and recommendations here.
6. Further concepts
6.1 Autodoc
In EDC there is an automated way to generate basic documentation about extensions, plug points, SPI modules and
configuration settings. To achieve this, simply annotate respective elements directly in Java code:
@Extension(value = "Some supercool extension", categories = {"category1", "category2"})
public class SomeSupercoolExtension implements ServiceExtension {
// default value -> not required
@Setting(value = "Some string config property", type = "string", defaultValue = "foobar", required = false)
public static final String SOME_STRING_CONFIG_PROPERTY = "edc.some.supercool.string";
//no default value -> required
@Setting(value = "Some numeric config", type = "integer", required = true)
public static final String SOME_INT_CONFIG_PROPERTY = "edc.some.supercool.int";
// ...
}
During compilation, the EDC build plugin generates documentation for each module as structured JSON.
Detailed information about autodoc can be found here
6.2 Adapting the Gradle build
The EDC build process is based on Gradle and as such uses several plugins to customize the build and centralize certain
functionality. One of these plugins has already been discussed in the previous chapter. All of EDC’s
plugins are hosted in the GradlePlugins repository.
The most important plugin is the “EDC build” plugin. It consists essentially of these things:
- a plugin class: extends Plugin<Project> from the Gradle API to hook into the Gradle task infrastructure
- extensions: POJOs that serve as model classes for configuration.
- conventions: individual mutations that are applied to the project. For example, we use conventions to add some
standard repositories to all projects, or to implement publishing to OSSRH and MavenCentral in a generic way.
- tasks: executable Gradle tasks that perform a certain action like merging OpenAPI Specification documents.
It is important to note that a Gradle build is separated in phases, namely Initialization, Configuration and
Execution (see documentation). Some of our
conventions as well as other plugins have to be applied in the Configuration phase.
6.3 The EDC Release process
Generally speaking, EDC publishes -SNAPSHOT
build artifacts to OSSRH Snapshots and release build artifacts to
MavenCentral.
We further distinguish our artifacts into “core” modules and “technology” modules. The former consists of the Connector,
IdentityHub and FederatedCatalog as well as the RuntimeMetamodel and the aforementioned GradlePlugins. The latter
comprises technology-specific implementations of core SPIs, for example cloud-based object storage or Vault
implementations.
6.3.1 Releasing “core” modules
The build processes for two module classes are separated from one another. All modules in the “core” class are published
under the same Maven group-id org.eclipse.edc
. This makes it necessary to publish them all at the same time, because
once publishing of an artifact of a certain group-id is completed, no artifacts with the same group-id can be published
anymore.
That means, that we cannot publish the Connector repository, then the IdentityHub repository and finally the
FederatedCatalog repository, because by the time we get to IdentityHub, the publishing of Connector would already
be complete and the publishing of IdentityHub would fail.
The way to get around this limitation is to merge all “core” modules into one big root project, where the project
structure is synthesized and contains all “core” modules as subprojects, and to publish the entire root project. The
artifact names remain unchanged.
This functionality is implemented in the Release repository, which also
contains GitHub Actions workflows to publish snapshots, nightly builds and release builds.
6.3.2 Releasing “technology” modules
Building and publishing releases for “technology” modules is much simpler, because they do not have to be built together
with any other repository. With them, we can employ a conventional build-and-publish approach.
2.1 - Best practices and recommendations
1. Preface
This document aims at giving guidelines and recommendations to developers who want to use or extend EDC or EDC modules
in their applications, to DevOps engineers who are tasked with packaging and operating EDC modules as runnable
application images.
Please understand this document as a recommendation from the EDC project committers team that they compiled to the best
of their knowledge. We realize that use case scenarios are plentiful and requirements vary, and not every best practice
is applicable everywhere. You know your use case best.
This document is not an exhaustive list of prescribed steps that will shield adopters from any conceivable harm or
danger, but rather should serve as a starting point for engineers to build upon.
Finally, it should go without saying that the software of the EDC project is distributed “as is” and committers of EDC
take no responsibility or liability, direct or indirect, for any harm or damage caused by the use of it. This document
does not change that.
2. Security recommendations
2.1 Exposing APIs to the internet
The EDC code base has several outward-facing APIs, exclusively implemented as HTTP/REST endpoints. These have different
purposes, different intended consumers and thus different security implications.
As a general rule, APIs should not be exposed directly to the internet. That does not mean that they shouldn’t be
accessible via the internet - obviously the connector and related components cannot work without a network connection.
It only means that API endpoints should not directly face the internet; instead, there should be appropriate
infrastructure in place.
It also means that we advise extreme caution when making APIs accessible via the internet - by default only the DSP
API and the data plane’s public API should be reachable from the internet; the others (management API, signaling
API,…) are intended only for local network access, e.g. within a Kubernetes cluster.
Corporate security policies might require that only HTTPS/TLS connections be used, even between pods in a Kubernetes
cluster. While the EDC project makes no argument pro or contra, that is certainly an idea worth considering in high
security environments.
The key take-away is that all of EDC’s APIs - if accessible outside the local network - should only be accessible
through separate software components such as API gateways or load balancers. These are specialized tools with the sole
purpose of performing authentication, authorization, rate limiting, IP blacklisting/whitelisting etc.
There is a plethora of ready-made components available, both commercial and open-source; therefore the EDC project will
not provide that functionality. Feature requests and issues to that effect will be ignored.
In the particular case of the DSP API, the same principle holds, although with the exception of authentication and
authorization. That is handled by the DSP protocol
itself.
We have a rudimentary token-based API security module available, which can be used to secure the connection API gateway
<-> connector if so desired. It should be noted that it is not designed to act as an ingress point!
TL;DR: don’t expose any APIs if you can help it, but if you must, use available tools to harden the ingress
2.2 Use only official TLS certificates/CAs
Typically, JVMs ship with trust stores that contain a number of widely accepted CAs. Any attempts to package additional
CAs/certificates with runtime base images are discouraged, as that would be problematic because:
- scalability: in heterogeneous networks one cannot assume such a custom CA to be accepted by the counterparty
- maintainability: TLS certificates expire, so there is a chance that mandatory software rollouts become necessary
because of expired certificates, lest the network break down completely.
- security: there have been a number of issues with CAs
(1,
2), so adding non-sanctioned
ones brings a potential security weakness
2.3 Use appropriate network infrastructure
As discussed earlier, EDC does not (and will not) provide or implement tooling to harden network ingress, as that is
an orthogonal concern, and there are tools better suited for that.
We encourage every connector deployment to plan and design their network layout and infrastructure right from the onset,
before even writing code. Adding that later can be difficult and time-consuming.
For example, in Kubernetes deployments, which are the de-facto industry standard, networking can be taken on by ingress
controllers and load balancers. Additional external infrastructure, such as API gateways, is recommended to handle
authentication, authorization and request throttling.
2.4 A word on authentication and authorization
EDC does not have a concept of a “user account” as many client-facing applications do. In terms of identity, the
connector itself represents a participant in a dataspace, so that is the level of granularity the connector operates on.
That means that client-consumable APIs such as the Management API only have rudimentary security. This is by design and
must be solved out-of-band.
The reasoning behind this is that requirements for authentication and authorization are so diverse and heterogeneous,
that it is virtually impossible for the EDC project to satisfy them all, or even most of them. In addition, there is
very mature software available that is designed for this very use case.
Therefore, adopters of EDC have two options to consider:
- develop a custom AuthenticationService (or even a ContainerRequestFilter) that integrates with an IDP
- use a dedicated API gateway (recommended)
Both these options are viable, and may have merit depending on the use case.
2.5 Docker builds
As Docker is a very popular method to build and ship applications, we put forward the following recommendations:
- use official Eclipse Temurin base images for Java
- use dedicated non-root users: in your Dockerfile, add the following lines
ARG APP_USER=docker
ARG APP_UID=10100
RUN addgroup --system "$APP_USER"
RUN adduser \
    --shell /sbin/nologin \
    --disabled-password \
    --gecos "" \
    --ingroup "$APP_USER" \
    --no-create-home \
    --uid "$APP_UID" \
    "$APP_USER"
USER "$APP_USER"
2.6 Use proper database security
Database connections are secured with a username and a password. Please choose non-default users and strong passwords.
In addition, database credentials should be stored in an HSM (vault).
Further, the roles of the technical user for the connector should be limited to SELECT
, INSERT
, UPDATE
, and
DELETE
. There is no reason for that user to have permissions to modify databases, tables, permissions or execute other
DDL statements.
2.7 Store sensitive data in a vault
While the default behaviour of EDC is that configuration values are taken either from environment variables, system
properties or from configuration extensions, it is highly recommended to store sensitive data in a vault
when
developing EDC extensions.
Here is a (non-exhaustive) list of examples of such sensitive values:
- database credentials
- cryptographic keys, e.g. private keys in an asymmetric key pair
- symmetric keys
- API keys/tokens
- credentials for other third-party services, even if temporary
Sensitive values should not be passed through multiple layers of code. Instead, they should be referenced by their
alias, and be resolved from the vault
wherever they are used. Do not store sensitive data as class members but use
local variables that are garbage-collected when leaving execution scope.
3. General recommendations
3.1 Use only official releases
We recommend using only official releases of our components. The latest version can be obtained from the project’s
GitHub releases page and the modules are available from
MavenCentral.
Snapshots are less stable, less tested and less reliable than release versions and they make for non-repeatable builds.
That said, we realize that sometimes living on the bleeding edge of technology is thrilling, or in some circumstances
even necessary. EDC components publish a -SNAPSHOT
build on every commit to the main
branch, so there could be several
such builds per day, each overwriting the previous one. In addition, we publish nightly builds, that are versioned
<VERSION>-<YYYYMMDD>-SNAPSHOT
and those don’t get overwritten. For more information please refer to the respective
documentation.
3.2 Dependency hygiene
It should be at the top of every software engineer’s todo list to keep application dependencies current, to avoid
security issues, minimize technical debt and prevent difficult upgrade paths. We strongly recommend using a tool to keep
dependencies up-to-date, or at least notify when a new version is out.
This is especially true for EDC versions. Since the project has not yet reached a state of equilibrium, where we can
follow SemVer rules, major (potentially breaking) changes and incompatibilities are to be expected on every version
increment.
Internally we use dependabot to maintain our dependencies, as it
is well integrated with GitHub actions, but this is not an endorsement. Alternatives exist.
3.3 Use database persistence wherever possible
While the connector runtime provides in-memory persistence by default, it is recommended to use database persistence in
production scenarios, if possible. Hosting the persistence of several modules (e.g. AssetIndex and
PolicyDefinitionStore) in the same database is generally OK.
This is because although memory stores are fast and easy to use, they have certain drawbacks, for instance:
- clustered deployments: multiple replicas don’t have the same data, thus they would operate on inconsistent data
- security: if an attacker is able to create a memdump of the pod, they gain access to all application data
- memory consumption: Kubernetes has no memory limits out-of-the-box, so depending on the amount of data that is stored
by a connector, this could cause runtime problems when databases start to grow, especially on resource constrained
deployments.
3.4 Use proper Vault
implementations
Similar to the previous section, proper HSM (Vault
) implementations should be used in all but the most basic test and
demo scenarios. Vaults are used to store the most sensitive information, and by
default EDC provides only an in-memory variant.
3.5 Use UUIDs as object identifiers
While we don’t enforce any particular shape or form for object identifiers, we recommend using UUIDs because they are
reasonably unique, reasonably compact, and reasonably available on most tech stacks. Use the JDK UUID
implementation. It’s good enough.
2.2 - Autodoc Gradle plugin
1. Introduction
In EDC, the autodoc plugin is intended to be used to generate metamodel manifests for every Gradle module, which can
then be transformed into Markdown or HTML files and subsequently rendered for publication as static web content.
The plugin code can be found in the GradlePlugins GitHub Repository.
The autodoc
plugin hooks into the Java compiler task (compileJava
) and generates a module manifest file that
contains meta information about each module. For example, it exposes all required and provided dependencies of an EDC
ServiceExtension
.
2. Module structure
The autodoc
plugin is located at plugins/autodoc
and consists of the following modules:
- autodoc-plugin: contains the actual Gradle Plugin and an Extension to configure the plugin. This module is published to MavenCentral.
- autodoc-processor: contains an AnnotationProcessor that hooks into the compilation process and builds the manifest file. Published to MavenCentral.
- autodoc-converters: used to convert JSON manifests to Markdown or HTML
3. Usage
In order to use the autodoc
plugin we must follow a few simple steps. All examples use the Kotlin DSL.
3.1 Add the plugin to the buildscript
block of your build.gradle.kts
:
buildscript {
repositories {
maven {
url = uri("https://oss.sonatype.org/content/repositories/snapshots/")
}
}
dependencies {
classpath("org.eclipse.edc.autodoc:org.eclipse.edc.autodoc.gradle.plugin:<VERSION>>")
}
}
Please note that the repositories
configuration can be omitted, if the release version of the plugin is used.
3.2 Apply the plugin to the project:
There are two options to apply a plugin. For multi-module builds this should be done at the root level.
- via the plugins block:
plugins {
    id("org.eclipse.edc.autodoc")
}
- using the iterative approach, useful when applying to allprojects or subprojects:
subprojects {
    apply(plugin = "org.eclipse.edc.autodoc")
}
The autodoc
plugin exposes the following configuration values:
- the processorVersion: tells the plugin which version of the annotation processor module to use. Set this value if
the version of the plugin and of the annotation processor diverge. If this is omitted, the plugin will use its own
version. Please enter just the SemVer-compliant version string; no groupId or artifactName is needed.
configure<org.eclipse.edc.plugins.autodoc.AutodocExtension> {
    processorVersion.set("<VERSION>")
}
Typically, you do not need to configure this and can safely omit it.
The plugin will then generate an edc.json
file for every module/gradle project.
4. Merging the manifests
There is a Gradle task readily available to merge all the manifests into one large manifest.json
file. This comes in
handy when the JSON manifest is to be converted into other formats, such as Markdown, HTML, etc.
To do that, execute the merge task on a shell.
By default, the merged manifests are saved to <rootProject>/build/manifest.json
. This destination file can be
configured using a task property:
// delete the merged manifest before the first merge task runs
tasks.withType<MergeManifestsTask> {
destinationFile = YOUR_MANIFEST_FILE
}
Be aware that due to the multithreaded nature of the merger task, every subproject’s edc.json
gets appended to the
destination file, so it is a good idea to delete that file before running the mergeManifest
task. Gradle can take care
of that for you though:
// delete the merged manifest before the first merge task runs
rootProject.tasks.withType<MergeManifestsTask> {
doFirst { YOUR_MANIFEST_FILE.delete() }
}
5. Rendering manifest files as Markdown or HTML
Manifests get created as JSON, which may not be ideal for end-user consumption. To convert them to HTML or Markdown,
execute the following Gradle task:
./gradlew doc2md # or doc2html
This looks for manifest files and converts them all to either Markdown (doc2md
) or static HTML (doc2html
). Note that
if you merged the manifests before (mergeManifests
), then the merged manifest file gets converted too.
The resulting *.md
or *.html
files are located next to the edc.json
file in <module-path>/build/
.
6. Using published manifest files (MavenCentral)
Manifest files (edc.json
) are published alongside the binary jar files, sources jar and javadoc jar to MavenCentral
for easy consumption by client projects. The manifest is published using type=json
and classifier=manifest
properties.
Client projects that want to download manifest files (e.g. for rendering static web content), simply define a Gradle
dependency like this (kotlin DSL):
implementation("org.eclipse.edc:<ARTIFACT>:<VERSION>:manifest@json")
For example, for the :core:control-plane:control-plane-core
module in version 0.4.2-SNAPSHOT
, this would be:
implementation("org.eclipse.edc:control-plane-core:0.4.2-SNAPSHOT:manifest@json")
When the dependency gets resolved, the manifest file will get downloaded to the local gradle cache, typically located at
.gradle/caches/modules-2/files-2.1
. So in the example the manifest would get downloaded to
~/.gradle/caches/modules-2/files-2.1/org.eclipse.edc/control-plane-core/0.4.2-SNAPSHOT/<HASH>/control-plane-core-0.4.2-SNAPSHOT-manifest.json
2.3 - OpenApi spec
It is possible to generate an OpenApi spec in the form of a *.yaml
file by invoking two simple Gradle tasks.
Generate *.yaml
files
Every module (=subproject) that contains REST endpoints is scanned for Jakarta Annotations which are then used to
generate a *.yaml
specification for that particular module. This means that there is one *.yaml
file per module,
resulting in several *.yaml
files.
Those files are named MODULENAME.yaml
, e.g. observability.yaml
or control.yaml
.
To re-generate those files, simply invoke the corresponding Gradle task of the Swagger plugin.
This will generate all *.yaml
files in the resources/openapi/yaml
directory.
Gradle Plugins
We use the official Swagger Gradle plugins:
"io.swagger.core.v3.swagger-gradle-plugin"
: used to generate a *.yaml
file per module
So in order for a module to be picked up by the Swagger Gradle plugin, simply add it to the build.gradle.kts
:
// in yourModule/build.gradle.kts
val rsApi: String by project
plugins {
`java-library`
id(libs.plugins.swagger.get().pluginId) //<-- add this
}
Categorizing your API
All APIs in EDC should be “categorized”, i.e. they should belong to a certain group of APIs.
Please see this decision record
for reference. In order to add your module to one of the categories, simply add this block to your module’s build.gradle.kts
:
plugins {
`java-library`
id(libs.plugins.swagger.get().pluginId)
}
dependencies {
// ...
}
// add this block:
edcBuild {
swagger {
apiGroup.set("management-api")
}
}
This tells the build plugin how to categorize your API and SwaggerHub will list it accordingly.
Note: currently we have categories for control-api
and management-api
How to generate code
This feature neither exposes the generated files through a REST endpoint providing any sort of live try-out
feature, nor does it generate any sort of client code. A visual documentation page for our APIs is served
through SwaggerHub.
However, there is a Gradle plugin capable of generating client code.
Please refer to the official documentation.
2.4 - Data Persistence with PostgreSQL
By default, the in-memory
stores are provided by the dependency injection, the sql
implementations can be used by
simply registering the relative extensions (e.g. asset-index-sql
, contract-negotiation-store-sql
, …).
1. Configuring DataSources
For using sql
extensions, a DataSource
is needed, and it should be registered on the DataSourceRegistry
service.
The sql-pool-apache-commons
extension is responsible for creating and registering pooled data sources starting from
configuration. At least one data source named "default"
is required.
edc.datasource.default.url=...
edc.datasource.default.name=...
edc.datasource.default.password=...
It is recommended to hold these values in the Vault rather than in configuration. The config key (e.g.
edc.datasource.default.url
) serves as secret alias. If no vault entries are found for these keys, they will be
obtained from the configuration. This is unsafe and should be avoided!
Other datasources can be defined using the same settings structure:
edc.datasource.<datasource-name>.url=...
edc.datasource.<datasource-name>.name=...
edc.datasource.<datasource-name>.password=...
<datasource-name>
is a string that can then be used by a store’s configuration to select specific data sources.
1.2 Using custom datasource in stores
Using a custom datasource in a store can be done by configuring the setting:
edc.sql.store.<store-context>.datasource=<datasource-name>
Note that <store-context>
can be an arbitrary string, but it is recommended to use a descriptive name. For example,
the SqlPolicyStoreExtension
defines a data source name as follows:
@Extension("SQL policy store")
public class SqlPolicyStoreExtension implements ServiceExtension {
@Setting(value = "The datasource to be used", defaultValue = DataSourceRegistry.DEFAULT_DATASOURCE)
public static final String DATASOURCE_NAME = "edc.sql.store.policy.datasource";
@Override
public void initialize(ServiceExtensionContext context) {
var datasourceName = context.getConfig().getString(DATASOURCE_NAME, DataSourceRegistry.DEFAULT_DATASOURCE);
//...
}
}
2. SQL Statement abstraction
EDC does not use any sort of Object-Relational Mapper (ORM), which would automatically translate Java object graphs to SQL
statements. Instead, EDC uses pre-canned parameterized SQL statements.
We typically distinguish between literals such as table names or column names and “templates”, which are SQL statements
such as INSERT
.
Both are declared as getters in an interface that extends the SqlStatements
interface, with literals being default
methods and templates being implemented by a BaseSqlDialectStatements
class.
A simple example could look like this:
public class BaseSqlDialectStatements implements SomeEntityStatements {
@Override
public String getDeleteByIdTemplate() {
return executeStatement().delete(getSomeEntityTable(), getIdColumn());
}
@Override
public String getUpdateTemplate() {
return executeStatement()
.column(getIdColumn())
.column(getSomeStringFieldColumn())
.column(getCreatedAtColumn())
.update(getSomeEntityTable(), getIdColumn());
}
//...
}
Note that the example makes use of the SqlExecuteStatement
utility class, which should be used to construct all SQL
statements - except queries. Queries are special in that they have a highly dynamic aspect to them. For more
information, please read on in this chapter.
As a general rule of thumb, issuing multiple statements (within one transaction) should be preferred over writing
complex nested statements. It is very easy to inadvertently create an inefficient or wasteful statement that causes high
resource load on the database server. The latency that is introduced by sending multiple statements to the DB server is
likely negligible in comparison, especially because EDC is architected towards reliability rather than latency.
3. Querying PostgreSQL databases
Generally speaking, the basis for all queries is a QuerySpec
object. This means that at some point a QuerySpec
must
be translated into an SQL SELECT
statement. The place to do this is the SqlStatements
implementation often called
BaseSqlDialectStatements
:
@Override
public SqlQueryStatement createQuery(QuerySpec querySpec) {
var select = "SELECT * FROM %s".formatted(getSomeEntityTable());
return new SqlQueryStatement(select, querySpec, new SomeEntityMapping(this), operatorTranslator);
}
Now, there are a few things to unpack here:
- the SELECT statement serves as starting point for the query
- individual WHERE clauses get added by parsing the filterExpression property of the QuerySpec
- LIMIT and OFFSET clauses get appended based on QuerySpec#limit and QuerySpec#offset
- the SomeEntityMapping maps the canonical form onto the SQL literals
- the operatorTranslator is used to convert operators such as = or like into SQL operators
Theoretically it is possible to map every schema onto every other schema, given that they are of equal cardinality. To
achieve that, EDC introduces the notion of a canonical form, which is our internal working schema for entities. In
other words, this is the schema in which objects are represented internally. If we ever support a wider variety of
translation and transformation paths, everything would have to be transformed into that canonical format first.
In actuality the canonical form of an object is defined by the Java class and its field names. For instance, a query
for contract negotiations must be specified using the field names of a ContractNegotiation
object:
public class ContractNegotiation {
// ...
private ContractAgreement contractAgreement;
// ...
}
public class ContractAgreement {
// ...
private final String assetId;
}
Consequently, contractAgreement.assetId
would be valid, whereas contract_agreement.asset_id
would be invalid. In other words,
the left-hand operand reads as if we were traversing the Java object graph. This is what we call the canonical
form. Note the omission of the root object contractNegotiation!
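To make this concrete, here is a sketch of how such a filter could be built, assuming the usual QuerySpec and Criterion builder methods (the asset id is made up):
// the left-hand operand uses the canonical (Java field) form, not SQL column names
var query = QuerySpec.Builder.newInstance()
        .filter(List.of(new Criterion("contractAgreement.assetId", "=", "asset-1")))
        .offset(0)
        .limit(10)
        .build();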
3.1 Translation Mappings
Translation mappings are EDC's way to map a QuerySpec
to SQL statements. At its core, a translation mapping contains a Map
that associates a Java entity field name with the related SQL column name.
In order to decouple the canonical form from the SQL schema (or any other database schema), a mapping scheme exists to
map the canonical model onto the SQL model. This TranslationMapping
is essentially a graph-like metamodel of the
entities: every Java entity has a related mapping class that contains its field names and the associated SQL column
names. The convention is to append *Mapping
to the class name, e.g. PolicyDefinitionMapping
.
3.1.1 Mapping primitive fields
Primitive fields are stored directly as columns in SQL tables. Thus, mapping primitive data types is trivial: a simple
mapping from one onto the other is necessary, for example, ContractNegotiation.counterPartyAddress
would be
represented in the ContractNegotiationMapping
as an entry
"counterPartyAddress"->"counterparty_address"
When constructing WHERE/AND
clauses, the canonical property is simply replaced by the respective SQL column name.
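A sketch of what such a mapping class could look like (class and accessor names are illustrative, not the actual EDC ones):
public class SomeEntityMapping extends TranslationMapping {
    public SomeEntityMapping(SomeEntityStatements statements) {
        // primitive field: canonical field name -> SQL column name
        add("counterPartyAddress", statements.getCounterPartyAddressColumn());
        // complex fields are handled differently (see the next section)
    }
}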
3.1.2 Mapping complex objects
For fields that are of complex type, such as the ContractNegotiation.contractAgreement
field, it is necessary to
accommodate this, depending on how the relational data model is defined. There are two basic variants we use:
Option 1: using foreign keys
In this case, the referenced object is stored in a separate table using a foreign key relation. Thus, the canonical
property (contractAgreement
) is mapped onto the SQL schema using another *Mapping
class. Here, this would be the
ContractAgreementMapping
. When resolving a property in the canonical format (contractAgreement.assetId
), this means
we must recursively descend into the model graph and resolve the correct SQL expression.
Note: mapping one-to-many
relations (= arrays/lists) with foreign keys is not implemented at this time.
Option 2a: encoding the object
Another popular way to store complex objects is to encode them in JSON and store them in a VARCHAR
column. In
PostgreSQL we use the specific JSON
type instead of VARCHAR
. For example, the TransferProcess
is stored in a table
called edc_transfer_process
, its DataAddress
property is encoded in JSON and stored in a JSON
field.
Querying for TransferProcess
objects: when mapping the filter expression
contentDataAddress.properties.somekey=somevalue
, the contentDataAddress
is represented as JSON, therefore in the
TransferProcessMapping
the contentDataAddress
field maps to a JsonFieldTranslator
:
public TransferProcessMapping(TransferProcessStoreStatements statements) {
// ...
add(FIELD_CONTENTDATAADDRESS, new JsonFieldTranslator(statements.getContentDataAddressColumn()));
// ...
}
which would then get translated to:
SELECT *
FROM edc_transfer_process
-- omit LEFT OUTER JOIN for readability
WHERE content_data_address -> 'properties' ->> 'somekey' = 'somevalue'
Note that JSON queries are specific to PostgreSQL and are not portable to other database technologies!
Option 2b: encoding lists/arrays
Like accessing objects, accessing lists/arrays of objects is possible using special JSON operators. In this case the
special Postgres function json_array_elements()
is used. Please refer to the official
documentation.
For an example of how this is done, please look at how the TransferProcessMapping
maps a ResourceManifest
, which in
turn contains a List<ResourceDefinition>
using the ResourceManifestMapping
.
2.5 - Logging
A comprehensive and consistent way of logging is a crucial pillar for operability. Therefore, the following rules should be followed:
Logging component
Logs must only be produced using the Monitor
service, which offers 4 different log levels:
severe
Error events that might lead the application to abort or still allow it to continue running.
Used in case of an unexpected interruption of the flow or when something is broken, i.e. an operator has to take action.
e.g. service crashes, database in an illegal state, … even if there is a chance of self-recovery.
warning
Messages about potentially harmful situations.
Used in case of an expected event that does not interrupt the flow but that should be taken into consideration.
info
Informational messages that highlight the progress of the application at coarse-grained level.
Used to describe the normal flow of the application.
debug
Fine-grained informational events that are most useful to debug an application.
Used to describe details of the normal flow that are not interesting for a production environment.
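Assuming the usual severe/warning/info/debug overloads on the Monitor service, typical usage could look like this (the messages and variables are made up):
void reportNegotiationState(Monitor monitor, String negotiationId, Exception cause) {
    monitor.severe("Database is in an illegal state", cause);            // operator has to take action
    monitor.warning("Counterparty unreachable, retrying later");          // expected, does not interrupt the flow
    monitor.info("Contract negotiation " + negotiationId + " finalized"); // normal application flow
    monitor.debug(() -> "Full state dump for " + negotiationId);          // details only relevant for debugging
}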
What should be logged
- every exception with
severe
or warning
- every
Result
object evaluated as failed
:- with
severe
if this is something that interrupts the flow and someone should take care of immediately - with
warning
if this is something that doesn’t interrupt the flow but someone should take care of, because it could give worse results in the future
- every important message that’s not an error with
info
- other informative events like incoming calls at the API layer or state changes with
debug
What should be not logged
- secrets and any other potentially sensitive data, like the payload that is passed through the
data-plane
- an exception that will be thrown in the same block
- not strictly necessary information, like “entering method X”, “leaving block Y”, “returning HTTP 200”
2.6 - Writing tests
1. Adding EDC test fixtures
To add EDC test utilities and test fixtures to downstream projects, simply add the following Gradle dependency:
testImplementation("org.eclipse.edc:junit:<version>")
2. Controlling test verbosity
To run tests verbosely (displaying test events and output and error streams to the console), use the following system
property:
./gradlew test -PverboseTest
3. Definition and distinction
- unit tests test one single class by stubbing or mocking dependencies.
- integration tests test one particular aspect of the software, which may involve external
systems.
- system tests are end-to-end tests that rely on the entire system to be present.
4. Integration Tests
4.1 TL;DR
Use integration tests only when necessary, keep them concise, implement them in a defensive manner using timeouts and
randomized names, use test containers for external systems wherever possible. This increases portability.
4.2 When to use them
Generally speaking developers should favor writing unit tests over integration tests, because they are simpler, more
stable and typically run faster. Sometimes that is not (easily) possible, especially when an implementation relies on an
external system that is not easily mocked or stubbed such as databases.
Therefore, in many cases writing unit tests is more involved than writing an integration test. For example, say you want
to test your implementation of a Postgres-backed database. You would have to mock the behaviour of the PostgreSQL
database, which - while certainly possible - can get complicated pretty quickly. You might still choose to do that for
simpler scenarios, but eventually you will probably want to write an integration test that uses an actual PostgreSQL
instance.
4.3 Coding Guidelines
The EDC codebase has a few annotations and these annotations focus on two important aspects:
- Exclude integration tests by default from the JUnit test runner, as these tests rely on external systems which might not
be available during a local execution.
- Categorize integration tests with the help of JUnit
Tags.
Following are some available annotations:
- @IntegrationTest: marks an integration test with the IntegrationTest JUnit tag. This is the default tag and can be
used if you do not want to specify any other tags on your test to do further categorization.
The annotations below are used to categorize integration tests based on the runtime components that must be available for
the test to run. All of these annotations are composite annotations and contain the @IntegrationTest annotation as well.
- @ApiTest: marks an integration test that focuses on testing a REST API. To do that, a runtime with the controller class
and all its collaborators is spun up.
- @EndToEndTest: marks an integration test with the EndToEndTest JUnit tag. This should be used when the entire system is
involved in a test.
- @ComponentTest: marks an integration test with the ComponentTest JUnit tag. This should be used when the test does not
use any external systems, but uses actual collaborator objects instead of mocks.
- There are other, more specific tags for cloud-vendor specific environments, like @AzureStorageIntegrationTest or
@AwsS3IntegrationTest. Some of those environments can be emulated (with test containers), others can’t.
We encourage you to use these available annotations, but if your integration test does not fit one of them
and you want to categorize it based on its technology, feel free to create a new annotation. Make sure it is a
composite annotation that contains @IntegrationTest
. If you do not wish to categorize based on
technology, you can use the already available @IntegrationTest
annotation.
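A sketch of such a composite annotation (the technology name is made up; verify that @IntegrationTest can be applied to annotation types in your EDC version):
@IntegrationTest
@Tag("FooDbIntegrationTest") // hypothetical technology-specific JUnit tag
@Retention(RetentionPolicy.RUNTIME)
@Target({ ElementType.TYPE, ElementType.METHOD })
public @interface FooDbIntegrationTest {
}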
- By default, the JUnit test runner ignores all integration tests because in the root build.gradle.kts
file we have excluded
all tests marked with the IntegrationTest JUnit tag.
- If your integration test does not rely on an external system, then you may not want to use the above-mentioned annotations.
All integration tests should specify an annotation to categorize them and use the "...IntegrationTest"
postfix to distinguish
them clearly from unit tests. They should reside in the same package as unit tests because all tests should maintain
package consistency with their test subject.
Any credentials, secrets, passwords, etc. that are required by the integration tests should be passed in using
environment variables. A good way to access them is ConfigurationFunctions.propOrEnv()
because then the credentials
can also be supplied via system properties.
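For instance (assuming ConfigurationFunctions.propOrEnv(key, default) looks up a system property first and then falls back to an environment variable of the same name):
// falls back to "test-password" if neither a system property nor an environment variable is set
var dbPassword = ConfigurationFunctions.propOrEnv("POSTGRES_PASSWORD", "test-password");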
There is no one-size-fits-all guideline whether to perform setup tasks in the @BeforeAll
or @BeforeEach
, it will
depend on the concrete system you’re using. As a general rule of thumb long-running one-time setup should be done in the
@BeforeAll
so as not to extend the run-time of the test unnecessarily. In contrast, in most cases it is not
advisable to deploy/provision the external system itself in either one of those methods. In other words, manually
provisioning a cloud resource should generally be avoided, because it will introduce code that has nothing to do with
the test and may cause security problems.
If possible all external system should be deployed using Testcontainers. Alternatively,
in special situations there might be a dedicated test instance running continuously, e.g. a cloud-based database test
instance. In the latter case please be careful to avoid conflicts (e.g. database names) when multiple test runners
access that system simultaneously and to properly clean up any residue before and after the test.
4.4 Running integration tests locally
As mentioned above, the JUnit runner won’t pick up integration tests unless a tag is provided. For example, to run PostgreSQL
integration tests, pass the includeTags
parameter with the tag value to the gradlew
command:
./gradlew test -p path/to/module -DincludeTags="PostgresqlIntegrationTest"
Running all tests (unit & integration) can be achieved by passing the runAllTests=true
parameter to the gradlew
command:
./gradlew test -DrunAllTests="true"
4.5 Running them in the CI pipeline
All integration tests should go into the verify.yaml
workflow, every “technology”
should
have its own job, and technology-specific tests can be targeted using JUnit tags with the -DincludeTags
property as
described above in this document.
A GitHub composite action was created to
encapsulate the tasks of setting up Java/Gradle and running tests.
For example let’s assume we’ve implemented a PostgreSQL-based store for SomeObject
, and let’s assume that the
verify.yaml
already contains a “Postgres” job, then every module that contains a test class annotated with
@PostgresqlIntegrationTest
will be loaded and executed here. This tagging will be used by the CI pipeline step to
target and execute the integration tests related to Postgres.
Let’s also make sure that the code is checked out first and that integration tests only run on the upstream repo.
jobs:
Postgres-Integration-Tests:
# run only on upstream repo
if: github.repository_owner == 'eclipse-edc'
runs-on: ubuntu-latest
# taken from https://docs.github.com/en/actions/using-containerized-services/creating-postgresql-service-containers
services:
# Label used to access the service container
postgres:
# Docker Hub image
image: postgres
# Provide the password for postgres
env:
POSTGRES_PASSWORD: ${{ secrets.POSTGRES_PASSWORD }}
steps:
- uses: ./.github/actions/setup-build
- name: Postgres Tests
uses: ./.github/actions/run-tests
with:
command: ./gradlew test -DincludeTags="PostgresIntegrationTest"
[ ... ]
4.6 Do’s and Don’ts
DO:
- aim to cover as many test cases with unit tests as possible
- use integration tests sparingly and only when unit tests are not practical
- deploy the external system test container if possible, or
- use a dedicated always-on test instance (esp. cloud resources)
- take into account that external systems might experience transient failures or have degraded performance, so test
methods should have a timeout so as not to block the runner indefinitely.
- use randomized strings for things like database/table/bucket/container names, etc., especially when the external
system does not get destroyed after the test.
DO NOT:
- try to cover everything with integration tests. It’s typically a code smell if there are no corresponding unit tests
for an integration test.
- slip into a habit of testing the external system rather than your usage of it
- store secrets directly in the code. GitHub will warn about that.
- perform complex external system setup in
@BeforeEach
or @BeforeAll
- add production code that is only ever used from tests. A typical smell are
protected
or package-private
methods.
5. Running an EDC instance from a JUnit test (End2End tests)
In some circumstances it is necessary to launch an EDC runtime and execute tests against it. This could be a
fully-fledged connector runtime, replete with persistence and all bells and whistles, or this could be a partial runtime
that contains lots of mocks and stubs. One prominent example of this is API tests. At some point, you’ll want to run
REST requests using a HTTP client against the actual EDC runtime, using JSON-LD expansion, transformation etc. and
real database infrastructure.
EDC provides a nifty way to launch any runtime from within the JUnit process, which makes it easy to configure and debug
not only the actual test code, but also the system-under-test, i.e. the runtime.
To do that, two parts are needed:
- a runner: a module that contains the test logic
- one or several runtimes: one or more modules that define a standalone runtime (e.g. a runnable EDC definition)
The runner can load an EDC runtime by using the @RegisterExtension
annotation:
@EndToEndTest
class YourEndToEndTest {

    @RegisterExtension
    private final RuntimeExtension controlPlane = new RuntimePerClassExtension(new EmbeddedRuntime(
            "control-plane", // the runtime's name, used for log output
            Map.of( // the runtime's configuration
                    "web.http.control.port", String.valueOf(getFreePort()),
                    "web.http.control.path", "/control"
                    //...
            ),
            // all modules to be put on the runtime classpath
            ":core:common:connector-core",
            ":core:control-plane:control-plane-core",
            ":core:data-plane-selector:data-plane-selector-core",
            ":extensions:control-plane:transfer:transfer-data-plane-signaling",
            ":extensions:common:iam:iam-mock",
            ":extensions:common:http",
            ":extensions:common:api:control-api-configuration"
            //...
    ));
}
This example will launch a runtime called "control-plane", add the listed Gradle modules to its classpath and pass the configuration map to it. It does all of that from within the JUnit process, so the "control-plane" runtime can be debugged from the IDE.
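To sketch what a test against this runtime could look like, the hypothetical example below fires a plain HTTP request with the JDK's HttpClient. It assumes the free port from the configuration above is kept in a CONTROL_PORT constant and that some resource exists under the /control path; the endpoint and the assertion are illustrative only:
import static org.assertj.core.api.Assertions.assertThat;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import org.junit.jupiter.api.Test;

@EndToEndTest
class YourEndToEndTest {
    // use the same constant in the runtime's configuration map shown above
    private static final int CONTROL_PORT = getFreePort();

    // ... @RegisterExtension RuntimePerClassExtension as shown above ...

    @Test
    void shouldAnswerOnControlApi() throws Exception {
        var request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:" + CONTROL_PORT + "/control/some/resource")) // illustrative path
                .GET()
                .build();
        var response = HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString());
        assertThat(response.statusCode()).isLessThan(500); // illustrative assertion
    }
}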
The example above will initialize and start the runtime once, before all tests run (hence the name
“RuntimePerClassExtension”). Alternatively, there is the RuntimePerMethodExtension
which will re-initialize and
start the runtime before every test method.
In most use cases, RuntimePerClassExtension
is preferable, because it avoids having to start the runtime on every
test. There are cases where the RuntimePerMethodExtension is useful, for example when the runtime is mutated during tests and cleaning up data stores is not practical. Be aware of the added test execution time penalty though.
To make sure that the runtime extensions are correctly built and available, they need to be declared as a testCompileOnly dependency of the runner module. This ensures proper dependency isolation between runtimes (which is very important when the test needs to run two different components, such as a control plane and a data plane).
Technically, the number of runtimes launched that way is not limited (other than by host system resources), so
theoretically, an entire dataspace with N participants could be launched that way…
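For instance, launching a control plane and a data plane side by side in the same runner could look roughly like the sketch below; the runtime names, configuration keys and module lists are shortened and purely illustrative:
@EndToEndTest
class ControlAndDataPlaneTest {

    @RegisterExtension
    private final RuntimeExtension controlPlane = new RuntimePerClassExtension(new EmbeddedRuntime(
            "control-plane",
            Map.of("web.http.control.port", String.valueOf(getFreePort())),
            ":core:control-plane:control-plane-core" /* , further modules ... */));

    @RegisterExtension
    private final RuntimeExtension dataPlane = new RuntimePerClassExtension(new EmbeddedRuntime(
            "data-plane",
            Map.of("web.http.port", String.valueOf(getFreePort())),
            ":core:data-plane:data-plane-core" /* , further modules ... */));
}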
2.7 - Control Plane
2.7.1 - Entities
1. Assets
Assets are containers for metadata; they do not contain the actual bits and bytes. Say you want to offer a file to the dataspace that is physically located in an S3 bucket: the corresponding Asset would contain metadata about it, such as the content type, file size, etc. In addition, it could contain private properties, for when you want to store properties on the asset which you do not want to expose to the dataspace. Private properties are ignored when serializing assets out over DSP.
A very simplistic Asset
could look like this:
{
"@context": {
"edc": "https://w3id.org/edc/v0.0.1/ns/"
},
"@id": "79d9c360-476b-47e8-8925-0ffbeba5aec2",
"properties": {
"somePublicProp": "a very interesting value"
},
"privateProperties": {
"secretKey": "this is secret information, never tell it to the dataspace!"
},
"dataAddress": {
"type": "HttpData",
"baseUrl": "http://localhost:8080/test"
}
}
The Asset also contains a DataAddress object, which can be understood as a “pointer into the physical world”. It contains information about where the asset is physically located. This could be an HTTP URL, or a complex object. In the S3 example, that DataAddress might contain the bucket name, region and potentially other information. Notice that the schema of the DataAddress depends on where the data is physically located; for instance, an HttpDataAddress has different properties from an S3 DataAddress. More precisely, Assets and DataAddresses are schemaless, so there is no schema enforcement beyond a very basic validation. Read this document to learn about plugging in custom validators.
A few things must be noted. First, while there isn’t a strict requirement for the @id to be a UUID, we highly recommend using the JDK UUID implementation.
Second, never store access credentials such as passwords, tokens, keys etc. in the dataAddress or even the privateProperties object. While the latter does not get serialized over DSP, both properties are persisted in the database. Always use an HSM to store the credential, and hold a reference to the secret in the DataAddress. Check out the best practices for details.
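As a hedged illustration of the “reference, not credential” advice: the sketch below builds a DataAddress that only carries an alias pointing into the vault/HSM. The alias, the keyName property name and the idea that a data plane extension resolves it at transfer time are assumptions for this example, not a prescribed contract:
import org.eclipse.edc.spi.types.domain.DataAddress;

// the token itself lives in the secure store under the alias "backend-api-token-alias"
var dataAddress = DataAddress.Builder.newInstance()
        .type("HttpData")
        .property("baseUrl", "http://localhost:8080/test")
        // reference to the secret, resolved against the vault at transfer time; never the secret itself
        .property("keyName", "backend-api-token-alias")
        .build();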
By design, Assets are extensible, so users can store any metadata they want in them. For example, the properties object could contain a simple string value, or it could be a complex object following some custom schema. Be aware that, unless specified otherwise, all properties are put under the edc namespace by default. There are some “well-known” properties in the edc namespace: id, description, version, name, contenttype.
Here is an example of an Asset with a custom property that follows a custom namespace:
{
"@context": {
"edc": "https://w3id.org/edc/v0.0.1/ns/",
"sw": "http://w3id.org/starwars/v0.0.1/ns/"
},
"@id": "79d9c360-476b-47e8-8925-0ffbeba5aec2",
"properties": {
"faction": "Galactic Imperium",
"person": {
"name": "Darth Vader",
"webpage": "https://death.star"
}
}
}
(assuming the sw
context contains appropriate definitions for faction
and person
).
Remember that upon ingress through the Management API, all JSON-LD objects get
expanded, and the control plane only operates on expanded
JSON-LD objects. The Asset above would look like this:
[
{
"@id": "79d9c360-476b-47e8-8925-0ffbeba5aec2",
"https://w3id.org/edc/v0.0.1/ns/properties": [
{
"https://w3id.org/starwars/v0.0.1/ns/faction": [
{
"@value": "Galactic Imperium"
}
],
"http://w3id.org/starwars/v0.0.1/ns/person": [
{
"http://w3id.org/starwars/v0.0.1/ns/name": [
{
"@value": "Darth Vader"
}
],
"http://w3id.org/starwars/v0.0.1/ns/webpage": [
{
"@value": "https://death.star"
}
]
}
]
}
]
}
]
This is important to keep in mind, because it means that Assets get persisted in their expanded form, and operations
performed on them (e.g. querying) in the control plane must also be done on the expanded form. For example, a query
targeting the sw:faction
field from the example above would look like this:
{
"https://w3id.org/edc/v0.0.1/ns/filterExpression": [
{
"https://w3id.org/edc/v0.0.1/ns/operandLeft": [
{
"@value": "https://w3id.org/starwars/v0.0.1/ns/faction"
}
],
"https://w3id.org/edc/v0.0.1/ns/operator": [
{
"@value": "="
}
],
"https://w3id.org/edc/v0.0.1/ns/operandRight": [
{
"@value": "Galactic Imperium"
}
]
}
]
}
2. Policies
Policies are the EDC way of expressing that certain conditions may, must or must not be satisfied in certain situations.
Policies are used to express what requirements a subject (e.g. a communication partner) must fulfill in
order to be able to perform an action. For example, that the communication partner must be headquartered in the European
Union.
Policies are ODRL serialized as JSON-LD. Thus, our previous example would look like
this:
{
"@context": {
"edc": "https://w3id.org/edc/v0.0.1/ns/"
},
"@type": "PolicyDefinition",
"policy": {
"@context": "http://www.w3.org/ns/odrl.jsonld",
"@type": "Set",
"duty": [
{
"target": "http://example.com/asset:12345",
"action": "use",
"constraint": {
"leftOperand": "headquarter_location",
"operator": "eq",
"rightOperand": "EU"
}
}
]
}
}
The duty object expresses the semantics of the constraint. It is a specialization of a rule, which expresses either a MUST (duty), MAY (permission) or MUST NOT (prohibition) relation. The action expresses the type of action for which the rule is intended. Acceptable values for action are defined here, but in EDC you’ll exclusively encounter "use".
The constraint object expresses the logical relationship between a key (leftOperand), a value (rightOperand) and an operator. Multiple constraints can be linked with logical operators, see advanced policy concepts. The leftOperand and rightOperand are completely arbitrary, only the operator is limited to the following possible values: eq, neq, gt, geq, lt, leq, in, hasPart, isA, isAllOf, isAnyOf, isNoneOf.
Please note that not all operators are always allowed; for example headquarter_location lt EU is nonsensical and should result in an evaluation error, whereas headquarter_location isAnyOf [EU, US] would be valid. Whether an operator is valid is solely defined by the policy evaluation function; supplying an invalid operator should raise an exception.
2.1 Policy vs PolicyDefinition
In EDC we have two general use cases under which we handle and persist policies:
- for use in contract definitions
- during contract negotiations
In the first case policies are ODRL objects and thus must have a uid
property. They are typically used in contract
definitions.
Side note: the ODRL context available at http://www.w3.org/ns/odrl.jsonld
simply defines uid
as an alias to the
@id
property. This means, whether we use uid
or @id
doesn’t matter, both expand to the same property @id
.
However, in the second case we are dealing with DCAT objects, which have no concept of Offers, Policies or Assets. Rather, their vocabulary includes Datasets, DataServices etc. So when deserializing those DCAT objects there is no way to reconstruct Policy#uid, because the JSON-LD structure does not contain it.
To account for this, we defined the Policy class as a value object that contains rules and other properties. In addition, we have a PolicyDefinition class, which contains a Policy and an id property, which makes it an entity.
2.2 Policy scopes and bindings
A policy scope is the “situation”, in which a policy is evaluated. For example, a policy may need to be evaluated when a
contract negotiation is attempted. To do that, EDC defines certain points in the code called “scopes” to which policies
are bound. These policy scopes (sometimes called policy evaluation points) are static, injecting/adding additional
scopes is not possible. Currently, the following scopes are defined:
- contract.negotiation: evaluated upon initial contract offer. Ensures that the consumer fulfills the contract policy.
- transfer.process: evaluated before starting a transfer process to ensure that the policy of the contract agreement is fulfilled. One example would be contract expiry.
- catalog: evaluated when the catalog for a particular participant agent is generated. Decides whether the participant has the asset in their catalog.
- request.contract.negotiation: evaluated on every request during contract negotiation between two control plane runtimes. Not relevant for end users.
- request.transfer.process: evaluated on every request during transfer establishment between two control plane runtimes. Not relevant for end users.
- request.catalog: evaluated upon an incoming catalog request. Not relevant for end users.
- provision.manifest.verify: evaluated during the precondition check for resource provisioning. Only relevant in advanced use cases.
A policy scope is a string that is used for two purposes:
- binding a scope to a rule type: implements filtering based on the action or the leftOperand of a policy. This determines for every rule inside a policy whether it should be evaluated in the given scope. In other words, it determines if a rule should be evaluated.
- binding a policy evaluation function to a scope: if a policy is determined to be “in scope” by the previous step, the policy engine invokes the evaluation function that was bound to the scope to evaluate if the policy is fulfilled. In other words, it determines (implements) how a rule should be evaluated.
2.3 Policy evaluation functions
If policies are a formalized declaration of requirements, policy evaluation functions are the means to evaluate those
requirements. They are pieces of Java code executed at runtime. A policy on its own only expresses the requirement,
but in order to enforce it, we need to run policy evaluation functions.
Upon evaluation, they receive the operator, the rightOperand
(or rightValue), the rule, and the PolicyContext
. A
simple evaluation function that asserts the headquarters policy mentioned in the example above could look similar to
this:
import org.eclipse.edc.policy.engine.spi.AtomicConstraintFunction;

public class HeadquarterFunction implements AtomicConstraintFunction<Duty> {
    @Override
    public boolean evaluate(Operator operator, Object rightValue, Duty rule, PolicyContext context) {
        if (!(rightValue instanceof String locationConstraint)) {
            context.reportProblem("Right-value expected to be String but was " + rightValue.getClass());
            return false;
        }
        if (operator != Operator.EQ) {
            context.reportProblem("Invalid operator, only EQ is allowed!");
            return false;
        }
        var participant = context.getContextData(ParticipantAgent.class);
        var participantLocation = extractLocationClaim(participant); // EU, US, etc.
        return participantLocation != null && locationConstraint.equalsIgnoreCase(participantLocation);
    }
}
This particular evaluation function only accepts eq
as operator, and only accepts scalars as rightValue
, no list
types.
The ParticipantAgent
is a representation of the communication counterparty that contains a set of verified claims. In
the example, extractLocationClaim()
would look for a claim that contains the location of the agent and return it as
string. This can get quite complex, for example, the claim could contain geo-coordinates, and the evaluation function
would have to perform inverse address geocoding.
Other policies may require other context data than the participant’s location, for example an exact timestamp, or may
even need a lookup in some third party system such as a customer database.
The same policy can be evaluated by different evaluation functions, if they are meaningful in different contexts
(scopes).
NB: to write evaluation code for policies, implement the org.eclipse.edc.policy.engine.spi.AtomicConstraintFunction
interface. There is a second interface with the same name, but that is only used for internal use in the
PolicyEvaluationEngine
.
2.4 Example: binding an evaluation function
As we’ve learned, for a policy to be evaluated at certain points, we need to create
a policy (duh!), bind the policy to a scope, create a policy evaluation function,
and we need to bind the function to the same scope. The standard way of registering and binding policies is done in an
extension. For example, here we configure our HeadquarterFunction
so that it evaluates our
headquarter_location
function whenever someone tries to negotiate a contract:
public class HeadquarterPolicyExtension implements ServiceExtension {

    @Inject
    private RuleBindingRegistry ruleBindingRegistry;

    @Inject
    private PolicyEngine policyEngine;

    private static final String HEADQUARTER_POLICY_KEY = "headquarter_location";

    @Override
    public void initialize(ServiceExtensionContext context) {
        // bind the policy to the scope
        ruleBindingRegistry.bind(HEADQUARTER_POLICY_KEY, NEGOTIATION_SCOPE);
        // create the function object
        var function = new HeadquarterFunction();
        // bind the function to the scope
        policyEngine.registerFunction(NEGOTIATION_SCOPE, Duty.class, HEADQUARTER_POLICY_KEY, function);
    }
}
The code does two things: it binds the function key (= the leftOperand) to the negotiation scope, which means that the
policy is “relevant” in that scope. Further, it binds the evaluation function to the same scope, which means the policy
engine “finds” the function and executes it in the negotiation scope.
This example assumes that a policy object with a leftOperand of headquarter_location exists in the system. For details on how to create policies, please check out the OpenAPI documentation.
2.5 Advanced policy concepts
2.5.1 Pre- and Post-Evaluators
Pre- and post-validators are functions that are executed before and after the actual policy evaluation, respectively.
They can be used to perform preliminary evaluation of a policy or to enrich the PolicyContext
. For example, EDC uses
pre-validators to inject DCP scope strings using dedicated ScopeExtractor
objects.
2.5.2 Dynamic functions
These are very similar to AtomicConstraintFunctions
, with one significant difference: they also receive the
left-operand as function parameter. This is useful when the function cannot be bound to a left-operand of a policy,
because the left-operand is not known in advance.
Let’s revisit our headquarter policy from earlier and change it a little:
{
"@context": {
"edc": "https://w3id.org/edc/v0.0.1/ns/"
},
"@type": "PolicyDefinition",
"policy": {
"@context": "http://www.w3.org/ns/odrl.jsonld",
"@type": "Set",
"duty": [
{
"target": "http://example.com/asset:12345",
"action": "use",
"constraint": {
"or": [
{
"leftOperand": "headquarter.location",
"operator": "eq",
"rightOperand": "EU"
},
{
"leftOperand": "headerquarter.numEmployees",
"operator": "gt",
"rightOperand": 5000
}
]
}
}
]
}
}
This means two things. One, our policy has changed its semantics: now we require the headquarter to be in the EU, or to have more than 5000 employees. Two, the constraints no longer share a single fixed leftOperand, which is exactly the situation dynamic functions are made for: one function can receive the leftOperand at evaluation time and handle all headquarter.* constraints.
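A dynamic function for this policy could look roughly like the sketch below. The DynamicAtomicConstraintFunction interface with canHandle/evaluate methods, the exact signatures and the claim-extraction helpers are stated from memory and should be treated as assumptions; check the policy engine SPI of your EDC version before relying on them:
public class HeadquarterDynamicFunction implements DynamicAtomicConstraintFunction<Duty> {

    @Override
    public boolean canHandle(Object leftValue) {
        // one function is responsible for every "headquarter.*" left-operand
        return leftValue instanceof String s && s.startsWith("headquarter.");
    }

    @Override
    public boolean evaluate(Object leftValue, Operator operator, Object rightValue, Duty rule, PolicyContext context) {
        var participant = context.getContextData(ParticipantAgent.class);
        return switch ((String) leftValue) {
            case "headquarter.location" -> operator == Operator.EQ
                    && String.valueOf(rightValue).equalsIgnoreCase(extractLocationClaim(participant));
            case "headquarter.numEmployees" -> operator == Operator.GT
                    && extractNumEmployees(participant) > Long.parseLong(String.valueOf(rightValue));
            default -> false;
        };
    }

    // extractLocationClaim(...) and extractNumEmployees(...) are hypothetical helpers that read verified claims
}
Registration would then go through the dynamic-function variant of PolicyEngine#registerFunction (without a left-operand key), so the single function serves all headquarter.* constraints; again, verify the exact method signature against your EDC version.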
2.6 Bundled policy functions
2.6.1 Contract expiration function
3. Contract definitions
Contract definitions are how assets and policies are linked together. They are EDC’s way of expressing which policies are in effect for an asset. So when an asset (or several assets) is offered in the dataspace, a contract definition is used to express under what conditions it is offered. Those conditions are comprised of a contract policy and an access policy. The access policy determines whether a participant will even get the offer, and the contract policy determines whether they can negotiate a contract for it. Those policies are referenced by ID, but foreign-key constraints are not enforced. This means that contract definitions can be created ahead of time.
It is important to note that contract definitions are implementation details (i.e. internal objects), which means
they never leave the realm of the provider, and they are never sent to the consumer via DSP.
- access policy: determines whether a particular consumer is offered an asset when making a catalog request. For
example, we may want to restrict certain assets such that only consumers within a particular geography can see them.
Consumers outside that geography wouldn’t even have them in their catalog.
- contract policy: determines the conditions for initiating a contract negotiation for a particular asset. Note that
this only guarantees the successful initiation of a contract negotiation, it does not automatically guarantee the
successful conclusion of it!
Contract definitions also contain an assetsSelector. That is a query expression that defines all the assets that are included in the definition, like an SQL SELECT statement. With that it is possible to configure the same set of conditions (= access policy and contract policy) for a multitude of assets.
Please note that creating an assetsSelector may require knowledge about the shape of an Asset and can get complex fairly quickly, so be sure to read the chapter about querying.
Here is an example of a contract definition, that defines an access policy and a contract policy for assets id1
, id2
and id3
that must contain the "foo" : "bar"
property.
{
"@context": {
"edc": "https://w3id.org/edc/v0.0.1/ns/"
},
"@type": "https://w3id.org/edc/v0.0.1/ns/ContractDefinition",
"@id": "test-id",
"edc:accessPolicyId": "access-policy-1234",
"edc:contractPolicyId": "contract-policy-5678",
"edc:assetsSelector": [
{
"@type": "https://w3id.org/edc/v0.0.1/ns/Criterion",
"edc:operandLeft": "id",
"edc:operator": "in",
"edc:operandRight": [
"id1",
"id2",
"id3"
]
},
{
"@type": "https://w3id.org/edc/v0.0.1/ns/Criterion",
"edc:operandLeft": "foo",
"edc:operator": "=",
"edc:operandRight": "bar"
}
]
}
The sample expresses that a set of assets identified by their ID be made available under the access policy
access-policy-1234
and contract policy contract-policy-5678
, if they contain a property "foo" : "bar"
.
Note that asset selector expressions are always logically conjoined using an “AND” operation.
4. Contract negotiations
If a connector fulfills the contract policy, it may initiate the negotiation of a contract
for
a particular asset. During that negotiation, both parties can send offers and counter-offers that can contain altered
terms (= policy) as any human would in a negotiation, and the counter-party may accept or reject them.
Contract negotiations have a few key aspects:
- they target one asset
- they take place between a provider and a consumer connector
- they cannot be changed by the user directly
- users can only decline, terminate or cancel them
As a side note, it is also important to know that contract offers are ephemeral objects: they are generated on-the-fly for a particular participant, and they are never persisted in a database and thus cannot be queried through any API.
Contract negotiations are asynchronous in nature. That means after initiating them, they become (potentially
long-running) stateful processes that are advanced by an
internal state machine.
The current state of the negotiation can be queried and altered through the management API.
Here’s a diagram of the state machine applied to contract negotiations:
A contract negotiation can be initiated from the consumer side by sending a ContractRequest
to the connector
management API.
{
"@context": {
"@vocab": "https://w3id.org/edc/v0.0.1/ns/"
},
"@type": "ContractRequest",
"counterPartyAddress": "http://provider-address",
"protocol": "dataspace-protocol-http",
"policy": {
"@context": "http://www.w3.org/ns/odrl.jsonld",
"@type": "odrl:Offer",
"@id": "offer-id",
"assigner": "providerId",
"permission": [],
"prohibition": [],
"obligation": [],
"target": "assetId"
},
"callbackAddresses": [
{
"transactional": false,
"uri": "http://callback/url",
"events": [
"contract.negotiation"
],
"authKey": "auth-key",
"authCodeId": "auth-code-id"
}
]
}
The counterPartyAddress is the address where to send the ContractRequestMessage via the specified protocol (currently dataspace-protocol-http).
The policy should hold the same policy associated with the data offering chosen from the catalog, plus two additional properties:
- assigner: the provider's participantId
- target: the asset (dataset) ID
In addition, the (optional) callbackAddresses
array can be used to get notified about state changes of the
negotiation. Read more on callbacks in the section
about events and callbacks.
Note: if the policy
sent by the consumer differs from the one expressed by the provider, the contract negotiation
will fail and transition to a TERMINATED
state.
5. Contract agreements
Once a contract negotiation is successfully concluded (i.e. it reaches the FINALIZED
state), it “turns into” a
contract agreement. It is always the provider connector that gives the final approval. Contract agreements are
immutable objects that contain the final, agreed-on policy, the ID of the asset that the contract was negotiated for,
the IDs of the negotiation parties and the exact signing date.
Note that in future iterations contracts will be cryptographically signed to further support the need for
immutability and non-repudiation.
Like contract definitions, contract agreements are entities that only exist within the bounds of a connector.
About terminating contracts: once a contract negotiation has reached a terminal state, TERMINATED or FINALIZED, it becomes immutable. This could be compared to not being able to scratch a signature off a physical paper contract. Cancelling or terminating a contract is therefore handled through other channels such as eventing systems. The semantics of cancelling a contract are highly individual to each dataspace and may even have legal side effects, so EDC cannot make an assumption here.
6. Catalog
The catalog contains the “data offerings” of a connector and one or multiple service endpoints to initiate a negotiation
for those offerings.
Every data offering is represented by a Dataset
object which
contains a policy and one or multiple Distribution
objects. A Distribution
should be understood as a variant
or representation of the Dataset
. For instance, if a file is accessible via multiple transmission channels from a
provider (HTTP and FTP), then each of those channels would be represented as a Distribution
. Another example would be
image assets that are available in different file formats (PNG, TIFF, JPEG).
A DataService
object specifies the endpoint where contract
negotiations and transfers are accepted by the provider. In practice, this will be the DSP endpoint of the connector.
The following example shows an HTTP response to a catalog request, that contains one offer that is available via two
channels HttpData-PUSH
and HttpData-PULL
.
catalog example
{
"@id": "567bf428-81d0-442b-bdc8-437ed46592c9",
"@type": "dcat:Catalog",
"dcat:dataset": [
{
"@id": "asset-2",
"@type": "dcat:Dataset",
"odrl:hasPolicy": {
"@id": "c2Vuc2l0aXZlLW9ubHktZGVm:YXNzZXQtMg==:MzhiYzZkNjctMDIyNi00OGJjLWFmNWYtZTQ2ZjAwYTQzOWI2",
"@type": "odrl:Offer",
"odrl:permission": [],
"odrl:prohibition": [],
"odrl:obligation": {
"odrl:action": {
"@id": "use"
},
"odrl:constraint": {
"odrl:leftOperand": {
"@id": "DataAccess.level"
},
"odrl:operator": {
"@id": "odrl:eq"
},
"odrl:rightOperand": "sensitive"
}
}
},
"dcat:distribution": [
{
"@type": "dcat:Distribution",
"dct:format": {
"@id": "HttpData-PULL"
},
"dcat:accessService": {
"@id": "a6c7f3a3-8340-41a7-8154-95c6b5585532",
"@type": "dcat:DataService",
"dcat:endpointDescription": "dspace:connector",
"dcat:endpointUrl": "http://localhost:8192/api/dsp",
"dct:terms": "dspace:connector",
"dct:endpointUrl": "http://localhost:8192/api/dsp"
}
},
{
"@type": "dcat:Distribution",
"dct:format": {
"@id": "HttpData-PUSH"
},
"dcat:accessService": {
"@id": "a6c7f3a3-8340-41a7-8154-95c6b5585532",
"@type": "dcat:DataService",
"dcat:endpointDescription": "dspace:connector",
"dcat:endpointUrl": "http://localhost:8192/api/dsp",
"dct:terms": "dspace:connector",
"dct:endpointUrl": "http://localhost:8192/api/dsp"
}
}
],
"description": "This asset requires Membership to view and SensitiveData credential to negotiate.",
"id": "asset-2"
}
],
"dcat:distribution": [],
"dcat:service": {
"@id": "a6c7f3a3-8340-41a7-8154-95c6b5585532",
"@type": "dcat:DataService",
"dcat:endpointDescription": "dspace:connector",
"dcat:endpointUrl": "http://localhost:8192/api/dsp",
"dct:terms": "dspace:connector",
"dct:endpointUrl": "http://localhost:8192/api/dsp"
},
"dspace:participantId": "did:web:localhost%3A7093",
"participantId": "did:web:localhost%3A7093",
"@context": {}
}
Catalogs are ephemeral objects; they are not persisted or cached on the provider side. Every time a consumer participant makes a catalog request through DSP, the connector runtime has to evaluate the incoming request and build the catalog specifically for that participant. The reason for this is that between two subsequent requests from the same participant, the contract definitions or the claims of the participant could have changed.
The relevant component in EDC is the DatasetResolver, which resolves all contract definitions that are relevant to a participant, filtering out those where the participant does not satisfy the access policy, and collects all the assets therein.
In order to determine how an asset can be distributed, the resolver requires knowledge about the data planes that are
available. It uses the Dataplane Signaling Protocol to query them
and construct the list of
Distributions
for an asset.
For details about the FederatedCatalog, please refer to
its documentation.
7 Transfer processes
A TransferProcess
is a record of the data sharing procedure between a consumer and a provider. As they traverse
through the system, they transition through several
states (TransferProcessStates
).
Once a contract is negotiated and an agreement is reached, the
consumer connector may send a transfer initiate request to start the transfer. In the course of doing that, both parties
may provision additional resources, for example deploying a
temporary object store, where the provider should put the data. Similarly, the provider may need to take some
preparatory steps, e.g. anonymizing the data before sending it out.
This is sometimes referred to as the provisioning phase. If no additional provisioning is needed, the transfer process
simply transitions through the state with a NOOP.
Once that is done, the transfer begins in earnest. Data is transmitted according to the dataDestination
, that was
passed in the initiate-request.
Once the transmission has completed, the transfer process will transition to the COMPLETED
state, or - if an error
occurred - to the TERMINATED
state.
The Management API provides several endpoints to manipulate data transfers.
Here is a diagram of the state machine applied to transfer processes on consumer side:
Here is a diagram of the state machine applied to transfer processes on provider side:
A transfer process can be initiated from the consumer side by sending a TransferRequest
to the connector Management
API:
{
"@context": {
"@vocab": "https://w3id.org/edc/v0.0.1/ns/"
},
"@type": "https://w3id.org/edc/v0.0.1/ns/TransferRequest",
"protocol": "dataspace-protocol-http",
"counterPartyAddress": "http://provider-address",
"contractId": "contract-id",
"transferType": "transferType",
"dataDestination": {
"type": "data-destination-type"
},
"privateProperties": {
"private-key": "private-value"
},
"callbackAddresses": [
{
"transactional": false,
"uri": "http://callback/url",
"events": [
"contract.negotiation",
"transfer.process"
],
"authKey": "auth-key",
"authCodeId": "auth-code-id"
}
]
}
where the most relevant properties (transferType, dataDestination, callbackAddresses) are explained in the sections below.
7.1 Transfer and data flow types
The transfer type defines the channel (Distribution) for the data transfer; whether it can be fulfilled depends on the capabilities of the data plane. The transferTypes available for a data offering can be found in the dct:format of each Distribution when inspecting the catalog response.
Each transfer type also characterizes the type of the flow, which can be either pull or push, and its data can be either finite or non-finite.
7.1.1 Consumer Pull
A pull transfer is when the consumer receives information (in the form of a DataAddress) on how to retrieve data from the provider. It is then up to the consumer to use this information to pull the data.
Provider and consumer agree to a contract (not displayed in the diagram)
- The consumer initiates the transfer process by sending a TransferRequestMessage.
- The provider Control Plane retrieves the DataAddress of the actual data source and creates a DataFlowStartMessage.
- The provider Control Plane asks the selector which Data Plane instance can be used for this data transfer.
- The selector returns an eligible Data Plane instance (if any).
- The provider Control Plane sends the DataFlowStartMessage to the selected Data Plane instance through the data plane signaling protocol.
- The provider DataPlaneManager validates the incoming request and delegates to the DataPlaneAuthorizationService the generation of a DataAddress, containing the information on location and authorization for fetching the data.
- The provider Data Plane acknowledges the provider Control Plane and attaches the generated DataAddress.
- The provider Control Plane notifies the consumer about the start of the transfer, attaching the DataAddress in the TransferStartMessage.
- The consumer Control Plane receives the DataAddress and dispatches it according to the configured runtime. Consumers can either decide to receive the DataAddress via the eventing system callbacks using the transfer.process.started type, or use the EDRs extensions to automatically store it on the consumer control plane side.
- With the information in the DataAddress, such as the endpointUrl and the Authorization, data can be fetched.
- The provider Data Plane validates and authenticates the incoming request and retrieves the source DataAddress.
- The provider Data Plane proxies the validated request to the configured backend in the source DataAddress.
7.1.2 Provider Push
A push transfer is when the provider Data Plane initiates sending data to the destination specified by the consumer.
Provider and consumer agree to a contract (not displayed in the diagram)
- The consumer initiates the transfer process, i.e. sends a TransferRequestMessage with a destination DataAddress.
- The provider Control Plane retrieves the DataAddress of the actual data source and creates a DataFlowStartMessage with both source and destination DataAddress.
- The provider Control Plane asks the selector which Data Plane instance can be used for this data transfer.
- The selector returns an eligible Data Plane instance (if any).
- The provider Control Plane sends the DataFlowStartMessage to the selected Data Plane instance through the data plane signaling protocol.
- The provider Data Plane validates the incoming request.
- If the request is valid, the provider Data Plane returns an acknowledgement.
- The DataPlaneManager of the provider Data Plane processes the request: it creates a DataSource/DataSink pair based on the source/destination data addresses.
- The provider Data Plane fetches data from the actual data source (see DataSource).
- The provider Data Plane pushes data to the consumer services (see DataSink).
7.1.3 Finite and Non-Finite Data
The characterization of the data applies to both push and pull transfers. Finite data transfers cause the transfer process to transition to the state COMPLETED once the transmission has finished, for example a transfer of a single file that is hosted and transferred into a cloud storage system.
Non-finite data means that once the transfer process request has been accepted by the provider, the transfer process stays in the STARTED state until it gets terminated by the consumer or the provider. Examples of non-finite data are streams or API endpoints.
On the provider side, transfer processes can also be terminated by the policy monitor, which periodically watches over the ongoing transfers and checks if the associated contract agreement still fulfills the contract policy.
7.2 About Data Destinations
A data destination is a description of where the consumer expects to find the data after the transfer completes. In a "
provider-push" scenario this could be an object storage container, a directory on a file system, etc. In a
“consumer-pull” scenario this would be a placeholder, that does not contain any information about the destination, as
the provider “decides” which endpoint he makes the data available on.
A data address is a schemaless object, and the provider and the consumer need to have a common understanding of the
required fields. For example, if the provider is supposed to put the data into a file share, the DataAddress
object
representing the data destination will likely contain the host URL, a path and possibly a file name. So both connectors
need to be “aware” of that.
The actual data transfer is handled by a data plane through extensions (called “sources” and “sinks”). Thus, the way to establish that “understanding” is to make sure that both parties have matching sources and sinks. That means, if a consumer asks to put the data in a file share, the provider must have the appropriate data plane extensions to be able to perform that transfer.
If the provider connector does not have the appropriate extensions loaded at runtime, the transfer process will fail.
7.3 Transfer process callbacks
In order to get timely updates about status changes of a transfer process, we could simply poll the management API by firing a GET /v*/transferprocesses/{tp-id}/state request every so often. That would not only put unnecessary load on the connector, you may also run into rate-limiting situations if the connector is behind a load balancer of some sort. Thus, we recommend using event callbacks.
Callbacks must be specified when requesting to initiate the transfer:
{
// ...
"callbackAddresses": [
{
"transactional": false,
"uri": "http://callback/url",
"events": [
"transfer.process"
],
"authKey": "auth-key",
"authCodeId": "auth-code-id"
}
]
//...
}
Currently, we support the following events:
- transfer.process.deprovisioned
- transfer.process.completed
- transfer.process.deprovisioningRequested
- transfer.process.initiated
- transfer.process.provisioned
- transfer.process.provisioning
- transfer.process.requested
- transfer.process.started
- transfer.process.terminated
The connector’s event dispatcher will invoke the webhook specified in the uri field, passing the event payload as a JSON object.
More info about events and callbacks can be found here.
8 Endpoint Data References
9 Querying with QuerySpec
and Criterion
Most of the entities can be queried with the QuerySpec object, which is a generic way of expressing limit, offset, sorting and filters when querying a collection of objects managed by the EDC stores.
Here’s an example of how a QuerySpec object might look when querying for Assets via the management API:
{
"@context": {
"edc": "https://w3id.org/edc/v0.0.1/ns/"
},
"@type": "QuerySpec",
"limit": 1,
"offset": 1,
"sortField": "createdAt",
"sortOrder": "DESC",
"filterExpression": [
{
"operandLeft": "https://w3id.org/edc/v0.0.1/ns/description",
"operator": "=",
"operandRight": "This asset"
}
]
}
which filters by the description custom property being equal to This asset. The query also paginates the result with limit and offset set to 1. Additionally, the result is sorted by the createdAt property in descending order (the default is ASC).
Note: Since custom properties are persisted in their expanded form, we have to use
the expanded form also when querying.
The filterExpression property is a list of Criterion, each of which expresses a single filtering condition based on:
- operandLeft: the property to filter on
- operator: the operator to apply, e.g. =
- operandRight: the value of the filtering
The supported operators are:
- Equal: =
- Not equal: !=
- In: in
- Like: like
- Ilike: ilike (same as like but case-insensitive)
- Contains: contains
Note: multiple filtering expressions are always logically conjoined using an “AND” operation.
The properties that can be expressed in the operandLeft of a Criterion depend on the shape of the entity that we want to query.
Note: nested properties are also supported using the dot notation.
QuerySpec
can also be used when doing the catalog request using the querySpec
property in the catalog request
payload for filtering the datasets:
{
"@context": {
"@vocab": "https://w3id.org/edc/v0.0.1/ns/"
},
"counterPartyAddress": "http://provider/api/dsp",
"protocol": "dataspace-protocol-http",
"counterPartyId": "providerId",
"querySpec": {
"filterExpression": [
{
"operandLeft": "https://w3id.org/edc/v0.0.1/ns/description",
"operator": "=",
"operandRight": "This asset"
}
]
}
}
Entities are backed by stores for doing CRUD operations. For each entity there is an associated store interface (SPI). Most of the store SPIs have a query-like method which takes a QuerySpec type as input and returns the matched entities in a collection. Individual implementations are then responsible for translating the QuerySpec into a proper fetching strategy.
How the translation and mapping works is explained in each implementation. Currently EDC supports out of the box:
- In-memory stores (default implementation).
- SQL stores provided as extensions for each store, mostly tailored for and tested with PostgreSQL.
To guarantee the highest compatibility between store implementations, a base test suite is provided for each store, which technology implementors need to fulfill in order to have a minimally usable store implementation.
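To tie the querying concepts together, the hedged sketch below shows how a QuerySpec equivalent to the JSON example above could be built in code and handed to a store. The builder methods, the Criterion.criterion factory and the assetIndex store call follow the general SPI patterns described here but should be verified against your EDC version:
import java.util.List;
import org.eclipse.edc.spi.query.Criterion;
import org.eclipse.edc.spi.query.QuerySpec;
import org.eclipse.edc.spi.query.SortOrder;

// same filter as the JSON example, plus a nested (dot-notation) criterion
var query = QuerySpec.Builder.newInstance()
        .limit(1)
        .offset(1)
        .sortField("createdAt")
        .sortOrder(SortOrder.DESC)
        .filter(List.of(
                Criterion.criterion("https://w3id.org/edc/v0.0.1/ns/description", "=", "This asset"),
                Criterion.criterion("dataAddress.type", "=", "HttpData")))
        .build();

// assetIndex is the injected store SPI for Assets; the query-like method returns the matching entities
var matchingAssets = assetIndex.queryAssets(query).toList();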
2.7.2 - Json LD
Here is a simple example taken from json-ld.org
{
"@context": "https://json-ld.org/contexts/person.jsonld",
"@id": "http://dbpedia.org/resource/John_Lennon",
"name": "John Lennon",
"born": "1940-10-09",
"spouse": "http://dbpedia.org/resource/Cynthia_Lennon"
}
It’s similar to how a Person would be represented in JSON, with additional known properties such as @context and @id.
The @id is used to uniquely identify an object.
The @context is used to define how terms should be interpreted and helps express specific identifiers with short-hand names instead of IRIs.
An exhaustive list of reserved keywords and their meaning is available here
In the above example the @context
is a remote one, but the @context
can also be defined inline. Here is the same
JSON-LD object using locally defined terms.
{
"@context": {
"xsd": "http://www.w3.org/2001/XMLSchema#",
"name": "http://xmlns.com/foaf/0.1/name",
"born": {
"@id": "http://schema.org/birthDate",
"@type": "xsd:date"
},
"spouse": {
"@id": "http://schema.org/spouse",
"@type": "@id"
}
},
"@id": "http://dbpedia.org/resource/John_Lennon",
"name": "John Lennon",
"born": "1940-10-09",
"spouse": "http://dbpedia.org/resource/Cynthia_Lennon"
}
which defines inline the name
, born
and spouse
terms.
The two objects have the same meaning as Linked Data.
A JSON-LD document can be described in multiple forms and by applying
certain transformations a document can change shape without changing the meaning.
Relevant forms in the realm of EDC are:
- Expanded document form
- Compacted document form
The examples above are in compacted
form and by applying
the expansion algorithm the output would look like this
[
{
"@id": "http://dbpedia.org/resource/John_Lennon",
"http://schema.org/birthDate": [
{
"@type": "http://www.w3.org/2001/XMLSchema#date",
"@value": "1940-10-09"
}
],
"http://xmlns.com/foaf/0.1/name": [
{
"@value": "John Lennon"
}
],
"http://schema.org/spouse": [
{
"@id": "http://dbpedia.org/resource/Cynthia_Lennon"
}
]
}
]
Expansion is the process of taking a JSON-LD document as input and applying the @context so that it is no longer necessary, as all terms are resolved into their IRI representation.
Compaction is the inverse process. It takes a JSON-LD document in expanded form as input and, by applying the supplied @context, creates the compacted form.
For experimenting with JSON-LD and its processing algorithms, the playground is a useful tool.
1. JSON-LD in EDC
EDC uses JSON-LD as its primary serialization format at the API layer, and at runtime EDC manages the objects in their expanded form, for example when transforming a JsonObject into EDC entities and back in transformers, or when validating input JsonObjects at the API level.
Extensible properties in entities are always stored in expanded form.
To achieve that, EDC uses an interceptor (JerseyJsonLdInterceptor) that always expands the JsonObject on ingress and compacts it on egress.
EDC uses JSON-LD for two main reasons:
First, EDC embraces different protocols and standards, such as the Dataspace Protocol and the Decentralized Claims Protocol, and they all rely on JSON-LD as serialization format.
The second reason is that EDC allows extending entities like Asset with custom properties, and uses JSON-LD as the way to extend objects with custom namespaces.
EDC handles JSON-LD through the JsonLd SPI. It supports different operations and configurations for managing JSON-LD in the EDC runtime.
It supports the expansion and compaction processes:
Result<JsonObject> expand(JsonObject json);
Result<JsonObject> compact(JsonObject json, String scope);
and allows configuring which @context and namespaces to use when processing JSON-LD in a specific scope.
For example, the @context and namespaces configured for the JsonLd service in the management API might differ from those used by the same service in the dsp layer.
The JsonLd service can also be configured with cached contexts, allowing a local copy of a remote context to be kept. This limits the network requests required when processing JSON-LD and reduces the attack surface if the remote host of the context is compromised.
By default EDC makes use of @vocab when processing input/output JSON-LD documents. This provides a default vocabulary for extensible properties. An ongoing initiative is available with this extension in order to provide a cached terms mapping (context) for the EDC Management API. The remote context definition is available here.
Implementors that need additional @context and namespaces to be supported in the EDC runtime should develop a custom extension that registers the required @context and namespace.
For example let’s say we want to support a custom namespace http://w3id.org/starwars/v0.0.1/ns/
in the extensible
properties of an Asset.
The input JSON would look like this:
{
"@context": {
"@vocab": "https://w3id.org/edc/v0.0.1/ns/",
"sw": "http://w3id.org/starwars/v0.0.1/ns/"
},
"@type": "Asset",
"@id": "79d9c360-476b-47e8-8925-0ffbeba5aec2",
"properties": {
"sw:faction": "Galactic Imperium",
"sw:person": {
"sw:name": "Darth Vader",
"sw:webpage": "https://death.star"
}
},
"dataAddress": {
"@type": "DataAddress",
"type": "myType"
}
}
Even if we don’t register any additional @context or namespace prefix in the EDC runtime, the Asset will still be persisted correctly, since the JSON-LD gets expanded correctly and stored in the expanded form.
But on egress the JSON-LD document always gets compacted, and without additional configuration it will look like this:
{
"@id": "79d9c360-476b-47e8-8925-0ffbeba5aec2",
"@type": "Asset",
"properties": {
"http://w3id.org/starwars/v0.0.1/ns/faction": "Galactic Imperium",
"http://w3id.org/starwars/v0.0.1/ns/person": {
"http://w3id.org/starwars/v0.0.1/ns/name": "Darth Vader",
"http://w3id.org/starwars/v0.0.1/ns/webpage": "https://death.star"
},
"id": "79d9c360-476b-47e8-8925-0ffbeba5aec2"
},
"dataAddress": {
"@type": "DataAddress",
"type": "myType"
},
"@context": {
"@vocab": "https://w3id.org/edc/v0.0.1/ns/",
"edc": "https://w3id.org/edc/v0.0.1/ns/",
"odrl": "http://www.w3.org/ns/odrl/2/"
}
}
That means that the IRIs are not shortened to terms or compact IRIs. This might be OK for some runtimes and configurations, but if implementors want to achieve better usability and ease of use, two main strategies can be applied:
1.1 Compact IRI
The first strategy is to register a namespace prefix in an extension:
public class MyExtension implements ServiceExtension {
@Inject
private JsonLd jsonLd;
@Override
public void initialize(ServiceExtensionContext context) {
jsonLd.registerNamespace("sw", "http://w3id.org/starwars/v0.0.1/ns/", "MANAGEMENT_API");
}
}
This will shorten the IRI to compact IRI when compacting the same
JSON-LD:
{
"@id": "79d9c360-476b-47e8-8925-0ffbeba5aec2",
"@type": "Asset",
"properties": {
"sw:faction": "Galactic Imperium",
"sw:person": {
"sw:name": "Darth Vader",
"sw:webpage": "https://death.star"
},
"id": "79d9c360-476b-47e8-8925-0ffbeba5aec2"
},
"dataAddress": {
"@type": "DataAddress",
"type": "myType"
},
"@context": {
"@vocab": "https://w3id.org/edc/v0.0.1/ns/",
"edc": "https://w3id.org/edc/v0.0.1/ns/",
"odrl": "http://www.w3.org/ns/odrl/2/",
"sw": "http://w3id.org/starwars/v0.0.1/ns/"
}
}
1.2 Custom Remote Context
An improved version requires developers to draft a context (which should be resolvable via a URL), for example http://w3id.org/starwars/context.jsonld, that contains the term definitions.
An example of a definition might look like this:
{
"@context": {
"@version": 1.1,
"sw": "http://w3id.org/starwars/v0.0.1/ns/",
"person": "sw:person",
"faction": "sw:faction",
"name": "sw:name",
"webpage": "sw:name"
}
}
Then in an extension the context URL should be registered in the desired scope and cached:
public class MyExtension implements ServiceExtension {

    @Inject
    private JsonLd jsonLd;

    @Override
    public void initialize(ServiceExtensionContext context) {
        jsonLd.registerContext("http://w3id.org/starwars/context.jsonld", "MANAGEMENT_API");

        URI documentLocation = // load from filesystem or classpath
        jsonLd.registerCachedDocument("http://w3id.org/starwars/context.jsonld", documentLocation);
    }
}
With this configuration the JSON-LD will be represented without the sw prefix, since the terms mapping is defined in the remote context http://w3id.org/starwars/context.jsonld:
{
"@id": "79d9c360-476b-47e8-8925-0ffbeba5aec2",
"@type": "Asset",
"properties": {
"faction": "Galactic Imperium",
"person": {
"name": "Darth Vader",
"webpage": "https://death.star"
},
"id": "79d9c360-476b-47e8-8925-0ffbeba5aec2"
},
"dataAddress": {
"@type": "DataAddress",
"type": "myType"
},
"@context": [
"http://w3id.org/starwars/context.jsonld",
{
"@vocab": "https://w3id.org/edc/v0.0.1/ns/",
"edc": "https://w3id.org/edc/v0.0.1/ns/",
"odrl": "http://www.w3.org/ns/odrl/2/"
}
]
}
In case of a name clash in the term definitions, the JSON-LD processor should fall back to the compact IRI representation.
1.3 JSON-LD Validation
EDC provides a mechanism to validate JSON-LD objects. The validation phase is typically handled at the network/controller layer. For each entity, identified by its own @type, it is possible to register a custom Validator<JsonObject> using the JsonObjectValidatorRegistry. By default EDC provides validation for all the entities it manages, like Asset, ContractDefinition, etc.
For custom validators it is possible to either implement the Validator<JsonObject> interface (not recommended) or use the bundled JsonObjectValidator, which is a declarative way of configuring a validator for an object through the builder pattern. It also comes with a preset of validation rules such as ID not empty, mandatory properties and many more.
An example of a validator for a custom type Foo:
An example of validator for a custom type Foo
:
{
"@context": {
"@vocab": "https://w3id.org/edc/v0.0.1/ns/",
"edc": "https://w3id.org/edc/v0.0.1/ns/"
},
"@id": "79d9c360-476b-47e8-8925-0ffbeba5aec2",
"@type": "Foo",
"bar": "value"
}
might look like this:
public class FooValidator {
public static JsonObjectValidator instance() {
return JsonObjectValidator.newValidator()
.verifyId(OptionalIdNotBlank::new)
.verify("https://w3id.org/edc/v0.0.1/ns/bar")
.build();
}
}
and can be registered with the injectable JsonObjectValidatorRegistry:
public class MyExtension implements ServiceExtension {
@Inject
private JsonObjectValidatorRegistry validator;
@Override
public void initialize(ServiceExtensionContext context) {
validator.register("https://w3id.org/edc/v0.0.1/ns/Foo", FooValidator.instance());
}
}
When needed, it can be invoked like this:
public class MyController {

    private JsonObjectValidatorRegistry validator;

    public void doSomething(JsonObject input) {
        validator.validate("https://w3id.org/edc/v0.0.1/ns/Foo", input)
                .orElseThrow(ValidationFailureException::new);
    }
}
2.7.3 - Policy Monitor
Some transfer types, once accepted by the provider, never reach the COMPLETED state. Streaming and HTTP transfers in consumer pull scenarios are examples of this. In those scenarios the transfer will remain active (STARTED) until it gets terminated, either manually by using the transfer processes Management API, or automatically by the policy monitor, if it has been configured in the EDC runtime.
The policy monitor (PolicyMonitorManager) is a component that watches over ongoing transfers on the provider side and ensures that the associated policies are still valid. The default implementation of the policy monitor tracks the monitored transfer processes in its own entity PolicyMonitorEntry stored in the PolicyMonitorStore.
Once a transfer process transitions to the STARTED state on the provider side, the policy monitor gets notified through the eventing system of EDC and starts tracking the transfer process. For each monitored transfer process in the STARTED state, the policy monitor retrieves the associated policy (through the contract agreement) and runs the Policy Engine using policy.monitor as the scope.
If the policy is no longer valid, the policy monitor marks the transfer process for termination (TERMINATING) and stops tracking it.
The data plane also gets notified through the data plane signaling protocol about the termination of the transfer process, and if accepted by the data plane, the data transfer terminates as well.
Note for implementors
Implementors that want a policy function to be evaluated by the policy monitor need to bind that function to the policy.monitor scope.
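As a sketch, such a binding mirrors the registration example from the policy chapter, only using the policy.monitor scope; the left-operand key and the function are placeholders for this illustration:
public class MonitorPolicyExtension implements ServiceExtension {

    private static final String MONITOR_SCOPE = "policy.monitor";
    private static final String CONSTRAINT_KEY = "contractExpiry"; // hypothetical leftOperand

    @Inject
    private RuleBindingRegistry ruleBindingRegistry;

    @Inject
    private PolicyEngine policyEngine;

    @Override
    public void initialize(ServiceExtensionContext context) {
        // make rules with this left-operand "relevant" in the policy monitor scope
        ruleBindingRegistry.bind(CONSTRAINT_KEY, MONITOR_SCOPE);
        // register the evaluation function for that scope; MyMonitorFunction is a placeholder,
        // and the rule type (Permission vs Duty) depends on where the constraint appears in the policy
        policyEngine.registerFunction(MONITOR_SCOPE, Permission.class, CONSTRAINT_KEY, new MyMonitorFunction());
    }
}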
Note that because the policy evaluation happens in the background, the PolicyContext does not contain the ParticipantAgent as context data. This means that the policy monitor cannot evaluate policies that involve VerifiableCredentials.
Currently, the only information published in the PolicyContext and available to functions in the policy.monitor scope is the ContractAgreement and the Instant at the time of the evaluation.
A bundled example of a Policy function that runs in the policy.monitor
scope is the ContractExpiryCheckFunction
which checks if the contract agreement is not expired.
2.7.4 - Programming Primitives
1 State machines
EDC is asynchronous by design, which means that processes are handled in such a way that they block neither the runtime nor the caller. For example, starting a contract negotiation is a long-running process, and every contract negotiation has to traverse a series of states, most of which involve sending remote messages to the counter party. These state transitions are not guaranteed to happen within a certain time frame; they could take hours or even days.
From that it follows that an EDC instance must be regarded as ephemeral (= it can’t hold state in memory), so the state (of a contract negotiation) must be held in persistent storage. This makes it possible to start and stop connector runtimes arbitrarily, and every replica picks up where the other left off, without causing conflicts or processing an entity twice.
The state machine itself is synchronous: in every iteration it processes a number of objects and then either goes back
to sleep, if there was nothing to process, or continues right away.
At a high level this is implemented in the StateMachineManager, which uses a set of Processors. The StateMachineManager sequentially invokes each Processor, which then reports the number of processed entities. In EDC’s state machines, processors are functions that handle StatefulEntities in a particular state and are registered when the application starts up:
// ProviderContractNegotiationManagerImpl.java
@Override
protected StateMachineManager.Builder configureStateMachineManager(StateMachineManager.Builder builder) {
return builder
.processor(processNegotiationsInState(OFFERING, this::processOffering))
.processor(processNegotiationsInState(REQUESTED, this::processRequested))
.processor(processNegotiationsInState(ACCEPTED, this::processAccepted))
.processor(processNegotiationsInState(AGREEING, this::processAgreeing))
.processor(processNegotiationsInState(VERIFIED, this::processVerified))
.processor(processNegotiationsInState(FINALIZING, this::processFinalizing))
.processor(processNegotiationsInState(TERMINATING, this::processTerminating));
}
This instantiates a Processor
that binds a given state to a callback function. For example AGREEING
->
this::processAgreeing
. When the StateMachineManager
invokes this Processor
, it loads all contract negotiations in
that state (here: AGREEING
) and passes each one to the processAgreeing
method.
All processors are invoked sequentially, because it is possible that one single entity transitions to multiple states in
the same iteration.
1.1 Batch-size, sorting and tick-over timeout
In every iteration the state machine loads multiple StatefulEntity objects from the database. To avoid overwhelming the state machine and to prevent entities from becoming stale, the following safeguards are in place:
- batch-size: this is the maximum amount of entities per state that are fetched from the database
- sorting: StatefulEntity objects are sorted based on when their state was last updated, oldest first.
- iteration timeout: if no StatefulEntities were processed, the state machine simply yields for a configurable amount of time.
1.2 Database-level locking
In production deployments the control plane is typically replicated over several instances for performance and
robustness. This must be considered when loading StatefulEntity
objects from the database, because it is possible that
two replicas attempt to load the same entity at the same time, which - without locks - would lead to a race condition,
data inconsistencies, duplicated DSP messages and other problems.
To avoid this, EDC employs pessimistic exclusive locks on the database level for stateful entities, which are called
Lease
. These are entries in a database that indicate whether an entity is currently leased, whether the lease is
expired and which replica leased the entity. Attempting to acquire a lease for an already-leased entity is only possible
if the
lease holder is the same.
Note that the value of the edc.runtime.id
property is used to record the holder of a Lease
. It is recommended not
to configure this property in clustered environments so that randomized runtime IDs (= default) are used.
Generally the process is as follows:
- load N "leasable" entities and acquire a lease for each one. An entity is considered "leasable" if it is not already leased, or the current runtime already holds the lease, or the lease is expired.
- if the entity was processed, advance its state and free the lease
- if the entity was not processed, free the lease
That way, each replica of the control plane holds an exclusive lock for a particular entity while it is trying to
process and advance its state.
2. Transformers
EDC uses JSON-LD serialization on API ingress and egress. More information about this can be found in this chapter, but the TL;DR is that it is necessary because of extensible properties and namespaces on wire-level DTOs.
2.1 Basic Serialization and Deserialization
On API ingress and egress this means that conventional serialization and deserialization (“SerDes”) cannot be achieved
with Jackson, because Jackson operates on a configurable, but ultimately rigid schema.
For that reason, EDC implements its own SerDes layer, called "transformers". The common base class for all transformers is the AbstractJsonLdTransformer<I,O> and the naming convention is JsonObject[To|From]<Entity>Transformer, for example JsonObjectToAssetTransformer. They typically come in pairs, to enable both serialization and deserialization.
Another rule is that the entity class must contain the fully-qualified (expanded) property names as constants, and typical programming patterns are (a minimal transformer sketch follows after this list):
- deserialization: transformers contain a switch statement that parses the property names and populates the entity's builder
- serialization: transformers simply construct the JsonObject based on the properties of the entity using a JsonObjectBuilder
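As a minimal sketch of the deserialization pattern (the entity, its builder and the property constant are hypothetical, and the exact base-class helpers may differ between EDC versions):
public class JsonObjectToSomeEntityTransformer extends AbstractJsonLdTransformer<JsonObject, SomeEntity> {

    public JsonObjectToSomeEntityTransformer() {
        super(JsonObject.class, SomeEntity.class);
    }

    @Override
    public @Nullable SomeEntity transform(@NotNull JsonObject jsonObject, @NotNull TransformerContext context) {
        var builder = SomeEntity.Builder.newInstance();
        // iterate over the expanded JSON-LD properties and switch on the fully-qualified names
        jsonObject.forEach((key, value) -> {
            switch (key) {
                case SomeEntity.SOME_PROPERTY -> builder.someProperty(transformString(value, context));
                default -> { /* unknown properties are ignored or reported via context.reportProblem(...) */ }
            }
        });
        return builder.build();
    }
}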
2.2 Transformer context
Many entities in EDC are complex objects that contain other complex objects. For example, a ContractDefinition
contains the asset selector, which is a List<Criterion>
. However, a Criterion
is also used in a QuerySpec
, so it
makes sense to extract its deserialization into a dedicated transformer. So when the
JsonObjectFromContractDefinitionTransformer
encounters the asset selector property in the JSON structure, it delegates
its deserialization back to the TransformerContext
, which holds a global list of type transformers (
TypeTransformerRegistry
).
As a general rule of thumb, a transformer should only deserialize first-order properties, and nested complex objects
should be delegated back to the TransformerContext
.
Every module that contains a type transformer should register it with the TypeTransformerRegistry
in its accompanying
extension:
@Inject
private TypeTransformerRegistry typeTransformerRegistry;
@Override
public void initialize(ServiceExtensionContext context) {
typeTransformerRegistry.register(new JsonObjectToYourEntityTransformer());
}
One might encounter situations where different serialization formats are required for the same entity; for example, DataAddress objects are serialized differently on the Signaling API and the DSP API.
If we simply registered both transformers with the transformer registry, the second registration would overwrite the first, because both transformers have the same input and output types:
public class JsonObjectFromDataAddressTransformer extends AbstractJsonLdTransformer<DataAddress, JsonObject> {
//...
}
public class JsonObjectFromDataAddressDspaceTransformer extends AbstractJsonLdTransformer<DataAddress, JsonObject> {
//...
}
Consequently, all DataAddress
objects would get serialized in the same way.
To overcome this limitation, EDC has the concept of segmented transformer registries, where the segment is defined by
a string called a “context”:
@Inject
private TypeTransformerRegistry typeTransformerRegistry;
@Override
public void initialize(ServiceExtensionContext context) {
var signalingApiRegistry = typeTransformerRegistry.forContext("signaling-api");
signalingApiRegistry.register(new JsonObjectFromDataAddressDspaceTransformer(/*arguments*/));
var dspRegistry = typeTransformerRegistry.forContext("dsp-api");
dspRegistry.register(new JsonObjectToDataAddressTransformer());
}
Note that this example serves for illustration purposes only!
Usually, transformation happens in API controllers to deserialize input, process and serialize output, but controllers
don’t use transformers directly because more than one transformer may be required to correctly deserialize an object.
Rather, they have a reference to a TypeTransformerRegistry
for this. For more information please refer to the chapter
about service layers.
Generally speaking, input validation should be performed by validators. However, it is still possible that an object cannot be serialized/deserialized correctly, for example when a property has the wrong type or multiplicity, cannot be parsed, or is unknown. Those types of errors should be reported to the TransformerContext:
// JsonObjectToDataPlaneInstanceTransformer.java
private void transformProperties(String key, JsonValue jsonValue, DataPlaneInstance.Builder builder, TransformerContext context) {
switch (key) {
case URL -> {
try {
builder.url(new URL(Objects.requireNonNull(transformString(jsonValue, context))));
} catch (MalformedURLException e) {
context.reportProblem(e.getMessage());
}
}
// other properties
}
}
Transformers should report errors to the context instead of throwing exceptions. Please note that basic JSON validation
should be performed by validators.
3. Token generation and decorators
A token is a data structure that consists of a header and claims and is signed with a private key. While EDC is able to create any type of token through extensions, in most use cases JSON Web Tokens (JWT) are a good option.
The TokenGenerationService
offers a way to generate such a token by passing in a reference to a private key and a set
of TokenDecorators
. These are functions that mutate the parameters of a token, for example they could contribute
claims and headers to JWTs:
TokenDecorator jtiDecorator = tokenParams -> tokenParams.claim("jti", UUID.randomUUID().toString());
TokenDecorator typeDecorator = tokenParams -> tokenParams.header("typ", "JWT");
var token = tokenGenerationService.generate("my-private-key-id", jtiDecorator, typeDecorator);
In the EDC code base the TokenGenerationService
is not intended to be injectable, because client code typically should
be opinionated with regards to the token technology.
4. Token validation and rules
When receiving a token, EDC makes use of the TokenValidationService
facility to verify and validate the incoming
token. Out-of-the-box JWTs are supported, but other token types could be supported through
extensions. This section will be limited to validating JWT tokens.
Every JWT that is validated by EDC must have a kid
header indicating the ID of the public key with which the token
can be verified. In addition, a PublicKeyResolver
implementation is required to download the public key.
4.1 Public Key Resolvers
PublicKeyResolvers
are services that resolve public key material from public locations. It is common for organizations
to publish their public keys as JSON Web Key Set (JWKS) or as verification
method in a DID document. If operational circumstances require
that multiple resolution strategies be supported at runtime, the recommended way to achieve this is to implement a
PublicKeyResolver
that dispatches to multiple sub-resolvers based on the shape of the key ID.
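A minimal sketch of such a dispatching resolver might look like the following; it assumes that PublicKeyResolver exposes a single resolveKey(String) method returning a Result<PublicKey>, and that the two sub-resolvers already exist (both are assumptions, not verified API):
public class DispatchingPublicKeyResolver implements PublicKeyResolver {

    private final PublicKeyResolver didResolver;  // resolves key IDs that are DIDs
    private final PublicKeyResolver jwksResolver; // resolves key IDs that point to a JWKS URL

    public DispatchingPublicKeyResolver(PublicKeyResolver didResolver, PublicKeyResolver jwksResolver) {
        this.didResolver = didResolver;
        this.jwksResolver = jwksResolver;
    }

    @Override
    public Result<PublicKey> resolveKey(String keyId) {
        // dispatch to a sub-resolver based on the shape of the key ID
        if (keyId != null && keyId.startsWith("did:")) {
            return didResolver.resolveKey(keyId);
        }
        return jwksResolver.resolveKey(keyId);
    }
}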
Sometimes it is necessary for the connector runtime to resolve its own public key, e.g. when validating a token that was sent out in a previous interaction. In these cases it is best to avoid a remote call to a DID document or a JWKS URL and to resolve the public key locally instead.
4.2 Validation Rules
With the public key the validation service is able to verify the token’s signature, i.e. to assert its cryptographic
integrity. Once that succeeds, the TokenValidationService
parses the token string and applies all
TokenValidationRules
on the claims. We call this validation, since it asserts the correct (“valid”) structure of the
token’s claims.
4.3 Validation Rules Registry
Usually, tokens are validated in different contexts, each of which brings its own validation rules. Currently, the following token validation contexts exist:
- "dcp-si": when validating Self-Issued ID tokens in the Decentralized Claims Protocol (DCP)
- "dcp-vc": when validating VerifiableCredentials that have an external proof in the form of a JWT (JWT-VCs)
- "dcp-vp": when validating VerifiablePresentations that have an external proof in the form of a JWT (JWT-VPs)
- "oauth2": when validating OAuth2 tokens
- "management-api": when validating external tokens in the Management API ingress (relevant when delegated authentication is used)
Using these contexts it is possible to register additional validation rules using extensions:
//YourSpecialExtension.java
@Inject
private TokenValidationRulesRegistry rulesRegistry;
@Override
public void initialize(ServiceExtensionContext context) {
rulesRegistry.addRule(DCP_SELF_ISSUED_TOKEN_CONTEXT, (claimtoken, additional) -> {
var checkResult = ...// perform rule check
return checkResult;
});
}
This is useful for example when certain dataspaces require additional rules to be satisfied or even private
claims to be exchanged.
2.7.5 - Protocol Extensions
EDC officially supports the Dataspace Protocol using the HTTPS bindings, but since it is an extensible platform, multiple protocol implementations can be supported for inter-connector communication. Each supported protocol is identified by a unique key that EDC uses for dispatching a remote message.
1. RemoteMessage
At the heart of the EDC message exchange mechanism lies the RemoteMessage interface, which describes the protocol, the counterPartyAddress and the counterPartyId used for message delivery.
RemoteMessage extensions can be divided in three groups: catalog, contract negotiation, and transfer process messages.
1.1 Delivering messages with RemoteMessageDispatcher
Each protocol implements a RemoteMessageDispatcher
:
public interface RemoteMessageDispatcher {
String protocol();
<T, M extends RemoteMessage> CompletableFuture<StatusResult<T>> dispatch(Class<T> responseType, M message);
}
and it is registered in the RemoteMessageDispatcherRegistry, where it gets associated with the protocol defined in RemoteMessageDispatcher#protocol.
Internally EDC uses the RemoteMessageDispatcherRegistry
whenever it needs to deliver a RemoteMessage
to the counter-party. The RemoteMessage
then gets routed to the right RemoteMessageDispatcher
based on the RemoteMessage#getProtocol
property.
EDC also uses the RemoteMessageDispatcherRegistry for non-protocol messages when dispatching event callbacks.
1.2 Handling incoming messages with protocol services
On the ingress side, protocol implementations should be able to receive messages over the network (e.g. via API controllers), deserialize them into the corresponding RemoteMessages and then dispatch them to the right protocol service.
There are three protocol services:
- CatalogProtocolService
- ContractNegotiationProtocolService
- TransferProcessProtocolService
which handle Catalog, ContractNegotiation and TransferProcess messages, respectively.
2. DSP protocol implementation
The Dataspace Protocol implementation is available under the data-protocol/dsp subfolder in the Connector repository and is identified by the key dataspace-protocol-http.
It extends the RemoteMessageDispatcher with the interface DspHttpRemoteMessageDispatcher (dsp-spi), which adds an additional method for registering message handlers.
The implementation of the three DSP specifications (catalog, contract negotiation and transfer process) is separated into multiple extension modules grouped by specification. This allows, for example, building a runtime that only serves DSP catalog requests, which is useful in the Management Domains scenario.
Each specification implementation defines handlers and transformers for RemoteMessages and exposes HTTP endpoints.
The dsp implementation also provides HTTP endpoints for the DSP common functionalities.
2.1 RemoteMessage
handlers
Handlers map a RemoteMessage
to an HTTP Request and instruct the DspHttpRemoteMessageDispatcher
how to extract the response body to a desired type.
2.2 HTTP endpoints
Each dsp-*-http-api module exposes its own API controllers for serving the specification requests. Each request handler transforms the input JSON-LD, if present, into a RemoteMessage and then calls the protocol service layer.
Each dsp-*-transform module registers, in the DSP API context, Transformers for mapping JSON-LD objects from and to RemoteMessages.
2.7.6 - Service Layers
This document describes the EDC service layers.
1. API controllers
EDC uses JAX-RS/Jersey to expose REST endpoints, so our REST controllers look like this:
@Consumes({ MediaType.APPLICATION_JSON })
@Produces({ MediaType.APPLICATION_JSON })
@Path("/v1/foo/bar")
public class SomeApiController implements SomeApi {
@POST
@Override
public JsonObject create(JsonObject someApiObject) {
//perform logic
}
}
It is worth noting that, as a rule, EDC API controllers only carry JAX-RS annotations; all other annotations, such as OpenAPI annotations, should be put on the interface SomeApi.
In addition, EDC APIs accept their arguments as JsonObject due to the use of JSON-LD. This applies to internal and external APIs alike.
API controllers should not contain any business logic other than validation, serialization and service invocation.
All API controllers perform JSON-LD expansion upon ingress and JSON-LD compaction upon egress.
1.1 API contexts
API controllers must be registered with the Jersey web server. To better separate the different API controllers and cluster them in coherent groups, EDC has the notion of "web contexts". Technically, these are individual ServletContainer instances, each of which is available at a separate port and URL path.
To register a new context, it needs to be configured first:
@Inject
private WebService webService;
@Inject
private WebServiceConfigurer configurer;
@Inject
private WebServer webServer;
@Override
public void initialize(ServiceExtensionContext context) {
var defaultConfig = WebServiceSettings.Builder.newInstance()
.apiConfigKey("web.http.yourcontext")
.contextAlias("yourcontext")
.defaultPath("/api/some")
.defaultPort(10080)
.useDefaultContext(false)
.name("Some new API")
.build();
var config = context.getConfig("web.http.yourcontext"); //reads web.http.yourcontext.[port|path] from the configuration
configurer.configure(config, webServer, defaultConfig);
}
1.2 Registering controllers
After the previous step, the "yourcontext"
context is available with the web server and the API controller can be
registered:
webService.registerResource("yourcontext", new SomeApiController(/* arguments */));
This makes the SomeApiController
available at http://localhost:10080/api/some/v1/foo/bar. It is possible to register
multiple controllers with the same context.
Note that the default port and path can be changed by configuring web.http.yourcontext.port and web.http.yourcontext.path, as shown in the sketch below.
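For example, a configuration sketch that overrides the defaults declared in the previous snippet (the values are illustrative only):
web.http.yourcontext.port=9191
web.http.yourcontext.path=/api/custom
With this configuration, SomeApiController would be reachable at http://localhost:9191/api/custom/v1/foo/bar.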
1.3 Registering other resources
Any JAX-RS Resource (as per
the JAX-RS Specification, Chapter 3. Resources)
can be registered with the web server.
Examples of this in EDC are JSON-LD interceptors, that expand/compact JSON-LD on ingress and egress, respectively, and
ContainerFilter
instances that are used for request authentication.
1.4 API Authentication
In Jersey, one way to do request authentication is by implementing the ContainerRequestFilter
interface. Usually,
authentication and authorization information is communicated in the request header, so EDC defines the
AuthenticationRequestFilter
, which extracts the headers from the request, and forwards them to an
AuthenticationService
instance.
Implementations for the AuthenticationService
interface must be registered by an extension:
@Inject
private ApiAuthenticationRegistry authenticationRegistry;
@Inject
private WebService webService;
@Override
public void initialize(ServiceExtensionContext context) {
authenticationRegistry.register("your-api-auth", new SuperCustomAuthService());
var authenticationFilter = new AuthenticationRequestFilter(authenticationRegistry, "your-api-auth");
webService.registerResource("yourcontext", authenticationFilter);
}
This registers the request filter for the web context, and registers the authentication service within the request filter. That way, whenever an HTTP request hits the "yourcontext" servlet container, the request filter gets invoked, delegating to the SuperCustomAuthService instance.
2. Validators
Extending the API controller example from the previous chapter, we add input validation. The validatorRegistry
variable is of type JsonObjectValidatorRegistry
and contains Validator
s that are registered for an arbitrary string,
but usually the @type
field of a JSON-LD structure is used.
public JsonObject create(JsonObject someApiObject) {
validatorRegistry.validate(SomeApiObject.TYPE_FIELD, someApiObject)
.orElseThrow(ValidationFailureException::new);
// perform logic
}
A common pattern to construct a Validator
for a JsonObject
is to use the JsonObjectValidator
:
public class SomeApiObjectValidator {
public static Validator<JsonObject> instance() {
return JsonObjectValidator.newValidator()
.verify(path -> new TypeIs(path, SomeApiObject.TYPE_FIELD))
.verifyId(MandatoryIdNotBlank::new)
.verifyObject(SomeApiObject.NESTED_OBJECT, v -> v.verifyId(MandatoryIdNotBlank::new))
.verify(SomeApiObject.NAME_PROPERTY, MandatoryValue::new)
.build();
}
}
This validator asserts that the @type field is equal to SomeApiObject.TYPE_FIELD, that the input object has a non-null @id, that the input object has a nested object which also has an @id, and that the input object has a non-null property that contains the name.
Of course, defining a separate class that implements the Validator<JsonObject>
interface is possible as well.
This validator must then be registered in the extension class with the JsonObjectValidatorRegistry
:
// YourApiExtension.java
@Override
public void initialize() {
validatorRegistry.register(SomeApiObject.TYPE_FIELD, SomeApiObjectValidator.instance());
}
3. Transformers
Transformers are among EDC's fundamental programming primitives. They are responsible for SerDes only; they are not supposed to perform any validation or any sort of business logic.
Recalling the code example from the API controllers chapter, we can add
transformation as follows:
@Override
public JsonObject create(JsonObject someApiObject) {
    validatorRegistry.validate(SomeApiObject.TYPE_FIELD, someApiObject)
            .orElseThrow(ValidationFailureException::new);
    // deserialize JSON -> SomeApiObject
    var entity = typeTransformerRegistry.transform(someApiObject, SomeApiObject.class)
            .onFailure(f -> monitor.warning(/*warning message*/))
            .orElseThrow(InvalidRequestException::new);
    var modifiedObject = someService.someServiceMethod(entity);
    // serialize SomeApiObject -> JSON
    return typeTransformerRegistry.transform(modifiedObject, JsonObject.class)
            .orElseThrow(f -> new EdcException(f.getFailureDetail()));
}
Note that validation should always be done first, as it is supposed to operate on the raw JSON structure. A failing transformation indicates a client error, which is represented as an HTTP 400 error code. Throwing a ValidationFailureException takes care of that.
This example assumes that the input object gets processed by the service and the modified object is returned in the HTTP body.
The step sequence should always be: Validation, Transformation, Aggregate Service invocation.
4. Aggregate services
Aggregate services are merely an integration of several other services to provide a single, unified service contract
to the
caller. They should be understood as higher-order operations that delegate down to lower-level services. A typical
example in EDC is when trying to delete an Asset
. The AssetService
would first check whether the asset in question
is referenced by a ContractNegotiation
, and - if not - delete the asset. For that it requires two collaborator
services, an AssetIndex
and a ContractNegotiationStore
.
Likewise, when creating assets, the AssetService would first perform some validation, then create the asset (again using the AssetIndex) and then emit an event.
Note that the validation mentioned here is different from API validators. API validators only
validate the structure of a JSON object, so check if mandatory fields are missing etc., whereas service validation
asserts that all business rules are adhered to.
In addition to business logic, aggregate services are also responsible for transaction management, by enclosing relevant
code with transaction boundaries:
public ServiceResult<SomeApiObject> someServiceMethod(SomeApiObject input) {
    return transactionContext.execute(() -> {
        input.modifySomething();
        return ServiceResult.from(apiObjectStore.update(input));
    });
}
The example presumes that the apiObjectStore returns a StoreResult object.
5. Data persistence
One important collaborator for aggregate services is data persistence, because most operations involve some sort of persistence interaction. In EDC, these persistence services are often called "stores" and they usually provide CRUD functionality for entities.
Typically, stores fulfill the following contract (a sketch of such a store interface follows after this list):
- all store operations are transactional, i.e. they run in a transactionContext
- create and update are separate operations. Creating an existing object and updating a non-existent one should return errors
- stores should have a query method that takes a QuerySpec object and returns either a Stream or a Collection. Read the next chapter for details.
- stores return a StoreResult
- stores don't implement business logic
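A hedged sketch of a store interface that follows this contract (the entity and method names are hypothetical; concrete EDC stores differ in detail):
public interface SomeApiObjectStore {

    // fails if an object with the same ID is already stored
    StoreResult<Void> create(SomeApiObject object);

    // fails if no object with that ID exists
    StoreResult<Void> update(SomeApiObject object);

    // returns all stored objects matching the QuerySpec
    Stream<SomeApiObject> query(QuerySpec querySpec);
}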
5.1 In-Memory stores
Unless configured otherwise, EDC provides in-memory store implementations by default. These are lightweight, thread-safe Map-based implementations that are intended for testing, demonstration and tutorial purposes only.
Querying in InMemory stores
In-memory stores are based on Java collection types and can therefore make use of the capabilities of the Streaming API for filtering and querying. What we are looking for is a way to convert a QuerySpec into a set of Streaming API expressions. This is pretty straightforward for the offset, limit and sortOrder properties, because there are direct counterparts in the Streaming API.
For filter expressions (which are Criterion
objects), we first need to convert each criterion into a Predicate
which
can be passed into the .filter()
method.
Since all objects held by in-memory stores are just Java classes, we can perform the query based on field names which we
obtain through Reflection. For this, we use a QueryResolver
, in particular the ReflectionBasedQueryResolver
.
The query resolver then attempts to find an instance field that corresponds to the leftOperand
of a Criterion
. Let’s
assume a simple entity SimpleEntity
:
public class SimpleEntity {
private String name;
}
and a filter expression
{
"leftOperand": "name",
"operator": "=",
"rightOperand": "foobar"
}
The QueryResolver
attempts to resolve a field named "name"
and resolve its assigned value, convert the "="
into a
Predicate
and pass "foobar"
to the test()
method. In other words, the QueryResolver
checks, if the value
assigned to a field that is identified by the leftOperand
matches the value specified by rightOperand
.
Here is a full example of how querying is implemented in in-memory stores:
Example: ContractDefinitionStore
public class InMemoryContractDefinitionStore implements ContractDefinitionStore {
private final Map<String, ContractDefinition> cache = new ConcurrentHashMap<>();
private final QueryResolver<ContractDefinition> queryResolver;
// usually you can pass CriterionOperatorRegistryImpl.ofDefaults() here
public InMemoryContractDefinitionStore(CriterionOperatorRegistry criterionOperatorRegistry) {
queryResolver = new ReflectionBasedQueryResolver<>(ContractDefinition.class, criterionOperatorRegistry);
}
@Override
public @NotNull Stream<ContractDefinition> findAll(QuerySpec spec) {
return queryResolver.query(cache.values().stream(), spec);
}
// other methods
}
6. Events and Callbacks
In EDC, all processing in the control plane is asynchronous and state changes are communicated by events. The base class
for all events is Event
.
6.1 Event
vs EventEnvelope
Subclasses of Event
are supposed to carry all relevant information pertaining to the event such as entity IDs. They
are not supposed to carry event metadata such as event timestamp or event ID. These should be stored on the
EventEnvelope
class, which also contains the Event
class as payload.
There are two ways events can be consumed: in-process and via webhooks.
6.2 Registering for events (in-process)
This variant is applicable when events are to be consumed by a custom extension in an EDC runtime. The term “in-process”
refers to the fact that event producer and event consumer run in the same Java process.
The entry point for event listening is the EventRouter
interface, on which an EventSubscriber
can be registered.
There are two ways to register an EventSubscriber
:
- async: every event will be sent to the subscribers in an asynchronous way. Features:
  - fast, as the main thread won't be blocked during event dispatch
  - not reliable, as a subscriber dispatch failure won't be handled
  - to be used for notifications and for send-and-forget event dispatch
- sync: every event will be sent to the subscriber in a synchronous way. Features:
  - slow, as the subscriber will block the main thread until the event is dispatched
  - reliable, as an exception will be thrown to the caller and could cause the transaction to fail
  - to be used for event persistence and to satisfy the "at-least-once" rule
The EventSubscriber
is typed over the event kind (Class), and it will be invoked only if the type of the event matches
the published one (instanceOf). The base class for all events is Event
.
For example, developing an auditing extension could be done through event subscribers:
@Inject
private EventRouter eventRouter;
@Override
public void initialize(ServiceExtensionContext context) {
eventRouter.register(TransferProcessEvent.class, new AuditingEventHandler()); // async dispatch
// or
eventRouter.registerSync(TransferProcessEvent.class, new AuditingEventHandler()); // sync dispatch
}
Note that TransferProcessEvent is not a concrete class; it is a superclass for all events related to transfer processes. This implies that subscribers can either be registered for "groups" of events or for concrete events (e.g. TransferProcessStarted).
The AuditingEventHandler
could look like this:
@Override
public <E extends Event> void on(EventEnvelope<E> event) {
if (event.getPayload() instanceof TransferProcessEvent transferProcessEvent) {
// react to event
}
}
6.3 Registering for callbacks (webhooks)
This variant is applicable when adding extensions that contain event subscribers is not possible. Rather, the EDC
runtime invokes a webhook when a particular event occurs and sends event data there.
Webhook information must be sent alongside in the request body of certain Management API requests. For details, please
refer to the Management API documentation. Providing
webhooks is only possible for certain events, for example when initiating a contract
negotiation:
// POST /v3/contractnegotiations
{
"@context": {
"@vocab": "https://w3id.org/edc/v0.0.1/ns/"
},
"@type": "https://w3id.org/edc/v0.0.1/ns/ContractRequest",
"counterPartyAddress": "http://provider-address",
"protocol": "dataspace-protocol-http",
"policy": {
//...
},
"callbackAddresses": [
{
"transactional": false,
"uri": "http://callback/url",
"events": [
"contract.negotiation",
"transfer.process"
],
"authKey": "auth-key",
"authCodeId": "auth-code-id"
}
]
}
If your webhook endpoint requires authentication, the secret must be sent in the authKey
property. The authCodeId
field should contain a string which EDC can use to temporarily store the secret in its secrets vault.
6.4 Emitting custom events
It is also possible to create and publish custom events on top of the EDC eventing system. To define the event, extend
the Event
class.
Rule of thumb: events should be named in past tense, to describe something that has already happened
public class SomethingHappened extends Event {
private String description;
public String getDescription() {
return description;
}
private SomethingHappened() {
}
// Builder class not shown
}
All the data pertaining to an event should be stored in the Event class. Like any other event, custom events can be published through the EventRouter component:
public class ExampleBusinessLogic {
public void doSomething() {
// some business logic that does something
var event = SomethingHappened.Builder.newInstance()
.description("something interesting happened")
.build();
var envelope = EventEnvelope.Builder.newInstance()
.at(clock.millis())
.payload(event)
.build();
eventRouter.publish(envelope);
}
}
Please note that the at field is a timestamp that every event has, and it is mandatory (please use the Clock to get the current timestamp).
6.5 Serialization and Deserialization of custom events
All events must be serializable; because of this, every class that extends Event will be serializable to JSON through the TypeManager service. The JSON structure will contain an additional field called type that describes the name of the event class. For example, a serialized EventEnvelope<SomethingHappened> event will look like:
{
"type": "SomethingHappened",
"at": 1654764642188,
"payload": {
"description": "something interesting happened"
}
}
In order to make such an event deserializable by the TypeManager, it is necessary to register the type:
typeManager.registerTypes(new NamedType(SomethingHappened.class, SomethingHappened.class.getSimpleName()));
Doing so, the event can be deserialized using the EventEnvelope class as type:
var deserialized = typeManager.readValue(json, EventEnvelope.class);
// deserialized will have the `EventEnvelope<SomethingHappened>` type at runtime
2.7.7 - Dependency Injection
1. Registering a service implementation
As a general rule, the module that provides the implementation should also register it with the ServiceExtensionContext. This is done in an accompanying service extension. For example, providing a "FunkyDB"-based implementation for a FooStore (which stores Foo objects) would require the following classes:
- A FooStore.java interface, located in SPI:
public interface FooStore {
    void store(Foo foo);
}
- A FunkyFooStore.java class implementing the interface, located in :extensions:funky:foo-store-funky:
public class FunkyFooStore implements FooStore {
    @Override
    public void store(Foo foo) {
        // ...
    }
}
- A FunkyFooStoreExtension.java, also located in :extensions:funky:foo-store-funky. It must be accompanied by a "provider-configuration file" as required by the ServiceLoader documentation. Code examples will follow below.
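As a hedged sketch of such a provider-configuration file (the fully-qualified name of the ServiceExtension interface is assumed here to be org.eclipse.edc.spi.system.ServiceExtension, and the extension's package is hypothetical), the module would contain a file at src/main/resources/META-INF/services/org.eclipse.edc.spi.system.ServiceExtension with a single line naming the extension class:
com.example.funky.FunkyFooStoreExtension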
1.1 Use @Provider
methods (recommended)
Every ServiceExtension
may declare methods that are annotated with @Provider
, which tells the dependency resolution
mechanism, that this method contributes a dependency into the context. This is very similar to other DI containers, e.g.
Spring’s @Bean
annotation. It looks like this:
public class FunkyFooStoreExtension implements ServiceExtension {
@Override
public void initialize(ServiceExtensionContext context) {
// ...
}
//Example 1: no args
@Provider
public SomeService provideSomeService() {
return new SomeServiceImpl();
}
//Example 2: using context
@Provider
public FooStore provideFooStore(ServiceExtensionContext context) {
var setting = context.getConfig("...", null);
return new FunkyFooStore(setting);
}
}
As the previous code snippet shows, provider methods may have no args, or a single argument, which is the ServiceExtensionContext. There are a few other restrictions too; violating these will raise an exception. Provider methods must:
- be public
- return a value (void is not allowed)
- either have no arguments, or a single ServiceExtensionContext
Declaring a provider method is equivalent to invoking context.registerService(SomeService.class, new SomeServiceImpl()). Thus, the return type of the method defines the service type, and whatever is returned by the provider method determines the implementation of the service.
Caution: there is a slight difference between declaring @Provider
methods and calling
service.registerService(...)
with respect to sequence: the DI loader mechanism first invokes
ServiceExtension#initialize()
, and then invokes all provider methods. In most situations this difference is
negligible, but there could be situations, where it is not.
1.2 Provide “defaults”
Where @Provider
methods really come into their own is when providing default implementations. This means we can have a
fallback implementation. For example, going back to our FooStore
example, there could be an extension that provides a
default (=in-mem) implementation:
public class DefaultsExtension implements ServiceExtension {
@Provider(isDefault = true)
public FooStore provideDefaultFooStore() {
return new InMemoryFooStore();
}
}
Provider methods configured with isDefault=true
are only invoked, if the respective service (here: FooStore
) is not
provided by any other extension.
As a general programming rule, every SPI should come with a default implementation if possible.
Default provider methods are a tricky topic, please be sure to thoroughly read the additional documentation about
them here!
1.3 Register manually (not recommended)
Of course, it is also possible to manually register services by invoking the respective method on the ServiceExtensionContext:
@Provides(FooStore.class/*, possibly others*/)
public class FunkyFooStoreExtension implements ServiceExtension {
@Override
public void initialize(ServiceExtensionContext context) {
var setting = context.getConfig("...", null);
var store = new FunkyFooStore(setting);
context.registerService(FooStore.class, store);
}
}
There are three important things to mention:
- the call to context.registerService() makes the object available in the context. From this point on, other extensions can inject a FooStore (and in doing so will receive a FunkyFooStore).
- the interface class must be listed in the @Provides() annotation, because it helps the extension loader determine the order in which extensions need to be initialized
- service registrations must be done in the initialize() method
2. Injecting a service
As with other DI mechanisms, services should only be referenced by the interface they implement. This will keep
dependencies clean and maintain extensibility, modularity and testability. Say we have a FooMaintenanceService
that
receives Foo
objects over an arbitrary network channel and stores them.
2.1 Use @Inject
to declare dependencies (recommended)
public class FooMaintenanceService {
private final FooStore fooStore;
public FooMaintenanceService(FooStore fooStore) {
this.fooStore = fooStore;
}
}
Note that the example uses what we call constructor injection (even though nothing is actually injected), because
that is needed for object construction, and it increases testability. Also, those types of instance members should be
declared final
to avoid programming errors.
In contrast to conventional DI frameworks the fooStore
dependency won’t get auto-injected - rather, this is done in a
ServiceExtension
that accompanies the FooMaintenanceService
and that injects FooStore
:
public class FooMaintenanceExtension implements ServiceExtension {
@Inject
private FooStore fooStore;
@Override
public void initialize(ServiceExtensionContext context) {
var service = new FooMaintenanceService(fooStore); //use the injected field
}
}
The @Inject annotation on the fooStore field tells the extension loading mechanism that FooMaintenanceExtension depends on a FooStore and, because of that, any provider of a FooStore must be initialized before the FooMaintenanceExtension. Our FunkyFooStoreExtension from the previous chapter provides a FooStore.
2.2 Use @Requires to declare dependencies
In cases where defining a field seems unwieldy or is simply not desirable, we provide another way to dynamically resolve services from the context:
@Requires({ FooStore.class, /*maybe others*/ })
public class FooMaintenanceExtension implements ServiceExtension {
@Override
public void initialize(ServiceExtensionContext context) {
var fooStore = context.getService(FooStore.class);
var service = new FooMaintenanceService(fooStore); //use the resolved object
}
}
The @Requires
annotation is necessary to inform the service loader about the dependency. Failing to add it may
potentially result in a skewed initialization order, and in further consequence, in an EdcInjectionException
.
Both options are almost semantically equivalent, except for optional dependencies:
while @Inject(required=false)
allows for nullable dependencies, @Requires
has no such option and the service
dependency must be resolved by explicitly allowing it to be optional: context.getService(FooStore.class, true)
.
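As a small sketch combining both variants of optional resolution described above (the class names are hypothetical):
public class OptionalDependencyExtension implements ServiceExtension {

    @Inject(required = false)
    private FooStore fooStore; // stays null if no extension provides a FooStore

    @Override
    public void initialize(ServiceExtensionContext context) {
        // equivalent manual resolution: the second argument marks the dependency as optional
        var optionalStore = context.getService(FooStore.class, true);
        if (optionalStore == null) {
            context.getMonitor().info("No FooStore registered, continuing without persistence");
        }
    }
}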
3. Extension initialization sequence
The extension loading mechanism uses a two-pass procedure to resolve dependencies. First, all implementations of ServiceExtension are instantiated using their public default constructor and sorted using a topological sort algorithm based on their dependency graph. Cyclic dependencies would be reported in this stage.
Second, the extension is initialized by setting all fields annotated with @Inject
and by calling its initialize()
method. This implies that every extension can assume that by the time its initialize()
method executes, all its
dependencies are already registered with the context, because the extension(s) providing them were ordered at previous
positions in the list, and thus have already been initialized.
4. Testing extension classes
To test classes using the @Inject
annotation, use the appropriate JUnit extension @DependencyInjectionExtension
:
@ExtendWith(DependencyInjectionExtension.class)
class FooMaintenanceExtensionTest {
private final FooStore mockStore = mock();
@BeforeEach
void setUp(ServiceExtensionContext context) {
context.registerService(FooStore.class, mockStore);
}
@Test
void testInitialize(FooMaintenanceExtension extension, ServiceExtensionContext context) {
extension.initialize(context);
verify(mockStore).someMethodGotInvoked();
}
}
5. Advanced concepts: default providers
In this chapter we will use the term “default provider” and “default provider method” synonymously to refer to a method
annotated with @Provider(isDefault=true)
. Similarly, “provider”, “provider method” or “factory method” refer to
methods annotated with just @Provider
.
5.1 Fallbacks versus extensibility
Default provider methods are intended to provide fallback implementations for services rather than to achieve
extensibility - that is what extensions are for. There is a subtle but important semantic difference between fallback
implementations and extensibility:
5.2 Fallback implementations
Fallbacks are meant as a safety net, in case developers forget or don't want to add a specific implementation for a service. They exist so as not to end up without an implementation for a service interface. A good example of this are in-memory store implementations. It is expected that an actual persistence implementation is contributed by another extension. In-mem stores get you up and running quickly, but we wouldn't recommend using them in production environments. Typically, fallbacks should not have any dependencies on other services.
Default-provided services, even though they are on the classpath, only get instantiated if there is no other
implementation.
5.3 Extensibility
In contrast, extensibility refers to the possibility of swapping out one implementation of a service for another by
choosing the respective module at compile time. Each implementation must therefore be contained in its own java module,
and the choice between one or the other is made by referencing one or the other in the build file. The service
implementation is typically instantiated and provided by its own extension. In this case, the @Provider annotation must not have the isDefault attribute. This is also the case if there will likely only ever be one implementation for a service.
One example for extensibility is the IdentityService
: there could be several implementations for it (OAuth,
DecentralizedIdentity, Keycloak etc.), but providing either one as default would make little sense, because all of them
require external services to work. Each implementation would be in its own module and get instantiated by its own
extension.
Provided services get instantiated only if their module is on the classpath, but in that case they always get instantiated, regardless of whether another extension injects them.
5.4 Deep-dive into extension lifecycle management
Generally speaking every extension goes through these lifecycle stages during loading:
- inject: all fields annotated with @Inject are resolved
- initialize: the initialize() method is invoked. All required collaborators are expected to be resolved after this.
- provide: all @Provider methods are invoked, and the objects they return are registered in the context.
Due to the fact that default provider methods act as a safety net, they only get invoked if no other provider method offers the same service type. However, what may be a bit misleading is the fact that they typically get invoked during the inject phase. The following section will demonstrate this.
5.5 Example 1 - provider method
Recall that @Provider methods get invoked regardless, and after the initialize phase. That means, assuming both extensions are on the classpath, the extension that declares the provider method (= ExtensionA) will get fully instantiated before another extension (= ExtensionB) can use the provided object:
public class ExtensionA { // gets loaded first
@Inject
private SomeStore store; // provided by some other extension
@Provider
public SomeService getSomeService() {
return new SomeServiceImpl(store);
}
}
public class ExtensionB { // gets loaded second
@Inject
private SomeService service;
}
After building the dependency graph, the loader mechanism would first fully construct ExtensionA, i.e. getSomeService() is invoked, and the instance of SomeServiceImpl is registered in the context. Note that this is done regardless of whether another extension actually injects a SomeService. After that, ExtensionB gets constructed, and by the time it goes through its inject phase, the injected SomeService is already in the context, so the SomeService field gets resolved properly.
5.6 Example 2 - default provider method
Methods annotated with @Provider(isDefault=true)
only get invoked if there is no other provider method for that
service, and at the time when the corresponding @Inject
is resolved. Modifying example 1 slightly we get:
public class ExtensionA {
@Inject
private SomeStore store;
@Provider(isDefault = true)
public SomeService getSomeService() {
return new SomeServiceImpl(store);
}
}
public class ExtensionB {
@Inject
private SomeService service;
}
The biggest difference here is the point in time at which getSomeService is invoked. Default provider methods get invoked when the @Inject dependency is resolved, because that is the "latest" point in time at which that decision can be made. That means they get invoked during ExtensionB's inject phase, and not during ExtensionA's provide phase. There is no guarantee that ExtensionA is already initialized by that time, because the extension loader does not know whether it needs to invoke getSomeService at all until the very last moment, i.e. when resolving ExtensionB's service field. By that time, the dependency graph is already built.
Consequently, default provider methods could (and likely would) get invoked before the defining extension's provide phase has completed. They could even get invoked before the initialize phase has completed: consider the following situation in the previous example:
- all implementors of ServiceExtension get constructed by the Java ServiceLoader
- ExtensionB gets loaded and runs through its inject phase
- there is no provider for SomeService, thus the default provider kicks in
- ExtensionA.getSomeService() is invoked, but ExtensionA is not yet loaded -> store is null
- -> potential NPE
Because there is no explicit ordering in how the @Inject
fields are resolved, the order may depend on several factors,
like the Java version or specific JVM used, the classloader and/or implementation of reflection used, etc.
5.7 Usage guidelines when using default providers
From the previous sections and the examples demonstrated above we can derive a few important guidelines:
- do not use them to achieve extensibility. That is what extensions are for.
- use them only to provide a fallback implementation
- they should not depend on other injected fields (as those may still be null)
- they should be in their own dedicated extension (cf. DefaultServicesExtension) and Java module
- do not provide and inject the same service in one extension
- rule of thumb: unless you know exactly what you’re doing and why you need them - don’t use them!
6. Limitations
Only available in ServiceExtension
: services can only be injected into ServiceExtension
objects at this time as
they are the main hook points for plugins, and they have a clearly defined interface. All subsequent object creation
must be done manually using conventional mechanisms like constructors or builders.
No multiple registrations: registering two implementations for an interface will result in the first registration
being overwritten by the second registration. If both providers have the same topological ordering it is undefined
which comes first. A warning is posted to the Monitor
.
It was a conscious architectural decision to forego multiple service registrations for the sake of simplicity and clean design. Patterns like composites or delegators exist for those rare cases where having multiple implementors of the same interface is indeed needed. Those should be used sparingly and not without good reason.
No collection-based injection: Because there can be only ever one implementation for a service, it is not possible to
inject a collection of implementors as it is in other DI frameworks.
Field injection only: @Inject
can only target fields. For example
public SomeExtension(@Inject SomeService someService){ ... }
would not be possible.
No named dependencies: dependencies cannot be decorated with an identifier, which would technically allow for multiple
service registrations (using different tags). Technically this is linked to the limitation of single service
registrations.
Direct inheritors/implementors only: this is not due to a limitation of the dependency injection mechanism, but rather due to the way the context maintains service registrations: it simply maintains a Map containing interface class and implementation type.
Cyclic dependencies: cyclic dependencies are detected by the TopologicalSort algorithm.
No generic dependencies: @Inject private SomeInterface<SomeType> foobar
is not possible.
2.7.8 - Extension Model
1. Extension basics
Three things are needed to register an extension module with the EDC runtime:
- a class that implements
ServiceExtension
- a provider-configuration file
- adding the module to your runtime’s build file. EDC uses Gradle, so your runtime build file should contain
runtimeOnly(project(":module:path:of:your:extension"))
Extensions should not contain business logic or application code. Their main job is to
- read and handle configuration
- instantiate and register services with the service context (read more here)
- allocate and free resources, for example scheduled tasks (see the sketch below)
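For instance, a hedged sketch of an extension that allocates a scheduled task and frees it again on shutdown (it assumes that ServiceExtension offers a shutdown() callback in addition to initialize(); the task itself is purely illustrative):
public class HeartbeatExtension implements ServiceExtension {

    private ScheduledExecutorService executor;

    @Override
    public void initialize(ServiceExtensionContext context) {
        // allocate the resource when the runtime starts up
        executor = Executors.newSingleThreadScheduledExecutor();
        executor.scheduleAtFixedRate(() -> context.getMonitor().info("heartbeat"), 1, 1, TimeUnit.MINUTES);
    }

    @Override
    public void shutdown() {
        // free the resource when the runtime stops
        executor.shutdownNow();
    }
}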
2. Autodoc
EDC can automatically generate documentation about its extensions, about the settings used therein and about its extension points. This feature is available as a Gradle task:
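(The exact task name depends on the EDC build plugins; it is assumed here to be autodoc, so check your build for the actual name.)
./gradlew autodoc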
Upon execution, this task generates a JSON file located at build/edc.json
, which contains structural information about
the extension, for example:
Autodoc output in edc.json
[
{
"categories": [],
"extensions": [
{
"categories": [],
"provides": [
{
"service": "org.eclipse.edc.web.spi.WebService"
},
{
"service": "org.eclipse.edc.web.spi.validation.InterceptorFunctionRegistry"
}
],
"references": [
{
"service": "org.eclipse.edc.web.spi.WebServer",
"required": true
},
{
"service": "org.eclipse.edc.spi.types.TypeManager",
"required": true
}
],
"configuration": [
{
"key": "edc.web.rest.cors.methods",
"required": false,
"type": "string",
"description": "",
"defaultValue": "",
"deprecated": false
}
// other settings
],
"name": "JerseyExtension",
"type": "extension",
"overview": null,
"className": "org.eclipse.edc.web.jersey.JerseyExtension"
}
],
"extensionPoints": [],
"modulePath": "org.eclipse.edc:jersey-core",
"version": "0.8.2-SNAPSHOT",
"name": null
}
]
To achieve this, the EDC Runtime Metamodel defines several
annotations. These are not required for compilation, but they should be added to the appropriate classes and fields with
proper attributes to enable good documentation. For detailed information please read this chapter.
Note that @Provider
, @Inject
, @Provides
and @Requires
are used by Autodoc to resolve the dependency graph for
documentation, but they are also used by the runtime to resolve service dependencies. Read more about that
here.
3. Configuration and best practices
One important task of extensions is to read and handle configuration. For this, the ServiceExtensionContext interface provides the getConfig() group of methods.
Configuration values can be optional, i.e. they have a default value, or mandatory, i.e. they have no default value. Attempting to resolve a mandatory configuration value that was not specified will raise an EdcException.
EDC's configuration API can resolve configuration from three places, in this order (see the sketch after this list):
- from a ConfigurationExtension: this is a special extension class that provides a Config object. EDC ships with a file-system based config extension.
- from environment variables: edc.someconfig.someval would map to EDC_SOMECONFIG_SOMEVAL
- from Java Properties: can be passed in through CLI arguments, e.g. -Dedc.someconfig.someval=...
Best practices when handling configuration:
- resolve early, fail fast: configuration values should be resolved and validated as early as possible, i.e. in the extension's initialize() method
- don't pass the context: it is a code smell if the ServiceExtensionContext is passed into a service to resolve config
- annotate: every setting should have a @Setting annotation
- no magic defaults: default values should be declared as constants in the extension class and documented in the @Setting annotation
- no secrets: configuration is the wrong place to store secrets
- naming convention: every config value should start with edc.
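To illustrate the "annotate" and "no magic defaults" guidelines, here is a hedged sketch of how a setting could be declared inside an extension class (the key, constant names, the exact @Setting attributes and the resolution method may differ between EDC versions):
// inside YourExtension.java
public static final int DEFAULT_BATCH_SIZE = 20; // default declared as a constant, no magic values

@Setting // annotation from the EDC runtime metamodel; used by autodoc
public static final String BATCH_SIZE_SETTING = "edc.yourextension.batch.size";

@Override
public void initialize(ServiceExtensionContext context) {
    // resolve early, fail fast: read and validate the value in initialize()
    var batchSize = context.getSetting(BATCH_SIZE_SETTING, DEFAULT_BATCH_SIZE);
}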
2.8 - Data Plane
2.8.1 - Extensions
The EDC Data Plane is the component responsible for transmitting data using a wire protocol and can be easily extended using the Data Plane Framework (DPF) to support different protocols and transfer types.
The main component of an EDC data plane is the DataPlaneManager.
1. The DataPlaneManager
The DataPlaneManager manages the execution of data plane requests, using the EDC State Machine pattern for tracking the state of data transmissions.
It receives DataFlowStartMessages from the Control Plane through the data plane signaling protocol if it is deployed as a standalone process, or directly via method call when it is embedded in the same process.
The DataPlaneManager
supports two flow types:
1.1 Consumer PULL Flow
When the flow type of the DataFlowStartMessage
is PULL
the DataPlaneManager
delegates the creation of the DataAddress
to the DataPlaneAuthorizationService
, and then returns it to the ControlPlane as part of the response to a DataFlowStartMessage
.
1.2 Provider PUSH Flow
When the flow type is PUSH
, the data transmission is handled by the DPF using the information contained in the DataFlowStartMessage
such as sourceDataAddress
and destinationDataAddress
.
2. The Data Plane Framework
The DPF consists of a set of SPIs and default implementations for transferring data from a sourceDataAddress to a destinationDataAddress. It has built-in support for end-to-end streaming transfers using the PipelineService and it comes with a more generic TransferService that can be extended to satisfy more specialized or optimized transfer cases.
Each TransferService is registered in the TransferServiceRegistry, which the DataPlaneManager uses for validating and initiating a data transfer from a DataFlowStartMessage.
2.1 TransferService
Given a DataFlowStartMessage
, an implementation of a TransferService
can transfer data from a sourceDataAddress
to a destinationDataAddress
.
The TransferService
does not specify how the transfer should happen. It can be processed internally in the data plane or it could delegate out to external (and more specialized) systems.
Relevant methods of the TransferService
are:
public interface TransferService {
boolean canHandle(DataFlowStartMessage request);
Result<Boolean> validate(DataFlowStartMessage request);
CompletableFuture<StreamResult<Object>> transfer(DataFlowStartMessage request);
}
The canHandle method expresses whether the TransferService implementation is able to fulfill the transfer request expressed in the DataFlowStartMessage.
The validate method performs a validation on the content of a DataFlowStartMessage.
The transfer method triggers a data transfer from a sourceDataAddress to a destinationDataAddress.
An implementation of a TransferService
bundled with the DPF is the PipelineService.
2.2 PipelineService
The PipelineService is an extension of TransferService that leverages an internal data plane transfer mechanism. It supports end-to-end streaming by connecting a DataSink (output) and a DataSource (input).
DataSink and DataSource are created for each data transfer using the DataSinkFactory and DataSourceFactory from the DataFlowStartMessage. Custom source and sink factories should be registered in the PipelineService to add support for different data source and sink types (e.g. S3, HTTP, Kafka).
public interface PipelineService extends TransferService {
void registerFactory(DataSourceFactory factory);
void registerFactory(DataSinkFactory factory);
}
When the PipelineService receives a transfer request, it identifies which DataSourceFactory and DataSinkFactory can satisfy a DataFlowStartMessage, then it creates their respective DataSource and DataSink and ultimately initiates the transfer by calling DataSink#transfer(DataSource).
EDC supports out of the box (with specialized extensions) a variety of data source and sink types like S3, HTTP, Kafka and AzureStorage, but it can be easily extended with new types.
3. Writing custom Source/Sink
The PipelineService is the entry point for adding new source and sink types to a data plane runtime.
We will see how to write a custom data source, a custom data sink, and how to trigger a transfer leveraging those new types.
As an example, we will write a custom source type based on the filesystem and a sink type based on SMTP.
Note: these custom extensions are just examples for didactic purposes.
As always when extending the EDC, the starting point is to create an extension:
public class MyDataPlaneExtension implements ServiceExtension {
@Inject
PipelineService pipelineService;
@Override
public void initialize(ServiceExtensionContext context) {
}
}
where we inject the PipelineService.
The extension module should include data-plane-spi as a dependency.
3.1 Custom DataSource
For simplicity, the filesystem-based DataSource will only support transferring a single file, not folders.
Here's what an implementation of FileDataSource might look like:
public class FileDataSource implements DataSource {
private final File sourceFile;
public FileDataSource(File sourceFile) {
this.sourceFile = sourceFile;
}
@Override
public StreamResult<Stream<Part>> openPartStream() {
return StreamResult.success(Stream.of(new FileStreamPart(sourceFile)));
}
@Override
public void close() {
}
private record FileStreamPart(File file) implements Part {
@Override
public String name() {
return file.getName();
}
@Override
public InputStream openStream() {
try {
return new FileInputStream(file);
} catch (FileNotFoundException e) {
throw new RuntimeException(e);
}
}
}
}
The relevant method is openPartStream, which is called to connect the source and sink. openPartStream returns a Stream of Part objects, since a DataSource can be composed of more than one part (e.g. folders, files, etc.). openPartStream does not actually open a Java InputStream; it only returns a stream of Parts.
Transforming a Part into an InputStream is the main task of the DataSource implementation. In our case, FileStreamPart#openStream simply returns a FileInputStream for the input File.
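To make the relationship between Part and InputStream concrete, the following snippet is a purely illustrative consumer of the FileDataSource defined above; the file path is a made-up example and error handling is kept minimal:
// Illustrative only: draining the parts produced by the FileDataSource above.
var source = new FileDataSource(new File("/tmp/report.csv")); // example path
var streamResult = source.openPartStream();
if (!streamResult.failed()) {
    streamResult.getContent().forEach(part -> {
        try (var in = part.openStream()) {
            System.out.printf("part %s contains %d bytes%n", part.name(), in.readAllBytes().length);
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    });
}
source.close();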
We now have a DataSource that can be used to transfer the content of a file. The only missing piece is how to create a DataSource for a transfer request.
This is achieved by implementing a DataSourceFactory that creates a FileDataSource from a DataFlowStartMessage:
public class FileDataSourceFactory implements DataSourceFactory {
@Override
public String supportedType() {
return "File";
}
@Override
public DataSource createSource(DataFlowStartMessage request) {
return new FileDataSource(getFile(request).orElseThrow(RuntimeException::new));
}
@Override
public @NotNull Result<Void> validateRequest(DataFlowStartMessage request) {
return getFile(request)
.map(it -> Result.success())
.orElseGet(() -> Result.failure("sourceFile is not found or it does not exist"));
}
private Optional<File> getFile(DataFlowStartMessage request) {
return Optional.ofNullable(request.getSourceDataAddress().getStringProperty("sourceFile"))
.map(File::new)
.filter(File::exists)
.filter(File::isFile);
}
}
In our implementation, the supportedType method expresses that the sourceDataAddress must be of type File, and the validateRequest method checks that it contains a sourceFile property holding the path of the file to be transferred.
The FileDataSourceFactory then needs to be registered in the PipelineService:
public class MyDataPlaneExtension implements ServiceExtension {
@Inject
PipelineService pipelineService;
@Override
public void initialize(ServiceExtensionContext context) {
pipelineService.registerFactory(new FileDataSourceFactory());
}
}
3.2 Custom DataSink
For the DataSink we will sketch an SMTP-based implementation using the javamail API.
The implementation sends the Parts of the input DataSource as email attachments to a recipient.
The MailDataSink may look like this:
public class MailDataSink implements DataSink {
private final Session session;
private final String recipient;
private final String sender;
private final String subject;
public MailDataSink(Session session, String recipient, String sender, String subject) {
this.session = session;
this.recipient = recipient;
this.sender = sender;
this.subject = subject;
}
@Override
public CompletableFuture<StreamResult<Object>> transfer(DataSource source) {
var msg = new MimeMessage(session);
try {
msg.setSentDate(new Date());
msg.setRecipients(Message.RecipientType.TO, recipient);
msg.setSubject(subject, "UTF-8");
msg.setFrom(sender);
var streamResult = source.openPartStream();
if (streamResult.failed()) {
return CompletableFuture.failedFuture(new EdcException(streamResult.getFailureDetail()));
}
var multipart = new MimeMultipart();
streamResult.getContent()
.map(this::createBodyPart)
.forEach(part -> {
try {
multipart.addBodyPart(part);
} catch (MessagingException e) {
throw new EdcException(e);
}
});
msg.setContent(multipart);
Transport.send(msg);
return CompletableFuture.completedFuture(StreamResult.success());
} catch (Exception e) {
return CompletableFuture.failedFuture(e);
}
}
private BodyPart createBodyPart(DataSource.Part part) {
try {
var messageBodyPart = new MimeBodyPart();
messageBodyPart.setFileName(part.name());
var source = new ByteArrayDataSource(part.openStream(), part.mediaType());
messageBodyPart.setDataHandler(new DataHandler(source));
return messageBodyPart;
} catch (Exception e) {
throw new EdcException(e);
}
}
}
The MailDataSink receives a DataSource as input to its transfer method. After setting up the MimeMessage with the recipient, sender, and subject, the code maps each DataSource.Part into a BodyPart (attachment), using Part#name as the name of each attachment.
The message is finally delivered using the Transport API.
Note that this is not true streaming, since javamail buffers the InputStream when using the ByteArrayDataSource.
To make the MailDataSink available as a sink type, an implementation of the DataSinkFactory is required:
public class MailDataSinkFactory implements DataSinkFactory {
private final Session session;
private final String sender;
public MailDataSinkFactory(Session session, String sender) {
this.session = session;
this.sender = sender;
}
@Override
public String supportedType() {
return "Mail";
}
@Override
public DataSink createSink(DataFlowStartMessage request) {
var recipient = getRecipient(request);
var subject = "File transfer %s".formatted(request.getProcessId());
return new MailDataSink(session, recipient, sender, subject);
}
@Override
public @NotNull Result<Void> validateRequest(DataFlowStartMessage request) {
return Optional.ofNullable(getRecipient(request))
.map(it -> Result.success())
.orElseGet(() -> Result.failure("Missing recipient"));
}
private String getRecipient(DataFlowStartMessage request) {
var destination = request.getDestinationDataAddress();
return destination.getStringProperty("recipient");
}
}
The MailDataSinkFactory declares the supported type (Mail) and implements validation and creation of the DataSink based on the destinationDataAddress in the DataFlowStartMessage.
The validation phase only expects a recipient property in the destination DataAddress.
Finally, the MailDataSinkFactory needs to be registered in the PipelineService:
public class MyDataPlaneExtension implements ServiceExtension {
@Inject
PipelineService pipelineService;
@Override
public void initialize(ServiceExtensionContext context) {
pipelineService.registerFactory(new FileDataSourceFactory());
var sender = context.getSetting("edc.samples.mail.sender", "no-reply@example.com"); // setting key and default are illustrative
pipelineService.registerFactory(new MailDataSinkFactory(getSession(context), sender));
}
private Session getSession(ServiceExtensionContext context) {
// configure the javamail Session; the SMTP host setting key below is illustrative
var props = new Properties();
props.put("mail.smtp.host", context.getSetting("edc.samples.mail.smtp.host", "localhost"));
return Session.getInstance(props);
}
}
3.3 Executing the transfer
With the MyDataPlaneExtension loaded in the provider data plane, the runtime now has a new filesystem-based source type and a Mail sink type, so we can complete a File -> Mail transfer.
On the provider side we can create an Asset like this:
{
"@context": { "@vocab": "https://w3id.org/edc/v0.0.1/ns/" },
"@id": "file-asset",
"properties": {
},
"dataAddress": {
"type": "File",
"sourceFile": "{{filePath}}"
}
}
The Asset should then be advertised in the catalog.
When a consumer fetches the provider’s catalog and the access policy conditions are met, it should see the Dataset with a new distribution available:
{
"@type": "dcat:Distribution",
"dct:format": {
"@id": "Mail-PUSH"
},
"dcat:accessService": {
"@id": "ef9494bb-7000-4bae-9770-6567f451dba5",
"@type": "dcat:DataService",
"dcat:endpointDescription": "dspace:connector",
"dcat:endpointUrl": "http://localhost:18182/protocol",
"dct:terms": "dspace:connector",
"dct:endpointUrl": "http://localhost:18182/protocol"
}
}
which indicates that the Dataset is also available with the format Mail-PUSH.
Once a contract agreement is reached between the parties, a consumer may send a transfer request:
{
"@context": {
"@vocab": "https://w3id.org/edc/v0.0.1/ns/"
},
"@type": "TransferRequest",
"dataDestination": {
"type": "Mail",
"recipient": "{{recipientEmail}}"
},
"protocol": "dataspace-protocol-http",
"contractId": "{{agreementId}}",
"connectorId": "provider",
"counterPartyAddress": "http://localhost:18182/protocol",
"transferType": "Mail-PUSH"
}
This transfer will deliver the Dataset as attachments to the recipient email address.
2.8.2 - Data Plane Signaling interface
Data plane signaling (DPS) defines a set of API endpoints and message types used for communication between a control plane and a data plane to control data flows.
1. DataAddress and EndpointDataReference
When the control plane signals the data plane to start a client pull transfer process, the data plane returns a DataAddress. This is only true for consumer-pull transfers; provider-push transfers do not return a DataAddress. This DataAddress contains information the client can use to resolve the provider’s data plane endpoint. It also contains an access token (cf. authorization).
This DataAddress is returned by the provider control plane to the consumer in a TransferProcessStarted DSP message. Its purpose is to inform the consumer where they can obtain the data and which authorization token to use.
The EndpointDataReference is a data structure used on the consumer side that contains all the relevant information of the DataAddress plus some additional information associated with the transfer, such as the asset ID and contract ID. Note that this is only the case if the consumer is implemented using EDC.
A transfer process may be STARTED multiple times (e.g., after it is temporarily SUSPENDED), and the consumer may receive a different DataAddress object as part of each start message. The consumer must always create a new EDR from these messages and remove the previous EDR. Data plane implementations may choose to pass the same DataAddress or an updated one.
This start signaling pattern can be used to change a data plane’s endpoint address, for example, after a software
upgrade, or a load balancer switch-over.
2. Signaling protocol messages and API endpoints
All requests support idempotent behavior. Data planes must therefore perform request de-duplication. After a data plane
commits a request, it will return an ack to the control plane, which will transition the TransferProcess
to its next
state (e.g., STARTED
, SUSPENDED
, TERMINATED
). If a successful ack is not received, the control plane will resend
the request during a subsequent tick period.
2.1 START
During the transfer process STARTING
phase, the provider control plane selects a data plane using the
DataFlowController
implementations it has available, which will then send a DataFlowStartMessage
to the data plane.
The control plane (i.e. the DataFlowController) records which data plane was selected for the transfer process so that it can properly route subsequent start, stop, and terminate requests.
For client pull transfers, the data plane returns a DataAddress
with an access token.
If the data flow was previously SUSPENDED
, the data plane may elect to return the same DataAddress
or create a new
one.
The provider control plane sends a DataFlowStartMessage
to the provider data plane:
POST https://dataplane-host:port/api/signaling/v1/dataflows
Content-Type: application/json
{
"@context": { "@vocab": "https://w3id.org/edc/v0.0.1/ns/" },
"@id": "transfer-id",
"@type": "DataFlowStartMessage",
"processId": "process-id",
"datasetId": "dataset-id",
"participantId": "participant-id",
"agreementId": "agreement-id",
"transferType": "HttpData-PULL",
"sourceDataAddress": {
"type": "HttpData",
"baseUrl": "https://jsonplaceholder.typicode.com/todos"
},
"destinationDataAddress": {
"type": "HttpData",
"baseUrl": "https://jsonplaceholder.typicode.com/todos"
},
"callbackAddress" : "http://control-plane",
"properties": {
"key": "value"
}
}
The data plane responds with a DataFlowResponseMessage, which contains the public endpoint, the authorization token, and possibly other information in the form of a DataAddress. For more information about how access tokens are generated, please refer to this chapter.
2.2 SUSPEND
During the transfer process SUSPENDING
phase, the DataFlowController
will send a DataFlowSuspendMessage
to the
data plane. The data plane will transition the data flow to the SUSPENDED
state and invalidate the associated access
token.
POST https://dataplane-host:port/api/signaling/v1/dataflows
Content-Type: application/json
{
"@context": { "@vocab": "https://w3id.org/edc/v0.0.1/ns/" },
"@type": "DataFlowSuspendMessage",
"reason": "reason"
}
2.3 TERMINATE
During the transfer process TERMINATING
phase, the DataFlowController
will send a DataFlowTerminateMessage
to the
data plane. The data plane will transition the data flow to the TERMINATED
state and invalidate the associated access
token.
POST https://dataplane-host:port/api/signaling/v1/dataflows
Content-Type: application/json
{
"@context": { "@vocab": "https://w3id.org/edc/v0.0.1/ns/" },
"@type": "DataFlowTerminateMessage",
"reason": "reason"
}
3. Data plane public API
One popular use case for data transmission is where the provider organization exposes a REST API from which consumers can download data. We call this an “Http-PULL” transfer. It is especially useful for structured data, such as JSON, and it can even be used to model streaming data.
To achieve that, the provider data plane can expose a “public API” that takes REST requests and satisfies them by pulling data out of a DataSource, which it obtains by verifying and parsing the Authorization token (see this chapter for details).
3.1 Endpoints and endpoint resolution
3.2 Public API Access Control
The design of the EDC Data Plane Framework is based on non-renewable access tokens. One access token will be
maintained for the period a transfer process is in the STARTED
state. This duration may be a single request or a
series of requests spanning an indefinite period of time (“streaming”).
Other data plane implementations may choose to support renewable tokens. Token renewal is often used as a strategy for controlling access duration and mitigating leaked tokens. The EDC implementation handles access duration and mitigates leaked tokens in the following ways.
3.2.1 Access Duration
Access duration is controlled by the transfer process and contract agreement, not the token. If a transfer process is moved from the STARTED to the SUSPENDED, TERMINATED, or COMPLETED state, the access token is no longer valid. Similarly, if a contract agreement is violated or otherwise invalidated, a cascade operation will terminate all associated transfer processes.
To achieve that, the data plane maintains a list of currently active/valid tokens.
3.2.2 Leaked Access Tokens
If an access token is leaked or otherwise compromised, its associated transfer process is placed in the TERMINATED
state and a new one is started. In order to mitigate the possibility of ongoing data access when a leak is not
discovered, a data plane may implement token renewal. Limited-duration contract agreements and transfer processes may
also be used. For example, a transfer process could be terminated after a period of time by the provider and the
consumer can initiate a new process before or after that period.
3.2.3 Access Token Generation
When the DataPlaneManager receives a DataFlowStartMessage to start the data transmission, it uses the DataPlaneAuthorizationService to generate an access token (in JWT format) and a DataAddress that contains the following information:
- endpoint: the URL of the public API
- endpointType: should be https://w3id.org/idsa/v4.1/HTTP for HTTP pull transfers
- authorization: the newly generated access token
DataAddress with access token
{
"dspace:dataAddress": {
"@type": "dspace:DataAddress",
"dspace:endpointType": "https://w3id.org/idsa/v4.1/HTTP",
"dspace:endpoint": "http://example.com",
"dspace:endpointProperties": [
{
"@type": "dspace:EndpointProperty",
"dspace:name": "https://w3id.org/edc/v0.0.1/ns/authorization",
"dspace:value": "token"
},
{
"@type": "dspace:EndpointProperty",
"dspace:name": "https://w3id.org/edc/v0.0.1/ns/authType",
"dspace:value": "bearer"
}
]
}
}
This DataAddress
is returned in the DataFlowResponse
as mentioned here. With that alone, the data plane
would not be able to determine token revocation or invalidation, so it must also record the access token.
To that end, the EDC data plane stores an AccessTokenData
object that contains the token, the source DataAddress
and
some information about the bearer of the token, specifically:
- agreement ID
- asset ID
- transfer process ID
- flow type (push or pull)
- participant ID (of the consumer)
- transfer type (see later sections for details)
The token creation flow is illustrated by the following sequence diagram:
3.2.4 Access Token Validation and Revocation
When the consumer executes a REST request against the provider data plane’s public API, it must send the previously
received access token (inside the DataAddress
) in the Authorization
header.
The data plane then attempts to resolve the AccessTokenData
object associated with that token and checks that the
token is valid.
The authorization flow is illustrated by the following sequence diagram:
A default implementation is provided that always returns true. Extensions can supply alternative implementations that perform use-case-specific authorization checks.
Please note that a DataPlaneAccessControlService implementation must handle all request types (including transport types) in a data plane runtime. If multiple access check implementations are required, creating a multiplexer or individual data plane runtimes is recommended.
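For illustration, a use-case-specific access check could look roughly like the sketch below. The exact DataPlaneAccessControlService method signature and its registration mechanism should be verified against the data-plane SPI of your EDC version; everything here is an assumption:
// Hedged sketch of a custom access control check; the checkAccess signature is an assumption.
public class SourceTypeAccessControlService implements DataPlaneAccessControlService {

    @Override
    public Result<Void> checkAccess(ClaimToken claimToken, DataAddress address,
                                    Map<String, Object> requestData, Map<String, String> additionalData) {
        // example rule (purely illustrative): only allow access to HttpData sources
        return "HttpData".equals(address.getType())
                ? Result.success()
                : Result.failure("source type not allowed");
    }
}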
Note that in EDC, the access control check (step 8) always returns true
!
In order to revoke the token with immediate effect, it is enough to delete the AccessTokenData
object from the
database. This is done using the DataPlaneAuthorizationService
as well.
3.3 Token expiry and renewal
EDC does not currently implement token expiry and renewal, so this section is intended for developers who wish to
provide a custom data plane.
The recommended way to implement token renewal is to create an extension that exposes a refresh endpoint that consumers can use. The URL of this refresh endpoint could be encoded in the original DataAddress in the dspace:endpointProperties field.
In any case, this will be a dataspace-specific solution, so administrative steps are
required to achieve interoperability.
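As a rough illustration, such a dataspace-specific refresh endpoint could be exposed as a JAX-RS resource similar to the sketch below; the path, class name, response shape, and refresh logic are assumptions for illustration, not part of EDC:
// Hypothetical sketch of a dataspace-specific token refresh endpoint (not an EDC API).
@Path("/token")
@Produces(MediaType.APPLICATION_JSON)
public class TokenRefreshController {

    @POST
    @Path("/refresh")
    public Response refresh(@HeaderParam("Authorization") String currentToken) {
        // 1. validate the presented token against the stored AccessTokenData
        // 2. issue a new token and update the stored AccessTokenData entry
        // 3. return the new token to the consumer
        return Response.ok(Map.of("authorization", "<new-token>")).build();
    }
}
Such a controller would typically be registered with the runtime's web service in an extension's initialize method, and its URL advertised via dspace:endpointProperties as described above.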
4. Data plane registration
The life cycle of a data plane is decoupled from the life cycle of a control plane. That means, they could be started,
paused/resumed and stopped at different points in time. In clustered deployments, this is very likely the default
situation. With this, it is also possible to add or remove individual data planes anytime.
When data planes come online, they register with the control plane using the DataPlaneSelectorControlApi
. Each
dataplane sends a DataPlaneInstance
object that contains information about its supported transfer types, supported
source types, URL, the data plane’s component ID
and other properties.
From then on, the control plane sends periodic heart-beats to the dataplane.
5. Data plane selection
During data plane self-registration, the control plane builds a list of DataPlaneInstance objects, each of which represents one (logical) data plane component. Note that these are logical instances: replicated runtimes still count as a single instance.
In a periodic task the control plane engages a state machine DataPlaneSelectorManager
to manage the state of each
registered data plane. To do that, it simply sends a REST request to the /v1/dataflows/check
endpoint of the data
plane. If that returns successfully, the dataplane is still up and running.
If not, the control plane will consider the data plane as “unavailable”.
In addition to availability, the control plane also records the capabilities of each data plane, i.e. which source data types and transfer types are supported. Each data plane must declare where it can transfer data from (source type, e.g. AmazonS3) and how it can transfer data (transfer type, e.g. Http-PULL).
5.1 Building the catalog
Data plane selection directly influences the contents of the catalog. For example, suppose a particular provider can transmit an asset either via HTTP (pull) or via S3 (push); each of these variants would then be represented in the catalog as an individual Distribution.
When building the catalog, the control plane checks for each Asset whether the Asset.dataAddress.type field is contained in the list of allowedTransferTypes of each DataPlaneInstance.
In the example above, at least one data plane has to have Http-PULL in its allowedTransferTypes, and at least one has to have AmazonS3-PUSH. Note that one data plane could have both entries.
5.2 Fulfilling data requests
When a START
message is sent from the control plane to the data plane via the Signaling API, the data plane first
checks whether it can fulfill the request. If multiple data planes can fulfill the request, the selectionStrategy
is
employed to determine the actual data plane.
This check is necessary because a START message could contain a transfer type that is not supported by any of the data planes, or all data planes that could fulfill the request could be unavailable.
This algorithm is called data plane selection.
Selection strategies can be added via extensions, using the SelectionStrategyRegistry. By default, a data plane is selected at random.
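As a hedged sketch, a custom strategy could look something like the following; the exact SelectionStrategy method signatures should be checked against the data plane selector SPI, so the shapes below are assumptions:
// Hedged sketch of a custom selection strategy; method signatures are assumptions.
public class PreferInternalUrlStrategy implements SelectionStrategy {

    @Override
    public String getName() {
        return "preferInternalUrl"; // illustrative strategy name
    }

    @Override
    public DataPlaneInstance apply(List<DataPlaneInstance> instances) {
        // purely illustrative criterion: prefer data planes reachable via an internal host name
        return instances.stream()
                .filter(instance -> instance.getUrl().getHost().endsWith(".internal"))
                .findFirst()
                .orElse(instances.get(0));
    }
}
The strategy would then be registered with the SelectionStrategyRegistry in an extension's initialize method so it can be referenced during selection.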
2.8.3 - Custom Data Plane
When the data plane is not embedded, EDC uses the Data Plane Signaling protocol (DPS) for communication between the control plane and the data plane. In this chapter we will see how to use DPS to write a custom data plane from scratch.
For example purposes, this chapter contains JS snippets that use express as the web framework.
Since it is only for educational purposes, the code is not intended to be complete; proper error handling and JSON-LD processing are not implemented.
Our simple data plane setup looks like this:
const express = require('express')
const app = express()
const port = 3000
app.use(express.json());
app.use((req, res, next) => {
console.log(req.method, req.hostname, req.path, new Date(Date.now()).toString());
next();
})
app.listen(port, () => {
console.log(`Data plane listening on port ${port}`)
})
It’s a basic express
application that listens on port 3000
and logs every request with a basic middleware.
1. The Registration Phase
First we need to register our custom data plane with the EDC control plane.
Using the internal Dataplane Selector API available under the control context of EDC, we can send a registration request:
POST https://controlplane-host:port/api/control/v1/dataplanes
Content-Type: application/json
{
"@context": {
"edc": "https://w3id.org/edc/v0.0.1/ns/"
},
"@type": "DataPlaneInstance",
"@id": "custom_dataplane",
"url": "http://custom-dataplane-host:3000/dataflows",
"allowedSourceTypes": [
"HttpData"
],
"allowedTransferTypes": [
"HttpData-PULL",
"HttpData-PUSH"
]
}
It is up to implementors to decide when the data plane gets registered. This may be a manual operation or automated as part of a process routine.
The @id is the data plane’s component ID, which identifies a logical data plane component.
The url is the location at which the data plane will receive protocol messages.
allowedSourceTypes is an array of supported source types, in this case only HttpData.
allowedTransferTypes is an array of supported transfer types. When using the DPS, the transfer type is by convention a string of the format <label>-{PULL,PUSH}, which carries the flow type, push or pull. By default in EDC the label always corresponds to a source/sink type (e.g. HttpData), but it can be customized per data plane implementation.
With this configuration we declare that our data plane is able to transfer data using the HTTP protocol in push and pull mode.
The lifecycle of a data plane instance is managed by the DataPlaneSelectorManager component, implemented as a state machine. A data plane instance is in the REGISTERED state when created/updated. Then, for each data plane, a periodic heartbeat is sent to check whether it is still running.
If the data plane responds successfully, the state transitions to AVAILABLE. As soon as the data plane does not respond or returns an unsuccessful response, the state transitions to UNAVAILABLE.
Let’s implement a route method for GET /dataflows/check
in our custom data plane:
app.get('/dataflows/check', (req, res) => {
res.send('{}')
})
Only the response code matters; the response body is ignored on the EDC side.
Once the data plane is started and registered, we should see entries like this in the logs:
GET localhost /dataflows/check Fri Aug 30 2024 18:01:56 GMT+0200 (Central European Summer Time)
And the status of our data plane is AVAILABLE.
2. Handling DPS messages
When a transfer process is ready to be started by the Control Plane, the DataPlaneSignalingFlowController is engaged to handle the transfer request. The DPS flow controller uses the DataPlaneSelectorService to select the right data plane instance based on its capabilities; once selected, it sends a DataFlowStartMessage that our custom data plane should be able to process.
The AVAILABLE state is a prerequisite for a data plane instance to be a candidate in the selection process.
The ID of the selected data plane is stored in the transfer process entity so that subsequent messages in the lifecycle of the transfer process (e.g. SUSPEND and TERMINATE) can be delivered to it.
2.1 START
If our data plane fulfills the data plane selection criteria, it should be ready to handle DataFlowStartMessage
at the endpoint /dataflows
:
app.post('/dataflows', async (req, res) => {
let { flowType } = req.body;
if (flowType === 'PUSH') {
await handlePush(req,res);
} else if (flowType === 'PULL') {
await handlePull(req,res);
} else {
res.status(400);
res.send(`Flow type ${flowType} not supported`)
}
});
We split the handling of the transfer request into handlePush and handlePull functions that handle the PUSH and PULL flow types.
The format of the sourceDataAddress
and destinationDataAddress
is aligned with the DSP specification.
2.1.1 PUSH
Our custom data plane should be able to transfer data (PUSH
) from an HttpData
source (sourceDataAddress
) to an HttpData
sink (destinationDataAddress
).
The sourceDataAddress
is the DataAddress
configured in the Asset
and may look like this in our case:
{
"@context": {
"@vocab": "https://w3id.org/edc/v0.0.1/ns/"
},
"@id": "asset-1",
"@type": "Asset",
"dataAddress": {
"@type": "DataAddress",
"type": "HttpData",
"baseUrl": "https://jsonplaceholder.typicode.com/todos"
}
}
The destinationDataAddress is derived from the dataDestination in the TransferRequest and may look like this:
{
"@context": {
"@vocab": "https://w3id.org/edc/v0.0.1/ns/"
},
"counterPartyAddress": "{{PROVIDER_DSP}}/api/dsp",
"connectorId": "{{PROVIDER_ID}}",
"contractId": "{{CONTRACT_ID}}",
"dataDestination": {
"type": "HttpData",
"baseUrl": "{{RECEIVER_URL}}"
},
"protocol": "dataspace-protocol-http",
"transferType": "HttpData-PUSH"
}
The simplest handlePush
function would need to fetch data from the source baseUrl
and send the result to the sink baseUrl
.
A naive implementation may look like this:
async function handlePush(req, res) {
res.send({
"@context": {
"edc": "https://w3id.org/edc/v0.0.1/ns/"
},
"@type": "DataFlowResponseMessage"
});
const { sourceDataAddress, destinationDataAddress } = req.body;
const sourceUrl = getBaseUrl(sourceDataAddress);
const destinationUrl = getBaseUrl(destinationDataAddress);
const response = await fetch(sourceUrl);
await fetch(destinationUrl, {
"method": "POST",
body : await response.text()
});
}
First we acknowledge the Control Plane by sending a DataFlowResponseMessage
as response.
Then we transfer the data from sourceUrl
to destinationUrl
.
getBaseUrl is a utility function that extracts the baseUrl from the DataAddress.
Implementors should keep track of DataFlowStartMessage
s in some persistent storage system in order to fulfill subsequent DPS
messages on the same transfer id (e.g. SUSPEND and TERMINATE).
For example in the streaming case, implementors may track the opened streaming channels, which could be terminated on-demand or by the policy monitor.
2.1.2 PULL
When receiving a DataFlowStartMessage
in a PULL
scenario there is no direct transfer to be handled by the data plane. Based on the sourceDataAddress
in the DataFlowStartMessage
a custom data plane implementation should create another DataAddress
containing all the information required for the data transfer:
async function handlePull(req, res) {
const { sourceDataAddress } = req.body;
const { dataAddress } = await generateDataAddress(sourceDataAddress);
const response = {
"@context": {
"edc": "https://w3id.org/edc/v0.0.1/ns/"
},
"@type": "DataFlowResponseMessage",
"dataAddress": dataAddress
};
res.send(response);
}
We will not implement the generateDataAddress function, as it may vary depending on the use case. At a high level, generateDataAddress should produce a DataAddress in DSP format that contains the information the consumer needs to fetch the data: endpoint, endpointType, and custom extensible properties in endpointProperties.
For example, the default EDC generates a DataAddress that also contains authorization information, such as the auth token to be used when requesting data via the Data Plane public API and the token type (e.g. bearer).
Implementors may also want to track PULL requests in persistent storage, which can be useful in scenarios like token revocation or transfer process termination.
How the actual data request is handled depends on the implementation of the custom data plane. It could be done the same way as in the EDC data plane, which exposes an endpoint that validates the authorization and proxies the request to the sourceDataAddress.
The DPS gives enough flexibility to implement different strategies for different use cases.
2.2 SUSPEND and TERMINATE
A DPS-compliant data plane implementation should also support SUSPEND and TERMINATE messages.
If implementors are keeping track of started transfers (STARTED), those messages are useful for closing data channels and cleaning up I/O resources.
2.9 - Custom validation framework
The validation framework hooks into the normal Jetty/Jersey request dispatch mechanism and is designed to allow users to
intercept the request chain to perform additional validation tasks. In its current form it is intended for intercepting
REST requests. Users can elect any validation framework they desire, such as jakarta.validation
or
the Apache Commons Validator, or they can implement one
themselves.
When to use it
This feature is intended for use cases where the standard DTO validation that ships with EDC’s APIs is not sufficient.
Please check out the OpenAPI spec to find out more about the object schema.
EDC features various data types that do not have a strict schema but are extensible, for example Asset
/AssetDto
,
or a DataRequest
/DataRequestDto
. This was done by design, to allow for maximum flexibility and openness. However, users may still want to impose a more rigid schema on top of those data types; for example, a use case may require an Asset to always have an owner property, or may require a contentType to be present. The standard EDC validation scheme has no way of enforcing that, so this is where custom validation enters the playing field.
Building blocks
There are two important components necessary for custom validation:
- the InterceptorFunction: a function that accepts the intercepted method’s parameters as an argument (as Object[]) and returns a Result<Void> to indicate validation success. It must not throw an exception, or dispatch to the target resource is not guaranteed.
- the ValidationFunctionRegistry: all InterceptorFunctions must be registered there, using one of three registration methods (see below).
Custom validation works by supplying an InterceptorFunction
to the ValidationFunctionRegistry
in one of the
following ways:
bound to a resource-method: here, we register the InterceptorFunction
to any of a controller’s methods. That means,
we need compile-time access to the controller class, because we use reflection to obtain the Method
:
var method = YourController.class.getDeclaredMethod("theMethod" /*, parameter types */);
var yourFunction = objects -> Result.success(); // your validation logic goes here
registry.addFunction(method, yourFunction);
Consequently yourFunction
will get invoked before YourController#theMethod
is invoked by the request dispatcher.
Note that there is currently no way to bind an InterceptorFunction
directly to an HTTP endpoint.
bound to an argument type: the interceptor function gets bound to all resource methods that have a particular type in
their signature:
var yourFunction = objects -> Result.success(); // your validation logic goes here
registry.addFunction(YourObjectDto.class, yourFunction);
The above function would therefore get invoked in all controllers on the classpath that have a YourObjectDto in their signature; e.g., public void createObject(YourObjectDto dto) and public boolean deleteObject(YourObjectDto dto) would both get intercepted, even if they are defined in different controller classes.
This is the recommended way in the situation described above: adding additional schema restrictions on extensible types.
globally, for all resource methods: this is intended for interceptor functions that should get invoked on all resource methods. This is generally not recommended and should only be used in very specific situations such as logging.
Please check out this test for a comprehensive example of how validation can be enabled. All functions are registered during the extension’s initialization phase.
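Putting the pieces together, a minimal sketch of such an extension might look like this, assuming the ValidationFunctionRegistry can be injected like any other service; YourObjectDto and the validation logic are placeholders:
// Minimal sketch: registering a type-bound interceptor function during initialize().
public class MyValidationExtension implements ServiceExtension {

    @Inject
    private ValidationFunctionRegistry registry;

    @Override
    public void initialize(ServiceExtensionContext context) {
        registry.addFunction(YourObjectDto.class, objects -> {
            // 'objects' contains the intercepted resource method's parameters;
            // perform the additional schema checks here and return a descriptive failure if they don't hold
            return Result.success();
        });
    }
}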
Limitations and caveats
- InterceptorFunction objects must not throw exceptions.
- All function registration must happen during the initialize phase of the extension lifecycle.
- Interceptor functions should not perform time-consuming tasks, such as invoking other backend systems, so as not to cause timeouts in the request chain.
- For method-based interception, compile-time access to the resource is required. This might not be suitable for a lot of situations.
- Returning a Result.failure(...) will result in an HTTP 400 BAD REQUEST status code. This is the only supported status code at this time. Note that the failure message will be part of the HTTP response body.
- Binding methods directly to paths (“endpoints”) is not supported.
2.10 - Performance Tuning
Out of the box, EDC provides a set of default configurations that aim to strike a good balance for performance.
EDC's extensible nature allows users to configure it in depth.
This section shows how performance can be improved.
State Machines
At the core of EDC there are several State Machines, and their configuration is crucial for achieving the best performance.
Settings
The most important settings for configuring a state machine are:
- iteration-wait: the time the state machine waits before fetching the next batch of entities to process, applied only if the last iteration processed nothing; otherwise no wait is applied.
- batch-size: how many entities are fetched from the store for processing by the connector instance. The entities are locked pessimistically against concurrent access, so while they are being processed no other connector instance can read the same entities.
How to tune them
In the control plane there are three state machines:
- negotiation-consumer: handles contract negotiations from the consumer perspective
- negotiation-provider: handles contract negotiations from the provider perspective
- transfer-process: handles transfer processes
For every state machine you can set the iteration-wait (for negotiation-* there is a single setting used for both) and the batch-size, so the settings (and their default values) are:
- edc.negotiation.state-machine.iteration-wait-millis = 1000
- edc.negotiation.consumer.state-machine.batch-size = 20
- edc.negotiation.provider.state-machine.batch-size = 20
- edc.transfer.state-machine.iteration-wait-millis = 1000
- edc.transfer.state-machine.batch-size = 20
Thus, by default all control-plane state machines wait 1 second between iterations if no entities were found/processed; otherwise there is no wait and the next iteration starts as soon as all entities have been processed. At every iteration, 20 entities are fetched.
By changing these values you can tune your connector: reducing the iteration-wait makes the state machine more reactive, and increasing the batch-size means more entities are processed in the same iteration. Please note that increasing batch-size too much can lead to longer processing times when there are many different entities, and reducing iteration-wait too much makes the state machine spend more time in the fetch operation.
If tweaking these settings does not give you a sufficient performance boost, you can resort to horizontal scaling.
2.11 - Instrumentation with Micrometer
EDC provides extensions for instrumentation with the Micrometer metrics library to automatically collect metrics from the host system, JVM, and frameworks and libraries used in EDC (including OkHttp, Jetty, Jersey and ExecutorService).
See sample 04.3 for an example of an instrumented EDC consumer.
Micrometer Extension
This extension provides support for instrumentation of some core EDC components.
Jetty Micrometer Extension
This extension provides support for instrumentation for the Jetty web server, which is enabled when using the JettyExtension
.
Jersey Micrometer Extension
This extension provides support for instrumentation for the Jersey framework, which is enabled when using the JerseyExtension
.
Instrumenting ExecutorServices
Instrumenting ExecutorServices requires using the ExecutorInstrumentation
service to create a wrapper around the service to be instrumented:
ExecutorInstrumentation executorInstrumentation = context.getService(ExecutorInstrumentation.class);
// instrument a ScheduledExecutorService
ScheduledExecutorService executor = executorInstrumentation.instrument(Executors.newScheduledThreadPool(10), "name");
Without any further configuration, a noop implementation of ExecutorInstrumentation
is used. We recommend using the implementation provided in the Micrometer Extension that uses Micrometer’s ExecutorServiceMetrics to record ExecutorService metrics.
Configuration
The following properties can be used to configure which metrics will be collected.
- edc.metrics.enabled: enables/disables metrics collection globally
- edc.metrics.system.enabled: enables/disables collection of system metrics (class loader, memory, garbage collection, processor and thread metrics)
- edc.metrics.okhttp.enabled: enables/disables collection of metrics for the OkHttp client
- edc.metrics.executor.enabled: enables/disables collection of metrics for the instrumented ExecutorServices
- edc.metrics.jetty.enabled: enables/disables collection of Jetty metrics
- edc.metrics.jersey.enabled: enables/disables collection of Jersey metrics
Default values are always “true”; switch to “false” to disable the corresponding feature.
2.12 - Contribution Guidelines
Thank you for your interest in the EDC! This document provides guidelines and steps members are asked to follow when
contributing to the project.
Code Of Conduct
All community members are expected to adhere to
the Eclipse Code of Conduct.
How to Contribute
If you want to share a feature idea or discuss a potential use case, first check the existing issues and discussions to
see if it has already been raised. If not, open a discussion (not an issue).
Creating an Issue
If you have identified a bug first check the existing issues to see if it has already been identified. If not, create
a new issue in the appropriate GitHub repository. Keep in mind the following:
- We
use GitHub’s default label set
extended by custom ones to classify issues and improve findability.
- If an issue appears to cover changes that will significantly impact the codebase, open a discussion before creating an
issue.
- If an issue covers a topic or the response to a question that may be interesting for further discussion, it should be
converted to a discussion instead of being closed.
Submitting a Pull Request
Before submitting code to EDC, you should complete the following prerequisites:
Eclipse Contributor Agreement
Before your contribution can be accepted by the project, you need to create and electronically sign
an Eclipse Contributor Agreement (ECA):
- Log in to the Eclipse foundation website. You will
need to create an account within the Eclipse Foundation if you have not already done so.
- Click on “Eclipse ECA”, and complete the form.
Be sure to use the same email address in your Eclipse Account that you intend to use when committing to GitHub.
Stale Issues and PRs
In order to keep our backlog clean, EDC uses a bot that labels and closes old issues and PRs. The following table
outlines this process:
| | Stale After | Closed After Stale |
|---|---|---|
| Issue without assignee | 14 days | 7 days |
| Issue with assignee | 28 days | 7 days |
| PR | 7 days | 7 days |
Note that updating an issue, for example by commenting, will remove the stale
label and reset the counters. However,
we ask the community not to abuse this feature (e.g., periodically commenting “what’s the status?” would qualify as
abuse). If an issue receives no attention, usually there are reasons for it. To avoid closed issues, it’s recommended to
clarify in advance whether a feature fits into the project roadmap by opening a discussion, which are not automatically
closed.
Reporting Flaky Tests
If you discover a randomly failing (“flaky”) test, please check whether an issue for that already
exists. If not, create one, making sure to provide a meaningful description and a link to the failing run. Also include
the Bug
and FlakyTest
labels and assign it to an author of the relevant code. If assigning the issue is not
possible due to missing rights, just comment and @mention the author/last editor.
Be sure not to restart the run, as this will overwrite the results. Instead, push an empty commit to trigger another run.
git commit --allow-empty -m "trigger CI" && git push
Note that issues labeled with Bug
and FlakyTest
are prioritized.
Non-Code Contributions
Non-code contributions are another valued way to contribute. Examples include:
- Evangelizing EDC
- Helping to develop the community by hosting events, meetups, summits, and hackathons
- Community education
- Answering questions on GitHub, Discord, etc.
- Writing documentation
- Other writing (Blogs, Articles, Interviews)
Project and Milestone Planning
We use milestones to set a common focus for a period of 6 to 8 weeks. The group of committers chooses issues based on
customer needs and contributions we expect.
Milestones
Milestones are organized at the GitHub Milestones page.
They are numbered in ascending order. There, contributors, users, and adopters can track the progress.
Please note that the due date of a milestone does not imply any guarantee that all linked issues will be resolved by then.
When closing the current milestone, issues that were not resolved within a milestone phase will be
reviewed to evaluate their relevance and priority, before being assigned to the next milestone.
Issues
Every issue that should be addressed during a milestone phase is assigned to it by using the
Milestone
feature for linking both items. This way, the issues can easily be filtered by
milestones.
Pull Requests
Pull requests are not assigned to milestones, as their link to issues is sufficient to track the relations and progress.
Projects
The GitHub Projects page
provides a general overview of the project’s working items. Every new issue is automatically assigned
to the “Dataspace Connector” project.
It can be unassigned or moved to any other project that is provided.
In every project, an issue passes through four stages: Backlog, In progress, Review in progress, and Done, independent of its association with a specific milestone.
Releases
Please find more information about our release approach here.
If you have questions or suggestions, do not hesitate to contact the project developers via
the project’s “dev” list. You may also want to join
our Discord server.
The project holds a biweekly meeting on Fridays, 2-3 p.m. (CET), to give community members the opportunity to get in touch with the committer team. We meet in the “general” voice channel.
Schedule details are on GitHub.
If you have a “contributor” or “committer” status, you will also have access to private channels.
2.12.1 - Pull Request Etiquette
Authors
PRs should adhere to the following rules.
- Familiarize yourself with the coding style, architectural patterns, and other contribution guidelines.
- No surprise PRs. Before submitting a PR, open a discussion or an issue outlining the planned work and give
people time to comment. Unsolicited PRs may get ignored or rejected.
- Create focused PRs. Work should be focused on one particular feature or bug. Do not create broad-scoped PRs that solve multiple issues or span significant portions of the codebase, as they will be rejected outright.
- Provide a clear PR description and motivation. This makes the reviewer’s life much
easier. It is also helpful to outline the broad changes that were made, e.g. “Changes the schema of XYZ-Entity:
the
age
field changed from long
to String
”. - If 3rd party dependencies are introduced, note them in the PR description and explain why they are necessary.
- Stick to the established code style, please refer to
the styleguide document.
- All tests should be green, especially when your PR is in "Ready for review".
- Mark PRs as "Ready for review" only when the PR is complete. No additional commits should be pushed other than to incorporate review comments.
- Merge conflicts should be resolved by squashing all commits on the PR branch, rebasing onto main, and force-pushing. Do this when your PR is ready for review.
- If you require a reviewer’s input while the PR is still in draft, please contact the designated reviewer using the @mention feature and let them know what you’d like them to look at.
- Request a review from one of the technical committers. Requesting a review from anyone else is still possible, and sometimes may be advisable, but only committers can merge PRs, so be sure to include them early on.
- Re-request reviews after all remarks have been adopted. This helps reviewers track their work in GitHub.
- If you disagree with a committer’s remarks, feel free to object and argue, but if no agreement is reached, you’ll have
to either accept the committer’s decision or withdraw your PR.
- Be civil and objective. No foul language, insulting or otherwise abusive language will be tolerated.
- PR titles must follow Conventional Commits.
- The title must follow the format <type>(<optional scope>): <description>. The values build, chore, ci, docs, feat, fix, perf, refactor, revert, style, and test are allowed for the <type>.
- The length must be kept under 80 characters.
See the check-pull-request-title job of the GitHub workflow for details on the checks.
Reviewers
- Please complete reviews within two business days or delegate to another committer, removing yourself as a reviewer.
- If you have been requested as reviewer, but cannot do the review for any reason (time, lack of knowledge in particular
area, etc.) please comment that in the PR and remove yourself as a reviewer, suggesting a stand-in. The CODEOWNERS
document
should help with that.
- Don’t be overly pedantic.
- Don’t argue basic principles (code style, architectural decisions, etc.)
- Use the suggestion feature of GitHub for small/simple changes.
- The following could serve you as a review checklist:
  - No unnecessary dependencies in build.gradle.kts
  - Sensible unit tests; prefer unit tests over integration tests wherever possible (test runtime). Also check the usage of test tags.
  - Code style
  - Simplicity and “uncluttered-ness” of the code
  - Overall focus of the PR
- Don’t just wave through any PR. Please take the time to look at them carefully.
- Be civil and objective. No foul language, insulting or otherwise abusive language will be tolerated. The goal is to
encourage contributions.
The technical committers
(as of Sept 15, 2024)
- @wolf4ood
- @jimmarino
- @bscholtes1A
- @ndr_brt
- @ronjaquensel
- @juliapampus
- @paullatzelsperger
2.12.2 - Style Guide
In order to maintain a coherent codebase, every contributor must adhere to the project style guidelines. We assume
contributors will use a modern code editor with support for automatic code formatting.
Checkstyle configuration
Checkstyle is a tool that statically analyzes source code against a set of given
rules formulated in an XML document. Checkstyle rules are included in all EDC code repositories. Many modern IDEs have a
plugin that runs Checkstyle analysis in the background.
Our checkstyle config is based on the Google Style with a few
additional rules such as the naming of constants and Types.
Note: currently we do not enforce the generation of Javadoc comments, even though documenting code is highly
recommended.
Running Checkstyle
Checkstyle is run through the checkstyle Gradle plugin during gradle build for all code repositories. In addition, Checkstyle is enabled in all GitHub Actions pipelines for PR validation. If any Checkstyle violations are found, the pipeline will fail. We therefore recommend configuring your IDE to run Checkstyle:
IntelliJ Code Style Configuration
If you are using Jetbrains IntelliJ IDEA, we have created a specific code style configuration that will automatically
format your source code according to that style guide. This should eliminate most of the potential Checkstyle violations
from the get-go. However, some code may need to be reformatted manually.
Intellij SaveActions Plugin
To assist with automated code formatting, you may want to use
the SaveActions plugin for IntelliJ IDEA. Unfortunately,
SaveActions has no export feature, so you will need to manually apply this configuration:
Generic .editorConfig
For most other editors and IDEs we’ve supplied .editorConfig
files. Refer to
the official documentation for configuration details since they depend on the editor and OS.
2.12.3 - PR Check List
It’s recommended to submit a draft pull request early on and add people who previously worked on the same code as reviewers. Make sure all automatic checks pass before marking it as "ready for review".
Before submitting a PR, please follow the steps below.
Open a Discussion or File an Issue
Do not submit a PR without first opening an issue (if the PR resolves a bug) or creating a discussion. If a bug fix
requires a significant change or touches on critical code paths (e.g. security-related), open a discussion first.
Coding Style
All code contributions must strictly adhere to the Style Guide and design principles outlined in the
Contributor Technical Documentation. PRs that do not adhere to these rules will be rejected.
All artifacts must include the following copyright header, replacing the fields enclosed by curly brackets “{}” with
your own identifying information. (Don’t include the curly brackets!) Enclose the text in the appropriate comment syntax
for the file format.
Copyright (c) {year} {owner}[ and others]
This program and the accompanying materials are made available under the
terms of the Apache License, Version 2.0 which is available at
https://www.apache.org/licenses/LICENSE-2.0
SPDX-License-Identifier: Apache-2.0
Contributors:
{name} - {description}
Commit Messages
Git commit messages should comply with the following format:
<prefix>(<scope>): <description>
Use the imperative mood, as in “Fix bug” or “Add feature” rather than “Fixed bug” or “Added feature”, and mention the GitHub issue, e.g. chore(transfer process): improve logging.
All committers, and all commits, are bound to the Developer Certificate of Origin. As such, all parties involved in a contribution must have valid ECAs. Additionally, commits can include a “Signed-off-by” entry.
Testing and Documentation
All submissions must include extensive test coverage and be fully documented:
- Add meaningful unit tests and integration tests when appropriate to verify your submission acts as expected.
- All code must be documented. Interfaces and implementation classes must have Javadoc. Include inline documentation
where code blocks are not self-explanatory.
- If a new module has been added or a significant part of the code has been changed, and you should be seen as the
contact person for any further changes, please add appropriate
information to the CODEOWNERS
file. You can find instructions on how to do this at https://help.github.com/articles/about-codeowners/.
Please note that this file does not represent all contributions to the code. What persons and organizations
actually contributed to each file can be seen on GitHub and is documented in the license headers.
3 - Autodoc
This section contains the rendering produced by the autodoc plugin (details).
3.1 - Connector
Module accesstokendata-store-sql
Artifact: org.eclipse.edc:accesstokendata-store-sql:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.connector.dataplane.store.sql.SqlAccessTokenDataStoreExtension
Name: “Sql AccessTokenData Store”
Overview: Provides Sql Store for {@link AccessTokenData} objects
Configuration
| Key | Required | Type | Default | Pattern | Min | Max | Description |
|---|---|---|---|---|---|---|---|
| edc.datasource.accesstokendata.name | | string | `` | | | | Name of the datasource to use for accessing data plane store |
| edc.sql.store.accesstokendata.datasource | | string | default | | | | The datasource to be used |
Provided services
org.eclipse.edc.connector.dataplane.spi.store.AccessTokenDataStore
Referenced (injected) services
- org.eclipse.edc.transaction.datasource.spi.DataSourceRegistry (required)
- org.eclipse.edc.transaction.spi.TransactionContext (required)
- org.eclipse.edc.connector.dataplane.store.sql.schema.AccessTokenDataStatements (optional)
- java.time.Clock (required)
- org.eclipse.edc.spi.types.TypeManager (required)
- org.eclipse.edc.sql.QueryExecutor (required)
- org.eclipse.edc.sql.bootstrapper.SqlSchemaBootstrapper (required)
Module api-core
Artifact: org.eclipse.edc:api-core:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.api.ApiCoreDefaultServicesExtension
Name: “ApiCoreDefaultServicesExtension”
Overview: No overview provided.
Configuration
None
Provided services
org.eclipse.edc.api.auth.spi.registry.ApiAuthenticationRegistry
org.eclipse.edc.api.auth.spi.registry.ApiAuthenticationProviderRegistry
Referenced (injected) services
None
Class: org.eclipse.edc.api.ApiCoreExtension
Name: “API Core”
Overview: No overview provided.
Configuration
None
Provided services
None
Referenced (injected) services
- org.eclipse.edc.transform.spi.TypeTransformerRegistry (required)
- org.eclipse.edc.validator.spi.JsonObjectValidatorRegistry (required)
- org.eclipse.edc.spi.query.CriterionOperatorRegistry (required)
Module api-observability
Artifact: org.eclipse.edc:api-observability:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.api.observability.ObservabilityApiExtension
Name: “Observability API”
Overview: No overview provided.
Configuration
None
Provided services
None
Referenced (injected) services
- org.eclipse.edc.web.spi.WebService (required)
- org.eclipse.edc.spi.system.health.HealthCheckService (required)
- org.eclipse.edc.spi.types.TypeManager (required)
- org.eclipse.edc.spi.system.apiversion.ApiVersionService (required)
Module asset-api
Artifact: org.eclipse.edc:asset-api:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.connector.controlplane.api.management.asset.AssetApiExtension
Name: “Management API: Asset”
Overview: No overview provided.
Configuration
None
Provided services
None
Referenced (injected) services
- org.eclipse.edc.web.spi.WebService (required)
- org.eclipse.edc.transform.spi.TypeTransformerRegistry (required)
- org.eclipse.edc.connector.controlplane.services.spi.asset.AssetService (required)
- org.eclipse.edc.validator.spi.JsonObjectValidatorRegistry (required)
Module asset-index-sql
Artifact: org.eclipse.edc:asset-index-sql:0.10.1
Categories: None
Extension points
org.eclipse.edc.connector.controlplane.store.sql.assetindex.schema.AssetStatements
Extensions
Class: org.eclipse.edc.connector.controlplane.store.sql.assetindex.SqlAssetIndexServiceExtension
Name: “SQL asset index”
Overview: No overview provided.
Configuration
| Key | Required | Type | Default | Pattern | Min | Max | Description |
|---|---|---|---|---|---|---|---|
| edc.sql.store.asset.datasource | | string | default | | | | The datasource to be used |
Provided services
org.eclipse.edc.connector.controlplane.asset.spi.index.AssetIndex
org.eclipse.edc.connector.controlplane.asset.spi.index.DataAddressResolver
Referenced (injected) services
- org.eclipse.edc.transaction.datasource.spi.DataSourceRegistry (required)
- org.eclipse.edc.transaction.spi.TransactionContext (required)
- org.eclipse.edc.connector.controlplane.store.sql.assetindex.schema.AssetStatements (optional)
- org.eclipse.edc.spi.types.TypeManager (required)
- org.eclipse.edc.sql.QueryExecutor (required)
- org.eclipse.edc.sql.bootstrapper.SqlSchemaBootstrapper (required)
Module auth-basic
Artifact: org.eclipse.edc:auth-basic:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.api.auth.basic.BasicAuthenticationExtension
Name: “Basic authentication”
Overview: Extension that registers an AuthenticationService that uses API Keys
Deprecated: this module is no longer supported and will be removed in a future release.
Configuration
Key | Required | Type | Default | Pattern | Min | Max | Description |
---|---|---|---|---|---|---|---|
edc.api.auth.basic.vault-keys | * | map | `` | | | | Key-value object defining authentication credentials stored in the vault |
Provided services
org.eclipse.edc.api.auth.spi.AuthenticationService
Referenced (injected) services
org.eclipse.edc.spi.security.Vault (required)
org.eclipse.edc.api.auth.spi.registry.ApiAuthenticationRegistry (required)
Module auth-configuration
Artifact: org.eclipse.edc:auth-configuration:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.api.auth.configuration.ApiAuthenticationConfigurationExtension
Name: “Api Authentication Configuration Extension”
Overview: No overview provided.
Configuration
Key | Required | Type | Default | Pattern | Min | Max | Description |
---|---|---|---|---|---|---|---|
web.http.<context>.auth.type | * | string | `` | | | | The type of the authentication provider. |
web.http.<context>.auth.context | | string | `` | | | | The api context where to apply the authentication. Default to the web |
Provided services
None
Referenced (injected) services
org.eclipse.edc.api.auth.spi.registry.ApiAuthenticationProviderRegistry (required)
org.eclipse.edc.api.auth.spi.registry.ApiAuthenticationRegistry (required)
org.eclipse.edc.web.spi.WebService (required)
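For example, a runtime might enable one of the registered provider types on a specific web context; the context name `management` and the `tokenbased` type below are illustrative, and the available types depend on the ApiAuthenticationProvider extensions present in the runtime.
```properties
# Illustrative only: apply token-based authentication to the "management" web context.
web.http.management.auth.type=tokenbased
# Settings contributed by the chosen provider (here: auth-tokenbased, see below):
web.http.management.auth.key.alias=management-api-key
```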
Module auth-delegated
Artifact: org.eclipse.edc:auth-delegated:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.api.auth.delegated.DelegatedAuthenticationExtension
Name: “Delegating Authentication Service Extension”
Overview: Extension that registers an AuthenticationService that delegates authentication and authorization to a third-party IdP
and registers an {@link ApiAuthenticationProvider} under the type called delegated
Configuration
Key | Required | Type | Default | Pattern | Min | Max | Description |
---|---|---|---|---|---|---|---|
edc.api.auth.dac.cache.validity | | Long | 300000 | | | | Duration (in ms) that the internal key cache is valid |
edc.api.auth.dac.key.url | | string | `` | | | | URL where the third-party IdP’s public key(s) can be resolved |
web.http.<context>.auth.dac.key.url | | string | `` | | | | URL where the third-party IdP’s public key(s) can be resolved for the configured |
web.http.<context>.auth.dac.cache.validity | | Long | 300000 | | | | Duration (in ms) that the internal key cache is valid for the configured |
edc.api.auth.dac.validation.tolerance | | string | 5000 | | | | Default token validation time tolerance (in ms), e.g. for nbf or exp claims |
Provided services
None
Referenced (injected) services
org.eclipse.edc.api.auth.spi.registry.ApiAuthenticationRegistry (required)
org.eclipse.edc.api.auth.spi.registry.ApiAuthenticationProviderRegistry (required)
org.eclipse.edc.token.spi.TokenValidationRulesRegistry (required)
org.eclipse.edc.keys.spi.KeyParserRegistry (required)
org.eclipse.edc.token.spi.TokenValidationService (required)
java.time.Clock (required)
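A minimal sketch of delegated authentication, assuming the third-party IdP publishes its keys at a JWKS URL; the URL is an example value, and the durations simply repeat the documented defaults.
```properties
# Illustrative only: resolve the IdP's public keys and cache them for 5 minutes.
edc.api.auth.dac.key.url=https://idp.example.com/.well-known/jwks.json
edc.api.auth.dac.cache.validity=300000
edc.api.auth.dac.validation.tolerance=5000
```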
Module auth-spi
Name: Auth services
Artifact: org.eclipse.edc:auth-spi:0.10.1
Categories: None
Extension points
org.eclipse.edc.api.auth.spi.registry.ApiAuthenticationProviderRegistry
org.eclipse.edc.api.auth.spi.AuthenticationService
org.eclipse.edc.api.auth.spi.ApiAuthenticationProvider
Extensions
Module auth-tokenbased
Artifact: org.eclipse.edc:auth-tokenbased:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.api.auth.token.TokenBasedAuthenticationExtension
Name: “Static token API Authentication”
Overview: Extension that registers an AuthenticationService that uses API Keys and registers
an {@link ApiAuthenticationProvider} under the type called tokenbased
Configuration
Key | Required | Type | Default | Pattern | Min | Max | Description |
---|---|---|---|---|---|---|---|
web.http.<context>.auth.key | | string | `` | | | | The api key to use for the |
web.http.<context>.auth.key.alias | | string | `` | | | | The vault api key alias to use for the |
edc.api.auth.key | | string | `` | | | | |
edc.api.auth.key.alias | | string | `` | | | | |
Provided services
None
Referenced (injected) services
org.eclipse.edc.spi.security.Vault (required)
org.eclipse.edc.api.auth.spi.registry.ApiAuthenticationRegistry (required)
org.eclipse.edc.api.auth.spi.registry.ApiAuthenticationProviderRegistry (required)
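A minimal sketch of static token authentication; the alias and key are example values, and the vault-alias variant is generally preferable to a plaintext key.
```properties
# Illustrative only: resolve the API key from the vault under an alias.
edc.api.auth.key.alias=management-api-key
# Alternative for local testing only (plaintext key):
# edc.api.auth.key=password
```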
Module boot
Artifact: org.eclipse.edc:boot:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.boot.BootServicesExtension
Name: “Boot Services”
Overview: No overview provided.
Configuration
Key | Required | Type | Default | Pattern | Min | Max | Description |
---|---|---|---|---|---|---|---|
edc.participant.id | | string | `` | | | | Configures the participant id this runtime is operating on behalf of |
edc.runtime.id | | string | <random UUID> | | | | Configures the runtime id. This should be fully or partly randomized, and need not be stable across restarts. It is recommended to leave this value blank. |
edc.component.id | | string | <random UUID> | | | | Configures this component’s ID. This should be a unique, stable and deterministic identifier. |
Provided services
java.time.Clock
org.eclipse.edc.spi.telemetry.Telemetry
org.eclipse.edc.spi.system.health.HealthCheckService
org.eclipse.edc.spi.security.Vault
org.eclipse.edc.spi.system.ExecutorInstrumentation
org.eclipse.edc.spi.system.apiversion.ApiVersionService
Referenced (injected) services
None
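As an illustration of the identity settings above, a runtime operating on behalf of a participant might set the following; the identifiers are example values, and `edc.runtime.id` is deliberately left unset so a random UUID is generated.
```properties
# Illustrative only: stable participant and component identifiers.
edc.participant.id=did:web:provider.example.com
edc.component.id=controlplane-1
# edc.runtime.id is intentionally not set (random UUID per the table above).
```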
Module callback-event-dispatcher
Artifact: org.eclipse.edc:callback-event-dispatcher:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.connector.controlplane.callback.dispatcher.CallbackEventDispatcherExtension
Name: “Callback dispatcher extension”
Overview: No overview provided.
Configuration
None
Provided services
org.eclipse.edc.connector.controlplane.services.spi.callback.CallbackProtocolResolverRegistry
Referenced (injected) services
org.eclipse.edc.spi.message.RemoteMessageDispatcherRegistry (required)
org.eclipse.edc.spi.event.EventRouter (required)
org.eclipse.edc.spi.monitor.Monitor (required)
org.eclipse.edc.connector.controlplane.services.spi.callback.CallbackRegistry (required)
Module callback-http-dispatcher
Artifact: org.eclipse.edc:callback-http-dispatcher:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.connector.controlplane.callback.dispatcher.http.CallbackEventDispatcherHttpExtension
Name: “Callback dispatcher http extension”
Overview: No overview provided.
Configuration
None
Provided services
None
Referenced (injected) services
org.eclipse.edc.spi.message.RemoteMessageDispatcherRegistry (required)
org.eclipse.edc.http.spi.EdcHttpClient (required)
org.eclipse.edc.spi.types.TypeManager (required)
org.eclipse.edc.connector.controlplane.services.spi.callback.CallbackProtocolResolverRegistry (required)
org.eclipse.edc.spi.security.Vault (required)
Module callback-static-endpoint
Artifact: org.eclipse.edc:callback-static-endpoint:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.connector.controlplane.callback.staticendpoint.CallbackStaticEndpointExtension
Name: “Static callbacks extension”
Overview: Extension for configuring the static endpoints for callbacks
Configuration
None
Provided services
None
Referenced (injected) services
org.eclipse.edc.connector.controlplane.services.spi.callback.CallbackRegistry (required)
Module catalog-api
Artifact: org.eclipse.edc:catalog-api:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.connector.controlplane.api.management.catalog.CatalogApiExtension
Name: “Management API: Catalog”
Overview: No overview provided.
Configuration
None
Provided services
None
Referenced (injected) services
org.eclipse.edc.web.spi.WebService (required)
org.eclipse.edc.transform.spi.TypeTransformerRegistry (required)
org.eclipse.edc.connector.controlplane.services.spi.catalog.CatalogService (required)
org.eclipse.edc.validator.spi.JsonObjectValidatorRegistry (required)
org.eclipse.edc.spi.query.CriterionOperatorRegistry (required)
Module configuration-filesystem
Artifact: org.eclipse.edc:configuration-filesystem:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.configuration.filesystem.FsConfigurationExtension
Name: “FS Configuration”
Overview: Sources configuration values from a properties file.
Configuration
Key | Required | Type | Default | Pattern | Min | Max | Description |
---|---|---|---|---|---|---|---|
edc.fs.config | | string | `` | | | | |
Provided services
None
Referenced (injected) services
None
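Since `edc.fs.config` bootstraps the configuration itself, it is typically passed as a system property or environment variable rather than placed in the file it points to; the path below is an example.
```properties
# Illustrative only: load additional settings from a properties file, e.g.
#   java -Dedc.fs.config=/etc/edc/config.properties -jar connector.jar
edc.fs.config=/etc/edc/config.properties
```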
Module connector-core
Artifact: org.eclipse.edc:connector-core:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.connector.core.SecurityDefaultServicesExtension
Name: “Security Default Services Extension”
Overview: This extension provides default/standard implementations for the {@link PrivateKeyResolver} and the {@link CertificateResolver}.
Those provider methods CANNOT be implemented in {@link CoreDefaultServicesExtension}, because that could potentially cause
a conflict with injecting/providing the {@link Vault}.
Configuration
None
Provided services
org.eclipse.edc.keys.spi.PrivateKeyResolver
org.eclipse.edc.keys.spi.CertificateResolver
org.eclipse.edc.keys.spi.KeyParserRegistry
Referenced (injected) services
org.eclipse.edc.spi.security.Vault (required)
org.eclipse.edc.spi.types.TypeManager (required)
Class: org.eclipse.edc.connector.core.CoreServicesExtension
Name: “Core Services”
Overview: This extension provides default/standard implementations for the {@link PrivateKeyResolver} and the {@link CertificateResolver}.
Those provider methods CANNOT be implemented in {@link CoreDefaultServicesExtension}, because that could potentially cause
a conflict with injecting/providing the {@link Vault}.
Configuration
Key | Required | Type | Default | Pattern | Min | Max | Description |
---|---|---|---|---|---|---|---|
edc.hostname | | string | localhost | | | | Connector hostname, which e.g. is used in referer urls |
edc.agent.identity.key | | string | client_id | | | | The name of the claim key used to determine the participant identity |
Provided services
org.eclipse.edc.spi.types.TypeManager
org.eclipse.edc.spi.system.Hostname
org.eclipse.edc.spi.message.RemoteMessageDispatcherRegistry
org.eclipse.edc.spi.command.CommandHandlerRegistry
org.eclipse.edc.participant.spi.ParticipantAgentService
org.eclipse.edc.policy.engine.spi.RuleBindingRegistry
org.eclipse.edc.policy.engine.spi.PolicyEngine
org.eclipse.edc.spi.event.EventRouter
org.eclipse.edc.transform.spi.TypeTransformerRegistry
org.eclipse.edc.validator.spi.JsonObjectValidatorRegistry
org.eclipse.edc.validator.spi.DataAddressValidatorRegistry
org.eclipse.edc.spi.query.CriterionOperatorRegistry
org.eclipse.edc.http.spi.ControlApiHttpClient
Referenced (injected) services
org.eclipse.edc.connector.core.event.EventExecutorServiceContainer (required)
org.eclipse.edc.spi.types.TypeManager (optional)
org.eclipse.edc.http.spi.EdcHttpClient (required)
org.eclipse.edc.api.auth.spi.ControlClientAuthenticationProvider (required)
Class: org.eclipse.edc.connector.core.LocalPublicKeyDefaultExtension
Name: “Security Default Services Extension”
Overview: This extension provides default/standard implementations for the {@link PrivateKeyResolver} and the {@link CertificateResolver}.
Those provider methods CANNOT be implemented in {@link CoreDefaultServicesExtension}, because that could potentially cause
a conflict with injecting/providing the {@link Vault}.
Configuration
Key | Required | Type | Default | Pattern | Min | Max | Description |
---|---|---|---|---|---|---|---|
edc.iam.publickeys.<pkAlias>.id | * | string | `` | | | | ID of the public key. |
edc.iam.publickeys.<pkAlias>.value | | string | `` | | | | Value of the public key. Multiple formats are supported, depending on the KeyParsers registered in the runtime |
edc.iam.publickeys.<pkAlias>.path | | string | `` | | | | Path to a file that holds the public key, e.g. a PEM file. Multiple formats are supported, depending on the KeyParsers registered in the runtime |
Provided services
org.eclipse.edc.keys.spi.LocalPublicKeyService
Referenced (injected) services
org.eclipse.edc.keys.spi.KeyParserRegistry (required)
org.eclipse.edc.spi.security.Vault (required)
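A sketch of the local public key settings listed for this extension; the alias, key ID, and file path are example values, and `value` and `path` are alternative ways to supply the key material.
```properties
# Illustrative only: register a public key under the alias "my-key".
edc.iam.publickeys.my-key.id=my-key-1
edc.iam.publickeys.my-key.path=/etc/edc/keys/my-key.pem
```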
Class: org.eclipse.edc.connector.core.CoreDefaultServicesExtension
Name: “CoreDefaultServicesExtension”
Overview: This extension provides default/standard implementations for the {@link PrivateKeyResolver} and the {@link CertificateResolver}.
Those provider methods CANNOT be implemented in {@link CoreDefaultServicesExtension}, because that could potentially cause
a conflict with injecting/providing the {@link Vault}.
Configuration
Key | Required | Type | Default | Pattern | Min | Max | Description |
---|---|---|---|---|---|---|---|
edc.core.retry.retries.max | | int | 5 | | | | RetryPolicy: Maximum retries before a failure is propagated |
edc.core.retry.backoff.min | | int | 500 | | | | RetryPolicy: Minimum number of milliseconds for exponential backoff |
edc.core.retry.backoff.max | | int | 10000 | | | | RetryPolicy: Maximum number of milliseconds for exponential backoff |
edc.core.retry.log.on.retry | | boolean | false | | | | RetryPolicy: Log onRetry events |
edc.core.retry.log.on.retry.scheduled | | boolean | false | | | | RetryPolicy: Log onRetryScheduled events |
edc.core.retry.log.on.retries.exceeded | | boolean | false | | | | RetryPolicy: Log onRetriesExceeded events |
edc.core.retry.log.on.failed.attempt | | boolean | false | | | | RetryPolicy: Log onFailedAttempt events |
edc.core.retry.log.on.abort | | boolean | false | | | | RetryPolicy: Log onAbort events |
edc.http.client.https.enforce | | boolean | false | | | | OkHttpClient: If true, enable HTTPS call enforcement |
edc.http.client.timeout.connect | | int | 30 | | | | OkHttpClient: connect timeout, in seconds |
edc.http.client.timeout.read | | int | 30 | | | | OkHttpClient: read timeout, in seconds |
edc.http.client.send.buffer.size | | int | 0 | | | | OkHttpClient: send buffer size, in bytes |
edc.http.client.receive.buffer.size | | int | 0 | | | | OkHttpClient: receive buffer size, in bytes |
Provided services
org.eclipse.edc.transaction.spi.TransactionContext
org.eclipse.edc.transaction.datasource.spi.DataSourceRegistry
org.eclipse.edc.connector.core.event.EventExecutorServiceContainer
org.eclipse.edc.http.spi.EdcHttpClient
org.eclipse.edc.api.auth.spi.ControlClientAuthenticationProvider
okhttp3.OkHttpClient
dev.failsafe.RetryPolicy<T>
org.eclipse.edc.participant.spi.ParticipantIdMapper
Referenced (injected) services
okhttp3.EventListener (optional)
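As a sketch, the retry policy and HTTP client defaults listed for CoreDefaultServicesExtension above could be tuned as follows; all values are examples, not recommendations.
```properties
# Illustrative only: retry policy and HTTP client tuning.
edc.core.retry.retries.max=3
edc.core.retry.backoff.min=500
edc.core.retry.backoff.max=5000
edc.http.client.https.enforce=true
edc.http.client.timeout.connect=10
edc.http.client.timeout.read=30
```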
Module contract-agreement-api
Artifact: org.eclipse.edc:contract-agreement-api:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.connector.controlplane.api.management.contractagreement.ContractAgreementApiExtension
Name: “Management API: Contract Agreement”
Overview: No overview provided.
Configuration
None
Provided services
None
Referenced (injected) services
org.eclipse.edc.web.spi.WebService (required)
org.eclipse.edc.transform.spi.TypeTransformerRegistry (required)
org.eclipse.edc.connector.controlplane.services.spi.contractagreement.ContractAgreementService (required)
org.eclipse.edc.validator.spi.JsonObjectValidatorRegistry (required)
Module contract-definition-api
Artifact: org.eclipse.edc:contract-definition-api:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.connector.controlplane.api.management.contractdefinition.ContractDefinitionApiExtension
Name: “Management API: Contract Definition”
Overview: No overview provided.
Configuration
None
Provided services
None
Referenced (injected) services
org.eclipse.edc.web.spi.WebService (required)
org.eclipse.edc.transform.spi.TypeTransformerRegistry (required)
org.eclipse.edc.connector.controlplane.services.spi.contractdefinition.ContractDefinitionService (required)
org.eclipse.edc.validator.spi.JsonObjectValidatorRegistry (required)
org.eclipse.edc.spi.types.TypeManager (required)
org.eclipse.edc.spi.query.CriterionOperatorRegistry (required)
Module contract-definition-store-sql
Artifact: org.eclipse.edc:contract-definition-store-sql:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.connector.controlplane.store.sql.contractdefinition.SqlContractDefinitionStoreExtension
Name: “SQL contract definition store”
Overview: No overview provided.
Configuration
Key | Required | Type | Default | Pattern | Min | Max | Description |
---|---|---|---|---|---|---|---|
edc.sql.store.contractdefinition.datasource | | string | default | | | | The datasource to be used |
Provided services
org.eclipse.edc.connector.controlplane.contract.spi.offer.store.ContractDefinitionStore
Referenced (injected) services
org.eclipse.edc.transaction.datasource.spi.DataSourceRegistry (required)
org.eclipse.edc.transaction.spi.TransactionContext (required)
org.eclipse.edc.connector.controlplane.store.sql.contractdefinition.schema.ContractDefinitionStatements (optional)
org.eclipse.edc.spi.types.TypeManager (required)
org.eclipse.edc.sql.QueryExecutor (required)
org.eclipse.edc.sql.bootstrapper.SqlSchemaBootstrapper (required)
Module contract-negotiation-api
Artifact: org.eclipse.edc:contract-negotiation-api:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.connector.controlplane.api.management.contractnegotiation.ContractNegotiationApiExtension
Name: “Management API: Contract Negotiation”
Overview: No overview provided.
Configuration
None
Provided services
None
Referenced (injected) services
org.eclipse.edc.web.spi.WebService (required)
org.eclipse.edc.transform.spi.TypeTransformerRegistry (required)
org.eclipse.edc.connector.controlplane.services.spi.contractnegotiation.ContractNegotiationService (required)
org.eclipse.edc.validator.spi.JsonObjectValidatorRegistry (required)
Module contract-negotiation-store-sql
Artifact: org.eclipse.edc:contract-negotiation-store-sql:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.connector.controlplane.store.sql.contractnegotiation.SqlContractNegotiationStoreExtension
Name: “SQL contract negotiation store”
Overview: No overview provided.
Configuration
Key | Required | Type | Default | Pattern | Min | Max | Description |
---|---|---|---|---|---|---|---|
edc.sql.store.contractnegotiation.datasource | | string | default | | | | The datasource to be used |
Provided services
org.eclipse.edc.connector.controlplane.contract.spi.negotiation.store.ContractNegotiationStore
Referenced (injected) services
org.eclipse.edc.transaction.datasource.spi.DataSourceRegistry (required)
org.eclipse.edc.transaction.spi.TransactionContext (required)
java.time.Clock (required)
org.eclipse.edc.connector.controlplane.store.sql.contractnegotiation.store.schema.ContractNegotiationStatements (optional)
org.eclipse.edc.spi.types.TypeManager (required)
org.eclipse.edc.sql.QueryExecutor (required)
org.eclipse.edc.sql.bootstrapper.SqlSchemaBootstrapper (required)
Module contract-spi
Name: Contract services
Artifact: org.eclipse.edc:contract-spi:0.10.1
Categories: None
Extension points
org.eclipse.edc.connector.controlplane.contract.spi.offer.store.ContractDefinitionStore
org.eclipse.edc.connector.controlplane.contract.spi.negotiation.observe.ContractNegotiationObservable
org.eclipse.edc.connector.controlplane.contract.spi.negotiation.store.ContractNegotiationStore
org.eclipse.edc.connector.controlplane.contract.spi.negotiation.ConsumerContractNegotiationManager
org.eclipse.edc.connector.controlplane.contract.spi.negotiation.NegotiationWaitStrategy
org.eclipse.edc.connector.controlplane.contract.spi.negotiation.ContractNegotiationPendingGuard
org.eclipse.edc.connector.controlplane.contract.spi.negotiation.ProviderContractNegotiationManager
org.eclipse.edc.connector.controlplane.contract.spi.validation.ContractValidationService
Extensions
Module control-api-configuration
Artifact: org.eclipse.edc:control-api-configuration:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.connector.api.control.configuration.ControlApiConfigurationExtension
Name: “Control API configuration”
Overview: Tells all the Control API controllers under which context alias they need to register their resources: either default or control.
Configuration
Key | Required | Type | Default | Pattern | Min | Max | Description |
---|---|---|---|---|---|---|---|
edc.control.endpoint | | string | `` | | | | Configures endpoint for reaching the Control API. If it’s missing it defaults to the hostname configuration. |
Provided services
org.eclipse.edc.web.spi.configuration.context.ControlApiUrl
Referenced (injected) services
org.eclipse.edc.web.spi.WebServer (required)
org.eclipse.edc.web.spi.configuration.WebServiceConfigurer (required)
org.eclipse.edc.web.spi.WebService (required)
org.eclipse.edc.spi.system.Hostname (required)
org.eclipse.edc.jsonld.spi.JsonLd (required)
org.eclipse.edc.spi.types.TypeManager (required)
org.eclipse.edc.api.auth.spi.registry.ApiAuthenticationRegistry (required)
org.eclipse.edc.spi.system.apiversion.ApiVersionService (required)
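For example, a deployment might advertise the Control API endpoint explicitly instead of deriving it from the hostname; the URL below, including port and path, is an illustrative assumption.
```properties
# Illustrative only: externally reachable Control API endpoint.
edc.control.endpoint=https://connector.example.com:9191/api/control
```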
Module control-plane-aggregate-services
Artifact: org.eclipse.edc:control-plane-aggregate-services:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.connector.controlplane.services.ControlPlaneServicesExtension
Name: “Control Plane Services”
Overview: No overview provided.
Configuration
None
Provided services
org.eclipse.edc.connector.controlplane.services.spi.asset.AssetService
org.eclipse.edc.connector.spi.service.SecretService
org.eclipse.edc.connector.controlplane.services.spi.catalog.CatalogService
org.eclipse.edc.connector.controlplane.services.spi.catalog.CatalogProtocolService
org.eclipse.edc.connector.controlplane.services.spi.contractagreement.ContractAgreementService
org.eclipse.edc.connector.controlplane.services.spi.contractdefinition.ContractDefinitionService
org.eclipse.edc.connector.controlplane.services.spi.contractnegotiation.ContractNegotiationService
org.eclipse.edc.connector.controlplane.services.spi.contractnegotiation.ContractNegotiationProtocolService
org.eclipse.edc.connector.controlplane.services.spi.policydefinition.PolicyDefinitionService
org.eclipse.edc.connector.controlplane.services.spi.transferprocess.TransferProcessService
org.eclipse.edc.connector.controlplane.services.spi.transferprocess.TransferProcessProtocolService
org.eclipse.edc.connector.controlplane.services.spi.protocol.ProtocolTokenValidator
org.eclipse.edc.connector.controlplane.services.spi.protocol.VersionProtocolService
Referenced (injected) services
java.time.Clock (required)
org.eclipse.edc.spi.monitor.Monitor (required)
org.eclipse.edc.spi.event.EventRouter (required)
org.eclipse.edc.spi.message.RemoteMessageDispatcherRegistry (required)
org.eclipse.edc.connector.controlplane.asset.spi.index.AssetIndex (required)
org.eclipse.edc.spi.security.Vault (required)
org.eclipse.edc.connector.controlplane.contract.spi.offer.store.ContractDefinitionStore (required)
org.eclipse.edc.connector.controlplane.contract.spi.negotiation.store.ContractNegotiationStore (required)
org.eclipse.edc.connector.controlplane.contract.spi.negotiation.ConsumerContractNegotiationManager (required)
org.eclipse.edc.connector.controlplane.policy.spi.store.PolicyDefinitionStore (required)
org.eclipse.edc.connector.controlplane.transfer.spi.store.TransferProcessStore (required)
org.eclipse.edc.connector.controlplane.transfer.spi.TransferProcessManager (required)
org.eclipse.edc.transaction.spi.TransactionContext (required)
org.eclipse.edc.connector.controlplane.contract.spi.validation.ContractValidationService (required)
org.eclipse.edc.connector.controlplane.contract.spi.offer.ConsumerOfferResolver (required)
org.eclipse.edc.connector.controlplane.contract.spi.negotiation.observe.ContractNegotiationObservable (required)
org.eclipse.edc.connector.controlplane.transfer.spi.observe.TransferProcessObservable (required)
org.eclipse.edc.spi.telemetry.Telemetry (required)
org.eclipse.edc.participant.spi.ParticipantAgentService (required)
org.eclipse.edc.connector.controlplane.catalog.spi.DataServiceRegistry (required)
org.eclipse.edc.connector.controlplane.catalog.spi.DatasetResolver (required)
org.eclipse.edc.spi.command.CommandHandlerRegistry (required)
org.eclipse.edc.validator.spi.DataAddressValidatorRegistry (required)
org.eclipse.edc.spi.iam.IdentityService (required)
org.eclipse.edc.policy.engine.spi.PolicyEngine (required)
org.eclipse.edc.connector.controlplane.services.spi.protocol.ProtocolTokenValidator (optional)
org.eclipse.edc.connector.controlplane.services.spi.protocol.ProtocolVersionRegistry (required)
org.eclipse.edc.connector.controlplane.transfer.spi.flow.DataFlowManager (required)
org.eclipse.edc.connector.controlplane.transfer.spi.flow.TransferTypeParser (required)
Module control-plane-api
Artifact: org.eclipse.edc:control-plane-api:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.connector.controlplane.api.ControlPlaneApiExtension
Name: “Control Plane API”
Overview: {@link ControlPlaneApiExtension } exposes HTTP endpoints for internal interaction with the Control Plane
Configuration
None
Provided services
None
Referenced (injected) services
org.eclipse.edc.web.spi.WebService (required)
org.eclipse.edc.connector.controlplane.services.spi.transferprocess.TransferProcessService (required)
org.eclipse.edc.spi.types.TypeManager (required)
Module control-plane-api-client
Artifact: org.eclipse.edc:control-plane-api-client:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.connector.controlplane.api.client.ControlPlaneApiClientExtension
Name: “Control Plane HTTP API client”
Overview: Extension that contains clients for Control Plane HTTP APIs
Configuration
None
Provided services
org.eclipse.edc.connector.controlplane.api.client.spi.transferprocess.TransferProcessApiClient
Referenced (injected) services
org.eclipse.edc.spi.types.TypeManager (required)
org.eclipse.edc.http.spi.ControlApiHttpClient (required)
Module control-plane-api-client-spi
Name: Control Plane API Services
Artifact: org.eclipse.edc:control-plane-api-client-spi:0.10.1
Categories: None
Extension points
org.eclipse.edc.connector.controlplane.api.client.spi.transferprocess.TransferProcessApiClient
Extensions
Module control-plane-catalog
Artifact: org.eclipse.edc:control-plane-catalog:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.connector.controlplane.catalog.CatalogDefaultServicesExtension
Name: “Catalog Default Services”
Overview: No overview provided.
Configuration
None
Provided services
org.eclipse.edc.connector.controlplane.catalog.spi.DataServiceRegistry
org.eclipse.edc.connector.controlplane.catalog.spi.DistributionResolver
Referenced (injected) services
org.eclipse.edc.connector.controlplane.transfer.spi.flow.DataFlowManager (required)
Class: org.eclipse.edc.connector.controlplane.catalog.CatalogCoreExtension
Name: “Catalog Core”
Overview: No overview provided.
Configuration
None
Provided services
org.eclipse.edc.connector.controlplane.catalog.spi.DatasetResolver
Referenced (injected) services
org.eclipse.edc.connector.controlplane.asset.spi.index.AssetIndex (required)
org.eclipse.edc.connector.controlplane.policy.spi.store.PolicyDefinitionStore (required)
org.eclipse.edc.connector.controlplane.catalog.spi.DistributionResolver (required)
org.eclipse.edc.spi.query.CriterionOperatorRegistry (required)
org.eclipse.edc.connector.controlplane.contract.spi.offer.store.ContractDefinitionStore (required)
org.eclipse.edc.policy.engine.spi.PolicyEngine (required)
Module control-plane-contract
Artifact: org.eclipse.edc:control-plane-contract:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.connector.controlplane.contract.ContractNegotiationDefaultServicesExtension
Name: “Contract Negotiation Default Services”
Overview: Contract Negotiation Default Services Extension
Configuration
None
Provided services
org.eclipse.edc.connector.controlplane.contract.spi.offer.ConsumerOfferResolver
org.eclipse.edc.connector.controlplane.contract.spi.negotiation.observe.ContractNegotiationObservable
org.eclipse.edc.connector.controlplane.policy.spi.store.PolicyArchive
org.eclipse.edc.connector.controlplane.contract.spi.negotiation.ContractNegotiationPendingGuard
Referenced (injected) services
org.eclipse.edc.connector.controlplane.contract.spi.offer.store.ContractDefinitionStore (required)
org.eclipse.edc.connector.controlplane.policy.spi.store.PolicyDefinitionStore (required)
org.eclipse.edc.connector.controlplane.contract.spi.negotiation.store.ContractNegotiationStore (required)
Class: org.eclipse.edc.connector.controlplane.contract.ContractNegotiationCommandExtension
Name: “Contract Negotiation command handlers”
Overview: Contract Negotiation Default Services Extension
Configuration
None
Provided services
None
Referenced (injected) services
org.eclipse.edc.connector.controlplane.contract.spi.negotiation.store.ContractNegotiationStore (required)
org.eclipse.edc.spi.command.CommandHandlerRegistry (required)
Class: org.eclipse.edc.connector.controlplane.contract.ContractCoreExtension
Name: “Contract Core”
Overview: Contract Negotiation Default Services Extension
Configuration
Key | Required | Type | Default | Pattern | Min | Max | Description |
---|---|---|---|---|---|---|---|
edc.negotiation.state-machine.iteration-wait-millis | | long | `` | | | | the iteration wait time in milliseconds in the negotiation state machine. Default value 1000 |
edc.negotiation.consumer.state-machine.batch-size | | int | `` | | | | the batch size in the consumer negotiation state machine. Default value 20 |
edc.negotiation.provider.state-machine.batch-size | | int | `` | | | | the batch size in the provider negotiation state machine. Default value 20 |
edc.negotiation.consumer.send.retry.limit | | int | 7 | | | | how many times a specific operation must be tried before terminating the consumer negotiation with error |
edc.negotiation.provider.send.retry.limit | | int | 7 | | | | how many times a specific operation must be tried before terminating the provider negotiation with error |
edc.negotiation.consumer.send.retry.base-delay.ms | | long | 1000 | | | | The base delay for the consumer negotiation retry mechanism in milliseconds |
edc.negotiation.provider.send.retry.base-delay.ms | | long | 1000 | | | | The base delay for the provider negotiation retry mechanism in milliseconds |
Provided services
org.eclipse.edc.connector.controlplane.contract.spi.validation.ContractValidationService
org.eclipse.edc.connector.controlplane.contract.spi.negotiation.ConsumerContractNegotiationManager
org.eclipse.edc.connector.controlplane.contract.spi.negotiation.ProviderContractNegotiationManager
Referenced (injected) services
org.eclipse.edc.connector.controlplane.asset.spi.index.AssetIndex (required)
org.eclipse.edc.spi.message.RemoteMessageDispatcherRegistry (required)
org.eclipse.edc.connector.controlplane.contract.spi.negotiation.store.ContractNegotiationStore (required)
org.eclipse.edc.policy.engine.spi.PolicyEngine (required)
org.eclipse.edc.connector.controlplane.policy.spi.store.PolicyDefinitionStore (required)
org.eclipse.edc.spi.monitor.Monitor (required)
org.eclipse.edc.spi.telemetry.Telemetry (required)
java.time.Clock (required)
org.eclipse.edc.spi.event.EventRouter (required)
org.eclipse.edc.spi.types.TypeManager (required)
org.eclipse.edc.policy.engine.spi.RuleBindingRegistry (required)
org.eclipse.edc.spi.protocol.ProtocolWebhook (required)
org.eclipse.edc.connector.controlplane.contract.spi.negotiation.observe.ContractNegotiationObservable (required)
org.eclipse.edc.connector.controlplane.contract.spi.negotiation.ContractNegotiationPendingGuard (required)
org.eclipse.edc.spi.system.ExecutorInstrumentation (required)
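A sketch of how the negotiation state machines listed above might be tuned; all values are examples (most repeat the documented defaults).
```properties
# Illustrative only: contract negotiation state machine tuning.
edc.negotiation.state-machine.iteration-wait-millis=1000
edc.negotiation.consumer.state-machine.batch-size=20
edc.negotiation.provider.state-machine.batch-size=20
edc.negotiation.consumer.send.retry.limit=7
edc.negotiation.provider.send.retry.limit=7
```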
Module control-plane-core
Artifact: org.eclipse.edc:control-plane-core:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.connector.controlplane.ControlPlaneDefaultServicesExtension
Name: “Control Plane Default Services”
Overview: Provides default service implementations for fallback
Configuration
None
Provided services
org.eclipse.edc.connector.controlplane.asset.spi.index.AssetIndex
org.eclipse.edc.connector.controlplane.asset.spi.index.DataAddressResolver
org.eclipse.edc.connector.controlplane.contract.spi.offer.store.ContractDefinitionStore
org.eclipse.edc.connector.controlplane.contract.spi.negotiation.store.ContractNegotiationStore
org.eclipse.edc.connector.controlplane.transfer.spi.store.TransferProcessStore
org.eclipse.edc.connector.controlplane.policy.spi.store.PolicyDefinitionStore
org.eclipse.edc.connector.controlplane.services.spi.callback.CallbackRegistry
org.eclipse.edc.connector.controlplane.services.spi.protocol.ProtocolVersionRegistry
Referenced (injected) services
java.time.Clock (required)
org.eclipse.edc.spi.query.CriterionOperatorRegistry (required)
Module control-plane-transfer
Artifact: org.eclipse.edc:control-plane-transfer:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.connector.controlplane.transfer.TransferProcessCommandExtension
Name: “TransferProcessCommandExtension”
Overview: Provides core data transfer services to the system.
Configuration
None
Provided services
None
Referenced (injected) services
org.eclipse.edc.connector.controlplane.transfer.spi.store.TransferProcessStore (required)
Class: org.eclipse.edc.connector.controlplane.transfer.TransferProcessDefaultServicesExtension
Name: “Transfer Process Default Services”
Overview: Provides core data transfer services to the system.
Configuration
None
Provided services
org.eclipse.edc.connector.controlplane.transfer.spi.flow.DataFlowManager
org.eclipse.edc.connector.controlplane.transfer.spi.provision.ResourceManifestGenerator
org.eclipse.edc.connector.controlplane.transfer.spi.provision.ProvisionManager
org.eclipse.edc.connector.controlplane.transfer.spi.observe.TransferProcessObservable
org.eclipse.edc.connector.controlplane.transfer.spi.TransferProcessPendingGuard
org.eclipse.edc.connector.controlplane.transfer.spi.flow.TransferTypeParser
Referenced (injected) services
org.eclipse.edc.policy.engine.spi.PolicyEngine (required)
Class: org.eclipse.edc.connector.controlplane.transfer.TransferCoreExtension
Name: “Transfer Core”
Overview: Provides core data transfer services to the system.
Configuration
Key | Required | Type | Default | Pattern | Min | Max | Description |
---|---|---|---|---|---|---|---|
edc.transfer.state-machine.iteration-wait-millis | | long | `` | | | | the iteration wait time in milliseconds in the transfer process state machine. Default value 1000 |
edc.transfer.state-machine.batch-size | | int | `` | | | | the batch size in the transfer process state machine. Default value 20 |
edc.transfer.send.retry.limit | | int | 7 | | | | how many times a specific operation must be tried before terminating the transfer with error |
edc.transfer.send.retry.base-delay.ms | | long | 1000 | | | | The base delay for the transfer retry mechanism in milliseconds |
Provided services
org.eclipse.edc.connector.controlplane.transfer.spi.TransferProcessManager
org.eclipse.edc.connector.controlplane.transfer.spi.edr.EndpointDataReferenceReceiverRegistry
Referenced (injected) services
org.eclipse.edc.connector.controlplane.transfer.spi.store.TransferProcessStore (required)
org.eclipse.edc.connector.controlplane.transfer.spi.flow.DataFlowManager (required)
org.eclipse.edc.connector.controlplane.transfer.spi.provision.ResourceManifestGenerator (required)
org.eclipse.edc.connector.controlplane.transfer.spi.provision.ProvisionManager (required)
org.eclipse.edc.connector.controlplane.transfer.spi.observe.TransferProcessObservable (required)
org.eclipse.edc.connector.controlplane.policy.spi.store.PolicyArchive (required)
org.eclipse.edc.spi.command.CommandHandlerRegistry (required)
org.eclipse.edc.spi.message.RemoteMessageDispatcherRegistry (required)
org.eclipse.edc.connector.controlplane.asset.spi.index.DataAddressResolver (required)
org.eclipse.edc.spi.security.Vault (required)
org.eclipse.edc.spi.event.EventRouter (required)
java.time.Clock (required)
org.eclipse.edc.spi.types.TypeManager (required)
org.eclipse.edc.spi.telemetry.Telemetry (required)
org.eclipse.edc.transform.spi.TypeTransformerRegistry (required)
org.eclipse.edc.spi.protocol.ProtocolWebhook (required)
org.eclipse.edc.connector.controlplane.transfer.spi.TransferProcessPendingGuard (required)
org.eclipse.edc.spi.system.ExecutorInstrumentation (required)
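Analogously, the transfer process state machine above can be tuned; the values are examples that repeat the documented defaults.
```properties
# Illustrative only: transfer process state machine tuning.
edc.transfer.state-machine.iteration-wait-millis=1000
edc.transfer.state-machine.batch-size=20
edc.transfer.send.retry.limit=7
edc.transfer.send.retry.base-delay.ms=1000
```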
Module core-spi
Name: Core services
Artifact: org.eclipse.edc:core-spi:0.10.1
Categories: None
Extension points
org.eclipse.edc.spi.iam.AudienceResolver
org.eclipse.edc.spi.iam.IdentityService
org.eclipse.edc.spi.command.CommandHandlerRegistry
org.eclipse.edc.spi.event.EventRouter
org.eclipse.edc.spi.message.RemoteMessageDispatcherRegistry
Extensions
Module data-plane-core
Artifact: org.eclipse.edc:data-plane-core:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.connector.dataplane.framework.DataPlaneFrameworkExtension
Name: “Data Plane Framework”
Overview: No overview provided.
Configuration
Key | Required | Type | Default | Pattern | Min | Max | Description |
---|---|---|---|---|---|---|---|
edc.dataplane.state-machine.iteration-wait-millis | | long | 1000 | | | | the iteration wait time in milliseconds in the data plane state machine. |
edc.dataplane.state-machine.batch-size | | int | 20 | | | | the batch size in the data plane state machine. |
edc.dataplane.send.retry.limit | | int | 7 | | | | how many times a specific operation must be tried before terminating the dataplane with error |
edc.dataplane.send.retry.base-delay.ms | | long | 1000 | | | | The base delay for the dataplane retry mechanism in milliseconds |
edc.dataplane.transfer.threads | | int | 20 | | | | Size of the transfer thread pool. It is advisable to set it higher than the state machine batch size |
Provided services
org.eclipse.edc.connector.dataplane.spi.manager.DataPlaneManager
org.eclipse.edc.connector.dataplane.spi.registry.TransferServiceRegistry
org.eclipse.edc.connector.dataplane.spi.pipeline.DataTransferExecutorServiceContainer
Referenced (injected) services
org.eclipse.edc.connector.dataplane.framework.registry.TransferServiceSelectionStrategy (required)
org.eclipse.edc.connector.dataplane.spi.store.DataPlaneStore (required)
org.eclipse.edc.connector.controlplane.api.client.spi.transferprocess.TransferProcessApiClient (required)
org.eclipse.edc.spi.system.ExecutorInstrumentation (required)
org.eclipse.edc.spi.telemetry.Telemetry (required)
java.time.Clock (required)
org.eclipse.edc.connector.dataplane.spi.pipeline.PipelineService (required)
org.eclipse.edc.connector.dataplane.spi.iam.DataPlaneAuthorizationService (required)
Class: org.eclipse.edc.connector.dataplane.framework.DataPlaneDefaultServicesExtension
Name: “Data Plane Framework Default Services”
Overview: No overview provided.
Configuration
None
Provided services
org.eclipse.edc.connector.dataplane.framework.registry.TransferServiceSelectionStrategy
org.eclipse.edc.connector.dataplane.spi.store.DataPlaneStore
org.eclipse.edc.connector.dataplane.spi.store.AccessTokenDataStore
org.eclipse.edc.connector.dataplane.spi.pipeline.PipelineService
org.eclipse.edc.connector.dataplane.spi.iam.PublicEndpointGeneratorService
org.eclipse.edc.connector.dataplane.spi.iam.DataPlaneAuthorizationService
Referenced (injected) services
java.time.Clock (required)
org.eclipse.edc.spi.query.CriterionOperatorRegistry (required)
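A sketch of the Data Plane Framework settings documented above; the thread pool value is an example chosen to be larger than the batch size, as the table recommends.
```properties
# Illustrative only: data plane state machine and transfer thread pool tuning.
edc.dataplane.state-machine.iteration-wait-millis=1000
edc.dataplane.state-machine.batch-size=20
edc.dataplane.transfer.threads=30
```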
Module data-plane-http
Artifact: org.eclipse.edc:data-plane-http:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.connector.dataplane.http.DataPlaneHttpExtension
Name: “Data Plane HTTP”
Overview: Provides support for reading data from an HTTP endpoint and sending data to an HTTP endpoint.
Configuration
Key | Required | Type | Default | Pattern | Min | Max | Description |
---|---|---|---|---|---|---|---|
edc.dataplane.http.sink.partition.size | | int | 5 | | | | Number of partitions for parallel message push in the HttpDataSink |
Provided services
org.eclipse.edc.connector.dataplane.http.spi.HttpRequestParamsProvider
Referenced (injected) services
org.eclipse.edc.http.spi.EdcHttpClient (required)
org.eclipse.edc.connector.dataplane.spi.pipeline.PipelineService (required)
org.eclipse.edc.connector.dataplane.spi.pipeline.DataTransferExecutorServiceContainer (required)
org.eclipse.edc.spi.security.Vault (required)
org.eclipse.edc.spi.types.TypeManager (required)
Module data-plane-http-oauth2-core
Artifact: org.eclipse.edc:data-plane-http-oauth2-core:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.connector.dataplane.http.oauth2.DataPlaneHttpOauth2Extension
Name: “Data Plane HTTP OAuth2”
Overview: Provides support for adding OAuth2 authentication to http data transfer
Configuration
None
Provided services
None
Referenced (injected) services
java.time.Clock (required)
org.eclipse.edc.connector.dataplane.http.spi.HttpRequestParamsProvider (required)
org.eclipse.edc.spi.security.Vault (required)
org.eclipse.edc.jwt.signer.spi.JwsSignerProvider (required)
org.eclipse.edc.iam.oauth2.spi.client.Oauth2Client (required)
Module data-plane-iam
Artifact: org.eclipse.edc:data-plane-iam:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.connector.dataplane.iam.DataPlaneIamDefaultServicesExtension
Name: “Data Plane Default IAM Services”
Overview: No overview provided.
Configuration
Key | Required | Type | Default | Pattern | Min | Max | Description |
---|---|---|---|---|---|---|---|
edc.transfer.proxy.token.signer.privatekey.alias | | string | `` | | | | Alias of private key used for signing tokens, retrieved from private key resolver |
edc.transfer.proxy.token.verifier.publickey.alias | | string | `` | | | | Alias of public key used for verifying the tokens, retrieved from the vault |
Provided services
org.eclipse.edc.connector.dataplane.spi.iam.DataPlaneAccessControlService
org.eclipse.edc.connector.dataplane.spi.iam.DataPlaneAccessTokenService
Referenced (injected) services
org.eclipse.edc.connector.dataplane.spi.store.AccessTokenDataStore (required)
org.eclipse.edc.token.spi.TokenValidationService (required)
org.eclipse.edc.keys.spi.LocalPublicKeyService (required)
org.eclipse.edc.jwt.signer.spi.JwsSignerProvider (required)
Class: org.eclipse.edc.connector.dataplane.iam.DataPlaneIamExtension
Name: “Data Plane IAM”
Overview: No overview provided.
Configuration
None
Provided services
org.eclipse.edc.connector.dataplane.spi.iam.DataPlaneAuthorizationService
Referenced (injected) services
java.time.Clock (required)
org.eclipse.edc.connector.dataplane.spi.iam.DataPlaneAccessTokenService (required)
org.eclipse.edc.connector.dataplane.spi.iam.DataPlaneAccessControlService (required)
org.eclipse.edc.connector.dataplane.spi.iam.PublicEndpointGeneratorService (required)
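A sketch of the token key configuration for the default IAM services above; the alias names are illustrative assumptions, with the private key resolved via the private key resolver and the public key read from the vault as described.
```properties
# Illustrative only: key aliases for signing and verifying data plane access tokens.
edc.transfer.proxy.token.signer.privatekey.alias=dataplane-token-signer
edc.transfer.proxy.token.verifier.publickey.alias=dataplane-token-signer-public
```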
Module data-plane-instance-store-sql
Artifact: org.eclipse.edc:data-plane-instance-store-sql:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.connector.dataplane.selector.store.sql.SqlDataPlaneInstanceStoreExtension
Name: “Sql Data Plane Instance Store”
Overview: Extension that exposes an implementation of {@link DataPlaneInstanceStore} that uses SQL as backend storage
Configuration
Key | Required | Type | Default | Pattern | Min | Max | Description |
---|---|---|---|---|---|---|---|
edc.datasource.dataplaneinstance.name | | string | `` | | | | Name of the datasource to use for accessing data plane instances |
edc.sql.store.dataplaneinstance.datasource | | string | default | | | | The datasource to be used |
Provided services
org.eclipse.edc.connector.dataplane.selector.spi.store.DataPlaneInstanceStore
Referenced (injected) services
org.eclipse.edc.transaction.datasource.spi.DataSourceRegistry (required)
org.eclipse.edc.transaction.spi.TransactionContext (required)
org.eclipse.edc.connector.dataplane.selector.store.sql.schema.DataPlaneInstanceStatements (optional)
org.eclipse.edc.spi.types.TypeManager (required)
org.eclipse.edc.sql.QueryExecutor (required)
java.time.Clock (required)
org.eclipse.edc.sql.bootstrapper.SqlSchemaBootstrapper (required)
Module data-plane-kafka
Artifact: org.eclipse.edc:data-plane-kafka:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.connector.dataplane.kafka.DataPlaneKafkaExtension
Name: “Data Plane Kafka”
Overview: No overview provided.
Configuration
Key | Required | Type | Default | Pattern | Min | Max | Description |
---|---|---|---|---|---|---|---|
edc.dataplane.kafka.sink.partition.size | | int | 5 | | | | The partitionSize used by the kafka data sink |
Provided services
None
Referenced (injected) services
org.eclipse.edc.connector.dataplane.spi.pipeline.DataTransferExecutorServiceContainer (required)
org.eclipse.edc.connector.dataplane.spi.pipeline.PipelineService (required)
java.time.Clock (required)
Module data-plane-public-api-v2
Artifact: org.eclipse.edc:data-plane-public-api-v2:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.connector.dataplane.api.DataPlanePublicApiV2Extension
Name: “Data Plane Public API”
Overview: This extension provides generic endpoints which are open to public participants of the Dataspace to execute
requests on the actual data source.
Configuration
Key | Required | Type | Default | Pattern | Min | Max | Description |
---|---|---|---|---|---|---|---|
edc.dataplane.api.public.baseurl | | string | http://<HOST>:8185/api/v2/public | | | | Base url of the public API endpoint without the trailing slash. This should correspond to the port (8185) and path (/api/v2/public) configured for the public web context. |
Provided services
None
Referenced (injected) services
org.eclipse.edc.web.spi.WebServer (required)
org.eclipse.edc.web.spi.configuration.WebServiceConfigurer (required)
org.eclipse.edc.connector.dataplane.spi.pipeline.PipelineService (required)
org.eclipse.edc.web.spi.WebService (required)
org.eclipse.edc.spi.system.ExecutorInstrumentation (required)
org.eclipse.edc.connector.dataplane.spi.iam.DataPlaneAuthorizationService (required)
org.eclipse.edc.connector.dataplane.spi.iam.PublicEndpointGeneratorService (required)
org.eclipse.edc.spi.system.Hostname (required)
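For example, the externally reachable base URL of the public data plane API might be advertised as follows; the hostname is an example, and the port and path should match the public web context configuration.
```properties
# Illustrative only: public data plane API base URL (no trailing slash).
edc.dataplane.api.public.baseurl=https://dataplane.example.com:8185/api/v2/public
```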
Module data-plane-selector-api
Artifact: org.eclipse.edc:data-plane-selector-api:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.connector.dataplane.selector.DataPlaneSelectorApiExtension
Name: “DataPlane selector API”
Overview: No overview provided.
Configuration
None
Provided services
None
Referenced (injected) services
org.eclipse.edc.web.spi.WebService (required)
org.eclipse.edc.connector.dataplane.selector.spi.DataPlaneSelectorService (required)
org.eclipse.edc.spi.types.TypeManager (required)
org.eclipse.edc.transform.spi.TypeTransformerRegistry (required)
org.eclipse.edc.validator.spi.JsonObjectValidatorRegistry (required)
java.time.Clock (required)
Module data-plane-selector-client
Artifact: org.eclipse.edc:data-plane-selector-client:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.connector.dataplane.selector.DataPlaneSelectorClientExtension
Name: “DataPlane Selector client”
Overview: No overview provided.
Configuration
Key | Required | Type | Default | Pattern | Min | Max | Description |
---|---|---|---|---|---|---|---|
edc.dpf.selector.url | * | string | `` | | | | DataPlane selector api URL |
edc.dataplane.client.selector.strategy | | string | random | | | | Defines strategy for Data Plane instance selection in case Data Plane is not embedded in current runtime |
Provided services
org.eclipse.edc.connector.dataplane.selector.spi.DataPlaneSelectorService
Referenced (injected) services
org.eclipse.edc.http.spi.ControlApiHttpClient (required)
org.eclipse.edc.spi.types.TypeManager (required)
org.eclipse.edc.transform.spi.TypeTransformerRegistry (required)
org.eclipse.edc.jsonld.spi.JsonLd (required)
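A sketch of pointing a runtime at a remote data plane selector; the URL is an example value whose path depends on how the selector API is deployed, and `random` repeats the documented default strategy.
```properties
# Illustrative only: remote data plane selector.
edc.dpf.selector.url=https://controlplane.example.com/api/dataplanes
edc.dataplane.client.selector.strategy=random
```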
Module data-plane-selector-control-api
Artifact: org.eclipse.edc:data-plane-selector-control-api:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.connector.dataplane.selector.control.api.DataplaneSelectorControlApiExtension
Name: “Dataplane Selector Control API”
Overview: No overview provided.
Configuration
None
Provided services
None
Referenced (injected) services
org.eclipse.edc.web.spi.WebService (required)
org.eclipse.edc.validator.spi.JsonObjectValidatorRegistry (required)
org.eclipse.edc.spi.types.TypeManager (required)
org.eclipse.edc.transform.spi.TypeTransformerRegistry (required)
org.eclipse.edc.connector.dataplane.selector.spi.DataPlaneSelectorService (required)
java.time.Clock (required)
Module data-plane-selector-core
Artifact: org.eclipse.edc:data-plane-selector-core:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.connector.dataplane.selector.DataPlaneSelectorExtension
Name: “Data Plane Selector core”
Overview: No overview provided.
Configuration
Key | Required | Type | Default | Pattern | Min | Max | Description |
---|---|---|---|---|---|---|---|
edc.data.plane.selector.state-machine.iteration-wait-millis | | long | 1000 | | | | the iteration wait time in milliseconds in the data plane selector state machine. |
edc.data.plane.selector.state-machine.batch-size | | int | 20 | | | | the batch size in the data plane selector state machine. |
edc.data.plane.selector.state-machine.check.period | | int | 60 | | | | the check period for data plane availability, in seconds |
Provided services
org.eclipse.edc.connector.dataplane.selector.spi.DataPlaneSelectorService
Referenced (injected) services
org.eclipse.edc.connector.dataplane.selector.spi.store.DataPlaneInstanceStore (required)
org.eclipse.edc.transaction.spi.TransactionContext (required)
org.eclipse.edc.connector.dataplane.selector.spi.strategy.SelectionStrategyRegistry (required)
org.eclipse.edc.connector.dataplane.selector.spi.client.DataPlaneClientFactory (required)
Class: org.eclipse.edc.connector.dataplane.selector.DataPlaneSelectorDefaultServicesExtension
Name: “DataPlaneSelectorDefaultServicesExtension”
Overview: No overview provided.
Configuration
None
Provided services
org.eclipse.edc.connector.dataplane.selector.spi.store.DataPlaneInstanceStore
org.eclipse.edc.connector.dataplane.selector.spi.strategy.SelectionStrategyRegistry
Referenced (injected) services
java.time.Clock (required)
org.eclipse.edc.spi.query.CriterionOperatorRegistry (required)
Module data-plane-selector-spi
Name: DataPlane selector services
Artifact: org.eclipse.edc:data-plane-selector-spi:0.10.1
Categories: None
Extension points
org.eclipse.edc.connector.dataplane.selector.spi.DataPlaneSelectorService
org.eclipse.edc.connector.dataplane.selector.spi.client.DataPlaneClient
Extensions
Module data-plane-self-registration
Artifact: org.eclipse.edc:data-plane-self-registration:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.connector.dataplane.registration.DataplaneSelfRegistrationExtension
Name: “Dataplane Self Registration”
Overview: No overview provided.
Configuration
Key | Required | Type | Default | Pattern | Min | Max | Description |
---|---|---|---|---|---|---|---|
edc.data.plane.self.unregistration | | boolean | false | | | | Enable data-plane un-registration at shutdown (not suggested for clustered environments) |
Provided services
None
Referenced (injected) services
org.eclipse.edc.connector.dataplane.selector.spi.DataPlaneSelectorService (required)
org.eclipse.edc.web.spi.configuration.context.ControlApiUrl (required)
org.eclipse.edc.connector.dataplane.spi.pipeline.PipelineService (required)
org.eclipse.edc.connector.dataplane.spi.iam.PublicEndpointGeneratorService (required)
org.eclipse.edc.spi.system.health.HealthCheckService (required)
Module data-plane-signaling-api
Artifact: org.eclipse.edc:data-plane-signaling-api:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.connector.dataplane.api.DataPlaneSignalingApiExtension
Name: “DataPlane Signaling API extension”
Overview: No overview provided.
Configuration
None
Provided services
None
Referenced (injected) services
org.eclipse.edc.web.spi.WebService (required)
org.eclipse.edc.transform.spi.TypeTransformerRegistry (required)
org.eclipse.edc.connector.dataplane.spi.manager.DataPlaneManager (required)
org.eclipse.edc.spi.types.TypeManager (required)
Module data-plane-signaling-client
Artifact: org.eclipse.edc:data-plane-signaling-client:0.10.1
Categories: None
Extension points
None
Extensions
Name: “Data Plane Signaling transform Client”
Overview: This extension provides an implementation of {@link DataPlaneClient} compliant with the data plane signaling protocol
Configuration
None
Provided services
None
Referenced (injected) services
org.eclipse.edc.transform.spi.TypeTransformerRegistry (required)
org.eclipse.edc.spi.types.TypeManager (required)
Class: org.eclipse.edc.connector.dataplane.client.DataPlaneSignalingClientExtension
Name: “Data Plane Signaling Client”
Overview: This extension provides an implementation of {@link DataPlaneClient} compliant with the data plane signaling protocol
Configuration
None
Provided services
org.eclipse.edc.connector.dataplane.selector.spi.client.DataPlaneClientFactory
Referenced (injected) services
org.eclipse.edc.http.spi.ControlApiHttpClient (optional)
org.eclipse.edc.spi.types.TypeManager (required)
org.eclipse.edc.transform.spi.TypeTransformerRegistry (required)
org.eclipse.edc.jsonld.spi.JsonLd (required)
org.eclipse.edc.connector.dataplane.spi.manager.DataPlaneManager (optional)
Module data-plane-spi
Name: DataPlane services
Artifact: org.eclipse.edc:data-plane-spi:0.10.1
Categories: None
Extension points
org.eclipse.edc.connector.dataplane.spi.registry.TransferServiceRegistry
org.eclipse.edc.connector.dataplane.spi.pipeline.PipelineService
org.eclipse.edc.connector.dataplane.spi.iam.DataPlaneAccessControlService
org.eclipse.edc.connector.dataplane.spi.iam.DataPlaneAccessTokenService
org.eclipse.edc.connector.dataplane.spi.manager.DataPlaneManager
Extensions
Module data-plane-store-sql
Artifact: org.eclipse.edc:data-plane-store-sql:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.connector.dataplane.store.sql.SqlDataPlaneStoreExtension
Name: “Sql Data Plane Store”
Overview: Provides a SQL store for data plane flow request states
Configuration
Key | Required | Type | Default | Pattern | Min | Max | Description |
---|---|---|---|---|---|---|---|
edc.datasource.dataplane.name | | string | `` | | | | Name of the datasource to use for accessing data plane store |
edc.sql.store.dataplane.datasource | | string | default | | | | The datasource to be used |
Provided services
org.eclipse.edc.connector.dataplane.spi.store.DataPlaneStore
Referenced (injected) services
org.eclipse.edc.transaction.datasource.spi.DataSourceRegistry (required)
org.eclipse.edc.transaction.spi.TransactionContext (required)
org.eclipse.edc.connector.dataplane.store.sql.schema.DataPlaneStatements (optional)
java.time.Clock (required)
org.eclipse.edc.spi.types.TypeManager (required)
org.eclipse.edc.sql.QueryExecutor (required)
org.eclipse.edc.sql.bootstrapper.SqlSchemaBootstrapper (required)
Module dsp-catalog-http-api
Artifact: org.eclipse.edc:dsp-catalog-http-api:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.protocol.dsp.catalog.http.api.DspCatalogApiExtension
Name: “Dataspace Protocol Catalog Extension”
Overview: Creates and registers the controller for dataspace protocol catalog requests.
Configuration
None
Provided services
None
Referenced (injected) services
org.eclipse.edc.web.spi.WebService (required)
org.eclipse.edc.spi.protocol.ProtocolWebhook (required)
org.eclipse.edc.connector.controlplane.services.spi.catalog.CatalogProtocolService (required)
org.eclipse.edc.connector.controlplane.catalog.spi.DataServiceRegistry (required)
org.eclipse.edc.validator.spi.JsonObjectValidatorRegistry (required)
org.eclipse.edc.protocol.dsp.http.spi.message.DspRequestHandler (required)
org.eclipse.edc.spi.query.CriterionOperatorRegistry (required)
org.eclipse.edc.connector.controlplane.services.spi.protocol.ProtocolVersionRegistry (required)
org.eclipse.edc.transform.spi.TypeTransformerRegistry (required)
org.eclipse.edc.spi.monitor.Monitor (required)
org.eclipse.edc.spi.types.TypeManager (required)
org.eclipse.edc.jsonld.spi.JsonLd (required)
Module dsp-catalog-http-dispatcher
Artifact: org.eclipse.edc:dsp-catalog-http-dispatcher:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.protocol.dsp.catalog.http.dispatcher.DspCatalogHttpDispatcherExtension
Name: “Dataspace Protocol Catalog HTTP Dispatcher Extension”
Overview: Creates and registers the HTTP dispatcher delegate for sending a catalog request as defined in
the dataspace protocol specification.
Configuration
None
Provided services
None
Referenced (injected) services
org.eclipse.edc.protocol.dsp.http.spi.dispatcher.DspHttpRemoteMessageDispatcher (required)
org.eclipse.edc.protocol.dsp.http.spi.serialization.JsonLdRemoteMessageSerializer (required)
org.eclipse.edc.protocol.dsp.http.spi.DspProtocolParser (required)
Module dsp-catalog-transform
Artifact: org.eclipse.edc:dsp-catalog-transform:0.10.1
Categories: None
Extension points
None
Extensions
Name: “Dataspace Protocol Catalog Transform Extension”
Overview: Provides the transformers for catalog message types via the {@link TypeTransformerRegistry}.
Configuration
None
Provided services
None
Referenced (injected) services
org.eclipse.edc.transform.spi.TypeTransformerRegistry
(required)org.eclipse.edc.spi.types.TypeManager
(required)org.eclipse.edc.participant.spi.ParticipantIdMapper
(required)
Module dsp-http-api-configuration
Artifact: org.eclipse.edc:dsp-http-api-configuration:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.protocol.dsp.http.api.configuration.DspApiConfigurationExtension
Name: “Dataspace Protocol API Configuration Extension”
Overview: Configures the ‘protocol’ API context.
Configuration
Key | Required | Type | Default | Pattern | Min | Max | Description |
---|---|---|---|---|---|---|---|
edc.dsp.callback.address | | string | <hostname:protocol.port/protocol.path> | | | | Configures the endpoint for reaching the Protocol API. |
Provided services
org.eclipse.edc.spi.protocol.ProtocolWebhook
Referenced (injected) services
org.eclipse.edc.spi.types.TypeManager
(required)org.eclipse.edc.web.spi.WebService
(required)org.eclipse.edc.web.spi.WebServer
(required)org.eclipse.edc.web.spi.configuration.WebServiceConfigurer
(required)org.eclipse.edc.jsonld.spi.JsonLd
(required)org.eclipse.edc.transform.spi.TypeTransformerRegistry
(required)org.eclipse.edc.participant.spi.ParticipantIdMapper
(required)org.eclipse.edc.spi.system.Hostname
(required)
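A minimal sketch of how the callback address above could be set, assuming the connector's protocol context is reachable at a hypothetical public URL:

```properties
# Illustrative value: publicly reachable endpoint of the Protocol API,
# used as the callback address in outgoing DSP messages.
edc.dsp.callback.address=https://connector.example.com/api/dsp
```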
Module dsp-http-core
Artifact: org.eclipse.edc:dsp-http-core:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.protocol.dsp.http.DspHttpCoreExtension
Name: “Dataspace Protocol Core Extension”
Overview: Provides an implementation of {@link DspHttpRemoteMessageDispatcher} to support sending dataspace
protocol messages. The dispatcher can then be used by other extensions to add support for
specific message types.
Configuration
None
Provided services
org.eclipse.edc.protocol.dsp.http.spi.dispatcher.DspHttpRemoteMessageDispatcher
org.eclipse.edc.protocol.dsp.http.spi.message.DspRequestHandler
org.eclipse.edc.protocol.dsp.http.spi.serialization.JsonLdRemoteMessageSerializer
org.eclipse.edc.protocol.dsp.spi.transform.DspProtocolTypeTransformerRegistry
org.eclipse.edc.protocol.dsp.http.spi.DspProtocolParser
Referenced (injected) services
org.eclipse.edc.spi.message.RemoteMessageDispatcherRegistry
(required)org.eclipse.edc.http.spi.EdcHttpClient
(required)org.eclipse.edc.spi.iam.IdentityService
(required)org.eclipse.edc.transform.spi.TypeTransformerRegistry
(required)org.eclipse.edc.spi.types.TypeManager
(required)org.eclipse.edc.jsonld.spi.JsonLd
(required)org.eclipse.edc.token.spi.TokenDecorator
(optional)org.eclipse.edc.policy.engine.spi.PolicyEngine
(required)org.eclipse.edc.spi.iam.AudienceResolver
(required)org.eclipse.edc.spi.monitor.Monitor
(required)org.eclipse.edc.validator.spi.JsonObjectValidatorRegistry
(required)org.eclipse.edc.connector.controlplane.services.spi.protocol.ProtocolVersionRegistry
(required)
Module dsp-negotiation-http-api
Artifact: org.eclipse.edc:dsp-negotiation-http-api:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.protocol.dsp.negotiation.http.api.DspNegotiationApiExtension
Name: “Dataspace Protocol Negotiation Api”
Overview: Creates and registers the controller for dataspace protocol negotiation requests.
Configuration
None
Provided services
None
Referenced (injected) services
org.eclipse.edc.web.spi.WebService
(required)org.eclipse.edc.connector.controlplane.services.spi.contractnegotiation.ContractNegotiationProtocolService
(required)org.eclipse.edc.validator.spi.JsonObjectValidatorRegistry
(required)org.eclipse.edc.protocol.dsp.http.spi.message.DspRequestHandler
(required)org.eclipse.edc.connector.controlplane.services.spi.protocol.ProtocolVersionRegistry
(required)org.eclipse.edc.jsonld.spi.JsonLd
(required)org.eclipse.edc.spi.types.TypeManager
(required)
Module dsp-negotiation-http-dispatcher
Artifact: org.eclipse.edc:dsp-negotiation-http-dispatcher:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.protocol.dsp.negotiation.http.dispatcher.DspNegotiationHttpDispatcherExtension
Name: “Dataspace Protocol Negotiation HTTP Dispatcher Extension”
Overview: No overview provided.
Configuration
None
Provided services
None
Referenced (injected) services
org.eclipse.edc.protocol.dsp.http.spi.dispatcher.DspHttpRemoteMessageDispatcher
(required)org.eclipse.edc.protocol.dsp.http.spi.serialization.JsonLdRemoteMessageSerializer
(required)org.eclipse.edc.spi.types.TypeManager
(required)org.eclipse.edc.protocol.dsp.spi.transform.DspProtocolTypeTransformerRegistry
(required)org.eclipse.edc.jsonld.spi.JsonLd
(required)org.eclipse.edc.protocol.dsp.http.spi.DspProtocolParser
(required)
Module dsp-negotiation-transform
Artifact: org.eclipse.edc:dsp-negotiation-transform:0.10.1
Categories: None
Extension points
None
Extensions
Name: “Dataspace Protocol Negotiation Transform Extension”
Overview: Provides the transformers for negotiation message types via the {@link TypeTransformerRegistry}.
Configuration
None
Provided services
None
Referenced (injected) services
org.eclipse.edc.transform.spi.TypeTransformerRegistry
(required)
Module dsp-transfer-process-http-api
Artifact: org.eclipse.edc:dsp-transfer-process-http-api:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.protocol.dsp.transferprocess.http.api.DspTransferProcessApiExtension
Name: “Dataspace Protocol: TransferProcess API Extension”
Overview: Creates and registers the controller for dataspace protocol transfer process requests.
Configuration
None
Provided services
None
Referenced (injected) services
org.eclipse.edc.web.spi.WebService
(required)org.eclipse.edc.connector.controlplane.services.spi.transferprocess.TransferProcessProtocolService
(required)org.eclipse.edc.protocol.dsp.http.spi.message.DspRequestHandler
(required)org.eclipse.edc.validator.spi.JsonObjectValidatorRegistry
(required)org.eclipse.edc.connector.controlplane.services.spi.protocol.ProtocolVersionRegistry
(required)org.eclipse.edc.jsonld.spi.JsonLd
(required)org.eclipse.edc.spi.types.TypeManager
(required)
Module dsp-transfer-process-http-dispatcher
Artifact: org.eclipse.edc:dsp-transfer-process-http-dispatcher:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.protocol.dsp.transferprocess.http.dispatcher.DspTransferProcessDispatcherExtension
Name: “Dataspace Protocol Transfer HTTP Dispatcher Extension”
Overview: Provides HTTP dispatching for Dataspace Protocol transfer process messages via the {@link DspHttpRemoteMessageDispatcher}.
Configuration
None
Provided services
None
Referenced (injected) services
org.eclipse.edc.protocol.dsp.http.spi.dispatcher.DspHttpRemoteMessageDispatcher
(required)org.eclipse.edc.protocol.dsp.http.spi.serialization.JsonLdRemoteMessageSerializer
(required)org.eclipse.edc.spi.types.TypeManager
(required)org.eclipse.edc.protocol.dsp.spi.transform.DspProtocolTypeTransformerRegistry
(required)org.eclipse.edc.jsonld.spi.JsonLd
(required)org.eclipse.edc.protocol.dsp.http.spi.DspProtocolParser
(required)
Module dsp-transfer-process-transform
Artifact: org.eclipse.edc:dsp-transfer-process-transform:0.10.1
Categories: None
Extension points
None
Extensions
Name: “Dataspace Protocol Transfer Process Transform Extension”
Overview: Provides the transformers for transferprocess message types via the {@link TypeTransformerRegistry}.
Configuration
None
Provided services
None
Referenced (injected) services
org.eclipse.edc.transform.spi.TypeTransformerRegistry
(required)org.eclipse.edc.spi.types.TypeManager
(required)
Module dsp-version-http-api
Artifact: org.eclipse.edc:dsp-version-http-api:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.protocol.dsp.version.http.api.DspVersionApiExtension
Name: “Dataspace Protocol Version Api”
Overview: Provides the API for the protocol versions.
Configuration
None
Provided services
None
Referenced (injected) services
org.eclipse.edc.web.spi.WebService
(required)org.eclipse.edc.transform.spi.TypeTransformerRegistry
(required)org.eclipse.edc.protocol.dsp.http.spi.message.DspRequestHandler
(required)org.eclipse.edc.connector.controlplane.services.spi.protocol.VersionProtocolService
(required)org.eclipse.edc.jsonld.spi.JsonLd
(required)org.eclipse.edc.spi.types.TypeManager
(required)
Module edr-cache-api
Artifact: org.eclipse.edc:edr-cache-api:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.connector.controlplane.api.management.edr.EdrCacheApiExtension
Name: “Management API: EDR cache”
Overview: No overview provided.
Configuration
None
Provided services
None
Referenced (injected) services
org.eclipse.edc.web.spi.WebService
(required)org.eclipse.edc.transform.spi.TypeTransformerRegistry
(required)org.eclipse.edc.validator.spi.JsonObjectValidatorRegistry
(required)org.eclipse.edc.edr.spi.store.EndpointDataReferenceStore
(required)org.eclipse.edc.spi.monitor.Monitor
(required)
Module edr-index-sql
Artifact: org.eclipse.edc:edr-index-sql:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.edr.store.index.SqlEndpointDataReferenceEntryIndexExtension
Name: “SQL edr entry store”
Overview: No overview provided.
Configuration
Key | Required | Type | Default | Pattern | Min | Max | Description |
---|---|---|---|---|---|---|---|
edc.sql.store.edr.datasource | | string | default | | | | The datasource to be used |
Provided services
org.eclipse.edc.edr.spi.store.EndpointDataReferenceEntryIndex
Referenced (injected) services
org.eclipse.edc.transaction.datasource.spi.DataSourceRegistry
(required)org.eclipse.edc.transaction.spi.TransactionContext
(required)org.eclipse.edc.edr.store.index.sql.schema.EndpointDataReferenceEntryStatements
(optional)org.eclipse.edc.sql.QueryExecutor
(required)org.eclipse.edc.spi.types.TypeManager
(required)org.eclipse.edc.sql.bootstrapper.SqlSchemaBootstrapper
(required)
Module edr-store-core
Artifact: org.eclipse.edc:edr-store-core:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.edr.store.EndpointDataReferenceStoreDefaultServicesExtension
Name: “Endpoint Data Reference Core Extension”
Overview: No overview provided.
Configuration
Key | Required | Type | Default | Pattern | Min | Max | Description |
---|---|---|---|---|---|---|---|
edc.edr.vault.path | | string | `` | | | | Directory/path in the vault under which EDRs are stored, for vaults that support hierarchical structuring. |
Provided services
org.eclipse.edc.edr.spi.store.EndpointDataReferenceCache
org.eclipse.edc.edr.spi.store.EndpointDataReferenceEntryIndex
Referenced (injected) services
org.eclipse.edc.spi.query.CriterionOperatorRegistry
(required)org.eclipse.edc.spi.security.Vault
(required)org.eclipse.edc.spi.types.TypeManager
(required)
Class: org.eclipse.edc.edr.store.EndpointDataReferenceStoreExtension
Name: “Endpoint Data Reference Core Extension”
Overview: No overview provided.
Configuration
None
Provided services
org.eclipse.edc.edr.spi.store.EndpointDataReferenceStore
Referenced (injected) services
org.eclipse.edc.edr.spi.store.EndpointDataReferenceEntryIndex
(required)org.eclipse.edc.edr.spi.store.EndpointDataReferenceCache
(required)org.eclipse.edc.transaction.spi.TransactionContext
(required)
Module edr-store-receiver
Artifact: org.eclipse.edc:edr-store-receiver:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.connector.controlplane.edr.store.receiver.EndpointDataReferenceStoreReceiverExtension
Name: “Endpoint Data Reference Store Receiver Extension”
Overview: No overview provided.
Configuration
Key | Required | Type | Default | Pattern | Min | Max | Description |
---|---|---|---|---|---|---|---|
edc.edr.receiver.sync | | string | false | | | | If true, the EDR receiver will be registered as a synchronous listener |
Provided services
None
Referenced (injected) services
org.eclipse.edc.spi.event.EventRouter
(required)org.eclipse.edc.edr.spi.store.EndpointDataReferenceStore
(required)org.eclipse.edc.spi.monitor.Monitor
(required)org.eclipse.edc.connector.controlplane.services.spi.contractagreement.ContractAgreementService
(required)org.eclipse.edc.connector.controlplane.policy.spi.store.PolicyArchive
(required)org.eclipse.edc.transaction.spi.TransactionContext
(required)
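For illustration, enabling the synchronous EDR receiver from the table above could look like this (the value shown is just an example; the default is `false`):

```properties
# Illustrative: register the EDR receiver as a synchronous event listener
edc.edr.receiver.sync=true
```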
Module events-cloud-http
Artifact: org.eclipse.edc:events-cloud-http:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.event.cloud.http.CloudEventsHttpExtension
Name: “Cloud events HTTP”
Overview: No overview provided.
Configuration
Key | Required | Type | Default | Pattern | Min | Max | Description |
---|---|---|---|---|---|---|---|
edc.events.cloudevents.endpoint | * | string | `` | | | | |
Provided services
None
Referenced (injected) services
org.eclipse.edc.http.spi.EdcHttpClient
(required)org.eclipse.edc.spi.event.EventRouter
(required)org.eclipse.edc.spi.types.TypeManager
(required)java.time.Clock
(required)org.eclipse.edc.spi.system.Hostname
(required)
Module iam-mock
Artifact: org.eclipse.edc:iam-mock:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.iam.mock.IamMockExtension
Name: “Mock IAM”
Overview: An IAM provider mock used for testing.
Configuration
None
Provided services
org.eclipse.edc.spi.iam.IdentityService
org.eclipse.edc.spi.iam.AudienceResolver
Referenced (injected) services
org.eclipse.edc.spi.types.TypeManager
(required)
Module identity-did-core
Artifact: org.eclipse.edc:identity-did-core:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.iam.did.IdentityDidCoreExtension
Name: “Identity Did Core”
Overview: No overview provided.
Configuration
None
Provided services
org.eclipse.edc.iam.did.spi.resolution.DidResolverRegistry
org.eclipse.edc.iam.did.spi.resolution.DidPublicKeyResolver
Referenced (injected) services
org.eclipse.edc.keys.spi.KeyParserRegistry
(required)
Module identity-did-spi
Name: IAM DID services
Artifact: org.eclipse.edc:identity-did-spi:0.10.1
Categories: None
Extension points
org.eclipse.edc.iam.did.spi.store.DidStore
org.eclipse.edc.iam.did.spi.resolution.DidResolverRegistry
org.eclipse.edc.iam.did.spi.resolution.DidPublicKeyResolver
org.eclipse.edc.iam.did.spi.credentials.CredentialsVerifier
Extensions
Module identity-did-web
Artifact: org.eclipse.edc:identity-did-web:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.iam.did.web.WebDidExtension
Name: “Web DID”
Overview: Initializes support for resolving Web DIDs.
Configuration
Key | Required | Type | Default | Pattern | Min | Max | Description |
---|---|---|---|---|---|---|---|
edc.webdid.doh.url | | string | `` | | | | |
edc.iam.did.web.use.https | | string | `` | | | | |
Provided services
None
Referenced (injected) services
org.eclipse.edc.iam.did.spi.resolution.DidResolverRegistry
(required)org.eclipse.edc.http.spi.EdcHttpClient
(required)org.eclipse.edc.spi.types.TypeManager
(required)
Module identity-trust-core
Artifact: org.eclipse.edc:identity-trust-core:0.10.1
Categories: iam, transform, jsonld
Extension points
None
Extensions
Name: “DCP scope extractor extension”
Overview: No overview provided.
Configuration
None
Provided services
None
Referenced (injected) services
org.eclipse.edc.policy.engine.spi.PolicyEngine
(required)org.eclipse.edc.iam.identitytrust.spi.scope.ScopeExtractorRegistry
(required)org.eclipse.edc.spi.monitor.Monitor
(required)
Name: “Identity And Trust Transform Extension”
Overview: No overview provided.
Configuration
None
Provided services
None
Referenced (injected) services
org.eclipse.edc.transform.spi.TypeTransformerRegistry
(required)org.eclipse.edc.jsonld.spi.JsonLd
(required)org.eclipse.edc.spi.types.TypeManager
(required)
Class: org.eclipse.edc.iam.identitytrust.core.DcpDefaultServicesExtension
Name: “Identity And Trust Extension to register default services”
Overview: No overview provided.
Configuration
Key | Required | Type | Default | Pattern | Min | Max | Description |
---|---|---|---|---|---|---|---|
edc.iam.sts.privatekey.alias | | string | A random EC private key | | | | Alias of private key used for signing tokens, retrieved from private key resolver |
edc.iam.sts.publickey.id | | string | A random EC public key | | | | Id used by the counterparty to resolve the public key for token validation, e.g. did:example:123#public-key-0 |
edc.iam.sts.token.expiration | | string | 5 | | | | Self-issued ID Token expiration in minutes. The default is 5 minutes |
Provided services
org.eclipse.edc.iam.identitytrust.spi.SecureTokenService
org.eclipse.edc.iam.verifiablecredentials.spi.validation.TrustedIssuerRegistry
org.eclipse.edc.iam.identitytrust.spi.verification.SignatureSuiteRegistry
org.eclipse.edc.iam.identitytrust.spi.DcpParticipantAgentServiceExtension
org.eclipse.edc.iam.identitytrust.spi.scope.ScopeExtractorRegistry
org.eclipse.edc.spi.iam.AudienceResolver
org.eclipse.edc.iam.identitytrust.spi.ClaimTokenCreatorFunction
Referenced (injected) services
java.time.Clock
(required)org.eclipse.edc.jwt.signer.spi.JwsSignerProvider
(required)org.eclipse.edc.jwt.validation.jti.JtiValidationStore
(required)
Class: org.eclipse.edc.iam.identitytrust.core.IdentityAndTrustExtension
Name: “Identity And Trust Extension”
Overview: No overview provided.
Configuration
Key | Required | Type | Default | Pattern | Min | Max | Description |
---|---|---|---|---|---|---|---|
edc.iam.credential.revocation.cache.validity | | long | 900000 | | | | Validity period of cached StatusList2021 credential entries in milliseconds. |
edc.iam.issuer.id | * | string | `` | | | | DID of this connector |
edc.sql.store.jti.cleanup.period | | string | 60 | | | | The period of the JTI entry reaper thread in seconds |
Provided services
org.eclipse.edc.spi.iam.IdentityService
org.eclipse.edc.iam.identitytrust.spi.CredentialServiceClient
org.eclipse.edc.iam.verifiablecredentials.spi.validation.PresentationVerifier
Referenced (injected) services
org.eclipse.edc.iam.identitytrust.spi.SecureTokenService
(required)org.eclipse.edc.iam.verifiablecredentials.spi.validation.TrustedIssuerRegistry
(required)org.eclipse.edc.spi.types.TypeManager
(required)org.eclipse.edc.iam.identitytrust.spi.verification.SignatureSuiteRegistry
(required)org.eclipse.edc.jsonld.spi.JsonLd
(required)java.time.Clock
(required)org.eclipse.edc.http.spi.EdcHttpClient
(required)org.eclipse.edc.transform.spi.TypeTransformerRegistry
(required)org.eclipse.edc.iam.did.spi.resolution.DidResolverRegistry
(required)org.eclipse.edc.token.spi.TokenValidationService
(required)org.eclipse.edc.token.spi.TokenValidationRulesRegistry
(required)org.eclipse.edc.iam.did.spi.resolution.DidPublicKeyResolver
(required)org.eclipse.edc.iam.identitytrust.spi.ClaimTokenCreatorFunction
(required)org.eclipse.edc.participant.spi.ParticipantAgentService
(required)org.eclipse.edc.iam.identitytrust.spi.DcpParticipantAgentServiceExtension
(required)org.eclipse.edc.iam.verifiablecredentials.spi.model.RevocationServiceRegistry
(required)org.eclipse.edc.jwt.validation.jti.JtiValidationStore
(required)org.eclipse.edc.spi.system.ExecutorInstrumentation
(required)
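A hedged sketch combining the DCP-related settings listed above for this module; the DID, key alias, and key id values are placeholders, not defaults shipped with EDC:

```properties
# Illustrative DCP / identity-trust settings (all values are placeholders)
edc.iam.issuer.id=did:web:connector.example.com
edc.iam.sts.privatekey.alias=connector-signing-key
edc.iam.sts.publickey.id=did:web:connector.example.com#key-1
# Cache validity for revocation (StatusList) credentials, in milliseconds
edc.iam.credential.revocation.cache.validity=900000
```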
Module identity-trust-issuers-configuration
Artifact: org.eclipse.edc:identity-trust-issuers-configuration:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.iam.identitytrust.issuer.configuration.TrustedIssuerConfigurationExtension
Name: “Trusted Issuers Configuration Extensions”
Overview: This DCP extension makes it possible to configure a list of trusted issuers that will be matched against the Verifiable Credential issuers.
Configuration
Key | Required | Type | Default | Pattern | Min | Max | Description |
---|---|---|---|---|---|---|---|
edc.iam.trusted-issuer.<issuerAlias>.id | * | string | `` | | | | ID of the issuer. |
edc.iam.trusted-issuer.<issuerAlias>.properties | | string | `` | | | | Additional properties of the issuer. |
edc.iam.trusted-issuer.<issuerAlias>.supportedtypes | | string | `` | | | | List of supported credential types for this issuer. |
Provided services
None
Referenced (injected) services
org.eclipse.edc.iam.verifiablecredentials.spi.validation.TrustedIssuerRegistry
(required)org.eclipse.edc.spi.types.TypeManager
(required)org.eclipse.edc.spi.monitor.Monitor
(required)
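As a sketch of the `<issuerAlias>` pattern above, a single trusted issuer could be configured like this. The alias `dataspace-issuer`, the DID, and the value format for the supported types are assumptions for illustration only:

```properties
# Illustrative trusted-issuer entry; "dataspace-issuer" is an arbitrary alias
edc.iam.trusted-issuer.dataspace-issuer.id=did:web:issuer.example.com
# Optional: restrict which credential types this issuer is trusted for (format assumed)
edc.iam.trusted-issuer.dataspace-issuer.supportedtypes=MembershipCredential
```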
Module identity-trust-sts-accounts-api
Artifact: org.eclipse.edc:identity-trust-sts-accounts-api:0.10.1
Categories: sts, dcp, api
Extension points
None
Extensions
Class: org.eclipse.edc.api.iam.identitytrust.sts.accounts.StsAccountsApiExtension
Name: “Secure Token Service Accounts API Extension”
Overview: No overview provided.
Configuration
Key | Required | Type | Default | Pattern | Min | Max | Description |
---|---|---|---|---|---|---|---|
edc.api.accounts.key | | string | `` | | | | API key (or Vault alias) for the STS Accounts API’s default authentication mechanism (token-based). |
Provided services
None
Referenced (injected) services
org.eclipse.edc.iam.identitytrust.sts.spi.service.StsAccountService
(required)org.eclipse.edc.web.spi.WebService
(required)org.eclipse.edc.api.auth.spi.registry.ApiAuthenticationRegistry
(required)org.eclipse.edc.spi.security.Vault
(required)
Class: org.eclipse.edc.api.iam.identitytrust.sts.accounts.StsAccountsApiConfigurationExtension
Name: “Secure Token Service Accounts API configuration”
Overview: No overview provided.
Configuration
None
Provided services
None
Referenced (injected) services
org.eclipse.edc.web.spi.WebServer
(required)org.eclipse.edc.web.spi.configuration.WebServiceConfigurer
(required)org.eclipse.edc.spi.types.TypeManager
(required)org.eclipse.edc.spi.system.apiversion.ApiVersionService
(required)
Module identity-trust-sts-api
Artifact: org.eclipse.edc:identity-trust-sts-api:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.api.iam.identitytrust.sts.StsApiConfigurationExtension
Name: “Secure Token Service API configuration”
Overview: No overview provided.
Configuration
None
Provided services
None
Referenced (injected) services
org.eclipse.edc.web.spi.WebServer
(required)org.eclipse.edc.web.spi.configuration.WebServiceConfigurer
(required)org.eclipse.edc.spi.types.TypeManager
(required)org.eclipse.edc.spi.system.apiversion.ApiVersionService
(required)
Class: org.eclipse.edc.api.iam.identitytrust.sts.SecureTokenServiceApiExtension
Name: “Secure Token Service API”
Overview: No overview provided.
Configuration
None
Provided services
None
Referenced (injected) services
org.eclipse.edc.iam.identitytrust.sts.spi.service.StsAccountService
(required)org.eclipse.edc.iam.identitytrust.sts.spi.service.StsClientTokenGeneratorService
(required)org.eclipse.edc.web.spi.WebService
(required)
Module identity-trust-sts-client-configuration
Artifact: org.eclipse.edc:identity-trust-sts-client-configuration:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.iam.identitytrust.sts.client.configuration.StsClientConfigurationExtension
Name: “STS Client Configuration extension”
Overview: No overview provided.
Configuration
None
Provided services
None
Referenced (injected) services
org.eclipse.edc.spi.monitor.Monitor
(required)org.eclipse.edc.iam.identitytrust.sts.spi.store.StsAccountStore
(required)
Module identity-trust-sts-core
Artifact: org.eclipse.edc:identity-trust-sts-core:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.iam.identitytrust.sts.defaults.StsDefaultServicesExtension
Name: “Secure Token Service Default Services”
Overview: No overview provided.
Configuration
Key | Required | Type | Default | Pattern | Min | Max | Description |
---|---|---|---|---|---|---|---|
edc.iam.sts.token.expiration | | string | 5 | | | | Self-issued ID Token expiration in minutes. The default is 5 minutes |
Provided services
org.eclipse.edc.iam.identitytrust.sts.spi.service.StsClientTokenGeneratorService
org.eclipse.edc.iam.identitytrust.sts.spi.service.StsAccountService
Referenced (injected) services
org.eclipse.edc.iam.identitytrust.sts.spi.store.StsAccountStore
(required)org.eclipse.edc.transaction.spi.TransactionContext
(required)org.eclipse.edc.spi.security.Vault
(required)org.eclipse.edc.jwt.signer.spi.JwsSignerProvider
(required)java.time.Clock
(required)org.eclipse.edc.iam.identitytrust.sts.spi.service.StsClientSecretGenerator
(optional)org.eclipse.edc.jwt.validation.jti.JtiValidationStore
(required)
Class: org.eclipse.edc.iam.identitytrust.sts.defaults.StsDefaultStoresExtension
Name: “Secure Token Service Default Stores”
Overview: No overview provided.
Configuration
None
Provided services
org.eclipse.edc.iam.identitytrust.sts.spi.store.StsAccountStore
Referenced (injected) services
org.eclipse.edc.spi.query.CriterionOperatorRegistry
(required)
Module identity-trust-sts-remote-client
Artifact: org.eclipse.edc:identity-trust-sts-remote-client:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.iam.identitytrust.sts.remote.client.StsRemoteClientConfigurationExtension
Name: “Sts remote client configuration extension”
Overview: Configuration Extension for the STS OAuth2 client
Configuration
Key | Required | Type | Default | Pattern | Min | Max | Description |
---|---|---|---|---|---|---|---|
edc.iam.sts.oauth.token.url | | string | `` | | | | STS OAuth2 endpoint for requesting a token |
edc.iam.sts.oauth.client.id | | string | `` | | | | STS OAuth2 client id |
edc.iam.sts.oauth.client.secret.alias | | string | `` | | | | Vault alias of STS OAuth2 client secret |
Provided services
org.eclipse.edc.iam.identitytrust.sts.remote.StsRemoteClientConfiguration
Referenced (injected) services
org.eclipse.edc.spi.security.Vault
(required)
Class: org.eclipse.edc.iam.identitytrust.sts.remote.client.StsRemoteClientExtension
Name: “Sts remote client configuration extension”
Overview: Configuration Extension for the STS OAuth2 client
Configuration
None
Provided services
org.eclipse.edc.iam.identitytrust.spi.SecureTokenService
Referenced (injected) services
org.eclipse.edc.iam.identitytrust.sts.remote.StsRemoteClientConfiguration
(required)org.eclipse.edc.iam.oauth2.spi.client.Oauth2Client
(required)org.eclipse.edc.spi.security.Vault
(required)
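A minimal sketch of the remote STS client settings above; the URL, client id, and vault alias are placeholders:

```properties
# Illustrative remote STS OAuth2 client configuration
edc.iam.sts.oauth.token.url=https://sts.example.com/oauth/token
edc.iam.sts.oauth.client.id=connector-client
# Vault alias under which the client secret is stored
edc.iam.sts.oauth.client.secret.alias=sts-client-secret
```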
Module jersey-core
Artifact: org.eclipse.edc:jersey-core:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.web.jersey.JerseyExtension
Name: “JerseyExtension”
Configuration
Key | Required | Type | Default | Pattern | Min | Max | Description |
---|---|---|---|---|---|---|---|
edc.web.rest.cors.origins | | string | `` | | | | |
edc.web.rest.cors.enabled | | string | `` | | | | |
edc.web.rest.cors.headers | | string | `` | | | | |
edc.web.rest.cors.methods | | string | `` | | | | |
Provided services
org.eclipse.edc.web.spi.WebService
org.eclipse.edc.web.spi.validation.InterceptorFunctionRegistry
Referenced (injected) services
org.eclipse.edc.web.spi.WebServer
(required)org.eclipse.edc.spi.types.TypeManager
(required)
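For illustration, enabling CORS via the keys above might look like this; the origins, headers, and methods are example values:

```properties
# Illustrative CORS configuration for the Jersey-based REST layer
edc.web.rest.cors.enabled=true
edc.web.rest.cors.origins=https://ui.example.com
edc.web.rest.cors.headers=origin,content-type,accept,authorization,x-api-key
edc.web.rest.cors.methods=GET,POST,PUT,DELETE
```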
Module jersey-micrometer
Artifact: org.eclipse.edc:jersey-micrometer:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.web.jersey.micrometer.JerseyMicrometerExtension
Name: “JerseyMicrometerExtension”
Configuration
Key | Required | Type | Default | Pattern | Min | Max | Description |
---|---|---|---|---|---|---|---|
edc.metrics.enabled | | string | `` | | | | |
edc.metrics.jersey.enabled | | string | `` | | | | |
Provided services
None
Referenced (injected) services
org.eclipse.edc.web.spi.WebService
(required)io.micrometer.core.instrument.MeterRegistry
(required)
Module jetty-core
Artifact: org.eclipse.edc:jetty-core:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.web.jetty.JettyExtension
Name: “JettyExtension”
Configuration
Key | Required | Type | Default | Pattern | Min | Max | Description |
---|---|---|---|---|---|---|---|
edc.web.https.keystore.password | | string | `` | | | | |
edc.web.https.keymanager.password | | string | `` | | | | |
edc.web.https.keystore.path | | string | `` | | | | |
edc.web.https.keystore.type | | string | `` | | | | |
Provided services
org.eclipse.edc.web.spi.WebServer
org.eclipse.edc.web.jetty.JettyService
org.eclipse.edc.web.spi.configuration.WebServiceConfigurer
Referenced (injected) services
None
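A hedged sketch of enabling HTTPS on the embedded Jetty server using the keys above; the paths, passwords, and keystore type are placeholders:

```properties
# Illustrative HTTPS/keystore settings for the embedded Jetty server
edc.web.https.keystore.path=/etc/edc/keystore.p12
edc.web.https.keystore.type=PKCS12
edc.web.https.keystore.password=changeit
edc.web.https.keymanager.password=changeit
```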
Module jetty-micrometer
Artifact: org.eclipse.edc:jetty-micrometer:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.web.jetty.micrometer.JettyMicrometerExtension
Name: “Jetty Micrometer Metrics”
Overview: An extension that registers Micrometer {@link JettyConnectionMetrics} into Jetty to
provide server metrics.
Configuration
Key | Required | Type | Default | Pattern | Min | Max | Description |
---|---|---|---|---|---|---|---|
edc.metrics.enabled | | string | `` | | | | |
edc.metrics.jetty.enabled | | string | `` | | | | |
Provided services
None
Referenced (injected) services
org.eclipse.edc.web.jetty.JettyService
(required)io.micrometer.core.instrument.MeterRegistry
(required)
Module json-ld
Artifact: org.eclipse.edc:json-ld:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.jsonld.JsonLdExtension
Name: “JSON-LD Extension”
Overview: Adds support for working with JSON-LD. Provides an ObjectMapper that works with Jakarta JSON-P
types through the TypeManager context {@link CoreConstants#JSON_LD} and a registry
for {@link JsonLdTransformer}s. The module also offers
functions for working with JSON-LD structures.
Configuration
Key | Required | Type | Default | Pattern | Min | Max | Description |
---|---|---|---|---|---|---|---|
edc.jsonld.document.<documentAlias>.path | * | string | `` | | | | Path of the JSON-LD document to cache |
edc.jsonld.document.<documentAlias>.url | * | string | `` | | | | URL of the JSON-LD document to cache |
edc.jsonld.http.enabled | | boolean | false | | | | If true, enables HTTP JSON-LD document resolution |
edc.jsonld.https.enabled | | boolean | false | | | | If true, enables HTTPS JSON-LD document resolution |
edc.jsonld.vocab.disable | | boolean | false | | | | If true, disables the @vocab context definition. This can be used to avoid API-breaking changes |
edc.jsonld.prefixes.check | | boolean | true | | | | If true, expanded objects are validated against the configured prefixes |
Provided services
org.eclipse.edc.jsonld.spi.JsonLd
Referenced (injected) services
org.eclipse.edc.spi.types.TypeManager
(required)
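As a sketch of the document-caching keys above, a JSON-LD context could be pinned to a local file so it does not have to be fetched remotely; the alias, URL, and file path are examples:

```properties
# Illustrative: cache a JSON-LD context document under the alias "mycontext"
edc.jsonld.document.mycontext.url=https://example.com/contexts/my-context.jsonld
edc.jsonld.document.mycontext.path=/etc/edc/contexts/my-context.jsonld
# Remote resolution stays disabled unless explicitly enabled
edc.jsonld.http.enabled=false
```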
Module jti-validation-store-sql
Artifact: org.eclipse.edc:jti-validation-store-sql:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.edr.store.index.SqlJtiValidationStoreExtension
Name: “SQL JTI Validation store”
Overview: No overview provided.
Configuration
Key | Required | Type | Default | Pattern | Min | Max | Description |
---|---|---|---|---|---|---|---|
edc.sql.store.jti.datasource | | string | default | | | | The datasource to be used |
Provided services
org.eclipse.edc.jwt.validation.jti.JtiValidationStore
Referenced (injected) services
org.eclipse.edc.transaction.datasource.spi.DataSourceRegistry
(required)org.eclipse.edc.transaction.spi.TransactionContext
(required)org.eclipse.edc.edr.store.index.sql.schema.JtiValidationStoreStatements
(optional)org.eclipse.edc.sql.QueryExecutor
(required)org.eclipse.edc.spi.types.TypeManager
(required)org.eclipse.edc.sql.bootstrapper.SqlSchemaBootstrapper
(required)
Module jwt-signer-spi
Name: Implementation SPI that is used to contribute custom JWSSigners to the JwtGenerationService
Artifact: org.eclipse.edc:jwt-signer-spi:0.10.1
Categories: None
Extension points
org.eclipse.edc.jwt.signer.spi.JwsSignerProvider
Extensions
Module jwt-spi
Name: JWT services
Artifact: org.eclipse.edc:jwt-spi:0.10.1
Categories: None
Extension points
org.eclipse.edc.jwt.validation.jti.JtiValidationStore
Extensions
Module management-api-configuration
Artifact: org.eclipse.edc:management-api-configuration:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.connector.api.management.configuration.ManagementApiConfigurationExtension
Name: “Management API configuration”
Overview: Configures the ‘management’ API context.
Configuration
Key | Required | Type | Default | Pattern | Min | Max | Description |
---|---|---|---|---|---|---|---|
edc.management.endpoint | | string | <hostname:management.port/management.path> | | | | Configures the endpoint for reaching the Management API. |
edc.management.context.enabled | | string | false | | | | If true, enables usage of the Management API JSON-LD context. |
Provided services
org.eclipse.edc.web.spi.configuration.context.ManagementApiUrl
Referenced (injected) services
org.eclipse.edc.web.spi.WebService
(required)org.eclipse.edc.web.spi.WebServer
(required)org.eclipse.edc.api.auth.spi.registry.ApiAuthenticationRegistry
(required)org.eclipse.edc.web.spi.configuration.WebServiceConfigurer
(required)org.eclipse.edc.spi.types.TypeManager
(required)org.eclipse.edc.jsonld.spi.JsonLd
(required)org.eclipse.edc.transform.spi.TypeTransformerRegistry
(required)org.eclipse.edc.participant.spi.ParticipantIdMapper
(required)org.eclipse.edc.spi.system.Hostname
(required)org.eclipse.edc.spi.system.apiversion.ApiVersionService
(required)
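A minimal sketch of the Management API settings above; the endpoint URL is a placeholder:

```properties
# Illustrative: externally reachable Management API endpoint
edc.management.endpoint=https://connector.example.com/api/management
# Opt in to the Management API JSON-LD context
edc.management.context.enabled=true
```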
Module micrometer-core
Artifact: org.eclipse.edc:micrometer-core:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.metrics.micrometer.MicrometerExtension
Name: “Micrometer Metrics”
Overview: No overview provided.
Configuration
Key | Required | Type | Default | Pattern | Min | Max | Description |
---|---|---|---|---|---|---|---|
edc.metrics.enabled | | string | `` | | | | |
edc.metrics.system.enabled | | string | `` | | | | |
edc.metrics.okhttp.enabled | | string | `` | | | | |
edc.metrics.executor.enabled | | string | `` | | | | |
Provided services
okhttp3.EventListener
org.eclipse.edc.spi.system.ExecutorInstrumentation
io.micrometer.core.instrument.MeterRegistry
Referenced (injected) services
None
Module monitor-jdk-logger
Artifact: org.eclipse.edc:monitor-jdk-logger:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.monitor.logger.LoggerMonitorExtension
Name: “Logger monitor”
Overview: Extension adding logging monitor.
Configuration
None
Provided services
None
Referenced (injected) services
None
Module oauth2-client
Artifact: org.eclipse.edc:oauth2-client:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.iam.oauth2.client.Oauth2ClientExtension
Name: “OAuth2 Client”
Overview: No overview provided.
Configuration
None
Provided services
org.eclipse.edc.iam.oauth2.spi.client.Oauth2Client
Referenced (injected) services
org.eclipse.edc.http.spi.EdcHttpClient
(required)org.eclipse.edc.spi.types.TypeManager
(required)
Module oauth2-core
Artifact: org.eclipse.edc:oauth2-core:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.iam.oauth2.Oauth2ServiceExtension
Name: “OAuth2 Identity Service”
Overview: Provides OAuth2 client credentials flow support.
Configuration
Key | Required | Type | Default | Pattern | Min | Max | Description |
---|---|---|---|---|---|---|---|
edc.oauth.provider.jwks.url | | string | `` | | | | |
edc.oauth.provider.audience | | string | `` | | | | outgoing tokens ‘aud’ claim value, by default it’s the connector id |
edc.oauth.endpoint.audience | | string | `` | | | | incoming tokens ‘aud’ claim required value, by default it’s the provider audience value |
edc.oauth.certificate.alias | | string | `` | | | | |
edc.oauth.private.key.alias | | string | `` | | | | |
edc.oauth.provider.jwks.refresh | | string | `` | | | | |
edc.oauth.token.url | | string | `` | | | | |
edc.oauth.token.expiration | | string | `` | | | | Token expiration in minutes. The default is 5 minutes |
edc.oauth.client.id | | string | `` | | | | |
edc.oauth.validation.nbf.leeway | | int | 10 | | | | Leeway in seconds for validating the not before (nbf) claim in the token. |
edc.oauth.validation.issued.at.leeway | | int | 0 | | | | Leeway in seconds for validating the issuedAt claim in the token. By default it is 0 seconds. |
Provided services
org.eclipse.edc.spi.iam.IdentityService
Referenced (injected) services
org.eclipse.edc.http.spi.EdcHttpClient
(required)org.eclipse.edc.keys.spi.PrivateKeyResolver
(required)org.eclipse.edc.keys.spi.CertificateResolver
(required)java.time.Clock
(required)org.eclipse.edc.iam.oauth2.spi.client.Oauth2Client
(required)org.eclipse.edc.spi.types.TypeManager
(required)org.eclipse.edc.token.spi.TokenValidationRulesRegistry
(required)org.eclipse.edc.token.spi.TokenValidationService
(required)org.eclipse.edc.token.spi.TokenDecoratorRegistry
(required)org.eclipse.edc.jwt.signer.spi.JwsSignerProvider
(required)
Class: org.eclipse.edc.iam.oauth2.Oauth2ServiceDefaultServicesExtension
Name: “Oauth2ServiceDefaultServicesExtension”
Overview: Provides OAuth2 client credentials flow support.
Configuration
None
Provided services
org.eclipse.edc.spi.iam.AudienceResolver
Referenced (injected) services
None
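A hedged sketch of an OAuth2 client-credentials setup with the keys listed above for the OAuth2 Identity Service extension; all values (URLs, client id, aliases) are placeholders:

```properties
# Illustrative OAuth2 client-credentials configuration
edc.oauth.token.url=https://daps.example.com/token
edc.oauth.client.id=my-connector
edc.oauth.certificate.alias=oauth-certificate
edc.oauth.private.key.alias=oauth-private-key
edc.oauth.provider.jwks.url=https://daps.example.com/.well-known/jwks.json
edc.oauth.provider.audience=my-connector
```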
Module oauth2-daps
Artifact: org.eclipse.edc:oauth2-daps:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.iam.oauth2.daps.DapsExtension
Name: “DAPS”
Overview: Provides a specialization of the OAuth2 extension to interact with a DAPS instance.
Deprecated: will be removed in a future version.
Configuration
Key | Required | Type | Default | Pattern | Min | Max | Description |
---|---|---|---|---|---|---|---|
edc.iam.token.scope | | string | idsc:IDS_CONNECTOR_ATTRIBUTES_ALL | | | | The value of the scope claim that is passed to DAPS to obtain a DAT |
Provided services
org.eclipse.edc.token.spi.TokenDecorator
Referenced (injected) services
org.eclipse.edc.token.spi.TokenDecoratorRegistry
(required)
Module oauth2-spi
Name: OAuth2 services
Artifact: org.eclipse.edc:oauth2-spi:0.10.1
Categories: None
Extension points
org.eclipse.edc.iam.oauth2.spi.client.Oauth2Client
Extensions
Module policy-definition-api
Artifact: org.eclipse.edc:policy-definition-api:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.connector.controlplane.api.management.policy.PolicyDefinitionApiExtension
Name: “Management API: Policy Definition”
Overview: No overview provided.
Configuration
None
Provided services
None
Referenced (injected) services
org.eclipse.edc.transform.spi.TypeTransformerRegistry
(required)org.eclipse.edc.web.spi.WebService
(required)org.eclipse.edc.connector.controlplane.services.spi.policydefinition.PolicyDefinitionService
(required)org.eclipse.edc.validator.spi.JsonObjectValidatorRegistry
(required)org.eclipse.edc.spi.types.TypeManager
(required)
Module policy-definition-store-sql
Artifact: org.eclipse.edc:policy-definition-store-sql:0.10.1
Categories: None
Extension points
org.eclipse.edc.connector.controlplane.store.sql.policydefinition.store.schema.SqlPolicyStoreStatements
Extensions
Class: org.eclipse.edc.connector.controlplane.store.sql.policydefinition.SqlPolicyStoreExtension
Name: “SQL policy store”
Overview: No overview provided.
Configuration
Key | Required | Type | Default | Pattern | Min | Max | Description |
---|---|---|---|---|---|---|---|
edc.sql.store.policy.datasource | | string | default | | | | The datasource to be used |
Provided services
org.eclipse.edc.connector.controlplane.policy.spi.store.PolicyDefinitionStore
Referenced (injected) services
org.eclipse.edc.transaction.datasource.spi.DataSourceRegistry
(required)org.eclipse.edc.transaction.spi.TransactionContext
(required)org.eclipse.edc.connector.controlplane.store.sql.policydefinition.store.schema.SqlPolicyStoreStatements
(optional)org.eclipse.edc.spi.types.TypeManager
(required)org.eclipse.edc.sql.QueryExecutor
(required)org.eclipse.edc.sql.bootstrapper.SqlSchemaBootstrapper
(required)
Module policy-engine-spi
Name: Policy Engine services
Artifact: org.eclipse.edc:policy-engine-spi:0.10.1
Categories: None
Extension points
org.eclipse.edc.policy.engine.spi.PolicyEngine
org.eclipse.edc.policy.engine.spi.RuleBindingRegistry
Extensions
Module policy-monitor-core
Artifact: org.eclipse.edc:policy-monitor-core:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.connector.policy.monitor.PolicyMonitorExtension
Name: “Policy Monitor”
Overview: No overview provided.
Configuration
Key | Required | Type | Default | Pattern | Min | Max | Description |
---|---|---|---|---|---|---|---|
edc.policy.monitor.state-machine.iteration-wait-millis | | long | `` | | | | the iteration wait time in milliseconds in the policy monitor state machine. Default value 1000 |
edc.policy.monitor.state-machine.batch-size | | int | `` | | | | the batch size in the policy monitor state machine. Default value 20 |
Provided services
org.eclipse.edc.connector.policy.monitor.spi.PolicyMonitorManager
Referenced (injected) services
org.eclipse.edc.spi.system.ExecutorInstrumentation
(required)org.eclipse.edc.spi.telemetry.Telemetry
(required)java.time.Clock
(required)org.eclipse.edc.spi.event.EventRouter
(required)org.eclipse.edc.connector.controlplane.services.spi.contractagreement.ContractAgreementService
(required)org.eclipse.edc.policy.engine.spi.PolicyEngine
(required)org.eclipse.edc.connector.controlplane.services.spi.transferprocess.TransferProcessService
(required)org.eclipse.edc.connector.policy.monitor.spi.PolicyMonitorStore
(required)org.eclipse.edc.policy.engine.spi.RuleBindingRegistry
(required)
Class: org.eclipse.edc.connector.policy.monitor.PolicyMonitorDefaultServicesExtension
Name: “PolicyMonitor Default Services”
Overview: No overview provided.
Configuration
None
Provided services
org.eclipse.edc.connector.policy.monitor.spi.PolicyMonitorStore
Referenced (injected) services
java.time.Clock
(required)org.eclipse.edc.spi.query.CriterionOperatorRegistry
(required)
Module policy-monitor-store-sql
Artifact: org.eclipse.edc:policy-monitor-store-sql:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.connector.policy.monitor.store.sql.SqlPolicyMonitorStoreExtension
Name: “SqlPolicyMonitorStoreExtension”
Configuration
Key | Required | Type | Default | Pattern | Min | Max | Description |
---|---|---|---|---|---|---|---|
edc.datasource.policy-monitor.name | | string | default | | | | Name of the datasource to use for accessing policy monitor store |
edc.sql.store.policy-monitor.datasource | | string | default | | | | The datasource to be used |
Provided services
org.eclipse.edc.connector.policy.monitor.spi.PolicyMonitorStore
Referenced (injected) services
org.eclipse.edc.transaction.datasource.spi.DataSourceRegistry
(required)org.eclipse.edc.transaction.spi.TransactionContext
(required)org.eclipse.edc.connector.policy.monitor.store.sql.schema.PolicyMonitorStatements
(optional)java.time.Clock
(required)org.eclipse.edc.spi.types.TypeManager
(required)org.eclipse.edc.sql.QueryExecutor
(required)org.eclipse.edc.sql.bootstrapper.SqlSchemaBootstrapper
(required)
Module policy-spi
Name: Policy services
Artifact: org.eclipse.edc:policy-spi:0.10.1
Categories: None
Extension points
org.eclipse.edc.connector.controlplane.policy.spi.store.PolicyArchive
org.eclipse.edc.connector.controlplane.policy.spi.store.PolicyDefinitionStore
Extensions
Module provision-http
Artifact: org.eclipse.edc:provision-http:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.connector.controlplane.provision.http.HttpProvisionerExtension
Name: “HTTP Provisioning”
Overview: The HTTP Provisioner extension delegates to HTTP endpoints to perform provision operations.
Configuration
Key | Required | Type | Default | Pattern | Min | Max | Description |
---|---|---|---|---|---|---|---|
provisioner.type | * | string | `` | | | | |
data.address.type | * | string | `` | | | | |
endpoint | * | string | `` | | | | |
policy.scope | | string | http.provisioner | | | | |
Provided services
None
Referenced (injected) services
org.eclipse.edc.connector.controlplane.transfer.spi.provision.ProvisionManager
(required)org.eclipse.edc.policy.engine.spi.PolicyEngine
(required)org.eclipse.edc.http.spi.EdcHttpClient
(required)org.eclipse.edc.connector.controlplane.transfer.spi.provision.ResourceManifestGenerator
(required)org.eclipse.edc.connector.controlplane.provision.http.HttpProvisionerWebhookUrl
(required)org.eclipse.edc.spi.types.TypeManager
(required)org.eclipse.edc.validator.spi.DataAddressValidatorRegistry
(required)
Class: org.eclipse.edc.connector.controlplane.provision.http.HttpWebhookExtension
Name: “HttpWebhookExtension”
Overview: The HTTP Provisioner extension delegates to HTTP endpoints to perform provision operations.
Configuration
None
Provided services
org.eclipse.edc.connector.controlplane.provision.http.HttpProvisionerWebhookUrl
Referenced (injected) services
org.eclipse.edc.web.spi.WebService
(required)org.eclipse.edc.connector.controlplane.services.spi.transferprocess.TransferProcessService
(required)org.eclipse.edc.web.spi.configuration.context.ManagementApiUrl
(required)
Module secrets-api
Artifact: org.eclipse.edc:secrets-api:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.connector.api.management.secret.SecretsApiExtension
Name: “Management API: Secret”
Overview: No overview provided.
Configuration
None
Provided services
None
Referenced (injected) services
org.eclipse.edc.web.spi.WebService
(required)org.eclipse.edc.transform.spi.TypeTransformerRegistry
(required)org.eclipse.edc.connector.spi.service.SecretService
(required)org.eclipse.edc.validator.spi.JsonObjectValidatorRegistry
(required)
Module sql-bootstrapper
Artifact: org.eclipse.edc:sql-bootstrapper:0.10.1
Categories: sql, persistence, storage
Extension points
None
Extensions
Class: org.eclipse.edc.sql.bootstrapper.SqlSchemaBootstrapperExtension
Name: “SQL Schema Bootstrapper Extension”
Overview: No overview provided.
Configuration
None
Provided services
org.eclipse.edc.sql.bootstrapper.SqlSchemaBootstrapper
Referenced (injected) services
org.eclipse.edc.transaction.spi.TransactionContext
(required)org.eclipse.edc.sql.QueryExecutor
(required)org.eclipse.edc.transaction.datasource.spi.DataSourceRegistry
(required)org.eclipse.edc.spi.monitor.Monitor
(required)
Module sql-core
Artifact: org.eclipse.edc:sql-core:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.sql.SqlCoreExtension
Name: “SQL Core”
Overview: No overview provided.
Configuration
Key | Required | Type | Default | Pattern | Min | Max | Description |
---|---|---|---|---|---|---|---|
edc.sql.fetch.size | | string | 5000 | | | | Fetch size value used in SQL queries |
Provided services
org.eclipse.edc.sql.QueryExecutor
org.eclipse.edc.sql.ConnectionFactory
Referenced (injected) services
org.eclipse.edc.transaction.spi.TransactionContext
(required)
Module sql-pool-apache-commons
Artifact: org.eclipse.edc:sql-pool-apache-commons:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.sql.pool.commons.CommonsConnectionPoolServiceExtension
Name: “Commons Connection Pool”
Overview: No overview provided.
Configuration
Key | Required | Type | Default | Pattern | Min | Max | Description |
---|---|---|---|---|---|---|---|
edc.datasource.<name>url | * | string | `` | | | | JDBC url |
edc.datasource.<name>user | | string | `` | | | | Username to be used for the JDBC connection. Can be omitted if not required, or if the user is encoded in the JDBC url. |
edc.datasource.<name>password | | string | `` | | | | Password to be used for the JDBC connection. Can be omitted if not required, or if the password is encoded in the JDBC url. |
edc.datasource.<name>pool.connections.max-idle | | int | `` | | | | Pool max idle connections |
edc.datasource.<name>pool.connections.max-total | | int | `` | | | | Pool max total connections |
edc.datasource.<name>pool.connections.min-idle | | int | `` | | | | Pool min idle connections |
edc.datasource.<name>pool.connection.test.on-borrow | | boolean | `` | | | | Pool test on borrow |
edc.datasource.<name>pool.connection.test.on-create | | boolean | `` | | | | Pool test on create |
edc.datasource.<name>pool.connection.test.on-return | | boolean | `` | | | | Pool test on return |
edc.datasource.<name>pool.connection.test.while-idle | | boolean | `` | | | | Pool test while idle |
edc.datasource.<name>pool.connection.test.query | | string | `` | | | | Pool test query |
Provided services
None
Referenced (injected) services
org.eclipse.edc.transaction.datasource.spi.DataSourceRegistry
(required)org.eclipse.edc.spi.monitor.Monitor
(required)org.eclipse.edc.sql.ConnectionFactory
(required)org.eclipse.edc.spi.security.Vault
(required)
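For illustration, a datasource named `default` might be defined as follows. The keys are shown with a `.` between the datasource name and the property suffix, which is an assumption about how the `<name>` placeholder above expands; the JDBC URL and credentials are placeholders.

```properties
# Illustrative "default" datasource backed by PostgreSQL
edc.datasource.default.url=jdbc:postgresql://db.example.com:5432/edc
edc.datasource.default.user=edc
edc.datasource.default.password=changeit
# Optional pool tuning
edc.datasource.default.pool.connections.max-total=50
edc.datasource.default.pool.connection.test.on-borrow=true
```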
Module sts-client-store-sql
Artifact: org.eclipse.edc:sts-client-store-sql:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.iam.identitytrust.sts.store.SqlStsClientStoreExtension
Name: “SQL sts accounts store”
Overview: No overview provided.
Configuration
Key | Required | Type | Default | Pattern | Min | Max | Description |
---|---|---|---|---|---|---|---|
edc.sql.store.stsclient.datasource | | string | default | | | | The datasource to be used |
Provided services
org.eclipse.edc.iam.identitytrust.sts.spi.store.StsAccountStore
Referenced (injected) services
org.eclipse.edc.transaction.datasource.spi.DataSourceRegistry
(required)org.eclipse.edc.transaction.spi.TransactionContext
(required)org.eclipse.edc.iam.identitytrust.sts.store.schema.StsClientStatements
(optional)org.eclipse.edc.sql.QueryExecutor
(required)org.eclipse.edc.spi.types.TypeManager
(required)org.eclipse.edc.sql.bootstrapper.SqlSchemaBootstrapper
(required)
Module sts-server
Artifact: org.eclipse.edc:sts-server:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.iam.identitytrust.sts.server.StsVaultSeedExtension
Name: “StsVaultSeedExtension”
Configuration
None
Provided services
None
Referenced (injected) services
org.eclipse.edc.spi.security.Vault
(required)
Module token-core
Artifact: org.eclipse.edc:token-core:0.10.1
Categories: token, security, auth
Extension points
None
Extensions
Class: org.eclipse.edc.token.TokenServicesExtension
Name: “Token Services Extension”
Overview: This extension registers the {@link TokenValidationService} and the {@link TokenValidationRulesRegistry}
which can then be used by downstream modules.
Configuration
None
Provided services
org.eclipse.edc.token.spi.TokenValidationRulesRegistry
org.eclipse.edc.token.spi.TokenValidationService
org.eclipse.edc.token.spi.TokenDecoratorRegistry
org.eclipse.edc.jwt.signer.spi.JwsSignerProvider
org.eclipse.edc.jwt.validation.jti.JtiValidationStore
Referenced (injected) services
org.eclipse.edc.keys.spi.PrivateKeyResolver
(required)
Module token-spi
Name: Token services
Artifact: org.eclipse.edc:token-spi:0.10.1
Categories: None
Extension points
None
Extensions
Module transaction-atomikos
Artifact: org.eclipse.edc:transaction-atomikos:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.transaction.atomikos.AtomikosTransactionExtension
Name: “Atomikos Transaction”
Overview: Provides an implementation of a {@link DataSourceRegistry} and a {@link TransactionContext} backed by Atomikos.
Configuration
Key | Required | Type | Default | Pattern | Min | Max | Description |
---|---|---|---|---|---|---|---|
driver.class | * | string | `` | | | | |
url | * | string | `` | | | | |
type | | string | `` | | | | |
username | | string | `` | | | | |
password | | string | `` | | | | |
pool.size | | string | `` | | | | |
max.pool.size | | string | `` | | | | |
min.pool.size | | string | `` | | | | |
connection.timeout | | string | `` | | | | |
login.timeout | | string | `` | | | | |
maintenance.interval | | string | `` | | | | |
max.idle | | string | `` | | | | |
query | | string | `` | | | | |
properties | | string | `` | | | | |
edc.atomikos.timeout | | string | `` | | | | |
edc.atomikos.directory | | string | `` | | | | |
edc.atomikos.threaded2pc | | string | `` | | | | |
edc.atomikos.logging | | string | `` | | | | |
edc.atomikos.checkpoint.interval | | string | `` | | | | |
Provided services
org.eclipse.edc.transaction.spi.TransactionContext
org.eclipse.edc.transaction.datasource.spi.DataSourceRegistry
Referenced (injected) services
None
Module transaction-datasource-spi
Name: DataSource services
Artifact: org.eclipse.edc:transaction-datasource-spi:0.10.1
Categories: None
Extension points
org.eclipse.edc.transaction.datasource.spi.DataSourceRegistry
Extensions
Module transaction-local
Artifact: org.eclipse.edc:transaction-local:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.transaction.local.LocalTransactionExtension
Name: “Local Transaction”
Overview: Support for transaction context backed by one or more local resources, including a {@link DataSourceRegistry}.
Configuration
None
Provided services
org.eclipse.edc.transaction.datasource.spi.DataSourceRegistry
org.eclipse.edc.transaction.spi.TransactionContext
Referenced (injected) services
None
Module transaction-spi
Name: Transactional context services
Artifact: org.eclipse.edc:transaction-spi:0.10.1
Categories: None
Extension points
org.eclipse.edc.transaction.spi.TransactionContext
Extensions
Module transfer-data-plane-signaling
Artifact: org.eclipse.edc:transfer-data-plane-signaling:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.connector.controlplane.transfer.dataplane.TransferDataPlaneSignalingExtension
Name: “Transfer Data Plane Signaling Extension”
Overview: No overview provided.
Configuration
Key | Required | Type | Default | Pattern | Min | Max | Description |
---|---|---|---|---|---|---|---|
edc.dataplane.client.selector.strategy | | string | random | | | | Defines the strategy for Data Plane instance selection when the Data Plane is not embedded in the current runtime |
Provided services
None
Referenced (injected) services
org.eclipse.edc.connector.controlplane.transfer.spi.flow.DataFlowManager
(required)org.eclipse.edc.web.spi.configuration.context.ControlApiUrl
(optional)org.eclipse.edc.connector.dataplane.selector.spi.DataPlaneSelectorService
(required)org.eclipse.edc.connector.dataplane.selector.spi.client.DataPlaneClientFactory
(required)org.eclipse.edc.connector.controlplane.transfer.spi.flow.DataFlowPropertiesProvider
(optional)org.eclipse.edc.connector.controlplane.transfer.spi.flow.TransferTypeParser
(required)
Module transfer-data-plane-spi
Name: Transfer data plane services
Artifact: org.eclipse.edc:transfer-data-plane-spi:0.10.1
Categories: None
Extension points
org.eclipse.edc.connector.controlplane.transfer.dataplane.spi.security.DataEncrypter
Extensions
Module transfer-process-api
Artifact: org.eclipse.edc:transfer-process-api:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.connector.controlplane.api.management.transferprocess.TransferProcessApiExtension
Name: “Management API: Transfer Process”
Overview: No overview provided.
Configuration
None
Provided services
None
Referenced (injected) services
org.eclipse.edc.web.spi.WebService
(required)org.eclipse.edc.transform.spi.TypeTransformerRegistry
(required)org.eclipse.edc.connector.controlplane.services.spi.transferprocess.TransferProcessService
(required)org.eclipse.edc.validator.spi.JsonObjectValidatorRegistry
(required)
Module transfer-process-store-sql
Artifact: org.eclipse.edc:transfer-process-store-sql:0.10.1
Categories: None
Extension points
org.eclipse.edc.connector.controlplane.store.sql.transferprocess.store.schema.TransferProcessStoreStatements
Extensions
Class: org.eclipse.edc.connector.controlplane.store.sql.transferprocess.SqlTransferProcessStoreExtension
Name: “SQL transfer process store”
Overview: No overview provided.
Configuration
Key | Required | Type | Default | Pattern | Min | Max | Description |
---|---|---|---|---|---|---|---|
edc.sql.store.transferprocess.datasource | | string | default | | | | The datasource to be used |
Provided services
org.eclipse.edc.connector.controlplane.transfer.spi.store.TransferProcessStore
Referenced (injected) services
org.eclipse.edc.transaction.datasource.spi.DataSourceRegistry
(required)org.eclipse.edc.transaction.spi.TransactionContext
(required)java.time.Clock
(required)org.eclipse.edc.connector.controlplane.store.sql.transferprocess.store.schema.TransferProcessStoreStatements
(optional)org.eclipse.edc.spi.types.TypeManager
(required)org.eclipse.edc.sql.QueryExecutor
(required)org.eclipse.edc.sql.bootstrapper.SqlSchemaBootstrapper
(required)
Module transfer-pull-http-dynamic-receiver
Artifact: org.eclipse.edc:transfer-pull-http-dynamic-receiver:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.connector.controlplane.receiver.http.dynamic.HttpDynamicEndpointDataReferenceReceiverExtension
Name: “Http Dynamic Endpoint Data Reference Receiver”
Overview: No overview provided.
Configuration
Key | Required | Type | Default | Pattern | Min | Max | Description |
---|---|---|---|---|---|---|---|
edc.receiver.http.dynamic.endpoint | | string | `` | | | | Fallback endpoint when the URL is missing in the transfer process |
edc.receiver.http.dynamic.auth-key | | string | `` | | | | Header name that will be sent with the EDR |
edc.receiver.http.dynamic.auth-code | | string | `` | | | | Header value that will be sent with the EDR |
Provided services
None
Referenced (injected) services
org.eclipse.edc.connector.controlplane.transfer.spi.edr.EndpointDataReferenceReceiverRegistry
(required)okhttp3.OkHttpClient
(required)dev.failsafe.RetryPolicy<java.lang.Object>
(required)org.eclipse.edc.connector.controlplane.transfer.spi.store.TransferProcessStore
(required)org.eclipse.edc.connector.controlplane.transfer.spi.observe.TransferProcessObservable
(required)org.eclipse.edc.spi.types.TypeManager
(required)
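A minimal sketch of the dynamic EDR receiver settings above; the fallback endpoint and header values are placeholders:

```properties
# Illustrative fallback endpoint and auth header for delivering EDRs
edc.receiver.http.dynamic.endpoint=https://backend.example.com/edr
edc.receiver.http.dynamic.auth-key=X-Api-Key
edc.receiver.http.dynamic.auth-code=example-api-key-value
```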
Module transfer-pull-http-receiver
Artifact: org.eclipse.edc:transfer-pull-http-receiver:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.connector.controlplane.receiver.http.HttpEndpointDataReferenceReceiverExtension
Name: “Http Endpoint Data Reference Receiver”
Overview: No overview provided.
Configuration
Key | Required | Type | Default | Pattern | Min | Max | Description |
---|---|---|---|---|---|---|---|
edc.receiver.http.endpoint | | string | `` | | | | |
edc.receiver.http.auth-key | | string | `` | | | | |
edc.receiver.http.auth-code | | string | `` | | | | |
Provided services
None
Referenced (injected) services
org.eclipse.edc.connector.controlplane.transfer.spi.edr.EndpointDataReferenceReceiverRegistry
(required)org.eclipse.edc.http.spi.EdcHttpClient
(required)org.eclipse.edc.spi.types.TypeManager
(required)
Module transfer-spi
Name: Transfer services
Artifact: org.eclipse.edc:transfer-spi:0.10.1
Categories: None
Extension points
org.eclipse.edc.connector.controlplane.transfer.spi.observe.TransferProcessObservable
org.eclipse.edc.connector.controlplane.transfer.spi.store.TransferProcessStore
org.eclipse.edc.connector.controlplane.transfer.spi.TransferProcessPendingGuard
org.eclipse.edc.connector.controlplane.transfer.spi.TransferProcessManager
org.eclipse.edc.connector.controlplane.transfer.spi.provision.ResourceManifestGenerator
org.eclipse.edc.connector.controlplane.transfer.spi.provision.ProvisionManager
org.eclipse.edc.connector.controlplane.transfer.spi.edr.EndpointDataReferenceReceiverRegistry
org.eclipse.edc.connector.controlplane.transfer.spi.flow.DataFlowManager
org.eclipse.edc.connector.controlplane.transfer.spi.flow.DataFlowPropertiesProvider
Extensions
Module validator-data-address-http-data
Artifact: org.eclipse.edc:validator-data-address-http-data:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.validator.dataaddress.httpdata.HttpDataDataAddressValidatorExtension
Name: “DataAddress HttpData Validator”
Overview: No overview provided.
Configuration
None
Provided services
None
Referenced (injected) services
org.eclipse.edc.validator.spi.DataAddressValidatorRegistry
(required)
Module validator-data-address-kafka
Artifact: org.eclipse.edc:validator-data-address-kafka:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.validator.dataaddress.kafka.KafkaDataAddressValidatorExtension
Name: “DataAddress Kafka Validator”
Overview: No overview provided.
Configuration
None
Provided services
None
Referenced (injected) services
org.eclipse.edc.validator.spi.DataAddressValidatorRegistry
(required)
Module vault-hashicorp
Artifact: org.eclipse.edc:vault-hashicorp:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.vault.hashicorp.health.HashicorpVaultHealthExtension
Name: “Hashicorp Vault Health”
Overview: No overview provided.
Configuration
None
Provided services
None
Referenced (injected) services
org.eclipse.edc.spi.system.health.HealthCheckService
(required)org.eclipse.edc.vault.hashicorp.client.HashicorpVaultClient
(required)
Class: org.eclipse.edc.vault.hashicorp.HashicorpVaultExtension
Name: “Hashicorp Vault”
Overview: No overview provided.
Configuration
Key | Required | Type | Default | Pattern | Min | Max | Description |
---|---|---|---|---|---|---|---|
edc.vault.hashicorp.url | * | string | `` | | | | The URL of the Hashicorp Vault |
edc.vault.hashicorp.health.check.enabled | | boolean | true | | | | Whether or not the vault health check is enabled |
edc.vault.hashicorp.api.health.check.path | | string | /v1/sys/health | | | | The URL path of the vault’s /health endpoint |
edc.vault.hashicorp.health.check.standby.ok | | boolean | false | | | | Specifies if being a standby should still return the active status code instead of the standby status code |
edc.vault.hashicorp.token | * | string | `` | | | | The token used to access the Hashicorp Vault |
edc.vault.hashicorp.token.scheduled-renew-enabled | | string | true | | | | Whether the automatic token renewal process will be triggered or not. Should be disabled only for development and testing purposes |
edc.vault.hashicorp.token.ttl | | long | 300 | | | | The time-to-live (ttl) value of the Hashicorp Vault token in seconds |
edc.vault.hashicorp.token.renew-buffer | | long | 30 | | | | The renew buffer of the Hashicorp Vault token in seconds |
edc.vault.hashicorp.api.secret.path | | string | /v1/secret | | | | The URL path of the vault’s /secret endpoint |
edc.vault.hashicorp.folder | | string | `` | | | | The path of the folder that the secret is stored in, relative to VAULT_FOLDER_PATH |
Provided services
org.eclipse.edc.vault.hashicorp.client.HashicorpVaultClient
org.eclipse.edc.spi.security.Vault
Referenced (injected) services
org.eclipse.edc.http.spi.EdcHttpClient
(required)org.eclipse.edc.spi.system.ExecutorInstrumentation
(required)
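Of the settings listed for the vault-hashicorp module, only the vault URL and the token are marked as required; the rest fall back to the defaults shown in the table. A hedged sketch with placeholder values:

```java
import java.util.Map;

// Placeholder values; keys are taken from the configuration table above.
class HashicorpVaultSettingsExample {
    static final Map<String, String> SETTINGS = Map.of(
            "edc.vault.hashicorp.url", "http://vault.example.com:8200",   // required
            "edc.vault.hashicorp.token", "<vault-token>",                 // required
            "edc.vault.hashicorp.token.ttl", "300",                       // documented default
            "edc.vault.hashicorp.token.renew-buffer", "30",               // documented default
            "edc.vault.hashicorp.health.check.enabled", "true"            // documented default
    );
}
```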
Module verifiable-credentials
Artifact: org.eclipse.edc:verifiable-credentials:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.iam.verifiablecredentials.RevocationServiceRegistryExtension
Name: “Revocation Service Extension”
Overview: No overview provided.
Configuration
None
Provided services
org.eclipse.edc.iam.verifiablecredentials.spi.model.RevocationServiceRegistry
Referenced (injected) services
None
Module version-api
Artifact: org.eclipse.edc:version-api:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.connector.api.management.version.VersionApiExtension
Name: “Management API: Version Information”
Overview: No overview provided.
Configuration
None
Provided services
None
Referenced (injected) services
org.eclipse.edc.web.spi.WebService
(required)org.eclipse.edc.spi.types.TypeManager
(required)org.eclipse.edc.spi.system.apiversion.ApiVersionService
(required)org.eclipse.edc.web.spi.configuration.WebServiceConfigurer
(required)org.eclipse.edc.web.spi.WebServer
(required)
Module web-spi
Name: Web services
Artifact: org.eclipse.edc:web-spi:0.10.1
Categories: None
Extension points
org.eclipse.edc.web.spi.WebService
org.eclipse.edc.web.spi.configuration.WebServiceConfigurer
org.eclipse.edc.web.spi.WebServer
org.eclipse.edc.web.spi.validation.InterceptorFunctionRegistry
Extensions
3.2 - Identity-Hub
Module api-configuration
Artifact: org.eclipse.edc:api-configuration:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.identityhub.api.configuration.IdentityApiConfigurationExtension
Name: “Identity API Extension”
Overview: No overview provided.
Configuration
None
Provided services
org.eclipse.edc.identityhub.spi.AuthorizationService
Referenced (injected) services
org.eclipse.edc.web.spi.WebService
(required)org.eclipse.edc.identityhub.spi.participantcontext.ParticipantContextService
(required)org.eclipse.edc.web.spi.configuration.WebServiceConfigurer
(required)org.eclipse.edc.web.spi.WebServer
(required)org.eclipse.edc.spi.security.Vault
(required)org.eclipse.edc.spi.types.TypeManager
(required)org.eclipse.edc.spi.system.apiversion.ApiVersionService
(required)
Module credential-watchdog
Artifact: org.eclipse.edc:credential-watchdog:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.identityhub.common.credentialwatchdog.CredentialWatchdogExtension
Name: “VerifiableCredential Watchdog Extension”
Overview: No overview provided.
Configuration
Key | Required | Type | Default | Pattern | Min | Max | Description |
---|---|---|---|---|---|---|---|
edc.iam.credential.status.check.period | | integer | 60 | | | | Period (in seconds) at which the Watchdog thread checks all stored credentials for their status. Configuring a number <=0 disables the Watchdog. |
edc.iam.credential.status.check.delay | | integer | random number [1..5] | | | | Initial delay (in seconds) before the Watchdog thread begins its work. |
Provided services
None
Referenced (injected) services
org.eclipse.edc.spi.system.ExecutorInstrumentation
(required)org.eclipse.edc.identityhub.spi.verifiablecredentials.CredentialStatusCheckService
(required)org.eclipse.edc.identityhub.spi.store.CredentialStore
(required)org.eclipse.edc.transaction.spi.TransactionContext
(required)
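The credential watchdog re-checks all stored credentials on a fixed schedule; the two settings above control the check period and the initial delay. A sketch with placeholder values:

```java
import java.util.Map;

// Placeholder values; keys are taken from the configuration table above.
class CredentialWatchdogSettingsExample {
    static final Map<String, String> SETTINGS = Map.of(
            // Check every 5 minutes; a value <= 0 would disable the watchdog entirely
            "edc.iam.credential.status.check.period", "300",
            // Initial delay (in seconds) before the first check
            "edc.iam.credential.status.check.delay", "5"
    );
}
```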
Module did-api
Artifact: org.eclipse.edc:did-api:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.identityhub.api.didmanagement.DidManagementApiExtension
Name: “DID management Identity API Extension”
Overview: No overview provided.
Configuration
None
Provided services
None
Referenced (injected) services
org.eclipse.edc.web.spi.WebService
(required)org.eclipse.edc.identithub.spi.did.DidDocumentService
(required)org.eclipse.edc.identityhub.spi.AuthorizationService
(required)
Module did-spi
Name: Identity Hub DID services
Artifact: org.eclipse.edc:did-spi:0.10.1
Categories: None
Extension points
org.eclipse.edc.identithub.spi.did.store.DidResourceStore
org.eclipse.edc.identithub.spi.did.DidDocumentPublisher
org.eclipse.edc.identithub.spi.did.DidWebParser
Extensions
Module identity-hub-core
Artifact: org.eclipse.edc:identity-hub-core:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.identityhub.core.CoreServicesExtension
Name: “IdentityHub Core Services Extension”
Overview: No overview provided.
Configuration
None
Provided services
org.eclipse.edc.identityhub.spi.verification.AccessTokenVerifier
org.eclipse.edc.identityhub.spi.verifiablecredentials.resolution.CredentialQueryResolver
org.eclipse.edc.identityhub.spi.verifiablecredentials.generator.PresentationCreatorRegistry
org.eclipse.edc.identityhub.spi.verifiablecredentials.generator.VerifiablePresentationService
org.eclipse.edc.identityhub.spi.verifiablecredentials.CredentialStatusCheckService
Referenced (injected) services
org.eclipse.edc.iam.did.spi.resolution.DidPublicKeyResolver
(required)org.eclipse.edc.jsonld.spi.JsonLd
(required)org.eclipse.edc.identityhub.spi.store.CredentialStore
(required)org.eclipse.edc.identityhub.spi.ScopeToCriterionTransformer
(required)org.eclipse.edc.keys.spi.PrivateKeyResolver
(required)java.time.Clock
(required)org.eclipse.edc.iam.identitytrust.spi.verification.SignatureSuiteRegistry
(required)org.eclipse.edc.spi.types.TypeManager
(required)org.eclipse.edc.token.spi.TokenValidationService
(required)org.eclipse.edc.token.spi.TokenValidationRulesRegistry
(required)org.eclipse.edc.spi.security.Vault
(required)org.eclipse.edc.keys.spi.KeyParserRegistry
(required)org.eclipse.edc.iam.identitytrust.spi.verification.SignatureSuiteRegistry
(required)org.eclipse.edc.identityhub.spi.keypair.KeyPairService
(required)org.eclipse.edc.iam.verifiablecredentials.spi.model.RevocationServiceRegistry
(required)org.eclipse.edc.identityhub.spi.store.KeyPairResourceStore
(required)org.eclipse.edc.keys.spi.LocalPublicKeyService
(required)org.eclipse.edc.identityhub.spi.participantcontext.ParticipantContextService
(required)org.eclipse.edc.jwt.signer.spi.JwsSignerProvider
(required)
Class: org.eclipse.edc.identityhub.DefaultServicesExtension
Name: “IdentityHub Default Services Extension”
Overview: No overview provided.
Configuration
Key | Required | Type | Default | Pattern | Min | Max | Description |
---|---|---|---|---|---|---|---|
edc.iam.credential.revocation.cache.validity | | long | 900000 | | | | Validity period of cached StatusList2021 credential entries in milliseconds. |
edc.iam.accesstoken.jti.validation | | boolean | false | | | | Activates the JTI check: access tokens can only be used once to guard against replay attacks |
Provided services
org.eclipse.edc.identityhub.spi.store.CredentialStore
org.eclipse.edc.identityhub.spi.store.ParticipantContextStore
org.eclipse.edc.identityhub.spi.store.KeyPairResourceStore
org.eclipse.edc.identityhub.spi.ScopeToCriterionTransformer
org.eclipse.edc.iam.verifiablecredentials.spi.model.RevocationServiceRegistry
org.eclipse.edc.iam.identitytrust.spi.verification.SignatureSuiteRegistry
org.eclipse.edc.jwt.signer.spi.JwsSignerProvider
Referenced (injected) services
org.eclipse.edc.token.spi.TokenValidationRulesRegistry
(required)org.eclipse.edc.spi.types.TypeManager
(required)org.eclipse.edc.keys.spi.PrivateKeyResolver
(required)org.eclipse.edc.jwt.validation.jti.JtiValidationStore
(required)
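The IdentityHub default services expose two tuning knobs: how long resolved revocation (status list) credentials stay cached, and whether access tokens are subject to a JTI replay check. A sketch with placeholder values:

```java
import java.util.Map;

// Placeholder values; keys are taken from the configuration table above.
class IdentityHubDefaultsExample {
    static final Map<String, String> SETTINGS = Map.of(
            // Cache revocation credentials for 15 minutes (documented default, in milliseconds)
            "edc.iam.credential.revocation.cache.validity", "900000",
            // Enable the JTI check so access tokens can only be used once
            "edc.iam.accesstoken.jti.validation", "true"
    );
}
```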
Module identity-hub-credentials-store-sql
Artifact: org.eclipse.edc:identity-hub-credentials-store-sql:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.identityhub.store.sql.credentials.SqlCredentialStoreExtension
Name: “CredentialResource SQL Store Extension”
Overview: No overview provided.
Configuration
Key | Required | Type | Default | Pattern | Min | Max | Description |
---|---|---|---|---|---|---|---|
edc.datasource.credentials.name | | string | default | | | | Datasource name for the DidResource database |
edc.sql.store.credentials.datasource | | string | default | | | | The datasource to be used |
Provided services
org.eclipse.edc.identityhub.spi.store.CredentialStore
Referenced (injected) services
org.eclipse.edc.transaction.datasource.spi.DataSourceRegistry
(required)org.eclipse.edc.transaction.spi.TransactionContext
(required)org.eclipse.edc.spi.types.TypeManager
(required)org.eclipse.edc.sql.QueryExecutor
(required)org.eclipse.edc.identityhub.store.sql.credentials.CredentialStoreStatements
(optional)org.eclipse.edc.sql.bootstrapper.SqlSchemaBootstrapper
(required)
Module identity-hub-did
Artifact: org.eclipse.edc:identity-hub-did:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.identityhub.did.DidServicesExtension
Name: “DID Service Extension”
Overview: No overview provided.
Configuration
None
Provided services
org.eclipse.edc.identithub.spi.did.DidDocumentPublisherRegistry
org.eclipse.edc.identithub.spi.did.DidDocumentService
Referenced (injected) services
org.eclipse.edc.transaction.spi.TransactionContext
(required)org.eclipse.edc.identithub.spi.did.store.DidResourceStore
(required)org.eclipse.edc.spi.event.EventRouter
(required)org.eclipse.edc.keys.spi.KeyParserRegistry
(required)org.eclipse.edc.identityhub.spi.store.ParticipantContextStore
(required)
Class: org.eclipse.edc.identityhub.did.defaults.DidDefaultServicesExtension
Name: “DID Default Services Extension”
Overview: No overview provided.
Configuration
None
Provided services
org.eclipse.edc.identithub.spi.did.store.DidResourceStore
Referenced (injected) services
org.eclipse.edc.spi.query.CriterionOperatorRegistry
(required)
Module identity-hub-did-store-sql
Artifact: org.eclipse.edc:identity-hub-did-store-sql:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.identityhub.did.store.sql.SqlDidResourceStoreExtension
Name: “DID Resource SQL Store Extension”
Overview: No overview provided.
Configuration
Key | Required | Type | Default | Pattern | Min | Max | Description |
---|---|---|---|---|---|---|---|
edc.datasource.didresource.name | | string | default | | | | Datasource name for the DidResource database |
edc.sql.store.didresource.datasource | | string | default | | | | The datasource to be used |
Provided services
org.eclipse.edc.identithub.spi.did.store.DidResourceStore
Referenced (injected) services
org.eclipse.edc.transaction.datasource.spi.DataSourceRegistry
(required)org.eclipse.edc.transaction.spi.TransactionContext
(required)org.eclipse.edc.spi.types.TypeManager
(required)org.eclipse.edc.sql.QueryExecutor
(required)org.eclipse.edc.identityhub.did.store.sql.DidResourceStatements
(optional)org.eclipse.edc.sql.bootstrapper.SqlSchemaBootstrapper
(required)
Module identity-hub-keypair-store-sql
Artifact: org.eclipse.edc:identity-hub-keypair-store-sql:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.identityhub.store.sql.keypair.SqlKeyPairResourceStoreExtension
Name: “KeyPair Resource SQL Store Extension”
Overview: No overview provided.
Configuration
Key | Required | Type | Default | Pattern | Min | Max | Description |
---|---|---|---|---|---|---|---|
edc.datasource.keypair.name | | string | default | | | | Datasource name for the KeyPairResource database |
edc.sql.store.keypair.datasource | | string | default | | | | The datasource to be used |
Provided services
org.eclipse.edc.identityhub.spi.store.KeyPairResourceStore
Referenced (injected) services
org.eclipse.edc.transaction.datasource.spi.DataSourceRegistry
(required)org.eclipse.edc.transaction.spi.TransactionContext
(required)org.eclipse.edc.spi.types.TypeManager
(required)org.eclipse.edc.sql.QueryExecutor
(required)org.eclipse.edc.identityhub.store.sql.keypair.KeyPairResourceStoreStatements
(optional)org.eclipse.edc.sql.bootstrapper.SqlSchemaBootstrapper
(required)
Module identity-hub-keypairs
Artifact: org.eclipse.edc:identity-hub-keypairs:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.identityhub.keypairs.KeyPairServiceExtension
Name: “KeyPair Service Extension”
Overview: No overview provided.
Configuration
None
Provided services
org.eclipse.edc.identityhub.spi.keypair.KeyPairService
org.eclipse.edc.identityhub.spi.keypair.events.KeyPairObservable
Referenced (injected) services
org.eclipse.edc.spi.security.Vault
(required)org.eclipse.edc.identityhub.spi.store.KeyPairResourceStore
(required)org.eclipse.edc.spi.event.EventRouter
(required)java.time.Clock
(required)org.eclipse.edc.transaction.spi.TransactionContext
(required)org.eclipse.edc.identityhub.spi.store.ParticipantContextStore
(required)
Module identity-hub-participantcontext-store-sql
Artifact: org.eclipse.edc:identity-hub-participantcontext-store-sql:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.identityhub.store.sql.participantcontext.SqlParticipantContextStoreExtension
Name: “ParticipantContext SQL Store Extension”
Overview: No overview provided.
Configuration
Key | Required | Type | Default | Pattern | Min | Max | Description |
---|---|---|---|---|---|---|---|
edc.datasource.participantcontext.name | | string | default | | | | Datasource name for the ParticipantContext database |
edc.sql.store.participantcontext.datasource | | string | default | | | | The datasource to be used |
Provided services
org.eclipse.edc.identityhub.spi.store.ParticipantContextStore
Referenced (injected) services
org.eclipse.edc.transaction.datasource.spi.DataSourceRegistry
(required)org.eclipse.edc.transaction.spi.TransactionContext
(required)org.eclipse.edc.spi.types.TypeManager
(required)org.eclipse.edc.sql.QueryExecutor
(required)org.eclipse.edc.identityhub.store.sql.participantcontext.ParticipantContextStoreStatements
(optional)org.eclipse.edc.sql.bootstrapper.SqlSchemaBootstrapper
(required)
Module identity-hub-participants
Artifact: org.eclipse.edc:identity-hub-participants:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.identityhub.participantcontext.ParticipantContextExtension
Name: “ParticipantContext Extension”
Overview: No overview provided.
Configuration
None
Provided services
org.eclipse.edc.identityhub.spi.participantcontext.ParticipantContextService
org.eclipse.edc.identityhub.spi.participantcontext.events.ParticipantContextObservable
Referenced (injected) services
org.eclipse.edc.identityhub.spi.store.ParticipantContextStore
(required)org.eclipse.edc.spi.security.Vault
(required)org.eclipse.edc.transaction.spi.TransactionContext
(required)org.eclipse.edc.identityhub.spi.keypair.KeyPairService
(required)java.time.Clock
(required)org.eclipse.edc.spi.event.EventRouter
(required)org.eclipse.edc.identithub.spi.did.store.DidResourceStore
(required)org.eclipse.edc.identityhub.spi.participantcontext.StsAccountProvisioner
(required)
Class: org.eclipse.edc.identityhub.participantcontext.ParticipantContextCoordinatorExtension
Name: “ParticipantContext Extension”
Overview: No overview provided.
Configuration
None
Provided services
None
Referenced (injected) services
org.eclipse.edc.identithub.spi.did.DidDocumentService
(required)org.eclipse.edc.identityhub.spi.keypair.KeyPairService
(required)java.time.Clock
(required)org.eclipse.edc.spi.event.EventRouter
(required)org.eclipse.edc.identityhub.spi.participantcontext.ParticipantContextService
(required)
Module identityhub-api-authentication
Artifact: org.eclipse.edc:identityhub-api-authentication:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.identityhub.api.ApiAuthenticationExtension
Name: “Identity API Authentication Extension”
Overview: No overview provided.
Configuration
None
Provided services
None
Referenced (injected) services
org.eclipse.edc.web.spi.WebService
(required)org.eclipse.edc.identityhub.spi.participantcontext.ParticipantContextService
(required)org.eclipse.edc.spi.security.Vault
(required)
Module identityhub-api-authorization
Artifact: org.eclipse.edc:identityhub-api-authorization:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.identityhub.api.ApiAuthorizationExtension
Name: “Identity API Authorization Extension”
Overview: No overview provided.
Configuration
None
Provided services
org.eclipse.edc.identityhub.spi.AuthorizationService
Referenced (injected) services
None
Module keypair-api
Artifact: org.eclipse.edc:keypair-api:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.identityhub.api.keypair.KeyPairResourceManagementApiExtension
Name: “KeyPairResource management Identity API Extension”
Overview: No overview provided.
Configuration
None
Provided services
None
Referenced (injected) services
org.eclipse.edc.web.spi.WebService
(required)org.eclipse.edc.identityhub.spi.keypair.KeyPairService
(required)org.eclipse.edc.identityhub.spi.AuthorizationService
(required)org.eclipse.edc.spi.monitor.Monitor
(required)
Module local-did-publisher
Artifact: org.eclipse.edc:local-did-publisher:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.identityhub.publisher.did.local.LocalDidPublisherExtension
Name: “Local DID publisher extension”
Overview: No overview provided.
Configuration
None
Provided services
org.eclipse.edc.identithub.spi.did.events.DidDocumentObservable
Referenced (injected) services
org.eclipse.edc.identithub.spi.did.DidDocumentPublisherRegistry
(required)org.eclipse.edc.identithub.spi.did.store.DidResourceStore
(required)org.eclipse.edc.web.spi.WebService
(required)org.eclipse.edc.web.spi.configuration.WebServiceConfigurer
(required)org.eclipse.edc.web.spi.WebServer
(required)org.eclipse.edc.identithub.spi.did.DidWebParser
(optional)java.time.Clock
(required)org.eclipse.edc.spi.event.EventRouter
(required)
Module participant-context-api
Artifact: org.eclipse.edc:participant-context-api:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.identityhub.api.participantcontext.ParticipantContextManagementApiExtension
Name: “ParticipantContext management Identity API Extension”
Overview: No overview provided.
Configuration
None
Provided services
None
Referenced (injected) services
org.eclipse.edc.web.spi.WebService
(required)org.eclipse.edc.identityhub.spi.participantcontext.ParticipantContextService
(required)org.eclipse.edc.identityhub.spi.AuthorizationService
(required)org.eclipse.edc.spi.monitor.Monitor
(required)
Module presentation-api
Artifact: org.eclipse.edc:presentation-api:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.identityhub.api.PresentationApiExtension
Name: “Presentation API Extension”
Overview: No overview provided.
Configuration
None
Provided services
None
Referenced (injected) services
org.eclipse.edc.transform.spi.TypeTransformerRegistry
(required)org.eclipse.edc.validator.spi.JsonObjectValidatorRegistry
(required)org.eclipse.edc.web.spi.WebService
(required)org.eclipse.edc.identityhub.spi.verification.AccessTokenVerifier
(required)org.eclipse.edc.identityhub.spi.verifiablecredentials.resolution.CredentialQueryResolver
(required)org.eclipse.edc.identityhub.spi.verifiablecredentials.generator.VerifiablePresentationService
(required)org.eclipse.edc.jsonld.spi.JsonLd
(required)org.eclipse.edc.spi.types.TypeManager
(required)org.eclipse.edc.identityhub.spi.participantcontext.ParticipantContextService
(required)org.eclipse.edc.spi.system.apiversion.ApiVersionService
(required)
Module sts-account-provisioner
Artifact: org.eclipse.edc:sts-account-provisioner:0.10.1
Categories: None
Extension points
org.eclipse.edc.identityhub.common.provisioner.StsClientSecretGenerator
Extensions
Class: org.eclipse.edc.identityhub.common.provisioner.StsAccountProvisionerExtension
Name: “STS Account Provisioner Extension”
Overview: No overview provided.
Configuration
None
Provided services
org.eclipse.edc.identityhub.spi.participantcontext.StsAccountProvisioner
Referenced (injected) services
org.eclipse.edc.spi.event.EventRouter
(required)org.eclipse.edc.spi.security.Vault
(required)org.eclipse.edc.identityhub.common.provisioner.StsClientSecretGenerator
(optional)org.eclipse.edc.identityhub.spi.participantcontext.StsAccountService
(optional)
Module sts-account-service-local
Artifact: org.eclipse.edc:sts-account-service-local:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.identityhub.sts.accountservice.LocalStsAccountServiceExtension
Name: “Local (embedded) STS Account Service Extension”
Overview: No overview provided.
Configuration
None
Provided services
org.eclipse.edc.identityhub.spi.participantcontext.StsAccountService
Referenced (injected) services
org.eclipse.edc.iam.identitytrust.sts.spi.store.StsAccountStore
(required)org.eclipse.edc.transaction.spi.TransactionContext
(required)
Module sts-account-service-remote
Artifact: org.eclipse.edc:sts-account-service-remote:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.identityhub.sts.accountservice.RemoteStsAccountServiceExtension
Name: “Remote STS Account Service Extension”
Overview: No overview provided.
Configuration
Key | Required | Type | Default | Pattern | Min | Max | Description |
---|---|---|---|---|---|---|---|
edc.sts.accounts.api.auth.header.name | | string | x-api-key | | | | The name of the Auth header to use. Could be ‘Authorization’, some custom auth header, etc. |
edc.sts.accounts.api.auth.header.value | | string | `` | | | | The value of the Auth header to use. Currently we only support static values, e.g. tokens etc. |
edc.sts.account.api.url | | string | `` | | | | The base URL of the remote STS Accounts API |
Provided services
org.eclipse.edc.identityhub.spi.participantcontext.StsAccountService
Referenced (injected) services
org.eclipse.edc.http.spi.EdcHttpClient
(required)org.eclipse.edc.spi.types.TypeManager
(required)
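When STS accounts are managed by a remote Accounts API instead of the embedded local service, the extension above needs the API base URL and a static auth header. A sketch with placeholder values:

```java
import java.util.Map;

// Placeholder values; keys are taken from the configuration table above.
class RemoteStsAccountSettingsExample {
    static final Map<String, String> SETTINGS = Map.of(
            "edc.sts.account.api.url", "https://sts.example.com/accounts",
            "edc.sts.accounts.api.auth.header.name", "x-api-key",
            "edc.sts.accounts.api.auth.header.value", "<static-api-key>"
    );
}
```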
Module verifiable-credentials-api
Artifact: org.eclipse.edc:verifiable-credentials-api:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.identityhub.api.verifiablecredentials.VerifiableCredentialApiExtension
Name: “VerifiableCredentials API Extension”
Overview: No overview provided.
Configuration
None
Provided services
None
Referenced (injected) services
org.eclipse.edc.transform.spi.TypeTransformerRegistry
(required)org.eclipse.edc.web.spi.WebService
(required)org.eclipse.edc.identityhub.spi.store.CredentialStore
(required)org.eclipse.edc.identityhub.spi.AuthorizationService
(required)
3.3 - Federated-Catalog
Module connector-runtime
Artifact: org.eclipse.edc:connector-runtime:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.federatedcatalog.end2end.DataplaneInstanceRegistrationExtension
Name: “DataplaneInstanceRegistrationExtension”
Configuration
None
Provided services
org.eclipse.edc.connector.dataplane.selector.spi.client.DataPlaneClientFactory
Referenced (injected) services
org.eclipse.edc.connector.dataplane.selector.spi.store.DataPlaneInstanceStore
(required)
Module crawler-spi
Name: Crawler services
Artifact: org.eclipse.edc:crawler-spi:0.10.1
Categories: None
Extension points
org.eclipse.edc.crawler.spi.TargetNodeDirectory
org.eclipse.edc.crawler.spi.TargetNodeFilter
Extensions
Module federated-catalog-api
Artifact: org.eclipse.edc:federated-catalog-api:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.catalog.api.query.FederatedCatalogApiExtension
Name: “Cache Query API Extension”
Overview: No overview provided.
Configuration
None
Provided services
None
Referenced (injected) services
org.eclipse.edc.web.spi.WebService
(required)org.eclipse.edc.catalog.spi.QueryService
(required)org.eclipse.edc.spi.system.health.HealthCheckService
(optional)org.eclipse.edc.jsonld.spi.JsonLd
(required)org.eclipse.edc.spi.types.TypeManager
(required)org.eclipse.edc.transform.spi.TypeTransformerRegistry
(required)org.eclipse.edc.spi.system.apiversion.ApiVersionService
(required)
Module federated-catalog-cache-sql
Artifact: org.eclipse.edc:federated-catalog-cache-sql:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.catalog.store.sql.SqlFederatedCatalogCacheExtension
Name: “SQL federated catalog cache”
Overview: No overview provided.
Configuration
Key | Required | Type | Default | Pattern | Min | Max | Description |
---|---|---|---|---|---|---|---|
edc.datasource.federatedcatalog.name | | string | `` | | | | |
edc.sql.store.federatedcatalog.datasource | | string | default | | | | The datasource to be used |
Provided services
org.eclipse.edc.catalog.spi.FederatedCatalogCache
Referenced (injected) services
org.eclipse.edc.transaction.datasource.spi.DataSourceRegistry
(required)org.eclipse.edc.transaction.spi.TransactionContext
(required)org.eclipse.edc.catalog.store.sql.FederatedCatalogCacheStatements
(optional)org.eclipse.edc.spi.types.TypeManager
(required)org.eclipse.edc.sql.QueryExecutor
(required)org.eclipse.edc.sql.bootstrapper.SqlSchemaBootstrapper
(required)
Module federated-catalog-core
Artifact: org.eclipse.edc:federated-catalog-core:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.catalog.cache.FederatedCatalogDefaultServicesExtension
Name: “FederatedCatalogDefaultServicesExtension”
Overview: No overview provided.
Configuration
Key | Required | Type | Default | Pattern | Min | Max | Description |
---|---|---|---|---|---|---|---|
edc.catalog.cache.execution.period.seconds | | string | `` | | | | The time to elapse between two crawl runs |
edc.catalog.cache.partition.num.crawlers | | string | `` | | | | The number of crawlers (execution threads) that should be used. The engine will re-use crawlers when necessary. |
edc.catalog.cache.execution.delay.seconds | | string | `` | | | | The initial delay for the cache crawler engine |
Provided services
org.eclipse.edc.catalog.spi.FederatedCatalogCache
org.eclipse.edc.crawler.spi.TargetNodeDirectory
org.eclipse.edc.catalog.spi.QueryService
org.eclipse.edc.crawler.spi.model.ExecutionPlan
Referenced (injected) services
org.eclipse.edc.catalog.spi.FederatedCatalogCache
(required)
Class: org.eclipse.edc.catalog.cache.FederatedCatalogCacheExtension
Name: “Federated Catalog Cache”
Overview: No overview provided.
Configuration
Key | Required | Type | Default | Pattern | Min | Max | Description |
---|---|---|---|---|---|---|---|
edc.catalog.cache.execution.enabled | | string | `` | | | | |
Provided services
org.eclipse.edc.crawler.spi.CrawlerActionRegistry
Referenced (injected) services
org.eclipse.edc.catalog.spi.FederatedCatalogCache
(required)org.eclipse.edc.spi.system.health.HealthCheckService
(optional)org.eclipse.edc.spi.message.RemoteMessageDispatcherRegistry
(required)org.eclipse.edc.crawler.spi.TargetNodeDirectory
(required)org.eclipse.edc.crawler.spi.TargetNodeFilter
(optional)org.eclipse.edc.crawler.spi.model.ExecutionPlan
(optional)org.eclipse.edc.spi.types.TypeManager
(required)org.eclipse.edc.transform.spi.TypeTransformerRegistry
(required)org.eclipse.edc.jsonld.spi.JsonLd
(required)org.eclipse.edc.spi.monitor.Monitor
(required)org.eclipse.edc.transform.spi.TypeTransformerRegistry
(required)
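The crawler engine behind the federated catalog cache is driven by the settings listed for the two extensions above: whether periodic crawling is enabled, how often it runs, the initial delay, and how many crawler threads are used. A sketch with placeholder values:

```java
import java.util.Map;

// Placeholder values; keys are taken from the configuration tables above.
class CatalogCrawlerSettingsExample {
    static final Map<String, String> SETTINGS = Map.of(
            "edc.catalog.cache.execution.enabled", "true",
            "edc.catalog.cache.execution.period.seconds", "60",
            "edc.catalog.cache.execution.delay.seconds", "10",
            // Number of crawler threads the engine may reuse
            "edc.catalog.cache.partition.num.crawlers", "2"
    );
}
```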
Module federated-catalog-spi
Name: Catalog services
Artifact: org.eclipse.edc:federated-catalog-spi:0.10.1
Categories: None
Extension points
org.eclipse.edc.catalog.spi.FederatedCatalogCache
Extensions
Module target-node-directory-sql
Artifact: org.eclipse.edc:target-node-directory-sql:0.10.1
Categories: None
Extension points
None
Extensions
Class: org.eclipse.edc.catalog.store.sql.SqlTargetNodeDirectoryExtension
Name: “SQL target node directory”
Overview: No overview provided.
Configuration
Key | Required | Type | Default | Pattern | Min | Max | Description |
---|---|---|---|---|---|---|---|
edc.sql.store.targetnodedirectory.datasource | | string | default | | | | The datasource to be used |
Provided services
org.eclipse.edc.crawler.spi.TargetNodeDirectory
Referenced (injected) services
org.eclipse.edc.transaction.datasource.spi.DataSourceRegistry
(required)org.eclipse.edc.transaction.spi.TransactionContext
(required)org.eclipse.edc.catalog.store.sql.TargetNodeStatements
(optional)org.eclipse.edc.spi.types.TypeManager
(required)org.eclipse.edc.sql.QueryExecutor
(required)org.eclipse.edc.sql.bootstrapper.SqlSchemaBootstrapper
(required)
4 - Known Friends
To learn how we define “adoption” and how to submit a feature, please take a look at our
guidelines for submitting adoption requests.
Title | Description | Links |
---|---|---|
EDC Extension for Asset Administration Shell (AAS) | Asset Administration Shell (AAS) data can be manually shared over the EDC by linking an EDC Asset to the HTTP endpoint of the specific AAS element. Additionally, contracts and policies have to be defined for each element. In order to minimize configuration effort and prevent errors, this extension is able to link existing AAS services and their elements to the EDC automatically. Furthermore, this extension can also start an AAS service by reading a static AAS model file. A default contract and policy can be chosen to be applied to all elements; for critical elements, additional contracts and policies can be defined. External changes to the structure of an AAS are automatically synchronized by the extension. | Link to repository |
Data tracking by auditing data | Proof of concept of how to track data usage by aggregating audit-logs of different components in a single instance: The work presents a first proof of concept of how traceability of data usage can be implemented using the EDC Connector event framework and audit logging. In this PoC, the traceability of data is limited to the AWS dataplane. The EDC Connector logs which exact assets are stored with which key in the AWS bucket. With this information, data usage can be traced from the shared logs of the EDC Connector and the AWS S3 bucket. Elasticsearch was chosen as the instance to merge both logs in this project. A simple Python script takes over the analysis of the log data. | Link to repository |
EDC GUI | Extended EDC Data Dashboard that integrates the open-source EDC Connector interfaces while adding asset properties and form validation, providing design and UX changes, and introducing configuration profiles. | Link to repository |
EDC Connector HTTP client | An HTTP client to communicate with the EDC Connector for Node.js and the browser. | Link to repository npm |
Integration for Microsoft Dynamics 365 and Power Platform | The prototype demonstrates how to publish product information from a Microsoft Power App to a participant in an existing dataspace. The Microsoft Power Automate custom connector calls the EDC endpoints from the Nocode/lowcode platform to publish an asset and create a contract. This example shows the integration into the Microsoft Dataverse. | Link to repository |
Silicon Economy EDC | The Silicon Economy EDC is a configured version of the Eclipse Dataspace Components (EDC) Connector. It is used and specialized to easily integrate Silicon Economy components with the IDS. | Link to repository |
EDC Extension for IONOS S3 storage | An EDC extension that allows the connector to save and access files stored in IONOS S3 storage. | Link to repository |
EDC metadata extractor extension | This extension is a PoC to automatically extract metadata of a file that can be used for further processing (e.g., calculating the FAIRness score). | Link to repository |
Huawei Dataspace Components | Technology repository containing OBS (S3-compatible object storage hosted in the cloud) and GaussDB (a relational database based on Postgres 9.2.4 as data retention backend) extensions | Link to repository |
… | … | … |
5 - Getting adopted
This document is intended as a guideline for contributors who have already implemented a feature, e.g. an
extension, or intend to do so, and are looking for ways to upstream that feature into EDC.
There are currently two possible levels of adoption for the EDC project:
- incorporate a feature as core EDC component
- reference a feature as “friend”
Get referenced as “friend”
This means we will add a link to our known friends list, where we reference projects and features
that we are aware of. These are repositories that have no direct affiliation with EDC and are hosted outside the
eclipse-edc
GitHub organization. We call this a “friend” of EDC (derived from the C++ friend class
concept).
In order to become a “friend” of EDC, we do a quick scan of the code base to make sure it does not contain anything
offensive or anything that contradicts our code of conduct, ethics, or other core OSS values.
The EDC core team does not maintain or endorse “friend” projects in any way, nor is it responsible for them, but we do
provide a URL list to make it easier for other developers to find related projects and get an overview of EDC adoption
in the market.
This is the easiest way to “get in” and will be the most suitable form of adoption for most features and projects.
Get adopted in EDC core
This means the contribution gets added to the EDC code base and is henceforth maintained by the EDC core team. The
barrier to entry for this is much higher than for “friends”, and a more in-depth review of the code will be performed.
Note that this covers both what we call the EDC Core repository and any
current or future repositories in the eclipse-edc
GitHub organization.
It is up to the committers to decide where the code will eventually be hosted in case of adoption.
However, as a preliminary check, please go through the following questions:
Why should this contribution be adopted?
Please argue why this feature must be hosted upstream and be maintained by the EDC core team.
Could it be achieved with existing functionality? If not, why?
If existing code can achieve the same thing with little modification, that is usually the route the EDC core team
prefers. We aim to keep the code succinct and want to avoid similar or duplicate code. Make sure you understand the
EDC code base well!
Are there multiple use cases or applications who will benefit from the contribution?
Basically, we want you to explain who will use the feature and why, thereby making the case that it is well suited for
adoption into the core code base. One-off features are better maintained externally.
Can it be achieved without introducing third-party dependencies? If not, which ones?
EDC is a platform rather than an application; therefore, we are extremely careful when it comes to introducing
third-party libraries. The reasons are diverse: security, license issues, and overall JAR weight, to mention just a few
important ones.
Features that do not work well in clustered environments are difficult to adopt, since EDC is designed from the ground
up to be stateless and clusterable. Similarly, features that depend on particular operating systems are difficult to
justify.
Is it going to be a self-contained feature, or would it cut across the entire code base?
Features that have a large impact on the code base are very complex to test thoroughly; they have a high chance of
destabilizing the code and require careful inspection. Self-contained features, on the other hand, are easier to
isolate and test.
And on a more general level: when you submit an application for adopting a feature, be prepared to answer all of these
questions in an exhaustive and coherent way!
Note that even if all the aforementioned points are answered satisfactorily, the EDC core team reserves the right to
ultimately decide whether a feature will get adopted or not.
Submitting an application
Please open an issue using the adoption request
template, fill out all the sections to the best of your knowledge and wait to hear back from the EDC core team. We will
comment in the issue, or reach out to you directly. Be aware that omitting sections from the application will greatly
diminish the chance of approval.
6 - Committers
Committers decide what code goes into the code base, they decide how a project builds, and they ultimately decide what
gets delivered to the adopter community. With awesome power comes awesome responsibility, and
so the Open Source Rules of Engagement
described by the Eclipse Foundation Development Process put
meritocracy on equal footing with transparency and openness: becoming a committer isn’t necessarily hard, but it does
require a demonstration of merit. Committers:
- Operate in an open, transparent, and meritocratic manner;
- Write code (and other project content) and push it directly into the project’s source code repository;
- Review contributions (merge and pull requests) from contributors;
- Engage in the Intellectual Property Due Diligence Process;
- Vote in committer and project lead elections;
- Engage in the project planning process; and
- Otherwise represent the interests of the open source project.
For Eclipse projects (and the open source world in general), committers are the ones who hold the keys. Committers are
either appointed at the time of project creation or elected by the existing project team.
Inactive Committers
It’s inevitable, but there are times when someone shifts focus, changes jobs, or retires from a particular area of the
project (for a period of time). These people may be experts in certain areas of the codebase or representatives for
certain topics, but can no longer devote the time necessary to take on the responsibilities of a Committer role.
However, being a Committer within an Eclipse Foundation project comes with an elevated set of permissions, and these
capabilities should not be used by those who are not familiar with the current state of the EDC project.
From time to time, it is necessary to prune the internal organization and remove inactive members. A core principle in
maintaining a healthy community is encouraging active participation. Those listed as Committers of the project have a
higher activity requirement, as they directly impact the ability of others to contribute. Therefore, members who have
been absent from the project for a long period of time and have had no activity will be retired from their role as
Committers in the EDC and will be required to go through the meritocratic process again after re-familiarizing
themselves with the current state. Committers who can no longer devote the time are kindly asked to follow the
retirement process of the Eclipse Foundation.
According to the EF rules, before retiring a Committer, the project’s community will be informed of the change and the
Committer must be given a chance to defend retaining their status via the project’s dev-list.
To honor their contributions, retired Committers are listed as Historic Committers on the
project’s Who’s Involved page. When a Committer returns to
being more active in that area, they may be promoted back by decision of the Committers’ committee. However, after an
extended period away from the project with no activity, they will demonstrably have to re-familiarize themselves with
the current state before they can contribute effectively.