Edoardo Pristeri

Data Model Interoperability in the EFPF Federation


In this blog we address the Data Model Interoperability Layer in the EFPF architecture. The objective of this layer is to support information exchange and business processes that span two or more of the EFPF tools, services, or platforms.

To make different platforms interoperable, their data models must be aligned so that tools and services can exchange machine or business data. The EFPF ecosystem supports two design strategies for providing interoperability between incompatible data models.

The first option is to use a set of standard data models at the level of the Data Spine in the EFPF federation. The standard data models enable a one-to-one translation strategy, as shown in Figure 1. In this scenario, data coming from a tool or service that uses a proprietary data model is converted on the EFPF Integration Flow Engine into the standard data model appropriate for that use-case. A set of standard data models for different use-cases is being evaluated in the EFPF project. Once transformed, the data can either be consumed directly by tools that adopt the standard data model or be converted back into other proprietary data models.


Figure 1: Interoperability using a standard data model
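
To make the approach concrete, the following is a minimal sketch of how such a one-to-one translation could look. The field names and the simplified "standard" structure are illustrative assumptions, not the actual EFPF standard data models or the mechanisms of the Integration Flow Engine.

    # Minimal sketch of the one-to-one (standard model) approach from Figure 1.
    # The field names and the simplified "standard" structure below are
    # illustrative assumptions, not the actual EFPF standard data models.

    def proprietary_to_standard(payload: dict) -> dict:
        """Translate a vendor-specific shopfloor reading into the shared model."""
        return {
            "sensorId": payload["dev"],             # vendor key -> standard key
            "observedProperty": payload["metric"],
            "value": float(payload["val"]),
            "timestamp": payload["ts"],             # assumed ISO 8601 in both models
        }

    def standard_to_consumer(standard: dict) -> dict:
        """Translate the shared model into one consumer's proprietary format."""
        return {
            "id": standard["sensorId"],
            "reading": {standard["observedProperty"]: standard["value"]},
            "recordedAt": standard["timestamp"],
        }

    if __name__ == "__main__":
        vendor_msg = {"dev": "press-07", "metric": "temperature",
                      "val": "81.4", "ts": "2021-06-01T10:15:00Z"}
        canonical = proprietary_to_standard(vendor_msg)
        print(standard_to_consumer(canonical))

Note that each tool only needs a mapping to and from the standard model, regardless of how many other tools participate.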


This first strategy is suitable in scenarios where the data producers want to attract many different data consumers (e.g. shopfloor data that can be consumed by different analytics services). It allows for more modularity and resilience to change, at the cost of slightly more complexity, since it involves more data model translations than the second option.

A second option is a one-to-many translation strategy in which each tool or service directly deals with all the data models used by the other tools or services, as shown in Figure 2. With this approach, the data is converted on the EFPF Integration Flow Engine directly from the proprietary data model of the tool or service producing the data into the data model of the tool or service consuming it.

Figure 2: Interoperability between custom data models
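
The sketch below illustrates this point-to-point approach under the same illustrative assumptions as before: every producer/consumer pair needs its own mapping, so the number of mappings grows with the number of directly connected tools.

    # Minimal sketch of the point-to-point approach from Figure 2: every
    # producer/consumer pair needs its own mapping. Tool names and fields
    # are illustrative assumptions, not actual EFPF tools or data models.

    def tool_a_to_tool_b(msg: dict) -> dict:
        return {"machine": msg["dev"], "temp_c": float(msg["val"])}

    def tool_a_to_tool_c(msg: dict) -> dict:
        return {"assetId": msg["dev"], "measurements": [{"t": float(msg["val"])}]}

    # One mapping per (producer, consumer) pair; with N directly connected
    # tools this table can grow towards N * (N - 1) entries.
    MAPPINGS = {
        ("tool_a", "tool_b"): tool_a_to_tool_b,
        ("tool_a", "tool_c"): tool_a_to_tool_c,
    }

    def translate(producer: str, consumer: str, msg: dict) -> dict:
        return MAPPINGS[(producer, consumer)](msg)

    if __name__ == "__main__":
        print(translate("tool_a", "tool_b", {"dev": "press-07", "val": "81.4"}))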


This second strategy is advised when, unlike in the first scenario, only a few tools or services need to be connected; its advantages fade in a scenario with many interconnected tools or services. In this second scenario the developers do not have to deal with the complexity of a central standard data model, at the cost of less interoperability with future additions.

In the EFPF federation, different technical solutions are available to developers to let them integrate their tools and services easily. Since the data model transformation process is transparent from the point of view of the tools and services, it can take place either in the EFPF Integration Flow Engine or outside it, on dedicated infrastructure. The benefit of integrating these transformation tools with the EFPF Data Spine is easy access to all of its components, such as the Message Broker or the API Security Gateway, which makes obtaining and granting access to transformed data both easy and secure.
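
As an illustration of the second case, a transformation running on dedicated infrastructure could publish its output to the Data Spine Message Broker over MQTT, roughly as sketched below. The hostname, topic, and credential handling are placeholders; the actual broker endpoints and the token flow through the API Security Gateway are described in the EFPF documentation.

    # Hedged sketch: a transformation running outside the Integration Flow
    # Engine publishes its output to a message broker over MQTT. The
    # hostname, topic and credentials are placeholders; the real Data Spine
    # broker endpoints and the token flow through the API Security Gateway
    # are documented on the EFPF Portal.
    import json
    import paho.mqtt.publish as publish

    BROKER_HOST = "broker.example.org"                 # placeholder endpoint
    TOPIC = "example/company-x/shopfloor/standard"     # illustrative topic

    def publish_transformed(standard_payload: dict) -> None:
        publish.single(
            TOPIC,
            payload=json.dumps(standard_payload),
            qos=1,
            hostname=BROKER_HOST,
            port=1883,                                  # a production setup would use TLS
            auth={"username": "service-account", "password": "gateway-issued-token"},
        )

    if __name__ == "__main__":
        publish_transformed({"sensorId": "press-07", "value": 81.4,
                             "timestamp": "2021-06-01T10:15:00Z"})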

Developers can use the resources provided on the EFPF Portal to learn about the technical solutions offered in the EFPF platform and start integrating their tools or services with it. These resources include example data model transformations using the different technologies, as well as blueprints for adapting the provided examples to other use-cases.

For more information or updates, use the contact form on the project website.
