Goals & Concepts

The operator grew out of a number of ideas that emerged from core requirements for modernizing the development process and making it more agile. The process itself is only one side of the coin, since modern software typically places many requirements on its environment and is usually highly interconnected. The other side of the coin is the actual application management (e.g. deployments, infrastructure provisioning, feature reviews), which is far more elaborate and therefore painful. As a result, many software development projects end up with very rigid structures that make a truly agile development process nearly impossible.

In the case of PXF, the whole application stack was quite complex from the start: based on AWS, microservices and micro frontends, it ultimately comprised a total of 20 deployments (13 backend services and 7 frontends), all of them highly interconnected and dependent on each other. There were exactly three supported environments (develop, preprod and production), which were managed by an external company. The effort required to introduce new deployments or adapt external resources was enormous and involved long waiting times. The project was therefore bound to these rigid structures, and many problems hampered both the development process and the evolution of the project. The following goals were formulated while thinking about how to break up those structures in order to modernize the process:

Main Goals

  • We wanted to simplify the management of different environments of the same application stack (different configurations for different purposes)
    • This enables the project to break out of fixed structures and evolve more into agile development with dynamic feature environments.
    • In order to support "physically separated" multi-tenancy we also need completely separate application environments per tenant or group of tenants.
  • High level of abstraction to mitigate possible future issues.
    • Avoid strict vendor lock-ins
    • Avoid recurring development effort when the stack evolves
  • Reduction of external dependencies for handling DevOps tasks
    • Instead, give the developers the opportunity to describe their application stack including all its requirements.
    • Project management should be able to configure and run individual application environments based on the underlying application stack.
    • Configuring and running a specific environment (e.g. for a tenant) must not require deep technical knowledge.
  • The creation and customization process of such application environments should be significantly accelerated (minutes instead of days).
  • The managed application stack should include all external resources of which this stack has an ownership.

Core Concepts

The tooling builds on a few simple but fundamental core concepts that ensure a stable, extensible and sustainable design covering the requirements of different projects and application stacks. The following concepts and principles form the foundation of the operator tooling:

  • High level of abstraction
    • Application developers know their application best!
    • Developers are often not able to provide and configure runtime environments for their applications including all their external needs.
    • So let the developers describe the requirements of their applications at an abstract, vendor-neutral level!
  • Separation of technical and runtime levels
    • Instead of mixing everything together and introducing hard-coding issues or the like, the operator follows a strict layered design pattern.
    • Two layers are involved in the management of application environments:
      • The application stack layer (also called metamodel (MM) layer) on which all applications are described with their technical requirements, interconnections as well as their configuration options.
      • The model layer (also referred to as silo layer) which consists of individual application silos that instantiate metamodels with a concrete configuration in order to serve their scope.
  • Separation of service and resource layers
    • To make application environments portable, the operator also separates the actual deployments from the resources they require.
    • The application layer is completely bound to Kubernetes and served by a Flux CD deployment process.
    • All external resources live on the resource layer and are abstracted so that the concrete providers managing this layer remain exchangeable. An application environment running on AWS is not necessarily tied to AWS!
    • Everything comes together at deployment time, when the applications of the service layer are connected to the resources created on the resource layer. Fully transparent and easy to understand.
  • Separation of deep technical application deployment descriptors from high level application runtime requirements
    • A final separation is made between the abstract service descriptions (in the MM) and the actual deployment descriptors that tell Kubernetes and Flux CD what to deploy and how. This is done using Helm charts and frees the operator from needing knowledge of the actual deployment structure.
    • This gap is closed when everything comes together: a silo, its metamodel, the created resources and the underlying deployment descriptors are combined and transformed into a so-called Helm release, which is passed to Flux CD to manage the application deployments.
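
To make the two-layer idea more tangible, the fragment below sketches how a metamodel service description and a silo instantiating it might look. The schema and all field names are invented for illustration; they are not the operator's actual format.

```yaml
# Hypothetical metamodel (MM) fragment -- field names are illustrative only.
services:
  backend-api:
    chart: backend-api          # Helm chart holding the deployment descriptors
    requires:
      database:                 # abstract requirement, no concrete provider named
        type: relational
    config:
      logLevel: info            # default, overridable per silo
---
# Hypothetical silo fragment instantiating the metamodel above.
silo: feature-review-42
metamodel: pxf-stack
config:
  backend-api:
    logLevel: debug             # silo-specific override
```

Note how the metamodel stays vendor-neutral: it only states that a relational database is required, while the resource layer decides which provider actually fulfils that requirement for a given silo.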
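
The final combination step described above can be sketched as follows: a silo's configuration is merged over its metamodel's defaults, and the outputs of the resource layer are injected, yielding the values of the resulting Helm release. This is a minimal sketch under assumed data shapes; the function and field names are not the operator's actual code.

```python
# Sketch of the "everything comes together" step: metamodel defaults,
# silo overrides and resource-layer outputs are combined into the
# values of a Helm release. All names are hypothetical.

def build_helm_values(metamodel_defaults, silo_config, resource_outputs):
    """Deep-merge silo config over metamodel defaults, then inject resources."""
    def deep_merge(base, override):
        merged = dict(base)
        for key, value in override.items():
            if isinstance(value, dict) and isinstance(merged.get(key), dict):
                merged[key] = deep_merge(merged[key], value)
            else:
                merged[key] = value
        return merged

    values = deep_merge(metamodel_defaults, silo_config)
    # Resource outputs (e.g. connection endpoints) created on the resource
    # layer are attached so the deployments can consume them.
    values["resources"] = resource_outputs
    return values

metamodel_defaults = {"backend": {"replicas": 1, "logLevel": "info"}}
silo_config = {"backend": {"replicas": 3}}
resource_outputs = {"database": {"host": "db.internal", "port": 5432}}

values = build_helm_values(metamodel_defaults, silo_config, resource_outputs)
print(values["backend"])  # silo override wins, defaults are kept
```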