AMQ Streams / ENTMQST-3793

[DOC OCP] MirrorMaker 2 needs to be much more thoroughly documented, particularly as regards its use with/on OpenShift


    • Type: Bug
    • Resolution: Done
    • Priority: Major
    • Fix Version/s: 2.1.0.GA
    • Component/s: documentation

      We are starting to get support requests from customers concerning setting up MirrorMaker 2 for Kafka replication between two sites. These customers are generally running their AMQ Streams Kafka installations on OpenShift, and also running MM2 on OpenShift, perhaps sharing a platform with one of the Kafka installations, perhaps not.

      There isn't a lot of documentation about MM2. It doesn't seem to be difficult to set up in simple cases, but we are lacking information for real production installations.

      1. We probably need to use OpenShift routes for communication between Kafka and MM2. That means that a route must actually be exposed on each of the Kafka installations, if it is not already. We should describe how to do this or, at least, link to the relevant Kafka documentation. We also need to document how to get the server certificates for the routes from the Kafka installations and how to make them available to MM2. The first part is documented in the Kafka context, but not in the context of MM2. We need to describe how to present the certificates to MM2: how to convert them to a suitable format (probably text embedded in a secret) and how to point MM2 at that secret. We also need to document what needs to be done (if anything) if the Kafka server certificates are self-signed or need specific CA certificates. A sketch of this configuration is included after this list.

      2. We need to document what kind of user account must exist on each of the Kafka installations; that is, we need to describe what permissions the user needs, or whether it is better to define a "superuser" in Kafka, and the security implications of each choice. There needs to be a documented way to extract the credentials for the user accounts from the Kafka hosts and to present those credentials to MM2. Probably the password is stored in a secret, and we will need to define a comparable secret on the MM2 host for each of the Kafka clusters. Similar considerations apply, I guess, if the user account is authenticated by a certificate. A sketch of a suitable KafkaUser resource is included after this list.

      3. The implications for MM2 of the various kinds of authentication that Kafka can use (SCRAM, OAuth, ...) should be described, at least in outline. Maybe MM2 is expected to work with one particular authentication mode, in which case we should state what that is. The authentication fragments sketched after this list show the options I would expect to be covered.

      4. We ought to be able to give at least some general guidance about the resource (CPU and memory) requirements and how to estimate them for a real installation. A sketch of where these values are set is included after this list.

      5. I think we should try to suggest which of the vast number of CRD parameters must be specified by the installer right from the start, and which could be left at their defaults or adjusted later. A minimal example is sketched after this list.
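
      For item 1, the following is only a sketch of what I would expect the documentation to show. It assumes the CA certificate has been copied out of the source cluster's <cluster-name>-cluster-ca-cert secret and recreated as a secret in the namespace where MM2 runs; the names, hostnames and field layout would need to be verified, and everything here is invented for illustration.

      # Secret in the MM2 namespace holding the PEM CA certificate copied from the
      # source cluster's <cluster-name>-cluster-ca-cert secret (name is hypothetical)
      apiVersion: v1
      kind: Secret
      metadata:
        name: source-ca-cert
      type: Opaque
      data:
        ca.crt: <base64-encoded PEM certificate>
      ---
      # Fragment of a KafkaMirrorMaker2 resource: the source cluster is reached through
      # its OpenShift route on port 443, trusting the CA certificate from the secret above;
      # the target cluster runs on the same OpenShift cluster as MM2, so the internal
      # bootstrap service and the operator-managed CA secret can be used directly
      apiVersion: kafka.strimzi.io/v1beta2
      kind: KafkaMirrorMaker2
      metadata:
        name: my-mirror-maker-2
      spec:
        connectCluster: "target"
        clusters:
          - alias: "source"
            bootstrapServers: source-kafka-bootstrap-kafka.apps.example.com:443
            tls:
              trustedCertificates:
                - secretName: source-ca-cert
                  certificate: ca.crt
          - alias: "target"
            bootstrapServers: target-kafka-bootstrap:9093
            tls:
              trustedCertificates:
                - secretName: target-cluster-ca-cert
                  certificate: ca.crt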
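
      For item 2, this is the kind of KafkaUser resource I would expect the documentation to describe on the source cluster (here assumed to be named "source"). The ACL list is purely illustrative; working out the exact permissions MM2 needs, or whether a superuser is preferable, is exactly what the documentation should settle. The User Operator stores the generated password in a secret with the same name as the user, which then has to be copied into the namespace where MM2 runs if that is a different OpenShift cluster.

      apiVersion: kafka.strimzi.io/v1beta2
      kind: KafkaUser
      metadata:
        name: mm2-source-user
        labels:
          strimzi.io/cluster: source        # the Kafka cluster this user belongs to
      spec:
        authentication:
          type: scram-sha-512
        authorization:
          type: simple
          acls:
            # Read and describe the topics to be mirrored (illustrative, not a verified list)
            - resource:
                type: topic
                name: "*"
                patternType: literal
              operation: Read
            - resource:
                type: topic
                name: "*"
                patternType: literal
              operation: DescribeConfigs
            # Read consumer group offsets so checkpoints can be emitted for them
            - resource:
                type: group
                name: "*"
                patternType: literal
              operation: Read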
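
      For item 3, each entry under spec.clusters in the KafkaMirrorMaker2 resource takes its own authentication block. The fragments below (secret and endpoint names are invented) show the forms I would expect the documentation to cover; they are alternatives, one per cluster entry, not to be combined.

      # SCRAM-SHA-512: password read from a secret in the MM2 namespace
      authentication:
        type: scram-sha-512
        username: mm2-source-user
        passwordSecret:
          secretName: mm2-source-user
          password: password
      ---
      # Mutual TLS: client certificate and key from the secret created for the KafkaUser
      authentication:
        type: tls
        certificateAndKey:
          secretName: mm2-source-user
          certificate: user.crt
          key: user.key
      ---
      # OAuth: client credentials against an external authorization server
      authentication:
        type: oauth
        clientId: mm2
        clientSecret:
          secretName: mm2-oauth-secret
          key: clientSecret
        tokenEndpointUri: https://auth.example.com/realms/kafka/protocol/openid-connect/token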
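
      For item 4, I don't have numbers to suggest, but the documentation should at least show where requests and limits go and note that they apply to the Kafka Connect pods that MM2 runs on. The values below are placeholders, not recommendations.

      # Fragment of the KafkaMirrorMaker2 spec
      spec:
        replicas: 2
        resources:
          requests:
            cpu: "1"
            memory: 2Gi
          limits:
            cpu: "2"
            memory: 2Gi
        jvmOptions:
          "-Xms": 1024m
          "-Xmx": 1024m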
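
      For item 5, something like the following seems to be the minimum that has to be specified up front: the number of replicas, the bootstrap address and alias of each cluster, which alias backs the underlying Connect cluster, and at least one mirror. Everything else (TLS, authentication, connector tuning, resources) can be layered on afterwards. Names and the version are again invented, and this assumes both clusters are reachable over plain listeners, which a production setup would not use.

      apiVersion: kafka.strimzi.io/v1beta2
      kind: KafkaMirrorMaker2
      metadata:
        name: my-mirror-maker-2
      spec:
        version: 3.1.0                  # should match the Kafka version shipped with the installed Streams release
        replicas: 1
        connectCluster: "target"        # must be the alias of the cluster MM2 writes to
        clusters:
          - alias: "source"
            bootstrapServers: source-kafka-bootstrap:9092
          - alias: "target"
            bootstrapServers: target-kafka-bootstrap:9092
        mirrors:
          - sourceCluster: "source"
            targetCluster: "target"
            sourceConnector: {}         # accept the connector defaults to begin with
            topicsPattern: ".*"
            groupsPattern: ".*"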

       

              Paul Mellor
              Kevin Boone
              Lukas Kral
              Votes: 0
              Watchers: 4
