OpenShift Bugs / OCPBUGS-3910

HTTPS service cannot be accessed from normal worker to edge worker on AWS Local Zone cluster

    • Type: Bug
    • Resolution: Duplicate
    • Affects Version: 4.12
    • Component: Documentation
    • Critical
    • Rejected
      Description of problem:

      On a cluster with compute nodes in an AWS Local Zone (edge nodes), pods on normal workers cannot reach the HTTPS service of pods on edge workers, while the HTTP service works fine.
      
      In summary:
      
      Normal worker pods -> edge worker pods:   HTTP works, HTTPS does NOT work
      Edge worker pods   -> edge worker pods:   HTTP works, HTTPS works
      Normal worker pods -> normal worker pods: HTTP works, HTTPS works
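
      The matrix above can be re-checked from any source pod with a short loop. This is only a sketch: the pod name and target IPs below are placeholders (take real values from `oc get pods -n default -o wide`), and ports 8080/8443 are the ones the test image serves in the transcript further down.

```shell
# Hypothetical names: substitute a real source pod plus the pod IPs of
# one normal-worker pod and one edge-worker pod (oc get pods -o wide).
POD=hello-xxxxx
for IP in 10.128.2.10 10.130.2.15; do
  oc rsh -n default "$POD" curl -s -m 5 "http://$IP:8080" >/dev/null \
    && echo "HTTP  $IP: work" || echo "HTTP  $IP: NOT work"
  oc rsh -n default "$POD" curl -sk -m 5 "https://$IP:8443" >/dev/null \
    && echo "HTTPS $IP: work" || echo "HTTPS $IP: NOT work"
done
```

      Run it once from a normal-worker pod and once from an edge-worker pod to reproduce all three rows of the matrix.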

      Version-Release number of selected component (if applicable):

      4.12

      How reproducible:

      always

      Steps to Reproduce:

       1. Deploy an AWS Local Zone cluster; refer to story https://issues.redhat.com/browse/SPLAT-557 and the document https://github.com/mtulio/mtulio.labs/blob/article-ocp-aws-lz/docs/articles/ocp-aws-local-zones-day-0.md. The nodes in the Local Zone are assigned the `edge` role.
      2. Apply the following yaml 
      echo 'apiVersion: apps/v1
      kind: DaemonSet
      metadata:
        name: hello
        namespace: default
        labels:
          name: test
      spec:
        selector:
          matchLabels:
            name: test
        updateStrategy:
          type: RollingUpdate
        template:
          metadata:
            labels:
              name: test
          spec:
            nodeSelector:
              kubernetes.io/arch: amd64
            tolerations:
            - operator: Exists
            containers:
            - name: hello-pod
        image: quay.io/openshifttest/nginx-alpine@sha256:5d3f3372288b8a93fc9fc7747925df2328c24db41e4b4226126c3af293c5ad88' | oc create -f -
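
      The curl transcript below shows the test image listening on 8080 (HTTP) and 8443 (HTTPS). If a stable address is preferred over raw pod IPs, a Service over the same pods could be added; this is a sketch only, and the Service (including the name `hello-svc`) is an assumption, not part of the original reproducer:

```shell
# Optional: expose the DaemonSet pods behind a Service (hypothetical
# addition; selects the same name=test pods created above).
echo 'apiVersion: v1
kind: Service
metadata:
  name: hello-svc
  namespace: default
spec:
  selector:
    name: test
  ports:
  - name: http
    port: 8080
    targetPort: 8080
  - name: https
    port: 8443
    targetPort: 8443' | oc create -f -
```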
      
      
      
      
      3. Access edge worker pods from a normal worker pod: HTTP works well, but HTTPS does NOT.
      $ oc rsh -n default hello-gx5n5
      # curl 10.130.2.15:8080
      Hello-OpenShift hello-fblsj http-8080
      # curl https://10.130.2.15:8443 -k -vv
      *   Trying 10.130.2.15:8443...
      * Connected to 10.130.2.15 (10.130.2.15) port 8443 (#0)
      * ALPN, offering h2
      * ALPN, offering http/1.1
      * successfully set certificate verify locations:
      *   CAfile: /etc/ssl/certs/ca-certificates.crt
        CApath: none
      * TLSv1.3 (OUT), TLS handshake, Client hello (1):
      * OpenSSL SSL_connect: Connection reset by peer in connection to 10.130.2.15:8443
      * Closing connection 0
      curl: (35) OpenSSL SSL_connect: Connection reset by peer in connection to 10.130.2.15:8443
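
      One detail worth noting: the plain-HTTP exchange above fits in a single small packet, while the TLS handshake's certificate flight is typically several kilobytes, so a reset that occurs only for HTTPS is a common symptom of a path-MTU mismatch between the zones. A hedged way to probe this, reusing the pod and IP from the transcript, is a don't-fragment ping sweep (requires iputils ping for `-M do`; if the pod image only has BusyBox ping, run this from the node via `oc debug node/<name>` instead):

```shell
# Probe the usable path MTU from the client pod to the edge pod.
# ping -M do sets the Don't Fragment bit; -s is the ICMP payload,
# so the on-wire IP packet is SIZE + 28 bytes (20 IP + 8 ICMP header).
TARGET=10.130.2.15
for SIZE in 1472 1372 1272 1172; do
  if oc rsh -n default hello-gx5n5 ping -c 1 -W 2 -M do -s "$SIZE" "$TARGET" >/dev/null 2>&1; then
    echo "path MTU is at least $((SIZE + 28)) bytes"
    break
  fi
done
```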
      
      
      Actual results:

      HTTPS requests from normal worker pods to edge worker pods fail with `curl: (35) OpenSSL SSL_connect: Connection reset by peer`; HTTP requests succeed.

      Expected results:

      HTTPS service can be accessed from normal workers to edge workers on an AWS Local Zone cluster.

      Additional info:

       

              sdudhgao@redhat.com Servesha Dudhgaonkar
              zzhao1@redhat.com Zhanqi Zhao
              Yunfei Jiang
              Votes: 1
              Watchers: 7