GitLab CI in GKE private cluster can't connect to master

So far we have been using a GKE public cluster for all our workloads. We have now created a second, private cluster (still GKE) with improved security and availability (the old one is single-zone, the new one is regional). Our code is hosted on GitLab.com, but we run self-hosted GitLab CI runners in the clusters.



The runner works fine on the public cluster; all workloads complete successfully. On the private cluster, however, every kubectl command in the CI fails with Unable to connect to the server: dial tcp <IP>:443: i/o timeout. The CI configuration has not changed - same base image, still using the gcloud SDK with a CI-specific service account to authenticate to the cluster.
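For reference, the authentication step in the CI script looks roughly like this (cluster name, region, and the key-file variable are placeholders, not the actual values):

    # Authenticate gcloud with the CI-specific service account
    gcloud auth activate-service-account --key-file="$GCLOUD_SERVICE_KEY"

    # Write the cluster's API server endpoint and credentials into kubeconfig
    gcloud container clusters get-credentials my-cluster --region europe-west1

    # On the private cluster, everything from here on times out
    kubectl get pods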



Both clusters have master authorized networks enabled, with only our office IPs on the allowed list. The master is accessible on a public IP. Authentication succeeds, and client certificates and basic auth are disabled on both. Cloud NAT is configured and the nodes have Internet access (they can pull container images, GitLab CI can connect, etc.).
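Both of these settings can be inspected with gcloud (cluster name and region are placeholders):

    # Show the authorized networks and the public/private endpoint configuration
    gcloud container clusters describe my-cluster --region europe-west1 \
        --format="yaml(masterAuthorizedNetworksConfig, privateClusterConfig)"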



Am I missing some vital configuration? What else should I be looking at?

kubernetes gitlab gitlab-ci google-kubernetes-engine
asked Nov 27 '18 at 11:52 by jj9987, edited Nov 27 '18 at 12:04

1 Answer

I have found the solution to my problem, though I am not fully sure of the cause.

I used gcloud container clusters get-credentials [CLUSTER_NAME], which returned the master's public endpoint. For some reason that endpoint is inaccessible from within the cluster - I assume using it would require adding the public IP of the NAT (which is not statically provided) to the authorized networks.

I added the --internal-ip flag instead, which returned the cluster's internal IP address. The CI is now able to connect to the master.

Source: https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-access-for-kubectl#internal_ip

tl;dr - gcloud container clusters get-credentials --internal-ip [CLUSTER_NAME]
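In the CI script the fix is a single flag (cluster name and region are placeholders; the runner has to sit inside the cluster's VPC for the internal IP to be reachable):

    # Before: kubeconfig points at the public endpoint, which times out behind Cloud NAT
    # gcloud container clusters get-credentials my-cluster --region europe-west1

    # After: kubeconfig points at the internal endpoint instead
    gcloud container clusters get-credentials my-cluster --region europe-west1 --internal-ip

    # Confirm which endpoint kubeconfig now points at
    kubectl config view --minify --output jsonpath='{.clusters[0].cluster.server}'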
• You can also get the private endpoint using kubectl get ep while connected to the cluster; it will list an endpoint called kubernetes, which shows the private IPs the cluster will use.
  – Patrick W, Nov 27 '18 at 17:43
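For illustration, that output looks something like this (the IP shown is made up):

    kubectl get endpoints kubernetes
    # NAME         ENDPOINTS        AGE
    # kubernetes   10.156.0.2:443   3d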
• @PatrickW Yes, it does list it, but kubectl must first be able to access the cluster. In my case, kubectl did not manage to connect to the cluster/master at all.
  – jj9987, Nov 27 '18 at 18:15

• Right, you'd have to run the original command once after the cluster is created, get the IP, and then use the internal IP moving forward. Unfortunately, it's not something that can be resolved otherwise using gcloud.
  – Patrick W, Nov 28 '18 at 15:20