Method to test if a number is a perfect power?


























Is there a general method for testing numbers to see if they are perfect $n$th powers?



For example, suppose that I did not know that $121$ was a perfect square. A naive test in code might be to check whether
$$\lfloor\sqrt{121}\rfloor=\sqrt{121}.$$



But I imagine there are much more efficient ways of doing this (if I'm working with numbers with many digits).
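In Python, for instance, the naive test can be made exact (avoiding floating-point rounding entirely) with the integer square root; this snippet is only an illustration of the idea, not a claim about the best method:

```python
import math

def is_perfect_square(n: int) -> bool:
    """Naive test: does the integer square root of n square back to n?"""
    if n < 0:
        return False
    r = math.isqrt(n)  # exact floor of the square root, no floating point
    return r * r == n
```

For example, `is_perfect_square(121)` is `True` while `is_perfect_square(40)` is `False`.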

































  • One very cheap, necessary condition is that $x^2 \equiv 0,1 \pmod 4$.
    – Alex R.
    5 hours ago










  • Are you given numbers $k$ and $n$ and asked to check whether $k$ is an $n$-th power? Or are you given just $k$ and asked to check whether $k$ is a perfect power?
    – Servaes
    5 hours ago










  • @Servaes, I was considering the first case, where I know both $k$ and $n$ and am trying to see if $k = a^n$ for a positive integer $a$.
    – D.B.
    5 hours ago










  • Wait, @Alex R., looking at your first comment: what about $x^2 = 40 \equiv 0 \pmod{4}$? Yet $40$ is not a perfect square.
    – D.B.
    5 hours ago










  • @D.B.: Hence it's a necessary condition: if $x^2$ is a perfect square, then $x^2 \equiv 0,1 \pmod{4}$. The contrapositive gives: if $y \equiv 2,3 \pmod{4}$, then $y$ cannot be a perfect square.
    – Alex R.
    5 hours ago





















number-theory perfect-powers






edited 5 hours ago









Chase Ryan Taylor











asked 5 hours ago









D.B.









5 Answers





































See Detecting perfect powers in essentially linear time - Daniel J. Bernstein:



https://cr.yp.to/papers/powers-ams.pdf

















































    In the specific case where you already know not only the number being checked but also the power, as the OP's comment to Servaes indicates, you have something like



    $$k = a^n \tag{1}\label{eq1}$$



    where $k$ and $n$ are known integers, with $a$ being an unknown value to check for being an integer. In this case, you can perhaps use a function to get the $n$-th root, such as the "pow" function in C/C++ with a second argument of $\frac{1.0}{n}$, to get something like



    $$a = \sqrt[n]{k} \tag{2}\label{eq2}$$



    Alternatively, taking natural logarithms of both sides (you could use any base, but I suspect that implementation-wise $e$ will likely be at least the fastest one, if not also the most accurate) gives



    $$\ln(k) = n\ln(a) \;\Rightarrow\; \ln(a) = \frac{\ln(k)}{n} \;\Rightarrow\; a = e^{\frac{\ln(k)}{n}} \tag{3}\label{eq3}$$



    As this involves $2$ basic steps of taking a logarithm and then exponentiating, it may take longer & involve more cumulative error than using \eqref{eq2} instead.



    As for speed & accuracy issues, I once did a test on all integers from $1$ to $5 \times 10^{11}$ to get their sixth roots (using pow), then cubed them by multiplying together $3$ times, plus another test getting all square roots directly (using sqrt), and then found the maximum of the absolute & relative differences. Note I used VS 2008 on an AMD FX(tm)-8320 Eight-Core Processor, 3.5 GHz, 8 GB RAM, 8 MB L2 & L3 cache, 64-bit Windows 7 computer. Square roots took 16403 seconds, while sixth roots then cubing took 37915 seconds. The max. absolute difference was $8.149\ldots \times 10^{-10}$ and the max. relative difference was $1.332\ldots \times 10^{-15}$. This gives an indication of how relatively fast & accurate the library routines are, although results will obviously vary depending on the compiler & machine involved.



    Either method, on a computer, will give a floating-point value that is, even for large values of $k$, relatively close to the correct value of $a$.



    You can now use any number of algorithms to relatively quickly & easily determine $a$ if it's an integer, or show that it's not. For example, you can start with the integer part obtained in \eqref{eq2}, call it $a_1$, and compute $k_1 = a_1^n$. If $k_1$ is not correct, then if it's less than $k$, check $a_2 = a_1 + 1$, else check $a_2 = a_1 - 1$, and call the new result $k_2$. If $k_2$ is still not correct, add or subtract the integer part (making sure it's at least $1$) of $\left|\frac{k - k_2}{k_1 - k_2}\right|$ to $a_2$ to get a new $a_1$ value to check. Then repeat these steps as many times as needed. In almost all cases, I believe it should take very few loops to find the correct value. However, note that you should also include checks in case there is no such integer $a$, which is usually seen when one integer value gives a lower result & the next higher gives a higher result (or vice versa).
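    As a rough illustration of this approach (the function name and the neighbourhood check are my own, not part of the answer), a Python sketch that starts from a floating-point estimate of the $n$-th root and then verifies with exact integer arithmetic might look like:

```python
def is_nth_power_float(k: int, n: int) -> bool:
    """Check k = a**n for a positive integer a, starting from a
    floating-point estimate of the n-th root and then confirming
    with exact integer arithmetic."""
    if k < 1 or n < 1:
        return False
    a = round(k ** (1.0 / n))        # float estimate; may be slightly off
    # Check a small neighbourhood of the estimate exactly; for very
    # large k the float estimate can drift further, so a wider search
    # (or a pure-integer method) would be safer there.
    for cand in (a - 1, a, a + 1):
        if cand >= 1 and cand ** n == k:
            return True
    return False
```

    For example, `is_nth_power_float(121, 2)` is `True` and `is_nth_power_float(40, 2)` is `False`.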






























    • You skip important steps of your algorithm. How do you calculate $a = e^{\frac{\ln(k)}{n}}$? What is the time and space complexity of this calculation? How big is the difference between the exact value of $e^{\frac{\ln(k)}{n}}$ and the calculated value? Without calculating all these bounds it is not possible to decide whether the algorithm is efficient.
      – miracle173
      3 hours ago










    • I'm not very familiar with how these are implemented, but wouldn't $\log_2$ be the most efficient $\log$?
      – Solomon Ucko
      1 hour ago












    • @SolomonUcko It depends on the internal implementation, but note that although everything in a computer is basically base $2$, this doesn't help with many non-linear operations like $\log$. However, due to certain properties of the natural logarithm and exponential, there are relatively fast & accurate implementations to determine their values, e.g., even just using a Taylor series expansion, that you don't have with other bases such as $2$. In fact, I suspect many implementations would first determine the result in base $e$ & then convert to base $2$ before returning an answer.
      – John Omielan
      1 hour ago


































    My suggestion on a computer is to run a root finder.



    Given a value $y$, one way is to hard-code the first couple of cases and then use an integer-valued binary search starting at $y/2$; the number of steps is logarithmic in $y$ and thus linear in the input size (since the input takes $\ln y$ bits).



    You can also write down the Newton's method recurrence and see whether it converges to an integer; this should become clear after the first couple of steps, once the error becomes small enough.
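    A minimal pure-integer version of the binary search (my sketch, not the answerer's code) avoids floating point entirely:

```python
def integer_nth_root(y: int, n: int) -> int:
    """Floor of the n-th root of y >= 0, by binary search on integers."""
    if y < 2:
        return y
    lo, hi = 1, y
    while lo < hi:
        mid = (lo + hi + 1) // 2     # bias upward so the loop terminates
        if mid ** n <= y:
            lo = mid                 # mid is still a valid floor candidate
        else:
            hi = mid - 1
    return lo

def is_nth_power(y: int, n: int) -> bool:
    """y is a perfect n-th power iff the floor root powers back to y."""
    return integer_nth_root(y, n) ** n == y
```

    For example, `integer_nth_root(121, 2)` returns `11`, so `is_nth_power(121, 2)` is `True`.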




























    • I don't think it's linear, given that you need to square the proposed number at every split.
      – Alex R.
      5 hours ago


































    There are many powerful codes that factorize a number into its prime factors, though in superpolynomial time (for more information you can refer to Integer Factorization on Wikipedia). Once an integer is factorized as $$n=p_1^{\alpha_1}\times p_2^{\alpha_2}\times\cdots\times p_m^{\alpha_m},$$ then by defining $d=\gcd(\alpha_1,\alpha_2,\ldots,\alpha_m)$ we can say that $n$ is a perfect $d$-th power.
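    Assuming a factorization is available, the gcd-of-exponents step can be sketched in Python; the trial-division factorizer below is my own placeholder and is only practical for modest $n$, not the "powerful codes" the answer refers to:

```python
from math import gcd

def prime_exponents(n: int) -> list:
    """Exponents in the prime factorization of n, by trial division.
    Placeholder only: real factoring code is far more sophisticated."""
    exps, p = [], 2
    while p * p <= n:
        if n % p == 0:
            e = 0
            while n % p == 0:
                n //= p
                e += 1
            exps.append(e)
        p += 1
    if n > 1:
        exps.append(1)   # one leftover prime factor
    return exps

def largest_power(n: int) -> int:
    """Largest d such that n is a perfect d-th power (1 if none)."""
    d = 0
    for e in prime_exponents(n):
        d = gcd(d, e)
    return d if d else 1   # n = 1 has no prime factors
```

    For example, `largest_power(64)` returns `6` since $64 = 2^6$, while `largest_power(12)` returns `1`.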






























    • Not sure about efficient; I believe it's an NP-complete problem. But surely checking whether it is a perfect square would be doable more efficiently than doing the full prime factorization?!
      – gt6989b
      5 hours ago










    • For algorithms, "efficiently and fast" usually means being both deterministic and polynomial time in the length of the input; there is no known polynomial-time deterministic algorithm for factoring integers, so I would absolutely quibble with your use of "efficiently and fast".
      – Arturo Magidin
      5 hours ago










    • This is orders of magnitude slower than just computing the square root even by classical methods.
      – Alex R.
      5 hours ago










    • Factorization of integers is not polynomial time. The belief that factorization is a hard computational problem is the basis of public-key cryptosystems like the RSA system.
      – miracle173
      4 hours ago










    • @Mostafa Ayaz: There is an extremely wide gap between "there are codes that factor a number so efficiently and fast" and "there are codes that factor a number in the most efficient manner that we know how to do it so far." The former is an objective assertion (given the usual understanding of 'efficient' and 'fast' in this context). The latter is a mealy-mouthed assertion that "we can do this as fast (or as slow) as we can do this".
      – Arturo Magidin
      4 hours ago
































    It is at least possible to do this in polynomial time. Assume $n$ is a $k$-bit number and you want to find positive integers $a$ and $b$ such that $$a^b=n\tag{1}$$ or prove that such numbers don't exist.



    We have $$n<2^k$$ because $n$ is a $k$-bit number, and so $$b\lt k.$$



    We can simply check, for each possible $b$, whether there is an $a$ such that $(1)$ holds. For a given $b$ we can find $a$ by bisection. The bisection checks $O(\log n)=O(k)$ different values of $a$. Each check is a calculation of $a^b$, which can be done by repeatedly multiplying powers of $a$ by $a$. These powers of $a$ are smaller than $n$, so we multiply $k$-bit numbers at most $b\;(\lt k)$ times. A multiplication of two $k$-bit numbers needs $O(k^2)$ time. So each bisection needs $O(k^2)$ multiplications of $k$-bit numbers, i.e. $O(k^4)$ time, and repeating this for the $O(k)$ candidate exponents gives $O(k^5)$ time in total, which is still polynomial.
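    The loop-over-exponents-plus-bisection algorithm described above can be sketched directly (a rough Python rendering under the stated assumptions, not optimized):

```python
def is_perfect_power(n: int) -> bool:
    """Is n = a**b for some integers a >= 2, b >= 2?  Tries every
    candidate exponent b up to the bit length of n, bisecting on
    the base a for each one."""
    if n < 4:
        return False
    k = n.bit_length()                 # n < 2**k
    for b in range(2, k + 1):
        lo, hi = 2, 1 << (k // b + 1)  # a**b = n implies a is about 2**(k/b)
        while lo <= hi:
            mid = (lo + hi) // 2
            p = mid ** b
            if p == n:
                return True
            if p < n:
                lo = mid + 1
            else:
                hi = mid - 1
    return False
```

    For example, `is_perfect_power(121)` and `is_perfect_power(8)` are `True`, while `is_perfect_power(6)` is `False`.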





























      5 Answers
      5






      active

      oldest

      votes








      5 Answers
      5






      active

      oldest

      votes









      active

      oldest

      votes






      active

      oldest

      votes









      10












      $begingroup$

      See Detecting perfect powers in essentially linear time - Daniel J. Bernstein:



      https://cr.yp.to/papers/powers-ams.pdf






      share|cite|improve this answer









      $endgroup$


















        10












        $begingroup$

        See Detecting perfect powers in essentially linear time - Daniel J. Bernstein:



        https://cr.yp.to/papers/powers-ams.pdf






        share|cite|improve this answer









        $endgroup$
















          10












          10








          10





          $begingroup$

          See Detecting perfect powers in essentially linear time - Daniel J. Bernstein:



          https://cr.yp.to/papers/powers-ams.pdf






          share|cite|improve this answer









          $endgroup$



          See Detecting perfect powers in essentially linear time - Daniel J. Bernstein:



          https://cr.yp.to/papers/powers-ams.pdf







          share|cite|improve this answer












          share|cite|improve this answer



          share|cite|improve this answer










          answered 5 hours ago









          Alex J BestAlex J Best

          2,36611227




          2,36611227























              2












              $begingroup$

              In the specific case where you already know not only the number being checked but also the power, as the question's comment by the OP to Servaes states, then you have something like



              $$k = a^n tag{1}label{eq1}$$



              where $k$ and $n$ are known integers, but with $a$ being an unknown value to check whether or not it's an integer. In this case, you can perhaps use a function to get the $n$'th root, such as using the "pow" function in C/C++ with a second argument of $frac{1.0}{n}$, to get something like



              $$a = sqrt[n]{k} tag{2}label{eq2}$$



              Alternatively, taking natural logarithms of both sides (you could use any base, but I suspect that implementation wise $e$ will likely at least be the fastest one, if not also the most accurate) gives



              $$ln(k) = nln(a) ; Rightarrow ; ln(a) = frac{ln(k)}{n} ; Rightarrow ; a = e^{frac{ln(k)}{n}} tag{3}label{eq3}$$



              As this involves $2$ basic steps of taking a logarithm and then exponentiating, this may take longer & involve more of a cumulative error than using eqref{eq2} instead.



              As for speed & accuracy issues, I once did a test on all integers from $1$ to $5 times 10^{11}$ to get their sixth roots (using pow), then cubing them by multiplying together $3$ times, plus another test getting all square roots directly (using sqrt), and then finding the maximum of the absolute & relative differences. Note I used VS 2008 on an AMD FX(tm)-8320 Eight-Core Processor, 3.5 GZ, 8 GB RAM, 8 MB L2 & L3 cache, and 64-bit Windows 7 computer. Square roots took 16403 seconds, while sixth roots then cubing took 37915 seconds. Max. actual difference was $8.149ldots times 10^{-10}$ and relative difference was $1.332ldots times 10^{-15}$. This gives an indication of how relatively fast & accurate the library routines are, although results will obviously vary depending on the compiler & machine involved.



              Using either method, on a computer, will give a floating point value that would be, even for large values of $k$, relatively close to the correct value of $a$.



              You can now use any number of algorithms to relatively quickly & easily determine $a$ if it's an integer, or show it's not an integer. For example, you can start with the integer part obtained in eqref{eq2}, call it $a_1$, to determine $k_1$. If $k_1$ is not correct, then if it's less than $k$, check $a_2 = a_1 + 1$, else check $a_2 = a_1 - 1$, and call the new result $k_2$. If $k_2$ is still not correct, add or subtract the integer amount (making sure it's at least 1) of $left|frac{k -k_2}{k_1 - k_2}right|$ to $a_2$ to get a new $a_1$ value to check. Then repeat these steps as many times as needed. In almost all cases, I believe it should take very loops to find the correct value. However, note you should also include checks in case there is no such integer $a$, with this usually being seen when one integer value gives a lower result & the next higher gives a higher result (or higher result & next lower integer gives a lower result).






              share|cite|improve this answer











              $endgroup$









              • 1




                $begingroup$
                you skip important steps of your algorithm. How do you calculate $a = e^{frac{ln(k)}{n}}$. What is the time and space complexity of this calculation? How big is the difference of the exact value of $e^{frac{ln(k)}{n}}$ and the calculated value of $e^{frac{ln(k)}{n}}$? Without calculating all this bounds it is not possible to decide if the algorithm is efficient.
                $endgroup$
                – miracle173
                3 hours ago










              • $begingroup$
                I'm not very familiar with how these are implemented, but wouldn't $log_2$ be the most efficient $log$?
                $endgroup$
                – Solomon Ucko
                1 hour ago








              • 1




                $begingroup$
                @SolomonUcko It depends on the internal implementation, but note that although everything in a computer is basically base $2$, this doesn't help with many non-linear operations like $log$. However, due to certain natural logarithm and exponential properties, there are relatively fast & accurate implementations to determine their values, e.g., even just using a Taylor series expansion, that you don't have with other bases, such as $2$. In fact, I suspect many implementations would first determine the result in base $e$ & then convert to base $2$ before returning an answer.
                $endgroup$
                – John Omielan
                1 hour ago


















              2












              $begingroup$

              In the specific case where you already know not only the number being checked but also the power, as the question's comment by the OP to Servaes states, then you have something like



              $$k = a^n tag{1}label{eq1}$$



              where $k$ and $n$ are known integers, but with $a$ being an unknown value to check whether or not it's an integer. In this case, you can perhaps use a function to get the $n$'th root, such as using the "pow" function in C/C++ with a second argument of $frac{1.0}{n}$, to get something like



              $$a = sqrt[n]{k} tag{2}label{eq2}$$



              Alternatively, taking natural logarithms of both sides (you could use any base, but I suspect that implementation wise $e$ will likely at least be the fastest one, if not also the most accurate) gives



              $$ln(k) = nln(a) ; Rightarrow ; ln(a) = frac{ln(k)}{n} ; Rightarrow ; a = e^{frac{ln(k)}{n}} tag{3}label{eq3}$$



              As this involves $2$ basic steps of taking a logarithm and then exponentiating, this may take longer & involve more of a cumulative error than using eqref{eq2} instead.



              As for speed & accuracy issues, I once did a test on all integers from $1$ to $5 times 10^{11}$ to get their sixth roots (using pow), then cubing them by multiplying together $3$ times, plus another test getting all square roots directly (using sqrt), and then finding the maximum of the absolute & relative differences. Note I used VS 2008 on an AMD FX(tm)-8320 Eight-Core Processor, 3.5 GZ, 8 GB RAM, 8 MB L2 & L3 cache, and 64-bit Windows 7 computer. Square roots took 16403 seconds, while sixth roots then cubing took 37915 seconds. Max. actual difference was $8.149ldots times 10^{-10}$ and relative difference was $1.332ldots times 10^{-15}$. This gives an indication of how relatively fast & accurate the library routines are, although results will obviously vary depending on the compiler & machine involved.



              Using either method, on a computer, will give a floating point value that would be, even for large values of $k$, relatively close to the correct value of $a$.



              You can now use any number of algorithms to relatively quickly & easily determine $a$ if it's an integer, or show it's not an integer. For example, you can start with the integer part obtained in eqref{eq2}, call it $a_1$, to determine $k_1$. If $k_1$ is not correct, then if it's less than $k$, check $a_2 = a_1 + 1$, else check $a_2 = a_1 - 1$, and call the new result $k_2$. If $k_2$ is still not correct, add or subtract the integer amount (making sure it's at least 1) of $left|frac{k -k_2}{k_1 - k_2}right|$ to $a_2$ to get a new $a_1$ value to check. Then repeat these steps as many times as needed. In almost all cases, I believe it should take very loops to find the correct value. However, note you should also include checks in case there is no such integer $a$, with this usually being seen when one integer value gives a lower result & the next higher gives a higher result (or higher result & next lower integer gives a lower result).






              share|cite|improve this answer











              $endgroup$









              • 1




                $begingroup$
                you skip important steps of your algorithm. How do you calculate $a = e^{frac{ln(k)}{n}}$. What is the time and space complexity of this calculation? How big is the difference of the exact value of $e^{frac{ln(k)}{n}}$ and the calculated value of $e^{frac{ln(k)}{n}}$? Without calculating all this bounds it is not possible to decide if the algorithm is efficient.
                $endgroup$
                – miracle173
                3 hours ago










              • $begingroup$
                I'm not very familiar with how these are implemented, but wouldn't $log_2$ be the most efficient $log$?
                $endgroup$
                – Solomon Ucko
                1 hour ago








              • 1




                $begingroup$
                @SolomonUcko It depends on the internal implementation, but note that although everything in a computer is basically base $2$, this doesn't help with many non-linear operations like $log$. However, due to certain natural logarithm and exponential properties, there are relatively fast & accurate implementations to determine their values, e.g., even just using a Taylor series expansion, that you don't have with other bases, such as $2$. In fact, I suspect many implementations would first determine the result in base $e$ & then convert to base $2$ before returning an answer.
                $endgroup$
                – John Omielan
                1 hour ago
















              2












              2








              2





              $begingroup$

              In the specific case where you already know not only the number being checked but also the power, as the question's comment by the OP to Servaes states, then you have something like



              $$k = a^n tag{1}label{eq1}$$



              where $k$ and $n$ are known integers, but with $a$ being an unknown value to check whether or not it's an integer. In this case, you can perhaps use a function to get the $n$'th root, such as using the "pow" function in C/C++ with a second argument of $frac{1.0}{n}$, to get something like



              $$a = sqrt[n]{k} tag{2}label{eq2}$$



              Alternatively, taking natural logarithms of both sides (you could use any base, but I suspect that implementation wise $e$ will likely at least be the fastest one, if not also the most accurate) gives



              $$ln(k) = nln(a) ; Rightarrow ; ln(a) = frac{ln(k)}{n} ; Rightarrow ; a = e^{frac{ln(k)}{n}} tag{3}label{eq3}$$



              As this involves $2$ basic steps of taking a logarithm and then exponentiating, this may take longer & involve more of a cumulative error than using eqref{eq2} instead.



              As for speed & accuracy issues, I once did a test on all integers from $1$ to $5 times 10^{11}$ to get their sixth roots (using pow), then cubing them by multiplying together $3$ times, plus another test getting all square roots directly (using sqrt), and then finding the maximum of the absolute & relative differences. Note I used VS 2008 on an AMD FX(tm)-8320 Eight-Core Processor, 3.5 GZ, 8 GB RAM, 8 MB L2 & L3 cache, and 64-bit Windows 7 computer. Square roots took 16403 seconds, while sixth roots then cubing took 37915 seconds. Max. actual difference was $8.149ldots times 10^{-10}$ and relative difference was $1.332ldots times 10^{-15}$. This gives an indication of how relatively fast & accurate the library routines are, although results will obviously vary depending on the compiler & machine involved.



Either method, on a computer, will give a floating-point value that is, even for large values of $k$, relatively close to the correct value of $a$.



You can now use any number of algorithms to fairly quickly & easily determine $a$ if it's an integer, or show that it's not. For example, you can start with the integer part of the value obtained in $\eqref{eq2}$, call it $a_1$, and compute $k_1 = a_1^n$. If $k_1$ is not equal to $k$, then if $k_1 < k$, check $a_2 = a_1 + 1$; else check $a_2 = a_1 - 1$; call the new result $k_2$. If $k_2$ is still not correct, add or subtract the integer part (making sure it's at least $1$) of $\left|\frac{k - k_2}{k_1 - k_2}\right|$ to $a_2$ to get a new $a_1$ value to check, and repeat these steps as many times as needed. In almost all cases, I believe it should take very few loops to find the correct value. However, you should also include checks for the case where no such integer $a$ exists, which usually shows up when one integer value gives a result below $k$ while the next higher integer gives a result above it (or vice versa).
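A minimal sketch of this search in Python, assuming $k$ and $n$ are known positive integers; the floating-point root only seeds the search, and exact integer powers decide the answer (the step-size refinement described above is omitted — a plain walk by $1$ suffices for illustration):

```python
def is_perfect_power(k, n):
    """Return the integer a with a**n == k, or None if no such a exists.
    Sketch only: a float estimate seeds an exact integer search."""
    if k < 1 or n < 1:
        return None
    a = round(k ** (1.0 / n))  # float estimate; may be slightly off for huge k
    # Walk toward the correct value using exact integer powers.
    while a ** n < k:
        a += 1
    while a > 1 and a ** n > k:
        a -= 1
    return a if a ** n == k else None

print(is_perfect_power(121, 2))       # 11
print(is_perfect_power(122, 2))       # None
print(is_perfect_power(10 ** 18, 6))  # 1000
```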






              share|cite|improve this answer











              $endgroup$











              edited 2 hours ago

























              answered 4 hours ago









John Omielan









              • 1




                $begingroup$
You skip important steps of your algorithm. How do you calculate $a = e^{\frac{\ln(k)}{n}}$? What are the time and space complexity of this calculation? How big is the difference between the exact value of $e^{\frac{\ln(k)}{n}}$ and the calculated value of $e^{\frac{\ln(k)}{n}}$? Without computing all these bounds it is not possible to decide whether the algorithm is efficient.
                $endgroup$
                – miracle173
                3 hours ago










              • $begingroup$
I'm not very familiar with how these are implemented, but wouldn't $\log_2$ be the most efficient $\log$?
                $endgroup$
                – Solomon Ucko
                1 hour ago








              • 1




                $begingroup$
@SolomonUcko It depends on the internal implementation, but note that although everything in a computer is basically base $2$, this doesn't help with many non-linear operations like $\log$. However, due to certain natural logarithm and exponential properties, there are relatively fast & accurate implementations to determine their values, e.g., even just using a Taylor series expansion, that you don't have with other bases, such as $2$. In fact, I suspect many implementations would first determine the result in base $e$ & then convert to base $2$ before returning an answer.
                $endgroup$
                – John Omielan
                1 hour ago





























              0












              $begingroup$

My suggestion on a computer is to run a root finder.

Given a value $y$, one way is to hard-code the first couple of cases and then run an integer-valued binary search starting from $y/2$, which takes $O(\log y)$ steps and is thus linear in the size of the input (since the input takes $\approx \log y$ bits).

You can also write down the Newton's method recurrence and see whether it converges to an integer or not; this should become clear after the first couple of steps, once the error becomes small enough.
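A sketch of the integer-valued binary search in Python, specialized to square roots; it uses exact integer arithmetic only, so there are no floating-point concerns:

```python
def is_perfect_square(y):
    """Decide whether y is a perfect square via integer binary search.
    Each step squares the midpoint and compares exactly against y."""
    if y < 0:
        return False
    lo, hi = 0, y
    while lo <= hi:
        mid = (lo + hi) // 2
        sq = mid * mid
        if sq == y:
            return True
        if sq < y:
            lo = mid + 1
        else:
            hi = mid - 1
    return False

print(is_perfect_square(121))                # True
print(is_perfect_square(122))                # False
print(is_perfect_square(12345678987654321))  # True (111111111 squared)
```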






              share|cite|improve this answer









              $endgroup$













              • $begingroup$
                I don't think it's linear, given that you need to square the proposed number at every split.
                $endgroup$
                – Alex R.
                5 hours ago




























              answered 5 hours ago









gt6989b


























              0












              $begingroup$

There are many powerful codes that factorize a number into its prime factors, albeit in super-polynomial time (for more information you can refer to Integer Factorization on Wikipedia). Once an integer has been factorized as $$n=p_1^{\alpha_1}\times p_2^{\alpha_2}\times\cdots\times p_m^{\alpha_m}$$ then, defining $d=\gcd(\alpha_1,\alpha_2,\cdots,\alpha_m)$, we can say that $n$ is a perfect $d$-th power.
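A Python sketch of the gcd-of-exponents idea, using naive trial division as a stand-in for a serious factoring algorithm (the function names are my own):

```python
from math import gcd
from functools import reduce

def prime_exponents(n):
    """Exponents in the prime factorization of n (n >= 2).
    Trial division: fine for small n, not for cryptographic sizes."""
    exps = []
    p = 2
    while p * p <= n:
        if n % p == 0:
            e = 0
            while n % p == 0:
                n //= p
                e += 1
            exps.append(e)
        p += 1
    if n > 1:
        exps.append(1)  # leftover prime factor with exponent 1
    return exps

def max_power(n):
    """Largest d such that n is a perfect d-th power (1 if n is no perfect power)."""
    return reduce(gcd, prime_exponents(n))

print(max_power(64))  # 6, since 64 = 2**6
print(max_power(36))  # 2, since 36 = 2**2 * 3**2
print(max_power(12))  # 1
```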






              share|cite|improve this answer











              $endgroup$













              • $begingroup$
Not sure about "efficient" — I believe it's an NP-complete problem. But surely checking whether it is a perfect square would be doable more efficiently than doing the full prime factorization?!
                $endgroup$
                – gt6989b
                5 hours ago






              • 3




                $begingroup$
                For algorithms, "efficiently and fast" usually means being both deterministic and polynomial time in the length of the input; there is no known polynomial time deterministic algorithm for factoring integers, so I would absolutely quibble with your use of "efficiently and fast".
                $endgroup$
                – Arturo Magidin
                5 hours ago










              • $begingroup$
                This is orders-of-magnitude slower than just computing the square-root even by classical methods.
                $endgroup$
                – Alex R.
                5 hours ago






              • 1




                $begingroup$
Factorization of integers is not known to be polynomial time. The belief that factorization is a hard computational problem is the basis of public-key cryptosystems like the RSA system.
                $endgroup$
                – miracle173
                4 hours ago






              • 1




                $begingroup$
@Mostafa Ayaz: There is an extremely wide gap between "there are codes that factor a number so efficiently and fast" and "there are codes that factor a number in the most efficient manner that we know how to do it so far." The former is an objective assertion (given the usual understanding of 'efficient' and 'fast' in this context). The latter is a mealy-mouthed assertion that "we can do this as fast (or as slow) as we can do this".
                $endgroup$
                – Arturo Magidin
                4 hours ago
























              edited 4 hours ago

























              answered 5 hours ago









Mostafa Ayaz
























              0












              $begingroup$

It is at least possible to do this in polynomial time. Assume $n$ is a $k$-bit number and you want to find integers $a \ge 2$ and $b \ge 2$ such that $$a^b=n\tag{1}$$ or prove that no such numbers exist.

We have $$n<2^k$$ because $n$ is a $k$-bit number, and since $a \ge 2$ gives $2^b \le a^b = n$, also $$b\lt k$$

We can simply check, for every possible $b$, whether there is an $a$ such that $(1)$ holds. For a given $b$ we can search for $a$ by bisection, which checks $O(\log n)=O(k)$ different values of $a$. Each check is a calculation of $a^b$, done by repeatedly multiplying a running power of $a$ by $a$; these powers are smaller than $n$, so we multiply $k$-bit numbers at most $b\;(\lt k)$ times. A multiplication of two $k$-bit numbers takes $O(k^2)$ time. So each bisection needs $O(k^2)$ multiplications of $k$-bit numbers, and looping over all $O(k)$ candidate exponents gives $O(k^3)$ multiplications in total, i.e. $O(k^5)$ time — polynomial in the length of the input.
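A Python sketch of this algorithm: loop over every candidate exponent $b$ and bisect for the base, using exact integer arithmetic throughout:

```python
def perfect_power(n):
    """Return (a, b) with a**b == n and b >= 2, or None if n is no perfect power.
    Tries every exponent b below the bit length of n, bisecting for the base."""
    k = n.bit_length()  # n < 2**k
    for b in range(2, k + 1):
        # The base satisfies a <= n**(1/b) < 2**(k/b), so this upper bound is safe.
        lo, hi = 1, 1 << (k // b + 1)
        while lo <= hi:
            mid = (lo + hi) // 2
            p = mid ** b  # exact integer power, no rounding
            if p == n:
                return (mid, b)
            if p < n:
                lo = mid + 1
            else:
                hi = mid - 1
    return None

print(perfect_power(121))     # (11, 2)
print(perfect_power(3 ** 7))  # (3, 7)
print(perfect_power(97))      # None
```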






              share|cite|improve this answer









              $endgroup$




























                  answered 4 hours ago









miracle173
                      Thanks for contributing an answer to Mathematics Stack Exchange!