Is there an easy way to see that binary expansion is unique? [duplicate]












This question already has an answer here:

  • Binary expansion Unique (4 answers)




Let $n \in \mathbb{N}$. Using the Euclidean algorithm, it is straightforward to see that every natural number can be written as



$$n = \sum_{j=0}^m \epsilon_j(n) 2^j$$



where $\epsilon_j(n) \in \{0,1\}$.



Is there an easy way to show that this way of writing the number is unique?
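For concreteness, here is a minimal sketch (in Python, my choice of language and names) of the construction being referred to: repeated division with remainder by $2$ yields coefficients $\epsilon_j(n)\in\{0,1\}$ with $n=\sum_j \epsilon_j(n)\,2^j$.

    def binary_coefficients(n):
        # Repeated division with remainder by 2: the remainders are eps_0, eps_1, ...
        assert n >= 0
        eps = []
        while n:
            n, r = divmod(n, 2)
            eps.append(r)
        return eps or [0]

    n = 2018
    eps = binary_coefficients(n)
    assert n == sum(e * 2**j for j, e in enumerate(eps))   # reconstructs n exactly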










elementary-number-theory binary

asked Nov 28 '18 at 16:06 by Math_QED (edited Nov 28 '18 at 16:30)



marked as duplicate by Asaf Karagila Nov 29 '18 at 8:44


This question has been asked before and already has an answer. If those answers do not fully address your question, please ask a new question.














  • Have you tried induction? – saulspatz, Nov 28 '18 at 16:12

  • @stanleydodds Why are you answering in a comment? – Arthur, Nov 28 '18 at 16:17

  • @stanleydodds Outlines of answers are still answers. Even one-line hints belong in answer posts, in my opinion. – Arthur, Nov 28 '18 at 16:19

  • Index $j$ should start at $0$. If it starts at $1$ then the RHS is even. – drhab, Nov 28 '18 at 16:30

  • As presented, such expansions are not unique. For uniqueness, you need to add the constraint that $\epsilon_m(n) = 1$. Otherwise, an arbitrary number of additional leading $0$ terms can be included to produce distinct expansions of the same $n$. – John Bollinger, Nov 28 '18 at 17:54


















6 Answers


















Suppose $\exists\, n\in\Bbb N$ such that $n=\sum_{i\in A}2^i=\sum_{i\in B}2^i$ with $A,B\subset\Bbb N_0$. Then $\sum_{i\in A}2^i-\sum_{i\in B}2^i=0$, and so for the set $C=A\Delta B$ (symmetric difference) and some function $s:C\rightarrow\{-1,1\}$ we have $\sum_{i\in C}s(i)2^i=0$.

Now if $C\ne\emptyset$ then $C$ has a largest element (say $x$) and we have $\sum_{i\in C\backslash\{x\}}s(i)2^i=-s(x)2^x$, so $\sum_{i\in C\backslash\{x\}}-{s(i)\over s(x)}2^i=2^x$. But we now know $C\backslash\{x\}\subset\{0,1,2,\dots,x-1\}$ and also $-{s(i)\over s(x)}\le1$, so we have $\sum_{i\in C\backslash\{x\}}-{s(i)\over s(x)}2^i\le\sum_{i=0}^{x-1}2^i=2^{x}-1$, which is a contradiction. Hence $C=\emptyset$, so $A=B$, so the representations are in fact the same; hence the representation of $n$ is unique (assuming it exists, which I gather has been shown already).

– stanley dodds, answered Nov 28 '18 at 16:40
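As a computational sanity check of this argument (a Python sketch under my own naming, not a proof): for a small bound $N$, no two distinct sets of exponents below $N$ produce the same sum of powers of $2$.

    from itertools import combinations

    # Distinct finite exponent sets A, B within {0, ..., N-1} never give equal sums.
    N = 12
    seen = {}
    for r in range(N + 1):
        for A in combinations(range(N), r):
            total = sum(2**i for i in A)
            assert total not in seen, f"collision between {A} and {seen[total]}"
            seen[total] = A
    assert len(seen) == 2**N   # every one of the 2**N subsets got its own value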





















How do you know the ordinary base $10$ expansion is unique?

Suppose digit strings $s$ and $t$ both represent the positive integer $n$. Then the units digit of each must be $d = n \pmod{10}$. So you can lop off both units digits. The lopped strings then both represent $(n-d)/10$.

Continue with the other digits (from the right) until you're done. Or, for a formal induction proof, apply that argument to the least $n$ with two representations to deduce a contradiction.

This argument works for any base. It's the standard algorithm for base conversion, finding the digits from right to left. It depends on knowing that you can do division with remainder, but not on the full strength of the Euclidean algorithm.

– Ethan Bolker, answered Nov 28 '18 at 16:49 (edited Nov 28 '18 at 21:47)
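A minimal sketch of the right-to-left extraction just described, for an arbitrary base $b$ (Python; the function name is mine): each digit is forced by division with remainder, which is why a second digit string for the same $n$ cannot exist.

    def digits_from_right(n, b=10):
        # Lop off the units digit d = n mod b, then continue with (n - d) // b.
        assert n >= 0 and b >= 2
        out = []
        while n:
            n, d = divmod(n, b)
            out.append(d)
        return out or [0]

    # e.g. digits_from_right(2018) == [8, 1, 0, 2]; digits_from_right(13, 2) == [1, 0, 1, 1]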





















A very simple proof is by the pigeonhole principle. The key observation is that not only does any natural number $n$ have a binary expansion $$n = \sum_{j=0}^m \epsilon_j(n) 2^j,$$ but if $0\leq n<2^N$ then we need no powers of $2$ above $2^{N-1}$, so we can take $m=N-1$. Now, for any fixed $N$, there are $2^N$ natural numbers $n$ such that $0\leq n<2^N$ and $2^N$ different ways of choosing $\epsilon_j(n)\in\{0,1\}$ for each $j$ from $0$ to $N-1$. So, all $2^N$ of these binary expansions must have distinct sums, or else they would not be able to represent all $2^N$ of the different values of $n$.

This proves that for any $N$, a natural number $n$ has at most one binary expansion using powers of $2$ up to $2^{N-1}$. It follows that $n$ has only one binary expansion, up to adding $0$s at the start (since given two expansions of different lengths, you can always extend one by $0$s to make them the same length, and then they must become the same).

– Eric Wofsey, answered Nov 28 '18 at 23:10
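The counting behind this pigeonhole argument can be checked directly for a small $N$ (a Python sketch; the value of $N$ is arbitrary): the $2^N$ coefficient tuples give $2^N$ distinct sums, namely every integer from $0$ to $2^N-1$.

    from itertools import product

    N = 10
    sums = [sum(eps * 2**j for j, eps in enumerate(bits))
            for bits in product((0, 1), repeat=N)]
    assert len(sums) == 2**N                   # 2**N choices of coefficients ...
    assert sorted(sums) == list(range(2**N))   # ... hit 0, ..., 2**N - 1 once each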





















Not really an answer. But here's just a different way of framing your question which I think is neat.

Let $f: P_{\text{fin}}(\mathbb{N}) \to \mathbb{N}$ by $f(S) = \sum_{s\in S} 2^s$.

$f$ is onto. You have claimed this can be handled by the Euclidean algorithm.

What about $1$-$1$? We use the argument Stanley Dodds presented.

So we've seen that the set of all finite subsets of the natural numbers is in 1-1 correspondence with the set of natural numbers.

– Mason, answered Nov 28 '18 at 16:36 (edited Nov 28 '18 at 16:42)
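A small illustration of this correspondence (Python; the helper names are mine): $f$ sends a finite set of exponents to a natural number, and its inverse reads off which bits of $n$ are set.

    from itertools import combinations

    def f(S):
        # The map from the answer: a finite set of naturals -> a natural number.
        return sum(2**s for s in S)

    def f_inverse(n):
        # Recover the exponent set from n by reading off its set bits.
        return {i for i in range(n.bit_length()) if (n >> i) & 1}

    # Round trip over every subset of {0, ..., 7}: f is a bijection onto {0, ..., 255}.
    subsets = [set(c) for r in range(9) for c in combinations(range(8), r)]
    assert sorted(f(S) for S in subsets) == list(range(256))
    assert all(f_inverse(f(S)) == S for S in subsets)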





















Hint $\ $ Uniqueness of radix rep can be deduced intuitively from the simple fact that an integer root of an integer coef polynomial divides the least degree coef (i.e. Rational Root Test). For example

$\qquad 11001_2 = g(2),\ \ g(x) = x^4+x^3+1$

$\qquad 10011_2 = h(2),\ \ h(x) = x^4+x+1$

If they're equal $\, 0 = g(2)-h(2) =: f(2)\,$ for $\,f = g-h = x(x^2-1)\,$ so $\,2\,$ is a root of $\,x^2-1\,$ so $\,2^2 = 1\,\Rightarrow\, 2\mid 1,\,$ contradiction. This idea works generally - the nonzero coef's of $g-h$ are $\pm1$ contra the root $2$ must divide the least degree such coef. Below is the proof for general radix.

If $\,g(x) = \sum g_i x^i$ is a polynomial with integer coefficients $\,g_i\,$ such that $\,0\le g_i < b\,$ and $\,g(b) = n,\,$ then we call $\,(g,b)\,$ a radix $\,b\,$ representation of $\,n.\,$ It is unique: if $\,n\,$ has another rep $\,(h,b),\,$ with $\,g(x) \ne h(x),\,$ then $\,f(x)= g(x)-h(x)\ne 0\,$ has root $\,b\,$ but all coefficients $\,\color{#c00}{|f_i| < b},\,$ contra the below slight generalization of: integer roots of integer polynomials divide their constant term.

Theorem $\ $ If $\,f(x) = x^k(\color{#0a0}{f_0}\!+f_1 x +\cdots + f_n x^n)=x^k\bar f(x)\,$ is a polynomial with integer coefficients $\,f_i\,$ and with $\,\color{#0a0}{f_0\ne 0},\,$ then an integer root $\,b\ne 0\,$ satisfies $\,b\mid f_0,\,$ so $\,\color{#c00}{|b| \le |f_0|}$.

Proof $\ \ 0 = f(b) = b^k \bar f(b)\,\overset{\large b\,\ne\, 0}\Rightarrow\, 0 = \bar f(b),\,$ so, subtracting $\,f_0$ from both sides yields $$-f_0 =\, b\,(f_1\!+f_2 b+\,\cdots+f_n b^{n-1})\, \Rightarrow\, b\mid f_0\, \overset{\large \color{#0a0}{f_0\,\ne\, 0}}\Rightarrow\, |b| \le |f_0| \qquad {\bf QED}$$

Remark $\ $ Thus uniqueness of radix rep is essentially a special case of the Rational Root Test.

– Bill Dubuque, answered Nov 28 '18 at 17:04 (edited Nov 28 '18 at 17:58)
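A quick check of the worked example above, assuming sympy is available (the variable names are mine): the two bit strings evaluate to different integers at $x=2$, and their difference factors as $x(x^2-1)$, whose nonzero coefficients are $\pm1$.

    from sympy import symbols, factor

    x = symbols('x')
    g = x**4 + x**3 + 1   # 11001 read as a polynomial in the radix
    h = x**4 + x + 1      # 10011 read as a polynomial in the radix
    print(g.subs(x, 2), h.subs(x, 2))   # 25 19: the strings name different integers
    print(factor(g - h))                # x*(x - 1)*(x + 1), i.e. x*(x**2 - 1)
    # If g(2) equalled h(2), then 2 would be a root of g - h, and the theorem above
    # would force 2 to divide a coefficient equal to +-1, which is impossible.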





















This argument is very basic but I think the notation is easy to follow. Suppose a number has two different expansions $\{a_i\}$ and $\{b_i\}$. Then
$$ \sum_{i=0}^m a_i(n) 2^i = \sum_{i=0}^m b_i(n) 2^i. $$ Break off the first term in both sums and you get $$ a_0(n) - b_0(n) = 2\sum_{i=1}^m (b_i(n)-a_i(n)) 2^{i-1}.$$
The l.h.s. can only be ($0-0$), ($0-1$), ($1-0$), or ($1-1$), and it must be divisible by $2$, so it must equal $0$, i.e. $a_0(n) = b_0(n)$. Then you can divide by the $2$ in front of the sum, and repeat the argument (or use induction) to get $a_i(n) = b_i(n)$ for all $i$.

– JonathanZ, answered Nov 29 '18 at 0:26
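The repeated step of this argument, written as a Python sketch (function name mine): parity forces the lowest coefficients of two expansions of the same number to agree, after which one divides by $2$ and continues.

    def bits_must_agree(a, b):
        # a, b are coefficient lists for the same number; every position is forced.
        while a or b:
            n_a = sum(bit * 2**i for i, bit in enumerate(a))
            n_b = sum(bit * 2**i for i, bit in enumerate(b))
            assert n_a == n_b, "only meaningful when both lists represent the same n"
            a0 = a[0] if a else 0
            b0 = b[0] if b else 0
            assert a0 == b0 == n_a % 2   # the lowest coefficient is forced by parity
            a, b = a[1:], b[1:]          # divide by 2 and repeat

    bits_must_agree([1, 0, 1, 1], [1, 0, 1, 1, 0, 0])   # 13 both ways, trailing zeros OK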



















