Are confidence intervals useful?

In frequentist statistics, a 95% confidence interval is produced by a procedure that, if repeated indefinitely, would generate intervals containing the true parameter 95% of the time. Why is this useful?
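To make that repeated-sampling definition concrete, here is a minimal simulation sketch (my own illustration, assuming a normal model and the usual t-interval; the particular numbers are arbitrary):

```python
# Simulate many datasets with a fixed true mean, build a 95% t-interval from
# each, and count how often the interval contains the true mean.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_mu, sigma, n, reps = 5.0, 2.0, 30, 10_000

covered = 0
for _ in range(reps):
    x = rng.normal(true_mu, sigma, size=n)
    m, se = x.mean(), x.std(ddof=1) / np.sqrt(n)
    lo, hi = stats.t.interval(0.95, n - 1, loc=m, scale=se)
    covered += (lo <= true_mu <= hi)

print(f"empirical coverage: {covered / reps:.3f}")  # close to 0.95
```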



Confidence intervals are often misunderstood. A confidence interval is not an interval that we can be 95% certain contains the parameter (that would be the superficially similar Bayesian credible interval). Confidence intervals feel like a bait-and-switch to me.



The one use case I can think of is that a confidence interval gives the range of parameter values for which we could not reject the null hypothesis that the parameter equals that value. But wouldn't p-values provide this information just as well, without being so misleading?
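For the record, here is a rough sketch of that duality (my own illustration with simulated data, not part of any textbook result): the 95% t-interval for a mean coincides, up to grid resolution, with the set of null values whose two-sided one-sample t-test p-value is at least 0.05.

```python
# Compare the 95% t-interval with the set of null values mu0 that a
# two-sided one-sample t-test does not reject at alpha = 0.05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.normal(3.0, 1.5, size=25)
n, m, se = len(x), x.mean(), x.std(ddof=1) / np.sqrt(len(x))

lo, hi = stats.t.interval(0.95, n - 1, loc=m, scale=se)

# Scan a grid of candidate null values and keep those not rejected.
grid = np.linspace(m - 4 * se, m + 4 * se, 2001)
pvals = np.array([stats.ttest_1samp(x, mu0).pvalue for mu0 in grid])
not_rejected = grid[pvals >= 0.05]

print(f"95% CI:              ({lo:.3f}, {hi:.3f})")
print(f"non-rejected values: ({not_rejected.min():.3f}, {not_rejected.max():.3f})")
```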



In short: Why do we need confidence intervals? How are they, when correctly interpreted, useful?










Tags: hypothesis-testing, bayesian, mathematical-statistics, confidence-interval, frequentist






asked 3 hours ago by purpleostrich






















1 Answer


















          4












          $begingroup$

So long as the confidence interval is treated as random (i.e., viewed from the perspective in which the data are a set of random variables that we have not yet seen), we can indeed make useful probability statements about it. Specifically, suppose you have a confidence interval at level $1-\alpha$ for the parameter $\theta$, with bounds $L(\mathbf{x}) \leqslant U(\mathbf{x})$. Then we can say that:



$$\mathbb{P}(L(\mathbf{X}) \leqslant \theta \leqslant U(\mathbf{X}) \mid \theta) = 1-\alpha
\quad \quad \quad \text{for all } \theta \in \Theta.$$
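As a quick illustration of the "for all $\theta \in \Theta$" part (a sketch of my own, assuming a normal model with known variance), the stated coverage holds at every value of the true parameter, not just at one particular value:

```python
# Check empirical coverage of the standard 95% z-interval at several
# different true parameter values.
import numpy as np

rng = np.random.default_rng(2)
sigma, n, reps = 1.0, 20, 20_000
z = 1.959964  # 97.5th percentile of the standard normal

for theta in [-10.0, 0.0, 0.3, 7.5]:
    x = rng.normal(theta, sigma, size=(reps, n))
    m = x.mean(axis=1)
    half = z * sigma / np.sqrt(n)
    coverage = np.mean((m - half <= theta) & (theta <= m + half))
    print(f"theta = {theta:6.1f}  coverage = {coverage:.3f}")
```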



Moving outside the frequentist paradigm and marginalising over $\theta$ for any prior distribution gives the corresponding marginal probability result:



$$\mathbb{P}(L(\mathbf{X}) \leqslant \theta \leqslant U(\mathbf{X})) = 1-\alpha.$$
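As an illustration of this marginal statement (again a sketch of my own; the prior below is arbitrary and chosen only to make the point), drawing $\theta$ from a prior on every replication and then building the usual interval from the data still gives roughly 95% coverage marginally:

```python
# Draw theta from an arbitrary prior each replication, build a 95% z-interval
# from the simulated data, and check the marginal coverage over replications.
import numpy as np

rng = np.random.default_rng(3)
sigma, n, reps = 1.0, 20, 50_000
z = 1.959964  # 97.5th percentile of the standard normal

# Any prior works; a skewed, heavy-tailed choice is used here on purpose.
theta = rng.standard_cauchy(reps) + rng.exponential(2.0, reps)

x = rng.normal(theta[:, None], sigma, size=(reps, n))
m = x.mean(axis=1)
half = z * sigma / np.sqrt(n)
print(f"marginal coverage: {np.mean(np.abs(m - theta) <= half):.3f}")  # ~0.95
```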



Once we fix the bounds of the confidence interval by fixing the data to $\mathbf{X} = \mathbf{x}$, we no longer appeal to this probability statement, because the data are no longer random. However, so long as the confidence interval is treated as a random interval, we can indeed make this probability statement: with probability $1-\alpha$, the parameter $\theta$ will fall within the (random) interval.



          Within frequentist statistics, probability statements are statements about relative frequencies over infinitely repeated trials. But that is true of every probability statement in the frequentist paradigm, so if your objection is to relative frequency statements, that is not an objection that is specific to confidence intervals. If we move outside the frequentist paradigm then we can legitimately say that a confidence interval contains its target parameter with the desired probability, so long as we make this probability statement marginally (i.e., not conditional on the data) and we thus treat the confidence interval in its random sense.



          I don't know about others, but that seems to me to be a pretty powerful probability result, and a reasonable justification for this form of interval. I am more partial to Bayesian methods myself, but the probability results backing confidence intervals (in their random sense) are powerful results that are not to be sniffed at.






answered 2 hours ago by Ben





























