
Clarification regarding Parameter Estimation (Andriy Burkov's book)




I recently started reading Andriy Burkov's "The Hundred-Page Machine
Learning Book" and got confused in Chapter Two (page 11), where he
discusses parameter-estimation techniques. A screenshot of the relevant
section is attached at the end, with the problematic parts highlighted.



$(1)$ To me it seems that applying Bayes' Theorem in this particular context would yield: $$\Pr(\theta = \hat\theta \mid X=x)=\frac{\Pr(X=x \mid \theta=\hat\theta)\,\Pr(\theta = \hat\theta)}{\Pr(X=x)}=\frac{\Pr(X=x \mid \theta=\hat\theta)\,\Pr(\theta = \hat\theta)}{\sum_{\tilde\theta}\Pr(X=x \mid \theta=\tilde\theta)\,\Pr(\theta=\tilde\theta)},$$
but the book doesn't have the same denominator in the last fraction and I wonder why that is the case.
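To make my point (1) concrete, here is a small numerical sketch (my own toy example, not from the book) of the denominator I expected: the law of total probability summed over all candidate values $\tilde\theta$, here for a coin whose bias takes one of three hypothetical values.

```python
# Toy check of Bayes' rule with a discrete parameter (hypothetical numbers):
# theta is a coin bias taking one of three candidate values.
thetas = [0.2, 0.5, 0.8]
prior = {0.2: 0.25, 0.5: 0.5, 0.8: 0.25}  # Pr(theta = t)

def likelihood(x_heads, n, t):
    """Pr(X = x | theta = t) for x heads in n Bernoulli trials
    (binomial coefficient omitted; it cancels in Bayes' rule)."""
    return t**x_heads * (1 - t)**(n - x_heads)

x, n = 7, 10
# Denominator: sum over ALL candidate values (law of total probability).
evidence = sum(likelihood(x, n, t) * prior[t] for t in thetas)
posterior = {t: likelihood(x, n, t) * prior[t] / evidence for t in thetas}

# With the full sum in the denominator, the posterior normalizes to 1.
assert abs(sum(posterior.values()) - 1.0) < 1e-9
```

Without the sum over $\tilde\theta$ (or the equivalent marginal $\Pr(X=x)$), the right-hand side would not integrate to one, which is why the missing term confused me.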



$(2)$ As per my understanding of maximum likelihood estimation, the objective is to find the value of $\theta$ that maximizes the likelihood function $$L(\theta)=\prod_{i=1}^{n} f(x_i\mid\theta),$$
but the book seems to have used a different expression for the Likelihood function, the origins of which remain unknown to me.
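As a sanity check of the definition I quoted (my own sketch, not the book's), maximizing $\prod_{i=1}^{n} f(x_i\mid\theta)$ — in practice its logarithm — for a Bernoulli model with made-up data recovers the sample mean:

```python
import math

# Hypothetical data: n coin flips encoded as 0/1 (not from the book).
data = [1, 1, 0, 1, 0, 1, 1, 1, 0, 1]

def log_likelihood(theta, xs):
    """log L(theta) = sum_i log f(x_i | theta) for a Bernoulli model."""
    return sum(math.log(theta if x == 1 else 1 - theta) for x in xs)

# Grid search for the maximizer; the closed form here is the sample mean.
grid = [i / 1000 for i in range(1, 1000)]
theta_hat = max(grid, key=lambda t: log_likelihood(t, data))
print(theta_hat)  # 0.7, i.e. sum(data)/len(data)
```

This matches the product-of-densities definition above, which is why the book's different expression puzzled me.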



If someone could shed some light here, that'd be really helpful.




[screenshot of the book section referenced above]










Tags: statistical-inference, machine-learning, bayes-theorem
















      asked Apr 7 at 10:51









s0ulr3aper07





















1 Answer



















The author here. You are right: the denominator in the first screenshot is indeed missing a term. This was fixed several weeks ago, and the book on Amazon and Leanpub now contains the corrected content.

As for your second screenshot: again, in the updated version of the book, "maximum likelihood" was replaced by "maximum a posteriori".

If in doubt, please refer to the online version of the book, available at http://themlbook.com/wiki/doku.php, as it is updated more regularly than the printed edition.

Sorry for the inconvenience.






                edited Apr 7 at 20:41





















                answered Apr 7 at 20:30









aburkov
