Clarification regarding Parameter Estimation (Andriy Burkov's book)
So I recently decided to read Andriy Burkov's "The 100-Page Machine Learning Book" and got confused in Chapter Two (page 11), where he discusses parameter estimation techniques. A picture of the relevant section is attached at the end, with the problematic parts highlighted.
$(1)$ To me it seems that applying Bayes' Theorem in this particular context would yield: $$\Pr(\theta = \hat\theta \mid X = x) = \frac{\Pr(X = x \mid \theta = \hat\theta)\,\Pr(\theta = \hat\theta)}{\Pr(X = x)} = \frac{\Pr(X = x \mid \theta = \hat\theta)\,\Pr(\theta = \hat\theta)}{\sum_{\tilde\theta} \Pr(X = x \mid \theta = \tilde\theta)\,\Pr(\theta = \tilde\theta)}$$
but the book doesn't have the same denominator in the last fraction and I wonder why that is the case.
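To make the normalization concrete, here is a quick numerical sketch of how I read that denominator (my own toy example with a made-up parameter grid, prior, and Bernoulli likelihood, not anything from the book):

```python
import numpy as np

# Toy setup (purely illustrative): a small discrete grid of candidate
# parameter values, a made-up prior over them, and Bernoulli data.
thetas = np.array([0.2, 0.5, 0.8])   # candidate values of theta
prior = np.array([0.3, 0.4, 0.3])    # Pr(theta = theta~) for each candidate
x = np.array([1, 0, 1, 1])           # observed sample

# Pr(X = x | theta) for every candidate theta (i.i.d. Bernoulli likelihood)
likelihood = np.array([(t**x * (1 - t)**(1 - x)).prod() for t in thetas])

# Denominator: Pr(X = x) = sum over theta~ of Pr(X = x | theta~) Pr(theta~)
evidence = np.sum(likelihood * prior)

posterior = likelihood * prior / evidence
print(posterior, posterior.sum())    # the posterior sums to 1
```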
$(2)$ As per my understanding of maximum likelihood estimation, the objective is to find the value of $\theta$ that maximizes the likelihood function: $$L(\theta) = \prod_{i=1}^{n} f(x_i \mid \theta)$$
but the book seems to have used a different expression for the Likelihood function, the origins of which remain unknown to me.
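For concreteness, this is how I would compute such an estimate numerically (again my own toy example, with a Bernoulli model assumed just for illustration):

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy data (illustrative only); f(x_i | theta) is taken to be Bernoulli(theta).
x = np.array([1, 0, 1, 1, 0, 1])

# Maximizing L(theta) = prod_i f(x_i | theta) is the same as minimizing
# the negative log-likelihood, which is numerically better behaved.
def neg_log_likelihood(theta):
    return -np.sum(x * np.log(theta) + (1 - x) * np.log(1 - theta))

res = minimize_scalar(neg_log_likelihood, bounds=(1e-6, 1 - 1e-6), method="bounded")
print(res.x, x.mean())  # numeric optimum agrees with the closed form, mean(x)
```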
If someone could shed some light here, that'd be really helpful.
statistical-inference machine-learning bayes-theorem
asked Apr 7 at 10:51 by s0ulr3aper07
1 Answer
The author here. You are right: the denominator in the first screenshot is indeed missing a term. This was fixed several weeks ago, and the book on Amazon and Leanpub now contains the corrected content.
As for your second screenshot: in the updated version of the book, "maximum likelihood" was replaced by "maximum a posteriori".
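For readers of the earlier printing, the generic contrast between the two criteria (stated here in standard notation, not necessarily the book's) is $$\hat\theta_{\text{ML}} = \arg\max_{\theta} \prod_{i=1}^{n} f(x_i \mid \theta), \qquad \hat\theta_{\text{MAP}} = \arg\max_{\theta} \Pr(\theta) \prod_{i=1}^{n} f(x_i \mid \theta),$$ i.e., MAP is maximum likelihood with the prior $\Pr(\theta)$ folded into the objective.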
If in doubt, please refer to the online version of the book at http://themlbook.com/wiki/doku.php, as it is updated more regularly than the printed edition.
Sorry for the inconvenience.
answered Apr 7 at 20:30 by aburkov (edited Apr 7 at 20:41)