Fit data to function $g(t) = \frac{100}{1+\alpha e^{-\beta t}}$ using the least squares method (projection/orthogonal families of polynomials)
t    | 0  | 1  | 2  | 3  | 4  | 5  | 6
F(t) | 10 | 15 | 23 | 33 | 45 | 58 | 69

Adjust $F$ by a function of the type $$g(t) = \frac{100}{1+\alpha e^{-\beta t}}$$ by the discrete least squares method.

I'm studying orthogonal families of polynomials and projection onto subspaces in the context of the least squares method. I think I need to see this problem as a projection onto some subspace and use some inner product, but I'm lost.

UPDATE: Shouldn't the function $g(t)$ be a member of a vector space? I tried applying $\ln$ to see if I'd get something from a vector space, but that also doesn't work.

linear-algebra numerical-methods numerical-linear-algebra
You can solve the problem in a simpler manner.
– Claude Leibovici
Apr 7 at 5:01

@ClaudeLeibovici does it involve the least squares method? I need to use it.
– Guerlando OCs
Apr 7 at 19:37

Yes! The problem can be easily solved using standard least squares methods, without anything else. I shall try to make an answer in that spirit.
– Claude Leibovici
Apr 8 at 3:35
asked Apr 7 at 0:22, edited Apr 8 at 1:26
Guerlando OCs
2 Answers
Forgetting (projection/orthogonal families of polynomials), the problem is quite easy to solve using standard nonlinear regression.

As usual, we need good or at least consistent estimates of the parameters $(\alpha, \beta)$, and these can be obtained by a linearization of the model:
$$g = \frac{100}{1+\alpha e^{-\beta t}} \implies \color{red}{y}=\log \left(\frac{100}{g}-1\right)=\log(\alpha)-\beta\,t=\color{red}{a+b t}$$
Consider the data to be
$$\left(
\begin{array}{ccc}
t & g & y=\log \left(\frac{100}{g}-1\right) \\
0 & 10 & +2.197225 \\
1 & 15 & +1.734601 \\
2 & 23 & +1.208311 \\
3 & 33 & +0.708185 \\
4 & 45 & +0.200671 \\
5 & 58 & -0.322773 \\
6 & 69 & -0.800119
\end{array}
\right)$$
A preliminary linear regression leads to
$$\begin{array}{cccc}
 & \text{Estimate} & \text{Standard Error} & \text{Confidence Interval} \\
a & +2.21599 & 0.01226 & [+2.18195,+2.25003] \\
b & -0.50409 & 0.00340 & [-0.51353,-0.49465] \\
\end{array}$$
corresponding to $R^2=0.999878$, which is already very good. This gives as estimates $\alpha=e^a=9.17046$ and $\beta=-b=0.50409$.

Now, we can start the nonlinear regression and obtain
$$\begin{array}{cccc}
 & \text{Estimate} & \text{Standard Error} & \text{Confidence Interval} \\
\alpha & 9.22336 & 0.13438 & [8.85027,9.59645] \\
\beta & 0.50576 & 0.00350 & [0.49603,0.51549] \\
\end{array}$$
corresponding to $R^2=0.999972$, which is very good. Please notice how good the initial estimates are.

Below are reproduced the data as well as the predicted values:
$$\left(
\begin{array}{ccc}
t & g & g_{\text{pred}} \\
0 & 10 & 9.782 \\
1 & 15 & 15.24 \\
2 & 23 & 22.97 \\
3 & 33 & 33.08 \\
4 & 45 & 45.05 \\
5 & 58 & 57.62 \\
6 & 69 & 69.27
\end{array}
\right)$$
If we had known in advance that the model was good (based on physics, for example) and that the data had small errors (because of accurate measurements), we could have skipped the first step and used the first and last data points to estimate $(\alpha, \beta)$:
$$10=\frac{100}{1+\alpha} \implies \alpha=9$$
$$69=\frac{100}{1+9 e^{-6\beta}}\implies \beta=\frac{1}{6} \log \left(\frac{621}{31}\right)=0.499557$$

answered Apr 8 at 4:38 by Claude Leibovici
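This two-step procedure can be sketched in Python (a sketch assuming NumPy and SciPy are available; `curve_fit` performs the nonlinear least-squares refinement):

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.arange(7.0)
F = np.array([10, 15, 23, 33, 45, 58, 69], dtype=float)

def g(t, alpha, beta):
    # The logistic model from the question.
    return 100.0 / (1.0 + alpha * np.exp(-beta * t))

# Step 1: linearize.  y = log(100/F - 1) = log(alpha) - beta*t is linear
# in t, so an ordinary linear fit gives the initial estimates.
y = np.log(100.0 / F - 1.0)
b, a = np.polyfit(t, y, 1)       # slope b = -beta, intercept a = log(alpha)
alpha0, beta0 = np.exp(a), -b    # ~ 9.17046, 0.50409

# Step 2: nonlinear least squares on g itself, started from those estimates.
(alpha, beta), _ = curve_fit(g, t, F, p0=[alpha0, beta0])
# alpha ~ 9.22336, beta ~ 0.50576, as in the tables above.
```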
+1 for the detailed answer and discussion with me.
– farruhota
Apr 8 at 10:54

@farruhota This was my pleasure! Computing the $SSQ$ from your table gives $0.568$, while from mine it gives $0.333$.
– Claude Leibovici
Apr 8 at 11:14

If I didn't round, mine would be $0.356$. Agreed, still more than yours. When you say "Now, we can start the nonlinear regression", are you minimizing $SSQ=\sum_{i=0}^6 \left(g_i-\frac{100}{1+\alpha e^{-\beta t_i}}\right)^2$? (For example, using $\alpha=\beta=0$ as starting values, the Excel solver gives $\alpha =9.223294081$, $\beta =0.505758705$.) How do you iterate it from the first step (linear regression)?
– farruhota
2 days ago

@farruhota This problem is very simple. In general, use optimization, or Newton-Raphson to solve for the partial derivatives equal to zero.
– Claude Leibovici
2 days ago
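The direct minimization of $SSQ$ discussed in these comments can be sketched as follows; Nelder-Mead is one derivative-free choice of optimizer (an assumption for illustration, not necessarily what either commenter used):

```python
import numpy as np
from scipy.optimize import minimize

t = np.arange(7.0)
F = np.array([10, 15, 23, 33, 45, 58, 69], dtype=float)

def ssq(p):
    # Sum of squared residuals of the logistic model.
    alpha, beta = p
    return np.sum((F - 100.0 / (1.0 + alpha * np.exp(-beta * t))) ** 2)

# Nelder-Mead needs no derivatives; start from the linearized estimates.
res = minimize(ssq, x0=[9.17, 0.504], method="Nelder-Mead")
# res.x ~ (9.223, 0.506); res.fun is the minimized SSQ (~ 0.33 per the
# comment above).
```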
Make the transformations:
$$g(t) = \frac{100}{1+\alpha e^{-\beta t}} \iff \alpha e^{-\beta t}=\frac{100}{g(t)}-1 \iff \underbrace{\ln \left(\frac{100}{g(t)}-1\right)}_{y(x)}=\underbrace{-\beta t}_{ax}+\underbrace{\ln \alpha}_{b}$$
Hence:
$$\begin{array}{lrrrr}
&x&y(x)&xy&x^2\\
\hline
&0&2.20&0.00&0\\
&1&1.73&1.73&1\\
&2&1.21&2.42&4\\
&3&0.71&2.13&9\\
&4&0.20&0.80&16\\
&5&-0.32&-1.60&25\\
&6&-0.80&-4.80&36\\
\hline
\text{Total}&21&4.93&0.68&91\\
\end{array}\\
\begin{align}a&=\frac{\sum xy-\frac{\sum x \sum y}{n}}{\sum x^2-\frac{(\sum x)^2}{n}}=\frac{0.68-\frac{21\cdot 4.93}{7}}{91-\frac{21^2}{7}}=-0.5\\
b&=\bar y-a\bar x=\frac{4.93}{7}-(-0.5)\cdot\frac{21}{7}=2.2\\
\ln \alpha&=b=2.2 \Rightarrow \alpha =9.03\\
\beta &=-a=0.5\end{align}$$
So, the final answer is:
$$g^*(t) = \frac{100}{1+9.03 e^{-0.5t}}\\
\begin{array}{c|cc}
t&g(t)&g^*(t)\\
\hline
0&10&9.97\\
1&15&15.44\\
2&23&23.14\\
3&33&33.17\\
4&45&45.00\\
5&58&57.43\\
6&69&68.99
\end{array}$$

answered Apr 8 at 4:11 by farruhota
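The sums above can be checked with a short script (a sketch; note that keeping full precision in $y$ gives $\alpha\approx 9.17$, matching the other answer's estimate, while $\alpha=9.03$ results from rounding $a$ to $-0.5$ and $b$ to $2.2$ along the way):

```python
import math

x = list(range(7))
F = [10, 15, 23, 33, 45, 58, 69]
y = [math.log(100.0 / f - 1.0) for f in F]   # y = ln(100/F - 1)

n = len(x)
sx, sy = sum(x), sum(y)
sxy = sum(xi * yi for xi, yi in zip(x, y))
sxx = sum(xi * xi for xi in x)

a = (sxy - sx * sy / n) / (sxx - sx ** 2 / n)   # slope,     a = -beta
b = sy / n - a * sx / n                          # intercept, b = ln(alpha)
alpha, beta = math.exp(b), -a
# Full precision gives alpha ~ 9.17 and beta ~ 0.504; the 9.03 above comes
# from rounding intermediate values before exponentiating.
```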
You must take care that this is a first step, since what is measured is $g$ and not any of its possible transforms.
– Claude Leibovici
Apr 8 at 4:40

@ClaudeLeibovici, thank you for commenting. Am I not measuring $g$? I transformed and relabeled, which is the linearization. We get the same results except for rounding discrepancies.
– farruhota
Apr 8 at 5:11

This is exactly what I wrote. You transformed $g$! Linearization (as we both did) is very good for getting estimates of the parameters. Then, you must use $g$ by itself. This case was not bad because the errors are very marginal.
– Claude Leibovici
Apr 8 at 5:15

Yes, $\alpha, \beta$ are estimates of the population parameters calculated from sample data of $7$ observations. Sorry, I'm not getting my mistake, if any.
– farruhota
Apr 8 at 5:51

And calculating $g^*(t)$ (your $g_{\text{pred}}$) gives a point estimate, not an interval estimate.
– farruhota
Apr 8 at 5:56
$begingroup$
Make the transformations:
$$g(t) = frac1001+alpha
e^-beta t iff alpha e^-beta t=frac100g(t)-1 iff underbraceln left(frac100g(t)-1right)_y(x)=underbrace-beta t_ax+underbraceln alpha_b$$
Hence:
$$beginarrayr
&x&y(x)&xy&x^2\
hline
&0&2.20&0.00&0\
&1&1.73&1.73&1\
&2&1.21&2.42&4\
&3&0.71&2.13&9\
&4&0.20&0.80&16\
&5&-0.32&-1.60&25\
&6&-0.80&-4.80&36\
hline
textTotal&21&4.93&0.68&91\
endarray\
beginaligna&=fracsum xy-fracsum x sum ynsum x^2-frac(sum x)^2n=frac0.68-frac21cdot 4.93791-frac21^27=-0.5\
b&=bary-abarx=frac4.937-(-0.5)frac217=2.2\
ln alpha&=b=2.2 Rightarrow alpha =9.03\
beta &=-a=0.5endalign$$
So, the final answer:
$$g^*(t) = frac1001+9.03
e^-0.5t\
beginarrayc
t&g(t)&g^*(t)\
hline
0&10&9.97\
1&15&15.44\
2&23&23.14\
3&33&33.17\
4&45&45.00\
5&58&57.43\
6&69&68.99
endarray$$
$endgroup$
$begingroup$
You must take care that this is a first step since what is measured is $g$ and not any of its possible transforms.
$endgroup$
– Claude Leibovici
Apr 8 at 4:40
$begingroup$
@ClaudeLeibovici, thank you for commenting. Am I not measuring $g$? I transformed and relabeled, which is the linearization. We get the same results except rounding discrepancies.
$endgroup$
– farruhota
Apr 8 at 5:11
$begingroup$
This is exactly what I wrote. You transformed $g$ ! Linearization (as we both did) is very good to get estimates of the parameters. Then, you must use $g$ by itself. This case was not bad because of very marginal errors.
$endgroup$
– Claude Leibovici
Apr 8 at 5:15
$begingroup$
Yes, $alpha, beta$ are estimates of the population parameters calculated from sample data of $7$ observations. Sorry, I’m not getting my mistake if any.
$endgroup$
– farruhota
Apr 8 at 5:51
$begingroup$
And by calculating $g^*(t)$ (your $g_pred$) it is calculated a point estimate, not interval estimate.
$endgroup$
– farruhota
Apr 8 at 5:56
|
show 3 more comments
$begingroup$
Make the transformations:
$$g(t) = frac1001+alpha
e^-beta t iff alpha e^-beta t=frac100g(t)-1 iff underbraceln left(frac100g(t)-1right)_y(x)=underbrace-beta t_ax+underbraceln alpha_b$$
Hence:
$$beginarrayr
&x&y(x)&xy&x^2\
hline
&0&2.20&0.00&0\
&1&1.73&1.73&1\
&2&1.21&2.42&4\
&3&0.71&2.13&9\
&4&0.20&0.80&16\
&5&-0.32&-1.60&25\
&6&-0.80&-4.80&36\
hline
textTotal&21&4.93&0.68&91\
endarray\
beginaligna&=fracsum xy-fracsum x sum ynsum x^2-frac(sum x)^2n=frac0.68-frac21cdot 4.93791-frac21^27=-0.5\
b&=bary-abarx=frac4.937-(-0.5)frac217=2.2\
ln alpha&=b=2.2 Rightarrow alpha =9.03\
beta &=-a=0.5endalign$$
So, the final answer:
$$g^*(t) = frac1001+9.03
e^-0.5t\
beginarrayc
t&g(t)&g^*(t)\
hline
0&10&9.97\
1&15&15.44\
2&23&23.14\
3&33&33.17\
4&45&45.00\
5&58&57.43\
6&69&68.99
endarray$$
$endgroup$
You must take care that this is only a first step, since what is measured is $g$ and not any of its possible transforms.
– Claude Leibovici
Apr 8 at 4:40
@ClaudeLeibovici, thank you for commenting. Am I not measuring $g$? I transformed and relabeled, which is the linearization. We get the same results up to rounding discrepancies.
– farruhota
Apr 8 at 5:11
This is exactly what I wrote: you transformed $g$! Linearization (as we both did) is very good for getting estimates of the parameters. Then you must work with $g$ itself. This case was not bad because the errors were very marginal.
– Claude Leibovici
Apr 8 at 5:15
Yes, $\alpha, \beta$ are estimates of the population parameters, calculated from a sample of $7$ observations. Sorry, I'm not seeing my mistake, if any.
– farruhota
Apr 8 at 5:51
And calculating $g^*(t)$ (your $g_{\text{pred}}$) yields a point estimate, not an interval estimate.
– farruhota
Apr 8 at 5:56
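Following the comment's point that the fit should ultimately be judged on $g$ itself rather than on its transform, the linearized estimates can be refined by minimizing the sum of squared errors on $g$ directly. A crude grid search around the linearized estimates (an illustrative sketch, not from the original thread) suffices here:

```python
import math

t = [0, 1, 2, 3, 4, 5, 6]
g = [10, 15, 23, 33, 45, 58, 69]

def sse(alpha, beta):
    """Sum of squared errors of the model against the measured g."""
    return sum((gi - 100 / (1 + alpha * math.exp(-beta * ti))) ** 2
               for ti, gi in zip(t, g))

# refine around the linearized starting point (alpha ~ 9, beta ~ 0.5)
best = min(((sse(a_, b_), a_, b_)
            for a_ in [8.5 + 0.01 * i for i in range(101)]
            for b_ in [0.45 + 0.001 * j for j in range(101)]),
           key=lambda x: x[0])
print(best)  # (minimal SSE, alpha, beta)
```

In practice one would use a nonlinear least-squares routine (e.g. Gauss-Newton) instead of a grid, but the point stands: linearization gives the starting values, and the final parameters should minimize the residuals of $g$ itself.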
answered Apr 8 at 4:11
farruhota
$begingroup$
You can solve the problem in a simpler manner.
$endgroup$
– Claude Leibovici
Apr 7 at 5:01
$begingroup$
@ClaudeLeibovici does it involve the least squares method? I need to use it.
$endgroup$
– Guerlando OCs
Apr 7 at 19:37
$begingroup$
Yes! The problem can easily be solved using standard least squares methods, without anything else. I shall try to write an answer in that spirit.
$endgroup$
– Claude Leibovici
Apr 8 at 3:35