Formulate residual for Levenberg-Marquardt
I want to minimize a cost function of the form given below with the Levenberg-Marquardt method, using the scipy.optimize.least_squares function. But I do not see how to formulate it in terms of residuals so that I can use that method; as it stands, I get the error message "Method 'lm' doesn't work when the number of residuals is less than the number of variables."
My cost function is defined as follows:
import numpy as np

def canonical_cost(qv, t, A, B, C, delta, epsilon, lam):
    # qv: quaternion as a length-4 array, t: translation as a length-3 array
    assert isinstance(qv, np.ndarray) and len(qv) == 4
    # assert isinstance(t, np.ndarray) and len(t) == 3
    q = Quaternion(*qv)  # user-defined class exposing the Q and W matrices
    qv, tv = qv.reshape(-1, 1), np.vstack(([0], t.reshape(-1, 1)))
    f1 = qv.T @ (A + B) @ qv
    f2 = tv.T @ C @ tv + delta @ tv + epsilon @ (q.Q.T @ q.W) @ tv
    qnorm = (1 - qv.T @ qv)**2
    return np.squeeze(f1 + f2 + lam * qnorm)
And I try to optimize with,
def cost(x):
    qv, t = x[:4], x[4:]
    return canonical_cost(qv, t, A, B, C, delta, epsilon, lam)

result = opt.least_squares(cost, initial_conditions, method='lm', **kwargs)
Thank you
python optimization scipy least-squares levenberg-marquardt
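For context, a minimal sketch (not part of the original post) of what the error message refers to: with method='lm', least_squares requires fun to return a residual vector with at least as many entries as there are variables. Here x has 7 entries (4 quaternion components plus 3 translation components) while cost returns a single scalar, so the check fails. The residuals below are purely illustrative toy values, unrelated to the cost above.

import numpy as np
from scipy import optimize as opt

x0 = np.zeros(7)  # 4 quaternion + 3 translation parameters

# One residual for seven variables: method='lm' raises
# "Method 'lm' doesn't work when the number of residuals is less than
# the number of variables."
try:
    opt.least_squares(lambda x: np.atleast_1d(x @ x), x0, method='lm')
except ValueError as e:
    print(e)

# With a residual vector of length >= 7 the same call works.
# Toy residuals r(x) = x - 1, purely for illustration.
result = opt.least_squares(lambda x: x - 1.0, x0, method='lm')
print(result.x)  # converges to an array of ones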
You'd have to subtract the calculated value from the actual values (f(x) - y), then square that subtraction, and return that as your cost function. I don't see you do that anywhere. In fact, I don't see your y_i values anywhere.
– 9769953, Nov 27 '18 at 9:18
Perhaps you're better off with curve_fit: you can feed that the measured/actual values and the cost function directly.
– 9769953, Nov 27 '18 at 9:19
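For reference, and only to illustrate the calling convention the comment above refers to, a typical curve_fit usage looks like the sketch below; the model, xdata and ydata are hypothetical placeholders, not part of the original problem.

import numpy as np
from scipy.optimize import curve_fit

# Hypothetical model and data, only to show the calling convention:
# curve_fit minimizes sum((model(xdata, *params) - ydata)**2),
# using Levenberg-Marquardt by default for unbounded problems.
def model(x, a, b):
    return a * np.exp(b * x)

xdata = np.linspace(0.0, 1.0, 50)
ydata = model(xdata, 2.0, -1.3) + 0.01 * np.random.default_rng(0).normal(size=xdata.size)

popt, pcov = curve_fit(model, xdata, ydata, p0=(1.0, -1.0))
print(popt)  # estimated (a, b)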
Yes, that is exactly my problem: I do not know what the actual values are. I do not see whether it is possible to formulate the above problem as a regression problem so that I can use the LM method.
– juampa, Nov 27 '18 at 9:26
You can't perform regression if you don't have any actual, measured values. Are you simply trying to minimize/maximize a function?
– 9769953, Nov 27 '18 at 9:39
Exactly, that is what I am trying to do. In the literature this problem is also solved with the LM method, but I do not see how they were able to. I would like to compare it to other methods like TRF or BFGS.
– juampa, Nov 27 '18 at 10:07
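A minimal sketch of the comparison discussed above, assuming canonical_cost and the matrices A, B, C, delta, epsilon, lam are defined as in the question: since the objective is a plain scalar, scipy.optimize.minimize can run BFGS (and other general-purpose methods) on it directly, without any residual formulation, whereas least_squares with method='lm' cannot.

import numpy as np
from scipy import optimize as opt

def cost(x):
    # Same scalar objective as in the question; canonical_cost and the
    # matrices A, B, C, delta, epsilon, lam are assumed to be defined above.
    qv, t = x[:4], x[4:]
    return canonical_cost(qv, t, A, B, C, delta, epsilon, lam)

x0 = np.array([1.0, 0, 0, 0, 0, 0, 0])  # identity quaternion, zero translation

res_bfgs = opt.minimize(cost, x0, method='BFGS')
res_nm = opt.minimize(cost, x0, method='Nelder-Mead')
print(res_bfgs.x, res_bfgs.fun)
print(res_nm.x, res_nm.fun)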