Are there any other methods to apply to solving simultaneous equations?
We are asked to solve for $x$ and $y$ in the following pair of simultaneous equations:
$$\begin{align}3x+2y&=36 \tag1\\ 5x+4y&=64\tag2\end{align}$$
I can multiply $(1)$ by $2$, yielding $6x + 4y = 72$, and subtracting $(2)$ from this new equation eliminates $4y$, leaving an equation in $x$ alone; i.e. $6x - 5x = 72 - 64 \Rightarrow x = 8$. Substituting $x=8$ into $(2)$ reveals that $y=6$.
I could also subtract $(1)$ from $(2)$ and divide by $2$, yielding $x+y=14$. Rewriting $(1)$ and $(2)$ as $$\begin{align}3x+3y - y &= 36 \tag{1a}\\ 5x + 5y - y &= 64\tag{1b}\end{align}$$ and using $3(x+y)=42$ and $5(x+y)=70$, it follows that $42 - y = 36$ and $70 - y = 64$, thus revealing $y=6$ and so $x = 14 - 6 = 8$.
I can even use matrices!
$(1)$ and $(2)$ could be written in matrix form:
$$\begin{align}\begin{bmatrix} 3 &2 \\ 5 &4\end{bmatrix}\begin{bmatrix} x \\ y\end{bmatrix}&=\begin{bmatrix}36 \\ 64\end{bmatrix}\tag3 \\ \begin{bmatrix} x \\ y\end{bmatrix} &= {\begin{bmatrix} 3 &2 \\ 5 &4\end{bmatrix}}^{-1}\begin{bmatrix}36 \\ 64\end{bmatrix} \\ &= \frac{1}{2}\begin{bmatrix}4 &-2 \\ -5 &3\end{bmatrix}\begin{bmatrix}36 \\ 64\end{bmatrix} \\ &=\frac12\begin{bmatrix} 16 \\ 12\end{bmatrix} \\ &= \begin{bmatrix} 8 \\ 6\end{bmatrix} \\ \\ \therefore x&=8 \\ \therefore y&= 6\end{align}$$
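As a quick numerical check, the matrix step above can be reproduced with a minimal NumPy sketch (illustrative only; `np.linalg.solve` gives the same result as multiplying by the inverse):

```python
import numpy as np

# Coefficient matrix and right-hand side from (1) and (2)
A = np.array([[3.0, 2.0],
              [5.0, 4.0]])
b = np.array([36.0, 64.0])

# Solve A [x, y]^T = b; same result as A^{-1} b
x, y = np.linalg.solve(A, b)
print(x, y)  # 8.0 6.0
```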
Question
Are there any other methods to solve for both $x$ and $y$?
– user477343

Tags: linear-algebra, systems-of-equations
Comments:

You can always take the reduced row echelon form of $(3)$, which is just the first method made more systematic. – K.M

You can use the substitution $y = 18 - \frac 32 x$. Or, you could use Cramer's rule. – Doug M

This is a linear system of equations, which some believe is the most studied equation in all of mathematics. The reason is that it is so widely used in applied mathematics that there is always reason to find faster and more robust methods that will either be generic or suit the particularities of a given problem. You might roll your eyes at this claim when thinking of your two-variable system, but some engineers need to solve such systems with hundreds of variables in their jobs. – Mefitico
7 Answers
Is this method allowed?
$$\begin{pmatrix}3&2&36\\5&4&64 \end{pmatrix} \sim \begin{pmatrix}1& 2/3&12\\5&4&64 \end{pmatrix} \sim \begin{pmatrix}1&2/3&12\\0&2/3&4 \end{pmatrix} \sim \begin{pmatrix}1&0&8\\0&2/3&4 \end{pmatrix} \sim \begin{pmatrix}1&0&8\\0&1&6 \end{pmatrix}$$
which yields $x=8$ and $y=6$.
The first step is $R_1 \to R_1 \times \frac{1}{3}$.
The second step is $R_2 \to R_2 - 5R_1$.
The third step is $R_1 \to R_1 - R_2$.
The fourth step is $R_2 \to R_2 \times \frac{3}{2}$.
Here $R_i$ denotes the $i$-th row.
– Chinnapparaj R
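The same row operations can be replayed in code; here is a minimal, illustrative Python sketch of the reduction on the augmented matrix:

```python
import numpy as np

# Augmented matrix [A | b] for the system
M = np.array([[3.0, 2.0, 36.0],
              [5.0, 4.0, 64.0]])

M[0] *= 1 / 3          # R1 <- R1 * 1/3
M[1] -= 5 * M[0]       # R2 <- R2 - 5*R1
M[0] -= M[1]           # R1 <- R1 - R2
M[1] *= 3 / 2          # R2 <- R2 * 3/2

print(M)               # [[1. 0. 8.], [0. 1. 6.]]  ->  x = 8, y = 6
```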
I have never seen that! What is it? :D – user477343

Elementary operations! – Chinnapparaj R

I assume $R$ stands for Row. – user477343

It's also called Gaussian elimination. – YiFan

See also augmented matrix and, for typesetting, tex.stackexchange.com/questions/2233/… – Eric Towers
How about using Cramer's Rule? Define $\Delta_x=\left[\begin{matrix}36 & 2 \\ 64 & 4\end{matrix}\right]$, $\Delta_y=\left[\begin{matrix}3 & 36\\ 5 & 64\end{matrix}\right]$ and $\Delta_0=\left[\begin{matrix}3 & 2\\ 5 &4\end{matrix}\right]$.
Now the computation is trivial, as you have $x=\dfrac{\det\Delta_x}{\det\Delta_0}$ and $y=\dfrac{\det\Delta_y}{\det\Delta_0}$.
– Paras Khosla
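As an illustrative sketch of the same rule in code (not from the answer), the determinant ratios can be evaluated directly:

```python
import numpy as np

A = np.array([[3.0, 2.0], [5.0, 4.0]])   # Delta_0
b = np.array([36.0, 64.0])

Ax = A.copy(); Ax[:, 0] = b              # Delta_x: replace first column by b
Ay = A.copy(); Ay[:, 1] = b              # Delta_y: replace second column by b

x = np.linalg.det(Ax) / np.linalg.det(A)
y = np.linalg.det(Ay) / np.linalg.det(A)
print(x, y)  # 8.0 6.0
```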
Wow! Very useful! I have never heard of this method before! $(+1)$ – user477343

You must've made a calculation mistake. Recheck your calculations. It does indeed give $(2, 1)$ as the answer. Cheers :) – Paras Khosla

Cramer's rule is important theoretically, but it is a very inefficient way to solve equations numerically, except for two equations in two unknowns. For $n$ equations, Cramer's rule requires $n!$ arithmetic operations to evaluate the determinants, compared with about $n^3$ operations to solve using Gaussian elimination. Even when $n = 10$, $n^3 = 1000$ but $n! = 3628800$. And in many real-world applied math computations, $n = 100,000$ is a "small problem"! – alephzero

@alephzero Just to be technical, there are faster ways to calculate the determinant of large matrices. However, the one method I know that does it in $n^3$ relies on Gaussian elimination itself, which makes it a bit redundant... – mlk

@user477343 asked for different ways to solve, not more efficient ways to solve. This is awesome. – user1717828
By false position:
Assume $x=10,y=3$, which fulfills the first equation, and let $x=10+x',y=3+y'$. Now, after simplification,
$$3x'+2y'=0,\\5x'+4y'=2.$$
We easily eliminate $y'$ (using $4y'=-6x'$) and get
$$-x'=2.$$
Though this method is not essentially different from, say, elimination, it can be useful for by-hand computation as it yields smaller terms.
– Yves Daoust
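The correction step generalizes nicely; a minimal, illustrative NumPy sketch (variable names are my own):

```python
import numpy as np

A = np.array([[3.0, 2.0], [5.0, 4.0]])
b = np.array([36.0, 64.0])

guess = np.array([10.0, 3.0])              # satisfies the first equation exactly
residual = b - A @ guess                   # [0, 2]: how far the guess misses
correction = np.linalg.solve(A, residual)  # (x', y') = (-2, 3)

print(guess + correction)                  # [8. 6.]
```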
Another method to solve simultaneous equations in two dimensions is to plot the graphs of the equations on a Cartesian plane and find the point of intersection.
– Elements in Space
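For instance, a small Matplotlib sketch (illustrative only) plots both lines and marks the crossing at $(8,6)$:

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 14, 200)
plt.plot(x, (36 - 3 * x) / 2, label="3x + 2y = 36")
plt.plot(x, (64 - 5 * x) / 4, label="5x + 4y = 64")
plt.scatter([8], [6], color="black", zorder=3)  # intersection (8, 6)
plt.legend()
plt.show()
```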
Any method you can come up with will in the end amount to Cramer's rule, which gives explicit formulas for the solution. Except in special cases, the solution of a system is unique, so you will always be computing the ratio of those determinants.
Anyway, it turns out that by organizing the computation in certain ways, you can reduce the number of arithmetic operations to be performed. For $2\times2$ systems, the different variants make little difference in this respect. Things become more interesting for $n\times n$ systems.
Direct application of Cramer is by far the worst, as it takes a number of operations proportional to $(n+1)!$, which is huge. Even for $3\times3$ systems, it should be avoided. The best method to date is Gaussian elimination (you eliminate one unknown at a time by forming linear combinations of the equations and turn the system into triangular form). The total workload is proportional to $n^3$ operations.
The steps of standard Gaussian elimination:
$$\begin{cases}ax+by=c,\\ dx+ey=f.\end{cases}$$
Subtract the first times $\dfrac da$ from the second,
$$\begin{cases}ax+by=c,\\ 0x+\left(e-b\dfrac da\right)y=f-c\dfrac da.\end{cases}$$
Solve for $y$,
$$\begin{cases}ax+by=c,\\ y=\dfrac{f-c\dfrac da}{e-b\dfrac da}.\end{cases}$$
Solve for $x$,
$$\begin{cases}x=\dfrac{c-b\dfrac{f-c\dfrac da}{e-b\dfrac da}}a,\\ y=\dfrac{f-c\dfrac da}{e-b\dfrac da}.\end{cases}$$
So written, the formulas are a little scary, but when you use intermediate variables, the complexity vanishes:
$$d'=\frac da,\quad e'=e-bd',\quad f'=f-cd'\quad\to\quad y=\frac{f'}{e'},\ x=\frac{c-by}a.$$
Anyway, for a $2\times2$ system, this is worse than Cramer!
$$\begin{cases}x=\dfrac{ce-bf}{\Delta},\\ y=\dfrac{af-cd}{\Delta}\end{cases}$$ where $\Delta=ae-bd$.
For large systems, say $100\times100$ and up, very different methods are used. They work by computing approximate solutions and improving them iteratively until the inaccuracy becomes acceptable. Quite often such systems are sparse (many coefficients are zero), and this is exploited to reduce the number of operations. (The direct methods are inappropriate as they would break the sparseness property.)
– Yves Daoust
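A direct transcription of those intermediate variables into code, as an illustrative sketch (the function name is my own):

```python
def solve_2x2(a, b, c, d, e, f):
    """Solve ax + by = c, dx + ey = f via the elimination steps above."""
    dp = d / a           # d'
    ep = e - b * dp      # e'
    fp = f - c * dp      # f'
    y = fp / ep
    x = (c - b * y) / a
    return x, y

print(solve_2x2(3, 2, 36, 5, 4, 64))  # (8.0, 6.0)
```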
$$\begin{align}3x+2y&=36 \tag1\\ 5x+4y&=64\tag2\end{align}$$
From $(1)$, $x=\frac{36-2y}{3}$; substitute into $(2)$ and you'll get $5\left(\frac{36-2y}{3}\right)+4y=64 \implies y=6$, and then you can get that $x=24/3=8$.
Another method
From $(1)$, $x=\frac{36-2y}{3}$.
From $(2)$, $x=\frac{64-4y}{5}$.
But $x=x \implies \frac{36-2y}{3}=\frac{64-4y}{5}$; cross-multiply and you'll get $5(36-2y)=3(64-4y) \implies y=6$, and substitute to get $x=8$.
– Fareed AF
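Both substitutions can also be checked symbolically; a short, illustrative SymPy sketch:

```python
from sympy import symbols, Eq, solve

x, y = symbols("x y")
eq1 = Eq(3 * x + 2 * y, 36)
eq2 = Eq(5 * x + 4 * y, 64)

x_from_1 = solve(eq1, x)[0]                 # (36 - 2*y)/3
y_val = solve(eq2.subs(x, x_from_1), y)[0]  # 6
x_val = x_from_1.subs(y, y_val)             # 8
print(x_val, y_val)
```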
Pure algebra! I personally prefer the second method. Thanks for that! $(+1)$ – user477343
Other answers have given standard, elementary methods of solving simultaneous equations. Here are a few other ones that can be more long-winded and excessive, but work nonetheless.
Method $1$: (multiplicity of $y$)
Let $y=kx$ for some $k\in\Bbb R$. Then $$3x+2y=36\implies x(2k+3)=36\implies x=\frac{36}{2k+3}\\5x+4y=64\implies x(4k+5)=64\implies x=\frac{64}{4k+5}$$ so $$36(4k+5)=64(2k+3)\implies (144-128)k=(192-180)\implies k=\frac34.$$ Now $$x=\frac{64}{4k+5}=\frac{64}{4\cdot\frac34+5}=8\implies y=kx=\frac34\cdot8=6.\quad\square$$
Method $2$: (use this if you really like quadratic equations :P)
How about we try squaring the equations? We get $$3x+2y=36\implies 9x^2+12xy+4y^2=1296\\5x+4y=64\implies 25x^2+40xy+16y^2=4096$$ Multiplying the first equation by $10$ and the second by $3$ yields $$90x^2+120xy+40y^2=12960\\75x^2+120xy+48y^2=12288$$ and subtracting gives us $$15x^2-8y^2=672,$$ which is a hyperbola. Notice that subtracting the two linear equations gives you $2x+2y=28\implies y=14-x$, so you have the nice quadratic $$15x^2-8(14-x)^2=672.$$ Enjoy!
– TheSimpliFire
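Finishing Method $2$ numerically (an illustrative sketch): the quadratic expands to $7x^2+224x-2240=0$, and since squaring can introduce an extraneous root, we keep the root that satisfies $(1)$.

```python
import numpy as np

roots = np.roots([7, 224, -2240])   # roots of 7x^2 + 224x - 2240 = 0: -40 and 8

for x in roots:
    y = 14 - x                      # from the linear relation y = 14 - x
    if np.isclose(3 * x + 2 * y, 36):
        print(x, y)                 # 8.0 6.0  (x = -40 is extraneous)
```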
In your first method, why do you substitute $k=\frac34$ in the second equation $5x+4y=64$ as opposed to the first equation $3x+2y=36$? Also, hello! :D – user477343

Because for $3x+2y=36$, we get $2k$ in the denominator, but $2k=3/2$ leaves us with a fraction. If we use the other equation, we get $4k=3$, which is neater. – TheSimpliFire

So, it doesn't really matter which one we substitute it in; but it is good to have some intuition when deciding! Thanks for your answer :P $(+1)$ – user477343

No, at an intersection point between two lines, most of their properties at that point are the same (apart from gradient, of course). – TheSimpliFire