Systems of Equations

A system of equations is a set of equations that involve the same variables. A system of linear equations is a system of equations in which each equation is linear. Systems of linear equations in two variables (\(x\) and \(y\)) and in three variables (\(x\), \(y\), and \(z\)) have the following forms:

\[ \begin{array}{cc} \text { Linear system } & \text { Linear system } \\ \text { 2 variables } & \text { 3 variables } \\ a_{11} x+a_{12} y=b_{1} & a_{11} x+a_{12} y+a_{13} z=b_{1} \\ a_{21} x+a_{22} y=b_{2} & a_{21} x+a_{22} y+a_{23} z=b_{2} \\ & a_{31} x+a_{32} y+a_{33} z=b_{3} \end{array} \]

A solution of a system of equations is an assignment of values for the variables that makes each equation in the system true. To solve a system means to find all solutions of the system.

Substitution Method

To solve a pair of equations in two variables by substitution:

  1. Solve for one variable in terms of the other variable in one equation.
  2. Substitute into the other equation to get an equation in one variable, and solve for this variable.
  3. Back-substitute the value(s) of the variable you have found into either original equation, and solve for the remaining variable.
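
For example, to solve the system \(x+y=3\), \(2x-y=0\) by substitution, solve the first equation for \(y\) to get \(y=3-x\). Substituting into the second equation gives \(2x-(3-x)=0\), so \(3x=3\) and \(x=1\). Back-substituting into \(y=3-x\) gives \(y=2\), so the solution is \((1,2)\).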

Elimination Method

To solve a pair of equations in two variables by elimination:

  1. Adjust the coefficients by multiplying the equations by appropriate constants so that the term(s) involving one of the variables are of opposite signs in the equations.
  2. Add the equations to eliminate that one variable; this gives an equation in the other variable. Solve for this variable.
  3. Back-substitute the value(s) of the variable that you have found into either original equation, and solve for the remaining variable.
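
For example, in the system \(3x+2y=7\), \(x-2y=1\) the \(y\)-terms already have opposite signs, so adding the equations eliminates \(y\) and gives \(4x=8\), so \(x=2\). Back-substituting into \(x-2y=1\) gives \(2-2y=1\), so \(y=\frac{1}{2}\), and the solution is \(\left(2, \frac{1}{2}\right)\).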

Graphical Method

To solve a pair of equations in two variables graphically, first put each equation in function form, \(y=f(x)\).

  1. Graph the equations on a common screen.
  2. Find the points of intersection of the graphs. The solutions are the \(x\)- and \(y\)-coordinates of the points of intersection.
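
For example, graphing \(y=x+1\) and \(y=3-x\) on a common screen shows that the graphs intersect at the point \((1,2)\), so the solution of the system is \(x=1\), \(y=2\).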

Gaussian Elimination

To use Gaussian elimination to solve a system of linear equations, use the following operations to change the system to an equivalent simpler system:

  1. Add a nonzero multiple of one equation to another.
  2. Multiply an equation by a nonzero constant.
  3. Interchange the position of two equations in the system.
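
For example, to solve the system

\[ \begin{aligned} x+y+z & =6 \\ 2 x-y+z & =3 \\ x+2 y-z & =2 \end{aligned} \]

add \(-2\) times the first equation to the second and \(-1\) times the first equation to the third to get the equivalent system \(x+y+z=6\), \(-3y-z=-9\), \(y-2z=-4\). Adding \(3\) times the third equation to the second then gives \(-7z=-21\), so \(z=3\); back-substituting gives \(y=2\) and \(x=1\).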

Number of Solutions of a Linear System

A system of linear equations can have:

  1. Exactly one solution (a unique value for each variable).
  2. No solution, in which case the system is inconsistent.
  3. Infinitely many solutions, in which case the system is dependent.

How to Determine the Number of Solutions of a Linear System

When Gaussian elimination is used to solve a system of linear equations, we can tell that the system has:

  1. No solution (is inconsistent) if we arrive at a false equation of the form \(0=c\), where \(c\) is nonzero.
  2. Infinitely many solutions (is dependent) if the system is consistent but we end up with fewer equations than variables (after discarding redundant equations of the form \(0=0\) ).
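
For example, subtracting twice the first equation from the second in the system \(x+y=1\), \(2x+2y=5\) gives the false equation \(0=3\), so that system is inconsistent; doing the same in the system \(x+y=1\), \(2x+2y=2\) gives \(0=0\), leaving one equation in two variables, so that system is dependent and has infinitely many solutions.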

Matrices

A matrix \(A\) of dimension \(m \times n\) is a rectangular array of numbers with \(m\) rows and \(n\) columns:

\[ A=\left[\begin{array}{cccc} a_{11} & a_{12} & \cdots & a_{1 n} \\ a_{21} & a_{22} & \cdots & a_{2 n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m 1} & a_{m 2} & \cdots & a_{m n} \end{array}\right] \]

Augmented Matrix of a System

The augmented matrix of a system of linear equations is the matrix consisting of the coefficients and the constant terms. For example, for the two-variable system

\[ \begin{aligned} & a_{11} x+a_{12} y=b_{1} \\ & a_{21} x+a_{22} y=b_{2} \end{aligned} \]

the augmented matrix is

\[ \left[\begin{array}{lll} a_{11} & a_{12} & b_{1} \\ a_{21} & a_{22} & b_{2} \end{array}\right] \]
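
For example, the system \(x+2y=5\), \(3x-y=1\) has augmented matrix

\[ \left[\begin{array}{rrr} 1 & 2 & 5 \\ 3 & -1 & 1 \end{array}\right] \]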

Elementary Row Operations

To solve a system of linear equations using the augmented matrix of the system, the following operations can be used to transform the rows of the matrix:

  1. Add a nonzero multiple of one row to another.
  2. Multiply a row by a nonzero constant.
  3. Interchange two rows.

Row-Echelon Form of a Matrix

A matrix is in row-echelon form if its entries satisfy the following conditions:

  1. The first nonzero entry in each row (the leading entry) is the number 1.
  2. The leading entry of each row is to the right of the leading entry in the row above it.
  3. All rows consisting entirely of zeros are at the bottom of the matrix.

If the matrix also satisfies the following condition, it is in reduced row-echelon form:

  4. If a column contains a leading entry, then every other entry in that column is 0.
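
For example, the matrix

\[ \left[\begin{array}{rrrr} 1 & 2 & -1 & 3 \\ 0 & 1 & 4 & 7 \\ 0 & 0 & 1 & 2 \end{array}\right] \]

is in row-echelon form, and the matrix

\[ \left[\begin{array}{rrrr} 1 & 0 & 0 & 3 \\ 0 & 1 & 0 & 7 \\ 0 & 0 & 1 & 2 \end{array}\right] \]

is in reduced row-echelon form.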

Number of Solutions of a Linear System

If the augmented matrix of a system of linear equations has been reduced to row-echelon form using elementary row operations, then the system has:

  1. No solution if the row-echelon form contains a row that represents the equation \(0=1\). In this case the system is inconsistent.
  2. One solution if each variable in the row-echelon form is a leading variable.
  3. Infinitely many solutions if the system is consistent but not every variable is a leading variable. In this case the system is dependent.

Operations on Matrices

If \(A\) and \(B\) are \(m \times n\) matrices and \(c\) is a scalar (real number), then:

  1. The sum \(A+B\) is the \(m \times n\) matrix that is obtained by adding corresponding entries of \(A\) and \(B\).
  2. The difference \(A-B\) is the \(m \times n\) matrix that is obtained by subtracting corresponding entries of \(A\) and \(B\).
  3. The scalar product \(c A\) is the \(m \times n\) matrix that is obtained by multiplying each entry of \(A\) by \(c\).
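
For example, if \(A=\left[\begin{array}{ll} 1 & 2 \\ 3 & 4 \end{array}\right]\) and \(B=\left[\begin{array}{rr} 0 & 5 \\ -1 & 2 \end{array}\right]\), then \(A+B=\left[\begin{array}{ll} 1 & 7 \\ 2 & 6 \end{array}\right]\), \(A-B=\left[\begin{array}{rr} 1 & -3 \\ 4 & 2 \end{array}\right]\), and \(2A=\left[\begin{array}{ll} 2 & 4 \\ 6 & 8 \end{array}\right]\).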

Multiplication of Matrices

If \(A\) is an \(m \times n\) matrix and \(B\) is an \(n \times k\) matrix (so the number of columns of matrix \(A\) is the same as the number of rows of matrix \(B\) ), then the matrix product \(A B\) is the \(m \times k\) matrix whose \(i j\)-entry is the inner product of the \(i\) th row of \(A\) and the \(j\) th column of \(B\).
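
For example, the product of a \(2 \times 3\) matrix and a \(3 \times 2\) matrix is a \(2 \times 2\) matrix:

\[ \left[\begin{array}{rrr} 1 & 0 & 2 \\ -1 & 3 & 1 \end{array}\right]\left[\begin{array}{ll} 3 & 1 \\ 2 & 1 \\ 1 & 0 \end{array}\right]=\left[\begin{array}{ll} 5 & 1 \\ 4 & 2 \end{array}\right] \]

where, for instance, the entry in row 1, column 1 is the inner product \(1 \cdot 3+0 \cdot 2+2 \cdot 1=5\).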

Properties of Matrix Operations

If \(A, B\), and \(C\) are matrices of compatible dimensions, then the following properties hold:

  1. Commutativity of addition:
\[ A+B=B+A \]
  2. Associativity:
\[ \begin{gathered} (A+B)+C=A+(B+C) \\ (A B) C=A(B C) \end{gathered} \]
  3. Distributivity:
\[ \begin{aligned} & A(B+C)=A B+A C \\ & (B+C) A=B A+C A \end{aligned} \]

(Note that matrix multiplication is not commutative.)

Identity Matrix

The identity matrix \(I_{n}\) is the \(n \times n\) matrix whose main diagonal entries are all 1 and whose other entries are all 0:

\[ I_{n}=\left[\begin{array}{cccc} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{array}\right] \]

If \(A\) is an \(m \times n\) matrix, then

\[ A I_{n}=A \quad \text { and } \quad I_{m} A=A \]

Inverse of a Matrix

If \(A\) is an \(n \times n\) matrix, then the inverse of \(A\) is the \(n \times n\) matrix \(A^{-1}\) with the following properties:

\[ A^{-1} A=I_{n} \quad \text { and } \quad A A^{-1}=I_{n} \]

To find the inverse of a matrix, we use a procedure involving elementary row operations. (Note that some square matrices do not have an inverse.)

Inverse of a \(2 \times 2\) Matrix

For \(2 \times 2\) matrices the following special rule provides a shortcut for finding the inverse (provided that \(a d-b c \neq 0\)):

\[ A=\left[\begin{array}{ll} a & b \\ c & d \end{array}\right] \Rightarrow A^{-1}=\frac{1}{a d-b c}\left[\begin{array}{rr} d & -b \\ -c & a \end{array}\right] \]
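
For example, if \(A=\left[\begin{array}{ll} 2 & 1 \\ 5 & 3 \end{array}\right]\), then \(a d-b c=2 \cdot 3-1 \cdot 5=1\), so

\[ A^{-1}=\frac{1}{1}\left[\begin{array}{rr} 3 & -1 \\ -5 & 2 \end{array}\right]=\left[\begin{array}{rr} 3 & -1 \\ -5 & 2 \end{array}\right] \]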

Writing a Linear System as a Matrix Equation

A system of \(n\) linear equations in \(n\) variables can be written as a single matrix equation

\[ A X=B \]

where \(A\) is the \(n \times n\) matrix of coefficients, \(X\) is the \(n \times 1\) matrix of the variables, and \(B\) is the \(n \times 1\) matrix of the constants. For example, the linear system of two equations in two variables

\[ \begin{aligned} & a_{11} x+a_{12} y=b_{1} \\ & a_{21} x+a_{22} y=b_{2} \end{aligned} \]

can be expressed as

\[ \left[\begin{array}{ll} a_{11} & a_{12} \\ a_{21} & a_{22} \end{array}\right]\left[\begin{array}{l} x \\ y \end{array}\right]=\left[\begin{array}{l} b_{1} \\ b_{2} \end{array}\right] \]

Solving Matrix Equations

If \(A\) is an invertible \(n \times n\) matrix, \(X\) is an \(n \times 1\) variable matrix, and \(B\) is an \(n \times 1\) constant matrix, then the matrix equation

\[ A X=B \]

has the unique solution

\[ X=A^{-1} B \]
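
For example, the system \(2x+y=1\), \(5x+3y=2\) can be written as \(AX=B\) with \(A=\left[\begin{array}{ll} 2 & 1 \\ 5 & 3 \end{array}\right]\), \(X=\left[\begin{array}{l} x \\ y \end{array}\right]\), and \(B=\left[\begin{array}{l} 1 \\ 2 \end{array}\right]\). Using the inverse found above,

\[ X=A^{-1} B=\left[\begin{array}{rr} 3 & -1 \\ -5 & 2 \end{array}\right]\left[\begin{array}{l} 1 \\ 2 \end{array}\right]=\left[\begin{array}{r} 1 \\ -1 \end{array}\right] \]

so \(x=1\) and \(y=-1\).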

Determinant of a \(2 \times 2\) Matrix

The determinant of the matrix

\[ A=\left[\begin{array}{ll} a & b \\ c & d \end{array}\right] \]

is the number

\[ \operatorname{det}(A)=|A|=a d-b c \]
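
For example, \(\operatorname{det}\left[\begin{array}{ll} 3 & 2 \\ 1 & 4 \end{array}\right]=3 \cdot 4-2 \cdot 1=10\).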

Minors and Cofactors

If \(A=\left[a_{i j}\right]\) is an \(n \times n\) matrix, then the minor \(M_{i j}\) of the entry \(a_{i j}\) is the determinant of the matrix obtained by deleting the \(i\) th row and the \(j\) th column of \(A\).

The cofactor \(A_{i j}\) of the entry \(a_{i j}\) is

\[ A_{i j}=(-1)^{i+j} M_{i j} \]

(Thus, the minor and the cofactor of each entry either are the same or are negatives of each other.)

Determinant of an \(n \times n\) Matrix

To find the determinant of the \(n \times n\) matrix

\[ A=\left[\begin{array}{cccc} a_{11} & a_{12} & \cdots & a_{1 n} \\ a_{21} & a_{22} & \cdots & a_{2 n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n 1} & a_{n 2} & \cdots & a_{n n} \end{array}\right] \]

we choose a row or column to expand, and then we calculate the number that is obtained by multiplying each element of that row or column by its cofactor and then adding the resulting products. For example, if we choose to expand about the first row, we get

\[ \operatorname{det}(A)=|A|=a_{11} A_{11}+a_{12} A_{12}+\cdots+a_{1 n} A_{1 n} \]
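
For example, expanding about the first row of a \(3 \times 3\) matrix,

\[ \left|\begin{array}{rrr} 1 & 2 & 0 \\ 3 & 1 & 2 \\ 0 & 1 & 1 \end{array}\right|=1\left|\begin{array}{ll} 1 & 2 \\ 1 & 1 \end{array}\right|-2\left|\begin{array}{ll} 3 & 2 \\ 0 & 1 \end{array}\right|+0\left|\begin{array}{ll} 3 & 1 \\ 0 & 1 \end{array}\right|=1(-1)-2(3)+0=-7 \]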

Invertibility Criterion

A square matrix has an inverse if and only if its determinant is not 0 .

Row and Column Transformations

If we add a nonzero multiple of one row to another row in a square matrix or a nonzero multiple of one column to another column, then the determinant of the matrix is unchanged.

Cramer's Rule

If a system of \(n\) linear equations in the \(n\) variables \(x_{1}, x_{2}, \ldots, x_{n}\) is equivalent to the matrix equation \(D X=B\) and if \(|D| \neq 0\), then the solutions of the system are

\[ x_{1}=\frac{\left|D_{x_{1}}\right|}{|D|} \quad x_{2}=\frac{\left|D_{x_{2}}\right|}{|D|} \quad \cdots \quad x_{n}=\frac{\left|D_{x_{n}}\right|}{|D|} \]

where \(D_{x_{i}}\) is the matrix that is obtained from \(D\) by replacing its \(i\) th column by the constant matrix \(B\).
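
For example, for the system \(2x+y=1\), \(5x+3y=2\) we have \(|D|=2 \cdot 3-1 \cdot 5=1\), \(\left|D_{x}\right|=\left|\begin{array}{ll} 1 & 1 \\ 2 & 3 \end{array}\right|=1\), and \(\left|D_{y}\right|=\left|\begin{array}{ll} 2 & 1 \\ 5 & 2 \end{array}\right|=-1\), so \(x=1\) and \(y=-1\).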

Area of a Triangle Using Determinants

If a triangle in the coordinate plane has vertices \(\left(a_{1}, b_{1}\right),\left(a_{2}, b_{2}\right)\), and \(\left(a_{3}, b_{3}\right)\), then the area of the triangle is given by

\[ \mathscr{A}= \pm \frac{1}{2}\left|\begin{array}{lll} a_{1} & b_{1} & 1 \\ a_{2} & b_{2} & 1 \\ a_{3} & b_{3} & 1 \end{array}\right| \]

where the sign is chosen to make the area positive.
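
For example, the triangle with vertices \((0,0)\), \((4,0)\), and \((0,3)\) has area

\[ \mathscr{A}= \pm \frac{1}{2}\left|\begin{array}{lll} 0 & 0 & 1 \\ 4 & 0 & 1 \\ 0 & 3 & 1 \end{array}\right|= \pm \frac{1}{2}(12)=6 \]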

Partial Fractions

The partial fraction decomposition of a rational function

\[ r(x)=\frac{P(x)}{Q(x)} \]

(where the degree of \(P\) is less than the degree of \(Q\) ) is a sum of simpler fractional expressions that equal \(r(x)\) when brought to a common denominator. The denominator of each simpler fraction is either a linear or quadratic factor of \(Q(x)\) or a power of such a linear or quadratic factor. To find the terms of the partial fraction decomposition, we first factor \(Q(x)\) into linear and irreducible quadratic factors. The terms then have the following forms, depending on the factors of \(Q(x)\).

  1. For every distinct linear factor \(a x+b\) there is a term of the form
\[ \frac{A}{a x+b} \]
  2. For every repeated linear factor \((a x+b)^{m}\) there are terms of the form
\[ \frac{A_{1}}{a x+b}+\frac{A_{2}}{(a x+b)^{2}}+\cdots+\frac{A_{m}}{(a x+b)^{m}} \]
  3. For every distinct quadratic factor \(a x^{2}+b x+c\) there is a term of the form
\[ \frac{A x+B}{a x^{2}+b x+c} \]
  4. For every repeated quadratic factor \(\left(a x^{2}+b x+c\right)^{m}\) there are terms of the form
\[ \frac{A_{1} x+B_{1}}{a x^{2}+b x+c}+\frac{A_{2} x+B_{2}}{\left(a x^{2}+b x+c\right)^{2}}+\cdots+\frac{A_{m} x+B_{m}}{\left(a x^{2}+b x+c\right)^{m}} \]
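
For example,

\[ \frac{x+5}{(x-1)(x+2)}=\frac{2}{x-1}-\frac{1}{x+2} \]

where the constants are found by writing \(x+5=A(x+2)+B(x-1)\) and substituting \(x=1\) and \(x=-2\) to get \(A=2\) and \(B=-1\).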

Graphing Inequalities

To graph an inequality:

  1. Graph the equation that corresponds to the inequality. This "boundary curve" divides the coordinate plane into separate regions.
  2. Use test points to determine which region(s) satisfy the inequality.
  3. Shade the region(s) that satisfy the inequality, and use a solid line for the boundary curve if it satisfies the inequality (\(\leq\) or \(\geq\)) and a dashed line if it does not (\(<\) or \(>\)).
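
For example, to graph \(x+y \leq 4\), graph the boundary line \(x+y=4\) as a solid line (the inequality involves \(\leq\)); the test point \((0,0)\) satisfies \(0 \leq 4\), so we shade the region containing the origin.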

Graphing Systems of Inequalities

To graph the solution of a system of inequalities (or feasible region determined by the inequalities):

  1. Graph all the inequalities on the same coordinate plane.
  2. The solution is the intersection of the solutions of all the inequalities, so shade the region that satisfies all the inequalities.
  3. Determine the coordinates of the intersection points of all the boundary curves that touch the solution set of the system. These points are the vertices of the solution.
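
For example, the solution of the system \(x \geq 0\), \(y \geq 0\), \(x+y \leq 4\) is the triangular region with vertices \((0,0)\), \((4,0)\), and \((0,4)\).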