Systems of Equations in Three Variables

From Department of Mathematics at UTSA
 
See [[Systems of Equations in Two Variables]] for more information on systems of equations.
 
[[File:Secretsharing 3-point.svg|thumb|right|A linear system in three variables determines a collection of [[plane (mathematics)|planes]]. The intersection point is the solution.]]

A '''system of linear equations''' (or '''linear system''') is a collection of one or more linear equations involving the same set of variables. For example,

:<math>\begin{alignat}{7}
3x &&\; + \;&& 2y            &&\; - \;&& z  &&\; = \;&& 1 & \\
2x &&\; - \;&& 2y            &&\; + \;&& 4z &&\; = \;&& -2 & \\
-x &&\; + \;&& \tfrac{1}{2} y &&\; - \;&& z  &&\; = \;&& 0 &
\end{alignat}</math>

is a system of three equations in the three variables {{math|''x'', ''y'', ''z''}}. A '''solution''' to a linear system is an assignment of values to the variables such that all the equations are simultaneously satisfied. A solution to the system above is given by

:<math>\begin{alignat}{2}
x &\,=\,& 1 \\
y &\,=\,& -2 \\
z &\,=\,& -2
\end{alignat}</math>

since it makes all three equations valid. The word "system" indicates that the equations are to be considered collectively, rather than individually.

==Solving a linear system==

There are several algorithms for solving a system of linear equations.

===Describing the solution===

When the solution set is finite, it consists of a single element. In this case, the unique solution is described by a sequence of equations whose left-hand sides are the names of the unknowns and whose right-hand sides are the corresponding values, for example <math>(x=3, \;y=-2,\; z=6)</math>. When an order on the unknowns has been fixed, for example the alphabetical order, the solution may be described as a vector of values, like <math>(3, \,-2,\, 6)</math> for the previous example.

To describe a set with an infinite number of solutions, typically some of the variables are designated as '''free''' (or '''independent''', or as '''parameters'''), meaning that they are allowed to take any value, while the remaining variables are '''dependent''' on the values of the free variables.

For example, consider the following system:

:<math>\begin{alignat}{7}
x &&\; + \;&& 3y &&\; - \;&& 2z &&\; = \;&& 5 & \\
3x &&\; + \;&& 5y &&\; + \;&& 6z &&\; = \;&& 7 &
\end{alignat}</math>

The solution set to this system can be described by the following equations:

:<math>x=-7z-1\;\;\;\;\text{and}\;\;\;\;y=3z+2\text{.}</math>

Here ''z'' is the free variable, while ''x'' and ''y'' are dependent on ''z''. Any point in the solution set can be obtained by first choosing a value for ''z'', and then computing the corresponding values for ''x'' and ''y''.

Each free variable gives the solution space one degree of freedom, and the number of degrees of freedom is equal to the dimension of the solution set. For example, the solution set for the above system is a line, since a point in the solution set can be chosen by specifying the value of the single parameter ''z''. A solution set with two free variables describes a plane, and one with more free variables describes a higher-dimensional set.

Different choices for the free variables may lead to different descriptions of the same solution set. For example, the solution to the above equations can alternatively be described as follows:

:<math>y=-\frac{3}{7}x + \frac{11}{7}\;\;\;\;\text{and}\;\;\;\;z=-\frac{1}{7}x-\frac{1}{7}\text{.}</math>

Here ''x'' is the free variable, and ''y'' and ''z'' are dependent.
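Either parameterization can be checked numerically; a minimal sketch in Python, using ''z'' as the free parameter and the coefficients of the system above:

```python
# Check that x = -7z - 1 and y = 3z + 2 satisfy both equations
# for several values of the free variable z.
for z in [-2.0, 0.0, 1.5, 10.0]:
    x = -7*z - 1
    y = 3*z + 2
    assert abs((x + 3*y - 2*z) - 5) < 1e-9   # first equation
    assert abs((3*x + 5*y + 6*z) - 7) < 1e-9  # second equation
```

Each choice of the free variable yields one point on the solution line.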

===Elimination of variables===

The simplest method for solving a system of linear equations is to repeatedly eliminate variables. This method can be described as follows:

# In the first equation, solve for one of the variables in terms of the others.
# Substitute this expression into the remaining equations. This yields a system of equations with one fewer equation and one fewer unknown.
# Repeat until the system is reduced to a single linear equation.
# Solve this equation, and then back-substitute until the entire solution is found.

For example, consider the following system:

:<math>\begin{alignat}{7}
x &&\; + \;&& 3y &&\; - \;&& 2z &&\; = \;&& 5 & \\
3x &&\; + \;&& 5y &&\; + \;&& 6z &&\; = \;&& 7 & \\
2x &&\; + \;&& 4y &&\; + \;&& 3z &&\; = \;&& 8 &
\end{alignat}</math>

Solving the first equation for ''x'' gives ''x'' = 5 + 2''z'' - 3''y'', and plugging this into the second and third equation yields

:<math>\begin{alignat}{5}
-4y &&\; + \;&& 12z &&\; = \;&& -8 & \\
-2y &&\; + \;&& 7z &&\; = \;&& -2 &
\end{alignat}</math>

Solving the first of these equations for ''y'' yields ''y'' = 2 + 3''z'', and plugging this into the second equation yields ''z'' = 2. We now have:

:<math>\begin{alignat}{7}
x &&\; = \;&& 5 &&\; + \;&& 2z &&\; - \;&& 3y & \\
y &&\; = \;&& 2 &&\; + \;&& 3z && && & \\
z &&\; = \;&& 2 && && && && &
\end{alignat}</math>

Substituting ''z'' = 2 into the second equation gives ''y'' = 8, and substituting ''z'' = 2 and ''y'' = 8 into the first equation yields ''x'' = -15. Therefore, the solution set is the single point (''x'', ''y'', ''z'') = (-15, 8, 2).
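The final back-substitution transcribes directly into code; a short sketch in Python:

```python
# Back-substitute through the triangular system obtained above:
#   z = 2,  y = 2 + 3z,  x = 5 + 2z - 3y
z = 2
y = 2 + 3*z
x = 5 + 2*z - 3*y
print((x, y, z))  # (-15, 8, 2)
```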

===Row reduction===

In '''row reduction''' (also known as '''Gaussian elimination'''), the linear system is represented as an augmented matrix:

:<math>\left[\begin{array}{rrr|r}
1 & 3 & -2 & 5 \\
3 & 5 & 6 & 7 \\
2 & 4 & 3 & 8
\end{array}\right]\text{.}
</math>

This matrix is then modified using elementary row operations until it reaches reduced row echelon form. There are three types of elementary row operations:

:'''Type 1''': Swap the positions of two rows.
:'''Type 2''': Multiply a row by a nonzero scalar.
:'''Type 3''': Add to one row a scalar multiple of another.

Because these operations are reversible, the augmented matrix produced always represents a linear system that is equivalent to the original.

There are several specific algorithms to row-reduce an augmented matrix, the simplest of which are Gaussian elimination and Gauss–Jordan elimination. The following computation shows Gauss–Jordan elimination applied to the matrix above:

:<math>\begin{align}\left[\begin{array}{rrr|r}
1 & 3 & -2 & 5 \\
3 & 5 & 6 & 7 \\
2 & 4 & 3 & 8
\end{array}\right]&\sim
\left[\begin{array}{rrr|r}
1 & 3 & -2 & 5 \\
0 & -4 & 12 & -8 \\
2 & 4 & 3 & 8
\end{array}\right]\sim
\left[\begin{array}{rrr|r}
1 & 3 & -2 & 5 \\
0 & -4 & 12 & -8 \\
0 & -2 & 7 & -2
\end{array}\right]\sim
\left[\begin{array}{rrr|r}
1 & 3 & -2 & 5 \\
0 & 1 & -3 & 2 \\
0 & -2 & 7 & -2
\end{array}\right]
\\
&\sim
\left[\begin{array}{rrr|r}
1 & 3 & -2 & 5 \\
0 & 1 & -3 & 2 \\
0 & 0 & 1 & 2
\end{array}\right]\sim
\left[\begin{array}{rrr|r}
1 & 3 & -2 & 5 \\
0 & 1 & 0 & 8 \\
0 & 0 & 1 & 2
\end{array}\right]\sim
\left[\begin{array}{rrr|r}
1 & 3 & 0 & 9 \\
0 & 1 & 0 & 8 \\
0 & 0 & 1 & 2
\end{array}\right]\sim
\left[\begin{array}{rrr|r}
1 & 0 & 0 & -15 \\
0 & 1 & 0 & 8 \\
0 & 0 & 1 & 2
\end{array}\right].\end{align}</math>

The last matrix is in reduced row echelon form, and represents the system ''x'' = -15, ''y'' = 8, ''z'' = 2. A comparison with the example in the previous section on the algebraic elimination of variables shows that these two methods are in fact the same; the difference lies in how the computations are written down.
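The same reduction can be reproduced with a computer algebra system. A sketch using SymPy (assuming it is installed); `Matrix.rref` performs Gauss–Jordan elimination with exact rational arithmetic:

```python
from sympy import Matrix

# Augmented matrix of the system above.
M = Matrix([[1, 3, -2, 5],
            [3, 5,  6, 7],
            [2, 4,  3, 8]])

R, pivots = M.rref()  # reduced row echelon form and pivot columns
# The last column of R holds the solution (x, y, z) = (-15, 8, 2).
```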

===Cramer's rule===

'''Cramer's rule''' is an explicit formula for the solution of a system of linear equations, with each variable given by a quotient of two determinants. For example, the solution to the system

:<math>\begin{alignat}{7}
x &\; + &\; 3y &\; - &\; 2z &\; = &\; 5 \\
3x &\; + &\; 5y &\; + &\; 6z &\; = &\; 7 \\
2x &\; + &\; 4y &\; + &\; 3z &\; = &\; 8
\end{alignat}</math>

is given by

:<math>
x=\frac
{\, \begin{vmatrix}5&3&-2\\7&5&6\\8&4&3\end{vmatrix} \,}
{\, \begin{vmatrix}1&3&-2\\3&5&6\\2&4&3\end{vmatrix} \,}
,\;\;\;\;
y=\frac
{\, \begin{vmatrix}1&5&-2\\3&7&6\\2&8&3\end{vmatrix} \,}
{\, \begin{vmatrix}1&3&-2\\3&5&6\\2&4&3\end{vmatrix} \,}
,\;\;\;\;
z=\frac
{\, \begin{vmatrix}1&3&5\\3&5&7\\2&4&8\end{vmatrix} \,}
{\, \begin{vmatrix}1&3&-2\\3&5&6\\2&4&3\end{vmatrix} \,}.
</math>

For each variable, the denominator is the determinant of the matrix of coefficients, while the numerator is the determinant of a matrix in which one column has been replaced by the vector of constant terms.
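The column-replacement recipe translates directly into code; a sketch in Python with NumPy, for illustration only given the numerical caveats that follow:

```python
import numpy as np

def cramer(A, b):
    """Solve A x = b via Cramer's rule (illustration only)."""
    det_A = np.linalg.det(A)
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b                 # replace column i with the constants
        x[i] = np.linalg.det(Ai) / det_A
    return x

A = np.array([[1., 3., -2.],
              [3., 5.,  6.],
              [2., 4.,  3.]])
b = np.array([5., 7., 8.])
print(cramer(A, b))                  # approximately [-15.  8.  2.]
```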

Though Cramer's rule is important theoretically, it has little practical value for large matrices, since the computation of large determinants is somewhat cumbersome. (Indeed, large determinants are most easily computed using row reduction.) Further, Cramer's rule has very poor numerical properties, making it unsuitable for solving even small systems reliably, unless the operations are performed in rational arithmetic with unbounded precision.

===Matrix solution===

If the equation system is expressed in the matrix form <math>A\mathbf{x}=\mathbf{b}</math>, the entire solution set can also be expressed in matrix form. If the matrix ''A'' is square (has ''m'' rows and ''n''=''m'' columns) and has full rank (all ''m'' rows are independent), then the system has a unique solution given by

:<math>\mathbf{x}=A^{-1}\mathbf{b}</math>

where <math>A^{-1}</math> is the inverse of ''A''. More generally, regardless of whether ''m''=''n'' or not and regardless of the rank of ''A'', all solutions (if any exist) are given using the Moore–Penrose pseudoinverse of ''A'', denoted <math>A^+</math>, as follows:

:<math>\mathbf{x}=A^+ \mathbf{b} + \left(I - A^+ A\right)\mathbf{w}</math>

where <math>\mathbf{w}</math> is a vector of free parameters that ranges over all possible ''n''×1 vectors. A necessary and sufficient condition for any solution(s) to exist is that the potential solution obtained using <math>\mathbf{w}=\mathbf{0}</math> satisfy <math>A\mathbf{x}=\mathbf{b}</math> &mdash; that is, that <math>AA^+ \mathbf{b}=\mathbf{b}.</math> If this condition does not hold, the equation system is inconsistent and has no solution. If the condition holds, the system is consistent and at least one solution exists. For example, in the above-mentioned case in which ''A'' is square and of full rank, <math>A^+</math> simply equals <math>A^{-1}</math> and the general solution equation simplifies to

:<math>\mathbf{x}=A^{-1}\mathbf{b} + \left(I - A^{-1}A\right)\mathbf{w} = A^{-1}\mathbf{b} + \left(I-I\right)\mathbf{w} = A^{-1}\mathbf{b}</math>

as previously stated, where <math>\mathbf{w}</math> has completely dropped out of the solution, leaving only a single solution. In other cases, though, <math>\mathbf{w}</math> remains and hence an infinitude of potential values of the free parameter vector <math>\mathbf{w}</math> give an infinitude of solutions of the equation.
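The pseudoinverse formula can be checked numerically. A sketch with NumPy, using the underdetermined two-equation system from earlier in this article; `np.linalg.pinv` computes the Moore–Penrose pseudoinverse:

```python
import numpy as np

A = np.array([[1., 3., -2.],
              [3., 5.,  6.]])       # 2 equations in 3 unknowns
b = np.array([5., 7.])

A_pinv = np.linalg.pinv(A)

# Consistency test: A A+ b must equal b for a solution to exist.
assert np.allclose(A @ A_pinv @ b, b)

# Every choice of the free parameter vector w yields a solution.
for w in (np.zeros(3), np.array([1., -2., 0.5])):
    x = A_pinv @ b + (np.eye(3) - A_pinv @ A) @ w
    assert np.allclose(A @ x, b)
```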

===Other methods===

While systems of three or four equations can be readily solved by hand, computers are often used for larger systems. The standard algorithm for solving a system of linear equations is based on Gaussian elimination with some modifications. Firstly, it is essential to avoid division by small numbers, which may lead to inaccurate results. This can be done by reordering the equations if necessary, a process known as ''pivoting''. Secondly, the algorithm does not exactly do Gaussian elimination, but it computes the LU decomposition of the matrix ''A''. This is mostly an organizational tool, but it is much quicker if one has to solve several systems with the same matrix ''A'' but different vectors '''b'''.
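The factor-once, solve-many pattern described above can be sketched with SciPy (assuming it is available); `lu_factor` computes the pivoted LU decomposition and `lu_solve` reuses it for each right-hand side:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[1., 3., -2.],
              [3., 5.,  6.],
              [2., 4.,  3.]])

lu, piv = lu_factor(A)              # factor A once, with partial pivoting
for b in (np.array([5., 7., 8.]),   # the example system from above
          np.array([1., 0., 0.])):  # a second right-hand side, reusing lu
    x = lu_solve((lu, piv), b)
    assert np.allclose(A @ x, b)
```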
If the matrix ''A'' has some special structure, this can be exploited to obtain faster or more accurate algorithms. For instance, systems with a symmetric positive definite matrix can be solved twice as fast with the Cholesky decomposition. Levinson recursion is a fast method for Toeplitz matrices. Special methods exist also for matrices with many zero elements (so-called sparse matrices), which appear often in applications.
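As a small illustration of the symmetric positive definite case, Cholesky factorization writes ''A'' = ''LL''<sup>T</sup> and then solves two triangular systems (a NumPy sketch; the 2×2 matrix here is a hypothetical example):

```python
import numpy as np

A = np.array([[4., 2.],
              [2., 3.]])            # symmetric positive definite
b = np.array([6., 5.])

L = np.linalg.cholesky(A)           # lower-triangular factor, A = L @ L.T
y = np.linalg.solve(L, b)           # forward substitution
x = np.linalg.solve(L.T, y)         # back substitution
assert np.allclose(A @ x, b)        # x = (1, 1)
```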
A completely different approach is often taken for very large systems, which would otherwise take too much time or memory. The idea is to start with an initial approximation to the solution (which does not have to be accurate at all), and to change this approximation in several steps to bring it closer to the true solution. Once the approximation is sufficiently accurate, this is taken to be the solution to the system. This leads to the class of iterative methods.  For some sparse matrices, the introduction of randomness improves the speed of the iterative methods.
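A minimal instance of such an iterative method is Jacobi iteration, sketched below for a small strictly diagonally dominant system (a condition under which the iteration is known to converge; the matrix is a hypothetical example):

```python
import numpy as np

A = np.array([[10., 1., 1.],
              [1., 10., 1.],
              [1., 1., 10.]])       # strictly diagonally dominant
b = np.array([12., 12., 12.])

x = np.zeros(3)                     # crude initial approximation
D = np.diag(A)                      # diagonal entries of A
R = A - np.diag(D)                  # off-diagonal part of A
for _ in range(50):                 # refine the approximation stepwise
    x = (b - R @ x) / D
assert np.allclose(A @ x, b)        # converged to x = (1, 1, 1)
```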

There is also a quantum algorithm for linear systems of equations.

==Examples==

* One solution: <math> x + y + z = 3 </math>, <math> 2x - 3y + z = 0 </math>, and <math> -5x - 5y + 23z = 13 </math>. <math> x + y + z = 3 \implies 5x + 5y + 5z = 15</math>. We can add this to the third equation to get <math> 28z = 28 </math>, which means z = 1. So, the first two equations can be rewritten as <math> x + y = 2 </math> and <math> 2x - 3y = -1 </math>. Using substitution, elimination, or graphing, we can calculate that x = 1 and y = 1 with these two equations. Thus, the solution to the system is (x, y, z) = (1, 1, 1).
* No solutions: <math> x + y + z = 1 </math> and <math> x + y + z = 5 </math>. These equations represent two parallel planes, and there is no x, y, and z that satisfy both equations simultaneously. So, this system has no solutions.
* Infinite solutions: <math> x + y = z </math> and <math> x + y = 2z </math>. x + y = 0 for all x and y such that y = -x. Since <math> z = 2z </math> when z = 0, this system has an infinite number of solutions of the form (x, -x, 0) where x can be any real number (for example, <math> (3, -3, 0), (-0.5, 0.5, 0), </math> and <math> (\pi, -\pi, 0)</math> are solutions of this system of equations).
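The three cases above can be distinguished programmatically by comparing the rank of the coefficient matrix with the rank of the augmented matrix; a sketch with NumPy:

```python
import numpy as np

# Unique solution: ranks agree and equal the number of unknowns.
A = np.array([[ 1.,  1.,  1.],
              [ 2., -3.,  1.],
              [-5., -5., 23.]])
b = np.array([3., 0., 13.])
assert np.allclose(np.linalg.solve(A, b), [1., 1., 1.])

# No solutions: the augmented matrix has higher rank than A.
A2 = np.array([[1., 1., 1.],
               [1., 1., 1.]])
b2 = np.array([1., 5.])
assert np.linalg.matrix_rank(np.column_stack([A2, b2])) \
       > np.linalg.matrix_rank(A2)

# Infinitely many solutions: ranks agree but fall short of the
# number of unknowns (x + y - z = 0 and x + y - 2z = 0).
A3 = np.array([[1., 1., -1.],
               [1., 1., -2.]])
b3 = np.zeros(2)
r = np.linalg.matrix_rank(A3)
assert r == np.linalg.matrix_rank(np.column_stack([A3, b3])) < 3
```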
  
 
==Resources==

* [https://tutorial.math.lamar.edu/classes/alg/systemsthreevrble.aspx Linear Systems with Three Variables], Paul's Online Notes (Lamar Math)
* [https://courses.lumenlearning.com/wmopen-collegealgebra/chapter/introduction-systems-of-linear-equations-three-variables/ Systems of Equations: Three Variables], Lumen Learning
* [https://www.youtube.com/watch?v=CdpFu7t0dJ4 Solving a System of 3 Variables With Elimination], patrickJMT
* [https://www.youtube.com/watch?v=GjbRnAjVlXM Solving a System of 3 Variables With Substitution], patrickJMT
* [https://www.youtube.com/watch?v=tGPSEXVYw_o Solving a System of Two Equations with Three Variables (Infinite Solutions)], patrickJMT

== Licensing ==

Content obtained and/or adapted from:

* [https://en.wikipedia.org/wiki/System_of_linear_equations System of linear equations, Wikipedia] under a CC BY-SA license
Latest revision as of 12:16, 15 January 2022
