
Math 20 - Introduction to Linear Algebra and Multivariable Calculus
Frequently Asked Questions



General

  1. I was wondering if you had any suggestions about how to study for tests and/or any particularly helpful strategies to be more effective or productive with my studying.

    With math tests, the best practice is typically to find some problems that are on-topic for the midterm and have solutions available. Work these problems as best you can without using your notes or textbook. Then check your answers with the solutions. Any problem you miss should be analyzed further, either by yourself or with the help of a fellow classmate or CA or me. Repeat this cycle with another set of practice problems until you feel you have the material down.

  2. What is the significance of the "Grade This Exam" link on the Pre-Class Reading Assignments?

    The "Grade This Exam" link does not currently provide an accurate record of your scores on the PCRAs. In theory, I have the ability to grade your PCRAs online through the Q&A Tool, but in reality, there is a bug in the code that prevents me from doing so without deleting your responses. The Instructional Computing Group (the people who created the Q&A Tool) are working on fixing this bug.

    In the meantime, it's safe to assume that if you answered a PCRA with anything remotely sensible, you received full credit.

Chapter 1

  1. What is the difference between an augmented matrix and a coefficient matrix?

    Suppose you have a system of linear equations like

    2x1 + 3x2 = 5

    -x1 + 4x2 = 7

    Then the coefficient matrix for this system is the matrix

    [  2  3 ]
    [ -1  4 ]

    and the augmented matrix for this system is the matrix

    [  2  3  5 ]
    [ -1  4  7 ]

    The coefficient matrix captures the information on the left side of the equals signs. The augmented matrix captures the information on both sides.

    If we were to write this system as a matrix equation Ax=b, then A would be the coefficient matrix.
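    As a quick check, you can build both matrices and solve the system in Python. This is just a sketch using NumPy, with the numbers from the system above:

```python
import numpy as np

# Coefficient matrix: the left-hand sides of
#    2x1 + 3x2 = 5
#   -x1 + 4x2 = 7
A = np.array([[ 2.0, 3.0],
              [-1.0, 4.0]])

# Right-hand sides
b = np.array([5.0, 7.0])

# Augmented matrix: A with b appended as an extra column
augmented = np.column_stack([A, b])
print(augmented)

# Writing the system as Ax = b, solving recovers x1 and x2
x = np.linalg.solve(A, b)
print(np.allclose(A @ x, b))  # True
```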

  2. When stating the properties of row echelon form and reduced echelon form, the book never says whether they are talking about coefficient matrices or augmented matrices. Should we think of these properties being related to coefficient or augmented matrices? Or does it not matter?

    Any matrix can be put in echelon or reduced echelon form. How you interpret that form depends on where the matrix comes from.

    For instance, if the augmented matrix of a system of linear equations is put in echelon form and it has a row of zeros at the bottom, that doesn't imply the system is inconsistent. It just means one of the equations was redundant.

    On the other hand, if the coefficient matrix of a system of linear equations is put in echelon form and it has a row of zeros at the bottom, then the system will not be consistent for every right-hand side. Some right-hand sides will result in consistent systems; others won't.
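    One way to see this concretely is with ranks: the system Ax = b is consistent exactly when the augmented matrix [ A b ] has the same rank as A. Here is a small sketch in NumPy; the matrix is my own example, not one from the book:

```python
import numpy as np

# Coefficient matrix whose echelon form has a row of zeros at the bottom
# (the second row is twice the first)
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])

# Ax = b is consistent exactly when rank(A) equals rank of [ A b ]
def is_consistent(A, b):
    augmented = np.column_stack([A, b])
    return np.linalg.matrix_rank(A) == np.linalg.matrix_rank(augmented)

print(is_consistent(A, np.array([1.0, 2.0])))  # True: this b is in the span of the columns
print(is_consistent(A, np.array([1.0, 3.0])))  # False: this b gives an inconsistent system
```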

  3. If you have three vectors in R^3, does one of them have to be a multiple of another for them to be linearly dependent? Or could none of them be multiples of the others and still form a linearly dependent set?

    If you have three vectors in R^3 that are linearly dependent, then it is not necessarily true that one is a multiple of another. However, it is true that one must be a linear combination of the other two. For instance, the vectors

    [ 1 ]  [ 1 ]  [ 3  ]
    [ 2 ]  [ 0 ]  [ 4  ]
    [ 3 ]  [ 5 ]  [ 11 ]
    

    are linearly dependent because the third vector is equal to twice the first vector plus the second vector.
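    You can verify both claims numerically. A sketch using NumPy, with the three vectors above:

```python
import numpy as np

v1 = np.array([1.0, 2.0, 3.0])
v2 = np.array([1.0, 0.0, 5.0])
v3 = np.array([3.0, 4.0, 11.0])

# The dependence relation: v3 = 2*v1 + v2
print(np.allclose(2 * v1 + v2, v3))  # True

# No vector is a scalar multiple of another, yet the matrix with
# these vectors as columns has rank 2 < 3, so the set is dependent
M = np.column_stack([v1, v2, v3])
print(np.linalg.matrix_rank(M))  # 2
```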

  4. (Follow-Up to Previous Question) What if the second vector is a multiple of the first? Are the three of them linearly dependent? What if we have four vectors and the third is a linear combination of the first two -- are the four vectors linearly dependent?

    If you have three vectors and one of them does happen to be a multiple of the other, then, yes, the vectors are linearly dependent. For instance:

    [ 1 ]  [ 2 ]  [ 1 ]
    [ 2 ]  [ 4 ]  [ 0 ]
    [ 3 ]  [ 6 ]  [ 0 ]
    

    In this case the second vector is a multiple of the first, and so you can write 0 as a nontrivial linear combination of these vectors.

      [ 1 ]   [ 2 ]     [ 1 ]
    2 [ 2 ] - [ 4 ] + 0 [ 0 ] = 0
      [ 3 ]   [ 6 ]     [ 0 ]
    

    So these vectors are linearly dependent.

    Similar reasoning works for larger sets of vectors. So if you have four vectors and the third is a linear combination of the first two, then the whole set of four is linearly dependent.

  5. Can you explain the terms domain, codomain, range, and image?

    I can't tell you about a transformation without also telling you its domain and codomain. So when we write something like T : R^n -> R^m, this says that T is a transformation with domain R^n (the set of all possible inputs) and codomain R^m (the set of all possible outputs).

    The difference between codomain and range is this. While the outputs of T are vectors in its codomain, the range of T consists of only the vectors in the codomain that are achievable outputs of T.

    For instance, if T : R^2 -> R^2 is a linear transformation given by the formula T(x1, x2) = (x1 + x2, 0), then the codomain of T is the set R^2, but the range of T consists of only those vectors in R^2 with a 0 in the second entry. Since, for instance, there is no input to T that gives the output (4, 1), it follows that the vector (4, 1) is not in the range of T.

    The image of a vector x under a transformation T is simply the output that corresponds to the input x. For instance, in the last example, the image of the vector (2, 3) is the vector (5, 0) since T(2, 3) = (5, 0).
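    To make the range/image distinction concrete, here is the transformation from the example above, T(x1, x2) = (x1 + x2, 0), written as a small Python sketch:

```python
import numpy as np

# T : R^2 -> R^2 with T(x1, x2) = (x1 + x2, 0)
def T(x):
    return np.array([x[0] + x[1], 0.0])

# The image of (2, 3) under T is (5, 0)
print(T(np.array([2.0, 3.0])))  # [5. 0.]

# (4, 1) is not in the range of T: every output of T has a 0
# in its second entry, and (4, 1) does not
target = np.array([4.0, 1.0])
print(target[1] == 0.0)  # False
```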

  6. Regarding Theorem 12 in §1.9, I thought that if the columns of A span R^m, then those columns must be linearly independent. Are "onto" and "one-to-one" actually the same thing?

    If the columns of A span R^m, they need not be linearly independent. For instance, consider this matrix:

    [ 1 0 0 2 ]
    [ 0 1 0 3 ]
    [ 0 0 1 4 ]
    

    The columns span R^3, but they are not linearly independent, since you can write the fourth column as a linear combination of the first three columns.

    Now if A is a square n x n matrix and its columns span R^n, then they do have to be linearly independent. There wouldn't be "room" for a redundant column like in the above example.
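    The 3 x 4 matrix above makes a nice numerical check as well. A sketch using NumPy:

```python
import numpy as np

A = np.array([[1.0, 0.0, 0.0, 2.0],
              [0.0, 1.0, 0.0, 3.0],
              [0.0, 0.0, 1.0, 4.0]])

# Rank 3 means a pivot in every row, so the columns span R^3
print(np.linalg.matrix_rank(A))  # 3

# But with 4 columns and only rank 3, the columns are linearly dependent:
# the fourth column is 2*(col 1) + 3*(col 2) + 4*(col 3)
combo = 2 * A[:, 0] + 3 * A[:, 1] + 4 * A[:, 2]
print(np.allclose(combo, A[:, 3]))  # True
```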

Chapter 2

  1. I feel that Theorem 5 on pg. 120 and part g of the Invertible Matrix Theorem are confusing. So, should it be that for each b, the equation Ax=b has only one solution, instead of at least one?

    It turns out that the following statements are equivalent for an n x n matrix A.

    (i) The matrix A is invertible.
    (ii) The equation Ax=b has at least one solution for every b.
    (iii) The equation Ax=b has a unique solution for every b.

    Why? Suppose that A is invertible. Then A^(-1)b is a solution of Ax=b, so Ax=b is consistent for every b. So we've shown that (i) implies (ii).

    Now suppose that Ax=b has at least one solution for every b. Consider the reduced row echelon form of the augmented matrix [ A b ]. Since the system is consistent, the last row can't look like [ 0 0 ... 0 s ], where s is a nonzero number. Moreover, if the reduced row echelon form of A had a row of all zeros, we could choose a b that puts a nonzero number in the last entry of that row, making the system inconsistent. So the reduced row echelon form of A can't have a row of all zeros, which means A has a pivot position in every row. Since A is square, it must then have a pivot position in every column. Thus the system Ax=b has no free variables, and so it has a unique solution. This shows that (ii) implies (iii).

    Finally, if the equation Ax=b has a unique solution for every b, then the system Ax=b has no free variables. This implies that A has a pivot position in every column. Since A is square, this implies that the reduced row echelon form of A is the identity matrix, and so the matrix is invertible. This shows that (iii) implies (i), and so all three statements are equivalent.
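    Here is a small numerical illustration of the equivalence; the matrix is my own example, not one from the book:

```python
import numpy as np

# An invertible 2 x 2 matrix (its determinant is nonzero)
A = np.array([[2.0, 1.0],
              [1.0, 1.0]])
print(np.linalg.det(A))  # 1.0, up to rounding

# For any b, Ax = b has exactly one solution, namely x = A^(-1) b
b = np.array([3.0, 2.0])
x = np.linalg.solve(A, b)
print(np.allclose(A @ x, b))  # True
```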

  2. I'm having a little trouble understanding the concept of a basis. I was wondering if you could just explain briefly what they are and why they're important.

    Let's say you have three linearly independent vectors u, v, and w in Rn. The set of all linear combinations of u, v, and w is called the span of u, v, and w. This is a subspace of Rn. Since every vector in this subspace is a linear combination of u, v, and w and since u, v, and w are linearly independent, it follows that {u, v, w} is a basis for this subspace. They span the subspace and are linearly independent.

    Instead of starting with some vectors, suppose we started with a particular subspace. (Maybe it's the column space of a matrix or the range of a linear transformation or the eigenspace of a particular eigenvalue.) If we can find a set of vectors that spans the subspace and is linearly independent, then that set of vectors is called a basis.

    A basis for a particular subspace is a set of vectors that is just the right size to describe the subspace, in the following sense. The basis vectors span the subspace, so there has to be enough of them to do so. But the basis vectors are linearly independent, so there can't be too many of them or we would have some redundant vectors.

    It turns out that if you have a particular subspace in mind, any basis you can find for that subspace has the same number of vectors in it. This means we can associate to a particular subspace a particular number -- the number of vectors in any basis for that subspace. We call this the dimension of the subspace.
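    Since any basis for a subspace has the same number of vectors, the dimension of a span can be computed as the rank of the matrix whose columns are the spanning vectors. A sketch using NumPy, with vectors of my own choosing:

```python
import numpy as np

# Three vectors in R^3; the third is the sum of the first two,
# so together they span only a 2-dimensional subspace (a plane)
u = np.array([1.0, 0.0, 1.0])
v = np.array([0.0, 1.0, 1.0])
w = u + v
M = np.column_stack([u, v, w])

# The dimension of the span is the rank of M
print(np.linalg.matrix_rank(M))  # 2

# {u, v} already spans the subspace and is linearly independent,
# so it is a basis; its two vectors match the dimension
print(np.linalg.matrix_rank(np.column_stack([u, v])))  # 2
```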


Page maintained by Derek Bruff (bruff [at] fas.harvard.edu).
Last updated on April 24, 2005.