// Numbas version: finer_feedback_settings {"name": "MA100 LT Week 4", "extensions": [], "custom_part_types": [], "resources": [], "navigation": {"allowregen": true, "showfrontpage": false, "preventleave": false, "typeendtoleave": false}, "question_groups": [{"pickingStrategy": "all-ordered", "questions": [{"statement": "
Lent Term Week 4 (lectures 27 and 28): In this question you will look at diagonalisability and orthogonal diagonalisability.
\nIf you have not provided an answer to every input gap of a question or part of the question, and you try to submit your answers to the question or part, then you will see the message \"Can not submit answer - check for errors\". In reality your answer has been submitted, but the system is just concerned that you have not submitted an answer to every input gap. For this reason, please ensure that you provide an answer to every input gap in the question or part before submitting. Even if you are unsure of the answer, write down what you think is most likely to be correct; you can always change your answer or retry the question.
\nAs with all questions, there may be parts where you can choose to \"Show steps\". This may give a hint, or it may present sub-parts which will help you to solve that part of the question. Furthermore, remember to always press the \"Show feedback\" button at the end of each part. Sometimes, helpful feedback will be given here, and often it will depend on how correctly you have answered and will link to other parts of the question. Hence, always retry the parts until you obtain full marks, and then look at the feedback again.
Keep in mind that in order to see the feedback for a particular part of a question, you must provide a full (but not necessarily correct) answer to that part. Do not worry though, as you can look at the feedback and then amend your answer accordingly.
Furthermore, as with all questions, choosing to reveal the answers will only show you the answers which change every time the question is loaded (i.e. answers to randomised questions); the fixed answers will not be revealed.
This is the matrix with columns equal to our g1 , g2 , and g3.
", "name": "P", "definition": "matrix([aa , bb , 1] , [1 , 0 , -aa] , [0 , 1 , -bb])"}, "g1": {"group": "Variables for parts d and e", "templateType": "anything", "description": "This is our first eigenvector.
", "name": "g1", "definition": "vector(aa , 1 , 0)"}, "w2": {"group": "Variables for parts d and e", "templateType": "anything", "description": "This will be found by the student when applying the Gram-schmidt process to obtain u1 , u2 , u3 from g1 , g2 , g3.
", "name": "w2", "definition": "g2 - dot(g2 , u1)*u1"}, "g3_2_norm": {"group": "Variables for parts d and e", "templateType": "anything", "description": "This is the absolute value of g3_2.
", "name": "g3_2_norm", "definition": "(dot(g3 , g3))^(1/2)"}, "hh": {"group": "Variables for parts d and e", "templateType": "anything", "description": "This is the eigenvalue for our third vector.
", "name": "hh", "definition": "(1 + bb^2 + aa^2)*random(4,5,6)"}, "w2_norm": {"group": "Variables for parts d and e", "templateType": "anything", "description": "This is the absolute value of w2.
", "name": "w2_norm", "definition": "(dot(w2 , w2))^(1/2)"}, "gg": {"group": "Variables for parts d and e", "templateType": "anything", "description": "This is the eigenvalue for our first two eigenvectors.
", "name": "gg", "definition": "(1 + bb^2 + aa^2)*random(1,2,3)"}, "g": {"group": "Variables for parts a to c", "templateType": "anything", "description": "This is the eigenvalue of our second eigenvector.
", "name": "g", "definition": "(a*j*d + c*b*k) * (f + random(1..2)) "}, "a": {"group": "Variables for parts a to c", "templateType": "anything", "description": "This is one of the entries for our first eigenvector.
", "name": "a", "definition": "random(-3..3 except(0))"}, "u2": {"group": "Variables for parts d and e", "templateType": "anything", "description": "u1 , u2 , and u3 are what we get when we apply the Gram-Schmidt process to the g1 , g2 , and g3.
", "name": "u2", "definition": "(1/(((aa^2)*(bb^2) + 2*(aa^2) + bb^2 + aa^4 + 1 )^(1/2))) * vector(bb , -aa*bb , 1 + aa^2)"}, "f1": {"group": "Variables for parts a to c", "templateType": "anything", "description": "This is our first eigenvector.
", "name": "f1", "definition": "vector(a , b , 0)"}, "A_T": {"group": "Variables for parts a to c", "templateType": "anything", "description": "This is the matrix for the transformation T which is defined by T(f1) = f * f1 , T(f2) = g * f2 , T(f3) = h * f3. Note that f,g, and h have been chosen so that they are multiples of the determinant of P_C, and hence A_T = P_C * DD * P_C_inv has integer entries.
", "name": "A_T", "definition": "P_C * DD * P_C_inv"}, "f": {"group": "Variables for parts a to c", "templateType": "anything", "description": "This is the eigenvalue of our first eigenvector.
", "name": "f", "definition": "0"}, "g3": {"group": "Variables for parts d and e", "templateType": "anything", "description": "This is our third eigenvector.
", "name": "g3", "definition": "vector(1 , -aa , -bb)"}, "PP_D": {"group": "Variables for parts d and e", "templateType": "anything", "description": "This is the orthogonal matrix with u1 , u2 , and u3 as the columns.
", "name": "PP_D", "definition": "matrix([aa/((1 + aa^2)^(1/2)) , bb/(((aa^2)*(bb^2) + 2*(aa^2) + bb^2 + aa^4 + 1 )^(1/2)) , 1/((1+ aa^2 +bb^2)^(1/2))] , [1/((1 + aa^2)^(1/2)) , -bb*aa/(((aa^2)*(bb^2) + 2*(aa^2) + bb^2 + aa^4 + 1 )^(1/2)) , -aa/((1+ aa^2 +bb^2)^(1/2))] , [0 , (1+aa^2)/(((aa^2)*(bb^2) + 2*(aa^2) + bb^2 + aa^4 + 1 )^(1/2)) , -bb/((1+ aa^2 +bb^2)^(1/2))])"}, "h": {"group": "Variables for parts a to c", "templateType": "anything", "description": "This is the eigenvalue of our third eigenvector. Note that |f|<|g|<|h| by definition
", "name": "h", "definition": "(a*j*d + c*b*k) * (f + random(3..4))"}, "AA_T": {"group": "Variables for parts d and e", "templateType": "anything", "description": "This is our matric A_T for parts d and e. It satisfies A_T g1 = gg * g1 and A_T g2 = hh * g2.
", "name": "AA_T", "definition": "P * DDD * P_inv"}, "P_C": {"group": "Variables for parts a to c", "templateType": "anything", "description": "This is the transition matrix from basis C to the standard basis B.
", "name": "P_C", "definition": "matrix([a , c , 0] , [b , 0 , j] , [0 , d , k])"}, "N_hh": {"group": "Variables for parts d and e", "templateType": "anything", "description": "This is the RRE form of A_T - hh * I
", "name": "N_hh", "definition": "matrix([1 , 0 , 1/bb] , [0 , 1 , -aa/bb] , [0 , 0 , 0])"}, "j": {"group": "Variables for parts a to c", "templateType": "anything", "description": "This is one of the entries for our third eigenvector.
", "name": "j", "definition": "random(-3..3 except(0))"}, "DD": {"group": "Variables for parts a to c", "templateType": "anything", "description": "This is the diagonal matrix consisting of the eigenvalues in ascending order.
", "name": "DD", "definition": "matrix([f , 0 , 0] , [0 , g , 0] , [0 , 0 , h])"}, "P_C_inv": {"group": "Variables for parts a to c", "templateType": "anything", "description": "This is the inverse of P_C
", "name": "P_C_inv", "definition": "(1/(a*j*d + c*b*k)) * matrix([j*d , c*k , -c*j] , [b*k , -a*k , a*j] , [-b*d , a*d , b*c])"}, "aa": {"group": "Variables for parts d and e", "templateType": "anything", "description": "This will be used in the definition of our eigenvectors.
", "name": "aa", "definition": "random(-3..3 except(0))"}, "c": {"group": "Variables for parts a to c", "templateType": "anything", "description": "This is one of the entries for our second eigenvector.
", "name": "c", "definition": "random(-3..3 except(0))"}, "N_gg": {"group": "Variables for parts d and e", "templateType": "anything", "description": "This is the RRE form of A_T - gg * I
", "name": "N_gg", "definition": "matrix([1 , -aa , -bb] , [0 , 0 , 0] , [0 , 0 , 0])"}, "d": {"group": "Variables for parts a to c", "templateType": "anything", "description": "This is one of the entries for our second eigenvector.
", "name": "d", "definition": "random(-3..3 except(0))"}, "DDD": {"group": "Variables for parts d and e", "templateType": "anything", "description": "This is the diagonal matrix with our eigenvalues for the diagonal entries.
", "name": "DDD", "definition": "matrix([gg , 0 , 0] , [0 , gg , 0] , [0 , 0 , hh])"}, "g2": {"group": "Variables for parts d and e", "templateType": "anything", "description": "This is our second eigenvector.
", "name": "g2", "definition": "vector(bb , 0 , 1)"}, "u1": {"group": "Variables for parts d and e", "templateType": "anything", "description": "u1 , u2 , and u3 are what we get when we apply the Gram-Schmidt process to the g1 , g2 , and g3.
", "name": "u1", "definition": "(1/((1+ aa^2 )^(1/2))) * vector(aa , 1 , 0)"}, "b": {"group": "Variables for parts a to c", "templateType": "anything", "description": "This is one of the entries for our first eigenvector.
", "name": "b", "definition": "random(-3..3 except(0))"}, "g1_norm": {"group": "Variables for parts d and e", "templateType": "anything", "description": "This is the absolute value of g1.
", "name": "g1_norm", "definition": "(dot(g1 , g1))^(1/2)"}, "u3": {"group": "Variables for parts d and e", "templateType": "anything", "description": "u1 , u2 , and u3 are what we get when we apply the Gram-Schmidt process to the g1 , g2 , and g3.
", "name": "u3", "definition": "(1/((1+ aa^2 +bb^2)^(1/2))) * vector(1 , -aa , -bb)"}, "k": {"group": "Variables for parts a to c", "templateType": "anything", "description": "This is one of the entries for ourthird eigenvector.
", "name": "k", "definition": "random(-3..3 except(0) except(-a*d*j/(b*c)))"}, "f3": {"group": "Variables for parts a to c", "templateType": "anything", "description": "This is our third eigenvector. By our definition of k, this vector is linearly independent to f1 and f2. f1 , f2 , and f3 form a basis for R^3, which we call C.
", "name": "f3", "definition": "vector(0 , j , k)"}, "f2": {"group": "Variables for parts a to c", "templateType": "anything", "description": "This is our second eigenvector. Note that it is linearly independent from f1.
", "name": "f2", "definition": "vector(c , 0 , d)"}, "bb": {"group": "Variables for parts d and e", "templateType": "anything", "description": "This will be used in the definition of our eigenvectors.
", "name": "bb", "definition": "random(-3..3 except(0))"}, "P_inv": {"group": "Variables for parts d and e", "templateType": "anything", "description": "This is the inverse of P.
", "name": "P_inv", "definition": "(1/(1 + bb^2 + aa^2)) * matrix([aa , 1 + bb^2 , -aa*bb] , [bb , -aa*bb , aa^2 + 1] , [1 , -aa , -bb])"}, "g3_2": {"group": "Variables for parts d and e", "templateType": "anything", "description": "This is a rescaling of our eigenvector g3 so its matches what the student will get when they find it by using the null space of A_T - hh * I.
", "name": "g3_2", "definition": "vector(-1/bb , aa/bb , 1)"}}, "name": "MA100 LT Week 4", "functions": {}, "extensions": [], "parts": [{"showFeedbackIcon": true, "scripts": {}, "variableReplacementStrategy": "originalfirst", "customMarkingAlgorithm": "", "stepsPenalty": 0, "gaps": [{"showFeedbackIcon": true, "scripts": {}, "mustBeReducedPC": 0, "variableReplacementStrategy": "originalfirst", "notationStyles": ["plain", "en", "si-en"], "allowFractions": false, "minValue": "f", "mustBeReduced": false, "correctAnswerStyle": "plain", "showCorrectAnswer": true, "customMarkingAlgorithm": "", "type": "numberentry", "extendBaseMarkingAlgorithm": true, "variableReplacements": [], "marks": 1, "maxValue": "f", "unitTests": [], "correctAnswerFraction": false}, {"showFeedbackIcon": true, "scripts": {}, "mustBeReducedPC": 0, "variableReplacementStrategy": "originalfirst", "notationStyles": ["plain", "en", "si-en"], "allowFractions": false, "minValue": "g", "mustBeReduced": false, "correctAnswerStyle": "plain", "showCorrectAnswer": true, "customMarkingAlgorithm": "", "type": "numberentry", "extendBaseMarkingAlgorithm": true, "variableReplacements": [], "marks": 1, "maxValue": "g", "unitTests": [], "correctAnswerFraction": false}, {"showFeedbackIcon": true, "scripts": {}, "mustBeReducedPC": 0, "variableReplacementStrategy": "originalfirst", "notationStyles": ["plain", "en", "si-en"], "allowFractions": false, "minValue": "h", "mustBeReduced": false, "correctAnswerStyle": "plain", "showCorrectAnswer": true, "customMarkingAlgorithm": "", "type": "numberentry", "extendBaseMarkingAlgorithm": true, "variableReplacements": [], "marks": 1, "maxValue": "h", "unitTests": [], "correctAnswerFraction": false}], "showCorrectAnswer": true, "unitTests": [], "type": "gapfill", "extendBaseMarkingAlgorithm": true, "variableReplacements": [], "marks": 0, "sortAnswers": false, "prompt": "Consider the linear transformation $T : \\mathbb{R}^3 \\longrightarrow \\mathbb{R}^3$ defined by $T( 
\\boldsymbol{v} ) = A_T \\boldsymbol{v}$, for all $\\boldsymbol{v} \\in \\mathbb{R}^3$, where $A_T = \\var{{A_T}}$.
\nIn parts a to c, we will diagonalise the matrix $A_T$. That is, we will find a diagonal matrix $D$ and an invertible matrix $P$ such that $P^{-1} A_T P = D$.
\nFirst we will need to find the eigenvalues of $A_T$. Find the eigenvalues $k$, $l$, and $m$ of $A_T$, with $|k| < |l| < |m|$. (Press the \"Show steps\" button if you require a hint).
\n$k =$ [[0]] .
$l =$ [[1]] .
$m =$ [[2]] .
Suppose $A$ is an $n \\times n$ matrix. In order to find the eigenvalues of $A$ we must solve the equation $\\det (A - \\lambda I_n) = 0$ for $\\lambda$, where $I_n$ is the $n \\times n$ identity matrix.
\nThat is, we first find the determinant of the matrix $A - \\lambda I_n$. This will be a polynomial of $\\lambda$ with degree $n$. We then determine when this polynomial is equal to zero; i.e. we find its roots. These roots are the eigenvalues.
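\nFor instance (a small example, separate from the question above): for $A = \\begin{pmatrix} 2 & 1 \\\\ 1 & 2 \\end{pmatrix}$ we have $\\det (A - \\lambda I_2) = (2 - \\lambda)^2 - 1 = (\\lambda - 1)(\\lambda - 3)$, so the eigenvalues of $A$ are $1$ and $3$.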
"}]}, {"showFeedbackIcon": true, "scripts": {}, "variableReplacementStrategy": "originalfirst", "customMarkingAlgorithm": "", "stepsPenalty": 0, "gaps": [{"showFeedbackIcon": true, "showCorrectAnswer": true, "variableReplacementStrategy": "originalfirst", "customMarkingAlgorithm": "", "allowFractions": false, "allowResize": false, "numRows": "3", "unitTests": [], "correctAnswer": "f1", "type": "matrix", "extendBaseMarkingAlgorithm": true, "variableReplacements": [], "marks": 1, "correctAnswerFractions": false, "markPerCell": false, "scripts": {}, "numColumns": 1, "tolerance": 0}, {"showFeedbackIcon": true, "showCorrectAnswer": true, "variableReplacementStrategy": "originalfirst", "customMarkingAlgorithm": "", "allowFractions": false, "allowResize": false, "numRows": "3", "unitTests": [], "correctAnswer": "f2", "type": "matrix", "extendBaseMarkingAlgorithm": true, "variableReplacements": [], "marks": 1, "correctAnswerFractions": false, "markPerCell": false, "scripts": {}, "numColumns": 1, "tolerance": 0}, {"showFeedbackIcon": true, "showCorrectAnswer": true, "variableReplacementStrategy": "originalfirst", "customMarkingAlgorithm": "", "allowFractions": false, "allowResize": false, "numRows": "3", "unitTests": [], "correctAnswer": "f3", "type": "matrix", "extendBaseMarkingAlgorithm": true, "variableReplacements": [], "marks": 1, "correctAnswerFractions": false, "markPerCell": false, "scripts": {}, "numColumns": 1, "tolerance": 0}], "showCorrectAnswer": true, "unitTests": [], "type": "gapfill", "extendBaseMarkingAlgorithm": true, "variableReplacements": [], "marks": 0, "sortAnswers": false, "prompt": "Now we will find corresponding eigenvectors to the eigenvalues. Let $\\boldsymbol{f_1}$ be an eigenvector for $k$, $\\boldsymbol{f_2}$ be an eigenvector for $l$, and $\\boldsymbol{f_3}$ be an eigenvector for $m$.
\nSuppose we are also told that $\\boldsymbol{f_1}$, $\\boldsymbol{f_2}$, $\\boldsymbol{f_3}$ are of the form $\\boldsymbol{f_1} = \\begin{pmatrix} \\var{{a}} \\\\ * \\\\* \\end{pmatrix}$ , $\\boldsymbol{f_2} = \\begin{pmatrix} \\var{{c}} \\\\ * \\\\* \\end{pmatrix}$ , $\\boldsymbol{f_3} = \\begin{pmatrix} * \\\\ * \\\\ \\var{{k}} \\end{pmatrix}$.
\nFind $\\boldsymbol{f_1}$, $\\boldsymbol{f_2}$, $\\boldsymbol{f_3}$. (If you require a hint, press the \"Show steps\" button).
\n$\\boldsymbol{f_1} =$ [[0]] .
$\\boldsymbol{f_2} =$ [[1]] .
$\\boldsymbol{f_3} =$ [[2]] .
Suppose we have a matrix $A$ with eigenvalue $\\lambda$, and we want to find an eigenvector, $\\boldsymbol{v}$, of $A$ corresponding to the eigenvalue $\\lambda$.
\nFor this to be the case, $\\boldsymbol{v}$ must satisfy $A \\boldsymbol{v} = \\lambda \\boldsymbol{v}$. This is equivalent to satisfying $(A - \\lambda I) \\boldsymbol{v} = \\boldsymbol{0}$. That is, $\\boldsymbol{v}$ must be a non-zero vector in the null space of $A - \\lambda I$: $\\boldsymbol{v} \\in N(A-\\lambda I)$.
\nNow, we recall that the null space of $A - \\lambda I$ is the same as the null space of the reduced row echelon form of $A - \\lambda I$; i.e. $N(A-\\lambda I) = N \\Big(\\textbf{RRE} (A-\\lambda I) \\Big)$. It is easier to deal with the reduced row echelon form here, so the first step is to find $\\textbf{RRE} (A-\\lambda I)$.
\nOnce we have found $\\textbf{RRE} (A-\\lambda I)$, we can find $N \\Big(\\textbf{RRE} (A-\\lambda I) \\Big)$ in the same way that we would find the set of solutions to $\\Big(\\textbf{RRE} (A-\\lambda I) \\Big) \\boldsymbol{x} = \\boldsymbol{0}$.
\n$N(A-\\lambda I)$ (which is equal to $N \\Big(\\textbf{RRE} (A-\\lambda I) \\Big)$) is called the eigenspace of the eigenvalue $\\lambda$, and it contains all eigenvectors of the eigenvalue $\\lambda$.
\nFor the question above, we just need to find the eigenspaces for the eigenvalues $k$, $l$, and $m$, and then pick an eigenvector $\\boldsymbol{f_1}$, $\\boldsymbol{f_2}$, and $\\boldsymbol{f_3}$ from each eigenspace, respectively, so that they satisfy the conditions given in the question: $\\boldsymbol{f_1}$, $\\boldsymbol{f_2}$, $\\boldsymbol{f_3}$ are of the form $\\boldsymbol{f_1} = \\begin{pmatrix} \\var{{a}} \\\\ * \\\\* \\end{pmatrix}$ , $\\boldsymbol{f_2} = \\begin{pmatrix} \\var{{c}} \\\\ * \\\\* \\end{pmatrix}$ , $\\boldsymbol{f_3} = \\begin{pmatrix}* \\\\ * \\\\ \\var{{k}} \\end{pmatrix}$.
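\nAs a small illustration (separate from the question above): the matrix $A = \\begin{pmatrix} 2 & 1 \\\\ 1 & 2 \\end{pmatrix}$ has $3$ as an eigenvalue, and $A - 3I_2 = \\begin{pmatrix} -1 & 1 \\\\ 1 & -1 \\end{pmatrix}$, whose reduced row echelon form is $\\begin{pmatrix} 1 & -1 \\\\ 0 & 0 \\end{pmatrix}$. The solutions of $\\begin{pmatrix} 1 & -1 \\\\ 0 & 0 \\end{pmatrix} \\boldsymbol{x} = \\boldsymbol{0}$ are exactly the multiples of $\\begin{pmatrix} 1 \\\\ 1 \\end{pmatrix}$, so $\\begin{pmatrix} 1 \\\\ 1 \\end{pmatrix}$ is an eigenvector of $A$ for the eigenvalue $3$.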
\n"}]}, {"showFeedbackIcon": true, "scripts": {}, "variableReplacementStrategy": "originalfirst", "customMarkingAlgorithm": "", "stepsPenalty": 0, "gaps": [{"showFeedbackIcon": true, "showCorrectAnswer": true, "variableReplacementStrategy": "originalfirst", "customMarkingAlgorithm": "", "allowFractions": false, "allowResize": false, "numRows": "3", "unitTests": [], "correctAnswer": "P_C", "type": "matrix", "extendBaseMarkingAlgorithm": true, "variableReplacements": [], "marks": "2", "correctAnswerFractions": false, "markPerCell": false, "scripts": {}, "numColumns": "3", "tolerance": 0}, {"showFeedbackIcon": true, "showCorrectAnswer": true, "variableReplacementStrategy": "originalfirst", "customMarkingAlgorithm": "", "allowFractions": false, "allowResize": false, "numRows": "3", "unitTests": [], "correctAnswer": "DD", "type": "matrix", "extendBaseMarkingAlgorithm": true, "variableReplacements": [], "marks": "2", "correctAnswerFractions": false, "markPerCell": false, "scripts": {}, "numColumns": "3", "tolerance": 0}], "showCorrectAnswer": true, "unitTests": [], "type": "gapfill", "extendBaseMarkingAlgorithm": true, "variableReplacements": [], "marks": 0, "sortAnswers": false, "prompt": "Now that we have found the eigenvalues of $A_T$ and corresponding eigenvectors, we can find a diagonal matrix $D$ and an invertible matrix $P$ such that $P^{-1} A_T P = D$. As discussed in the lecture notes, this is possible because there is a basis, $C$, of $\\mathbb{R}^3$ which consists of eigenvectors of $A_T$; namely, $C = \\{ \\boldsymbol{f_1} , \\boldsymbol{f_2} , \\boldsymbol{f_3} \\}$. (We know $\\boldsymbol{f_1} , \\boldsymbol{f_2} , \\boldsymbol{f_3}$ are linearly independent because they come from different eigenvalues; due to this, and the fact that they are three vectors, they must also span $\\mathbb{R^3}$, and therefore form a basis for $\\mathbb{R}^3$.) See section 27.1 of the lecture notes for more details.
\nGiven what we have found in parts a and b, we can find $P$ and $D$ immediately, without even doing any further calculations. What are $P$ and $D$? (If you require a hint, please press the \"Show steps\" button below).
\n$P= $ [[0]] .
$D =$ [[1]] .
Suppose we have an $n \\times n$ matrix $A$ with $n$ linearly independent eigenvectors $\\boldsymbol{v_1} , \\boldsymbol{v_2} , \\ldots , \\boldsymbol{v_n}$ and corresponding eigenvalues $\\lambda_1 , \\lambda_2 , \\ldots , \\lambda_n$ (i.e. these eigenvectors form a basis of $\\mathbb{R}^n$).
\nLet $P = (\\boldsymbol{v_1} \\; \\boldsymbol{v_2} \\; \\ldots \\; \\boldsymbol{v_n})$ and let $D$ be the $n \\times n$ diagonal matrix with $\\lambda_i$ in the $(i,i)$ entry. Then, $P^{-1} A P = D$.
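\nFor instance, the matrix $A = \\begin{pmatrix} 2 & 1 \\\\ 1 & 2 \\end{pmatrix}$ has eigenvectors $\\begin{pmatrix} 1 \\\\ -1 \\end{pmatrix}$ and $\\begin{pmatrix} 1 \\\\ 1 \\end{pmatrix}$ with eigenvalues $1$ and $3$ respectively, so taking $P = \\begin{pmatrix} 1 & 1 \\\\ -1 & 1 \\end{pmatrix}$ and $D = \\begin{pmatrix} 1 & 0 \\\\ 0 & 3 \\end{pmatrix}$ gives $P^{-1} A P = D$.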
"}]}, {"showFeedbackIcon": true, "scripts": {}, "variableReplacementStrategy": "originalfirst", "customMarkingAlgorithm": "", "gaps": [{"showFeedbackIcon": true, "showCorrectAnswer": true, "variableReplacementStrategy": "originalfirst", "customMarkingAlgorithm": "", "allowFractions": true, "allowResize": false, "numRows": "3", "unitTests": [], "correctAnswer": "N_gg", "type": "matrix", "extendBaseMarkingAlgorithm": true, "variableReplacements": [], "marks": 1, "correctAnswerFractions": true, "markPerCell": false, "scripts": {}, "numColumns": "3", "tolerance": 0}, {"showFeedbackIcon": true, "showCorrectAnswer": true, "variableReplacementStrategy": "originalfirst", "customMarkingAlgorithm": "", "allowFractions": true, "allowResize": false, "numRows": "3", "unitTests": [], "correctAnswer": "g1", "type": "matrix", "extendBaseMarkingAlgorithm": true, "variableReplacements": [], "marks": "0.5", "correctAnswerFractions": true, "markPerCell": false, "scripts": {}, "numColumns": 1, "tolerance": 0}, {"showFeedbackIcon": true, "showCorrectAnswer": true, "variableReplacementStrategy": "originalfirst", "customMarkingAlgorithm": "", "allowFractions": true, "allowResize": false, "numRows": "3", "unitTests": [], "correctAnswer": "g2", "type": "matrix", "extendBaseMarkingAlgorithm": true, "variableReplacements": [], "marks": "0.5", "correctAnswerFractions": true, "markPerCell": false, "scripts": {}, "numColumns": 1, "tolerance": 0}, {"showFeedbackIcon": true, "showCorrectAnswer": true, "variableReplacementStrategy": "originalfirst", "customMarkingAlgorithm": "", "allowFractions": true, "allowResize": false, "numRows": "3", "unitTests": [], "correctAnswer": "N_hh", "type": "matrix", "extendBaseMarkingAlgorithm": true, "variableReplacements": [], "marks": 1, "correctAnswerFractions": true, "markPerCell": false, "scripts": {}, "numColumns": "3", "tolerance": 0}, {"showFeedbackIcon": true, "showCorrectAnswer": true, "variableReplacementStrategy": "originalfirst", 
"customMarkingAlgorithm": "", "allowFractions": true, "allowResize": false, "numRows": "3", "unitTests": [], "correctAnswer": "g3_2", "type": "matrix", "extendBaseMarkingAlgorithm": true, "variableReplacements": [], "marks": 1, "correctAnswerFractions": true, "markPerCell": false, "scripts": {}, "numColumns": 1, "tolerance": 0}, {"showFeedbackIcon": true, "scripts": {}, "vsetRangePoints": 5, "variableReplacementStrategy": "originalfirst", "customMarkingAlgorithm": "", "failureRate": 1, "checkVariableNames": false, "extendBaseMarkingAlgorithm": true, "showCorrectAnswer": true, "unitTests": [], "type": "jme", "expectedVariableNames": [], "variableReplacements": [], "showPreview": true, "marks": 1, "checkingAccuracy": 0.001, "answer": "{g1_norm}", "checkingType": "absdiff", "vsetRange": [0, 1]}, {"showFeedbackIcon": true, "showCorrectAnswer": true, "variableReplacementStrategy": "originalfirst", "customMarkingAlgorithm": "", "allowFractions": true, "allowResize": false, "numRows": "3", "unitTests": [], "correctAnswer": "w2", "type": "matrix", "extendBaseMarkingAlgorithm": true, "variableReplacements": [], "marks": 1, "correctAnswerFractions": true, "markPerCell": false, "scripts": {}, "numColumns": 1, "tolerance": 0}, {"showFeedbackIcon": true, "scripts": {}, "vsetRangePoints": 5, "variableReplacementStrategy": "originalfirst", "customMarkingAlgorithm": "", "failureRate": 1, "checkVariableNames": false, "extendBaseMarkingAlgorithm": true, "showCorrectAnswer": true, "unitTests": [], "type": "jme", "expectedVariableNames": [], "variableReplacements": [], "showPreview": true, "marks": 1, "checkingAccuracy": 0.001, "answer": "{w2_norm}", "checkingType": "absdiff", "vsetRange": [0, 1]}, {"showFeedbackIcon": true, "scripts": {}, "vsetRangePoints": 5, "variableReplacementStrategy": "originalfirst", "customMarkingAlgorithm": "", "failureRate": 1, "checkVariableNames": false, "extendBaseMarkingAlgorithm": true, "showCorrectAnswer": true, "unitTests": [], "type": "jme", 
"expectedVariableNames": [], "variableReplacements": [], "showPreview": true, "marks": 1, "checkingAccuracy": 0.001, "answer": "{g3_2_norm}", "checkingType": "absdiff", "vsetRange": [0, 1]}], "showCorrectAnswer": true, "unitTests": [], "type": "gapfill", "extendBaseMarkingAlgorithm": true, "variableReplacements": [], "marks": 0, "sortAnswers": false, "prompt": "Now consider a new linear transformation $T : \\mathbb{R}^3 \\longrightarrow \\mathbb{R}^3$ defined by $T( \\boldsymbol{v} ) = A_T \\boldsymbol{v}$, for all $\\boldsymbol{v} \\in \\mathbb{R}^3$, where $A_T = \\var{{AA_T}}$. We are also told that the eigenvalues of $A_T$ are $\\var{{gg}}$ and $\\var{{hh}}$.
\nIn parts d to f, we will orthogonally diagonalise the matrix $A_T$. That is, we will find a diagonal matrix $D$ and an orthogonal matrix $P$ such that $P^{\\text{T}} A_T P = D$. (Recall that an invertible square matrix $M$ is orthogonal if and only if $M^{-1} = M^{\\text{T}}$).
\nTo do this we will need to find an orthonormal basis of $\\mathbb{R}^3$ consisting of eigenvectors of $A_T$. First we will find a regular basis of eigenvectors, and then we will orthonormalise it by applying the Gram-Schmidt process. We have already been told what the eigenvalues are, so this will make it easier to find the eigenspaces/eigenvectors.
\ni) The first eigenvalue is $\\var{{gg}}$. We know that the eigenspace of $\\var{{gg}}$ is $N(A_T - (\\var{gg}) I_3 )$, which is the same as $N \\Big(\\textbf{RRE} \\big( A_T - (\\var{gg}) I_3 \\big) \\Big)$. It is much easier to deal with the latter than the former.
\nWhat is $\\textbf{RRE} \\big( A_T - (\\var{gg}) I_3 \\big)$?
$\\textbf{RRE} \\big( A_T - (\\var{gg}) I_3 \\big) =$ [[0]] .
$N \\Big(\\textbf{RRE} \\big( A_T - (\\var{gg}) I_3 \\big) \\Big)$ is the set of all solutions to $\\textbf{RRE} \\big( A_T - (\\var{gg}) I_3 \\big) \\begin{pmatrix} x \\\\ y \\\\ z \\end{pmatrix} = \\boldsymbol{0}$. In an attempt to find the general solution of this system, we see that $\\textbf{RRE} \\big( A_T - (\\var{gg}) I_3 \\big)$ has the free variables $y$ and $z$. Letting $y=s$ and $z=t$, we see that the general solution is
$\\begin{pmatrix} x \\\\ y \\\\ z \\end{pmatrix} =$ [[1]]$s +$ [[2]]$t$.
So, $\\Bigg\\{$ [[1]] , [[2]] $\\Bigg\\}$ is a basis for $N \\Big(\\textbf{RRE} \\big( A_T - (\\var{gg}) I_3 \\big) \\Big)$, and hence for $N(A_T - (\\var{gg}) I_3 )$ as well.
\nLet $\\boldsymbol{g_1} =$ [[1]] and $\\boldsymbol{g_2} =$ [[2]] be our first two eigenvectors.
\n\nii) Now let us look at the second eigenvalue, $\\var{{hh}}$. In a similar fashion to part i), we first find $\\textbf{RRE} \\big( A_T - (\\var{hh}) I_3 \\big)$.
\n$\\textbf{RRE} \\big( A_T - (\\var{hh}) I_3 \\big) =$ [[3]] .
\nIn an attempt to find the general solution of the system $\\textbf{RRE} \\big( A_T - (\\var{hh}) I_3 \\big) \\begin{pmatrix} x \\\\ y \\\\ z \\end{pmatrix} = \\boldsymbol{0}$, we see that $\\textbf{RRE} \\big( A_T - (\\var{hh}) I_3 \\big)$ has the free variable $z$. Letting $z=t$, we see that the general solution is
$\\begin{pmatrix} x \\\\ y \\\\ z \\end{pmatrix} =$ [[4]]$t$.
So, $\\Bigg\\{$ [[4]] $\\Bigg\\}$ is a basis for $N \\Big(\\textbf{RRE} \\big( A_T - (\\var{hh}) I_3 \\big) \\Big)$, and hence for $N(A_T - (\\var{hh}) I_3 )$ as well.
\nLet $\\boldsymbol{g_3} =$ [[4]] be our third eigenvector.
\n\niii) Now that we have a basis of eigenvectors, $\\{ \\boldsymbol{g_1} , \\boldsymbol{g_2} , \\boldsymbol{g_3} \\}$, we will attempt to apply the Gram-Schmidt process to obtain an orthonormal basis.
\nThe first step is to normalise $\\boldsymbol{g_1}$.
$\\boldsymbol{\\| g_1 \\|} =$ [[5]] . (If you wish to write something like $\\sqrt{2}$, then please write 2^(1/2) ).
Hence $\\boldsymbol{u_1} := \\frac{1}{\\boldsymbol{\\| g_1 \\|}} \\boldsymbol{g_1} = \\Big(1\\Big/$ [[5]]$\\Big)$ [[1]].
iv) Now we wish to find $\\boldsymbol{u_2}$.
\n$\\boldsymbol{w_2} := \\boldsymbol{g_2} - \\langle \\boldsymbol{g_2} , \\boldsymbol{u_1} \\rangle \\boldsymbol{u_1} =$ [[6]] .
\n$ \\|\\boldsymbol{w_2} \\| =$ [[7]] .
\nHence, $\\boldsymbol{u_2} := \\frac{1}{\\boldsymbol{\\| w_2 \\|}} \\boldsymbol{w_2} = \\Big(1\\Big/$ [[7]] $\\Big)$ [[6]] .
\nv) Finally, we need to find $\\boldsymbol{u_3}$.
\nWe note that $\\boldsymbol{g_3}$ is orthogonal to $\\boldsymbol{u_1}$ and $\\boldsymbol{u_2}$, and hence we only need to normalise it.
\n$ \\|\\boldsymbol{g_3} \\| =$ [[8]] .
\nHence, $\\boldsymbol{u_3} := \\frac{1}{\\boldsymbol{\\| g_3 \\|}} \\boldsymbol{g_3} =\\Big(1\\Big/$ [[8]] $\\Big)$ [[4]] .
\n\nThe set $ \\{ \\boldsymbol{u_1} , \\boldsymbol{u_2} , \\boldsymbol{u_3} \\}$ is an orthonormal basis for $\\mathbb{R}^3$ consisting of eigenvectors of $A_T$.
"}, {"showFeedbackIcon": true, "scripts": {}, "variableReplacementStrategy": "originalfirst", "customMarkingAlgorithm": "", "stepsPenalty": 0, "gaps": [{"showFeedbackIcon": true, "showCorrectAnswer": true, "variableReplacementStrategy": "originalfirst", "customMarkingAlgorithm": "", "allowFractions": false, "allowResize": false, "numRows": "3", "unitTests": [], "correctAnswer": "DDD", "type": "matrix", "extendBaseMarkingAlgorithm": true, "variableReplacements": [], "marks": "2", "correctAnswerFractions": false, "markPerCell": false, "scripts": {}, "numColumns": "3", "tolerance": 0}], "showCorrectAnswer": true, "unitTests": [], "type": "gapfill", "extendBaseMarkingAlgorithm": true, "variableReplacements": [], "marks": 0, "sortAnswers": false, "prompt": "Now that we have found and orthonormal basis for $\\mathbb{R}^3$ consisting of eigenvectors of $A_T$, we can find a diagonal matrix $D$ and an orthogonal matrix $P$ such that $P^{\\text{T}} A_T P = D$.
Given what we have found in part d, we can find $P$ and $D$ immediately, without even doing any further calculations. Suppose we are told that $P = ( \\boldsymbol{u_1} \\; , \\; \\boldsymbol{u_2} \\; , \\; \\boldsymbol{u_3})$. What will $D$ be then? (If you require a hint, please press the \"Show steps\" button below).
$D =$ [[0]] .
", "steps": [{"showCorrectAnswer": true, "customMarkingAlgorithm": "", "type": "information", "extendBaseMarkingAlgorithm": true, "variableReplacements": [], "variableReplacementStrategy": "originalfirst", "scripts": {}, "marks": 0, "unitTests": [], "showFeedbackIcon": true, "prompt": "Suppose we have an $n \\times n$ matrix $A$ with $n$ orthogonal eigenvectors $\\boldsymbol{v_1} , \\boldsymbol{v_2} , \\ldots , \\boldsymbol{v_n}$ and corresponding eigenvalues $\\lambda_1 , \\lambda_2 , \\ldots , \\lambda_n$.
\nLet $P = (\\boldsymbol{v_1} \\; \\boldsymbol{v_2} \\; \\ldots \\; \\boldsymbol{v_n})$ and let $D$ be the $n \\times n$ diagonal matrix with $\\lambda_i$ in the $(i,i)$ entry. Then, $P^{\\text{T}} A P = D$.
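\nNote that when the $\\boldsymbol{v_i}$ are orthonormal (orthogonal and of unit length), the $(i,j)$ entry of $P^{\\text{T}} P$ is $\\langle \\boldsymbol{v_i} , \\boldsymbol{v_j} \\rangle$, so $P^{\\text{T}} P = I$ and hence $P^{-1} = P^{\\text{T}}$; this is why transposing $P$ replaces inverting it here. For instance, normalising the orthogonal eigenvectors $\\begin{pmatrix} 1 \\\\ -1 \\end{pmatrix}$ and $\\begin{pmatrix} 1 \\\\ 1 \\end{pmatrix}$ of $A = \\begin{pmatrix} 2 & 1 \\\\ 1 & 2 \\end{pmatrix}$ gives $P = \\frac{1}{2^{1/2}} \\begin{pmatrix} 1 & 1 \\\\ -1 & 1 \\end{pmatrix}$, and then $P^{\\text{T}} A P = \\begin{pmatrix} 1 & 0 \\\\ 0 & 3 \\end{pmatrix}$.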
"}]}], "preamble": {"css": "", "js": ""}, "tags": [], "metadata": {"description": "This is the question for Lent Term week 4 of the MA100 course at the LSE. It looks at material from chapters 27 and 28.
", "licence": "Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International"}, "variable_groups": [{"name": "Variables for parts a to c", "variables": ["a", "b", "c", "d", "j", "k", "f", "g", "h", "f1", "f2", "f3", "P_C", "P_C_inv", "DD", "A_T"]}, {"name": "Variables for parts d and e", "variables": ["aa", "bb", "g1", "g2", "g3", "P", "P_inv", "gg", "hh", "DDD", "AA_T", "u1", "w2", "u2", "u3", "PP_D", "N_gg", "N_hh", "g3_2", "g1_norm", "w2_norm", "g3_2_norm"]}], "variablesTest": {"maxRuns": 100, "condition": ""}, "advice": "", "ungrouped_variables": [], "type": "question", "contributors": [{"name": "Michael Yiasemides", "profile_url": "https://numbas.mathcentre.ac.uk/accounts/profile/1440/"}]}]}], "contributors": [{"name": "Michael Yiasemides", "profile_url": "https://numbas.mathcentre.ac.uk/accounts/profile/1440/"}]}