First, let's understand what the matrix looks like. Written in general form,
$$A = \begin{bmatrix}
0 & -1 & -2 & \cdots & 1-n \\
1 & 0 & -1 & \cdots & 2-n \\
2 & 1 & 0 & \cdots & 3-n \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
n-2 & n-3 & n-4 & \cdots & -1 \\
n-1 & n-2 & n-3 & \cdots & 0
\end{bmatrix},$$
that is, the entry in row $i$ and column $j$ is $a_{ij} = i - j$. If we subtract the first row from the second, the determinant does not change, and the new second row consists entirely of ones, because entries in consecutive rows of the same column always differ by exactly $1$. Doing the same with the last two rows (subtracting row $n-1$ from row $n$) makes the last row all ones as well.
Now rows $2$ and $n$ are identical (both consist entirely of ones), so subtracting the last row from the second produces a full row of zeros. (Note that this step requires $n \ge 3$, so that rows $2$ and $n$ are distinct rows.)
A matrix with a full row of zeros has determinant zero, so $\det(A) = 0$.
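As a quick sanity check (my own addition, not part of the original solution), here is a short NumPy sketch that builds the matrix with entries $a_{ij} = i - j$ and confirms numerically that the determinant vanishes for several values of $n$:

```python
import numpy as np

def build_matrix(n: int) -> np.ndarray:
    """Build the n x n matrix with entries a[i][j] = i - j."""
    idx = np.arange(n)
    # Broadcasting: column vector of row indices minus row vector of column indices.
    return (idx[:, None] - idx[None, :]).astype(float)

for n in range(3, 8):
    A = build_matrix(n)
    # For n >= 3 the determinant should be zero (up to floating-point noise).
    print(n, np.linalg.det(A))
```

For $n = 2$ the matrix is $\begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}$ with determinant $1$, which is why the argument above is stated for $n \ge 3$.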