Monday, May 24, 2010

I have two matrices, one 2×3 and one 3×2. I need conditions so that when I multiply them, I don't get the identity.

The conditions need to be set on the matrices themselves so that when I multiply them, I don't get the identity matrix. I don't need just one exception; I need something that works in every case.


The matrices are:

A is 2×3:

A = [ a b c
      d e f ]

B is 3×2:

B = [ g h
      i j
      k l ]

AB =

[ ag+bi+ck   ah+bj+cl
  dg+ei+fk   dh+ej+fl ]





The identity is

[ 1 0
  0 1 ]








So you have four conditions:

ag+bi+ck ≠ 1
ah+bj+cl ≠ 0
dg+ei+fk ≠ 0
dh+ej+fl ≠ 1

(If any one of these holds, AB already fails to be the identity; requiring all four is more than enough.)
Reply: A = [ a b c
             d e f ]

B = [ g h
      i j
      k l ]

AB = [ ag + bi + ck   ah + bj + cl
       dg + ei + fk   dh + ej + fl ]





In order for this not to be the identity matrix:

ag + bi + ck != 1
ah + bj + cl != 0
dg + ei + fk != 0
dh + ej + fl != 1

Where "!=" means "not equal".


Infinitely many matrices satisfying these conditions can be generated.
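
A quick numerical sanity check of these conditions is to form AB and compare it with the 2×2 identity. Below is a minimal sketch in Python with NumPy; the specific entries of A and B are arbitrary example values, not part of the question.

import numpy as np

# Arbitrary example entries for A (2x3) and B (3x2)
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
B = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])

AB = A @ B  # the 2x2 product
print(AB)
# AB is the identity only if all four entries match exactly
print("AB is the identity:", np.allclose(AB, np.eye(2)))
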
Reply: It's simple!








In mathematics, a matrix (plural matrices) is a rectangular table of numbers or, more generally, a table consisting of abstract quantities that can be added and multiplied. Matrices are used to describe linear equations, keep track of the coefficients of linear transformations and to record data that depend on two parameters. Matrices can be added, multiplied, and decomposed in various ways, making them a key concept in linear algebra and matrix theory.





In this article, the entries of a matrix are real or complex numbers unless otherwise noted.


[Figure: Organization of a matrix]







Definitions and notations





The horizontal lines in a matrix are called rows and the vertical lines are called columns. A matrix with m rows and n columns is called an m-by-n matrix (written m \times n) and m and n are called its dimensions. The dimensions of a matrix are always given with the number of rows first, then the number of columns. It is commonly said that an m-by-n matrix has an order of m \times n (order meaning size).





Capital letters are almost always used to denote matrices, with the corresponding lower-case letter, carrying two indices, representing the entries. For example, the entry of a matrix A that lies in the i-th row and the j-th column is written as a_{i,j} and called the i,j entry or (i,j)-th entry of A. Alternative notations for that entry are A[i,j] or A_{i,j}. The row is always noted first, then the column.





We often write A:=(a_{i,j})_{i=1,\ldots,m;\,j=1,\ldots,n} or A:=(a_{i,j})_{m \times n} to define an m \times n matrix A. In this case the entries a_{i,j} are defined separately for all integers 1\le i \le m and 1\le j \le n. In some programming languages the numbering of rows and columns starts at zero; texts that make extensive use of such a language frequently follow that convention, so that 0\le i \le m-1 and 0\le j \le n-1.





A matrix where one of the dimensions equals one is often called a vector, and interpreted as an element of real coordinate space. An m \times 1 matrix (one column and m rows) is called a column vector and a 1 \times n matrix (one row and n columns) is called a row vector.





Example





The matrix





A = \begin{bmatrix} 1 & 2 & 3 \\ 1 & 2 & 7 \\ 4 & 9 & 2 \\ 6 & 0 & 5 \end{bmatrix} or A = \begin{pmatrix} 1 & 2 & 3 \\ 1 & 2 & 7 \\ 4 & 9 & 2 \\ 6 & 0 & 5 \end{pmatrix}





is a 4 \times 3 matrix. The element a_{2,3} or A[2,3] is 7.





The matrix





R = \begin{bmatrix} 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 \end{bmatrix}





is a 1\times 9 matrix, or 9-element row vector.





Adding and multiplying matrices





Sum





Main article: Matrix addition





Two or more matrices of identical dimensions m and n can be added. Given m-by-n matrices A and B, their sum A + B is the m-by-n matrix computed by adding corresponding elements (i.e. A+B= (a_{i,j})_{1\le i \le m; 1\le j \le n} + (b_{i,j})_{1\le i \le m; 1\le j \le n} = (a_{i,j}+b_{i,j})_{1\le i \le m; 1\le j \le n} ). For example:





\begin{bmatrix} 1 & 3 & 2 \\ 1 & 0 & 0 \\ 1 & 2 & 2 \end{bmatrix} + \begin{bmatrix} 0 & 0 & 5 \\ 7 & 5 & 0 \\ 2 & 1 & 1 \end{bmatrix} = \begin{bmatrix} 1+0 & 3+0 & 2+5 \\ 1+7 & 0+5 & 0+0 \\ 1+2 & 2+1 & 2+1 \end{bmatrix} = \begin{bmatrix} 1 & 3 & 7 \\ 8 & 5 & 0 \\ 3 & 3 & 3 \end{bmatrix}
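
The same elementwise sum in code, as a small sketch with NumPy (a library choice of mine, not something the article specifies):

import numpy as np

A = np.array([[1, 3, 2],
              [1, 0, 0],
              [1, 2, 2]])
B = np.array([[0, 0, 5],
              [7, 5, 0],
              [2, 1, 1]])

# Elementwise addition of two matrices with identical dimensions
print(A + B)  # [[1 3 7] [8 5 0] [3 3 3]], matching the example above
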





Another, much less often used notion of matrix addition is the direct sum.





Scalar multiplication





Main article: Scalar multiplication





Given a matrix A and a number c, the scalar multiplication cA is computed by multiplying every element of A by the scalar c (i.e. (cA)_{i,j} = c \cdot a_{i,j} ). For example:





2 \cdot \begin{bmatrix} 1 & 8 & -3 \\ 4 & -2 & 5 \end{bmatrix} = \begin{bmatrix} 2 \cdot 1 & 2 \cdot 8 & 2 \cdot (-3) \\ 2 \cdot 4 & 2 \cdot (-2) & 2 \cdot 5 \end{bmatrix} = \begin{bmatrix} 2 & 16 & -6 \\ 8 & -4 & 10 \end{bmatrix}
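
In code, scalar multiplication is a single elementwise operation (again a NumPy sketch):

import numpy as np

A = np.array([[1, 8, -3],
              [4, -2, 5]])

# Every entry of A is multiplied by the scalar 2
print(2 * A)  # [[2 16 -6] [8 -4 10]]
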





Matrix addition and scalar multiplication turn the set \text{M}(m,n,\mathbb{R}) of all m-by-n matrices with real entries into a real vector space of dimension m\cdot n.





Matrix multiplication





Main article: Matrix multiplication





Multiplication of two matrices is well-defined only if the number of columns of the left matrix is the same as the number of rows of the right matrix. If A is an m-by-n matrix and B is an n-by-p matrix, then their matrix product AB is the m-by-p matrix given by:





(AB)_{i,j} = a_{i,1} b_{1,j} + a_{i,2} b_{2,j} + \ldots + a_{i,n} b_{n,j}





for each pair (i,j).





For example:





\begin{bmatrix} 1 & 0 & 2 \\ -1 & 3 & 1 \end{bmatrix} \times \begin{bmatrix} 3 & 1 \\ 2 & 1 \\ 1 & 0 \end{bmatrix} = \begin{bmatrix} (1 \times 3 + 0 \times 2 + 2 \times 1) & (1 \times 1 + 0 \times 1 + 2 \times 0) \\ (-1 \times 3 + 3 \times 2 + 1 \times 1) & (-1 \times 1 + 3 \times 1 + 1 \times 0) \end{bmatrix}





= \begin{bmatrix} 5 & 1 \\ 4 & 2 \end{bmatrix}
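
The worked product above can be checked with a short NumPy sketch; the @ operator performs matrix multiplication:

import numpy as np

A = np.array([[1, 0, 2],
              [-1, 3, 1]])  # 2x3
B = np.array([[3, 1],
              [2, 1],
              [1, 0]])      # 3x2

# Valid because A has 3 columns and B has 3 rows; the result is 2x2
print(A @ B)  # [[5 1] [4 2]], matching the example
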





Matrix multiplication has the following properties:





* (AB)C = A(BC) for all k-by-m matrices A, m-by-n matrices B and n-by-p matrices C ("associativity").


* (A + B)C = AC + BC for all m-by-n matrices A and B and n-by-k matrices C ("right distributivity").


* C(A + B) = CA + CB for all m-by-n matrices A and B and k-by-m matrices C ("left distributivity").





It is important to note that commutativity does not generally hold; that is, even when both products AB and BA are defined, generally AB \ne BA.





Linear transformations, ranks and transpose





Main article: Transformation matrix


Main article: Transpose





Matrices can conveniently represent linear transformations because matrix multiplication neatly corresponds to the composition of maps, as will be described next. This same property makes them powerful data structures in high-level programming languages.





Here and in the sequel we identify R^n with the set of "columns", i.e. n-by-1 matrices. For every linear map f : R^n → R^m there exists a unique m-by-n matrix A such that f(x) = Ax for all x in R^n. We say that the matrix A "represents" the linear map f. Now if the k-by-m matrix B represents another linear map g : R^m → R^k, then the linear map g ∘ f is represented by BA. This follows from the above-mentioned associativity of matrix multiplication.
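
A small numerical sketch of this correspondence (the matrices here are arbitrary examples of mine): applying f and then g, step by step, gives the same result as multiplying once by BA.

import numpy as np

A = np.array([[1, 2],
              [3, 4],
              [5, 6]])     # represents f : R^2 -> R^3
B = np.array([[1, 0, 1],
              [0, 1, 1]])  # represents g : R^3 -> R^2

x = np.array([1, -1])
print(B @ (A @ x))  # g(f(x)) computed in two steps
print((B @ A) @ x)  # the same vector, via the single matrix BA
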





More generally, a linear map from an n-dimensional vector space to an m-dimensional vector space is represented by an m-by-n matrix, provided that bases have been chosen for each.





The rank of a matrix A is the dimension of the image of the linear map represented by A; this is the same as the dimension of the space generated by the rows of A, and also the same as the dimension of the space generated by the columns of A.
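
Because row rank and column rank coincide, any convenient computation will do; NumPy exposes one directly (a sketch, with an example matrix of mine):

import numpy as np

A = np.array([[1, 2, 3],
              [2, 4, 6]])  # second row is twice the first
print(np.linalg.matrix_rank(A))  # 1: rows (and columns) span a line
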





The transpose of an m-by-n matrix A is the n-by-m matrix A^{tr} (also sometimes written as A^T or ^tA) formed by turning rows into columns and columns into rows, i.e. A^{tr}[i, j] = A[j, i] for all indices i and j. If A describes a linear map with respect to two bases, then the matrix A^{tr} describes the transpose of the linear map with respect to the dual bases; see dual space.





We have (A + B)^{tr} = A^{tr} + B^{tr} and (AB)^{tr} = B^{tr} A^{tr}.
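
Both identities are easy to confirm numerically (a NumPy sketch with arbitrary matrices):

import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 1],
              [1, 1]])

print(np.array_equal((A + B).T, A.T + B.T))  # True
print(np.array_equal((A @ B).T, B.T @ A.T))  # True: the order reverses
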





Square matrices and related definitions





A square matrix is a matrix which has the same number of rows and columns. The set of all square n-by-n matrices, together with matrix addition and matrix multiplication is a ring. Unless n = 1, this ring is not commutative.





M(n, R), the ring of real square matrices, is a real unitary associative algebra. M(n, C), the ring of complex square matrices, is a complex associative algebra.





The unit matrix or identity matrix I_n, with elements on the main diagonal set to 1 and all other elements set to 0, satisfies MI_n = M and I_nN = N for any m-by-n matrix M and n-by-k matrix N. For example, if n = 3:





I_3 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}.
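
A sketch of the absorbing property in NumPy (np.eye builds an identity matrix):

import numpy as np

I3 = np.eye(3)
M = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])  # a 2x3 matrix

print(np.allclose(M @ I3, M))  # True: M I_3 = M
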





The identity matrix is the identity element in the ring of square matrices.





Invertible elements in this ring are called invertible matrices or non-singular matrices. An n by n matrix A is invertible if and only if there exists a matrix B such that





AB = I_n (= BA).





In this case, B is the inverse matrix of A, denoted by A^{-1}. The set of all invertible n-by-n matrices forms a group (specifically a Lie group) under matrix multiplication, the general linear group.
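
Numerically, an inverse can be computed and checked against both defining products (a NumPy sketch with an arbitrary invertible matrix):

import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])  # determinant 1, so invertible
A_inv = np.linalg.inv(A)

print(np.allclose(A @ A_inv, np.eye(2)))  # True
print(np.allclose(A_inv @ A, np.eye(2)))  # True
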





If λ is a number and v is a non-zero vector such that Av = λv, then we call v an eigenvector of A and λ the associated eigenvalue. (Eigen means "own" in German and in Dutch.) The number λ is an eigenvalue of A if and only if A − λI_n is not invertible, which happens if and only if p_A(λ) = 0. Here p_A(x) is the characteristic polynomial of A. This is a polynomial of degree n and therefore has n complex roots (counting multiple roots according to their multiplicity). In this sense, every square matrix has n complex eigenvalues.
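
Eigenvalues and eigenvectors can be computed numerically; here is a NumPy sketch on a triangular example matrix, whose eigenvalues are its diagonal entries:

import numpy as np

A = np.array([[2.0, 0.0],
              [1.0, 3.0]])
eigvals, eigvecs = np.linalg.eig(A)

print(eigvals)     # 2.0 and 3.0
v = eigvecs[:, 0]  # eigenvector paired with eigvals[0]
print(np.allclose(A @ v, eigvals[0] * v))  # True: Av = lambda v
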





The determinant of a square matrix A is the product of its n eigenvalues, but it can also be defined by the Leibniz formula. Invertible matrices are precisely those matrices with nonzero determinant.
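
Both characterizations of the determinant agree numerically (continuing the sketch above):

import numpy as np

A = np.array([[2.0, 0.0],
              [1.0, 3.0]])
print(np.linalg.det(A))               # 6.0
print(np.prod(np.linalg.eigvals(A)))  # 6.0: product of the eigenvalues
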





The Gaussian elimination algorithm is of central importance: it can be used to compute determinants, ranks and inverses of matrices and to solve systems of linear equations.
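
In practice one rarely inverts a matrix to solve a system; library routines based on elimination do it directly. A sketch, solving x + 2y = 5 and 3x + 4y = 11:

import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
b = np.array([5.0, 11.0])
print(np.linalg.solve(A, b))  # [1. 2.], i.e. x = 1, y = 2
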





The trace of a square matrix is the sum of its diagonal entries, which equals the sum of its n eigenvalues.
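
A two-line check of this fact (a NumPy sketch; the example matrix is arbitrary):

import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
print(np.trace(A))                   # 5.0: sum of the diagonal entries
print(np.sum(np.linalg.eigvals(A)))  # 5.0: sum of the eigenvalues
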





The matrix exponential is defined for square matrices, using power series.
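
For a diagonal matrix the power series collapses to the ordinary exponential of each diagonal entry, which makes a convenient check. This sketch assumes SciPy is available for its expm routine:

import numpy as np
from scipy.linalg import expm

D = np.diag([1.0, 2.0])
print(expm(D))             # diag(e, e^2)
print(np.exp([1.0, 2.0]))  # the same values, entrywise
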





Special types of matrices





In many areas in mathematics, matrices with certain structure arise. A few important examples are





* Symmetric matrices are such that elements symmetric about the main diagonal (from the upper left to the lower right) are equal, that is, a_{i,j}=a_{j,i} \Leftrightarrow A^\mathrm{T} = A.


* Skew-symmetric matrices are such that elements symmetric about the main diagonal are the negative of each other, that is, a_{i,j}=-a_{j,i} \Leftrightarrow A^\mathrm{T}=-A. In a skew-symmetric matrix, all diagonal elements are zero, that is, a_{i,i}=-a_{i,i}\Rightarrow a_{i,i}=0.


* Hermitian (or self-adjoint) matrices are such that elements symmetric about the diagonal are each other's complex conjugates, that is, a_{i,j}=\overline{a}_{j,i} \Leftrightarrow A^\mathrm{H} = A, where \overline{z} signifies the complex conjugate of a complex number z and A^\mathrm{H} the conjugate transpose of A.


* Toeplitz matrices have common elements on their diagonals, that is, a_{i,j}=a_{i+1,j+1}.


* Stochastic matrices are square matrices whose rows are probability vectors; they are used to define Markov chains.


* A square matrix A is called idempotent if A^2 = AA = A (see the sketch after this list).
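
These definitions translate directly into code; a NumPy sketch with one small example of each:

import numpy as np

S = np.array([[1, 2],
              [2, 3]])   # symmetric: S.T equals S
K = np.array([[0, 2],
              [-2, 0]])  # skew-symmetric: K.T equals -K, zero diagonal
P = np.array([[1, 0],
              [0, 0]])   # idempotent: P @ P equals P

print(np.array_equal(S.T, S))    # True
print(np.array_equal(K.T, -K))   # True
print(np.array_equal(P @ P, P))  # True
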





For a more extensive list see list of matrices.





Matrices in abstract algebra





If we start with a ring R, we can consider the set M(m,n, R) of all m by n matrices with entries in R. Addition and multiplication of these matrices can be defined as in the case of real or complex matrices (see above). The set M(n, R) of all square n by n matrices over R is a ring in its own right, isomorphic to the endomorphism ring of the left R-module R^n.





Similarly, if the entries are taken from a semiring S, matrix addition and multiplication can still be defined as usual. The set of all square n×n matrices over S is itself a semiring. Note that fast matrix multiplication algorithms such as the Strassen algorithm generally only apply to matrices over rings and will not work for matrices over semirings that are not rings.





If R is a commutative ring, then M(n, R) is a unitary associative algebra over R. It is then also meaningful to define the determinant of square matrices using the Leibniz formula; a matrix is invertible if and only if its determinant is invertible in R.





All statements mentioned in this article for real or complex matrices remain correct for matrices over an arbitrary field.





Matrices over a polynomial ring are important in the study of control theory.





History





The study of matrices is quite old. A 3-by-3 magic square appears in Chinese literature dating from as early as 650 BC.[1]





Matrices have a long history of application in solving linear equations. An important Chinese text from between 300 BC and AD 200, The Nine Chapters on the Mathematical Art (Chiu Chang Suan Shu), is the first example of the use of matrix methods to solve simultaneous equations. In the seventh chapter, "Too much and not enough," the concept of a determinant first appears almost 2000 years before its invention by the Japanese mathematician Seki Kowa in 1683 and the German mathematician Gottfried Leibniz in 1693.





Magic squares were known to Arab mathematicians, possibly as early as the 7th century, when the Arabs conquered northwestern parts of the Indian subcontinent and learned Indian mathematics and astronomy, including other aspects of combinatorial mathematics. It has also been suggested that the idea came via China. The first magic squares of order 5 and 6 appear in an encyclopedia from Baghdad circa 983 AD, the Encyclopedia of the Brethren of Purity (Rasa'il Ihkwan al-Safa); simpler magic squares were known to several earlier Arab mathematicians.[1]





After the development of the theory of determinants by Seki Kowa and Leibniz in the late 17th century, Cramer developed the theory further in the 18th century, presenting Cramer's rule in 1750. Carl Friedrich Gauss and Wilhelm Jordan developed Gauss-Jordan elimination in the 1800s.





The term "matrix" was coined in 1848 by J. J. Sylvester. Cayley, Hamilton, Grassmann, Frobenius and von Neumann are among the famous mathematicians who have worked on matrix theory.





Olga Taussky-Todd (1906-1995) used matrix theory to investigate an aerodynamic phenomenon called fluttering or aeroelasticity during WWII.





Applications





Encryption





See also: Matrix encryption





Matrices can be used to encrypt numerical data. Encryption is done by multiplying the data matrix with a key matrix. Decryption is done simply by multiplying the encrypted matrix with the inverse of the key.
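
A toy sketch of the scheme in NumPy: the key below is an arbitrary integer matrix with determinant 1, chosen so its inverse is also an integer matrix. (This only illustrates the algebra; plain matrix encryption is not secure by modern standards.)

import numpy as np

key = np.array([[2, 3],
                [1, 2]])       # det = 1
key_inv = np.array([[2, -3],
                    [-1, 2]])  # exact integer inverse of key

data = np.array([[5, 8],
                 [7, 1]])      # numerical data to encrypt

encrypted = key @ data
decrypted = key_inv @ encrypted
print(np.array_equal(decrypted, data))  # True: the inverse key recovers the data
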





Computer graphics





See also: Transformation matrix





4×4 transformation matrices are commonly used in computer graphics. The upper left 3×3 portion of a transformation matrix is composed of the new X, Y, and Z axes of the post-transformation coordinate space.
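
For instance, a 4×4 matrix can rotate about the Z axis and translate in a single multiplication. A sketch (the conventions here are my assumption: column vectors, translation in the last column, homogeneous coordinates):

import numpy as np

theta = np.pi / 2  # rotate 90 degrees about Z
c, s = np.cos(theta), np.sin(theta)
T = np.array([[c, -s, 0, 2],   # upper-left 3x3: the new X, Y, Z axes
              [s,  c, 0, 0],
              [0,  0, 1, 0],
              [0,  0, 0, 1]])

p = np.array([1, 0, 0, 1])  # the point (1, 0, 0) in homogeneous form
print(T @ p)  # approximately [2 1 0 1]: rotated to (0,1,0), then shifted by (2,0,0)
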





Further reading





A more advanced article on matrices is Matrix theory.





See also





* List of matrices


* Logical matrix


* Relation composition


* Matrix calculus





References





1. Swaney, Mark. History of Magic Squares.





External links


Wikibooks Algebra has a page on the topic of Matrices.





* Resources


o Matrix name and history: very brief overview, ualr.edu


o Introduction to Matrix Algebra: definitions and properties, xycoon.com


o Matrix Algebra, sosmath.com


o The Matrix Reference Manual, Imperial College


o An online textbook on Introduction to Matrix Algebra at Holistic Numerical Methods Institute


o Applied examples of matrices used in graphical game programming, Riemer's DirectX Tutorials





* Online Matrix Calculators


o easycalculation.com


o bluebit.gr


o wims.unice.fr





* Freeware


o MATRIX 2.1 Excel add-in, foxes


o MacAnova, University of Minnesota School of Statistics





Reply: First, it will help if you use some more standard terminology:





Label the elements of the matrices as:





A(row, column):
where A(1,1) = a, A(1,2) = b, A(1,3) = c, A(2,1) = d, A(2,2) = e, A(2,3) = f

B(row, column):
where B(1,1) = g, B(1,2) = h, B(2,1) = i, B(2,2) = j, B(3,1) = k, B(3,2) = l





The calculation of the product is:


AB(y,z) = Sum over x: A(y,x)B(x,z)


= A(y,1)B(1,z) + A(y,2)B(2,z) + A(y,3)B(3,z)





AB is the identity matrix if and only if:


AB(1,1) = 1


AB(1,2) = 0


AB(2,1) = 0


AB(2,2) = 1





So if these 4 equations aren't all true, AB is not the identity matrix.





If you're trying to find a guarantee that you won't get the identity, then impose an arbitrary condition that blocks it. For example, you can guarantee that AB(1,1) = 0, and hence that AB is not the identity, by requiring

A(1,1) = A(1,2) = A(1,3) = 0





Actually, a more subtle approach occurs to me: if you look at the ranks of the matrices A and B, you can find more interesting ways of preventing the identity, having to do with the dimensionality of the vector spaces these matrices act on. Since rank(AB) ≤ min(rank(A), rank(B)) and the 2×2 identity has rank 2, AB cannot be the identity unless both A and B have rank 2; in particular, unless A has rank 2, there's no chance. But this line of argument depends a bit more deeply on linear algebra than I can remember without some consultation.
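
A numerical sketch of the rank obstruction (the matrices are examples of mine, not the poster's): a rank-1 A can never yield the rank-2 identity.

import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])  # rank 1: the rows are proportional
B = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])

print(np.linalg.matrix_rank(A))       # 1
print(np.linalg.matrix_rank(A @ B))   # at most 1
print(np.allclose(A @ B, np.eye(2)))  # False: AB cannot be the identity
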

