75.90
Linear Algebra (5th Edition) (Gilbert Strang [US])

Editor's Recommendation

Gilbert Strang's Linear Algebra (5th Edition) is a classic linear algebra textbook. It presents all of the core concepts of linear algebra in an accessible way, weaving applications into the exposition to show just how useful the subject's ideas are.

About the Book

The book covers determinants, matrices, systems of linear equations and vectors, eigenvalues and eigenvectors of matrices, quadratic forms, and applications of the Mathematica software. Each chapter includes exercises, with answers given at the back of the book. The writing strives to highlight key points, move from the elementary to the advanced, and stay easy to follow, with an emphasis on suitability for teaching. It can serve as a textbook for engineering students at colleges and universities, and as a textbook or teaching reference for undergraduates in other non-mathematics majors.

About the Author

GILBERT STRANG is a professor in the Department of Mathematics at the Massachusetts Institute of Technology. Since earning his doctorate at UCLA, he has taught at MIT. His courses include Matrix Methods in Data Analysis, Linear Algebra, and Computational Science and Engineering. His published books include Linear Algebra and Learning from Data (see math.mit.edu/learningfromdata), Introduction to Linear Algebra (Fifth Edition), and Differential Equations and Linear Algebra.


Table of Contents

1 Introduction to Vectors
1.1 Vectors and Linear Combinations
1.2 Lengths and Dot Products
1.3 Matrices
2 Solving Linear Equations
2.1 Vectors and Linear Equations
2.2 The Idea of Elimination
2.3 Elimination Using Matrices
2.4 Rules for Matrix Operations
2.5 Inverse Matrices
2.6 Elimination = Factorization: A = LU
2.7 Transposes and Permutations
3 Vector Spaces and Subspaces
3.1 Spaces of Vectors
3.2 The Nullspace of A: Solving Ax = 0 and Rx = 0
3.3 The Complete Solution to Ax = b
3.4 Independence, Basis and Dimension
3.5 Dimensions of the Four Subspaces
4 Orthogonality
4.1 Orthogonality of the Four Subspaces
4.2 Projections
4.3 Least Squares Approximations
4.4 Orthonormal Bases and Gram-Schmidt
5 Determinants
5.1 The Properties of Determinants
5.2 Permutations and Cofactors
5.3 Cramer's Rule, Inverses, and Volumes
6 Eigenvalues and Eigenvectors
6.1 Introduction to Eigenvalues
6.2 Diagonalizing a Matrix
6.3 Systems of Differential Equations
6.4 Symmetric Matrices
6.5 Positive Definite Matrices
7 The Singular Value Decomposition (SVD)
7.1 Image Processing by Linear Algebra
7.2 Bases and Matrices in the SVD
7.3 Principal Component Analysis (PCA by the SVD)
7.4 The Geometry of the SVD
8 Linear Transformations
8.1 The Idea of a Linear Transformation
8.2 The Matrix of a Linear Transformation
8.3 The Search for a Good Basis
9 Complex Vectors and Matrices
9.1 Complex Numbers
9.2 Hermitian and Unitary Matrices
9.3 The Fast Fourier Transform
10 Applications
10.1 Graphs and Networks
10.2 Matrices in Engineering
10.3 Markov Matrices, Population, and Economics
10.4 Linear Programming
10.5 Fourier Series: Linear Algebra for Functions
10.6 Computer Graphics
10.7 Linear Algebra for Cryptography
11 Numerical Linear Algebra
11.1 Gaussian Elimination in Practice
11.2 Norms and Condition Numbers
11.3 Iterative Methods and Preconditioners
12 Linear Algebra in Probability & Statistics
12.1 Mean, Variance, and Probability
12.2 Covariance Matrices and Joint Probabilities
12.3 Multivariate Gaussian and Weighted Least Squares
Matrix Factorizations
Index
Six Great Theorems / Linear Algebra in a Nutshell

Excerpt

  Chapter 1
  Introduction to Vectors
  The heart of linear algebra is in two operations, both with vectors. We add vectors to get v + w. We multiply them by numbers c and d to get cv and dw. Combining those two operations (adding cv to dw) gives the linear combination cv + dw.
  Example: v + w = (1, 1) + (2, 3) = (3, 4) is the combination with c = d = 1.
  Linear combinations are all-important in this subject! Sometimes we want one particular combination, the specific choice c = 2 and d = 1 that produces cv + dw = (4, 5). Other times we want all the combinations of v and w (coming from all c and d).
  The vectors cv lie along a line. When w is not on that line, the combinations cv + dw fill the whole two-dimensional plane. Starting from four vectors u, v, w, z in four-dimensional space, their combinations cu + dv + ew + fz are likely to fill the space, but not always. The vectors and their combinations could lie in a plane or on a line.
  Chapter 1 explains these central ideas, on which everything builds. We start with two-dimensional vectors and three-dimensional vectors, which are reasonable to draw. Then we move into higher dimensions. The really impressive feature of linear algebra is how smoothly it takes that step into n-dimensional space. Your mental picture stays completely correct, even if drawing a ten-dimensional vector is impossible.
  This is where the book is going (into n-dimensional space). The first steps are the operations in Sections 1.1 and 1.2. Then Section 1.3 outlines three fundamental ideas.
  1.1 Vector addition v + w and linear combinations cv + dw.
  1.2 The dot product v · w of two vectors and the length ||v|| = √(v · v).
  1.3 Matrices A, linear equations Ax = b, solutions x = A⁻¹b.
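  The three ideas just listed can be tried out numerically. A minimal sketch, assuming Python with NumPy (not part of the book's text), using the vectors v = (1, 1) and w = (2, 3) from the example above:

import numpy as np

v = np.array([1.0, 1.0])
w = np.array([2.0, 3.0])

# 1.1 Vector addition and a linear combination cv + dw
print(v + w)                  # [3. 4.]
print(2 * v + 1 * w)          # [4. 5.]

# 1.2 Dot product v . w and length ||v|| = sqrt(v . v)
print(np.dot(v, w))           # 5.0
print(np.sqrt(np.dot(v, v)))  # 1.414..., the length of v

# 1.3 A linear system Ax = b whose columns are v and w
A = np.column_stack([v, w])
b = np.array([3.0, 4.0])
print(np.linalg.solve(A, b))  # [1. 1.], since 1*v + 1*w = (3, 4)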
  1.1 Vectors and Linear Combinations
  
  1  3v + 5w is a typical linear combination cv + dw of the vectors v and w.
  2  For v = (1, 1) and w = (2, 3) that combination is 3(1, 1) + 5(2, 3) = (3 + 10, 3 + 15) = (13, 18).
  3  The vector (2, 3) = (2, 0) + (0, 3) goes across to x = 2 and up to y = 3 in the xy plane.
  4  The combinations c(1, 1) + d(2, 3) fill the whole xy plane. They produce every (x, y).
  5  The combinations c(1, 1, 1) + d(2, 3, 4) fill a plane in xyz space. Same plane for (1, 1, 1), (3, 4, 5).
  6  But c + 2d = 1, c + 3d = 0, c + 4d = 0 has no solution because its right side (1, 0, 0) is not on that plane.
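  Fact 6 can be checked numerically. A small sketch, assuming NumPy (not from the book): least squares finds the best c and d, and a nonzero residual confirms that (1, 0, 0) is not in the plane of (1, 1, 1) and (2, 3, 4).

import numpy as np

# Columns are (1, 1, 1) and (2, 3, 4); the right side is (1, 0, 0)
A = np.array([[1.0, 2.0],
              [1.0, 3.0],
              [1.0, 4.0]])
b = np.array([1.0, 0.0, 0.0])

# lstsq returns the best (c, d) plus the sum of squared residuals
x, residual, rank, _ = np.linalg.lstsq(A, b, rcond=None)
print(x)         # best choice of c and d
print(residual)  # nonzero, so no exact solution exists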
  
  “You can’t add apples and oranges.” In a strange way, this is the reason for vectors. We have two separate numbers v1 and v2. That pair produces a two-dimensional vector v:
  Column vector: v = (v1, v2), where v1 = first component of v and v2 = second component of v.
  We write v as a column, not as a row. The main point so far is to have a single letter v (in boldface italic) for this pair of numbers v1 and v2 (in lightface italic). Even if we don't add v1 to v2, we do add vectors. The first components of v and w stay separate from the second components:
  VECTOR ADDITION: v = (v1, v2) and w = (w1, w2) add to v + w = (v1 + w1, v2 + w2).
  Subtraction follows the same idea: The components of v - w are v1 - w1 and v2 - w2. The other basic operation is scalar multiplication. Vectors can be multiplied by 2 or by -1 or by any number c. To find 2v, multiply each component of v by 2:
  SCALAR MULTIPLICATION: 2v = (2v1, 2v2) = v + v and -v = (-v1, -v2).
  The components of cv are cv1 and cv2. The number c is called a “scalar”.
  Notice that the sum of -v and v is the zero vector. This is 0, which is not the same as the number zero! The vector 0 has components 0 and 0. Forgive me for hammering away at the difference between a vector and its components. Linear algebra is built on these operations v + w and cv and dw: adding vectors and multiplying by scalars.
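  A minimal plain-Python sketch of these two component-wise operations (the helper names add and scale are illustrative, not from the book):

def add(v, w):
    # Vector addition: first components add to first, second to second
    return [vi + wi for vi, wi in zip(v, w)]

def scale(c, v):
    # Scalar multiplication: multiply every component of v by the number c
    return [c * vi for vi in v]

v = [4, 2]
w = [-1, 2]
print(add(v, w))             # [3, 4]
print(scale(2, v))           # [8, 4], the same as v + v
print(add(scale(-1, v), v))  # [0, 0], the zero vector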
  Linear Combinations
  Now we combine addition with scalar multiplication to produce a “linear combination” of v and w. Multiply v by c and multiply w by d. Then add cv + dw.
  Four special linear combinations are: sum, difference, zero, and a scalar multiple cv:
  1v + 1w = sum of vectors in Figure 1.1a
  1v - 1w = difference of vectors in Figure 1.1b
  0v + 0w = zero vector
  cv + 0w = vector cv in the direction of v
  The zero vector is always a possible combination (its coefficients are zero). Every time we see a "space" of vectors, that zero vector will be included. This big view, taking all the combinations of v and w, is linear algebra at work.
  The figures show how you can visualize vectors. For algebra, we just need the components (like 4 and 2). That vector v is represented by an arrow. The arrow goes v1 = 4 units to the right and v2 = 2 units up. It ends at the point whose x, y coordinates are 4, 2. This point is another representation of the vector, so we have three ways to describe v: by its two components, by the arrow from (0, 0), and by the point where the arrow ends.
  We add using the numbers. We visualize v + w using arrows:
  Vector addition (head to tail): At the end of v, place the start of w.
  We travel along v and then along w. Or we take the diagonal shortcut along v + w. We could also go along w and then v. In other words, w + v gives the same answer as v + w. These are different ways along the parallelogram (in this example it is a rectangle).
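  The four special combinations and the fact that w + v gives the same answer as v + w can be confirmed in a few lines. A sketch assuming NumPy, using v = (4, 2) from the text and an illustrative choice w = (-1, 2):

import numpy as np

v = np.array([4.0, 2.0])
w = np.array([-1.0, 2.0])

print(1 * v + 1 * w)   # sum of vectors
print(1 * v - 1 * w)   # difference of vectors
print(0 * v + 0 * w)   # zero vector
print(3 * v + 0 * w)   # a multiple cv in the direction of v (here c = 3)

# Head to tail in either order: the parallelogram has the same diagonal
print(np.array_equal(v + w, w + v))  # True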
  Vectors in Three Dimensions
  A vector with two components corresponds to a point in the xy plane. The components of v are the coordinates of the point: x = v1 and y = v2. The arrow ends at this point (v1,v2), when it starts from (0,0). Now we allow vectors to have three components (v1,v2,v3).
  The xy plane is replaced by three-dimensional xyz space. Here are typical vectors (still column vectors but with three components):
  v = (1, 1, -1) and w = (2, 3, 4) and v + w = (3, 4, 3).
  The vector v corresponds to an arrow in 3-space. Usually the arrow starts at the "origin", where the xyz axes meet and the coordinates are (0, 0, 0). The arrow ends at the point with coordinates v1, v2, v3. There is a perfect match between the column vector and the arrow from the origin and the point where the arrow ends.
  The vector (x, y) in the plane is different from (x, y, 0) in 3-space!
  Figure 1.2: Vectors (x, y) and (x, y, z) correspond to points (x, y) in the plane and (x, y, z) in 3-space.
  The reason for the row form (in parentheses) is to save space. But v = (1, 1, -1) is not a row vector! It is in actuality a column vector, just temporarily lying down. The row vector [1 1 -1] is absolutely different, even though it has the same three components. That 1 by 3 row vector is the "transpose" of the 3 by 1 column vector v.
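  The shape distinction just described can be made concrete with NumPy arrays (a sketch, not the book's notation):

import numpy as np

v_col = np.array([[1], [1], [-1]])   # 3 by 1 column vector
v_row = v_col.T                      # 1 by 3 row vector: the transpose

print(v_col.shape)   # (3, 1)
print(v_row.shape)   # (1, 3)
print(v_row)         # [[ 1  1 -1]]  same components, different shape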
  In three dimensions, v + w is still found a component at a time. The sum has components v1 + w1 and v2 + w2 and v3 + w3. You see how to add vectors in 4 or 5 or n dimensions. When w starts at the end of v, the third side is v + w. The other way around the parallelogram is w + v. Question: Do the four sides all lie in the same plane? Yes. And the sum v + w - v - w goes completely around to produce the zero vector.
  A typical linear combination of three vectors in three dimensions is u + 4v - 2w:
  Linear combination (multiply by 1, 4, -2, then add):
  1(1, 0, 3) + 4(1, 2, 1) - 2(2, 3, -1) = (1, 2, 9)
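  The arithmetic can be checked directly (a sketch, assuming NumPy):

import numpy as np

u = np.array([1.0, 0.0, 3.0])
v = np.array([1.0, 2.0, 1.0])
w = np.array([2.0, 3.0, -1.0])

print(1 * u + 4 * v - 2 * w)   # [1. 2. 9.]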
  ……
Stock: 1
Price: 75.90