Matrix Divide

Revision as of 22:09, 10 September 2022

{{Built-in|Matrix Divide|⌹}} is a [[dyadic function]] that performs [[wikipedia:matrix division|matrix division]] between two [[argument]]s of rank 2 or less. Some dialects automatically apply it to rank-2 [[subarray]]s of higher-rank arguments. It shares the [[glyph]] ''Quad Divide'' <syntaxhighlight lang=apl inline>⌹</syntaxhighlight> (often called ''Domino'') with the monadic function [[Matrix Inverse]].

== Examples ==

The result of <syntaxhighlight lang=apl inline>X⌹Y</syntaxhighlight> is equal to <syntaxhighlight lang=apl inline>(⌹Y)+.×X</syntaxhighlight>, which is analogous to <syntaxhighlight lang=apl inline>X÷Y</syntaxhighlight> being equal to <syntaxhighlight lang=apl inline>(÷Y)×X</syntaxhighlight>. As a consequence, <syntaxhighlight lang=apl inline>X≡Y+.×X⌹Y</syntaxhighlight> is true for square matrices.

<syntaxhighlight lang=apl>
      ⎕←X←2 2⍴1 2 3 4
1 2
3 4
      ⎕←Y←2 2⍴5 6 7 8
5 6
7 8
      X⌹Y
 5  4
¯4 ¯3
      (⌹Y)+.×X
 5  4
¯4 ¯3
      X≡Y+.×X⌹Y
1
</syntaxhighlight>
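Outside APL, the same identity can be sketched with NumPy (an illustrative aside; NumPy and the variable names here are assumptions, not part of the article): <syntaxhighlight lang=apl inline>X⌹Y</syntaxhighlight> corresponds to solving the linear system ''Y Z = X'' for ''Z''.

```python
import numpy as np

# The APL session above, restated: X⌹Y solves Y+.×Z = X for Z,
# i.e. Z = inv(Y) @ X in conventional notation.
X = np.array([[1.0, 2.0], [3.0, 4.0]])
Y = np.array([[5.0, 6.0], [7.0, 8.0]])

Z = np.linalg.solve(Y, X)               # plays the role of X⌹Y
assert np.allclose(Z, [[5.0, 4.0], [-4.0, -3.0]])

# (⌹Y)+.×X gives the same result, and Y+.×(X⌹Y) recovers X.
assert np.allclose(Z, np.linalg.inv(Y) @ X)
assert np.allclose(Y @ Z, X)            # X ≡ Y+.×X⌹Y
```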

== Applications ==

From the properties of the Moore-Penrose inverse (which [[Matrix Inverse]] uses), Matrix Divide can not only be used to solve a system of linear equations, but also to find the linear least squares solution to an overdetermined system.

The following example solves the system of equations <math>x+2y=5</math>, <math>2x-y=8</math>. The answer is <math>x=4.2</math>, <math>y=0.4</math>.

<syntaxhighlight lang=apl>
      ⎕←X←2 2⍴1 2 2 ¯1
1  2
2 ¯1
      Y←5 8
      Y⌹X
4.2 0.4
</syntaxhighlight>

The following example solves the linear least squares over the five points <math>(1,5), (2,1), (3,4), (4,2), (5,8)</math>. The answer is <math>y=1.9 + 0.7x</math>.

<syntaxhighlight lang=apl>
      ⎕←X←1,⍪⍳5
1 1
1 2
1 3
1 4
1 5
      Y←5 1 4 2 8
      Y⌹X
1.9 0.7
</syntaxhighlight>
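For comparison, the same fit can be reproduced outside APL with NumPy's least-squares solver (an illustrative sketch; NumPy and these names are not part of the original article):

```python
import numpy as np

# Design matrix matching the APL expression 1,⍪⍳5:
# a column of ones (the intercept) beside x = 1..5.
x = np.arange(1.0, 6.0)
X = np.column_stack([np.ones_like(x), x])
Y = np.array([5.0, 1.0, 4.0, 2.0, 8.0])

# Y⌹X: least-squares solution of X @ coeffs ≈ Y.
coeffs, residuals, rank, sv = np.linalg.lstsq(X, Y, rcond=None)
assert np.allclose(coeffs, [1.9, 0.7])  # the fitted line y = 1.9 + 0.7x
```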

When used with real vectors as both arguments, <syntaxhighlight lang=apl inline>Y×X⌹Y</syntaxhighlight> gives the [[wikipedia:Projection (linear algebra)#Finding projection with an inner product|projection]] of X onto a basis vector Y. The remaining component of X, namely <syntaxhighlight lang=apl inline>R←X-Y×X⌹Y</syntaxhighlight>, is [[wikipedia:Orthogonality#Euclidean vector spaces|orthogonal]] to Y (<syntaxhighlight lang=apl inline>R+.×Y</syntaxhighlight> is zero).

<syntaxhighlight lang=apl>
      (X Y)←(2 7)(3 1)
      X⌹Y
1.3
      Y×X⌹Y  ⍝ Projection of X onto Y
3.9 1.3
      X-Y×X⌹Y  ⍝ The remaining component in X
¯1.9 5.7
      ⎕CT>|Y+.×X-Y×X⌹Y  ⍝ ∧ is orthogonal to Y (with negligible error)
1
</syntaxhighlight>
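The vector case reduces to ordinary dot products, as this NumPy sketch (an illustration, not part of the original article) shows: for vectors, <syntaxhighlight lang=apl inline>X⌹Y</syntaxhighlight> is the scalar ''(X·Y)/(Y·Y)''.

```python
import numpy as np

X = np.array([2.0, 7.0])
Y = np.array([3.0, 1.0])

# For vectors, X⌹Y is the least-squares scalar c minimising |X - c×Y|,
# which is (X·Y)/(Y·Y).
c = (X @ Y) / (Y @ Y)
assert abs(c - 1.3) < 1e-12

proj = c * Y            # Y×X⌹Y: projection of X onto Y
rest = X - proj         # the remaining, orthogonal component
assert np.allclose(proj, [3.9, 1.3])
assert abs(rest @ Y) < 1e-9   # R+.×Y is zero up to rounding
```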

== External links ==

* [http://microapl.com/apl_help/ch_020_020_280.htm APLX]
* [http://wiki.nars2000.org/index.php/Matrix_Inverse/Divide NARS2000]
* J [https://www.jsoftware.com/help/dictionary/d131.htm Dictionary], [https://code.jsoftware.com/wiki/Vocabulary/percentdot#dyadic NuVoc] (as <syntaxhighlight lang=j inline>%.</syntaxhighlight>)


{{APL built-ins}}[[Category:Primitive functions]]