Leading axis theory

Leading axis theory, or the leading axis model, is an approach to array language design and use that emphasizes working with arrays by manipulating their cells and mapping functions over leading axes, implicitly using function rank or explicitly using the Rank operator. It was initially developed in SHARP APL in the early 1980s and is now a major feature of J and Dyalog APL, as well as languages influenced by them. The name "leading axis" comes from the frame, which consists of an array's leading axes; from the related concept of leading axis agreement, which extends scalar conformability; and from the emphasis on first-axis forms of functions, with other axis choices deprecated or discarded.
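
For example, in Dyalog APL (a brief sketch; similar expressions apply in J) the first-axis primitive ⊖ manipulates major cells directly, and the Rank operator ⍤ re-applies it within each row:

      ⎕←M←2 3⍴⍳6     ⍝ a matrix: two major cells (rows) of shape 3
1 2 3
4 5 6
      ⊖M             ⍝ Reverse First works on the leading axis, reordering major cells
4 5 6
1 2 3
      ⊖⍤1⊢M          ⍝ Rank re-applies the first-axis function within each row, giving ⌽M
3 2 1
6 5 4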

Features

As a complete model of programming, leading axis theory is best exemplified by J, which was designed around the theory from the start. While various APLs have developed or adopted features of the leading axis model, backwards compatibility requirements can prevent them from making certain changes needed to align fully with the theory.

J defines its single-axis functions to work on the first axis, that is, to manipulate the major cells of their arguments. In this way it unifies APL's pairs of first- and last-axis functions and operators, including Reverse, Rotate, Replicate, Expand, Reduce, and Scan. J retains two functions corresponding to APL's , and ⍪, with , serving as Ravel and Catenate First and ,. providing second- (rather than last-) axis forms of these functions; ,. is identical to ,"_1 (which transliterates to APL ,⍤¯1) in both valences.
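
The APL transliteration can be tried directly in Dyalog APL (a sketch for illustration, not J itself): monadically ,⍤¯1 ravels each major cell, and dyadically it catenates corresponding major cells, matching the two valences of J's ,. :

      ⎕←A←2 2 2⍴⍳8
1 2
3 4

5 6
7 8
      ,⍤¯1⊢A         ⍝ monadic: ravel each major cell
1 2 3 4
5 6 7 8
      M←2 2⍴⍳4 ⋄ N←2 2⍴10×⍳4
      M ,⍤¯1⊢N       ⍝ dyadic: catenate corresponding major cells
1 2 10 20
3 4 30 40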

J also extends Rotate so that it can work on multiple leading axes rather than a single axis: additional values in the left argument apply to successive leading axes of the right argument. This aligns Rotate with the SHARP APL extensions to Take, Drop, and Squad that allow short left arguments: in each case the left argument is a vector whose elements correspond to axes of the right argument, starting at the first.
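
The short left arguments of Take, Drop, and Squad can be illustrated in Dyalog APL (a sketch; the multi-axis Rotate extension is specific to J and SHARP APL and is not shown):

      ⎕←A←3 4⍴⍳12
1  2  3  4
5  6  7  8
9 10 11 12
      2↑A            ⍝ short left argument: take the first two major cells
1 2 3 4
5 6 7 8
      ¯1↓A           ⍝ drop the last major cell
1 2 3 4
5 6 7 8
      2⌷A            ⍝ Squad with a short left argument selects major cell 2
5 6 7 8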

The Rank operator is present in every language influenced by leading axis theory. By mapping over the leading axes of its arguments, it allows a left operand which works with the leading axes of its own arguments to be applied to axes other than the first. The Rank operator is the reason to define functions on leading axes: by applying Rank to a leading-axis function, the function can be made to work on any axis, or contiguous sequence of axes, of the argument.
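
For example, in Dyalog APL the first-axis reduction +⌿ can be directed at any axis of a rank-3 array by choosing the cell rank (a sketch):

      ⎕←A←2 2 3⍴⍳12
 1  2  3
 4  5  6

 7  8  9
10 11 12
      +⌿A            ⍝ sum over the first axis (across major cells)
 8 10 12
14 16 18
      +⌿⍤2⊢A         ⍝ sum over the second axis, within each major cell
 5  7  9
17 19 21
      +⌿⍤1⊢A         ⍝ sum over the last (third) axis
 6 15
24 33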

In J and SHARP APL, every function has a rank defined by the language. For example, scalar functions inherently have rank 0, as they apply only to scalars. J and SHARP APL also define close compositions, which compose two functions while retaining the rank of the first one applied. While the concept of applying with rank makes these features possible, it is unclear whether they are part of leading axis theory or simply a design decision shared by two languages with a common heritage. J additionally introduced non-close compositions, while Dyalog APL has added only non-close compositions, avoiding function rank and close composition despite otherwise adhering to leading axis theory.

Adoption in APL

Because APL was designed before the leading axis model was developed, many APL primitives do not naturally adhere to the theory and some are not compatible with it at all. The following table describes how functions and operators that act on specific axes of their arguments (omitting, for example, Enclose and Match, which apply to entire arrays) interact with leading axis theory.

Compatibility               Functions
Already compatible          Grade (⍋, ⍒), Decode (⊥), Encode (⊤)
Use first-axis form only    Reverse, Rotate (⊖), Replicate, Reduce (⌿), Expand, Scan (⍀), Catenate (⍪)
Extendible to leading axes  Take (↑), Drop (↓), indexing (⌷), scalar dyadics, Unique (∪) and most set functions (⍳∪∩~)
Incompatible                Split (↓), First (↑ or ⊃), Membership (∊), Partition (⊂ and ⊆)
Unclear                     Find (⍷)
Designed for leading axes   Rank operator (⍤), Tally (≢), Interval Index (⍸), Key (⌸)
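
For example, Grade and Tally from the table are already leading-axis functions in Dyalog APL: Grade orders the major cells of its argument and Tally counts them (a brief sketch):

      ⎕←M←3 2⍴3 1 1 4 1 5
3 1
1 4
1 5
      ⍋M             ⍝ Grade orders major cells (here, rows)
2 3 1
      ≢M             ⍝ Tally counts major cells
3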

Backwards-compatible language changes are possible only for functions which are extendible to leading axes. The following extensions have been made in order to support leading axis theory:

Functions                       SHARP APL    Dyalog APL
Take, Drop                      19.0         13.0
Indexing function               19.0         13.0
Bracket indexing                No           No
Scalar dyadics                  Yes          No
Unique                          Yes          17.0
Index Of                        No*          14.0
Union, Intersection, Without    No           No
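
For example, the Dyalog extensions of Unique (17.0) and Index Of (14.0) treat a matrix as a collection of major cells (a brief sketch):

      ⎕←M←4 2⍴1 2 3 4 1 2 5 6
1 2
3 4
1 2
5 6
      ∪M             ⍝ Unique removes duplicate major cells
1 2
3 4
5 6
      M⍳2 2⍴5 6 3 4  ⍝ Index Of searches for major cells of the right argument
4 2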

Index Of in SHARP APL was not extended to apply to major cells of the left argument as in J and Dyalog; instead it was given rank 1 0, making such a change impossible. The leading-axis extension of scalar dyadics in SHARP is a direct consequence of giving them function rank 0, as SHARP's concept of function rank includes leading axis agreement.

History

Leading axis theory was first developed by employees of I. P. Sharp Associates including Ken Iverson, Arthur Whitney, and Bob Bernecky in the early 1980s: the Rank operator itself is attributed to Whitney, who invented it while travelling to the APL82 conference. It was further developed by Iverson and Roger Hui when creating the J language in the 1990s and 2000s; the leading axis model and its various incompatibilities with APL had been a major reason to break with APL and create a new language.

Leading axis theory was brought to nested APLs by Dyalog APL in the 2010s after Dyalog Ltd. employed Hui. Working with Jay Foad and Morten Kromberg, Hui designed and implemented versions of Rank and other J functionality compatible with Dyalog's nested arrays.

