=== Magic functions ===
{{Main|Magic function}}
The technique of implementing APL primitives using other primitives, or even simpler cases of the same primitive, can be advantageous for performance in addition to being easier for the implementer.<ref>[[Roger Hui]]. [http://www.dyalog.com/blog/2015/06/in-praise-of-magic-functions-part-one/ "In Praise of Magic Functions: Part I"]. [[Dyalog Ltd.|Dyalog]] blog. 2015-06-22.</ref> Even when a primitive does not use APL directly, reasoning in APL can lead to faster implementation techniques.<ref>[[Marshall Lochbaum]]. [https://www.dyalog.com/blog/2018/06/expanding-bits-in-shrinking-time/ "Expanding Bits in Shrinking Time"]. [[Dyalog Ltd.|Dyalog]] blog. 2018-06-11.</ref>
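
For example, a simplified model of the Where function (<code>⍸</code>) on a Boolean vector can be written as an ordinary defined function. This is only a sketch of the idea, assuming Dyalog APL with <code>⎕IO←1</code> and a simple Boolean vector argument; a real magic function in an interpreter covers more argument types and edge cases:

<source lang=apl>
⍝ Simplified model of Where (⍸) on a Boolean vector, written in APL itself.
⍝ Assumes Dyalog APL, ⎕IO←1, and a simple Boolean vector argument.
Where ← {⍵/⍳≢⍵}

      Where 0 1 1 0 1
2 3 5
      ⍸ 0 1 1 0 1        ⍝ the primitive gives the same result
2 3 5
</source>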
=== APL hardware ===
{{Main|APL hardware}}
APL hardware is hardware designed to support APL array operations natively, departing from the popular understanding of APL as an interpreted language running on conventional processors. Unlike scalar architectures such as x86, which primarily operate on individual values one at a time, a native APL architecture would operate on entire arrays at a time, increasing the speed of APL processing.
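
The contrast can be illustrated in APL itself: the whole-array expression below describes a single addition over all elements, while the Each-operator form applies the function one scalar at a time, roughly modelling how a scalar processor would execute the same computation. This is an illustrative comparison in Dyalog APL, not a description of any particular hardware design:

<source lang=apl>
      A ← 1 2 3 4
      B ← 10 20 30 40
      A + B        ⍝ whole-array addition: one operation over all elements
11 22 33 44
      A +¨ B       ⍝ element-at-a-time addition via the Each operator
11 22 33 44
</source>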


=== Alternate array representations ===
