There are so many developers who insist on always using "optimal" algorithms for everything, without realizing that those often carry really shitty constant overhead and that data locality is a thing. A simple array can easily outperform a B-tree or other complex data structure if your n isn't too large, and can still give acceptable performance even when it is.
Optimizing for code size and memory use will often get you pretty good performance for free. Everything depends on context.
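To make the point concrete, here's a minimal sketch (in Python, with hypothetical names) of the "small n" case: a flat list of key/value pairs searched linearly, which for tiny collections is competitive with a dict because the data is contiguous and there's no hashing overhead.

```python
# Minimal sketch: for tiny collections, a linear scan over a flat list
# of (key, value) pairs can rival a hash table, since there is no
# hashing step and the pairs sit close together in memory.

def assoc_get(pairs, key):
    """Linear scan over a small list of (key, value) tuples."""
    for k, v in pairs:
        if k == key:
            return v
    return None

pairs = [("x", 1), ("y", 2), ("z", 3)]
assert assoc_get(pairs, "y") == 2
assert assoc_get(pairs, "missing") is None
```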
@ayo Picolisp is my favorite example of this: it belongs to a family of surprisingly fast Lisp interpreters whose authors were careful about data representation and feature design. It makes for my fastest MAL implementation so far, despite having only three data types: numbers, symbols, and pairs. For example, its OOP implementation forgoes the typical struct approach and uses symbols with property lists. Walking through a small property list is faster than a hash table lookup.
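The property-list idea can be sketched in Python as a flat list of alternating keys and values; this is illustrative only, not Picolisp's actual implementation:

```python
# Sketch of a Lisp-style property list: keys and values alternate in
# one flat list, and lookup walks it two items at a time.
# (Illustrative only; not how Picolisp itself is implemented.)

def plist_get(plist, key, default=None):
    """Walk the flat list two items at a time, returning the value."""
    for i in range(0, len(plist) - 1, 2):
        if plist[i] == key:
            return plist[i + 1]
    return default

def plist_put(plist, key, value):
    """Update an existing property in place, or append a new one."""
    for i in range(0, len(plist) - 1, 2):
        if plist[i] == key:
            plist[i + 1] = value
            return plist
    plist.extend([key, value])
    return plist

obj = ["color", "red", "size", 3]
assert plist_get(obj, "size") == 3
plist_put(obj, "color", "blue")
assert plist_get(obj, "color") == "blue"
```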
@ayo This actually came up in the implementation of the templating system I described (again) today.
One day I realized I was usually dealing with tiny amounts of data, so I switched from a hashmap to a linked list for storing its variables.
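A hashmap-to-linked-list swap like that might look something like this in Python (hypothetical names; the original implementation isn't shown here):

```python
# Sketch of a tiny variable store as a singly linked list of
# cons-cell-style nodes, newest binding at the head.
# (Hypothetical names; not the original templating code.)

class Node:
    __slots__ = ("key", "value", "next")

    def __init__(self, key, value, next=None):
        self.key, self.value, self.next = key, value, next

def lookup(head, key):
    """Walk the chain from the head; O(n), fine when n is tiny."""
    node = head
    while node is not None:
        if node.key == key:
            return node.value
        node = node.next
    return None

# Prepending a new binding is O(1) and needs no rehashing.
env = Node("name", "world", Node("greeting", "hello"))
assert lookup(env, "greeting") == "hello"
assert lookup(env, "missing") is None
```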
@ayo See also http://dtrace.org/blogs/bmc/2018/09/28/the-relative-performance-of-c-and-rust/ , where the memory locality of Rust's standard containers results in a naive reimplementation being significantly faster than a highly optimised C version.