The code allocates memory for cells using a constant ADJ, which defaults to 32.
That means 32 bytes per cell.
That is excessive, since sizeof(struct cell) is 24 bytes on a 64-bit machine.
So an easy performance fix would be to redefine ADJ to 24,
which would use one-quarter less memory for cells.
The code also futzes with the pointer returned by malloc for cell memory,
with a comment saying "adjust in TYPE_BITS-bit boundary", where TYPE_BITS is usually 5.
I don't understand the reason for this: when is a "5-bit boundary"
(presumably a 32-byte boundary, since 1 << 5 == 32) important?
If anyone does understand, please respond.
I would say the code is misguided.
Maybe there is a reason concerning some arcane embedded processor?
Or endianness?
Or a custom malloc that does not align returned pointers
to a machine word boundary?
If I knew the intention of the code was, say, portability to embedded processors,
I would feel more comfortable changing it.
I help maintain the TinyScheme fork in GIMP, where ADJ remains defined as 32.
I also see that the TinyScheme fork in the GnuPG project defines ADJ as 64,
which by the reasoning above would mean 64 bytes per cell.
My mistake: it allocates cells at sizeof(struct cell), not at ADJ bytes each.
So it is NOT using excessive memory for cell structs.
But I would still like to understand why it uses ADJ at all.