Recently, a couple of issues have been spawned by attempts to convert (geographic) digital maps to ZIM format:
The problem boils down to the fact that the count of entries in such a ZIM file created from full-world data exceeds the count of entries in our thus far largest ZIM files by an order of magnitude (hundreds of millions vs tens of millions in wikipedia_en_all_maxi). Such a leap pushes the current implementation of libzim, if not the ZIM file format spec itself, beyond limits implicitly assumed during design.
The huge number of entries leads to the following challenges:
- ZIM file size (inefficient use of ZIM file space by the entry listing, which has always been assumed to be much smaller than the space taken by the entry data)
- Memory consumption (during both ZIM file creation and consumption)
- Performance (during both ZIM file creation and consumption)
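To see why the entry listing stops being negligible at this scale, a back-of-envelope estimate helps. The per-entry sizes below are illustrative assumptions, not exact figures from the ZIM spec (real dirents vary with URL and title length); the point is only that fixed per-entry overhead, multiplied by hundreds of millions of entries, reaches a size comparable to the entry data itself:

```python
# Back-of-envelope estimate of ZIM directory (entry listing) overhead.
# All per-entry sizes are assumptions for illustration: an 8-byte URL
# pointer, a 4-byte title index, and a hypothetical ~40-byte average
# dirent (header + URL + title strings).

URL_PTR_BYTES = 8       # assumed: one offset per entry in the URL pointer list
TITLE_IDX_BYTES = 4     # assumed: one index per entry in the title listing
AVG_DIRENT_BYTES = 40   # assumed average; real dirents vary in size

def directory_bytes(entry_count: int) -> int:
    """Rough total bytes consumed by the entry-listing structures."""
    return entry_count * (URL_PTR_BYTES + TITLE_IDX_BYTES + AVG_DIRENT_BYTES)

GiB = 1024 ** 3
# ~20M entries (the order of wikipedia_en_all_maxi) vs ~300M (full-world maps)
for entries in (20_000_000, 300_000_000):
    size = directory_bytes(entries)
    print(f"{entries:>11,} entries -> ~{size / GiB:.1f} GiB of entry listing")
```

Under these assumed sizes, an order-of-magnitude jump in entry count turns roughly a gigabyte of listing overhead into well over ten gigabytes, which is no longer dwarfed by the entry data.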