Navigation Class Library  0.1.0
Physical Mapping of Database Files

Because there are many database files (possibly more than one million), it is very important to use them efficiently.
This document describes reading only, because navigation software of this kind never needs to write anything into the databases.

So, let's see how to handle read-only files:

  • The Conventional Way:
    A trivial way to use such a file is to load it into memory by some generic file reader function. This is the most popular method in use.
    Let's see an example (without error handling):
    #include <fcntl.h>   // open()
    #include <stdlib.h>  // malloc()
    #include <unistd.h>  // read()

    int size = 2000;                         // the size to be loaded
    char *buffer = malloc(size);             // buffer for the data
    int fd = open("path/to/file", O_RDONLY); // open the file read-only
    read(fd, buffer, size);                  // read it into memory
    Although this seems simple at first sight, it raises some important problems. Let's see them:
    • Memory Allocation:
      The software must decide the size of the memory used for file loading and must allocate buffers for the data. Because the buffer is written by the software, the operating system sees its pages as modified (dirty). This triggers the virtual memory handling problem described below.
    • Loading:
      If the software loads such a file into memory, it must decide which part of the file to load, which usually means loading more than necessary. Higher-level functions may add their own caching, which means additional copying of memory.
    • Cache Handling:
      To make the operation fast enough, the recently used information must be kept in memory. With a reasonable memory budget it is not possible to load everything, so the software must use a caching method. Such a cache may share memory within the application (at some speed cost), but it is very hard (if not impossible) to share it with other processes. This matters a lot on a system running many processes.
    • Virtual Memory Usage:
      The operating system is responsible for distributing resources, including memory pages, among the processes. Because these memory pages are modified by the user, the operating system must write them out to the page file before it can give them to another process. This cannot be done without a page file, and it also has a serious speed impact. Without a page file, such pages are unavailable to other processes.

  • Memory Mapping:
    All the problems described above are solved by a single step: mapping the file into memory.
    How does it work?
    • Opening File:
      Opening a file for mapping (using the mmap() function on Unix/Linux systems) is very fast: it does not load anything into memory, but builds a virtual memory area at the given address backed by the content of the given file.
    • Loading:
      It is not necessary to do anything about it, because it is done by the kernel.
      The data is loaded by the virtual memory system on demand: the first read access triggers a page fault, which loads the corresponding memory page (usually 4k) into physical memory. This is done very efficiently and needs no additional copying. It also guarantees that only the necessary pages are loaded, without any user-level management.
    • Cache Handling:
      It is important to note that read-only mapped memory does not belong to any process: it is handled by the kernel. It is not released after closing; if any process opens the file again, it gets the same mapping in memory immediately. This amounts to a kernel-level cache, which is more efficient than any kind of process-level cache.
      The memory usage does not need any limit: the whole physical memory can be used this way without any slowdown. Why? The answer is simple: the kernel can take any page of these mapped files just as it would from the free pool, because they can be reloaded on demand at any time. That is why a process-level cache is completely unnecessary.
    • Virtual Memory:
      Such page mapping uses the virtual memory support of the kernel even if no page file is in use, so virtual memory support must be compiled into the kernel.
      Note that using a page file is not recommended on flash-based systems.
    • 64-bit systems:
      Such systems have a very large virtual memory space, even when the amount of physical memory is low.
      It is possible to map all the database files at startup (even if this needs hundreds of gigabytes of address space) and use them as if they were in memory. This makes the software simpler: smaller and faster. However, on-demand mapping seems fast enough, so mapping everything at startup is probably not necessary.
    • Other Operating Systems:
      This technique is available on all other Unix-based operating systems too (such as QNX). Windows-like systems offer similar functionality (see CreateFileMapping()), which is available on Windows CE as well, but I don't know anything about its efficiency.

That's why I chose the memory mapping method for database file handling.
Because of memory mapping, it is possible to use binary structures in these database files directly. This makes things easier and faster, as described on page Database Files.
It must be decided whether to use packed structures or not. Using them generates a lot of unaligned accesses: if the system cannot perform such accesses, packed structures cannot be used. An unaligned access is slower in any case, but I found the slowdown acceptable. The advantage of packed structures is lower memory consumption: some space can be saved this way. Because my navigation database needs a lot of 16- and 8-bit data, using packed structures seems necessary. The target hardware (x86 and ARM) is able to perform unaligned accesses.