The benchmark environment is as follows:
- Dell Inspiron 7373/i5 8th generation
- Windows 10 Home
- Target databases (all 32-bit)
- PostgreSQL 10.6: used libpq
- Firebird 3.0.3: used the OO API (C++)
- SQLite3: #include <sqlite3.h> // ...... :P
(I'm not good enough yet to release something open source, so this is the best I can offer to the community...... ;) )
- Insertion speed
- 1: SQLite, in-memory database: 2 seconds
- 2: PostgreSQL, COPY: 5 seconds
- 3: PostgreSQL, bulk insert: 7 seconds
- 4: Firebird, bulk insert: 10 seconds
- 5: SQLite, save to file: 56 seconds (see the SQLite sketch after this list)
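For reference, here is a minimal SQLite sketch, not the benchmark code itself; the table, columns, and row count are invented for illustration. The only difference between the in-memory run and the save-to-file run is the path passed to sqlite3_open; wrapping the loop in one transaction is the usual way to keep SQLite from syncing on every INSERT.

```cpp
// Minimal SQLite insertion sketch (C++). Table/columns/row count are hypothetical.
#include <sqlite3.h>

int main()
{
    sqlite3* db = nullptr;
    // ":memory:" -> pure in-memory database (the fast case);
    // a file path like "result.db" -> on-disk database (the slow, I/O-bound case).
    if (sqlite3_open(":memory:", &db) != SQLITE_OK)
        return 1;

    sqlite3_exec(db, "CREATE TABLE result(id INTEGER, val REAL)", nullptr, nullptr, nullptr);

    // One big transaction so SQLite doesn't sync the journal per INSERT.
    sqlite3_exec(db, "BEGIN", nullptr, nullptr, nullptr);

    sqlite3_stmt* stmt = nullptr;
    sqlite3_prepare_v2(db, "INSERT INTO result VALUES(?, ?)", -1, &stmt, nullptr);

    for (int i = 0; i < 100000; ++i)   // arbitrary row count for the sketch
    {
        sqlite3_bind_int(stmt, 1, i);
        sqlite3_bind_double(stmt, 2, i * 0.5);
        sqlite3_step(stmt);
        sqlite3_reset(stmt);
    }

    sqlite3_finalize(stmt);
    sqlite3_exec(db, "COMMIT", nullptr, nullptr, nullptr);
    sqlite3_close(db);
    return 0;
}
```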
- Disk I/O
- Courtesy of the Windows kernel, too many small I/Os put a heavy load on the disk, making it the bottleneck
- When using SQLite to save the results to a file, where bulk insertion is impossible, disk I/O hit 100%
- In Firebird, bulk insertion is implemented via PSQL: you send a batch of records and each INSERT line in the PSQL block inserts one of them in turn (a sketch of such a block follows this list); it still loads the disk heavily (around 50%), though less than SQLite
- In Firebird, set the page size to at least 8192 to lessen the disk I/O burden
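The post doesn't show the exact PSQL, but a common way to batch inserts in Firebird is an EXECUTE BLOCK whose body holds one INSERT per record. Below is a sketch of building such a statement string; the table and column names are invented, and execution through the OO API as well as proper parameter binding are left out.

```cpp
// Sketch: building a Firebird PSQL block that inserts one small batch
// (10 rows, matching the batch size that worked best in this test).
// Table/column names are hypothetical; sending the statement via the
// Firebird OO API is omitted.
#include <string>
#include <vector>
#include <sstream>

struct Row { int id; double val; };

std::string buildInsertBlock(const std::vector<Row>& batch)
{
    std::ostringstream sql;
    sql << "EXECUTE BLOCK AS BEGIN\n";
    for (const Row& r : batch)
    {
        // One INSERT line per record; PSQL executes them sequentially.
        sql << "  INSERT INTO result(id, val) VALUES("
            << r.id << ", " << r.val << ");\n";
    }
    sql << "END";
    return sql.str();
}
```

The page size mentioned above is a property of the database file itself and is fixed when the database is created (e.g. CREATE DATABASE ... PAGE_SIZE 8192).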
- Bulk insertion
- We can speed up insertion with bulk inserts, but each database hits a speed ceiling at some batch size, regardless of disk I/O.
- Firebird: 10 records at once, regardless of page size
- Using a stored procedure
- PostgreSQL: batches of 40 records gave the best performance (a libpq sketch follows this list)
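The post doesn't say which mechanism the PostgreSQL bulk insert used; a multi-row VALUES list is one straightforward way to send 40 records per statement through libpq. The connection string, table, and data below are invented for the sketch.

```cpp
// Sketch: sending a 40-row multi-row INSERT through libpq.
// Connection string, table name, and data are hypothetical.
#include <libpq-fe.h>
#include <sstream>
#include <cstdio>

int main()
{
    PGconn* conn = PQconnectdb("dbname=bench user=postgres");
    if (PQstatus(conn) != CONNECTION_OK)
    {
        std::fprintf(stderr, "%s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    const int kBatch = 40;   // batch size that performed best in this test
    std::ostringstream sql;
    sql << "INSERT INTO result(id, val) VALUES ";
    for (int i = 0; i < kBatch; ++i)
        sql << (i ? "," : "") << "(" << i << "," << i * 0.5 << ")";

    PGresult* res = PQexec(conn, sql.str().c_str());
    if (PQresultStatus(res) != PGRES_COMMAND_OK)
        std::fprintf(stderr, "%s", PQerrorMessage(conn));
    PQclear(res);

    PQfinish(conn);
    return 0;
}
```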
- PostgreSQL
- COPY (from CSV) is incredibly fast; even the official documentation recommends COPY for bulk insertion (a libpq COPY sketch follows this list)
- In libpq, the speed is about the same whether you send the records as text or binary
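And a sketch of COPY through libpq, streaming CSV-style text with PQputCopyData. Again, the connection string, table, and row count are invented; the real benchmark presumably streamed its own data.

```cpp
// Sketch: streaming rows into PostgreSQL with COPY ... FROM STDIN via libpq.
// Connection details, table, and row count are hypothetical.
#include <libpq-fe.h>
#include <string>
#include <cstdio>

int main()
{
    PGconn* conn = PQconnectdb("dbname=bench user=postgres");
    if (PQstatus(conn) != CONNECTION_OK)
        return 1;

    PGresult* res = PQexec(conn, "COPY result(id, val) FROM STDIN WITH (FORMAT csv)");
    if (PQresultStatus(res) != PGRES_COPY_IN)
    {
        std::fprintf(stderr, "%s", PQerrorMessage(conn));
        PQclear(res);
        PQfinish(conn);
        return 1;
    }
    PQclear(res);

    for (int i = 0; i < 100000; ++i)   // arbitrary row count for the sketch
    {
        // One CSV line per row; text and binary COPY performed about the same.
        std::string line = std::to_string(i) + "," + std::to_string(i * 0.5) + "\n";
        PQputCopyData(conn, line.c_str(), static_cast<int>(line.size()));
    }

    PQputCopyEnd(conn, nullptr);   // nullptr = success, finish the COPY stream
    res = PQgetResult(conn);       // final status of the COPY command
    if (PQresultStatus(res) != PGRES_COMMAND_OK)
        std::fprintf(stderr, "%s", PQerrorMessage(conn));
    PQclear(res);

    PQfinish(conn);
    return 0;
}
```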
- Miscellany
- It's far faster to use the official interface directly than to go through a wrapper. When I first tested Firebird with SOCI it took 59 seconds; applying the same logic directly with the Firebird OO API shrank that to twenty-something seconds.