WHITE PAPER
Performance Metrics
& Benchmarks:
Berkeley DB
Purpose
This white paper presents performance measurements for Berkeley DB under various conditions and on several major operating systems and platforms. The performance estimates presented here are designed to help you understand what to expect from Berkeley DB in some common configurations on commodity hardware running widely used operating systems. Your application's performance depends on your data, data access patterns, cache size, other configuration parameters, operating system and hardware. Benchmarks are almost never representative of the performance of a particular application; however, they provide some guidelines and help to set basic operational expectations.
Introduction
Sleepycat Software's Berkeley DB is a database engine designed to serve as the substrate within any application requiring efficient and robust data storage. Berkeley DB is a complete data management library, including support for concurrent access, ACID transactions and recovery, and replication for highly available systems. Berkeley DB is designed to support any structured information model and has been incorporated into relational, object, network and hierarchical systems. Due to its scalability and minimal overhead, it has also been used in a number of reliable messaging, data streaming, caching and data archival systems.
Platforms and Testing Methodology
All performance tests were run on the following commodity hardware and operating system combinations, without customized configuration, tuning or any other performance-enhancing techniques. All tests were run on clean installations, in multi-user mode, on otherwise quiescent systems. For the workloads presented, these are results that can be obtained under actual use.
The following platforms represent the most common operating systems and hardware platforms we’ve encountered in
our existing customer base. It is by no means an exhaustive list, nor does it represent the extent of supported platforms
for Berkeley DB. The version of Berkeley DB tested was 4.3.
• Linux – SuSE Linux 9.1 running on an AMD Athlon™ 64 3200+ processor at 2GHz with 1GB of RAM. A Western Digital Caviar SE 120GB EIDE (7200RPM) hard drive with 8MB of cache RAM was connected via an SiS 5513 controller and formatted with the EXT3 file system with journaling enabled. The compiler was GCC 3.3.3. The system reported 4046.84 BogoMIPS.
• Solaris – Sun Solaris 10 (x64) running on a SunFire v20z with dual AMD Opteron™ processors at 2GHz and 16GB of RAM. A 73GB Seagate Cheetah (ST373453LC) SCSI hard drive was connected via an integrated SCSI controller on a PCI-X bus and formatted with the UFS file system.
• Windows XP – Microsoft Windows™ XP Professional (XP Version 2002, Service Pack 2) running on an AMD Athlon™ 2600+ at 2.0GHz with 768MB of RAM. A DiamondMax Plus 9 (6Y060L0) 60GB (7200RPM) DMA/ATA-133 hard drive with 2MB of cache RAM was connected via an integrated nVIDIA nForce2 ATA controller on the PCI bus and formatted with an NTFS file system with journaling enabled. The compiler was Microsoft Visual Studio 6.
(NOTE: The timer resolution of Windows XP is much lower than that provided by the other operating systems; as a result, the Windows results may be skewed by up to 10%.)
• BSD Unix – FreeBSD 5.3p4 running on an Intel™ Pentium™ 4 CPU at 3.00GHz with 2GB of RAM. An integrated Intel ICH5 SATA150 controller on a PCI bus controlled a Seagate Barracuda 7200.7 Plus (ST3200822A) Ultra ATA/100 (7200RPM) hard drive formatted with the UFS file system. The compiler was GCC 3.4.2.
• Mac OS X – Mac OS X version 10.3.9 running on a Power Mac G4 with dual CPUs at 1.25GHz and 2GB of DDR SDRAM. An IBM Deskstar 120GXP 120GB UATA100 hard drive was connected via the integrated ATA bus and formatted with the HFS+ file system with journaling enabled. The compiler was GCC 3.3.
Note that the systems vary greatly. This is intentional. It is not the goal of this paper to demonstrate the relative performance of the profiled systems when performing similar tasks, but rather to demonstrate basic performance metrics on a wide variety of systems so that you can estimate how your system might perform when using Berkeley DB.
Any transactional database claiming durability (the 'D' in ACID) must interact with some form of stable storage or replicate the transaction to other systems. The most common case is to use a hard disk drive. Hard drive technology, bus subsystems, and memory interconnects vary widely, as does their performance. For this set of tests, we used commodity disk technology – that is, slow disks with average subsystems. If you add a fast disk subsystem with the fastest disks, you'll see much higher throughput with Berkeley DB, just as you would with any disk-bound application. Additional hardware options can also affect performance. For instance, using a disk subsystem with a battery-backed memory write cache will improve throughput. There are other, faster storage systems, some based entirely on memory, but they are less common. These hardware-based techniques affect all databases in a similar manner and so
are irrelevant for this study. This paper explores the performance curve of the Berkeley DB software, not the surrounding hardware.
Download the raw results at
Download the test code at
Performance
Throughput – records read or written in a fixed time interval – is the most common measure of database performance.
Each test in this benchmark uses fixed-size records with eight-byte keys and a thirty-two-byte data portion. The page size is 32KB, the bulk-transfer buffer is 4MB, the log buffer is 8MB, and the buffer cache is 32MB. Source code for the benchmark is roughly 350 lines of C code and may be downloaded from the URL supplied above.
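
As a rough illustration of these settings, the sketch below configures an environment and database through the Berkeley DB 4.3 C API. The function name, the "bench.db" file name and the error handling are ours, not the benchmark's; set_cachesize, set_lg_bsize and set_pagesize are the standard calls for these parameters.

    #include <db.h>

    /* Open an environment and database configured roughly as in the
     * benchmark: 32MB cache, 8MB log buffer, 32KB pages.  A sketch,
     * not the benchmark source itself. */
    int
    open_bench_db(DB_ENV **envp, DB **dbp, const char *home)
    {
        DB_ENV *env;
        DB *db;
        int ret;

        if ((ret = db_env_create(&env, 0)) != 0)
            return (ret);
        (void)env->set_cachesize(env, 0, 32 * 1024 * 1024, 1); /* 32MB cache */
        (void)env->set_lg_bsize(env, 8 * 1024 * 1024);         /* 8MB log buffer */
        if ((ret = env->open(env, home, DB_CREATE | DB_INIT_MPOOL |
            DB_INIT_LOCK | DB_INIT_LOG | DB_INIT_TXN, 0)) != 0)
            goto err;

        if ((ret = db_create(&db, env, 0)) != 0)
            goto err;
        (void)db->set_pagesize(db, 32 * 1024);                 /* 32KB pages */
        if ((ret = db->open(db, NULL, "bench.db", NULL,
            DB_BTREE, DB_CREATE | DB_AUTO_COMMIT, 0)) != 0) {
            (void)db->close(db, 0);
            goto err;
        }

        *envp = env;
        *dbp = db;
        return (0);
    err:    (void)env->close(env, 0);
        return (ret);
    }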
Berkeley DB supports concurrent database access on multi-processor systems, and many application workloads benefit from such concurrency. The effects of concurrency, however, are not measured in the following tests: they are single-threaded and single-process, and on a multi-processor system they use at most one CPU at a time.
This first test measures throughput, as operations per second, using Berkeley DB Data Store (DS). Since this is a non-transactional, in-memory test, read performance depends on the operating system, memory, and CPU rather than on the underlying I/O or disk system. When writing with DS, data is cached in memory and flushed when buffers fill or when the application explicitly asks for data to be flushed to disk. These tests are written such that writes occur in memory and are never flushed to disk.
DS (ops/sec)            Linux      Solaris    Win XP     BSD        Mac OS X
Single-record read      1,002,200  1,008,580  1,000,000  1,108,920  524,360
Single-record write     766,034    550,748    447,628    614,116    317,141
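
To make the shape of the test concrete, here is a hedged sketch of a single DS write and read with the benchmark's eight-byte keys and thirty-two-byte data. It assumes an already opened, non-transactional DB handle; the key contents are illustrative.

    #include <string.h>
    #include <db.h>

    /* One non-transactional (DS) write and read.  The put lands in the
     * buffer cache; nothing forces it to disk unless the cache fills or
     * the application calls db->sync(). */
    int
    ds_write_read(DB *db)
    {
        DBT key, data;
        char kbuf[8], dbuf[32];
        int ret;

        memcpy(kbuf, "k0000001", sizeof(kbuf));  /* 8-byte key */
        memset(dbuf, 0, sizeof(dbuf));           /* 32-byte data portion */

        memset(&key, 0, sizeof(key));
        memset(&data, 0, sizeof(data));
        key.data = kbuf;
        key.size = sizeof(kbuf);
        data.data = dbuf;
        data.size = sizeof(dbuf);

        if ((ret = db->put(db, NULL, &key, &data, 0)) != 0)
            return (ret);

        memset(&data, 0, sizeof(data));  /* let Berkeley DB return its own buffer */
        return (db->get(db, NULL, &key, &data, 0));
    }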
Berkeley DB provides optimized access to sequential data, allowing faster overall throughput. If you read large amounts of data sequentially from your databases, using the bulk access APIs will improve your read performance.
DS (ops/sec)                       Linux       Solaris     Win XP     BSD         Mac OS X
Read using bulk APIs               13,501,800  12,114,500  4,565,910  19,299,700  5,615,280
Speedup relative to non-bulk APIs  13.47       12.01       4.57       17.40       10.71
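
The sketch below shows the documented bulk-retrieval idiom (DB_MULTIPLE_KEY with a user-supplied buffer), sized to the benchmark's 4MB bulk buffer. The function name is ours and the per-record processing is left as a comment.

    #include <errno.h>
    #include <stdlib.h>
    #include <string.h>
    #include <db.h>

    #define BULKLEN (4 * 1024 * 1024)   /* 4MB bulk buffer, as in the benchmark */

    /* Scan an entire database with the bulk-retrieval interface.  Each
     * c_get call returns as many records as fit in the 4MB buffer. */
    int
    bulk_scan(DB *db)
    {
        DBC *dbc;
        DBT key, data;
        void *p, *kptr, *dptr;
        u_int32_t klen, dlen;
        int ret;

        memset(&key, 0, sizeof(key));
        memset(&data, 0, sizeof(data));
        if ((data.data = malloc(BULKLEN)) == NULL)
            return (ENOMEM);
        data.ulen = BULKLEN;
        data.flags = DB_DBT_USERMEM;    /* we supply the bulk buffer */

        if ((ret = db->cursor(db, NULL, &dbc, 0)) != 0) {
            free(data.data);
            return (ret);
        }
        while ((ret = dbc->c_get(dbc, &key, &data,
            DB_MULTIPLE_KEY | DB_NEXT)) == 0)
            for (DB_MULTIPLE_INIT(p, &data);;) {
                DB_MULTIPLE_KEY_NEXT(p, &data, kptr, klen, dptr, dlen);
                if (p == NULL)
                    break;
                /* process one record: kptr/klen, dptr/dlen */
            }
        if (ret == DB_NOTFOUND)         /* normal end of database */
            ret = 0;
        (void)dbc->c_close(dbc);
        free(data.data);
        return (ret);
    }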
Most applications using Berkeley DB will require transactional access to data, either because of concurrent access or simply to maintain a reliable repository of data. This is accomplished using the Berkeley DB Transactional Data Store (TDS) features. In this first test of TDS we enable the locking, logging and transaction subsystems. Although read-only transactions do not perform logging, performing reads inside transactions introduces locking overhead. That overhead is reflected in the following test results.
TDS (ops/sec)           Linux    Solaris  Win XP   BSD      Mac OS X
Single-record read      466,623  328,809  259,134  331,523  171,549
Relative to DS reads    .609     .597     .579     .540     .541
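
For reference, a minimal sketch of the transactional read path follows, assuming an environment opened with the locking, logging and transaction subsystems enabled. Even though a read-only transaction writes no log records, the get call acquires read locks inside the transaction, which is the overhead measured above.

    #include <db.h>

    /* Read one record inside a transaction.  Read-only transactions do
     * no logging, but the get still takes read locks. */
    int
    tds_read(DB_ENV *env, DB *db, DBT *key, DBT *data)
    {
        DB_TXN *txn;
        int ret;

        if ((ret = env->txn_begin(env, NULL, &txn, 0)) != 0)
            return (ret);
        if ((ret = db->get(db, txn, key, data, 0)) != 0) {
            (void)txn->abort(txn);
            return (ret);
        }
        return (txn->commit(txn, 0));   /* releases the read locks */
    }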
The transactional system can be configured to run entirely in memory. If the process fails, the system is powered down,
or some other unforeseen failure occurs, the data will be lost. Essentially, durability is sacrificed for speed. Replicated
systems running entirely in core memory using Berkeley DB High Availability (HA) can provide full ACID transactional
semantics, including durability, as long as one replica remains active, and will perform much faster than systems storing
data to disk. The following test measures only the in-memory speed of the transactional system, without replication.
TDS in-memory (ops/sec)  Linux    Solaris  Win XP   BSD      Mac OS X
Single-record write      239,824  143,466  125,486  111,671  82,256
Relative to DS writes    .313     .260     .280     .182     .259
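
One way to build such an all-in-memory transactional configuration is sketched below using the DB_PRIVATE and DB_LOG_INMEMORY environment flags; whether this matches the benchmark's exact setup is an assumption on our part.

    #include <db.h>

    /* Open a transactional environment that never touches the disk.
     * ACID minus D: if the process or machine fails, the data is gone. */
    int
    open_inmem_env(DB_ENV **envp)
    {
        DB_ENV *env;
        int ret;

        if ((ret = db_env_create(&env, 0)) != 0)
            return (ret);
        (void)env->set_flags(env, DB_LOG_INMEMORY, 1); /* log stays in memory */
        (void)env->set_lg_bsize(env, 8 * 1024 * 1024); /* room for the log */
        /* DB_PRIVATE keeps all shared regions in process memory. */
        if ((ret = env->open(env, NULL,
            DB_CREATE | DB_PRIVATE | DB_INIT_MPOOL |
            DB_INIT_LOCK | DB_INIT_LOG | DB_INIT_TXN, 0)) != 0) {
            (void)env->close(env, 0);
            return (ret);
        }
        *envp = env;
        return (0);
    }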
When designing your system, your requirements will determine what, if any, transactional guarantees you can relax.
The following tests examine the performance of each tradeoff from the most relaxed to the most stringent.
Applications requiring the 'ACI' (atomicity, consistency, and isolation) characteristics of transactions, but willing to risk the 'D' (durability), can configure Berkeley DB so that it does not synchronize the transaction log to disk on commit. In this case, the transaction log buffers are maintained in memory and written to stable storage only when Berkeley DB determines that they are full, rather than as a component of each transaction commit. If the application, operating system, or hardware fails, any data in those buffers not yet written to disk is lost. This 'TDS no-sync' case is measured below.
TDS no-sync (ops/sec)            Linux    Solaris  Win XP  BSD     Mac OS X
Single-record write              141,109  107,626  68,521  70,582  68,591
Relative to TDS in-memory logs   .588     .755     .546    .632    .834
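
In the C API this behavior is selected with a single environment flag, set before the environment is opened (a sketch, assuming an env handle configured as earlier):

    /* Commit transactions without flushing the log to disk; the log
     * buffer is written out only when it fills. */
    (void)env->set_flags(env, DB_TXN_NOSYNC, 1);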
The performance is broadly similar to that of the 'TDS in-memory logs' case, though somewhat lower. In both cases, the disk is never accessed during the test; the difference is that in the 'TDS no-sync' case, the data is eventually written to disk.
Next, we consider a configuration that guarantees no loss of data after an application crash, as long as the operating system does not also crash. TDS is again configured with locking, logging and transactions all enabled. In this case, however, Berkeley DB writes the log to the operating system at each transaction commit but does not ask it to flush those writes to the disk itself. The operating system acknowledges the write and is likely to buffer writes for some time before flushing the data to disk. In this mode, TDS sacrifices some durability, but less than before.
TDS write no-sync (ops/sec)  Linux   Solaris  Win XP  BSD     Mac OS X
Single-record write          90,774  85,101   45,748  62,087  34,888
Relative to TDS no-sync      .643    .791     .668    .880    .509
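
This mode corresponds to the DB_TXN_WRITE_NOSYNC environment flag, sketched below on the same assumed env handle:

    /* Write the log to the operating system at each commit, but do not
     * force it to the disk; an OS crash can still lose transactions. */
    (void)env->set_flags(env, DB_TXN_WRITE_NOSYNC, 1);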
Notice the significant drop in performance: each transaction commit now requires a call into the operating system and a copy of the data from the application's memory into the operating system's write buffers.
If we enable fully synchronous writes, throughput is limited by the disk's ability to perform those writes. The resulting numbers are therefore determined entirely by your disk system, not by the software.
Note that it is possible to get spectacular numbers in this mode, because some disks report writes as having completed
once the data has been written into the disk’s cache, instead of when the data is actually on the disk. This performance
boost comes at the price of durability guarantees, and we advise running critical applications on disk subsystems that
do not respond to synchronous write requests until the data is persistently on the disk.
Berkeley DB can optionally group commit operations, allowing higher throughput when doing synchronous transactions.
In conclusion, Berkeley DB allows application designers to make transaction durability tradeoffs when building a high-throughput system, based on their design goals and the characteristics of the data involved. With an understanding of these and other tuning parameters found in Berkeley DB and in your deployment system, it is possible to exceed these measurements.
Aspects of Berkeley DB Not Measured
• Out-of-cache performance – The impact of going to disk is so great that when measuring database performance out of cache, you are essentially measuring the underlying operating system and associated hardware. Time spent waiting on disk accesses accounts for the vast majority of the overall access time. As a result, the tests have been designed to work entirely in cache, except when it is necessary to write to disk for transactional guarantees. Systems with faster disks, or with disk controllers that have built-in RAM-based cache, will perform better.