XFLAIM Database Engine

name = XFLAIM
operating_system = Cross-platform
genre = Development Library
license = GPL
website = [http://developer.novell.com/wiki/index.php/FLAIM XFLAIM]



XFLAIM is an embeddable database technology developed by Novell and released as an open source project in 2006. "XFLAIM" is an acronym for XML FLexible Adaptable Information Management, which describes the technology's fundamental design goals. XFLAIM is a spin-off of the widely deployed FLAIM database engine, and the two projects share much of the same code.

XFLAIM is similar to other embeddable database engines such as SQLite and Sleepycat/Oracle's Berkeley DB. To access the functionality offered by XFLAIM, an application merely needs to link against either a static or dynamic version of the XFLAIM library. XFLAIM has been ported to a wide variety of 32- and 64-bit platforms, including NetWare, Microsoft Windows, Linux, Sun Solaris, AIX, Mac OS X, and HP-UX.

=XFLAIM Features=

Transactions

*Transaction begin, commit, abort. Use of a rollback log for transaction abort and for recovery after a crash.
*Transaction types:
**Update. Update, read, and query operations allowed.
**Read. Only read and query operations allowed. Read transactions provide a read consistent snapshot of the database as of the point in time the transaction is started.
**Automatic. A single update operation may automatically begin and end (commit or abort) a transaction if no transaction has been explicitly started.
*Automatic rollback of failed transactions (due to application or system failures).
*Periodic checkpoints to minimize recovery time after a system crash.
*No limit on size of update transactions.
*ACID principles supported: Atomicity, Consistency, Isolation, Durability.
*Group commit allows multiple update transactions to be committed to disk at once, enhancing update performance.
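The group commit idea above can be sketched in a few lines. This is an illustrative model, not XFLAIM's actual implementation: several committed transactions are buffered and made durable with a single write and fsync.

```python
import os
import tempfile

class GroupCommitLog:
    """Illustrative group commit: buffer several committed transactions
    and force them to disk with a single write + fsync."""

    def __init__(self, path, group_size=4):
        self.f = open(path, "ab")
        self.group_size = group_size
        self.pending = []          # serialized, not-yet-durable transactions
        self.durable_count = 0

    def commit(self, txn_bytes):
        self.pending.append(txn_bytes)
        if len(self.pending) >= self.group_size:
            self.flush_group()

    def flush_group(self):
        if not self.pending:
            return
        # One write and one fsync make the whole group durable at once.
        self.f.write(b"".join(self.pending))
        self.f.flush()
        os.fsync(self.f.fileno())
        self.durable_count += len(self.pending)
        self.pending = []

path = os.path.join(tempfile.mkdtemp(), "rfl.log")
log = GroupCommitLog(path, group_size=3)
for i in range(7):
    log.commit(b"txn%d;" % i)
log.flush_group()              # flush the final partial group
print(log.durable_count)       # 7
```

Batching amortizes the cost of the fsync, which is usually the dominant expense of committing a transaction.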

Roll-forward Logging

*Use of roll-forward log to minimize data that has to be written to commit a transaction.
*Roll-forward log is used in automatic recovery after a crash. Transactions that were committed since the last checkpoint will be redone.
*Multiple roll-forward log files may be used to support continuous backup feature. Files are numbered sequentially and are also identified with serial numbers to guarantee proper sequencing - no spoofing. Up to 4 billion log files supported - capacity is practically unlimited.
*Option to use only a single roll-forward log file - for applications that do not care about continuous backup.
*Roll-forward log files may be stored on a separate disk from rest of database.
*Minimal transaction logging. Only deltas logged for record modifies. Only DRNs logged for record deletes.
*Aborted transactions can be logged for debug purposes, but default is to not log them.
*Support for logging of application data.

Database Reliability and Recovery

*Automatic database recovery after a system crash. Rollback log is used to roll database back to last consistent checkpointed state. Then roll-forward log is used to redo transactions that were committed after the last checkpoint.
*Recovery is idempotent: if the system crashes during recovery, recovery resumes the next time the database is opened.
*Reliability has been tested using an automated pull-the-plug test, which randomly cycles the power on the server during high volume updates to test database recovery. Thousands of pull-the-plug iterations have been performed.
*Handling of disk-full conditions and other disk errors. Database attempts to stall new update transactions until disk-full condition is resolved - without requiring a shut down.
*Protection against media failure. Customers can take hot backups and put roll-forward logs on a different volume than the database. If they do these things, two simultaneous disk failures would be required to lose any data.
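The recovery sequence described above can be sketched abstractly (this is a conceptual model, not XFLAIM's file format): restore the last checkpointed state, then redo committed transactions from the roll-forward log. Because redo records here store absolute values, replaying them twice yields the same result, which is what makes recovery idempotent.

```python
# Illustrative crash recovery: restore the last checkpoint image, then
# redo committed transactions from the roll-forward log. Redo records
# store absolute values ("set key to v"), so replay is idempotent.

def recover(checkpoint, rfl):
    db = dict(checkpoint)              # state as of the last checkpoint
    for txn in rfl:
        if txn["committed"]:           # only committed work is redone
            for key, value in txn["writes"]:
                db[key] = value        # safe to apply more than once
    return db

checkpoint = {"a": 1, "b": 2}
rfl = [
    {"committed": True,  "writes": [("a", 10), ("c", 3)]},
    {"committed": False, "writes": [("b", 99)]},   # lost on crash
    {"committed": True,  "writes": [("b", 20)]},
]
once  = recover(checkpoint, rfl)
twice = recover(once, rfl)             # a second pass changes nothing
print(once)    # {'a': 10, 'b': 20, 'c': 3}
```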


Block Checksums

*Block checksums are set on all blocks in the database when writing to disk and are verified whenever blocks are read from disk.
*The checksums automatically detect database inconsistencies.
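The checksum scheme can be sketched as follows. This is a minimal illustration using CRC32 (the actual checksum algorithm and block layout are XFLAIM internals not specified here): a checksum is computed when a block is written and verified when it is read back.

```python
import struct
import zlib

def write_block(payload: bytes) -> bytes:
    # Store a CRC32 of the payload in a 4-byte header before the data.
    return struct.pack("<I", zlib.crc32(payload)) + payload

def read_block(raw: bytes) -> bytes:
    (stored,) = struct.unpack("<I", raw[:4])
    payload = raw[4:]
    if zlib.crc32(payload) != stored:
        raise IOError("block checksum mismatch: corrupt block")
    return payload

raw = write_block(b"hello block")
assert read_block(raw) == b"hello block"
corrupt = raw[:-1] + bytes([raw[-1] ^ 0xFF])   # flip bits in the payload
try:
    read_block(corrupt)
except IOError as e:
    print(e)   # block checksum mismatch: corrupt block
```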


Concurrency

*One writer, multiple readers.
*Readers don't block writers (readers never lock items in the database).
*Writers don't block readers.
*Read consistency for readers (readers get a stable consistent snapshot of the database). Rollback log is used to provide block multi-versioning.
*Uncommitted data is not visible to other transactions.
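The block multi-versioning that gives readers a stable snapshot can be sketched like this (a conceptual model, not XFLAIM's rollback-log implementation): prior block versions are retained, tagged with the transaction that committed them, and each reader sees the newest version committed at or before its snapshot point.

```python
# Illustrative block multi-versioning: the single writer keeps prior
# block versions, and each reader sees the newest version committed at
# or before the transaction number of its snapshot.

class VersionedBlock:
    def __init__(self):
        self.versions = []                 # list of (commit_txn, data)

    def write(self, txn, data):            # writer never blocks readers
        self.versions.append((txn, data))

    def read(self, snapshot_txn):
        # Newest version visible to this snapshot.
        visible = [d for t, d in self.versions if t <= snapshot_txn]
        return visible[-1] if visible else None

blk = VersionedBlock()
blk.write(1, "v1")
reader_snapshot = 1                        # read transaction starts here
blk.write(2, "v2")                         # concurrent update commits
print(blk.read(reader_snapshot))           # v1  (stable snapshot)
print(blk.read(2))                         # v2  (new readers see the update)
```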

DOM Nodes and Documents

*Documents are stored as DOM nodes.
*All element, attribute, and data nodes have a name id tag.
*Each DOM node can contain up to 4 gigabytes of data.
*Data types include text (Unicode and UTF-8), numeric, and binary.


Collections

*Documents are stored in collections.
*There may be multiple collections per database.
*Collections allow data to be logically partitioned.


Indexing

*Compound indexes. Key components may be any XFLAIM data type. Contextual relationships between nodes in a document (sibling, child, parent, etc.) may be specified for each component.
*Optional and/or required nodes in compound indexes (a key is not generated if required nodes are missing).
*Presence indexes (indexes the existence of a node rather than its content).
*Case insensitive and case sensitive collation.
*White space compression and other special key-generation rules.
*Ascending/Descending sort order. Ascending or descending may be specified separately for each key component in a compound index.
*Cross-document type indexes.
*Substring indexing.
*Each-word indexing.
*Approximate indexing (Metaphone).
*Support for many international languages and collating sequences, including Arabic, Hebrew, and Asian (Japanese, Korean, Chinese).
*Each index in a database can have its own international language.
*Keys up to 1024 bytes long, key truncation supported.
*Multiple indexes per collection.
*APIs for reading indexes directly.
*Indexes are dynamically updated when nodes are added, modified, or deleted.
*Indexes can be built in the background.
*Indexes can be taken off-line (suspend) and later resumed.
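Compound-key generation with optional/required components and case-insensitive collation can be sketched as follows. All names here are illustrative; XFLAIM's real collation rules (international languages, white space compression, key truncation) are far richer.

```python
# Illustrative compound-index key generation: required components must
# be present or no key is produced; collation here is simple case
# folding, standing in for case-insensitive collation.

def make_key(doc, components):
    parts = []
    for name, required in components:
        value = doc.get(name)
        if value is None:
            if required:
                return None          # required node missing: no key generated
            value = ""               # optional node: empty key component
        parts.append(value.casefold())   # case-insensitive collation
    return tuple(parts)

components = [("surname", True), ("given", False)]
print(make_key({"surname": "Smith", "given": "Ann"}, components))  # ('smith', 'ann')
print(make_key({"surname": "NG"}, components))                     # ('ng', '')
print(make_key({"given": "Ann"}, components))                      # None
```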

Dynamic Dictionary

*Add, modify, and drop index, collection, element, attribute, prefix, and encryption definitions.

Query Capabilities

*XPath is used as the query language.
*Rich set of query expression operators:
**Comparison operators (equal, not equal, less than, less than or equal, greater than, greater than or equal). Text comparison operators include wild card matching, allowing for match begin, match end, and substring (contains) searching.
**Arithmetic operators (unary minus, multiply, divide, mod, plus, minus).
**Logical operators (not, and, or).
**Parentheses (used to alter normal operator precedence).
*Advanced query optimization. XFLAIM automatically selects indexes, etc., based on least-cost estimation.
*Index specification. The application may explicitly specify an index to use.
*Powerful navigational calls for retrieving and browsing through query results (first, last, next, previous, and current node/document).
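To give a flavor of path-based querying over stored XML documents, here is a small example using Python's standard library (which supports only a limited XPath subset; XFLAIM has its own query engine with the operators listed above).

```python
import xml.etree.ElementTree as ET

# ElementTree's limited XPath subset is enough to show path queries
# with a predicate, similar in spirit to querying stored documents.
doc = ET.fromstring("""
<library>
  <book id="1"><title>FLAIM Internals</title><price>30</price></book>
  <book id="2"><title>XML Storage</title><price>45</price></book>
</library>
""")

titles = [b.findtext("title") for b in doc.findall(".//book")]
print(titles)                                   # ['FLAIM Internals', 'XML Storage']
print(doc.find(".//book[@id='2']/title").text)  # XML Storage
```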

Read and Update Operations

*Ability to retrieve nodes directly from collections by 64-bit node ID. APIs for navigation within a document (next/previous sibling, first/last child, parent, etc.).
*Index keys can be read directly.
*Advanced querying capabilities are supported via XPath.
*Add, modify, and delete operations are supported.


Caching

*A block cache is shared by all threads in a process. XFLAIM supports up to 4 GB of cache on 32-bit platforms and much more on 64-bit platforms.
*Document node cache.
*Cache poisoning prevention.
*Memory fragmentation prevention via smart management of cache and node allocations.
*Cache statistics can be queried, and include hits, faults, hit looks, and fault looks.
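A block cache that tracks the hit and fault statistics mentioned above can be sketched as a small LRU cache (an illustration only; XFLAIM's cache management and poisoning prevention are more sophisticated).

```python
from collections import OrderedDict

class BlockCache:
    """Illustrative shared block cache with hit/fault statistics."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()        # block id -> data, in LRU order
        self.hits = 0
        self.faults = 0

    def get(self, block_id, read_from_disk):
        if block_id in self.blocks:
            self.hits += 1
            self.blocks.move_to_end(block_id)      # mark most recently used
            return self.blocks[block_id]
        self.faults += 1                           # miss: fetch from disk
        data = read_from_disk(block_id)
        self.blocks[block_id] = data
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)        # evict least recently used
        return data

cache = BlockCache(capacity=2)
disk = lambda bid: "data-%d" % bid
cache.get(1, disk); cache.get(2, disk)    # two faults
cache.get(1, disk)                        # hit
cache.get(3, disk)                        # fault, evicts block 2
print(cache.hits, cache.faults)           # 1 3
```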

Optimized Disk Reading / Writing

*Direct I/O allows file system cache to be bypassed.
*Asynchronous writes.
*Cache blocks are written in ascending order to optimize disk head movements. Adjacent blocks are coalesced into larger write buffers for improved performance.
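The write ordering and coalescing above reduce to a simple algorithm, sketched here: sort the dirty block numbers ascending and merge runs of adjacent blocks so each run becomes one larger write.

```python
# Illustrative write coalescing: sort dirty block numbers ascending and
# merge runs of adjacent blocks into single larger writes.

def coalesce(dirty_blocks):
    runs = []
    for blk in sorted(dirty_blocks):
        if runs and blk == runs[-1][1] + 1:
            runs[-1] = (runs[-1][0], blk)    # extend the current run
        else:
            runs.append((blk, blk))          # start a new run
    return runs                              # each run -> one write call

print(coalesce({7, 3, 4, 5, 12, 13, 1}))   # [(1, 1), (3, 5), (7, 7), (12, 13)]
```

Writing runs in ascending order keeps disk head movement monotonic, and each merged run needs only one system call.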

Database Validation and Repair

*Routines for checking the physical and logical structure of database are provided. Links between blocks, the B-Tree structure, block checksums, node/document structure, index keys/reference sets and data in nodes are verified. Damaged indexes can be fixed on-line if problems are encountered during the check.
*Routines for repairing a database allow data recovery from severely damaged databases.
*Progress and status callbacks are possible with all check and repair routines. This allows the application to display progress and cancel the operation if desired. Corruptions are also reported via the callbacks so that an application can create a detailed log of corruptions found if desired.


Backup and Restore

*Hot backup. Backups can be performed without taking the database offline and without stopping updates.
*Continuous backup. Roll-forward logs can be managed in a way that allows them to serve as a continuous backup of the database. No committed transaction will be lost.
*Incremental backups. This minimizes what must be backed up - only blocks changed since last backup.
*Backup and restore use flexible streaming interfaces to allow the application to efficiently select and manage the backup media. For example, an application could even choose to send backup data across a network to be stored on a remote device. XFLAIM uses double buffering so that an output device can be kept busy while XFLAIM is fetching the next set of blocks to backup. This helps prevent streaming devices (such as tape drives) from stalling.
*All blocks in backup include a checksum to ensure that data is reliable when restored.
*Simple block compression used to minimize size of backup.
*Use of serial numbers in roll-forward log files and backups to ensure “identifiability” when restoring. Database also has a serial number.
*Restore from full backup, multiple incremental backups, and/or roll-forward logs - all in one call.
*Status callbacks are supported during backup and restore operations, allowing the application to report progress and/or abort the backup or restore operation.
*Partial restore of a database is supported. An application has the option of stopping a restore operation after either: 1) a full backup or incremental has been restored, or 2) after a particular transaction in the roll-forward log has been re-played.
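The incremental-backup idea combines naturally with the streaming interface and per-block checksums described above. A minimal sketch (with an assumed block table keyed by version number, not XFLAIM's real layout):

```python
import zlib

# Illustrative incremental backup: stream only blocks whose version is
# newer than the last backup, each record carrying its own checksum.

def incremental_backup(blocks, since_txn, write_stream):
    count = 0
    for block_id, (version, data) in sorted(blocks.items()):
        if version > since_txn:                    # changed since last backup
            record = (block_id, zlib.crc32(data), data)
            write_stream(record)                   # application-supplied sink
            count += 1
    return count

blocks = {0: (5, b"root"), 1: (12, b"leaf-a"), 2: (3, b"leaf-b"), 3: (11, b"leaf-c")}
out = []
n = incremental_backup(blocks, since_txn=10, write_stream=out.append)
print(n)                        # 2  (only blocks 1 and 3 changed)
```

Passing a callable as the sink mirrors the streaming design: the application decides whether records go to a file, a tape drive, or across a network.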

Database Monitoring / Statistics Collection

* APIs to collect detailed statistics on disk I/O activity and transaction activity.
* APIs to monitor cache utilization, including bytes used, number of blocks and nodes cached, cache hits, faults, etc.
* APIs to collect detailed information about queries. This includes the ability to see which indexes are used, how many keys are fetched, how many nodes are fetched, how many nodes failed the criteria, etc. This allows analyzing of query efficiency and troubleshooting of query performance problems.

Database Size

*Up to 8 terabytes of data per database.
*Up to 2^64 - 1 document IDs per collection (64-bit document IDs).
*Database grows as needed. There is no need to pre-allocate disk space.
*Support is provided for reclaiming unused database blocks and log areas and returning them to the host file system.
*Space may be reclaimed without taking the database off-line.
*The database block size can be set on database creation to 4, 8, 16 or 32 KB.
*Sophisticated block splitting and block combining to maximize block utilization.
*Roughly 80% utilization in index blocks.
*Roughly 80-95% utilization in data blocks.

Cross Platform

*Database files are binary portable across ALL supported platforms. There is no need for explicit conversions when moving a database from one platform to another. The platform where the database is created determines whether a little-endian or big-endian storage format is used for database metadata. If a database is moved to a platform with a different endian format, conversions happen automatically as needed. Thus, a database originally created on a little-endian platform and subsequently moved to a big-endian platform gradually migrates to the big-endian format over time.
*Platforms: NetWare, Windows (NT, 2000, 64-bit XP), Unix (Solaris, AIX, HP-UX), Linux, Mac OS X (both PowerPC and Intel). 64-bit support is provided for Windows, Linux, and Unix platforms where available.
*Source code is developed in C++ programming language (one source for all platforms), allowing XFLAIM to easily build libraries for other platforms – a new platform is generally an hour or two of work.
*Java APIs are also available for Java developers. JNI is used to interface to the C++ methods.
*Operating System services are abstracted into common interfaces or C++ classes for upper layers of code so they don’t have to worry about operating system differences. Code is maintained in a handful of files. Abstractions exist for disk I/O, memory management, semaphores and mutexes, and so forth.
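The automatic endian conversion can be sketched with an assumed one-byte byte-order flag in the file header (a hypothetical layout for illustration, not XFLAIM's actual format): readers decode metadata according to the flag, regardless of which platform wrote the file.

```python
import struct

# Illustrative endian handling: a one-byte flag records the creating
# platform's byte order; readers decode metadata accordingly.

def write_header(page_count, little_endian=True):
    flag = b"\x01" if little_endian else b"\x00"
    fmt = "<I" if little_endian else ">I"
    return flag + struct.pack(fmt, page_count)

def read_header(raw):
    fmt = "<I" if raw[0] == 1 else ">I"
    (page_count,) = struct.unpack(fmt, raw[1:5])
    return page_count

le = write_header(4096, little_endian=True)
be = write_header(4096, little_endian=False)
print(read_header(le), read_header(be))   # 4096 4096
```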


Utilities

*Database checking utility (checkdb).
*Database rebuild utility (rebuild).
*Database browser and editor utility (xshell, DOMEdit). Provides support for retrieving, adding, modifying, and deleting documents and individual nodes.
*Low-level physical structure viewer/editor (view).
*All utilities build and work on all platforms and have the same look and feel.

=External links=
* [http://developer.novell.com/wiki/index.php/FLAIM XFLAIM project home]
* [http://developer.novell.com/wiki/index.php/XFLAIM_Download XFLAIM project downloads]

Wikimedia Foundation. 2010.
