

ExtremeStor B BeeGFS Appliance with Expert Design, Implementation and Support

World’s Fastest Growing Parallel File System

High Performance Parallel File System

BeeGFS transparently spreads data across multiple servers, linearly scaling performance and capacity, from small clusters to supercomputer-class systems with thousands of nodes
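The parallel distribution of data can be pictured with a simplified striping model. This is a conceptual sketch only: the round-robin assignment and 512 KiB chunk size are illustrative assumptions, not BeeGFS's actual on-disk layout.

```python
# Conceptual sketch of file striping: a file is split into fixed-size
# chunks that are assigned round-robin across storage targets, so reads
# and writes proceed in parallel on all targets at once.
CHUNK_SIZE = 512 * 1024  # 512 KiB; chunk size is an illustrative assumption

def stripe(file_size: int, num_targets: int) -> dict[int, list[int]]:
    """Map each chunk index of a file to a storage target (round-robin)."""
    num_chunks = (file_size + CHUNK_SIZE - 1) // CHUNK_SIZE
    layout: dict[int, list[int]] = {t: [] for t in range(num_targets)}
    for chunk in range(num_chunks):
        layout[chunk % num_targets].append(chunk)
    return layout

# A 4 MiB file over 4 targets: each target holds 2 of the 8 chunks, so
# aggregate throughput scales with the number of targets.
layout = stripe(4 * 1024 * 1024, 4)
print({t: len(chunks) for t, chunks in layout.items()})
```

Because every added storage target receives a proportional share of each large file, both capacity and streaming bandwidth grow roughly linearly with the number of servers.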

Easy to Manage

Graphical administration and monitoring, together with a concise command line interface, provide easy management and avoid the notorious complexity of legacy open source parallel file systems

No Charge Open Source Software

Basic BeeGFS is free of charge open source software; combined with vendor agnostic, industry standard hardware, it makes ExtremeStor B economically compelling

Additional Enterprise Data Services

High availability, quotas, ACLs, and, with ZFS as the underlying file system, RAID-Z, data compression, snapshots, and powerful management tools are available under an additional support contract

Network and Protocol Support

  • Standard TCP/IP and RDMA over Converged Ethernet (RoCE), InfiniBand and OmniPath
  • POSIX Client, NFS and SMB export

Scalable Distributed Metadata

BeeGFS uses multiple dedicated metadata servers to manage global metadata in order to deliver best in class metadata performance and linear scalability
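One simple way to picture per-directory metadata distribution is hashing each directory onto one of the metadata servers. This is a conceptual model only; BeeGFS's real assignment algorithm differs, but the effect is the same: directories spread across metadata servers, so metadata load scales out with the namespace.

```python
import hashlib

def metadata_server(dir_path: str, num_md_servers: int) -> int:
    """Assign a directory to a metadata server by hashing its path.
    Illustrative sketch, not BeeGFS's actual assignment algorithm."""
    digest = hashlib.sha1(dir_path.encode()).digest()
    return int.from_bytes(digest[:4], "big") % num_md_servers

# Different directories land on different servers, so concurrent
# metadata operations (create, stat, unlink) run in parallel.
dirs = ["/home/alice", "/home/bob", "/scratch/job42", "/data/genomes"]
print({d: metadata_server(d, 2) for d in dirs})
```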

ExtremeStor B Delivers Extreme Performance and Scale with Easy Management

ExtremeStor B delivers maximum performance and scalability on fully integrated, top quality, industry standard hardware, providing reliability and data protection across a wide range of technical applications. ExtremeStor B with BeeGFS distributes files in parallel across multiple storage servers with dedicated, distributed metadata processing.

ExtremeStor B delivers up to 8 GB/s client throughput with a single process streaming on a 100Gb network, with only a few streams capable of fully saturating the network. ExtremeStor B delivers best in class metadata performance with linear scalability by dynamically partitioning the metadata namespace and distributing metadata operations across metadata nodes per directory and subdirectory.

BeeGFS at UC Santa Barbara

A BeeGFS deployment at UCSB delivered over 13 GB/s with RAID 6 data protection.

UCSB Configuration

  • To meet performance and capacity requirements, Applied Data Systems deftly architected the appropriate drive capacity and number of Object Servers
  • 4x Object Storage Servers with 36x6TB NL SAS disks each
  • 2x Metadata servers with Buddy Mirroring for high availability
  • Optional features such as BeeOND deliver the performance of all-flash arrays as compute jobs require
  • Affordable object server building blocks can be added in order to scale both performance and capacity

BeeGFS Powers “Pod”, the Newest Computing Cluster at UC Santa Barbara

Under the Hood, What Makes BeeGFS so Fast and Easy to Manage

In contrast to other parallel file systems, BeeGFS uses all available RAM on its storage servers to quickly write bursts of data into the server RAM cache and to quickly read it back. BeeGFS also serves data directly from the cache if it has recently been requested by another client, and it aggregates small I/O requests into larger blocks before writing them to disk. A single large file is distributed across multiple storage targets for high throughput.
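The small-write aggregation described above can be sketched as a simple coalescing buffer: incoming small requests accumulate in memory and are flushed to disk as one larger sequential block. The 1 MiB threshold and 4 KiB request size below are illustrative assumptions, not BeeGFS internals.

```python
# Sketch of small-write aggregation: buffer small requests and flush
# them as one larger sequential block once a threshold is reached.
class WriteAggregator:
    def __init__(self, flush_threshold: int = 1024 * 1024):
        self.flush_threshold = flush_threshold
        self.buffer = bytearray()
        self.flushes: list[int] = []  # sizes of blocks written to disk

    def write(self, data: bytes) -> None:
        self.buffer += data
        if len(self.buffer) >= self.flush_threshold:
            self.flush()

    def flush(self) -> None:
        if self.buffer:
            self.flushes.append(len(self.buffer))  # one large disk write
            self.buffer.clear()

agg = WriteAggregator()
for _ in range(300):          # 300 small 4 KiB client writes...
    agg.write(b"\0" * 4096)
agg.flush()
print(agg.flushes)            # ...reach disk as just two large writes
```

Turning hundreds of small random writes into a few large sequential ones is what lets spinning-disk backends sustain high throughput under small-I/O workloads.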

In addition to performance, BeeGFS was designed for easy deployment and administration. The graphical administration and monitoring system facilitates simple and intuitive management, including cluster installation, load statistics, storage service management and health monitoring. ExtremeStor B is delivered as an integrated hardware and software appliance from Applied Data Systems.

Dynamic Network Fail-Over

BeeGFS supports multiple networks and dynamic fail-over in case one of the network connections is down

BeeOND (BeeGFS On Demand)

BeeOND allows on the fly creation of temporary parallel file system instances on the internal SSDs of compute nodes on a per-job basis for burst-buffering

Built-in High Availability By Replication

BeeGFS includes a replication HA mechanism called Buddy Mirroring that is fully integrated and does not rely on special hardware

Storage Pools Combine the Performance of Flash with the Economics of Disk

BeeGFS Storage Pools make different types of storage devices available within the same namespace. Economic high capacity disks are accessed in parallel for high throughput and capacity, combined with a high performance flash tier.
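A placement policy across pools can be sketched as follows. The pool names, size cutoff, and "hot" flag are hypothetical illustrations; in practice BeeGFS exposes pools through per-directory and per-file settings rather than a size-based rule like this one.

```python
# Sketch of a storage-pool placement policy: small or latency-sensitive
# files go to the flash pool, large streaming files to the disk pool.
# Pool names and the 64 MiB cutoff are illustrative assumptions.
FLASH_POOL, DISK_POOL = "flash", "disk"

def choose_pool(file_size: int, hot: bool,
                flash_cutoff: int = 64 * 1024 * 1024) -> str:
    if hot or file_size < flash_cutoff:
        return FLASH_POOL
    return DISK_POOL

print(choose_pool(4 * 1024, hot=False))        # small file: flash pool
print(choose_pool(10 * 1024**3, hot=False))    # large archive: disk pool
print(choose_pool(10 * 1024**3, hot=True))     # hot dataset: flash pool
```

Both pools live in the same namespace, so applications see a single file system while the administrator steers data onto the most economical tier.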

White Glove Installation and Support

ExtremeStor B is expertly installed and supported by Applied Data Systems, which serves as the single point of contact for all support issues

Flexible Building Blocks

ExtremeStor B is delivered as a modular, repeatable, and highly supportable solution consisting of best of breed industry standard components

Engineered for High Availability, Data Protection and Fault Tolerance

ExtremeStor B BeeGFS storage servers come with underlying RAID (either RAID-6 or RAID-Z2) to transparently handle disk errors. BeeGFS includes an HA mechanism that is fully integrated and which does not rely on special hardware. This approach is called Buddy Mirroring, based on the concept of pairs of servers (the so-called buddies) that internally replicate each other and that help each other in case one of them has a problem.

The built-in BeeGFS Buddy Mirroring approach can tolerate the loss of complete servers including all data on their RAID volumes – on commodity servers and shared-nothing hardware. Buddy Mirroring can also be used to put buddies in different failure domains, different racks or different server rooms.
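The buddy-group behavior described above can be sketched as a pair of targets that replicate every write, with reads failing over when the primary is lost. This is a conceptual model of the replication and fail-over logic, with hypothetical server names; it is not BeeGFS's implementation.

```python
# Conceptual sketch of Buddy Mirroring: a buddy group pairs a primary
# and a secondary target; writes are replicated to both, and if the
# primary fails entirely, reads are served from the secondary.
class BuddyGroup:
    def __init__(self, primary: str, secondary: str):
        self.primary, self.secondary = primary, secondary
        self.data: dict[str, dict[str, bytes]] = {primary: {}, secondary: {}}
        self.failed: set[str] = set()

    def write(self, key: str, value: bytes) -> None:
        for target in (self.primary, self.secondary):
            if target not in self.failed:
                self.data[target][key] = value  # replicate to both buddies

    def read(self, key: str) -> bytes:
        # Serve from the primary; fail over to the secondary if it is down.
        target = self.secondary if self.primary in self.failed else self.primary
        return self.data[target][key]

group = BuddyGroup("storage01", "storage02")
group.write("chunk-0", b"payload")
group.failed.add("storage01")   # lose the entire primary server
print(group.read("chunk-0"))    # data survives on the secondary
```

Placing the two buddies in different racks or server rooms extends the same mechanism from disk-level protection (RAID) to whole-failure-domain protection.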

BeeGFS Buddy Mirror Groups

We’d love to work on your project. We do extensive analysis of your existing and future needs, deliver a comprehensive solution architecture on a validated hardware and software build that ships fully integrated. Contact us for a custom BeeGFS consultation today!

