Red Hat Storage 2.0 Update 4 and Update 5 Release Notes

Release Notes for Red Hat Storage 2.0 - Update 4 and Update 5
Edition 1

Divya Muntimadugu
Red Hat Engineering Content Services
divya@redhat.com

Anjana Suparna Sriram
Red Hat Engineering Content Services
psriniva@redhat.com

Pavithra Srinivasan
Red Hat Engineering Content Services
psriniva@redhat.com

Legal Notice

Copyright © 2013 Red Hat Inc.

This document is licensed by Red Hat under the Creative Commons Attribution-ShareAlike 3.0 Unported License. If you distribute this document, or a modified version of it, you must provide attribution to Red Hat, Inc. and provide a link to the original. If the document is modified, all Red Hat trademarks must be
removed.

Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section
4d of CC-BY-SA to the fullest extent permitted by applicable law.

Red Hat, Red Hat Enterprise Linux, the Shadowman logo, JBoss, MetaMatrix, Fedora, the Infinity Logo,
and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.

Linux ® is the registered trademark of Linus Torvalds in the United States and other countries.

Java ® is a registered trademark of Oracle and/or its affiliates.

XFS ® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States
and/or other countries.

MySQL ® is a registered trademark of MySQL AB in the United States, the European Union and other
countries.

Node.js ® is an official trademark of Joyent. Red Hat Software Collections is not formally related to or
endorsed by the official Joyent Node.js open source or commercial project.

The OpenStack ® Word Mark and OpenStack Logo are either registered trademarks/service marks or
trademarks/service marks of the OpenStack Foundation, in the United States and other countries and
are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or
sponsored by the OpenStack Foundation, or the OpenStack community.

All other trademarks are the property of their respective owners.

Abstract

The Release Notes provide high-level coverage of the improvements and additions that have been
implemented in Red Hat Storage 2.0 Update 4 and Update 5.


Table of Contents

Preface

Chapter 1. Introducing Red Hat Storage

Chapter 2. What is New in this Release?

Chapter 3. Known Issues

Chapter 4. Package Updates

4.1. RHSA-2013:0691
4.2. RHBA-2013:1064
4.3. RHBA-2013:1092

Chapter 5. Technology Previews

5.1. Red Hat Storage Console
5.2. Striped Volumes
5.3. Distributed-Striped Volumes
5.4. Distributed-Striped-Replicated Volumes
5.5. Striped-Replicated Volumes
5.6. Replicated Volumes with Replica Count greater than 2
5.7. Support for RDMA over Infiniband
5.8. Stopping Remove Brick Operation
5.9. Directory Quota
5.10. Hadoop Compatible Storage
5.11. Granular Locking for Large Files
5.12. Read-only Volume

Revision History

Preface

Red Hat Storage minor releases are an aggregation of individual enhancement, security, and bug fix errata. The Red Hat Storage 2.0 - Update 4 and Update 5 Release Notes document the changes (that is, bugs fixed, enhancements added, and known issues found) in these minor releases. This document also contains a complete list of all currently available Technology Preview features.

Should you require information regarding the Red Hat Storage life cycle, refer to https://access.redhat.com/support/policy/updates/rhs/.

Chapter 1. Introducing Red Hat Storage

Red Hat Storage is a software only, scale-out storage solution that provides flexible and agile
unstructured data storage for the enterprise. Red Hat Storage provides new opportunities to unify data
storage and infrastructure, increase performance, and improve availability and manageability in order to
meet a broader set of an organization’s storage challenges and needs.

GlusterFS, a key building block of Red Hat Storage, is based on a stackable user space design and can deliver exceptional performance for diverse workloads. GlusterFS aggregates various storage servers over network interconnects into one large parallel network file system. The POSIX-compatible GlusterFS servers, which use the XFS file system to store data on disk, can be accessed using industry-standard access protocols including Network File System (NFS) and Common Internet File System (CIFS).

Red Hat Storage can be deployed in the private cloud or data center using Red Hat Storage Server for On-Premise. Red Hat Storage can be installed on commodity servers and storage hardware, resulting in a powerful, massively scalable, and highly available NAS environment. Additionally, Red Hat Storage can be deployed in the public cloud using Red Hat Storage Server for Public Cloud, for example, within the Amazon Web Services (AWS) cloud. It delivers all the features and functionality possible in a private cloud or data center to the public cloud by providing massively scalable and highly available NAS in the cloud.

Red Hat Storage Server for On-Premise

Red Hat Storage Server for On-Premise enables enterprises to treat physical storage as a virtualized,
scalable, and centrally managed pool of storage by using commodity server and storage hardware.

Red Hat Storage Server for Public Cloud

Red Hat Storage Server for Public Cloud packages GlusterFS as an Amazon Machine Image (AMI) for
deploying scalable NAS in the AWS public cloud. This powerful storage server provides a highly
available, scalable, virtualized, and centrally managed pool of storage for Amazon users.

Chapter 2. What is New in this Release?

This chapter describes the key features added to Red Hat Storage 2.0.

Red Hat Storage for Virtualization
Red Hat Storage provides a POSIX-compatible file system that allows you to store virtual machine images on a Red Hat Storage Server as scalable, software-based storage instead of a storage area network (SAN) array.
Red Hat provides a way to create and optimize Red Hat Storage volumes using the Red Hat Enterprise Virtualization Manager. The Red Hat Enterprise Virtualization platform and the Red Hat Storage platform comprise various components that work seamlessly together, enabling system administrators to install, configure, and manage a virtualized environment using Red Hat Storage as the virtual machine image store.
For more information, refer to the Quick Start Guide.
Server-Side Quorum
You can now configure server-side quorum to guard against network partitions in the trusted storage pool. The quorum configuration of a trusted storage pool determines the number of server failures that the trusted storage pool can sustain; if an additional failure occurs, the trusted storage pool becomes unavailable. To prevent data loss, it is essential that the trusted storage pool stops running if too many server failures occur or if there is a problem with communication between the trusted storage pool nodes.
For more information, refer to section Configuring Server-Side Quorum in the Administration Guide.
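As an illustrative sketch only (not the documented procedure), server-side quorum is typically enabled with the volume set interface; the option names cluster.server-quorum-type and cluster.server-quorum-ratio are assumed from the GlusterFS volume options, and VOLNAME is a placeholder.

# Enable server-side quorum enforcement for a volume (VOLNAME is a placeholder).
gluster volume set VOLNAME cluster.server-quorum-type server

# Set the quorum ratio for the entire trusted storage pool.
gluster volume set all cluster.server-quorum-ratio 51%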
Root Squashing
You can now prevent root users from having root privileges and assign them the privileges of
nfsnobody using the volume tuning option. This effectively squashes the power of the root user to
the user nfsnobody, preventing unauthorized modification of files on the Red Hat Storage Servers.
For more information, refer to section Tuning Volume Options in the Administration Guide.
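For illustration, a minimal sketch of enabling this behavior with the volume set interface, using the server.root-squash option named elsewhere in these notes; VOLNAME is a placeholder.

# Squash remote root users to nfsnobody on the volume (VOLNAME is a placeholder).
gluster volume set VOLNAME server.root-squash on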
Configurable FUSE Queue Length
You can now configure the FUSE queue length, which determines how many requests can be queued before FUSE stops accepting new requests.
For more information, refer to section Native Client in the Administration Guide.
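A minimal sketch of setting the queue length at mount time, assuming the background-qlen mount option described in the Package Updates chapter; the server name, volume name, and mount point are placeholders.

# Mount the volume with a FUSE background queue length of 64 requests.
mount -t glusterfs -o background-qlen=64 server1:/VOLNAME /mnt/glusterfs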
Support for Disk Encryption
Red Hat Storage provides the ability to create bricks on encrypted devices so that access to data is restricted. You can create bricks on encrypted disks and use them to create Red Hat Storage volumes.
For more information, refer to section Encrypted Disk in the Administration Guide.
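One possible sketch of preparing an encrypted brick, assuming LUKS via cryptsetup; the device name, mapping name, and brick path are placeholders, and the exact procedure in the Encrypted Disk section of the Administration Guide may differ.

# Format the device with LUKS and open it as a mapped device (placeholders throughout).
cryptsetup luksFormat /dev/sdb
cryptsetup luksOpen /dev/sdb rhs_brick1

# Create an XFS file system on the mapped device and mount it as a brick directory.
mkfs.xfs -i size=512 /dev/mapper/rhs_brick1
mkdir -p /rhs/brick1
mount /dev/mapper/rhs_brick1 /rhs/brick1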

Chapter 3. Known Issues

This chapter provides a list of known issues at the time of release.

Issues related to Red Hat Enterprise Virtualization and Red Hat Storage Integration

A split-brain is observed on the virtual machine image files if the source brick for a self-heal operation goes offline before the self-heal is completed.
The scatter-gather I/O patch significantly improves the performance of the virtual machines
hosted on the fuse-enabled Red Hat Storage Servers. This patch has been integrated into the
Red Hat Enterprise Linux 6.3.z kernel-2.6.32-279.22.1.el6. Ensure that this kernel patch is
updated on your system.
sosreport generation throws errors.
Work Around: Bootstrap the Red Hat Storage server nodes using Red Hat Enterprise Virtualization Manager or manually create /etc/vdsm/vdsm.conf on all Red Hat Storage server nodes. Ensure that you delete /etc/vdsm/vdsm.conf if the nodes are bootstrapped later.
If Red Hat Storage server nodes and the Red Hat Enterprise Virtualization Hypervisors are
present in the same data center, the servers of both types are listed for selection during the
creation of a virtual machine or adding a storage domain. Red Hat recommends that you create a
separate data center for Red Hat Storage server nodes.
BZ# 867236
Deleting a virtual machine using Red Hat Enterprise Virtualization Manager removes the virtual machine, but its image is not deleted from the actual storage. This consumes unnecessary storage.
Work Around: Delete the virtual machine image manually using the command line interface. The virtual image file is deleted and the free space becomes available.
BZ# 922361
A virtual machine moves into a paused state due to an unknown storage I/O error or storage read/write permission errors while self-heal and rebalance are in progress.
Work Around: Move the virtual machine to maintenance mode and then resume it.
BZ# 918032
In this release, the direct-io-mode=enable mount option does not work on the hypervisor.
BZ# 922744
In this release, the backupvolfile-server mount option does not work on the hypervisor.
BZ# 920791 and BZ# 920530
In a plain distributed hash table (DHT) volume, there is no assurance of data availability, which leads to unavailability of virtual machines and may result in disruption of the cluster.
It is recommended that you use distributed-replicate volumes on the hypervisors if high availability is a requirement.
BZ# 921399
Virtual machines cannot be created on a POSIX-compliant file system data center with Red Hat Enterprise Virtualization Manager when the quota mode is Audit or Enforced.
BZ# 979901
Virtual machines may experience very slow performance when a rebalance operation is initiated on the storage volume. This scenario is observed when the load on the storage servers is extremely high. Hence, it is recommended to run the rebalance operation when the load is low.
BZ# 856121
When a volume starts, a .glusterfs directory is created in the back-end export directory. When a remove-brick command is performed, it only changes the volume configuration to remove the brick, and stale data remains in the back-end export directory.
Work Around: Run the rm -rf /export-dir command on the Red Hat Storage Server node to delete the stale data.
BZ# 866908
The gluster volume heal <volname> info command gives stale entries in its output in a few scenarios.
Work Around: Execute the command again after 10 minutes. The stale entries are removed from the internal data structures and the command no longer displays them.
BZ# 982636

When you clone a virtual machine from the snapshot of another virtual machine, the original virtual
machine does not boot and displays the following error message:

VM VM name is down. Exit message: unsupported configuration: non-primary
video device must be type of 'qxl'.

Work Around: After cloning a virtual machine from the snapshot, and while the original virtual
machine remains shut down, change the Console Protocol to Spice, and click OK. Then, start the
original virtual machine.
You can also change the Console Protocol back to VNC, save, and then, start the original virtual
machine.

Issues related to Red Hat Storage Console

BZ# 922572
Updating the JBoss application after Red Hat Storage Console is installed causes an HTTP 500 error while accessing the console through the web interface.
Work Around: Edit the standalone.xml file located in jbossas/standalone/configuration/ and remove the <user-name> tag from the security element under the data source element.
BZ# 905440
Due to a bug in JBoss modules (https://issues.jboss.org/browse/MODULES-105), Red Hat Storage Console may not work after the latest patches are applied.
Work Around: After every yum update, run the find /usr/share/jbossas/modules -name '*.jar.index' command, then restart the jbossas service to successfully log into the Red Hat Storage Console.
BZ# 916981
In this release, VDSM supports the functionality of cluster compatibility level 3.1. Hence, a Red Hat Storage 2.0 server can only be added to a cluster in a compatibility level 3.1 data center using Red Hat Enterprise Virtualization Manager.

Issues related to Rebalancing Volumes:

Rebalance does not happen if any bricks are down. While running rebalance, ensure that all the bricks are up and connected.
After rebalancing a volume, if you run the rm -rf command at the mount point to remove all contents of the current working directory recursively without prompting, you may get a Directory not Empty error message.
Rebalance operation fails to migrate data in the distributed-striped volume.
A rebalance operation on the slave of a geo-replicated volume can lead to data inconsistency on the slave until the files are updated on the master.
If one of the bricks in a replicated volume goes offline, the rebalance operation can fail.
Work Around: Restart the rebalance operation after the brick is back online.
After completion of the rebalance operation, there may be a mismatch of failure counts between the gluster volume rebalance status output and the rebalance log files.
file already exists errors in the brick log file while the rebalance operation is in progress can be ignored.

Issues related to Self-heal

On the NFS mount, a few files can go missing for a short duration during migration, until the self-heal daemon heals all the files.
Work Around: Run a full file system crawl after the migration; the files can then be viewed on the NFS mount.
If the source brick of a self-heal operation in a replicated volume goes off-line even after
successful completion of self-healing on virtual machine files, the virtual machine goes into a
pause state.
If entries are created in a directory while one of the bricks in a replicated volume is down, and the source brick for self-heal goes offline before the self-heal operation is complete on that directory, then operations like readdir on that directory fail until the self-heal operation is completed.
BZ# 882797
When a distribute volume is changed to a distribute-replicate volume, files are not proactively self-healed.
Work Around: Run the following commands to replicate all files:

1. Run the gluster volume heal <volume_name> full command on one of the storage nodes.

2. Run the find | xargs stat command from the mount point.

Issues related to replace-brick operation

Even though the replace-brick status command displays Migration complete, all data may not have been migrated onto the destination brick. We strongly recommend that you be cautious when performing the replace-brick operation.
The replace-brick operation will not be successful if either the source or the destination brick goes down.
After the gluster volume replace-brick VOLNAME BRICK NEW-BRICK commit command is executed, file system operations on that particular volume that are in transit will fail.
After the replace-brick operation, the stat information is different on the NFS mount and the FUSE mount. This happens due to internal timestamp changes when the replace-brick operation is performed.

Issues related to Directory Quota:
Directory Quota is a technology preview feature.

Some writes appear to pass even though the quota limit is exceeded (the write returns success), because they could be cached in write-behind. However, disk space does not exceed the quota limit, since quota disallows the writes when they reach the back end. Hence, it is advised that applications check the return value of the close call.
Excessive logging while deleting files when Quota or gsync-indexing options are enabled.
If a user has changed into the directory on which the administrator is setting the limit, the command succeeds but the new limit value applies to all users except those who have changed into that particular directory. The old limit value applies to such users until they move out of that directory.
BZ# 848253
When a quota limit is set on a distributed volume and a brick goes down while I/O is in progress, the effective quota limit can be exceeded, as the distribute translator is not aware of the contribution from the offline brick.
It is recommended to use a replicated volume if quota usage is critical even when a node goes down. In a replicated volume, the quota limit is maintained even during a single brick failure.
A rename operation (that is, removing the old path and creating the new path) requires additional disk space equal to the file size. This is because quota subtracts the size of the old path only after the rename operation is performed, but checks whether the quota limit is exceeded on the parents of the new file before the rename operation.
The directory quota feature is not available with striped volumes.
When renaming a file, if the available free space is less than the size of the file, quota displays a "Disk limit exceeded" error without renaming the file.
When you set a quota limit on a directory that contains a ',' (comma) in its name, the quota implementation may not work.

Issues related to POSIX ACLs:

Even though POSIX ACLs are set on a file or directory, the + (plus) sign is not displayed in the file permissions. This is a result of a performance optimization and will be fixed in a future release.
When glusterFS is mounted with -o acl, directory read performance can be bad. Commands like
recursive directory listing can be slower than normal.
When POSIX ACLs are set and multiple NFS clients are used, there could be inconsistency in the
way ACLs are applied due to attribute caching in NFS. For a consistent view of POSIX ACLs in a
multiple client setup, use -o noac option on NFS mount to switch off attribute caching. This could
have a performance impact on operations involving attributes.

Issues related to NFS

After you restart the NFS server, the unlock within the grace-period may fail and previously held
locks may not be reclaimed.
fcntl locking (NLM) does not work over IPv6.
You cannot perform an NFS mount on a machine on which the glusterFS NFS process is already running unless you use the -o nolock NFS mount option. This is because glusterFS NFS has already registered the NLM port with the portmapper.
If the NFS client is behind a NAT (Network Address Translation) router or firewall, the locking behavior is unpredictable. The current implementation of NLM assumes that no NAT of the client's IP address is happening.
The nfs.mount-udp option is disabled by default. You must enable it if you want to use POSIX locks on Solaris when you NFS mount the Gluster volume.
If you enable the nfs.mount-udp option, then while mounting a subdirectory (exported using the nfs.export-dir option) on Linux, you must mount using the -o proto=tcp option. UDP is not supported for subdirectory mounts on the Gluster NFS server.
For NLM to function properly, you must ensure that all the servers and clients have resolvable
hostnames. That is, servers must be able to resolve client names and clients must be able to
resolve server hostnames.
BZ# 873763
The NFS server on a Red Hat Storage node can be killed by the Out of Memory killer when the amount of total physical memory available is low and the system is loaded with a lot of write operations.
Work Around: Execute the gluster volume set VOLNAME performance.nfs.write-behind on command to enable the write-behind translator on the Gluster NFS stack.
BZ# 973078
For a distributed or a distributed-replicated volume, in the case of an NFS mount, if a brick or sub-volume is down, any attempt to create, access, or modify a file which is either hashed, or hashed and cached, on the sub-volume that is down gives an I/O error instead of a Transport endpoint is not connected error.

Issues related to Unified File and Object Storage

GET and PUT commands fail on large files while using Unified File and Object Storage.
Work Around: Ensure that you add the node_timeout=60 variable to the proxy, container, and object server configuration files.
In this release, the Object Expiration feature of Swift 1.4.8 is not supported.

Issues related to geo-replication

Geo-replication uses rsync to sync files from master to slave, but rsync does not sync mknod and
pipe files.
BZ# 877826
The recent Red Hat Storage release includes a new socketdir option in the geo-replication module. As a result, a failure: not a valid option: socketdir warning message is displayed during yum update. This is due to the order in which packages are upgraded; the old geo-replication module, which is not yet upgraded, does not recognize the newly introduced socketdir option.
Work Around: Stop all the geo-replication sessions before starting the upgrade.
BZ# 849302
The geo-replication status goes to a faulty state temporarily after add-brick and remove-brick operations. Although the status is faulty, there is no interruption to data synchronization, and the status automatically changes to OK after 60 seconds.
BZ# 996307
Geo-replication does not support synchronization of hard links to the slave. This behavior persists in Red Hat Storage 2.0 and will be resolved in a future release.

Issues related to root squashing

BZ# 927161
fcntl locking fails for non-root users on a FUSE mount when the server.root-squash option is enabled.
Work Around: Execute the gluster volume set <VOLNAME> performance.quick-read off command to avoid this issue.
BZ# 927630
The server.root-squash option cannot be disabled dynamically.
Work Around: Stop and start the volume after disabling the server.root-squash option.

The yum update command may fail when it is executed for the first time on Red Hat Virtual Storage Appliance 3.2 with the following error: "GPG key retrieval failed: [Errno 14] Could not open/read file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-auxiliary".
Work Around: Run the yum update command again; it works fine on subsequent runs.
glusterfsd - the error return code is always 0 after daemonizing the process. Due to this, scripts that mount glusterFS or start a glusterFS process must not depend on its return value.
BZ# 986090

Currently, Red Hat Storage has issues with mixed usage of hostnames, IPs and FQDNs to refer to a
peer. If a peer has been probed using its hostname but IPs are used during add-brick, the operation
may fail. It is recommended to use the same address for all the operations, that is, during peer probe,
volume creation, and adding/removing bricks. It is preferable if the address is correctly resolvable to
a FQDN.
BZ# 950314
In the Red Hat Storage 2.0 Update X release, the layout for '/' (the root) of the volume is fixed automatically. But for all other directories, you must run the 'rebalance' operation so that the newly added bricks are used.


If files and directories have different GFIDs on different back-ends, glusterFS client may hang or
display errors.
Contact Red Hat Support for more information on this issue.
Due to enhancements in dynamic volume management (for scale-out feature), you may experience
excessive memory usage in this release.
The following is a known missing (minor) feature:

locks - mandatory locking is not supported.

If you start a geo-replication session on a volume that has a replace-brick operation in progress,
there is a possibility of data loss on the slave.
BZ# 865672
Changing a volume from one brick to multiple bricks (an add-brick operation) is not supported. Volume operations on the volume may fail due to the impact of the add-brick operation on the volume configuration.
It is recommended that the volume is started with at least two bricks to avoid this issue.
BZ# 877988
Entry operations on replicated bricks may have a few issues with the md-cache module enabled on the volume graph.
For example, when one brick is down while the other is up, an application performing a hard link call (link()) may experience an EEXIST error.
Work Around: Execute this command to avoid this issue:

gluster volume set VOLNAME stat-prefetch off

BZ# 839213
A volume deleted in the absence of one of the peers is not removed from the cluster's list of volumes. This is because the import logic of peers that rejoin the cluster is not capable of differentiating between volumes deleted and volumes added in the absence of the other (conflicting) peers.
Work Around: Manually detect this by analyzing the CLI command logs to get the cluster's view of the volumes that must have been present. If any volume is not listed, use the volume-sync command to reconcile the volumes in the cluster.
BZ# 860915
If one of the bricks in a distribute type of volume is down, and a file or directory name hashes to that particular brick, creating the file or directory fails with an invalid argument error.
It is recommended that you use replicated volumes where high availability and high reliability are critical.
BZ# 920459
The mount.glusterfs script does not handle the attribute_timeout and entry_timeout options. If these options are added, a warning message is displayed while mounting and the options are ignored.
Work Around: If these options are critical for the setup, use the glusterFS command line to mount volumes; the options will then be used by the glusterFS process.
BZ# 920002
The POSIX compliance tests fail in certain cases on RHEL 5.9 due to mismatched timestamps on the FUSE mount. These tests pass on all other RHEL 5.x and RHEL 6.x clients.
BZ# 920970
If the gluster volume heal info command hangs, subsequent commands fail for the next 10 minutes due to a cluster-wide lock timeout.
BZ# 921263
If glusterd is consuming a lot of memory, restart the glusterd service.


BZ# 916834
The quick-read translator returns stale file handles for certain patterns of file access. When running the 'dbench' application on the mount point, a 'dbench: read fails on handle 10030' message is displayed.
Work Around: Execute the gluster volume set VOLNAME quick-read off command to avoid this issue.
BZ# 979861
Although the glusterd service is alive, the gluster command reports glusterd as non-operational.
Work Around: There are two ways to solve this:
Edit /etc/glusterfs/glusterd.vol to contain this line:

option rpc-auth-allow-insecure on

Or reduce tcp_fin_timeout from the default 60 seconds to 1 second, as shown below. The tcp_fin_timeout variable tells the kernel how long to keep sockets in the FIN-WAIT-2 state if you were the one closing the socket.
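A minimal sketch of the second workaround, assuming the standard net.ipv4.tcp_fin_timeout sysctl:

# Reduce the FIN-WAIT-2 timeout from the default 60 seconds to 1 second.
sysctl -w net.ipv4.tcp_fin_timeout=1

# To persist the change across reboots, add the following line to /etc/sysctl.conf:
# net.ipv4.tcp_fin_timeout = 1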
BZ# 913364

An NFS server reboot does not reclaim the file LOCK held by a Red Hat Enterprise Linux 5.9 client.


Chapter 4. Package Updates

This chapter provides information about the updated Red Hat Storage 2.0 packages that fix multiple security issues and bugs, and add enhancements, in the Update 4 and Update 5 releases.

4.1. RHSA-2013:0691

The bugs contained in this section are addressed by advisory RHSA-2013:0691. Further information about this advisory is available at https://rhn.redhat.com/errata/RHSA-2013-0691.html.

glusterfs

BZ# 859387
Prior to this update, errors were encountered during the rebalance operation when replicate bricks were connected and disconnected in quick succession. As a consequence, the rebalance operation failed. This update modifies the rebalance code to handle failures while replica pairs are being connected and disconnected in quick succession.

BZ# 894237
Prior to this update, the rebalance process crashed with a segmentation fault due to improper handling of a pointer. With this update, the rebalance code is fixed to dereference the pointer properly.

BZ# 856156
Prior to this update, the O_DIRECT flag was not handled properly. This is a basic requirement as per POSIX standards, mainly with respect to cache-coherency-related implementations. With this update, support for the O_DIRECT flag when used with the open() system call has been added to adhere to the standards.

BZ# 858499
Prior to this update, the FUSE module was not passing the right mount parameter required for the read-only mount option. As a consequence, having a read-only glusterFS mount of the volume was not possible. This issue has been fixed and users can now successfully have a read-only glusterFS mount (-o ro) of the volume.

BZ# 869724
Prior to this update, smbtorture's ping-pong test failed against a glusterFS share with a STATUS_FILE_LOCK_CONFLICT error. With this update, the behavior of the posix-locks module follows POSIX locking semantics. Now, smbtorture's ping-pong tests run smoothly on top of glusterFS mounts.

glusterfs-fuse

BZ# 856206
Prior to this update, the FUSE module's queue length was not configurable, and this led to memory consumption issues. A new background-qlen=length mount option has been introduced that determines how many requests FUSE queues before it stops accepting new requests.


BZ# 876679
Prior to this update, 32-bit clients and applications were not able to use 64-bit inodes and therefore could not utilize native mount points from Red Hat Storage. With this update, the FUSE module is enhanced with an inode32 mount option. Now, the Red Hat Storage native client accepts the enable-ino32 mount option, which causes the file system to present 32-bit inodes instead of 64-bit inodes.

BZ# 858488
Prior to this update, there was a problem in the way the value associated with the cluster.min-free-disk tunable was interpreted. With this update, glusterFS handles values specified as a percentage correctly.

glusterfs-geo-replication

BZ# 883827
Prior to this update, geo-replication failed to set the .xtime xattrs when an unprivileged user accessed the file system. As a consequence, performance was degraded. This update fixes the issue in tracking geo-replication changes when an unprivileged user accesses the file system. Now, performance has been improved in a geo-replication setup that has mountbroker configured.

BZ# 880308

Prior to this update, when network connections used for geo-replication got disconnected, the
exception was not handled correctly. As a consequence, geo-replication sessions were
stopped abruptly. With this update, when network connections get disconnected, geo-replication
handles the error condition gracefully and continues with the replication.

BZ# 880193

Prior to this update, NFS was picking up geo-rep's already open (read-only) file descriptor as
an anonymous FD. With this update, corrections are made to the FD table behavior, when both
NFS and geo-replication are in progress.

BZ# 881736
Prior to this update, the gluster volume geo-replication ... config checkpoint now command output did not provide the sync status properly. With this update, the gluster volume geo-replication ... config checkpoint now command output is enhanced to provide the status more appropriately.

BZ# 902213
Prior to this update, gluster volume geo-replication ... config displayed DeprecationWarning messages due to usage of the .message attribute. With this update, the gluster volume geo-replication ... config command does not access the .message attribute and does not display DeprecationWarning messages.

glusterfs-server


BZ# 888286
Prior to this update, the lock granted callback was sent to the client using the local IP address instead of the failover IP address. As a consequence, locking a file would cause a hang on the mount point. With this update, the NFS locking manager (NLM) code can handle IP failover successfully.

BZ# 864222
Prior to this update, clients received "Stale NFS FileHandle" errors for the root file handle of the volume if the nfs.enable-ino32 option was enabled. With this update, the NFS code is fixed to ensure proper inode transformation logic when the nfs.enable-ino32 option is set.

BZ# 883590
Prior to this update, a client root user could gain the permissions and all the privileges of the local root user. With this enhancement, you can prevent root users from having root privileges and assign them the privileges of nfsnobody. This effectively squashes the power of the root user to the user nfsnobody, preventing unauthorized modification of files on the Red Hat Storage Servers.

BZ# 906884
Prior to this update, file locking (fcntl with the setlkw call) did not work on a root-squashed volume. With this update, the NFS POSIX lock issue is fixed when the root-squash option is enabled on the volume.

BZ# 895841
Prior to this update, issues in the (XFS) lstat system call caused some processes to crash. With this update, the posix module is made more robust to handle back-end brick failures better.

BZ# 911777
Prior to this update, users were able to choose the release channel during rhn_register, which led to setting up an incorrect maintenance configuration. With this update, it defaults to the Red Hat Enterprise Linux 6.2 EUS channel.

rhsc

BZ# 922572
Prior to this update, Red Hat Storage Console displayed an HTTP 500 error after installation. With this update, you can successfully log in to RHSC through the web interface.

BZ# 923674
Prior to this update, the rhsc-setup command failed on systems that have SELinux disabled. With this update, the rhsc-setup command was modified to use semanage commands instead of setsebool. Now, the rhsc-setup command works on systems that have SELinux disabled.

vulnerability


CVE-2012-4406

It was found that OpenStack Swift used the Python pickle module in an insecure way to serialize
and deserialize data from memcached. As memcached does not have authentication, an
attacker on the local network, or possibly an unprivileged user in a virtual machine hosted on
OpenStack, could use this flaw to inject specially-crafted data that would lead to arbitrary code
execution.

CVE-2012-5635

Multiple insecure temporary file creation flaws were found in Red Hat Storage. A local user on
the Red Hat Storage server could use these flaws to cause arbitrary files to be overwritten as
the root user via a symbolic link attack.

CVE-2012-5638

The sanlock server creates /var/log/sanlock.log with world-writable permissions
allowing a local attacker to wipe the contents of the log file or to store data within the log file
(bypassing any quotas applied to their account).

All users are advised to upgrade to these updated packages, which fix these bugs.

4.2. RHBA-2013:1064

The bugs contained in this section are addressed by advisory RHBA-2013:1064. Further information about this advisory is available at https://rhn.redhat.com/errata/RHBA-2013-1064.html.

gluster-swift

BZ# 969224

Installing or upgrading the gluster-swift-plugin RPM overwrites /etc/swift
configuration files. Hence, the customer configuration is overwritten, causing data unavailability.
Now, the RPM installs or upgrades new configuration files with a non-conflicting extension and
customer configuration files are not overwritten, maintaining data availability.

glusterfs

BZ# 966180
Due to the variable entry hash component in the NFS file handles, VMware ESXi virtual machines failed to launch when Red Hat Storage was used as storage for virtual machines. Now, the entry hash component is removed and VMware ESXi can use Red Hat Storage as storage for virtual machines.

BZ# 973922
Changelogs in synchronous replication could become incorrect following an ungraceful shutdown of XFS. This could result in files not being self-healed, or allow unreliable self-heal to happen, which can lead to potential data corruption. The process of updating changelogs has been made resilient to avoid this inconsistency.

glusterfs-server


BZ# 964032
The option to disable root-squash was not executed properly in the server protocol module, and root-squash could not be disabled while the volume was up. Now, the option to disable root-squash is executed in the server protocol and RPC layer, and an administrator can disable root-squash while the volume is up.

BZ# 906119
The format of the internal key used to identify a fully qualified domain name (FQDN) based hostname did not comply with the parser, and NFS mounts failed when a client mounted using an FQDN-based hostname. This issue is now fixed to comply with the parser and the NFS mount works.

BZ# 961492
When root-squash is enabled on a volume, all internal/trusted clients must be allowed to bypass the root-squash check. Previously, daemons such as self-heal were passed through the root-squash check. Now, there is a mechanism to distinguish trusted clients, and the root-squash check is not performed on those client processes.

BZ# 961491
In the quick-read translator, the UID/GID of a user was not copied correctly to the file system operations made to the server, thus enabling root-squash on those operations. Non-root users who did not fall under root-squash were also treated as nobody:nobody, thus breaking POSIX compliance. This issue is now fixed and the root-squash behavior is consistent with the standards.

BZ# 958666
The glusterd init/service scripts did not create a lockfile while starting the process. Hence, during a reboot, the glusterd process was not stopped automatically. Now, a locking and unlocking mechanism has been added to the glusterd init script, and during reboot the glusterd service stops properly.

rhsc

BZ# 966864
The JBoss EAP version available in the channel was updated from 6.0 to 6.1. Hence, Red Hat Storage Console installation failed. Now, the setup scripts are updated to reflect the correct location of the JBoss files needed during installation, and the installation is successful.

BZ# 928311
The VDSM on a Red Hat Storage 2.0 Update 4 node had Secure Sockets Layer (SSL) enabled, but SSL was set to false for communication in the Red Hat Storage Console. Hence, adding a server failed with a SocketException error. Now, the VDSM is configured with SSL set to false. After installing VDSM, stop, reconfigure, and start the VDSM service to successfully add a server in the Red Hat Storage Console.

BZ# 963676

While adding a Red Hat Storage Update 4 node to Red Hat Storage Console, vdsm-gluster
package failed to install due to an error in the Gluster Package Bootstrap component of
VDSM. Now, the VDSM bootstrap component is modified and adding a Red Hat Storage node is
successful.

vdsm

BZ# 924193
VDSM on a Red Hat Storage 2.0 Update 4 node reported cluster compatibility level 3.1/3.2, while the Red Hat Storage Console required compatibility level 2.0. Hence, a Red Hat Storage 2.0 Update 4 node could not be added using the Red Hat Storage Console. The VDSM was modified to support cluster compatibility level 2.0. Now, adding a Red Hat Storage 2.0 Update 4 node using the Red Hat Storage Console is successful.

4.3. RHBA-2013:1092

The bugs contained in this section are addressed by advisory RHBA-2013:1092. Further information about this advisory is available at https://rhn.redhat.com/errata/RHBA-2013-1092.html.

glusterfs

BZ# 985738
Changelogs in synchronous replication could become incorrect following an ungraceful shutdown of XFS. This could result in files not being self-healed, or allow unreliable self-heal to happen, which can lead to potential data corruption. The process of updating changelogs has been made resilient to avoid this inconsistency.

glusterfs-server

BZ# 985737
In the quick-read translator, the UID/GID of a user was not copied correctly to the file system operations made to the server, thus enabling root-squash on those operations. Non-root users who did not fall under root-squash were also treated as nobody:nobody, thus breaking POSIX compliance. This issue is now fixed and the root-squash behavior is consistent with the standards.

BZ# 985736
The format of the internal key used to identify a fully qualified domain name (FQDN) based hostname did not comply with the parser, and NFS mounts failed when a client mounted using an FQDN-based hostname. This issue is now fixed to comply with the parser and the NFS mount works.


Chapter 5. Technology Previews

This chapter provides a list of all available Technology Preview features in Red Hat Storage 2.0.

Technology Preview features are currently not supported under Red Hat Storage subscription services, may not be functionally complete, and are generally not suitable for production use. However, these features are included as a customer convenience and to provide them with wider exposure.

Customers may find these features useful in a non-production environment. Customers are also free to
provide feedback and functionality suggestions for a Technology Preview feature before it becomes fully
supported. Errata will be provided for high-severity security issues.

During the development of a Technology Preview feature, additional components may become available
to the public for testing. It is the intention of Red Hat to fully support Technology Preview features in a
future release.

5.1. Red Hat Storage Console

Red Hat Storage Console is a powerful and simple web-based graphical user interface for managing a Red Hat Storage 2.0 environment. It helps storage administrators easily create and manage multiple storage pools. This includes features such as elastically expanding or shrinking a cluster, and creating and managing volumes.

For more information, refer to Red Hat Storage 2.0 Console Administration Guide.

5.2. Striped Volumes

Striped volumes stripe data across bricks in the volume. For best results, you should use striped volumes only in high-concurrency environments where accessing very large files is critical.

For more information, refer to section Creating Striped Volumes in the Administration Guide.
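As an illustrative sketch only (refer to the Administration Guide for the supported procedure), a striped volume is typically created by passing the stripe count and the participating bricks to gluster volume create; the volume name, servers, and brick paths are placeholders.

# Create a two-way striped volume across two bricks and start it (placeholders throughout).
gluster volume create STRIPEVOL stripe 2 transport tcp server1:/exp1 server2:/exp2
gluster volume start STRIPEVOL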

5.3. Distributed-Striped Volumes

Distributed striped volumes stripe data across two or more nodes in the trusted storage pool. You
should use distributed striped volumes where the requirement is to scale storage and in high
concurrency environments where accessing very large files is critical.

For more information, refer to section Creating Distributed Striped Volumes of the Administration Guide.

5.4. Distributed-Striped-Replicated Volumes

Distributed striped replicated volumes distribute striped data across replicated bricks in the trusted storage pool. For best results, you should use distributed striped replicated volumes in highly concurrent environments where there is parallel access of very large files and performance is critical. Configuration of this volume type is supported only for Map Reduce workloads.

For more information, refer to section Creating Distributed Striped Replicated Volumes in the
Administration Guide.

5.5. Striped-Replicated Volumes

Striped replicated volumes stripe data across replicated bricks in the trusted storage pool. For best results, you should use striped replicated volumes in highly concurrent environments where there is parallel access of very large files and performance is critical. In this release, configuration of this volume type is supported only for Map Reduce workloads.

For more information, refer to section Creating Striped Replicated Volumes in the Administration Guide.

5.6. Replicated Volumes with Replica Count greater than 2

Replicated volumes create copies of files across multiple bricks in the volume. You can use replicated volumes in environments where high availability and high reliability are critical. Creating a replicated volume with a replica count greater than 2 is a technology preview feature.

For more information, refer to section Creating Replicated Volumes in the Administration Guide.
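A hedged sketch of creating such a volume; it follows the usual gluster volume create syntax with a replica count of 3, and the volume name, servers, and brick paths are placeholders.

# Create a three-way replicated volume (replica count greater than 2) and start it.
gluster volume create REPVOL replica 3 transport tcp server1:/exp1 server2:/exp2 server3:/exp3
gluster volume start REPVOL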

5.7. Support for RDMA over Infiniband

Red Hat Storage support for RDMA over Infiniband is a technology preview feature.

5.8. Stopping Remove Brick Operation

You can cancel a remove-brick operation. After executing a remove-brick operation, you can choose to stop it by executing the stop command. Files that have already been migrated during the remove-brick operation are not migrated back to the same brick.

For more information, refer to section Stopping Remove Brick Operation in the Administration Guide.
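A minimal sketch of the sequence, assuming the usual remove-brick start, status, and stop sub-commands; the volume name and brick path are placeholders.

# Start removing a brick, check migration progress, then cancel the operation.
gluster volume remove-brick VOLNAME server1:/exp1 start
gluster volume remove-brick VOLNAME server1:/exp1 status
gluster volume remove-brick VOLNAME server1:/exp1 stop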

5.9. Directory Quota

Directory quotas allow you to set limits on the usage of disk space by directories or volumes. Storage administrators can control disk space utilization at the directory and/or volume level by setting limits on allocatable disk space at any level in the volume and directory hierarchy. This is particularly useful in cloud deployments to facilitate a utility billing model.

For more information, refer to chapter Managing Directory Quota in the Administration Guide.
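A minimal sketch, assuming the gluster volume quota sub-commands documented in the Administration Guide; the volume name, directory, and limit are placeholders.

# Enable quota on the volume, limit a directory to 10 GB, and list the configured limits.
gluster volume quota VOLNAME enable
gluster volume quota VOLNAME limit-usage /projects 10GB
gluster volume quota VOLNAME list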

5.10. Hadoop Compatible Storage

Red Hat Storage provides compatibility for Apache Hadoop and it uses the standard file system APIs
available in Hadoop to provide a new storage option for Hadoop deployments. Existing MapReduce
based applications can use Red Hat Storage seamlessly. This new functionality opens up data within
Hadoop deployments to any file-based or object-based application.

For more information, refer to chapter Managing Compatible Storage in the Administration Guide

5.11. Granular Locking for Large Files

This feature enables using Red Hat Storage as a backing store for preserving large files like virtual
machine images. Granular locking enables internal file operations (like self-heal) without blocking user
level file operations. The latency for user I/O is reduced during self-heal operation.

5.12. Read-only Volume


Red Hat Storage enables you to mount volumes as read-only. You can mount a volume as read-only on an individual client, and you can also make the entire volume read-only for all clients using a volume set option, as sketched below.
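A minimal sketch of both approaches; the features.read-only option key is assumed rather than taken from this document, and the server, volume name, and mount point are placeholders.

# Mount the volume read-only on a single client.
mount -t glusterfs -o ro server1:/VOLNAME /mnt/glusterfs

# Or make the entire volume read-only for all clients.
gluster volume set VOLNAME features.read-only on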


Revision History

Revision 1.1-20    Thu Jan 16 2014    Shalaka Harne
Updated the Package Updates and Known Issues chapters.

Revision 1.1-17    Tue Nov 26 2013    Pavithra Srinivasan
Updated the Known Issues chapter.

Revision 1.1-16    Wed Nov 06 2013    Pavithra Srinivasan
Updated the Known Issues chapter.

Revision 1.1-15    Wed Aug 19 2013    Pavithra Srinivasan
Updated the Known Issues chapter.

Revision 1.1-11    Wed Aug 13 2013    Pavithra Srinivasan
Updated the Known Issues chapter.

Revision 1.1-10    Thu July 18 2013    Divya Muntimadugu
Updated the Known Issues and Package Updates chapters.

Revision 1.1-7    Wed July 17 2013    Divya Muntimadugu
Updated the Known Issues chapter.

Revision 1.1-6    Mon July 15 2013    Divya Muntimadugu
Updated the Package Updates chapter for the Update 5 release.

Revision 1.1-5    Fri July 12 2013    Divya Muntimadugu
Updated the Known Issues chapter with BZ# 982636 for the Update 5 release.

Revision 1.1-4    Thu June 27 2013    Bhavana Mohanraj
Updated the known issues for the NFS section.

Revision 1.1-3    Thu May 30 2013    Divya Muntimadugu
Updated the What is New in this Release chapter.

Revision 1.1-2    Wed Apr 17 2013    Divya Muntimadugu
Updated the Technology Previews chapter.

Revision 1.1-1    Thu Mar 28 2013    Divya Muntimadugu
Red Hat Storage 2.0 - Update 4 release.
