Red Hat Storage 2.1 2.1 Update 1 Release Notes
Release Notes for Red Hat Storage - 2.1 Update 1 Draft 1
Edition 1
Pavithra Srinivasan
Red Hat Engineering Content Services
psriniva@redhat.com
Shalaka Harne
Red Hat Engineering Content Services
sharne@redhat.com
Divya Muntimadugu
Red Hat Engineering Content Services
divya@redhat.com
Legal Notice
Copyright © 2013 Red Hat Inc.
This document is licensed by Red Hat under the Creative Commons Attribution-ShareAlike 3.0 Unported
License. If you distribute this document, or a modified version of it, you must provide attribution to Red
Hat, Inc. and provide a link to the original. If the document is modified, all Red Hat trademarks must be
removed.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section
4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, JBoss, MetaMatrix, Fedora, the Infinity Logo,
and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux ® is the registered trademark of Linus Torvalds in the United States and other countries.
Java ® is a registered trademark of Oracle and/or its affiliates.
XFS ® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States
and/or other countries.
MySQL ® is a registered trademark of MySQL AB in the United States, the European Union and other
countries.
Node.js ® is an official trademark of Joyent. Red Hat Software Collections is not formally related to or
endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack ® Word Mark and OpenStack Logo are either registered trademarks/service marks or
trademarks/service marks of the OpenStack Foundation, in the United States and other countries and
are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or
sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.
Abstract
The Release Notes provide high-level coverage of the improvements and additions that have been
implemented in Red Hat Storage 2.1.
Table of Contents

Preface
    1. Document Conventions
        1.1. Typographic Conventions
        1.2. Pull-quote Conventions
        1.3. Notes and Warnings
    2. Getting Help and Giving Feedback
        2.1. Do You Need Help?
        2.2. We Need Feedback!
Chapter 1. Introduction
Chapter 2. What's New in this Release?
Chapter 3. Known Issues
Chapter 4. Technology Previews
    4.1. Red Hat Storage Console
    4.2. Striped Volumes
    4.3. Distributed-Striped Volumes
    4.4. Distributed-Striped-Replicated Volumes
    4.5. Striped-Replicated Volumes
    4.6. Replicated Volumes with Replica Count greater than 2
    4.7. Support for RDMA over Infiniband
    4.8. Stopping Remove Brick Operation
    4.9. Read-only Volume
Revision History
Preface
1. Document Conventions
This manual uses several conventions to highlight certain words and phrases and draw attention to
specific pieces of information.
In PDF and paper editions, this manual uses typefaces drawn from the Liberation Fonts set. The
Liberation Fonts set is also used in HTML editions if the set is installed on your system. If not, alternative
but equivalent typefaces are displayed. Note: Red Hat Enterprise Linux 5 and later include the Liberation
Fonts set by default.
1.1. Typographic Conventions
Four typographic conventions are used to call attention to specific words and phrases. These
conventions, and the circumstances they apply to, are as follows.
Mono-spaced Bold
Used to highlight system input, including shell commands, file names and paths. Also used to highlight
keys and key combinations. For example:
To see the contents of the file my_next_bestselling_novel in your current working
directory, enter the cat my_next_bestselling_novel command at the shell prompt
and press Enter to execute the command.
The above includes a file name, a shell command and a key, all presented in mono-spaced bold and all
distinguishable thanks to context.
Key combinations can be distinguished from an individual key by the plus sign that connects each part of
a key combination. For example:
Press Enter to execute the command.
Press Ctrl+Alt+F2 to switch to a virtual terminal.
The first example highlights a particular key to press. The second example highlights a key combination:
a set of three keys pressed simultaneously.
If source code is discussed, class names, methods, functions, variable names and returned values
mentioned within a paragraph will be presented as above, in mono-spaced bold. For example:
File-related classes include filesystem for file systems, file for files, and dir for
directories. Each class has its own associated set of permissions.
Proportional Bold
This denotes words or phrases encountered on a system, including application names; dialog-box text;
labeled buttons; check-box and radio-button labels; menu titles and submenu titles. For example:
Choose System → Preferences → Mouse from the main menu bar to launch Mouse
Preferences. In the Buttons tab, select the Left-handed mouse check box and click
Close to switch the primary mouse button from the left to the right (making the mouse
suitable for use in the left hand).
To insert a special character into a gedit file, choose Applications → Accessories →
Character Map from the main menu bar. Next, choose Search → Find… from the
Character Map menu bar, type the name of the character in the Search field and click
Next. The character you sought will be highlighted in the Character Table. Double-click
this highlighted character to place it in the Text to copy field and then click the Copy
button. Now switch back to your document and choose Edit Paste from the gedit menu
bar.
The above text includes application names; system-wide menu names and items; application-specific
menu names; and buttons and text found within a GUI interface, all presented in proportional bold and all
distinguishable by context.
Mono-spaced Bold Italic or Proportional Bold Italic
Whether mono-spaced bold or proportional bold, the addition of italics indicates replaceable or variable
text. Italics denotes text you do not input literally or displayed text that changes depending on
circumstance. For example:
To connect to a remote machine using ssh, type ssh username@domain.name at a shell
prompt. If the remote machine is example.com and your username on that machine is
john, type ssh john@example.com.
The mount -o remount file-system command remounts the named file system. For
example, to remount the /home file system, the command is mount -o remount /home.
To see the version of a currently installed package, use the rpm -q package command. It
will return a result as follows: package-version-release.
Note the words in bold italics above: username, domain.name, file-system, package, version and release.
Each word is a placeholder, either for text you enter when issuing a command or for text displayed by
the system.
Aside from standard usage for presenting the title of a work, italics denotes the first use of a new and
important term. For example:
Publican is a DocBook publishing system.
1.2. Pull-quote Conventions
Terminal output and source code listings are set off visually from the surrounding text.
Output sent to a terminal is set in mono-spaced roman and presented thus:
books Desktop documentation drafts mss photos stuff svn
books_tests Desktop1 downloads images notes scripts svgs
Source-code listings are also set in mono-spaced roman but add syntax highlighting as follows:
static int kvm_vm_ioctl_deassign_device(struct kvm *kvm,
                                        struct kvm_assigned_pci_dev *assigned_dev)
{
        int r = 0;
        struct kvm_assigned_dev_kernel *match;

        mutex_lock(&kvm->lock);

        match = kvm_find_assigned_dev(&kvm->arch.assigned_dev_head,
                                      assigned_dev->assigned_dev_id);
        if (!match) {
                printk(KERN_INFO "%s: device hasn't been assigned before, "
                       "so cannot be deassigned\n", __func__);
                r = -EINVAL;
                goto out;
        }

        kvm_deassign_device(kvm, match);
        kvm_free_assigned_device(kvm, match);
out:
        mutex_unlock(&kvm->lock);
        return r;
}
1.3. Notes and Warnings
Finally, we use three visual styles to draw attention to information that might otherwise be overlooked.
Note
Notes are tips, shortcuts or alternative approaches to the task at hand. Ignoring a note should
have no negative consequences, but you might miss out on a trick that makes your life easier.
Important
Important boxes detail things that are easily missed: configuration changes that only apply to the
current session, or services that need restarting before an update will apply. Ignoring a box
labeled Important will not cause data loss but may cause irritation and frustration.
Warning
Warnings should not be ignored. Ignoring warnings will most likely cause data loss.
2. Getting Help and Giving Feedback
2.1. Do You Need Help?
If you experience difficulty with a procedure described in this documentation, visit the Red Hat Customer
Portal at http://access.redhat.com. Through the customer portal, you can:
search or browse through a knowledgebase of technical support articles about Red Hat products.
submit a support case to Red Hat Global Support Services (GSS).
access other product documentation.
Red Hat also hosts a large number of electronic mailing lists for discussion of Red Hat software and
technology. You can find a list of publicly available mailing lists at https://www.redhat.com/mailman/listinfo.
Click on the name of any mailing list to subscribe to that list or to access the list archives.
2.2. We Need Feedback!
If you find a typographical error in this manual, or if you have thought of a way to make this manual
better, we would love to hear from you! Please submit a report in Bugzilla: http://bugzilla.redhat.com/
against the product Red Hat Storage.
When submitting a bug report, be sure to mention the manual's identifier: 2.1_Release_Notes
If you have a suggestion for improving the documentation, try to be as specific as possible when
describing it. If you have found an error, please include the section number and some of the surrounding
text so we can find it easily.
Chapter 1. Introduction
Red Hat Storage is a software only, scale-out storage solution that provides flexible and agile
unstructured data storage for the enterprise. Red Hat Storage provides new opportunities to unify data
storage and infrastructure, increase performance, and improve availability and manageability in order to
meet a broader set of an organization's storage challenges and needs.
GlusterFS, a key building block of Red Hat Storage, is based on a stackable user space design and can
deliver exceptional performance for diverse workloads. GlusterFS aggregates various storage servers
over network interconnects into one large parallel network file system. The POSIX-compatible GlusterFS
servers, which use the XFS file system format to store data on disks, can be accessed using industry-standard
access protocols including Network File System (NFS) and Server Message Block (SMB), also known as CIFS.
Red Hat Storage can be deployed in the private cloud or data center using Red Hat Storage Server for
On-premise. Red Hat Storage can be installed on commodity servers and storage hardware resulting in
a powerful, massively scalable, and highly available NAS environment. Additionally, Red Hat Storage can
be deployed in the public cloud using Red Hat Storage Server for Public Cloud, for example, within the
Amazon Web Services (AWS) cloud. It delivers all the features and functionality possible in a private
cloud or datacenter to the public cloud by providing massively scalable and highly available NAS in the
cloud.
Red Hat Storage Server for On-Premise
Red Hat Storage Server for On-Premise enables enterprises to treat physical storage as a virtualized,
scalable, and centrally managed pool of storage by using commodity server and storage hardware.
Red Hat Storage Server for Public Cloud
Red Hat Storage Server for Public Cloud packages GlusterFS as an Amazon Machine Image (AMI) for
deploying scalable NAS in the AWS public cloud. This powerful storage server provides a highly
available, scalable, virtualized, and centrally managed pool of storage for Amazon users.
Chapter 2. What's New in this Release?
This chapter describes the key features added to Red Hat Storage 2.1 Update 1.
Object Store Enhancements
Keystone Authentication Service
Red Hat Storage Object Store supports authentication against an external OpenStack Keystone
server. Keystone provides Identity, Token, Catalog, and Policy services. OpenStack users can
use Red Hat Storage Server as a Swift-based object store and OpenStack Keystone as the
authentication service.
TempAuth
You can use the TempAuth authentication mechanism in the data center to test the authentication
service supported in Red Hat Storage Object Store. TempAuth must be used only in test
deployments and not in production environments.
Geo-replication
Geo-replication Status detail output enhancements
The status detail output now provides brick level information. Previously, the output
displayed only node level information.
The Geo-replication configuration option use-tarssh uses tar over the Secure Shell protocol.
The use-tarssh option was introduced to handle workloads of files that have not undergone edits.
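For example, assuming an existing Geo-replication session and using MASTER, SLAVE_HOST, and
SLAVE_VOL as placeholders, the option can be set with a command of the following form (the syntax
shown here is illustrative; refer to the Red Hat Storage 2.1 Administration Guide for the exact command):
# gluster volume geo-replication MASTER SLAVE_HOST::SLAVE_VOL config use-tarssh true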
Directory Quota
With this release, Directory Quota is fully supported. It allows you to set soft and hard limits on the
usage of disk space by directories or volumes. Storage administrators can control disk space
utilization at the directory and volume levels by setting limits on allocatable disk space at any level
in the volume and directory hierarchy. This is particularly useful in cloud deployments to facilitate a
utility billing model.
For more information, refer to the chapter Managing Directory Quota in the Red Hat Storage 2.1
Administration Guide.
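As an illustrative sketch of a typical workflow (VOLNAME and the directory path are placeholders; the
exact syntax and supported units are documented in the Administration Guide):
# gluster volume quota VOLNAME enable
# gluster volume quota VOLNAME limit-usage /projects 10GB
# gluster volume quota VOLNAME list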
In-service Rolling Upgrade
You can upgrade to this update of the Red Hat Storage Server from the Red Hat Storage Server 2.1
GA release using the in-service rolling upgrade feature for replicated and distributed-replicated
volumes without stopping the volumes.
Chapter 3. Known Issues
This chapter provides a list of known issues at the time of release.
Issues related to Red Hat Enterprise Virtualization and Red Hat Storage Integration
If the Red Hat Storage server nodes and the Red Hat Enterprise Virtualization Hypervisors are
present in the same data center, both types of servers are listed for selection when you create a
virtual machine or add a storage domain. Red Hat recommends that you create a separate data
center for the Red Hat Storage server nodes.
BZ# 867236
When a virtual machine is deleted using the Red Hat Enterprise Virtualization Manager, the virtual
machine is removed from the Manager but its image remains on the storage. This consumes storage
unnecessarily.
Workaround: Delete the virtual machine's image file manually using the command line interface.
Deleting the virtual image file frees the space.
BZ# 918032
In this release, the direct-io-mode=enable mount option does not work on the Hypervisor.
BZ# 920791 and BZ# 920530
In a plain distributed hash table (DHT) volume, there is no assurance of data availability, which can
lead to the unavailability of virtual machines and may result in disruption of the cluster.
For high availability, it is recommended that you use distributed-replicate volumes on the
Hypervisors.
BZ# 979901
Virtual machines may experience very slow performance when a rebalance operation is initiated
on the storage volume. This scenario is observed when the load on the storage servers is
extremely high. Hence, it is recommended to run the rebalance operation when the load is low.
BZ# 856121
When a volume starts, a .glusterfs directory is created in the back-end export directory. When
a remove-brick command is performed, it only changes the volume configuration to remove the
brick, and stale data remains in the back-end export directory.
Workaround: Run this command on the Red Hat Storage Server node to delete the stale data:
# rm -rf /export-dir
BZ# 866908
The gluster volume heal VOLNAME info command gives stale entries in its output in a few
scenarios.
Workaround: Execute the command again after 10 minutes. This removes the entries from internal
data structures, and the command no longer displays stale entries.
Issues related to Red Hat Storage Console
BZ# 905440
Due to a bug in JBoss modules (https://issues.jboss.org/browse/MODULES-105), the Red Hat
Storage Console may not work after the latest patches are applied.
Workaround: After every yum update run this command:
# find /usr/share/jbossas/modules -name '*.jar.index' -delete
And then restart the jbossas service.
BZ# 916095
When a server is added to a cluster through the Red Hat Storage Console using the IP address,
and the server is subsequently added to the cluster again using the hostname, the action does
not fail right away. Instead, the Console attempts to perform the installation and then fails. The
newly-added host goes to the Install Failed state.
BZ# 989477
The restore.sh script fails to restore the engine database when run with a user other than
postgres. You can run the restore.sh script only with the -u postgres option.
BZ# 972581
The list events --show-all command and the show event id command raise a Python
error with the datetime object. This renders the list events and show event CLI commands
unusable.
BZ# 990108
Resetting the user.cifs option using the Create Volume operation on the Volume
Options tab on the Red Hat Storage Console reports a failure.
BZ# 970581
When attempting to select a volume option from the Volume Option drop down list, the list
collapses before you make a selection.
Workaround: Click Volume Option again to make a selection.
BZ# 989382
No errors are reported when you start the ovirt-engine-notifier. There is no notification
that the ovirt-engine-notifier started successfully.
Workaround: Check the status of the service using the command:
# service ovirt-engine-notifier status
BZ# 1007751
During the installation of the rhsc-setup RPM, the following benign warnings are seen because
the ovirt user and group are not yet created.
warning: user ovirt does not exist - using root
warning: group ovirt does not exist - using root
Issues related to the Red Hat Storage Console Command Line Interface:
BZ# 928926
When you create a cluster, both the glusterFS service and the virt service get enabled on the
server. An HTTP error message should be displayed, and there should be a restriction against
creating a cluster with both services enabled at the same time.
Issues related to Rebalancing Volumes:
Rebalance does not happen if the bricks are down.
While running rebalance, ensure that all the bricks are in the operating or connected state.
BZ# 960910
After executing rebalance on a volume, if you run the rm -rf command on the mount point to
recursively remove all of the content from the current working directory without prompting, you
may get a Directory not Empty error message.
BZ# 862618
After completion of the rebalance operation, there may be a mismatch of failure counts between the
gluster volume rebalance status output and the rebalance log files.
BZ# 987327
If the user performs a rename operation on some files while the rebalance operation is in
progress, some of those files might not be visible on the mount point after the rebalance
operation is complete.
Issues related to Self-heal
BZ# 877895
When one of the bricks in a replicate volume is offline, the ls -lR command from the mount point
reports Transport end point not connected.
When one of the two bricks under replication goes down, the entries are created only on the other
brick. The Automatic File Replication translator records that the directory on the brick that is down
contains stale data. If the brick that is up is killed before the self-heal happens on that directory,
operations like readdir() fail.
BZ# 972021
In certain cases, due to a race condition in network connectivity, opening a file before the
completion of the self-heal process leads to the file having stale data.
BZ# 852294
If the number of files which need to be self-healed is large, the Gluster CLI reports Operation
failed for the gluster volume heal VOLNAME info command.
BZ# 920970
If executing the gluster volume heal VOLNAME info command becomes unresponsive, subsequent
commands fail for the next 10 minutes due to the cluster-wide lock time-out.
Issues related to replace-brick operation
Even though the replace-brick status command displays Migration complete, not all of the data
may have been migrated to the destination brick. It is strongly recommended that you exercise
caution when performing the replace-brick operation.
The replace-brick operation will not be successful if either the source or the destination brick
goes down.
After the gluster volume replace-brick VOLNAME BRICK NEW-BRICK commit
command is executed, file system operations on that particular volume that are in transit
fail.
After a replace-brick operation, the stat information is different on the NFS mount and the FUSE
mount. This happens due to internal time stamp changes when the replace-brick operation
is performed.
Issues related to Directory Quota:
BZ# 1001453
Truncating a file to a larger size and then writing to it violates the quota hard limit. This is because
the XFS pre-allocation logic applied on the truncated file does not reflect the actual disk space
consumed.
BZ# 1003755
The Directory Quota feature does not work well with hard links. For a directory that has a quota limit
set, the disk usage seen with the du -hs directory command and the disk usage seen with
the gluster volume quota VOLNAME list directory command may differ. It is
recommended that applications writing to a volume with directory quotas enabled do not use hard
links.
BZ# 1016419
Quota does not account for the disk blocks consumed by a directory. Even if a directory grows in
size because of the creation of new directory entries, the size as accounted by quota does not
change. You can create any number of empty files but you will not be able to write to the files
once you reach the quota hard limit. For example, if the quota hard limit of a directory is 100 bytes
and the disk space consumption is exactly equal to 100 bytes, you can create any number of
empty files without exceeding the quota limit.
BZ# 1018205
An alert message is not reported in the logs when the quota soft-timeout and hard-timeout values are changed.
BZ# 1020275
Creating files of different sizes leads to the violation of the quota hard limit.
BZ# 1020986
A directory that has a quota limit set experiences performance degradation when a number
of parallel read and write operations are performed on it.
BZ# 1021466
After setting a quota limit on a directory, creating subdirectories, populating them with files, and
subsequently renaming the files while the I/O operation is in progress causes a quota limit
violation.
BZ# 998893
Zero byte sized files are created when a write operation exceeds the available quota space.
Since quota does not account for the disk blocks consumed by a directory (as per BZ# 1016419),
the write operation creates the directory entry, but the subsequent write operation fails because of
unavailable disk space.
BZ# 1023430
When a quota directory reaches its limit, renaming an existing file in that directory leads to a quota
violation. This is because the renamed file is treated as a new file.
BZ# 998791
During a file rename operation, if the hashing logic moves the target file to a different brick, the
rename operation fails when it is initiated by a non-root user.
BZ# 999458
The quota hard limit is violated for small quota sizes in the range of 10 MB to 100 MB.
BZ# 1020713
In a distribute or distribute-replicate volume, if one or more bricks or one or more replica sets,
respectively, experience downtime while a quota limit is being set on a directory, quota is not
enforced on those bricks or replica sets when they are back online. As a result, the disk usage
can exceed the quota limit.
Workaround: Set the quota limit again after the brick is back online.
BZ# 1032449
When two or more bricks experience downtime and data is written to their replica bricks,
invoking the quota list command on that multi-node cluster displays different outputs after
the bricks are back online.
Issues related to Rolling Upgrade from 2.1 to 2.1 U1
BZ# 1021351
Bricks experience downtime when the load on the self-heal process is high and the volume heal
command is executed periodically. During this time, data on the brick may be unavailable. A
split-brain occurs when these two scenarios happen concurrently on a brick that runs out of memory:
The file or directory is being modified.
The self-heal process is running on the same file or directory.
BZ# 1021807
VM images are likely to be modified constantly. A VM being listed in the output of the volume
heal command does not imply that the self-heal of the VM is incomplete; it could mean that the
modifications on the VM are happening constantly.
BZ# 1022443
Upgrading from 2.1 to 2.1 Update 1 results in the NFS file system mounted on clients becoming
unresponsive. Any new or outstanding file operations on that file system will not respond until the
upgrade activity is complete and the server is back online.
BZ# 1020995
Errors get logged in the self-heal daemon log periodically until some data is created on every
brick during the rolling upgrade process. This happens because the indices directory is not
present on some of the bricks.
Workaround: Execute a script that ensures that one file is created on each brick. For example:
for i in {1..10}; do echo a > $i; done
BZ# 1020976
The volume heal VOLNAME info command gives an error when the number of files it needs
to report is high. Further, all commands may remain unresponsive for 10 minutes.
BZ# 1021659
Red Hat Storage Console does not have the ports 8080 and 38469 listed in its firewall setting
list. The ports 8080 (for Swift service) and 38469 (for NFS ACL support) are overwritten in the
firewall setting of the Red Hat Storage node after it is added in the Red Hat Storage Console.
Workaround: After you add a Red Hat Storage node into the Red Hat Storage Console,
configure the firewall setting to open the ports 8080 and 38469.
BZ# 906747
The volume heal VOLNAME info command fails when the number of entries to be self-healed
is high. This may also cause the gluster self-heal daemon or glusterd to become
unresponsive. As a result, the associated self-heal processes are not completed.
BZ# 1022415
When you perform a rolling upgrade from Red Hat Storage 2.1 to Red Hat Storage 2.1 Update 1,
the glusterd process gets terminated, causing all the Swift requests in transit to fail with a
response code of 503 (internal server error).
Workaround: Prior to performing a rolling upgrade, stop all the swift services with the commands:
# service openstack-swift-proxy stop
# service openstack-swift-account stop
# service openstack-swift-container stop
# service openstack-swift-object stop
Kill the GlusterFS processes with the commands:
# pkill glusterfs
# pkill glusterfsd
Additional Information: It is recommended that you stop all the Swift services prior to starting the
rolling upgrade. Connections to the Swift server are then rejected with an [Errno 111]
ECONNREFUSED error.
Issues related to NFS
After you restart the NFS server, the unlock within the grace-period feature may fail and the locks
held previously may not be reclaimed.
fcntl locking (NFS Lock Manager) does not work over IPv6.
You cannot perform an NFS mount on a machine on which the glusterFS-NFS process is already
running unless you use the NFS mount -o nolock option. This is because glusterFS-NFS has already
registered the NLM port with the portmapper.
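For example, an NFS mount that avoids NLM registration might look like the following; the server
name, volume name, and mount point are placeholders:
# mount -t nfs -o vers=3,nolock server:/VOLNAME /mnt/glusterfs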
If the NFS client is behind a NAT (Network Address Translation) router or a firewall, the locking
behavior is unpredictable. The current implementation of NLM assumes that Network Address
Translation of the client's IP does not happen.
The nfs.mount-udp option is disabled by default. You must enable it if you want to use posix-locks
on Solaris when using NFS to mount a Red Hat Storage volume.
If you enable the nfs.mount-udp option, while mounting a subdirectory (exported using the
nfs.export-dir option) on Linux, you must mount using the -o proto=tcp option. UDP is
not supported for subdirectory mounts on the GlusterFS-NFS server.
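For example, a subdirectory mount forced over TCP might look like the following; the server name,
volume name, subdirectory, and mount point are placeholders:
# mount -t nfs -o vers=3,proto=tcp server:/VOLNAME/subdir /mnt/subdir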
For NFS Lock Manager to function properly, you must ensure that all of the servers and clients
have resolvable hostnames. That is, servers must be able to resolve client names and clients
must be able to resolve server hostnames.
BZ# 973078
For a distributed or a distributed-replicated volume, in the case of an NFS mount, if a brick or
sub-volume is down, then any attempt to create, access, or modify a file which is either hashed,
or hashed and cached, on the sub-volume that is down gives an I/O error instead of a Transport
endpoint is not connected error.
Issues related to Object Store
The GET and PUT commands fail on large files while using Unified File and Object Storage.
Workaround: You must set the node_timeout=60 variable in the proxy, container, and
object server configuration files.
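For illustration only, the setting might be added to each of the server configuration files as shown
below; the file path and section name are assumptions and may differ in your deployment:
# /etc/swift/proxy-server.conf
[app:proxy-server]
node_timeout = 60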
BZ# 985862
When you try to copy a file larger than the brick size, an HTTP error 503 is returned.
Workaround: Increase the amount of storage available in the corresponding volume and retry.
BZ# 982497
When you access a cinder volume from an OpenStack node, it may fail with the error 0-glusterd:
Request received from non-privileged port. Failing request.
Workaround: Perform the following to avoid this issue:
1. Set the following volume option:
# gluster volume set VOLNAME server.allow-insecure on
2. Add the following line to the /etc/glusterfs/glusterd.vol file:
option rpc-auth-allow-insecure on
3. Restart the glusterd service.
Issues related to distributed Geo-replication
BZ# 984813
Files that were removed from the master volume while Geo-replication was stopped are not
removed from the slave when Geo-replication restarts.
BZ# 984591
After stopping a Geo-replication session, if the files already synced to the slave volume are renamed,
then when Geo-replication starts again, the renamed files are treated as new (without considering the
renaming) and are synced to the slave volumes again. For example, if 100 files were renamed, you
would find 200 files on the slave side.
BZ# 987929
While the rebalance process is in progress, starting or stopping a Geo-replication session
results in some files not getting synced to the slave volumes. When a Geo-replication sync process
is in progress, running the rebalance command causes the Geo-replication sync process to
stop. As a result, some files do not get synced to the slave volumes.
BZ# 1029799
When a Geo-replication session is started with tens of millions of files on the master volume,
it takes a long time before the updates are observed on the slave mount point.
BZ# 1026072
The Geo-replication feature keeps the status details, including the changelog entries, in the
/var/run/gluster directory. On Red Hat Storage Server, this directory is a tmpfs mount point,
therefore this data is lost after a reboot.
BZ# 1027727
Some hard links may not get synchronized to the slave volume when there are hundreds of
thousands of hard links on the master volume prior to starting the Geo-replication session.
BZ# 1029899
During a Geo-replication session, after you set the checkpoint, when one of the active nodes
subsequently goes down, the passive node replaces the active node. At this point, the checkpoint
for the replaced node is displayed as invalid.
BZ# 1030052
During a Geo-replication session, the gsyncd process restarts when you set use-tarssh, a
Geo-replication configuration option, to true even if it is already set.
BZ# 1030256
During a Geo-replication session, when create and write operations are in progress, if one of the
active nodes goes down, some files may fail to synchronize to the slave volume.
BZ# 1031687
During a Geo-replication session, if the master node experiences a connectivity issue, the
changes identified by the file system crawl process are not retained when the process is
terminated. When the master node is back online, the file system crawl process restarts and
identifies the changes again. As a result, there is a performance degradation.
Issues related to Red Hat Storage Volumes:
BZ# 877988
Entry operations on replicated bricks may have a few issues when the md-cache module is enabled
on the volume graph.
For example, when one brick is down while the other is up, an application performing a hard
link call (link()) may experience an EEXIST error.
Workaround: Execute this command to avoid this issue:
# gluster volume set VOLNAME stat-prefetch off
BZ# 979861
Although the glusterd service is alive, the gluster command reports glusterd as non-
operational.
Workaround: There are two ways to solve this:
Edit /etc/glusterfs/glusterd.vol to contain this line:
option rpc-auth-allow-insecure on
Or
Reduce the tcp_fin_timeout value from the default 60 seconds to 1 second.
The tcp_fin_timeout variable tells the kernel how long to keep sockets in the FIN-WAIT-2 state
if you were the one closing the socket.
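For example, the timeout can be reduced at runtime with sysctl, using the value suggested above:
# sysctl -w net.ipv4.tcp_fin_timeout=1
To make the change persistent across reboots, add net.ipv4.tcp_fin_timeout = 1 to /etc/sysctl.conf.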
BZ# 986090
Currently, the Red Hat Storage server has issues with mixed usage of hostnames, IPs and
FQDNs to refer to a peer. If a peer has been probed using its hostname but IPs are used during
add-brick, the operation may fail. It is recommended to use the same address for all the
operations, that is, during peer probe, volume creation, and adding/removing bricks. It is
preferable if the address is correctly resolvable to a FQDN.
BZ# 882769
When a volume is started, by default the NFS and Samba server processes are also started
automatically. The simultaneous use of Samba or NFS protocols to access the same volume is
not supported.
Workaround: You must ensure that the volume is accessed using either the Samba or the NFS
protocol, but not both.
BZ# 852293
The management daemon does not have a rollback mechanism to revert any action that may
have succeeded on some nodes and failed on those that do not have the brick's parent
directory. For example, setting the volume-id extended attribute may fail on some nodes and
succeed on others. Because of this, subsequent attempts to recreate the volume using the
same bricks may fail with the error or a prefix of it is already part of a volume.
Workaround:
1. You can either remove the brick directories or remove the glusterfs-related extended
attributes.
2. Try creating the volume again.
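As an illustrative sketch, the glusterfs-related extended attributes can be removed from a brick
directory with commands such as the following; the brick path is a placeholder, and you should verify
the attribute names present on your bricks before removing them:
# setfattr -x trusted.glusterfs.volume-id /path/to/brick
# setfattr -x trusted.gfid /path/to/brick
# rm -rf /path/to/brick/.glusterfs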
BZ# 977492
If the NFS client machine has more than 8 GB RAM and if the virtual memory subsystem is set
with the default value of vm.dirty_ratio and vm.dirty_background_ratio, the NFS client caches a
huge amount of write-data before committing it to the GlusterFS-NFS server. The GlusterFS-NFS
server does not handle huge I/O bursts; it slows down and eventually stops.
Workaround: Set the virtual memory parameters to increase the NFS COMMIT frequency to
avoid huge I/O bursts. The suggested values are:
vm.dirty_background_bytes=32768000
vm.dirty_bytes=65536000
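For example, the suggested values can be applied at runtime with sysctl on the NFS client, and added
to /etc/sysctl.conf to persist across reboots:
# sysctl -w vm.dirty_background_bytes=32768000
# sysctl -w vm.dirty_bytes=65536000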
BZ# 994950
An input-output error is seen instead of the Disk quota exceeded error when the quota limit
is exceeded. This issue is fixed in the Red Hat Enterprise Linux 6.5 kernel.
BZ# 913364
An NFS server reboot does not reclaim the file LOCK held by a Red Hat Enterprise Linux 5.9
client.
BZ# 1020333
The extended attributes trusted.glusterfs.quota.limit-set and
trusted.glusterfs.volume-id are visible from any FUSE mount point on the client machine.
BZ# 896314
GlusterFS Native mount in Red Hat Enterprise Linux 5.x shows lower performance than in Red Hat
Enterprise Linux 6.x for high burst I/O applications. The FUSE kernel module on Red Hat Enterprise Linux
6.x has many enhancements for dynamic write page handling and special optimization for large
bursts of I/O.
Workaround: It is recommended that you use Red Hat Enterprise Linux 6.x clients if you observe
a performance degradation on the Red Hat Enterprise Linux 5.x clients.
BZ# 1017728
When the quota limit is set to a value with a decimal fraction and deem-statfs is set on, a difference is
noticed in the values displayed by the df -h command and the gluster volume quota VOLNAME
list command. In the case of the gluster volume quota VOLNAME list command, the values
do not get rounded off to the next integer.
BZ# 1030438
On a volume, when read and write operations are in progress and simultaneously a rebalance
operation is performed followed by a remove-brick operation on that volume, then the rm -rf
command fails on a few files.
Issues related to POSIX ACLs:
Mounting a volume with -o acl can negatively impact the directory read performance.
Commands like recursive directory listing can be slower than normal.
When POSIX ACLs are set and multiple NFS clients are used, there could be inconsistency in the
way ACLs are applied due to attribute caching in NFS. For a consistent view of POSIX ACLs in a
multiple client setup, use the -o noac option on the NFS mount to disable attribute caching. Note
that disabling the attribute caching option could lead to a performance impact on the operations
involving the attributes.
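For example, an NFS mount with attribute caching disabled might look like the following; the server
name, volume name, and mount point are placeholders:
# mount -t nfs -o vers=3,noac server:/VOLNAME /mnt/glusterfs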
Issues related to Samba
BZ# 994990
When the same file is accessed concurrently by multiple users for reading and writing, the users
trying to write to the file are not able to complete the write operation because the lock is not
available.
Workaround: To avoid the issue, execute the command:
# gluster volume set VOLNAME storage.batch-fsync-delay-usec 0
General issues
If files and directories have different GFIDs on different back-ends, the glusterFS client may hang
or display errors.
Contact Red Hat Support for more information on this issue.
BZ# 865672
Changing a volume from one brick to multiple bricks (the add-brick operation) is not supported.
Volume operations on the volume may fail due to the impact of the add-brick operation on the volume
configuration.
It is recommended that the volume is started with at least two bricks to avoid this issue.
BZ# 839213
A volume deleted in the absence of one of the peers is not removed from the cluster's list of
volumes. This is due to the import logic of peers that rejoin the cluster. The import logic is not
capable of differentiating between deleted and added volumes in the absence of the other
(conflicting) peers.
Workaround: Manually detect it by analyzing the CLI command logs to get the cluster view of the
volumes that must have been present. If any volume is not listed, use the volume sync
command to reconcile the volumes in the cluster.
BZ# 920002
The POSIX compliance tests fail in certain cases on Red Hat Enterprise Linux 5.9 due to
mismatched timestamps on FUSE mounts. These tests pass on all the other Red Hat Enterprise
Linux 5.x and Red Hat Enterprise Linux 6.x clients.
BZ# 916834
The quick-read translator returns stale file handles for certain patterns of file access. When
running the dbench application on the mount point, a dbench: read fails on handle 10030
message is displayed.
Workaround: Use the command below to avoid the issue:
# gluster volume set VOLNAME quick-read off
BZ# 1030962
On installing the Red Hat Storage Server from an ISO or PXE, the kexec-tools package for the
kdump service gets installed by default. However, the crashkernel=auto kernel parameter
required for reserving memory for the kdump kernel, is not set for the current kernel entry in the
bootloader configuration file, /boot/grub/grub.conf. Therefore, the kdump service fails to
start, with the following message in the logs:
kdump: No crashkernel parameter specified for running kernel
Workaround: After installing the Red Hat Storage Server, the crashkernel=auto kernel parameter, or an
appropriate crashkernel=<size>M kernel parameter, can be set manually for the current kernel
in the bootloader configuration file. After that, the Red Hat Storage Server system must be
rebooted, upon which the memory for the kdump kernel is reserved and the kdump service starts
successfully. Refer to the section Configuring kdump on the Command Line in the Red Hat
Enterprise Linux documentation for more information.
Additional information: On installing a new kernel after installing the Red Hat Storage Server,
the crashkernel=auto kernel parameter is successfully set in the bootloader configuration file
for the newly added kernel.
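For illustration only, a kernel entry in /boot/grub/grub.conf with the parameter appended might look
like the following; the kernel version and the other boot parameters are placeholders and will differ on
your system:
kernel /vmlinuz-2.6.32-358.el6.x86_64 ro root=/dev/mapper/vg_rhs-lv_root crashkernel=auto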
Chapter 4. Technology Previews
This chapter provides a list of all available Technology Preview features in Red Hat Storage 2.1.
Technology Preview features are currently not supported under Red Hat Storage subscription services,
may not be functionally complete, and are generally not suitable for production use. However, these
features are included as a customer convenience and to provide the feature with wider exposure.
Customers may find these features useful in a non-production environment. Customers are also free to
provide feedback and functionality suggestions for a Technology Preview feature before it becomes fully
supported. Errata will be provided for high-severity security issues.
During the development of a Technology Preview feature, additional components may become available
to the public for testing. It is the intention of Red Hat to fully support Technology Preview features in a
future release.
4.1. Red Hat Storage Console
Red Hat Storage Console is a powerful and simple web-based graphical user interface for managing a
Red Hat Storage 2.1 environment. It helps storage administrators easily create and manage multiple
storage pools. This includes features such as elastically expanding or shrinking a cluster, and creating
and managing volumes.
For more information, refer to the Red Hat Storage 2.1 Console Administration Guide.
4.2. Striped Volumes
Striped volumes stripe data across bricks in the volume. Use striped volumes only in high concurrency
environments where access to very large files is critical.
For more information, refer to section Creating Striped Volumes in the Red Hat Storage 2.1
Administration Guide.
4.3. Distributed-Striped Volumes
Distributed striped volumes stripe data across two or more nodes in the trusted storage pool. Use
distributed striped volumes to scale storage and to access very large files during critical operations in
high concurrency environments.
For more information, refer to section Creating Distributed Striped Volumes in the Red Hat Storage 2.1
Administration Guide.
4.4. Distributed-Striped-Replicated Volumes
Distributed striped replicated volumes distribute striped data across replicated bricks in a trusted
storage pool. Use distributed striped replicated volumes in highly concurrent environments where there
is parallel access of very large files and performance is critical. Configuration of this volume type is
supported only for Map Reduce workloads.
For more information, refer to the section Creating Distributed Striped Replicated Volumes in the Red Hat
Storage 2.1 Administration Guide.
4.5. Striped-Replicated Volumes
Striped replicated volumes stripe data across replicated bricks in a trusted storage pool. Use
striped replicated volumes in highly concurrent environments where there is parallel access of very large
files and performance is critical. In this release, configuration of this volume type is supported only for
Map Reduce workloads.
For more information, refer to the section Creating Striped Replicated Volumes in the Red Hat Storage
2.1 Administration Guide.
4.6. Replicated Volumes with Replica Count greater than 2
Replicated volumes create copies of files across multiple bricks in the volume. You can use
replicated volumes in environments where high availability and high reliability are critical. Creating
replicated volumes with a replica count greater than 2 is under technology preview.
For more information, refer to the section Creating Replicated Volumes in the Red Hat Storage 2.1
Administration Guide.
4.7. Support for RDMA over Infiniband
Red Hat Storage support for RDMA over Infiniband is a technology preview feature.
4.8. Stopping Remove Brick Operation
You can cancel a remove-brick operation. After executing a remove-brick operation, you can choose to
stop it by executing the stop command. Files that were already migrated during the
remove-brick operation are not migrated back to the same brick.
For more information, refer to the section Stopping Remove Brick Operation in the Red Hat Storage 2.1
Administration Guide.
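For example, assuming VOLNAME and BRICK as placeholders, a remove-brick operation started earlier
can be monitored and stopped with commands of the following form; refer to the Administration Guide
for the exact syntax:
# gluster volume remove-brick VOLNAME BRICK start
# gluster volume remove-brick VOLNAME BRICK status
# gluster volume remove-brick VOLNAME BRICK stop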
4.9. Read-only Volume
Red Hat Storage enables you to mount volumes with read-only permission. While mounting on the client,
you can mount a volume as read-only. You can also make the entire volume read-only, which applies to all
clients, using the volume set command.
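For example, a client can mount a volume read-only, or the volume can be made read-only for all
clients, with commands of the following form; VOLNAME, the server name, and the mount point are
placeholders, and the volume option name may differ slightly between releases:
# mount -t glusterfs -o ro server:/VOLNAME /mnt/glusterfs
# gluster volume set VOLNAME read-only on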
Revision History
Revision 2.1-20 Mon Dec 30 2013 Pavithra Srinivasan
Updated the known issues chapter.
Revision 2.1-17 Tue Dec 10 2013 Pavithra Srinivasan
Updated the known issues chapter.
Revision 2.1-16 Mon Nov 25 2013 Pavithra Srinivasan
Updated the known issues chapter.
Revision 2.1-8 Wed Nov 20 2013 Pavithra Srinivasan
Second draft for Update 1 release.
Revision 2.1-4 Fri Oct 4 2013 Pavithra Srinivasan
First draft for Update 1 release.