Copyright © 2007 Red Hat, Inc. and others
The following topics are covered in this document:
Release Notes Updates
Installation-Related Notes
Feature Updates
Driver Updates
Kernel-Related Updates
Other Updates
Technology Previews
Resolved Issues
Known Issues
Some updates on Red Hat Enterprise Linux 5.1 may not appear in this version of the Release Notes. An updated version may also be available at the following URL:
http://www.redhat.com/docs/manuals/enterprise/RHEL-5-manual/index.html
This section contains information about Red Hat Enterprise Linux 5.1 that did not make it into the Release Notes included in the distribution.
Virtualization does not work on architectures that use Non-Uniform Memory Access (NUMA). As such, installing the virtualized kernel on systems that use NUMA will result in a boot failure.
Some installation numbers install the virtualized kernel by default. If you have such an installation number and your system uses NUMA (or cannot disable NUMA), deselect the Virtualization option during installation.
This release includes WBEMSMT, a suite of web-based applications that provides a user-friendly management interface for Samba and DNS. For more information about WBEMSMT, refer to http://sblim.wiki.sourceforge.net/.
Upgrading pm-utils from a Red Hat Enterprise Linux 5.1 Beta version of pm-utils will fail, resulting in the following error:
error: unpacking of archive failed on file /etc/pm/sleep.d: cpio: rename
To prevent this from occurring, delete the /etc/pm/sleep.d/ directory prior to upgrading. If /etc/pm/sleep.d contains any files, you can move those files to /etc/pm/hooks/.
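The workaround can be scripted. Below is a minimal sketch; the clean_sleep_d function name and the overridable root directory are our additions, and with no argument it operates on the real /etc/pm:

```shell
#!/bin/sh
# Sketch: preserve any hooks from sleep.d in hooks/, then delete
# sleep.d so the pm-utils upgrade no longer trips over it.
clean_sleep_d() {
    pm_root="${1:-/etc/pm}"
    [ -d "$pm_root/sleep.d" ] || return 0   # nothing to clean up
    mkdir -p "$pm_root/hooks"
    # ignore mv's status: sleep.d may legitimately be empty
    mv "$pm_root/sleep.d"/* "$pm_root/hooks/" 2>/dev/null
    rm -rf "$pm_root/sleep.d"
}
```

Run clean_sleep_d (or the equivalent commands by hand) before upgrading the pm-utils package.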
Using the ipath driver in this architecture may result in openmpi crashes. As such, the ipath driver is currently released for this architecture as a Technology Preview.
Hardware testing for the Mellanox MT25204 has revealed that an internal error occurs under certain high-load conditions. When the ib_mthca driver reports a catastrophic error on this hardware, it is usually related to an insufficient completion queue depth relative to the number of outstanding work requests generated by the user application.
Although the driver will reset the hardware and recover from such an event, all existing connections are lost at the time of the error. This generally results in a segmentation fault in the user application. Further, if opensm is running at the time the error occurs, then it will have to be manually restarted in order to resume proper operation.
Driver Update Disks now support Red Hat's Driver Update Program RPM-based packaging. If a driver disk uses the newer format, it is possible to include RPM packaged drivers that will be preserved across system updates.
Please note that driver RPMs are copied only for the default kernel variant that is in use on the installed system. For example, installing a driver RPM on a system running the virtualized kernel will install the driver only for the virtualized kernel. The driver RPM will not be installed for any other installed kernel variant in the system.
As such, on a system that has multiple kernel variants installed, you will need to boot the system on each kernel variant and install the driver RPM. For example, if your system has both bare-metal and virtualized kernels installed, boot your system using the bare-metal kernel and install the driver RPM. Then, reboot the system into the virtualized kernel and install the driver RPM again.
During the lifetime of dom0, you cannot create guests (i.e. xm create) more than 32,750 times. For example, if you have guests rebooting in a loop, dom0 will fail to boot any guest after rebooting guests a total of 32,750 times.
If this event occurs, restart dom0.
Virtualization in this architecture can only support guests with a maximum RAM of 65,434 MB.
The Red Hat Enterprise Linux 5.1 NFS server now supports referral exports. These exports are based on extensions to the NFSv4 protocol. Any NFS clients that do not support these extensions (namely, Red Hat Enterprise Linux releases prior to 5.1) will not be able to access these exports.
As such, if an NFS client does not support these exports, any attempt to access these exports may fail with an I/O error. In some cases, depending on the client implementation, the failure may be more severe, including the possibility of a system crash.
It is important that you take precautions to ensure that NFS referral exports are not accessed by clients that do not support them.
GFS2 is an incremental advancement of GFS. This update applies several significant improvements that require a change to the on-disk file system format. GFS file systems can be converted to GFS2 using the utility gfs2_convert, which updates the metadata of a GFS file system accordingly.
While much improved since its introduction in Red Hat Enterprise Linux 5, GFS2 remains a Technology Preview. The release notes included in the distribution incorrectly state that GFS2 is fully supported. Nevertheless, benchmark tests indicate faster performance on the following:
heavy usage in a single directory and faster directory scans (Postmark benchmark)
synchronous I/O operations (fstest benchmark test indicates improved performance for messaging applications like TIBCO)
cached reads, as there is no longer any locking overhead
direct I/O to preallocated files
NFS file handle lookups
df, as allocation information is now cached
In addition, GFS2 also features the following changes:
journals are now plain (though hidden) files instead of metadata. Journals can now be dynamically added as additional servers mount a file system.
quotas are now enabled and disabled by the mount option quota=<on|off|account>
quiesce is no longer needed on a cluster to replay journals for failure recovery
nanosecond timestamps are now supported
similar to ext3, GFS2 now supports the data=ordered mode
attribute settings lsattr() and chattr() are now supported via standard ioctl()
file system sizes above 16TB are now supported
GFS2 is a standard file system, and can be used in non-clustered configurations
Installing Red Hat Enterprise Linux 5.1 on HP BL860c blade systems may hang during the IP information request stage. This issue manifests when you have to select OK twice on the Configure TCP/IP screen.
If this occurs, reboot and perform the installation with Ethernet autonegotiation disabled. To do this, use the parameter ethtool="autoneg=off" when booting from the installation media. Doing so does not affect the final installed system.
The nohide export option is required on referral exports (i.e. exports that specify a referral server). This is because referral exports need to "cross over" a bound mount point. The nohide export option is required for such a "cross over" to be successful.
For more information on bound mounts, refer to man 5 exports.
This update includes the lvm2 event monitoring daemon. If you are already using lvm2 mirroring, perform the following steps to ensure that all monitoring functions are upgraded properly:
Deactivate all mirrored lvm2 logical volumes before updating. To do this, use the command lvchange -a n <volume group or mirrored volume>.
Stop the old lvm2 event daemon using killall -HUP dmeventd.
Perform the upgrade of all related RPM packages, namely device-mapper and lvm2.
Reactivate all mirrored volumes again using lvchange -a y <volume group or mirrored volume>.
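Taken together, the procedure can be sketched as a short shell function. The volume group name "myvg" is only a placeholder and the use of yum for the package step is an assumption; setting RUN=echo previews the commands without executing them:

```shell
#!/bin/sh
# Sketch of the four mirrored-volume upgrade steps from the notes.
upgrade_lvm2_mirrors() {
    run="${RUN:-}"
    vg="${1:-myvg}"
    $run lvchange -a n "$vg"               # 1. deactivate mirrored LVs
    $run killall -HUP dmeventd             # 2. stop the old event daemon
    $run yum update device-mapper lvm2     # 3. upgrade related packages
    $run lvchange -a y "$vg"               # 4. reactivate mirrored LVs
}
```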
Rapid Virtualization Indexing (RVI) is now supported on 64-bit, 32-bit, and 32-bit PAE kernels. However, RVI can only translate 32-bit guest virtual addresses on the 32-bit PAE hypervisor.
As such, if a guest is running a PAE kernel with more than 3840MB of RAM, a wrong address translation error will occur. This can crash the guest.
It is recommended that you use the 64-bit kernel if you intend to run guests with more than 4GB of physical RAM under RVI.
Running 16 cores or more using AMD Rev F processors may result in system resets when performing fully-virtualized guest installations.
If your system uses a P600 SmartArray controller, a machine check error may occur while running the virtualized kernel. When this occurs, dom0 will reboot.
To prevent this, run the following shell script at the beginning of each boot:
#!/bin/bash
for x in $(lspci -d 103c:3220 | awk '{print $1}'); do
    val=$(setpci -s $x 40.b)
    val=$(( 0x$val | 1 ))
    setpci -s $x 40.b=$(printf '%x' $val)
done
If you encounter a guest installation failure, it is recommended that you restart the xend daemon before attempting to install a new guest.
Installing the systemtap-runtime package will result in a transaction check error if the systemtap package is already installed. Further, upgrading Red Hat Enterprise Linux 5 to 5.1 will also fail if the systemtap package is already installed.
As such, remove the systemtap package using the command rpm -e systemtap-0.5.12-1.el5 before installing systemtap-runtime or performing an upgrade.
Kernel modules such as e1000 and qla2xxx cannot be unloaded if you are running the virtualized kernel.
As such, if you install any third-party drivers, it is recommended that you reboot the system.
Paravirtualized guests cannot use the parted utility. To change disk partitioning on paravirtualized guests, use parted within dom0 on the guest's disk; for example, parted /var/lib/xen/images/pv_guest_disk_image.img.
When setting up NFSROOT, BOOTPROTO must be set as BOOTPROTO=dhcp in /etc/sysconfig/network-scripts/ifcfg-eth0.
If your environment requires a different setting for BOOTPROTO, then temporarily set BOOTPROTO=dhcp in /etc/sysconfig/network-scripts/ifcfg-eth0 before initially creating the initrd. You can reset BOOTPROTO to its original value after the initrd is created.
When attempting to create a fully-virtualized guest, the hypervisor may hang if you allocate too much of available RAM to the guest. In some cases, a kernel panic may occur.
Both events are caused by hypervisor memory shortage. To ensure that hypervisor overhead is accounted for each time you allocate memory to a guest, consider the following equation:
26MB + [(number of virtual CPUs used by guest) x 17MB] = (amount of memory to be left unallocated for each existent guest)
For example, if you have 2048MB of RAM on your system and you intend to use 4 virtual CPUs for only one guest, you should leave 94MB unallocated. If you intend to have two guests, both using 4 virtual CPUs, leave 188MB unallocated (and so on).
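The rule of thumb above is simple to compute in the shell; this small helper is our own illustration, not part of any Red Hat tool:

```shell
#!/bin/sh
# Unallocated hypervisor overhead per guest, in MB:
#   26MB + (number of virtual CPUs x 17MB)
unallocated_mb() {
    echo $(( 26 + $1 * 17 ))
}
unallocated_mb 4    # one guest with 4 virtual CPUs -> prints 94
```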
Currently, live migration of fully virtualized guests is not supported on this architecture. The release notes included in the distribution incorrectly state that it is.
In addition, kexec and kdump are also not supported for virtualization in this architecture.
Crash dumping through kexec and kdump may not function reliably with HP Smart Array controllers. Note that these controllers use the cciss driver.
A solution to this problem, which is likely to involve a firmware update to the controller, is being investigated.
The QLogic iSCSI Expansion Card for the IBM BladeCenter provides both ethernet and iSCSI functions. Some parts on the card are shared by both functions. However, the current qla3xxx and qla4xxx drivers support the ethernet and iSCSI functions individually; neither driver supports using both functions simultaneously.
As such, using both ethernet and iSCSI functions simultaneously may hang the device. This could result in data loss and filesystem corruption on iSCSI devices, or network disruptions on other connected ethernet devices.
When using virt-manager to add disks to an existing guest, duplicate entries may be created in the guest's /etc/xen/<domain name> configuration file. These duplicate entries will prevent the guest from booting.
As such, you should remove these duplicate entries.
Repeatedly migrating a guest between two hosts may cause one host to panic. If a host is rebooted after migrating a guest out of the system and before migrating the same guest back, the panic will not occur.
sysreport is being deprecated in favor of sos. To install sos, run yum install sos. This command installs sos and removes sysreport. It is recommended that you update any existing kickstart files to reflect this.
After installing sos, use the command sosreport to invoke it. Using the command sysreport generates a warning that sysreport is now deprecated; continuing will invoke sosreport.
If you need to use the sysreport tool specifically, use the command sysreport.legacy to invoke it.
For more information about sosreport, refer to man sosreport and sosreport --help.
This section includes information specific to Anaconda and the installation of Red Hat Enterprise Linux 5.1.
In order to upgrade an already-installed Red Hat Enterprise Linux 5, you must use Red Hat Network to update those packages that have changed.
You may use Anaconda to perform a fresh installation of Red Hat Enterprise Linux 5.1 or to perform an upgrade from the latest updated version of Red Hat Enterprise Linux 5 to Red Hat Enterprise Linux 5.1.
Red Hat Enterprise Linux 5.1 for the 64-bit Intel Itanium2 architecture includes runtime support for 32-bit applications through the use of Intel's IA-32 Execution Layer.
The IA-32 Execution Layer is provided on the Supplementary CD for the Intel Itanium2 architecture. In addition, a set of 32-bit libraries and applications are provided on a separate 32-bit Compatibility Layer disc. The IA-32 Execution Layer and 32-bit compatibility packages together provide a runtime environment for 32-bit applications on the 64-bit native distribution.
To install the IA-32 Execution Layer and required 32-bit compatibility packages, follow these steps:
Install Red Hat Enterprise Linux 5.1 for the Intel Itanium2 Architecture.
Insert the Red Hat Enterprise Linux 5.1 Supplementary CD, which contains the ia32el package.
After the system has mounted the CD, change to the directory containing the Supplementary packages. For example:
cd /media/cdrom/Supplementary/
Install the ia32el package:
rpm -Uvh ia32el-<version>.ia64.rpm
Replace <version> with the corresponding version of the ia32el package to be installed.
Eject the Supplementary CD:
eject /media/cdrom
To verify the installation of the 32-bit compatibility layer and libraries after installation, check that the /emul directory has been created and that it contains files.
To verify that the 32-bit compatibility mode is in effect, type the following in a shell prompt:
service ia32el status
At this point you can install compatibility libraries by inserting the 32-bit Compatibility Layer disc. You may choose to install all of the packages available on the disc or choose the particular packages required in order to provide runtime support for your 32-bit applications.
If you are copying the contents of the Red Hat Enterprise Linux 5 CD-ROMs (in preparation for a network-based installation, for example), be sure to copy the CD-ROMs for the operating system only. Do not copy the Supplementary CD-ROM or any of the layered product CD-ROMs, as this will overwrite files necessary for Anaconda's proper operation.
The contents of the Supplementary CD-ROM and other layered product CD-ROMs must be installed after Red Hat Enterprise Linux 5.1 has been installed.
When installing Red Hat Enterprise Linux 5.1 on a fully virtualized guest, do not use the kernel-xen kernel. Using this kernel on fully virtualized guests can cause your system to hang.
If you are using an Installation Number when installing Red Hat Enterprise Linux 5.1 on a fully virtualized guest, be sure to deselect the Virtualization package group during the installation. The Virtualization package group option installs the kernel-xen kernel.
Note that paravirtualized guests are not affected by this issue. Paravirtualized guests always use the kernel-xen kernel.
If you are using the Virtualized kernel when upgrading from Red Hat Enterprise Linux 5 to 5.1, you must reboot after completing the upgrade. You should then boot the system using the updated Virtualized kernel.
The hypervisors of Red Hat Enterprise Linux 5 and 5.1 are not ABI-compatible. If you do not boot the system after upgrading using the updated Virtualized kernel, the upgraded Virtualization RPMs will not match the running kernel.
iSCSI installation and boot was originally introduced in Red Hat Enterprise Linux 5 as a Technology Preview. This feature is now fully supported, with the restrictions described below.
This capability has three configurations depending on whether you are:
using a hardware iSCSI initiator (such as the QLogic qla4xxx)
using the open-iscsi initiator on a system with firmware boot support for iSCSI (such as iSCSI Boot Firmware, or a version of Open Firmware that features the iSCSI boot capability)
using the open-iscsi initiator on a system with no firmware boot support for iSCSI
If you are using a hardware iSCSI initiator, you can use the card's BIOS set-up utility to enter the IP address and other parameters required to obtain access to the remote storage. The logical units of the remote storage will be available in Anaconda as standard sd devices, with no additional set-up required.
If you need to determine the initiator's qualified name (IQN) in order to configure the remote storage server, follow these steps during installation:
Go to the installer page where you select which disk drives to use for the installation.
Click on Advanced storage configuration.
Click on Add iSCSI target.
The iSCSI IQN will be displayed on that screen.
If you are using the open-iscsi software initiator on a system with firmware boot support for iSCSI, use the firmware's setup utility to enter the IP address and other parameters needed to access the remote storage. Doing this configures the system to boot from the remote iSCSI storage.
Currently, Anaconda does not access the iSCSI information held by the firmware. Instead, you must manually enter the target IP address during installation. To do so, determine the IQN of the initiator using the procedure described above. Afterwards, on the same installer page where the initiator IQN is displayed, specify the IP address of the iSCSI target you wish to install to.
After manually specifying the IP address of the iSCSI target, the logical units on the iSCSI targets will be available for installation. The initrd created by Anaconda will now obtain the IQN and IP address of the iSCSI target.
If the IQN or IP address of the iSCSI target are changed in the future, enter the iBFT or Open Firmware set-up utility on each initiator and change the corresponding parameters. Afterwards, modify the initrd (stored in the iSCSI storage) for each initiator as follows:
Expand the initrd using gunzip.
Unpack it using cpio -i.
In the init file, search for the line containing the string iscsistartup. This line also contains the IQN and IP address of the iSCSI target; update this line with the new IQN and IP address.
Re-pack the initrd using cpio -o.
Re-compress the initrd using gzip.
The ability of the operating system to obtain iSCSI information held by the Open Firmware / iBFT firmware is planned for a future release. Such an enhancement will remove the need to modify the initrd (stored in the iSCSI storage) for each initiator whenever the IP address or IQN of the iSCSI target is changed.
If you are using the open-iscsi software initiator on a system with no firmware boot support for iSCSI, use a network boot capability (such as PXE/tftp). In this case, follow the same procedure described earlier to determine the initiator IQN and specify the IP address of the iSCSI target. Once completed, copy the initrd to the network boot server and set up the system for network boot.
Similarly, if the IP address or IQN of the iSCSI target is changed, the initrd should be modified accordingly as well. To do so, use the same procedure described earlier to modify the initrd for each initiator.
The maximum capacity of the ext3 file system is now 16TB (increased from 8TB). This enhancement was originally included in Red Hat Enterprise Linux 5 as a Technology Preview, and is now fully supported in this update.
It is now possible to limit yum to install security updates only. To do so, simply install the yum-security plugin and run the following command:
yum update --security
It is now possible to restart a resource in a cluster without interrupting its parent service. This can be configured in /etc/cluster/cluster.conf
on a running node using the __independent_subtree="1" attribute to tag a resource as independent.
For example:
<service name="example">
    <fs name="One" __independent_subtree="1" ...>
        <nfsexport ...>
            <nfsclient .../>
        </nfsexport>
    </fs>
    <fs name="Two" ...>
        <nfsexport ...>
            <nfsclient .../>
        </nfsexport>
        <script name="Database" .../>
    </fs>
    <ip/>
</service>
Here, two file system resources are used: One and Two. If One fails, it is restarted without interrupting Two. If Two fails, all components (One, children of One and children of Two) are restarted. At no given time are Two and its children dependent on any resource provided by One.
Note that Samba requires a specific service structure, and as such it cannot be used in a service with independent subtrees. This is also true for several other resources, so use the __independent_subtree="1" attribute with caution.
The following Virtualization updates are also included in this release:
AMD-V is now supported in this release. This enables live domain migration for fully virtualized guests.
The in-kernel socket API is now expanded. This was done to fix a bug that occurs when running sctp between guests.
Virtual networking is now part of libvirt, the virtualization library. libvirt has a set of commands that sets up a virtual NAT/router and private network for all local guests on a machine. This is especially useful for guests that do not need to be routable from the outside. It is also useful for developers who use Virtualization on laptops.
Note that the virtual networking capability adds a dependency on dnsmasq, which handles dhcp for the virtual network.
For more information about libvirt, refer to http://libvirt.org.
libvirt can now manage inactive virtual machines. libvirt does this by defining and undefining domains without stopping or starting them. This functionality is similar to the virsh define and virsh undefine commands.
This enhancement allows the Red Hat Virtual Machine Manager to display all available guests. This allows you to start these guests directly from the GUI.
Installing the kernel-xen package no longer leads to the creation of incorrect / incomplete elilo.conf entries.
DomU no longer panics when you perform a save/restore numerous times after a kernel compilation.
The xm create command now has a graphical equivalent in virt-manager.
Nested Paging (NP) is now supported. This feature reduces the complexity of memory management in virtualized environments. In addition, NP also reduces CPU utilization in memory-intensive guests.
At present, NP is not enabled by default. If your system supports NP, it is recommended that you enable NP by booting the hypervisor with the parameter hap=1.
Virtualization is fully supported in this update. This feature was originally introduced as a Technology Preview in Red Hat Enterprise Linux 5.
Note, however, that installing Red Hat Enterprise Linux 5 on a guest will result in a guest freeze and host error, even if the host runs Red Hat Enterprise Linux 5.1. As such, Red Hat Enterprise Linux 5 remains unsupported as a guest in this architecture. Red Hat Enterprise Linux guests must be of version 5.1 or later.
Shared page tables are now supported for hugetlb memory. This enables page table entries to be shared among multiple processes.
Sharing page table entries among multiple processes consumes less cache space. This improves application cache hit ratio, resulting in better application performance.
Anaconda now has the capability to detect, create, and install to dm-multipath devices. To enable this feature, add the parameter mpath to the kernel boot line.
This feature was originally introduced in Red Hat Enterprise Linux 5 as a Technology Preview, and is now fully supported in this release.
Note that dm-multipath also features inbox support for the Dell MD3000. However, multiple nodes that use dm-multipath to access the MD3000 cannot perform immediate failback.
Further, it is recommended that you use the Custom Partitioning interface in Anaconda if your system has both multipath and non-multipath devices. Using Automatic Partitioning in such cases may create both types of devices in the same logical volume groups.
At present, the following restrictions apply to this feature:
If there is only one path to the boot Logical Unit Number (LUN), Anaconda installs to the SCSI device even if mpath is specified. Even after you enable multiple paths to the boot LUN and recreate the initrd, the operating system will boot from the SCSI device instead of the dm-multipath device.
However, if there are multiple paths to the boot LUN to begin with, Anaconda will correctly install to the corresponding dm-multipath device after mpath is specified in the kernel boot line.
By default, user_friendly_names is set to yes in multipath.conf. This is a required setting in the supported implementation of the dm-multipath root device. As such, setting user_friendly_names to no and recreating the initrd will result in a boot failure with the following error:
Checking filesystems
fsck.ext3: No such file or directory while trying to open /dev/mapper/mpath0p1
The ability to boot from a SAN disk device is now supported. In this case, SAN refers to a Fibre Channel or iSCSI interface. This capability also features support for system-to-storage connection through multiple paths using dm-multipath.
In configurations that use multiple host bus adapters (HBA), you may need to set the system BIOS to boot from another adapter if all paths through the current adapter fail.
The Driver Update Program (DUP) was designed to allow third-party vendors (such as OEMs) to add their own device drivers and other Linux Kernel Modules to Red Hat Enterprise Linux 5 systems using regular RPM packages as the distribution containers.
Red Hat Enterprise Linux 5.1 applies several updates to the DUP, most notably:
install-time Driver Update RPMs through Driver Update Disks is now supported
bootpath Driver Updates affecting the system bootpath are now supported
support for third-party packaging of Advanced Linux Sound Architecture (ALSA) is now deprecated
Further, various updates were applied to the approved kernel ABI symbol whitelists. These whitelists are used when packaging drivers to determine which symbols and data structures provided by the kernel can be used in a third-party driver.
For more information, refer to http://www.kerneldrivers.org/RedHatKernelModulePackages.
acpi: updated the ibm_acpi module to address several ACPI and docking station issues with Lenovo laptops.
ipmi: Polling kthread no longer runs when hardware interrupt is assigned to a Baseboard Management Controller.
sata: SATA/SAS upgraded to version 2.6.22-rc3.
openib and openmpi: upgraded to OFED (OpenFabrics Enterprise Distribution) version 1.2.
powernow-k8: upgraded to version 2.0.0 to fully support Greyhound.
xinput: added to enable full RSA support.
aic94xx: upgraded to version 1.0.2-1, in line with an upgrade of the embedded sequencer firmware to v17. These updates apply the following changes:
fixed ascb race condition on platforms with expanders
added REQ_TASK_ABORT and DEVICE_RESET handlers
physical ports are now cleaned up properly after a discovery error
phys can now be enabled and disabled through sysfs
extended use of DDB lock to prevent race condition of DDB
ALSA updated to version 1.0.14. This update applies the following fixes:
fixed noise problem on the IBM Taroko (M50)
Realtek ALC861 is now supported
fixed a muting problem on xw8600 and xw6600
ADI 1884 Audio is now supported
fixed an audio configuration problem on xw4600
added function calls to set maximum read request size for PCIX and PCI-Express
IBM System P machines now support PCI-Express hotplugging
added necessary drivers and PCI ID to support SB600 SMBus
e1000 driver: updated to version 7.3.20-k2 to support I/OAT-enabled chipsets.
bnx2 driver: updated to version 1.5.11 to support 5709 hardware.
b44 ethernet driver: backported from upstream version 2.6.22-rc4 to apply the following changes:
several endianness fixes were made
DMA_30BIT_MASK constant is now used
skb_copy_from_linear_data_offset() is now used
spin_lock_irqsave() now features safer interrupt disabling
simple error checking is performed during resume
several fixes to multicast were applied
chip reset now takes longer than previously anticipated
Marvell sky2 driver: updated to version 1.14 to fix a bug that causes a kernel panic if the ifup/ifdown commands are executed repeatedly.
forcedeth-0.60 driver: now included in this release. This applies several critical bug fixes for customers using NVIDIA's MCP55 motherboard chipsets and corresponding onboard NIC.
ixgb driver: updated to the latest upstream version (1.0.126).
netxen_nic driver: version 3.4.2-2 added to enable support for NetXen 10GbE network cards.
Chelsio 10G Ethernet Network Controller is now supported.
added support for PCI error recovery to the s2io device.
Broadcom wireless ethernet driver now supports the PCI ID for the nx6325 card.
fixed a bug that caused an ASSERTION FAILED error when attempting to start a BCM4306 via ifup.
ixgb driver: updated to add EEH PCI error recovery support for the Intel 10-gigabit ethernet card. For more information, refer to /usr/share/doc/kernel-doc-<kernel version>/Documentation/pci-error-recovery.txt.
qla3xxx driver: re-enabled and updated to version 2.03.00-k3 to provide networking support for QLogic iSCSI adapters without using iSCSI.
qla2xxx driver: upgraded to version 8.01.07-k6. This applies several changes, most notably:
iIDMA is now supported
the following Fibre Channel attributes are now supported:
symbolic nodename
system hostname
fabric name
host port state
trace-control async events are no longer logged
reset handling logic has been corrected
MSI-X is now supported
IRQ-0 assignments are now handled per system
NVRAM updates immediately go into effect
This release includes an update of the IPMI driver set to include the upstream changes as of version 2.6.21.3, with some patches included from 2.6.22-rc-4. This update features the following changes (among others):
fixed uninitialized data bug in ipmi_si_intf
kipmid is no longer started if another driver supports interrupts
users are now allowed to override the kernel daemon enable through force_kipmid
per-channel command registration is now supported
MAX_IPMI_INTERFACES is no longer used
hot system interface removal is now supported
added a Maintenance Mode to support firmware updates
added poweroff support for the pigeonpoint IPMC
BT subdriver can now survive long timeouts
added pci_remove handling for proper cleanup on a hot remove
For information about new module parameters, refer to /usr/share/doc/kernel-doc-<kernel version>/Documentation/IPMI.txt.
ported SCSI blacklist from Red Hat Enterprise Linux 4 to this release.
added PCI IDs for the aic79xx driver.
aacraid driver: updated to version 1.1.5-2437 to support PRIMERGY RX800S2 and RX800S3.
megaraid_sas driver: updated to version 3.10. This update defines the entry point for bios_param, adds an IOCTL memory pool, and applies several minor bug fixes.
Emulex lpfc driver: updated to version 8.1.10.9. This update applies several changes, most notably:
fixed host_lock management in the ioctl paths
the AMD chipset is now automatically detected, with the DMA length reduced to 1024 bytes
nodes are no longer removed during dev_loss_tmo if discovery is active
8GB link speeds are now enabled
qla4xxx
driver updated to apply the following changes:
added support for IPV6, QLE406x and ioctl
module
fixed a mutex_lock bug that could cause lockups
resolved lockup issues of qla4xxx and qla3xxx when attempting to load/unload either interface
mpt fusion drivers: updated to version 3.04.04. This update applies several changes, most notably:
fixed several error handling bugs
mptsas now serializes target resets
mptsas and mptfc now support LUNs and targets greater than 255
fixed an LSI mptspi driver regression that resulted in extremely slow DVD drive performance
when an LSI SCSI device returns a BUSY status, I/O attempts no longer fail after several retries
RAID arrays are no longer unavailable after auto-rebuild
arcmsr driver: included to provide support for Areca RAID controllers.
3w-9xxx module: updated to correctly support the 3ware 9650SE.
The CIFS client has been updated to version 1.48aRH. This is based upon the 1.48a release, with patches that apply the following changes:
the mount option sec=none results in an anonymous mount
CIFS now honors the umask when POSIX extensions are enabled
fixed sec= mount options that request packet signing
Note that for users of the EMC Celerra product (NAS Code 5.5.26.x and below), the CIFS client hangs when accessing shares on EMC NAS. This issue is characterized by the following kernel messages:
kernel: CIFS VFS: server not responding
kernel: CIFS VFS: No response for cmd 162 mid 380
kernel: CIFS VFS: RFC1001 size 135 bigger than SMB for Mid=384
After a CIFS mount, it becomes impossible to read or write any file on the share, and any application that attempts I/O on the mount point hangs. To resolve this issue, upgrade to NAS Code 5.5.27.5 or later (use EMC Primus case number emc165978).
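As a sketch of the sec= behavior mentioned above, a CIFS share could be mounted anonymously with an /etc/fstab entry like the following; the server, share, and mount point names are hypothetical:

```
# Anonymous CIFS mount; sec=none requests no authentication
//server.example.com/public  /mnt/public  cifs  sec=none  0 0
```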
MODULE_FIRMWARE tags are now supported.
ICH9 controllers are now supported.
Greyhound processors are now supported in CPUID calls.
The getcpu system call is now supported.
Oprofile now supports new Greyhound performance counter events.
Directed DIAG is now supported to improve z/VM utilization.
The Intel graphics chipset is now supported through the DRM kernel module. Further, the DRM API has been upgraded to version 1.3 to support direct rendering.
Updates to ACPI power management have improved S3 suspend-to-RAM and S4 hibernate.
gaim is now called pidgin.
The certified memory limit for this architecture is now 1TB (increased from 256GB).
Implicit active-active failover using dm-multipath on EMC Clariion storage is now supported.
The Chinese font Zysong is no longer installed as part of the fonts-chinese package. Zysong is now packaged separately as fonts-chinese-zysong. The fonts-chinese-zysong package is located on the Supplementary CD.
Note that the fonts-chinese-zysong package is needed to support the Chinese National Standard GB18030.
The Challenge Handshake Authentication Protocol (CHAP) username and password have a character limit of 256 each.
pump is deprecated in this update. As such, configuring your network interface through netconfig may result in broken ifcfg scripts.
To properly configure your network interface, use system-config-network instead. Installing the updated system-config-network package removes netconfig.
rpm --aid is no longer supported. It is recommended that you use yum when updating and installing packages.
Technology Preview features are currently not supported under Red Hat Enterprise Linux 5.1 subscription services, may not be functionally complete, and are generally not suitable for production use. However, these features are included as a customer convenience and to provide the feature with wider exposure.
Customers may find these features useful in a non-production environment. Customers are also free to provide feedback and functionality suggestions for a Technology Preview feature before it becomes fully supported. Errata will be provided for high-severity security issues.
During the development of a Technology Preview feature, additional components may become available to the public for testing. It is the intention of Red Hat to fully support Technology Preview features in a future release.
Stateless Linux is a new way of thinking about how a system should be run and managed, designed to simplify provisioning and management of large numbers of systems by making them easily replaceable. This is accomplished primarily by establishing prepared system images which get replicated and managed across a large number of stateless systems, running the operating system in a read-only manner (refer to /etc/sysconfig/readonly-root for more details).
In its current state of development, the Stateless features are a subset of the intended goals. As such, the capability remains a Technology Preview.
The following is a list of the initial capabilities included in Red Hat Enterprise Linux 5:
running a stateless image over NFS
running a stateless image via loopback over NFS
running on iSCSI
It is highly recommended that those interested in testing stateless code read the HOWTO at http://fedoraproject.org/wiki/StatelessLinuxHOWTO and join stateless-list@redhat.com.
The enabling infrastructure pieces for Stateless Linux were originally introduced in Red Hat Enterprise Linux 5.
AIGLX is a Technology Preview feature of the otherwise fully supported X server. It aims to enable GL-accelerated effects on a standard desktop. The project consists of the following:
a lightly modified X server
an updated Mesa package that adds new protocol support
By installing these components, you can have GL-accelerated effects on your desktop with very few changes, as well as the ability to enable and disable them at will without replacing your X server. AIGLX also enables remote GLX applications to take advantage of hardware GLX acceleration.
FS-Cache is a local caching facility for remote file systems that allows users to cache NFS data on a locally mounted disk. To set up the FS-Cache facility, install the cachefilesd RPM and refer to the instructions in /usr/share/doc/cachefilesd-<version>/README.
Replace <version> with the corresponding version of the cachefilesd package installed.
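A minimal setup sketch, assuming the cachefilesd package is installed; the server and mount point names are hypothetical. Start the cache daemon, then mount the NFS share with the fsc option so its data is cached locally:

```
# Start the local cache daemon (reads /etc/cachefilesd.conf)
service cachefilesd start
# Mount with the fsc option to enable FS-Cache for this share
mount -t nfs -o fsc server.example.com:/export /mnt/nfs
```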
Systemtap provides free software (GPL) infrastructure to simplify the gathering of information about the running Linux system. This assists the diagnosis of a performance or functional problem. With the help of systemtap, developers no longer need to go through the tedious and disruptive instrument, recompile, install, and reboot sequence that may be otherwise required to collect data.
The Linux target (tgt) framework allows a system to serve block-level SCSI storage to other systems that have a SCSI initiator. This capability is being initially deployed as a Linux iSCSI target, serving storage over a network to any iSCSI initiator.
To set up the iSCSI target, install the scsi-target-utils RPM and refer to the instructions in:
/usr/share/doc/scsi-target-utils-<version>/README
/usr/share/doc/scsi-target-utils-<version>/README.iscsi
Replace <version> with the corresponding version of the package installed.
For more information, refer to man tgtadm.
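The basic tgtadm workflow can be sketched as follows; the target name and backing device below are hypothetical, and the exact options are documented in the README files above:

```
# Create target ID 1 with an iSCSI qualified name
tgtadm --lld iscsi --op new --mode target --tid 1 \
       --targetname iqn.2007-10.com.example:storage.disk1
# Attach a backing store as LUN 1
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 \
       --backing-store /dev/sdb1
# Allow any initiator to connect to the target
tgtadm --lld iscsi --op bind --mode target --tid 1 --initiator-address ALL
```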
The firewire-sbp2 module is included in this update as a Technology Preview. This module enables connectivity with FireWire storage devices and scanners.
At present, FireWire does not support the following:
IPv4
pcilynx host controllers
multi-LUN storage devices
non-exclusive access to storage devices
In addition, the following issues still exist in this version of FireWire:
a memory leak in the SBP2 driver may cause the machine to become unresponsive.
code in this version does not work properly on big-endian machines. This could lead to unexpected behavior on PowerPC.
In multi-boot systems, parted now preserves the starting sector of the first primary partition where Windows Vista™ is installed. As such, when setting up a multi-boot system with both Red Hat Enterprise Linux 5.1 and Windows Vista™, the latter is no longer rendered unbootable.
rmmod xennet no longer causes domU to crash.
4-socket AMD Sun Blade X8400 Server Module systems that do not have memory configured in node 0 no longer panic during boot.
conga and luci can now be used to create and configure failover domains.
When installing the Cluster Storage group through yum, the transaction no longer fails.
During installation, incorrect SELinux contexts are no longer assigned to /var/log/faillog and /var/log/tallylog.
Installing Red Hat Enterprise Linux 5.1 using split installation media (for example, CD or NFSISO) no longer causes an error in the installation of amanda-server.
EDAC now reports the correct amount of memory on the latest k8 processors.
Logging in remotely to a Gnome desktop via gdm no longer causes the login screen to hang.
A bug in autofs that prevented multi-mounts from working properly is now fixed.
Several patches to utrace apply the following fixes:
fixed a bug that caused a crash in a race condition when using ptrace
fixed a regression that prevented some wait4 calls from waking up when a child exited under certain circumstances
fixed a regression that sometimes prevented SIGKILL from terminating a process. This occurred if ptrace was performed on the process under certain circumstances.
A RealTime Clock (RTC) bug that prevented alarms and periodic RTC interrupts from working properly is now fixed.
The first time the Release Notes button is clicked in Anaconda, a delay occurs while the window renders the Release Notes. During this delay, a seemingly empty list appears in the window. The rendering normally completes quickly, so most users may not notice this.
This delay occurs mostly because the Release Notes render during the package installation phase, which is the most CPU-intensive phase of installation.
Host bus adapters that use the MegaRAID driver must be set to operate in "Mass Storage" emulation mode, not in "I2O" emulation mode. To do this, perform the following steps:
Enter the MegaRAID BIOS Set Up Utility.
Enter the Adapter settings menu.
Under Other Adapter Options, select Emulation and set it to Mass Storage.
If the adapter is incorrectly set to "I2O" emulation, the system will attempt to load the i2o driver. This will fail, and prevent the proper driver from being loaded.
Previous Red Hat Enterprise Linux releases generally do not attempt to load the I2O driver before the MegaRAID driver. Regardless of this, the hardware should always be set to "Mass Storage" emulation mode when used with Linux.
Laptops equipped with the Cisco Aironet MPI-350 wireless card may hang while trying to get a DHCP address during any network-based installation using the wired ethernet port.
To work around this, use local media for your installation. Alternatively, you can disable the wireless card in the laptop BIOS prior to installation (you can re-enable the wireless card after completing the installation).
Currently, system-config-kickstart does not support package selection and deselection. When using system-config-kickstart, the Package Selection option indicates that it is disabled. This is because system-config-kickstart uses yum to gather group information, but is unable to configure yum to connect to Red Hat Network.
At present, you need to update package sections in your kickstart files manually. When using system-config-kickstart to open a kickstart file, it will preserve all package information in it and write it back out when you save.
Boot-time logging to /var/log/boot.log is not available in this update of Red Hat Enterprise Linux 5. Equivalent functionality will be added in a future update.
When upgrading from Red Hat Enterprise Linux 4 to Red Hat Enterprise Linux 5, the Deployment Guide is not automatically installed. You need to use pirut to manually install it after completing the upgrade.
The system may not successfully reboot into a kexec/kdump kernel if X is running and using a driver other than vesa. This problem only exists with ATI Rage XL graphics chipsets.
If X is running on a system equipped with ATI Rage XL, ensure that it is using the vesa driver in order to successfully reboot into a kexec/kdump kernel.
When using Red Hat Enterprise Linux 5 on a machine with an nVidia CK804 chipset installed, the following kernel messages may appear:
kernel: assign_interrupt_mode Found MSI capability
kernel: pcie_portdrv_probe->Dev[005d:10de] has invalid IRQ. Check vendor BIOS
These messages indicate that certain PCI-E ports are not requesting IRQs. Further, these messages do not, in any way, affect the operation of the machine.
Using yum to install packages from the 32-bit Compatibility Layer disc may fail. If it does, it is because the Red Hat package signing key was not imported into the RPM database. This happens if you have not yet connected to Red Hat Network and obtained updates. To import the key manually, run the following command as root:
rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
Once the Red Hat GPG key is imported, you can use yum to install packages from the 32-bit Compatibility Layer disc.
Note that when installing from this disc, it is advisable to use yum instead of rpm to ensure that base OS dependencies are addressed during installation.
Removable storage devices (such as CDs and DVDs) do not automatically mount when you are logged in as root. As such, you will need to manually mount the device through the graphical file manager.
Alternatively, you can run the following command to mount a device to /media:
mount /dev/<device name> /media
The IBM System z does not provide a traditional Unix-style physical console. As such, Red Hat Enterprise Linux 5 for the IBM System z does not support the firstboot functionality during initial program load.
To properly initialize setup for Red Hat Enterprise Linux 5 on the IBM System z, run the following commands after installation:
/usr/bin/setup — provided by the setuptool package.
/usr/bin/rhn_register — provided by the rhn-setup package.
When upgrading from Red Hat Enterprise Linux 5 to Red Hat Enterprise Linux 5.1 via Red Hat Network, yum may not prompt you to import the redhat-beta key. As such, it is advised that you import the redhat-beta key manually prior to upgrading. To do this, run the following command:
rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-beta
When a LUN is deleted on a configured filer, the change is not reflected on the host. In such cases, lvm commands will hang indefinitely when dm-multipath is used, as the LUN has now become stale.
To work around this, delete all device and mpath link entries in /etc/lvm/.cache specific to the stale LUN.
To find out what these entries are, run the following command:
ls -l /dev/mpath | grep <stale LUN>
For example, if <stale LUN> is 3600d0230003414f30000203a7bc41a00, the following results may appear:
lrwxrwxrwx 1 root root 7 Aug 2 10:33 /3600d0230003414f30000203a7bc41a00 -> ../dm-4
lrwxrwxrwx 1 root root 7 Aug 2 10:33 /3600d0230003414f30000203a7bc41a00p1 -> ../dm-5
This means that 3600d0230003414f30000203a7bc41a00 is mapped to two mpath links: dm-4 and dm-5.
As such, the following lines should be deleted from /etc/lvm/.cache:
/dev/dm-4
/dev/dm-5
/dev/mapper/3600d0230003414f30000203a7bc41a00
/dev/mapper/3600d0230003414f30000203a7bc41a00p1
/dev/mpath/3600d0230003414f30000203a7bc41a00
/dev/mpath/3600d0230003414f30000203a7bc41a00p1
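The cleanup can be scripted. The sketch below works on a temporary copy of the cache file so it can be run safely; to apply it for real, operate on /etc/lvm/.cache instead, and note that the grep patterns are assumptions based on the example entries above:

```shell
# WWID of the stale LUN from the example above
LUN=3600d0230003414f30000203a7bc41a00
# Work on a temporary copy; the real file is /etc/lvm/.cache
CACHE=$(mktemp)
cat > "$CACHE" <<EOF
/dev/dm-4
/dev/dm-5
/dev/sda
/dev/mapper/${LUN}
/dev/mapper/${LUN}p1
/dev/mpath/${LUN}
/dev/mpath/${LUN}p1
EOF
# Drop every entry naming the stale LUN or its dm-4/dm-5 links
grep -v -e "$LUN" -e '^/dev/dm-4$' -e '^/dev/dm-5$' "$CACHE" > "$CACHE.new" &&
    mv "$CACHE.new" "$CACHE"
cat "$CACHE"   # only the unrelated /dev/sda entry remains
```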
When attempting to create a fully virtualized Windows™ guest from a CD or DVD, the second stage of the guest install might not continue upon reboot.
To work around this, edit /etc/xen/<name of guest machine> by properly appending an entry for the CD / DVD device.
If an installation to a simple file is used as a virtual device, the disk line of /etc/xen/<name of guest machine> will read like the following:
disk = [ 'file:/PATH-OF-SIMPLE-FILE,hda,w']
A DVD-ROM device located on the host as /dev/dvd
can be made available to stage 2 of the installation as hdc by appending an entry like 'phy:/dev/dvd,hdc:cdrom,r'. As such, the disk line should now read as follows:
disk = [ 'file:/opt/win2003-sp1-20061107,hda,w', 'phy:/dev/dvd,hdc:cdrom,r']
The precise device path to use may vary depending on your hardware.
If the sctp module is not added to the kernel, running netstat with the -A inet or -A inet6 option terminates abnormally with the following message:
netstat: no support for `AF INET (sctp)' on this system.
To avoid this, install the sctp kernel module.
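The module can be loaded immediately (as root) with a command like the following; to load it at boot, use whatever mechanism your configuration already uses for other modules:

```
modprobe sctp
```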
Current kernels do not assert Data Terminal Ready (DTR) signals before printing to serial ports during boot time. DTR assertion is required by some devices; as a result, kernel boot messages are not printed to serial consoles on such devices.
The AMD 8132 and HP Broadcom HT100 used on some platforms (such as the HP dc7700) do not support MMCONFIG cycles. If your system uses either chipset, your PCI configuration should use the legacy PortIO CF8/CFC mechanism. To configure this, boot the system with the kernel parameter pci=nommconf during installation and add pci=nommconf to GRUB after rebooting.
Further, the AMD 8132 chipset does not support Message Signaled Interrupts (MSI). If your system uses this chipset, you should also disable MSI. To do this, use the kernel parameter pci=nomsi during installation and add pci=nomsi to GRUB after rebooting.
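As a sketch, the resulting kernel line in /boot/grub/grub.conf would carry both parameters; the kernel version and root device shown here are placeholders:

```
kernel /vmlinuz-2.6.18-53.el5 ro root=/dev/VolGroup00/LogVol00 pci=nommconf pci=nomsi
```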
However, if your specific platform is already blacklisted by the kernel, your system does not require the aforementioned pci kernel parameters. The following HP platforms are already blacklisted by the kernel:
DL585g2
dc7500
xw9300
xw9400
The Virtual Machine Manager (virt-manager) included in this release does not allow users to specify additional boot arguments to the paravirtualized guest installer. This is true even when such arguments are required to install certain types of paravirtualized guests on specific types of hardware.
This issue will be addressed in a future release of virt-manager. To specify arbitrary kernel arguments in installing paravirtualized guests from the command line, use virt-install.
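For example, a paravirtualized guest could be installed from the command line with extra kernel arguments passed through the -x option; all names, paths, and URLs below are hypothetical:

```
virt-install --paravirt --name rhel5pv --ram 512 \
    --file /var/lib/xen/images/rhel5pv.img --file-size 6 \
    --location http://installserver.example.com/rhel5/ \
    -x "console=xvc0 ks=http://installserver.example.com/ks.cfg"
```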
By default, the Itanium dom0 virtualized kernel boots up with 512MB RAM and one CPU. You can override this on the hypervisor command line using the dom0_mem and dom0_max_vcpus parameters. For example, you can set dom0 to boot with 4GB of RAM and 8 CPUs using the parameters dom0_mem=4G dom0_max_vcpus=8.
For Red Hat Enterprise Linux 5, the maximum supported value for dom0_mem is 256G. The maximum supported value for dom0_max_vcpus is 32.
However, setting dom0 to boot with the actual amount of RAM the system has may result in a kernel panic. This is because there is likely to be slightly less than the full RAM actually available for dom0 to use. At present, the hypervisor is unable to handle this situation gracefully.
As such, if the system has x amount of RAM, it is not advisable to use dom0_mem=x.
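On Itanium, these hypervisor parameters belong on the append line of the Xen image stanza in /boot/efi/elilo.conf, before the "--" separator that precedes the dom0 kernel arguments. A sketch, with placeholder image names and root device:

```
image=vmlinuz-2.6.18-53.el5xen
    vmm=xen.gz
    label=linux
    initrd=initrd-2.6.18-53.el5xen.img
    append="dom0_mem=4G dom0_max_vcpus=8 -- root=/dev/VolGroup00/LogVol00"
```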
On some Itanium systems configured for console output to VGA, the dom0 virtualized kernel may fail to boot. This is because the virtualized kernel fails to properly detect the default console device from the Extensible Firmware Interface (EFI) settings.
When this occurs, you can work around this by adding the boot parameter console=tty to the kernel boot options in /boot/efi/elilo.conf
.
On some Itanium systems, X may fail to start on the VGA console. This is because the system memory layout does not prevent X from attempting to use memory regions incompatible with its needs. This can cause a Machine Check Abort (MCA), while in some cases X will simply fail with an X log entry of xf86MapDomainMem(): mmap() failure.
It is recommended that you boot affected systems in runlevel 3, and any necessary X applications should be run within a VNC X server or over X11-forwarding on a remote host. Both bare-metal and virtualized kernels are affected by this issue.
This issue will be resolved in an upcoming minor update of Red Hat Enterprise Linux 5. Testing results confirm that the issue should only manifest on Itanium systems with more than 128 PCI devices. This behavior is consistent with X on Red Hat Enterprise Linux 5.
With the default dm-multipath configuration, Netapp devices may take several minutes to complete failback after a previously failed path is restored. To resolve this problem, add the following Netapp device configuration to the devices section of the multipath.conf file:
devices {
    device {
        vendor                  "NETAPP"
        product                 "LUN"
        getuid_callout          "/sbin/scsi_id -g -u -s /block/%n"
        prio_callout            "/sbin/mpath_prio_netapp /dev/%n"
        features                "1 queue_if_no_path"
        hardware_handler        "0"
        path_grouping_policy    group_by_prio
        failback                immediate
        rr_weight               uniform
        rr_min_io               128
        path_checker            directio
    }
}
[1] This material may be distributed only subject to the terms and conditions set forth in the Open Publication License, v1.0, available at http://www.opencontent.org/openpub/.