
Helion OpenStack 5.0.1 release notes

Since the official release notes page is more broken than working, here they are in full:

HPE Helion OpenStack® 5.0.1: Release Notes

Fixed in this release

HPE Helion OpenStack 5.0.1 supports SLES12 SP3 for Compute Nodes

SLES12 SP2 is supported for compute nodes in HPE Helion OpenStack 5.0. HPE Helion OpenStack 5.0.1 includes support for the more recently released SLES12 SP3.

Important: HPE Helion OpenStack 5.0.1 does not support SLES12 SP2 compute nodes. If you intend to upgrade to HPE Helion OpenStack 5.0.1 you will be required to upgrade your compute nodes to SP3. See steps below for upgrading SLES compute nodes from SP2 to SP3.

Using the Lifecycle Manager to Deploy SLES Compute Nodes

Deploying SLES compute nodes with Cobbler on the lifecycle manager uses legacy BIOS.

Note: UEFI and Secure boot are currently not supported on SLES compute.

Deploying legacy BIOS SLES compute nodes

The installation process for SLES nodes is almost identical to that of HPE Linux nodes as described in the topic for Installation for HPE Helion OpenStack Entry-scale KVM Cloud. The key differences are:

  • The standard SLES ISO (SLE-12-SP3-Server-DVD-x86_64-GM-DVD1.iso) must be accessible via /home/stack/sles12sp3.iso. Rename the ISO or create a symbolic link.
    mv SLE-12-SP3-Server-DVD-x86_64-GM-DVD1.iso /home/stack/sles12sp3.iso
  • The contents of the SLES SDK ISO (SLE-12-SP3-SDK-DVD-x86_64-GM-DVD1.iso) must be mounted or copied to /opt/hlm_packager/hlm/sles12/zypper/SDK/. If you choose to mount the ISO, we recommend creating an /etc/fstab entry to ensure the ISO is mounted after a reboot.
    • If mounting the ISO on loopback:
      sudo mount -o loop /home/stack/SLE-12-SP3-SDK-DVD-x86_64-GM-DVD1.iso /opt/hlm_packager/hlm/sles12/zypper/SDK/
    • Entry for mounting the ISO on loopback in /etc/fstab:
      /home/stack/SLE-12-SP3-SDK-DVD-x86_64-GM-DVD1.iso    /opt/hlm_packager/hlm/sles12/zypper/SDK/    iso9660    loop    0    0
  • You must identify the node(s) on which you want to install SLES by adding the key/value pair distro-id: sles12sp3-x86_64 to the server details in servers.yml. You will also need to update net_interfaces.yml, server_roles.yml, disk_compute.yml, and control_plane.yml. For more information on configuration of the Input Model for SLES, see SLES Compute Model.
  • The HPE Helion OpenStack playbooks do not currently set up the SDK repository, so it must be added manually. Run the following command on every SLES compute node:
    zypper addrepo --no-gpg-checks --refresh http://$deployer_ip:79/hlm/sles12/zypper/SDK SLES-SDK
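Before kicking off the deployment, it can help to sanity-check the deployer prerequisites from the list above. A minimal sketch (the paths are the ones used in this section; the check helper is illustrative, not part of the product):

```shell
#!/bin/sh
# Sanity-check the deployer prerequisites for SLES compute deployment.
# check <description> <test-flag> <path>: report whether the path passes the test.
check() {
  if [ "$2" "$3" ]; then
    echo "OK:      $1 ($3)"
  else
    echo "MISSING: $1 ($3)"
    return 1
  fi
}

status=0
check "SLES12 SP3 ISO (renamed or symlinked)" -f /home/stack/sles12sp3.iso || status=1
check "SDK repository contents"               -d /opt/hlm_packager/hlm/sles12/zypper/SDK/ || status=1
```

If either line reports MISSING, revisit the corresponding bullet above before running the deployment playbooks.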

Upgrading HPE Helion OpenStack 5.0 with SLES Compute SP2 to SP3

You can upgrade your SLES compute nodes from SP2 to SP3 by following these steps.

  1. You must be running HPE Helion OpenStack 5.0 with SLES12 SP2 compute nodes.
  2. Upgrade the Compute OS from SP2 to SP3 by following these steps.
    • On the deployer, unmount the SP2 content, for example:
      sudo umount /opt/hlm_packager/hlm/sles12/zypper/OS/
      sudo umount /opt/hlm_packager/hlm/sles12/zypper/SDK/
    • Mount (or copy) the contents of the SP3 SDK and Server ISOs, for example:
      sudo mount -o loop SLE-12-SP3-SDK-DVD-x86_64-GM-DVD1.iso /opt/hlm_packager/hlm/sles12/zypper/SDK/
      sudo mount -o loop SLE-12-SP3-Server-DVD-x86_64-GM-DVD1.iso /opt/hlm_packager/hlm/sles12/zypper/OS/
    • On each SLES12 SP2 compute node, remove the old repositories if present:
      sudo zypper removerepo SLES-SDK
      sudo zypper removerepo SLES12-SP2-12.2-0
    • On each SLES12 SP2 compute node, clean the old repository caches:
      sudo zypper clean
    • Add updated repositories (Update the value of deployer_ip as necessary)
      zypper addrepo --no-gpg-checks --refresh http://$deployer_ip:79/hlm/sles12/zypper/OS SLES-OS
      zypper addrepo --no-gpg-checks --refresh http://$deployer_ip:79/hlm/sles12/zypper/SDK SLES-SDK
    • Run zypper to refresh
      sudo zypper refresh
    • Run zypper to upgrade:
      sudo zypper up

    Note: After OS upgrade from SP2 to SP3, the administrator will need to run the following command. If the only way to reach the SLES Compute node is from the iLO console, the command may be run from there.

    sudo systemctl enable openvswitch

    If the openvswitch service is not running after the upgrade, the administrator will need to start it:

    sudo systemctl start openvswitch
  3. Mount the 5.0.1 ISO image
    sudo mount -o loop HelionOpenStack-5.0.iso /media/cdrom
  4. Unpack the following tarball:
    cd ~
    tar fxv /media/cdrom/hos/hos-5.0.0-20161014T020249Z.tar
  5. Run the initialization script included in the tarball to update the deployer.
  6. Run the configuration processor:
    cd ~/helion/hos/ansible
    ansible-playbook -i hosts/localhost config-processor-run.yml
  7. Update the deployment directory:
    cd ~/helion/hos/ansible
    ansible-playbook -i hosts/localhost ready-deployment.yml
  8. Run site.yml
    cd ~/scratch/ansible/next/hos/ansible
    ansible-playbook -i hosts/verb_hosts site.yml
  9. If Ceph is configured, run hlm-cloud-configure.yml
    cd ~/scratch/ansible/next/hos/ansible
    ansible-playbook -i hosts/verb_hosts hlm-cloud-configure.yml
  10. Reboot all nodes.
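After the upgrade and reboot, it is worth verifying that each compute node actually reports SP3 before putting it back into service. A small sketch (the check_sp3 helper is illustrative, not part of the official playbooks; it reads the VERSION field from os-release, e.g. VERSION="12-SP3"):

```shell
#!/bin/sh
# check_sp3 <os-release-file>: succeed only if the file reports SLES12 SP3.
check_sp3() {
  version=$(grep '^VERSION=' "$1" | tr -d '"' | cut -d= -f2)
  [ "$version" = "12-SP3" ]
}
```

On each compute node you would run something like check_sp3 /etc/os-release && echo "SP3 OK", and additionally confirm that Open vSwitch came back with systemctl is-active openvswitch (see the note in step 2 above).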

HPE Linux Changes

HPE Helion OpenStack 5.0.1 includes a new version of the hLinux operating system with fixes for CVEs that were known to us at the time of ISO creation and deemed not to be mitigated by the architecture.

MTU Bug for LBaaS

This release contains a fix for an upstream OpenStack bug.

LBaaSv2 sets up a VIF without specifying its MTU, so the VIF always gets the default MTU of 1500. When the load balancer is attached to a VXLAN-backed project (tenant) network, which by default has an MTU of 1450, packets are dropped.

The fix removes a hardcoded value that prevented LBaaSv2 from using the MTU specified by the network object.
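The 1450 figure comes from VXLAN encapsulation overhead: the outer headers consume roughly 50 bytes of the 1500-byte physical MTU. A quick sketch of the arithmetic (header sizes assume IPv4 without options):

```shell
# VXLAN overhead per packet, counted against the 1500-byte physical MTU:
# outer IPv4 (20) + outer UDP (8) + VXLAN (8) + inner Ethernet (14) = 50 bytes
physical_mtu=1500
overhead=$((20 + 8 + 8 + 14))
tenant_mtu=$((physical_mtu - overhead))
echo "$tenant_mtu"   # 1450
```

This is why a VIF left at the default MTU of 1500 overflows a VXLAN-backed tenant network.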

Magnum Fedora Atomic for HPE Helion OpenStack 5.0 must use version fedora-25

More information is available in the document Deploying a Kubernetes Cluster on Fedora Atomic.

3PAR Fix

Fixed an issue with the 3PAR driver to ensure that CHAP credentials are used correctly after a compute host is rebooted.

Adding Compute Nodes with Ceph already configured

An additional step is needed to add compute nodes to a cloud where Ceph has already been configured.

  • Complete the compute host deployment using site.yml as noted in Adding Compute Nodes in the HPE Helion OpenStack 5.0 documentation.
  • Then run the following command.
    cd ~/scratch/ansible/next/hos/ansible
    ansible-playbook -i hosts/verb_hosts ceph-client-prepare.yml

This applies to hLinux, SLES, RHEL, and RHEV compute nodes.
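A quick way to confirm the Ceph client configuration actually landed on a new compute node is to check for the client config file. A minimal sketch (the path is the stock Ceph default; adjust if your cloud uses a custom cluster name, and note the ceph_ok helper is illustrative):

```shell
#!/bin/sh
# ceph_ok [conf-file]: succeed if a Ceph client configuration with a
# [global] section is present (default: /etc/ceph/ceph.conf).
ceph_ok() {
  conf=${1:-/etc/ceph/ceph.conf}
  [ -f "$conf" ] && grep -q '^\[global\]' "$conf"
}

if ceph_ok; then
  echo "Ceph client configuration present"
else
  echo "Ceph client configuration missing: re-run ceph-client-prepare.yml" >&2
fi
```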


About Sebastiaan Koetsier