In case you've been missing libgcrypt.so.11

If you're running Fedora rawhide and would like to install e.g. Google Chrome, you probably already noticed there's no libgcrypt.so.11 anymore due to a soname bump some time ago.

Facing the same issue, I finally gave in, took the last pre-bump srpm and turned it into a compat package. Thinking others might share the pain and appreciate some relief, I published it as a Copr.

Enjoy at your own risk:
http://copr.fedoraproject.org/coprs/red/libgcrypt.so.11/

There won't be any updates (that's what the proper libgcrypt package is for), fixes or warranties. If anything, installing the compat-libgcrypt package will void whatever other warranties you had.
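
In case it helps, installing from the Copr boils down to something like this once you've dropped the .repo file offered on the Copr page into /etc/yum.repos.d/ (the package name is the one mentioned above, so double-check the Copr page for the authoritative instructions):
# yum -y install compat-libgcrypt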

Install OpenStack Havana-2 from RDO on Fedora 19 and avoid the pitfalls

If you'd like to try out OpenStack Havana on Fedora 19, there are a few pitfalls that need to be avoided. Let me be your navigator and show you a way around them. But first, let me warn you of a few things:
  • Below, I sometimes assume you are familiar with Linux, Fedora and OpenStack.
  • Havana is not yet finished, stable or even released. Havana-2 is just a milestone and development will continue for two more months, so expect things to be broken. Please do report bugs upstream.
  • RDO and the Packstack tool are not supported products; support is limited to a community forum and mailing list. Red Hat offers a commercial offering called RHOS if you need professional support.
  • The installation I describe is very simple and not suited for production even if the software was. It's a single-node setup without any high-availability mechanics. It does not include Neutron, Ceilometer, Heat or any of the even newer components.
  • I don't use \ to continue commands on the next line. Instead, a # simply marks the start of a new command line; everything after it, up to normal text or another #, belongs to the same line.

Fedora 19 Host Preparation

  1. Install Fedora 19 on your host. Yes, it can be done in a VM (with or without nested virtualization) if you wish. Without nested virtualization, guests will obviously run very slowly inside a VM, though.
    Personally, I do Minimal Installs, but you're probably also fine if you go with an Infrastructure Server installation with the Virtualization Add-On. The setup routine will install the missing parts either way. Obviously, starting with a Minimal Install helps keep the total installation size smaller.
  2. Make sure the latest updates are installed.
    # yum -y update
  3. Some components may eventually run into trouble if your hostname is set to localhost, so if that's the case for you, change it to something different. Adding another hostname to your loopback address should be enough, but you can also add the name to your routable IP.
    # sed 's/localhost/openstack/' -i /etc/hostname
    # sed 's/^127.0.0.1.*$/& openstack openstack.localdomain/' -i /etc/hosts
  4. Fedora 19 has some SELinux issues with Havana-2, most notably with Swift. Since you're only doing this for testing purposes, you may exceptionally set SELinux to permissive mode. Never do this in production; you're seriously tampering with your system's security.
    Bugs: rhbz#995779 and rhbz#995780
    # sed 's/^\(SELINUX=\).*$/\1permissive/' -i /etc/selinux/config
  5. The new firewall daemon is not (yet?) compatible with OpenStack, so we'd better switch back to plain old iptables.
    Bugs: rhbz#981652 causing rhbz#981583
    # yum -y install iptables-services
    # systemctl disable firewalld
    # systemctl enable iptables
  6. To make all of the above changes effective (including a possible kernel upgrade), go ahead and
    # reboot

Install OpenStack Havana-2 from RDO

  1. Activate the RDO Havana-2 yum repository.
    # yum -y install http://repos.fedorapeople.org/repos/openstack/openstack-havana/fedora-19/rdo-release-havana-2.noarch.rpm
  2. Install Packstack, a simple tool to install OpenStack on Fedora or RHEL and derivatives. Using Python scripting and some Puppet-foo, it can turn answers, command-line switches or an answer file into a fully installed and configured OpenStack setup to make your life easier.
    # yum -y install openstack-packstack
  3. Unfortunately, one dependency will be missing from the installation, so we'd better install it up front.
    Bug: rhbz#995751
    # yum -y install fprintd-pam
  4. That's what you're here for. Now, we'll install a very simple OpenStack setup and it could hardly be any easier. You will be asked for your root password once, so Packstack can connect to the host over SSH and deploy an SSH key for future use. Packstack connects several times to all hosts it sets up in order to install and configure everything. In our case that's just a connection to localhost and SSH would not be required, but there's no separate routine for that use case. This will take a while as it will download and install a number of packages. Important: don't reboot after this step until you've also performed the next one (I mention all the necessary reboots anyway).
    # packstack --allinone --os-neutron-install=n
    Note: if your installation fails and you want to try again, there should be an answer file either in the folder you executed the above command in or in /root. In this case, you must issue the following command instead of the one above, or things might go very wrong (passwords for services, databases, etc. are randomly generated on each run but saved in the answer file).
    # packstack --answer-file packstack-answers-<date-time>.txt
  5. Due to a recent regression in LVM in Fedora 19, physical volumes on a loopback device are no longer available after a reboot. Packstack set up such a file for Cinder, so we'd better change the boot script to first scan for physical volumes (using the cache) before activating logical volumes.
    Bug: rhbz#997897
    # sed 's/vgchange/pvscan --cache \&\& &/' -i /etc/rc.d/rc.local
  6. Last but not least, Django, which is used by the Horizon web dashboard, introduced a security feature that is not yet set up properly, so we've got to do that manually. We'll allow all hosts in this example, but you could just insert your local hostname instead of the * in the command below.
    Bug: rhbz#988316
    # sed 's/^TEMPLATE_DEBUG.*$/&\n\nALLOWED_HOSTS = [\"*\"]/' -i /etc/openstack-dashboard/local_settings
  7. Restart Apache for the last config change to take effect.
    # systemctl restart httpd
That's it. If all went well (and it should, I tested it a couple of times and so did others) you now have a running OpenStack installation. Simply point your web browser at your installation and you should find a login page. The login details can be found in /root/keystonerc_admin (os_username and os_password). Sourcing that very file, you'll also be able to use the API tools.
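
For a quick smoke test from the command line, something like the following should work (these are the standard OpenStack clients that Packstack installs; the output will of course depend on your setup):
# source /root/keystonerc_admin
# keystone service-list
# nova list
# glance image-list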

Should you require community support or want to dive in deeper, RDO lives at http://openstack.redhat.com/

Flock 2013 Review

As many before me have blogged, Flock 2013 took place from August 9 to 12, 2013. Thanks to generous sponsors, my travel and accommodation were funded this time, so I could make the trip and participate. Thanks to all who made this possible!

While many other bloggers found Flock to be very productive, I personally found it less productive than some of the previous FUDCons in North America, but still more productive than FUDCons in Europe. That might well have two reasons. First, it's been the very first Flock and I'm sure lessons were learnt; the feedback already showed the big gaps. Secondly, my personal focus has shifted quite a bit, into a field that is only a small part of Fedora: the cloud (cloud images and OpenStack). Naturally, less was going on in that area than in the many areas Fedora has been working on for much longer and is more focused on anyway (like being a Linux OS).

So I attended a couple of sessions and most of them were interesting. I don't have much to add to most of them, yet I'd like to comment briefly on two.

OpenStack Test Event

It was great to see that more people were interested in OpenStack than expected. We were an interesting mix of (upstream) developers, testers, (RDO) community folks and users. There were also a few people looking for an introduction to OpenStack, so we began with a quick overview by Russell Bryant. Afterwards we went on to install Havana-2 (i.e. the second milestone of the upcoming release) on Fedora 19, which Kashyap Chamarthy had prepared for us. Obviously, some did it to learn first-hand how OpenStack works, and others, like me, were there to find and document existing bugs (we already knew it wouldn't run out of the box). Personally, I did quite some bug hunting and also found workarounds for all issues, plus I tried my best to help new users with their problems.

So in the end I had a number of bugs with workarounds, plus one issue I didn't have time to debug there but did later back home. From all this, I'm going to write a blog post so others can benefit from my experience.

Last but not least, I must say it was great to finally meet some people I've been working with over IRC, mailing lists and forums for a while already. It's been a great pleasure meeting you all!

QA Meeting

Since Monday just before lunch time (EDT) is the regular QA IRC meeting slot, we decided to do it LIVE at Flock. The agenda was mostly how to test ARM stuff (since it's now a primary architecture) and the cloud image (which is getting increasingly popular and important). Since many Fedora QA, Fedora ARM and Fedora Cloud SIG people were around, it only made sense to sit together and do it this way. We also broadcast (and archived) the session live to YouTube (as most sessions were) and had the usual IRC meeting running in parallel, so remote participants could still take part as usual. Well, almost (the live video feed lagged quite a bit behind and we were not very good at getting everything into IRC).

So again, it's been nice to see those people but this time I already knew most of them. Still, always good to see you all! Also, we quickly came to the conclusion that sometimes a face-to-face meeting like this would be helpful. Maybe we'll do it again over Hangout or such soon, or at least as a VoIP meeting.

Setting a user password when launching cloud images

If you're using e.g. the new Fedora 19 cloud images (see my previous post) you might have noticed that logging in with a password is disabled, both for root and the default user fedora. Usually, that's no issue and actually a security feature. Injecting and using SSH keys instead is the accepted solution. But that's not what I'm here to discuss today.

Still, sometimes logging in over SSH does not do the job. Maybe networking in your cloud is broken and you need access to a guest to further debug it. But no networking, no SSH login. Fortunately, you can use (no)VNC and a tty to log in, right? Well, except SSH keys don't work there. Hence you need the user (or root) to accept password-based logins.

Cloud-Init

Luckily, Fedora 19, like most other modern cloud images, uses cloud-init and thereby supports userdata (which basically is user-provided metadata). Now, with userdata, you can write a simple "script" (it's actually a YAML-style config file) to set a password. By default, that password can be used only once and needs to be changed upon login, unless you disable the expiration with another parameter. And if you want to enable password login over SSH, there's a parameter for that as well. So, putting it all together, your userdata script could look like this:
#cloud-config
password: mysecret
chpasswd: { expire: False }
ssh_pwauth: True
Please note that the first line is not a comment but actually a required "keyword".

Now, there are as many ways to provide the cloud image with the userdata as there are ways to launch a cloud image. Let me cover what I know.

Horizon

If you're launching your instances through Horizon, the OpenStack Dashboard, you go to Instances, click the Launch Instance button, do your usual settings, go to the Post-Creation tab and insert the above code as a Customization Script. Hit the Launch button and that's it. Once the instance is up, you should be able to log in with the configured password.

Nova CLI

On the command line, you need to create a text file with the code above. Then, you just give the nova boot command a --user-data <myscript> parameter and there you go. Again, once the instance is up, you should be able to log in with the configured password.
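
For illustration, a complete boot command might look like the one below; the flavor, image, key pair and instance name are just placeholders for whatever exists in your cloud, and userdata.txt is the file containing the cloud-config snippet from above:
nova boot --flavor m1.small --image "Fedora 19 x86_64" --key-name mykey --user-data userdata.txt f19-test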

Other clouds, other tools and the APIs

Right, obviously the userdata mechanism isn't exclusive to OpenStack. I'm certain Amazon EC2 does it too (and probably did it first) and so might other cloud stacks like Eucalyptus. Also, tools other than the Nova CLI support it, e.g. the euca2ools. And both the OpenStack Compute API and the EC2 API, probably among others, support it, too. Unfortunately, my experience and knowledge are limited, so I'll have to send you to the respective documentation or support channels. But as long as your cloud image uses (a current version of) cloud-init, the above script should work independently of the underlying solution. After all, isn't that the purpose of true cloud computing?

Fedora 19 released, now with official cloud images

Earlier this week, Fedora 19 ("Schrödinger's Cat") was released. If you've missed the news, here's the announcement by Fedora Project Leader Robyn Bergeron and, to be complete, here are the release notes, too.

As with every new Fedora release, there's a load of new features. But in all those linked pages (and most press articles, too) one thing that might be of interest to all users and operators of clouds (be it OpenStack, Eucalyptus, CloudStack or any other) is easily missed or even completely absent.

Fedora 19 Cloud Images

While Fedora has been providing AMI images inside Amazon EC2 for a while already, Fedora 19 is the first release to also feature official raw and qcow2 images ready to run in your own public or private cloud!

From the release notes, section Cloud (2.7.1):
Ready-to-run cloud images are provided as part of Fedora 19. These are available in Amazon EC2 or for direct download. The downloadable images are available in compressed raw image format and in qcow2 for immediate use with EC2, OpenStack, CloudStack, or Eucalyptus. The images are configured with cloud-init, and so will take advantage of ec2-compatible metadata services for provisioning SSH keys.
So what are you waiting for? Get them while they're hot! Quick link: cloud.fedoraproject.org

Import into OpenStack

For fellow OpenStack operators who wish to import Fedora 19 into Glance, the OpenStack Image Service, let me provide an example of how to easily import an image, e.g. x86_64 / qcow2:
glance image-create --name "Fedora 19 x86_64" --is-public true \
--container-format bare --disk-format qcow2 --copy-from \
http://download.fedoraproject.org/pub/fedora/linux/releases/19/Images/\
x86_64/Fedora-x86_64-19-20130627-sda.qcow2
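
Since --copy-from fetches the image asynchronously, it might take a moment until it's ready. To check on the import (the status should eventually change to active), you can for example run:
glance image-show "Fedora 19 x86_64"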

Or click your way through Horizon instead

If you run OpenStack Grizzly or Havana, it's also possible to import the image easily through Horizon, the OpenStack Dashboard.

In your Project view, just go to Images & Snapshots and click the Create Image button in the top right corner. Again, specify a Name (e.g. Fedora 19 x86_64), provide the Image Location by URL, choose QCOW2 as the Format and decide whether you want to make this image Public to all users of your cloud or only to your very own tenant. The Minimum Disk and Minimum RAM parameters are optional.

Now, Create Image and, depending on your bandwidth to the closest mirror, Fedora 19 will soon be available to be launched!

Fedora Cloud SIG @ Flock 2013?

Just copying a (hopefully) motivational e-mail I just sent to the Fedora Cloud SIG mailing list regarding the upcoming Fedora contributors conference. Maybe it gets people outside the SIG motivated to get involved, too...

Registration and Call for Papers for Flock 2013 are open, yet not a single talk on actual Cloud SIG topics has been proposed (not counting the two talks where Cloud / OpenStack are mentioned as a tool).

Are there no topics to propose? Does nobody have anything to discuss? Is there nothing to hack on in a group? Nothing you would like to hear about? Feel free to suggest something if you don't wish to talk yourself and we'll see if we can find someone to step up.

My suggestion would probably be a discussion (talk? hackfest? workshop? sprint?) around the generic Fedora cloud images. What should they look like, how should they be built, how often should they be (re-)released, etc. Just my 5c, though, as I'm not leading that effort (nor am I qualified to speak on it).

If anyone is interested, I could probably do something about Packstack (being a minor contributor and user), but there might be better qualified people (and I can't yet guarantee I'll be there). Is there any interest in such a session? What would you like to see covered?

Will OpenStack Havana Milestone 2 be packaged? If so, we could maybe do a preliminary Testing / QA session (and maybe some QA folks will help us, too) - like a test run for the next test day.

Who else has an idea or proposal? Sorry for being very OpenStack-heavy myself, other topics covered by the cloud SIG are welcome as well, of course.

Unused base images in OpenStack Nova

By default, unused base images are left behind in /var/lib/nova/instances/_base/ indefinitely. Some people like it that way, probably because additional instances (on the same node) will launch quicker. Some people don't like it, maybe because they are limited on space even though that's considered cheap nowadays. I'm still a bit torn, having limited storage in my proof-of-concept cloud but valuing short spawn times.

But ever since (I learnt that) removal of unused base images was set to True by default, I wondered why they'd still remain for me. Did I do something wrong? Being new to this, I had not the slightest clue, except there were no errors, which is generally a good sign you're doing something right. But then Kashyap Chamarthy wrote an interesting article on the matter a few weeks ago. Still, something seemed either wrong, missing or just different in my installation (likely one of the latter two, and probably a detail, but an important one). So I started to poke around the topic myself this morning, which included reading some code to help me understand things better, and to find the actual default values, because the documentation is often wrong about those.

Configuration File Options

There are five options that you can set in /etc/nova/nova.conf that matter here, most of them specific to Nova Compute. The following lists the options with their default values and adds a brief explanation (the first sentence always being based on the actual help text of the option).

remove_unused_base_images = True
Should unused base images be removed?

periodic_interval = 60
How many seconds to wait between running periodic tasks. This is the definition of ticks (see below) and used by several Nova services. I think changing this value requires you to restart all of them on the same node.

image_cache_manager_interval = 0
Number of periodic scheduler ticks to wait between runs of the image cache manager. Obviously that's the periodic task which will actually perform the image removal. If set to 0, the cache manager will not be started!

remove_unused_original_minimum_age_seconds = 86400
Unused unresized base images younger than this will not be removed. The age refers to the file's mtime.

remove_unused_resized_minimum_age_seconds = 3600
Unused resized base images younger than this will not be removed. The age refers to the file's mtime.

So if you want to disable automatic removal of base images, you should be fine because the Cache Manager is disabled by default. To be sure, you might want to change the first of those options to False.
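
If you'd rather make that explicit, a minimal sketch for /etc/nova/nova.conf (using the options from the list above) would simply be:
remove_unused_base_images = False
image_cache_manager_interval = 0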

But should you wish to enable it, you must define an interval for the Cache Manager. I'm not sure what good values are, but I guess that depends on how your cloud is used anyway.

Once the Cache Manager is enabled, you might want to tweak the last two options a bit, though the defaults are sane to start with. Basically you have two kinds of base images and one option for each: 1) original, or unresized, which is basically the uncompressed COW image; 2) resized, which is the original image extended to reflect the chosen flavor's disk size. Note: depending on your configuration, you might not have both, or they might be in different formats, I'm not sure. For more details, see Pádraig Brady's excellent and very complete discussion of this topic.
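
Putting it all together, a hedged example for enabling automatic removal in /etc/nova/nova.conf could look like this; the values are purely illustrative, not recommendations:
remove_unused_base_images = True
# run the image cache manager every 40 ticks (40 * 60 s = 40 minutes with the default periodic_interval)
image_cache_manager_interval = 40
# keep unused original base images for at least a day and resized ones for at least an hour
remove_unused_original_minimum_age_seconds = 86400
remove_unused_resized_minimum_age_seconds = 3600
Remember to restart Nova Compute afterwards, on Fedora e.g. with # systemctl restart openstack-nova-compute.service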

What's actually happening

Let me try to make the complete process a little bit clearer still. Once Nova Compute is started (and after a configurable random delay to prevent a stampede), the periodic task manager is started in order to count the seconds between ticks. Every X ticks (the interval you set), the cache manager is run. The first check it does is to determine which images are no longer used by any instances. Those still in use are ignored; the others are checked for their age, calculated as seconds since the base file's mtime (i.e. the time of the file's last modification). If the age is smaller than what you configured (for this kind of base image: original or resized), it's deemed "too young" and left alone. But if it's mature enough, it's deleted. That's it already, end of story. To understand it even better, the process can easily be observed in the log file.
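
If you'd like to follow along, something like the grep below should do; the log location matches a default Fedora/RDO installation and the exact wording of the messages differs between releases:
# grep -i 'base file' /var/log/nova/compute.log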

Shared instances Folder

If you have configured live migration of instances, all your compute nodes share one common /var/lib/nova/instances/ and you might wonder what's different in that case. The process is exactly the same, with one important difference: base images that are unused on one node could still be in use on a different node and should not be removed. But Nova Compute won't know the image is in use somewhere else, and the Cache Manager will try to remove it nevertheless. Luckily, the mtime changes now and then while the image is in use on any node, so unless you set the age options to a very low value, you should be fine. In a test run I once had it down to 4 minutes and that still worked. With the idle Cirros 0.3.0 image I spawned, the mtime changed every ~2 minutes. That said, I don't yet completely understand why it changes at all.

Grizzly

It's important to note that the above is probably valid for Folsom only. Update: in Grizzly, there is no periodic_interval anymore; the replacement is more dynamic and can probably be ignored. That also means image_cache_manager_interval now uses seconds, as there are no fixed ticks anymore. More importantly, though, its value is now set to 2400 by default, i.e. it's enabled. I don't think anything else changed.