Outreachy: Summary

Posted on Fri 31 August 2018 • Tagged with outreachy, openstack

This is the last blog post about my Outreachy internship, summarizing what I have done.

These are the 'main' additions to the sushy and sushy-tools projects:

  • initial version of BIOS resource support in sushy
  • initial version of @Redfish.Settings support, used by BIOS and other resources in sushy
  • emulation of the BIOS resource in sushy-tools for the libvirt and openstacksdk drivers
  • emulation of the Ethernet Interface resource in sushy-tools (took over another patch and added the openstacksdk driver part)
  • support for Message Registries in sushy (some parts still in code review)

However, the implementation of BIOS and @Redfish.Settings support is not entirely complete; some fields were left out of the first version. BIOS has an Attribute Registry which is currently not exposed to sushy users, although it could help determine which attributes a particular BIOS allows and provide other metadata about them. At the moment no input validation happens when setting new BIOS attributes; any failure messages only appear after applying the settings and reading back the results in @Redfish.Settings.

For @Redfish.Settings, nothing is implemented yet for @Redfish.SettingsApplyTime which, if supported by the Redfish service, would allow indicating a preferred time to apply the updates. Multi-client support is also missing: if two or more users update BIOS settings at the same time, it is hard for each of them to determine whether their own update succeeded or whether failures in the results were caused by a peer. Once sushy supports these features, they can be added to the sushy-tools emulation as well.

In addition to these patches, I did some smaller ones, either follow-ups to the 'main' patches or fixes for things I encountered while working on them. One of those: tox was configured to use the python environments py27, py35 and pypy. I believe these came from a standard template or were copy-pasted from another project. When running tox, I got 'interpreter not found' errors for py35 and pypy. Though tox has a flag to skip missing interpreters, I asked my mentors what the intention was. After that conversation I removed pypy, because nobody is expected to run sushy under pypy, and replaced py35 with py3 so that the latest installed Python 3 is used - on my machine py36 (I did install py35 side by side before removing the pinned version from tox). Testing with py36 locally, I then hit two cases where Zuul CI, still running py35, failed while my environment passed. I haven't dug into this further, but for some reason there is something odd about certain methods in py35 that work fine in py27 and py36. So I am not entirely sure it was OK to drop py35 locally as long as it still has to be supported; I now run the py35 environment explicitly before submitting for code review, just in case my code hits that py35 quirk again. Recently py36 was also added to Zuul CI, so py36 gets tested there too.
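For reference, the resulting environment list looks roughly like this (a sketch, not sushy's exact tox.ini; the envlist line is the part discussed above, the rest is a typical OpenStack-style testenv):

```ini
[tox]
# 'py3' resolves to whichever python3 the machine has (py36 in my case),
# instead of pinning py35/pypy interpreters that may not be installed.
envlist = py27,py3,pep8

[testenv]
usedevelop = True
deps = -r{toxinidir}/test-requirements.txt
```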

In one of my first blog posts I drew a diagram to comprehend the bare metal server domain, which was new to me, and I want to look at that diagram again now that the internship has ended. There are not many changes - only one new component was introduced, as sushy-emulator started using openstacksdk, which was added around the same time my internship started. I did not get to work on the Ironic part and overall did not have to interact with the other components in the diagram, but it was useful to explore the surroundings back then.

diagram: Context of sushy, updated

I think sushy and sushy-tools are a good starting point for new contributors - the projects are small and easy to get around, though it did not appear so at the beginning: I had to spend some time getting familiar with Redfish and the building blocks of sushy. At the start of the internship I had that 'I have no idea what I'm doing' feeling (overshadowed by the excitement of working on OpenStack), but now I feel comfortable in this domain.

Overall I learned a lot during this 3-month internship and I would like to thank everyone who made it happen, especially my mentors Ilya and Dmitry. Best summer ever.

If anyone else is interested in Outreachy and would like to participate in the next round, you can check your eligibility and start applying very soon - 10th September; see the Outreachy web page.

Outreachy: Redfish Message registry III and license for standard registry files

Posted on Wed 22 August 2018 • Tagged with outreachy, openstack

Outreachy ended last week; here is what I worked on in those last days.

I continued working on the Redfish Message Registry. A quick recap: the Message Registry is needed to determine severity and to provide additional information about an error, and possible ways to resolve it, when updating the BIOS (and potentially other resources in the future). It was clear that it would not be possible to finish by the end of the internship, especially as everything had to go through code review. Nevertheless, I wanted at least an initial version of the code working from A to Z, to see whether there were any more unexpected obstacles on the way to a working stage.

I ended up splitting the Message Registry functionality into 8 patches, where each patch tackles something new to the sushy library. At first I was avoiding creating too many patches, but I think in the end this makes review easier for everyone: from my experience so far there can be long discussions in reviews, and it is definitely easier when each patch has only one (main-ish) thing to discuss, rather than creating long threads about, say, 3 things and trying to follow what's done and what's not.

The patches toward the end of the chain I have set to 'Work in Progress' mode; while they are functionally working, they will need a closer look at some edge cases and unhappy paths once the patches they depend on are reviewed. I did this to avoid committing too much while the base patches might still need a rewrite or a different approach, which would require rewriting these too. But having some code working, I can be more confident that there are no unexpected limitations or other things that would require changes in sushy to implement Message Registry support.

At the moment 3 patches have received one +2 each, so there is some likelihood that they will not require major further changes. While the internship has ended, I like to finish what I have started, so I'm keeping an eye on the patches and making updates in my now-hobbyist capacity. I don't mind anyone else taking over, but at the moment what these patches lack most is reviews.

Given that this is the third post where the Message Registry is the main topic, it might be obvious that, while I started out implementing BIOS support in sushy, my biggest addition to sushy turned out to be the Message Registry. This was not anticipated when I started - in other words, the scope and effort were not evaluated up front but discovered along the way. This is how it went: started working on BIOS -> encountered @Redfish.Settings -> encountered Message Registry -> encountered a licensing issue.

Speaking of the licensing issue, there are updates on that too. I had already implemented a fall-back scenario where the user has to download the standard message registry files themselves, and if they have not, there is a fall-back for the fall-back: giving very limited information about the BIOS updates - only whether they succeeded or not, with no detail on which attributes failed or hints on how to resolve the failure. I implemented this so that at least something works while the license question is being resolved. On the legal mailing list the preference seemed to be that no 3rd-party files are included in the repository, while DMTF expected that a CC BY license would be sufficient to include the files in the sushy repository. For that, however, the OpenStack Technical Committee has to do a case-by-case review: CC BY compatibility with OpenStack projects is not automatically granted, because OpenStack automatically allows only OSI-approved licenses, and since CC BY is not a software license it is not covered by OSI. It still sounded like extra effort and a serious matter, and I had no idea how big the effort could be, or whether sushy really needed the files included. I asked my mentor whether he had experience with the Technical Committee and whether we should ask them to review this case. As he did not, I asked in the #openstack-ironic IRC channel, where other contributors strongly preferred that the files be included in the sushy repository and in the installation package: there are indeed many customers who test Ironic without access to the Internet, and user-friendliness would suffer if they had to package or download the files manually. I then asked on the mailing list whether the case could be reviewed, and it went very quickly - the initial review happened within hours in the #openstack-tc channel, and final approval came the next day, resolving it in less than 24 hours.

I very much appreciate the fast response, and I no longer know why I was so concerned about the effort involved. The only two things left now are for DMTF to release the files under CC BY (which was necessary anyway for the fall-back scenario, so users can download the files and use them) and for me to update the patch to use the packaged files for the standard registries and add the license notices.

That's all, but stay tuned for another blog post summarizing my Outreachy internship (no sooner than next week though).

Outreachy: Redfish Message registry II and code reviews

Posted on Sat 04 August 2018 • Tagged with outreachy, openstack

Message Registry

Since the last blog post, I have started implementing the Message Registry. I am splitting this feature into several patches, starting with the easy parts: mapping the Message Registry File and Message Registry resources to sushy fields. While implementing this, I ran into the need for a new data structure - a dictionary field - that was not present among sushy's base fields. I implemented it before everything else, and currently these 3 patches are in code review.
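To illustrate the shape of the problem, here is a minimal sketch of a dictionary-valued mapping field; the class and attribute names are made up for illustration and are not sushy's actual API:

```python
# Minimal sketch of declarative field mapping with a dictionary-valued
# field; names here are illustrative, not sushy's actual classes.

class Field:
    """Plain field: copies the JSON value through unchanged."""
    def __init__(self, key):
        self.key = key

    def load(self, json_doc):
        return json_doc.get(self.key)


class DictionaryField(Field):
    """Maps a JSON object of named sub-objects into a dict with one
    entry per key, each parsed through the declared sub-fields."""
    def __init__(self, key, subfields):
        super().__init__(key)
        self.subfields = subfields  # name -> Field

    def load(self, json_doc):
        raw = json_doc.get(self.key) or {}
        return {
            name: {sub: f.load(value) for sub, f in self.subfields.items()}
            for name, value in raw.items()
        }


# A registry maps message names to message definitions -- a natural
# fit for a dictionary field rather than a list field.
registry_json = {
    "Messages": {
        "Success": {"Message": "Successfully Completed Request",
                    "Severity": "OK"},
        "SettingsFailed": {"Message": "Settings update failed",
                           "Severity": "Warning"},
    }
}

messages = DictionaryField("Messages", {
    "message": Field("Message"),
    "severity": Field("Severity"),
}).load(registry_json)
```

The point of the dictionary shape is that lookups by message name stay O(1), which matters later when resolving message IDs from @Redfish.Settings.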

Having taken a closer look at the Message Registry File, I find there are 3 ways to serve the registry file itself:

  • locally as a JSON file,
  • locally in an archive as one of the JSON files,
  • publicly on the Internet as a JSON file.

There is also a 4th use case for sushy: accessing the standard message registry files when a Redfish service has not included them via one of the options above - then sushy has to get the standard registries elsewhere. This use case came up in the previous blog post, when we hit the licensing issue: the standard message files provided by DMTF are only copyrighted, without any license. Last time it looked like they would get a 3-clause BSD license, but as DMTF did not see them as code, they rejected that idea. Going back to the OpenStack legal mailing list, the suggestion was to use a CC license, which is currently being reviewed by DMTF.

Thus, the next thing I'm working on: registry file loading that supports all 4 use cases. It is not a difficult task as such, but I need to fit it nicely into sushy's design. I have implemented yet another change to the sushy base classes, this time to allow processing archives, and I'm thinking about how to split these changes across code reviews - should the base class changes be separate from the Message Registry loading, or go together so there is context and an actual use case for them? I don't want to split too much, and I don't want to create a patch that's too big. But I try not to think about this too much, because it can take forever, so I just pick one approach and stick with it. If a reason comes up during coding why the chosen strategy does not work, it can be changed later.
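The loading logic for the first three cases can be sketched like this (a hypothetical helper, not sushy's eventual implementation): dispatch on whether the source is an HTTP(S) URL, then on whether the bytes are a ZIP archive or plain JSON.

```python
import io
import json
import zipfile
import urllib.request


def load_registry(source):
    """Load message registry JSON documents from a plain file, from a
    ZIP archive of JSON files, or from an HTTP(S) URL.
    Illustrative sketch only, not sushy's actual code."""
    if source.startswith(('http://', 'https://')):
        with urllib.request.urlopen(source) as resp:
            data = resp.read()
    else:
        with open(source, 'rb') as f:
            data = f.read()

    # An archive may hold several registries; take the JSON members.
    if zipfile.is_zipfile(io.BytesIO(data)):
        with zipfile.ZipFile(io.BytesIO(data)) as zf:
            return [json.loads(zf.read(n))
                    for n in zf.namelist() if n.endswith('.json')]
    return [json.loads(data)]
```

The real code also needs the 4th case: falling back to standard registries bundled with (or downloaded for) sushy when the service provides none.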

To test my changes locally, I need a somewhat working mock Redfish server with registry files supported. None of the mockups provided by DMTF include these registries, so I added my own mock files for them, based on the JSON schemas. The next question: how to serve them. This looks like a use case for the sushy-static emulator, which just needs a bunch of JSON files, but in this particular case I also want to test working with a ZIP archive. However, sushy-static is written to serve only JSON files: it understands URLs that correspond to the folder structure and returns the index.json files in those folders - definitely no support for a URL or file ending in .zip. Something to think about if it is ever needed in sushy-static, but right now I'm on a mission to test my code quickly. For sushy it has so far been sufficient to test the code in unit tests, but for unit tests I need to know what to mock, and this time I really didn't know the details of a request returning a file in the response; to be 100% sure I needed to test against a real web server. nginx came to mind, and I set it up to serve static files on my computer. It's the first time I've done this on Fedora (I usually use Ubuntu), so it turned out that SELinux was blocking my choice of port, 2000, and I had some permission issues, but I did get it set up, was able to test my changes, and decided how to mock the HTTP response.
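For reference, the static server needed nothing more than a minimal nginx config along these lines (the port and directory are my local choices, not anything standard), plus on Fedora an SELinux rule such as `semanage port -a -t http_port_t -p tcp 2000` to unblock the non-standard port:

```nginx
server {
    listen 2000;
    # Directory holding the mock registry JSON files and the ZIP archive.
    root /srv/redfish-mocks;

    location / {
        # Serve files as-is; .zip is served with its default MIME type.
        try_files $uri $uri/ =404;
    }
}
```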

Code reviews

Apart from working on these changes, I'm doing code reviews - in both directions, responding to comments on my patches and leaving feedback on other patches. Since the previous blog post one of the patches - emulating BIOS in sushy-emulator - got merged \o/, but the code review for the Ethernet Interface emulation is still in progress. Usually it just sits and waits for attention from reviewers, as I try to respond to any comments as soon as I can while avoiding context switches when I'm working on new patches. In practice this means I start or end my days with code reviews. Responding to reviews can take a lot of time; I think there have been days where that's all I did. Sometimes there are questions in reviews to which I don't know the best answer, and I have to do my own research or try things out.

Another thing that recurs in the reviews I receive are comments about code I have written in the same style or pattern as already-existing code. If I addressed such a comment only in my patch, the overall project style would diverge and become messy. I don't see that as a good thing: the project style should stay uniform even if it is 'wrong' or 'bad' (especially as in some cases this is subjective). 'Consistent' is probably one of the most used words in my own review comments. If it is decided that such changes are necessary, they get done in a follow-up patch and applied to all parts of the project to preserve consistency. This also applies when I'm reviewing other patches - I try not to let them diverge from the consistency the project has. The way I see it, in addition to making the project easier to maintain and the code easier to read, consistency makes contributing easier for newcomers, as the code base serves as a self-referencing sample of how to implement any new feature. If there were different approaches to the same problem, how would you know which one to pick, and why?

Outreachy: Redfish Message registry and other

Posted on Fri 20 July 2018 • Tagged with outreachy, openstack

This time I will not act surprised that 2 more weeks have passed, because I paid attention to time passing by.

In my previous blog post I mentioned that my last patch was failing CI. It turned out that the mock method assert_called_once does not exist in Python 3.5 (it has assert_called_once_with, but I can't use that in this case). Locally I run Python 3.6, where this method exists, and there were no issues in Python 2.7 either. I replaced it with an assertion on call_count for now, but this patch still has to pass code review.
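The replacement looks like this, sketched with a stand-in mock (the real test asserts on a sushy connector call, which I am not reproducing here):

```python
from unittest import mock

connector = mock.Mock()
connector.get('/redfish/v1/Systems')  # the call under test (hypothetical)

# Not portable: Mock.assert_called_once() only exists from Python 3.6 on;
# on 3.5 the attribute lookup fails instead of asserting.
# connector.get.assert_called_once()

# Portable replacement that behaves the same across the tox environments:
assert connector.get.call_count == 1
```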

With that patch all green in code review, I returned to @Redfish.Settings, which had parts left out previously because too many things still required clarification. As it stands now, sushy users can update BIOS attributes, but sushy does not yet expose the status of that update. To get the ball rolling I started writing some code and encountered another dependency - Message Registries. A Redfish response contains message IDs, e.g., Base.1.2.Success or Base.1.2.PropertyValueTypeError, which correspond to a section like this in the registry file Base.1.2.0.json[1]:

"PropertyValueTypeError": {
    "Description": "Indicates that a property was given the wrong value type, such as when a number is supplied for a property that requires a string.",
    "Message": "The value %1 for the property %2 is of a different type than the property can accept.",
    "Severity": "Warning",
    "NumberOfArgs": 2,
    "ParamTypes": [
        "string",
        "string"
    ],
    "Resolution": "Correct the value for the property in the request body and resubmit the request if the operation failed."
}
To determine whether an update succeeded, sushy needs to consult the registry and give the user a friendly message. As the sample above shows, a message is a template with placeholders for parameters; sushy has to build the error message by filling in the parameters from @Redfish.Settings for the specific case. This approach also supports translating and localizing the messages. But for all of this to work, I need the registries - and none of the provided mockup files include samples of them. According to the schema they can be provided via the ServiceRoot Registries property. I remember reading somewhere that they are optional, but then how should sushy handle the case where the Redfish service does not provide them? There were 2 options: download the files programmatically from [2] as necessary, or include them in the sushy package as a fallback. Downloading would not be reliable, because sushy might not have access to the external Internet, or the site could simply be down. Bundling the files is the direction to go, but then my mentor queried the license of these files. The standard registry files provided by DMTF carry only a copyright statement, with no license - which would make them proprietary, and they could not be included in OpenStack projects, which require an OSI-approved license. No one was sure, and I'm not a lawyer either, so it was time to ask the OpenStack legal mailing list[3]. Before that I talked with my mentors about other options in case the files couldn't be included - e.g., parsing the files manually or with a script, generating a Python dict, and storing that derived dictionary instead of the original file. I included this approach as a possible option in my question to the legal mailing list. An answer came back quickly: NO, the files cannot be included without a license, and the same goes for derived code.

As of this writing, this is still ongoing, and DMTF might apply a 3-clause BSD license, which would be OK for an OpenStack project[4].
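The substitution itself is mechanical; a sketch of the idea (a hypothetical helper, not the code that went into sushy):

```python
import re


def build_message(registry_entry, message_args):
    """Expand a Redfish message template such as
    'The value %1 for the property %2 ...' with its arguments.
    Illustrative sketch, not sushy's implementation."""
    def repl(match):
        # %1 refers to the first element of the message arguments, etc.
        return str(message_args[int(match.group(1)) - 1])

    return re.sub(r'%(\d+)', repl, registry_entry['Message'])


entry = {
    "Message": "The value %1 for the property %2 is of a different type "
               "than the property can accept.",
    "Severity": "Warning",
    "NumberOfArgs": 2,
}
message = build_message(entry, ["enabled", "ProcTurboMode"])
# -> 'The value enabled for the property ProcTurboMode is of a
#    different type than the property can accept.'
```

Because only the arguments vary, the same mechanism supports translated registries: swap the registry file and the arguments slot into the localized template.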

On other tasks, I did some cleanup patches that emerged from previous code reviews - what usually happens in reviews is that reviewers notice other things that need improvement but are not related to the patch under review, or the changes are not important enough to block the patch and can be done as a follow-up. One of those patches was to clean up the sushy-tools documentation to consistently use the same term. Somehow the docs had started using 'simulator' to describe sushy-emulator and sushy-static. It might have been me, because I thought of 'simulator' as the more general term. I went through some discussions[5][6] to understand which is the right term to use. Turns out it is 'emulator' - which also means that the title of my previous blog post is incorrect.

Another thing: I took over a patch that emulates Ethernet Interfaces in sushy-emulator. It was a rather old patch from January this year, and since it was created, sushy-tools had introduced support for the openstacksdk driver and with it changed some structure in the Flask app too. I rebased, updated the patch to the new structure, and added support for the openstacksdk driver. That led me to setting up an OpenStack cloud locally - a bit funny, but I had never needed access to an OpenStack cloud before. This time I needed a sample to see how openstacksdk returns data for network interfaces, which was not entirely clear from the docs. I used devstack[7] on a VM and it worked without any problems. This patch too is in code review.

Outreachy: Simulating BIOS for sushy-tools

Posted on Sun 08 July 2018 • Tagged with outreachy, openstack

I can't believe that another 2 weeks have passed since the last blog post. Shortly after my previous post about adding support for the BIOS resource in the sushy library, the patch got merged \o/. The next thing was to add BIOS support to sushy-tools, so that developers can use it during development as something to test against, since bare metal machines with Redfish are not readily available. sushy-tools can also be used in CI when running automated tests. Another developer is working on adding BIOS-Redfish support in Ironic.

To reiterate on sushy-tools: it consists of two simulators, sushy-static and sushy-emulator. sushy-static is straightforward - it serves JSON files from a provided mockup folder, where the mockup files can be downloaded from the DMTF Redfish page; whatever is in those JSON files is returned to the client. sushy-emulator is more dynamic and uses actual systems to emulate what is necessary for the Redfish API. It now supports 2 drivers - libvirt[1] and openstacksdk[2] (interchangeably also called the nova driver).

And to reiterate on the Redfish BIOS resource - it provides the BIOS attributes and supports updating those attributes, resetting them to defaults, and changing the BIOS password.

I started to look at implementing BIOS support and writing up a story for StoryBoard, but stopped when I decided to check what libvirt and openstacksdk offer in relation to BIOS, because at first I had assumed there would be API functions in the libvirt and openstack libraries, the same as there were for the parts already implemented. tl;dr: there are none.

Browsing through libvirt and openstacksdk, I couldn't see anything that stood out as a BIOS API. OK, so maybe there is nothing for BIOS - or maybe I don't know what I'm looking for. (I don't have much experience with BIOS: as a computer user I have accessed it to change the boot order and enable hardware virtualization; if there was anything else, I don't remember it.) What are these "BIOS attributes"? Looking at the samples provided by DMTF, there are attributes like 'AdminPhone', 'BootMode', 'PowerProfile', 'UsbControl'. I took a look at my own BIOS and could not map these; there is probably something similar, just named slightly differently (not expecting to see AdminPhone, though). Still, I didn't want to let this go, and I wanted to find out how this should work on real servers that support Redfish.

This presentation[3] came in handy and introduced me to UEFI HII (Human Interface Infrastructure), which would be the source for the UEFI variables mapped to Redfish BIOS attributes. I read more about it at [4] and then went back to libvirt. For openstacksdk it was already clear that there is nothing for BIOS.

A side note: I'm not looking at legacy BIOS here, but UEFI. BIOS is being replaced by UEFI and one day will be gone; however, its name will live on in Redfish, Ironic and elsewhere, because, at least for Ironic, this name was chosen as no better one could be identified[5].

Back to libvirt: I found that for virtual machines' UEFI the term OVMF (or AAVMF) is used[6]. I started searching for OVMF, found a doc about it, and went through it[7]; it mentions an NVRAM config file that my mentor had also brought up, so I needed to look at what is in there. I created a VM with UEFI/OVMF[8] and got the sample file. Its extension is '.fd' and it's a binary file. I tried to find a utility to read it, just to see whether something useful is inside. I couldn't find one - and even if such a utility exists, it is not part of the libvirt API, so sushy-emulator would have to bypass libvirt and work with the file directly, which is not ideal.

OK, here I gave up and looked at the backup plan: have a set of strings mimicking BIOS attributes, with no effect on the VM, stored in the libvirt domain XML so that they persist across VM and sushy-tools reboots. Where would that be? I could not find a place for it, but there is a section named SMBIOS[9] in libvirt.

System Management BIOS (SMBIOS) specification defines data structures (and access methods) that can be used to read management information produced by the BIOS of a computer

Sounds close enough? SMBIOS has a section, oemStrings, where custom information can be stored. But it was only added in the latest libvirt version, released this spring; even I don't have it on my slightly outdated Fedora (I will upgrade when Outreachy is over :)). And it is still not the ideal place for custom data. I talked with my mentors, and we decided to start by storing this in the emulator's memory, not persisted across reboots. I promised to take another look at whether there really, really is nothing better in libvirt. Next time I looked, I started reading the domain XML spec from the very beginning (rather than from the table of contents, as I did before), and there it was - under the general data section there is a metadata element that allows storing whatever is necessary under your own XML namespace. That is the perfect place for what we need.
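The idea looks roughly like this: the domain XML carries the emulated BIOS attributes in its metadata element under a custom namespace. The namespace URI and element names below are illustrative, not necessarily what sushy-emulator ended up using; a sketch of reading them back with ElementTree:

```python
import xml.etree.ElementTree as ET

# Hypothetical namespace; sushy-emulator's actual choice may differ.
NS = 'http://openstack.org/xmlns/libvirt/sushy'

DOMAIN_XML = """
<domain type='qemu'>
  <name>test-vm</name>
  <metadata>
    <sushy:bios xmlns:sushy='http://openstack.org/xmlns/libvirt/sushy'>
      <sushy:attribute name='BootMode' value='Uefi'/>
      <sushy:attribute name='ProcTurboMode' value='Enabled'/>
    </sushy:bios>
  </metadata>
</domain>
"""


def read_bios_attributes(domain_xml):
    """Pull the emulated BIOS attributes back out of the domain XML."""
    root = ET.fromstring(domain_xml)
    attrs = {}
    for el in root.findall('metadata/{%s}bios/{%s}attribute' % (NS, NS)):
        attrs[el.get('name')] = el.get('value')
    return attrs
```

Because libvirt preserves foreign-namespace metadata across domain redefinitions, the fake attributes survive both VM and emulator restarts.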

There were some other obstacles before I could get to implementing this: due to a Flask upgrade, tests started to fail in sushy-tools, and I tried to fix that (there would be another paragraph about this, but I'm leaving it out so the blog post does not get too long). Once I got back to the implementation, I did it in less than a day, but there were, and still are, other patches for sushy-tools that introduce new tests and restructure existing ones, with which I need to align. After the direction was clarified, I updated my tests. As of writing this, the patch is submitted and fails CI jobs. I fixed the first CI failure - it was a concurrency issue that was not visible locally because I had libvirtd running. After I stopped the service I was able to reproduce the test failures, but only when running the full test suite; when I tried to isolate the issue, the tests passed. I found the missing piece and fixed my test setup (I had to nullify a global driver) and it passed again - but then CI failed again with different errors. Perhaps more concurrency issues; I suspect a global variable (everything is fine locally).

Overall, it does not feel like 2 weeks have already passed, but when I start writing about them, a lot of interrupting things come up, which I will leave out for brevity. One thing that stands out for me is that I missed some things I feel I should have noticed - better attention next time. The good thing: while researching the UEFI BIOS topic, I remembered and used the Pomodoro technique[10] with a Gnome app[11] to keep myself focused. It worked flawlessly.

Outreachy: Adding support for Redfish BIOS resource in sushy

Posted on Sun 24 June 2018 • Tagged with outreachy, openstack

Time flies, and this is my 6th week in the Outreachy project. My first 'big' task is to implement support for the BIOS resource in the sushy library.

The BIOS resource is one of many Redfish resources - others, like Ethernet Interface and Processor, are already implemented in sushy. To describe the BIOS resource shortly: it returns a list of BIOS attributes and allows updating them, and it also has 2 "actions": resetting the attributes to default values and changing the BIOS password.

My starting point was the API schema PDF[1] referenced in the ticket and the merged Ethernet Interface patch[2], which gave me an idea of which parts of the project I needed to touch: implement classes for the BIOS resource, add a release note, add a BIOS property to the parent resource called System, and add unit tests for the new BIOS resource and for the new parts of System.

Sounds straightforward, could take a day or two? Of course it did not.

One of the first things I hit was that the action "Change Password" had no parameters specified. How do you pass the new password, then? I looked around and discovered that the PDF referenced in the ticket is not the latest version. I found the latest version[3], released 14 May 2018, after the ticket was created, and this spec has parameters for "Change Password": NewPassword, OldPassword and PasswordName. That makes sense now - but what about those still on the previous version? At some point I asked this in the freenode IRC #openstack-ironic channel, and people there knew this had been an issue with generating the JSON schemas from CSDL. It turns out the CSDL files - schemas in XML - are the source of truth, not JSON. Good to know, but I'm still sticking to JSON, consulting CSDL when in doubt. I also started looking more at the JSON schemas themselves rather than the generated PDF; for example, BIOS has its schema at [4]. There is also a schema index[5], though it is easy to guess the URL if you know the resource name. With this resolved, I was preparing to submit the patch for code review, and while pre-reviewing it myself I took another look at the sample JSON file for the BIOS resource to check that everything was covered. Here is the sample:

{
  "@odata.type": "#Bios.v1_0_3.Bios",
  "Id": "BIOS",
  "Name": "BIOS Configuration Current Settings",
  "AttributeRegistry": "BiosAttributeRegistryP89.v1_0_0",
  "Attributes": {
    "AdminPhone": "",
    "BootMode": "Uefi",
    "EmbeddedSata": "Raid",
    "NicBoot1": "NetworkBoot",
    "NicBoot2": "Disabled",
    "PowerProfile": "MaxPerf",
    "ProcCoreDisable": 0,
    "ProcHyperthreading": "Enabled",
    "ProcTurboMode": "Enabled",
    "UsbControl": "UsbEnabled"
  },
  "@Redfish.Settings": {
    "@odata.type": "#Settings.v1_0_0.Settings",
    "ETag": "9234ac83b9700123cc32",
    "Messages": [
      {
        "MessageId": "Base.1.0.SettingsFailed",
        "RelatedProperties": [
          "#/Attributes/ProcTurboMode"
        ]
      }
    ],
    "SettingsObject": {
      "@odata.id": "/redfish/v1/Systems/437XR1138R2/BIOS/Settings"
    },
    "Time": "2016-03-07T14:44.30-05:00"
  },
  "Actions": {
    "#Bios.ResetBios": {
      "target": "/redfish/v1/Systems/437XR1138R2/BIOS/Actions/Bios.ResetBios"
    },
    "#Bios.ChangePassword": {
      "target": "/redfish/v1/Systems/437XR1138R2/BIOS/Actions/Bios.ChangePassword"
    }
  },
  "@odata.context": "/redfish/v1/$metadata#Bios.Bios",
  "@odata.id": "/redfish/v1/Systems/437XR1138R2/BIOS"
}

Id, Name, AttributeRegistry and Attributes covered, Actions covered - but what is this @Redfish.Settings? It's not in the schema. I started searching and found it described in the Redfish Scalable Platforms Management API Specification[6], which also has an HTML version[7] that is handier for me. The field is described in section 7.2.2[8]. It turns out this field is used where a resource cannot be updated directly, as is the case for BIOS attributes - updates require a system reboot. OK, so it looks like a reusable field. I talked with my mentors and we decided this should be implemented as a separate patch, to keep the patches smaller, and to make the BIOS patch depend on it - a chained patch. Speaking of this specification[6][7]: it is a better starting point for all things Redfish than the schema PDF, which is just a human-readable presentation of the JSON/CSDL schemas - though if you read the specification first, it might be hard to relate it to anything concrete. With only a few days in, I was still not comfortable with the sushy project and kept confusing myself about how to implement this reusable field. In hindsight there is nothing to be confused about, but until I got there I peeked at the base modules of the sushy library to see what happens there; otherwise I was just copying whatever was done for Ethernet Interface without really understanding what happens under the hood. One thing to mention, which I did not see at the beginning: sushy is not a serialization library that maps JSON to Python classes and back - it does more. sushy users shouldn't have to know Redfish to use sushy with bare metal machines. It adds processing where necessary to make its use handy, and hides all the web service details. At least that's how I see it now.

With the Settings field done and submitted for code review, I returned to the original BIOS patch to make it dependent on the Settings patch and to use the Settings field. Then I noticed something that perhaps should have been implemented differently in the Settings patch. The Settings object holds a URI (see SettingsObject/@odata.id in the sample above) where clients should PATCH and where committed attributes are visible. Committed attributes are those the user has updated but that have not been applied yet, as they are waiting for the system reboot. Having realized that sushy is not just a dumb serialization library, I had the idea that sushy should load this resource automatically when the user accesses the property in Python, and I made the update to do so. As the Settings field is re-usable, I had to support creating a new instance of its parent type. The code introduced structures and approaches not seen anywhere else in the project, so I was not sure this was the way to go, but the only way to find out was to submit it for code review. So the next patch update for the Settings field had a re-usable field that could load a settings resource automatically and dynamically based on the parent resource type.
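The lazy-loading idea above can be sketched with a Python descriptor. This is my own minimal illustration, not sushy's actual implementation: on attribute access, the field follows the URI held in @Redfish.Settings and loads the pending-settings resource as a new instance of the parent resource type. The FakeConnector class is a hypothetical stand-in for whatever performs the HTTP GETs:

```python
class SettingsField:
    """Re-usable field that lazily loads the settings resource."""
    def __get__(self, instance, owner):
        if instance is None:
            return self
        uri = instance.json["@Redfish.Settings"]["SettingsObject"]["@odata.id"]
        # Dynamically create a new instance of the parent resource type
        # pointed at the settings URI.
        return type(instance)(instance.connector, uri)


class BIOSResource:
    # Accessing bios.pending_settings triggers the load.
    pending_settings = SettingsField()

    def __init__(self, connector, uri):
        self.connector = connector
        self.uri = uri
        self.json = connector.get(uri)


class FakeConnector:
    """Stand-in for an HTTP client; serves payloads from a dict."""
    def __init__(self, store):
        self.store = store

    def get(self, uri):
        return self.store[uri]


store = {
    "/bios": {
        "Attributes": {"ProcTurboMode": "Enabled"},
        "@Redfish.Settings": {
            "SettingsObject": {"@odata.id": "/bios/settings"}
        },
    },
    "/bios/settings": {"Attributes": {"ProcTurboMode": "Disabled"}},
}

bios = BIOSResource(FakeConnector(store), "/bios")
pending = bios.pending_settings  # loads /bios/settings on access
```

Because the descriptor calls `type(instance)`, the same field works unchanged on any resource class that carries the annotation, which is what makes it re-usable.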

With both patches updated and submitted, I started to wait for reviewer feedback. I had been warned that code reviews can take a while, so in the meantime I worked on some smaller tickets, and later started to look at the next bigger ticket.

Looong story short, there were many suggestions on how to improve this (still keeping in mind that this is not just a serialization library) and how to change things. At one point there was an inherited dict with a __setitem__ override that, instead of updating the dict values, did the PATCHing, so that a sushy user could write attributes['ProcTurboMode'] = 'Disabled' seamlessly without knowing what magic happens behind the scenes. This again introduced a lot of structures not seen anywhere else in the project, and in the end it was deemed too confusing for users rather than helpful - for example, with the code above, why does the attribute value in the dictionary not change in the end?
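To show why reviewers found the idea confusing, here is a sketch of that abandoned approach (my own reconstruction, not the actual patch): a dict subclass whose __setitem__ sends a PATCH instead of mutating the local copy. The RecordingConnector is a hypothetical test double:

```python
class PatchingAttributes(dict):
    """Writes go to the service, not to the local copy."""
    def __init__(self, data, connector, settings_uri):
        super().__init__(data)
        self._connector = connector
        self._settings_uri = settings_uri

    def __setitem__(self, key, value):
        # Deliberately no super().__setitem__(): the new value only
        # becomes visible after the service applies it (post-reboot),
        # which is exactly the surprising part discussed above.
        self._connector.patch(self._settings_uri,
                              {"Attributes": {key: value}})


class RecordingConnector:
    """Hypothetical stand-in that records issued PATCH requests."""
    def __init__(self):
        self.patches = []

    def patch(self, uri, payload):
        self.patches.append((uri, payload))


conn = RecordingConnector()
attributes = PatchingAttributes({"ProcTurboMode": "Enabled"},
                                conn, "/bios/settings")
attributes["ProcTurboMode"] = "Disabled"  # issues a PATCH...
# ...but reading the key back still shows the old, pre-reboot value.
```

The read-after-write asymmetry is the whole problem: the assignment looks like a plain dict update but behaves like a deferred remote call.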

Somewhere in between, one question remained unanswered regarding @Redfish.Settings - why isn't this field in the BIOS schema? What other resources could have it? Grepping through the provided mockup samples, I found one Ethernet Interface[9] that has this field, while other Ethernet Interface samples do not (e.g., [10]). In one of the older versions of the mockup sample Readme[11] there was a short intro to Redfish which mentioned that it can also apply to Storage resources, but I haven't found any mockup samples for that so far. This raised another question - what does it mean when the @Redfish.Settings field is not present? Does it mean the settings are read-only, or that the user should PATCH the resource itself? The spec was not entirely clear, and since Redfish implementations live on servers I don't have access to, I could not check how it works in real life. I talked with my mentors about whether this should be asked in the Redfish user forum or if there were other ways to find out; an e-mail thread was started, where people with more Redfish knowledge promptly clarified it: @Redfish.Settings is a payload annotation that can appear in any resource, but in practice it appears where immediate updates are not possible and a restart of the system or a service is necessary. When the field is not present, it means the user can PATCH the resource itself (not that it is read-only). BIOS will always have this field, because there is no known BIOS that can apply changes without a reboot, and none is expected in the near future. Ethernet Interfaces and Storage may or may not have it, and for now it is not expected to appear in any other resources.

Another thing that came up this week is how to determine the outcome of an attribute update and inform the sushy user. The @Redfish.Settings field carries a list of messages, the time the changes were applied, and the ETag of the version the changes were applied to. How do you know when to check for the messages? Could notifications be received when updates are done? As there were more questions than answers, it was decided not to expose this to the sushy user in this patch and to find a good solution in a follow-up patch.
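To illustrate what such an outcome check might look like, here is a small heuristic of my own (not sushy's code, and the success-detection rule is an assumption): inspect the Messages list in the annotation and treat the update as successful only when every message id looks like a success. A real client would resolve the ids through message registries rather than string-matching:

```python
def update_succeeded(settings_annotation):
    """Naive outcome check for a @Redfish.Settings annotation.

    Returns True only when there is at least one message and every
    MessageId contains 'Success'. This is a simplification: proper
    interpretation requires the message registries mentioned earlier.
    """
    messages = settings_annotation.get("Messages", [])
    return bool(messages) and all(
        "Success" in m.get("MessageId", "") for m in messages)
```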

In the end, with all the magic removed and the two patches even merged back together (the back-and-forth made them hard to review separately), the current patch versions are very close to the first versions I proposed. They are still in review, but hopefully will be done soon. I feel like I walked around the world to return to the same point, but along the way I saw some places and had some fun with Python.

Outreachy: About the project

Posted on Tue 05 June 2018 • Tagged with outreachy, openstack

When it came to choosing a project to apply for in Outreachy, I was looking for 3 things: mentors in about the same timezone as me, a project in a tech stack I'm familiar with and want to expand my skills in, and, last but not least, a project domain that is interesting and completely new to me.

With the last point I set myself up for a lot of reading (and some videos).

At first I felt like that dog in the 'I have no idea what I'm doing' meme. To wrap my head around it all, I started to draw a picture with some boxes. Here is the result:

diagram: Context of sushy

The projects I'm working on are sushy and sushy-tools.

sushy is related to, but not exactly part of, the OpenStack Ironic project, which deals with bare metal provisioning. Usually when talking about clouds we talk about virtual machines (VMs), but there are cases where VMs do not provide the necessary performance and a non-virtualized environment is needed. This is where Ironic comes in: it manages bare metal servers in a cloud environment - remotely. Ironic can be used independently or together with other OpenStack projects it integrates with. sushy is written so that it does not depend on Ironic and can be used by other projects, and Ironic could decide to use something else instead of sushy. But what does sushy do? Time to introduce Redfish.

Redfish is a standard API for working with bare metal servers. It lives in the BMC (Baseboard Management Controller), a microcontroller (a small computer) attached to the motherboard of industrial servers. The BMC allows servers to be managed remotely, and Redfish is one of the protocols for doing it. The Redfish standard is managed by the DMTF (Distributed Management Task Force).

sushy is a Python client library for Redfish RESTful web services, which communicate in JSON. Ironic imports sushy and uses it in one of its drivers. sushy is not the only Python library that consumes the Redfish API; there are alternatives named very similarly: python-redfish[5] and python-redfish-library[6].

Besides sushy there is also the sushy-tools project, which contains emulators for testing sushy. Without them, testing is challenging for developers, as a real server with a BMC and Redfish would be necessary. There are 2 emulators. The first, sushy-static, serves static JSON mockup files provided by the Redfish project; the mockups can be found in the White Papers and Technical Notes section[4] by looking for DSP2043. There is also a Redfish Mockup Creator[7] that generates mockup files from a real Redfish service. That is of little use to me, as I don't have access to a real Redfish service, but it's nice to know about just in case.

The static mockup emulator is OK for read-only testing, but it does not help much when you want to test actions where changes are necessary. For that there is the second emulator, sushy-emulator, which uses the libvirt driver to connect to a virtual machine mimicking a real server.

DMTF also provides similar emulators, both for static mockup files[8] and dynamic[9]. I haven't tried these yet, but might try them out later. With all the alternatives available, it appears each project takes a different approach, so they are not copies of each other, and in the end there is choice.

Lastly, there are some acronyms I've seen floating around in relation to Ironic that are not directly related to sushy, but I had to find out what they are and how they fit in.

PXE (Preboot eXecution Environment) is a way to boot servers from the network. Computers supporting PXE have a NIC (network interface controller) that is up and listening to commands from the network even when the server itself is turned off.

IPMI (Intelligent Platform Management Interface) is a way to manage and monitor servers remotely.

PXE and IPMI have been used together to deploy servers, but they are expected to be replaced by newer technologies addressing some of their drawbacks - HTTP Boot and Redfish[10].

libvirt, already mentioned above, is an API for managing virtualization that supports a wide range of hypervisors, including VirtualBox, VMware, and Hyper-V.

As always, in hindsight this all speaks for itself, but while writing it I discovered new places to explore - though I can avoid them for now, as this is just enough for sushy. It will be interesting to revisit this at the end of the project and see how my point of view has changed.

Next time I will write about the first tasks I'm working on, which should allow me to tell more about sushy and Redfish.

Outreachy: How I got started with OpenStack development

Posted on Sun 20 May 2018 • Tagged with outreachy, openstack

The first week has passed, and in this post I will cover what I did to get my first contribution in during the application phase, plus some follow-up reading afterwards. Here I will focus on the general tools used in the OpenStack development workflow. In the next post I will write about the project I'm working on.

It was easy to start developing for OpenStack projects, and almost everything I needed to know is well documented in one long page[1]. The page contains links to follow for more detailed information. One such page that I found useful is about commit messages[2]; it covers not only the messages themselves but also splitting work into smaller, reasonably isolated patches rather than creating one large patch that is harder for everyone involved to review. Commit messages should also be self-contained and hold enough information to understand what's being done and why, without consulting other systems like the issue/bug tracker. It also says commit message bodies should be wrapped at 72 characters; the main page[1] only mentions that commit message headers should stay within 50 characters.
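The 50/72-character limits above are easy to check mechanically. Here is a small hypothetical helper (my own illustration, not an OpenStack tool) that flags the violations mentioned:

```python
def check_commit_message(message):
    """Return a list of problems found against the 50/72 convention.

    Checks: header at most 50 characters, a blank line between header
    and body, and body lines wrapped at 72 characters.
    """
    problems = []
    lines = message.splitlines()
    if not lines:
        return ["empty message"]
    if len(lines[0]) > 50:
        problems.append("header longer than 50 characters")
    if len(lines) > 1 and lines[1].strip():
        problems.append("missing blank line after header")
    for n, line in enumerate(lines[2:], start=3):
        if len(line) > 72:
            problems.append("line %d longer than 72 characters" % n)
    return problems
```

For example, `check_commit_message("Add BIOS resource support\n\nShort body.")` returns an empty list, while an over-long header or body line produces a corresponding problem entry.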

Then there are the OpenStack-specific code style guidelines for Python[3], which also reference the general Python style guide, PEP 8[4]. Here I'm relying on tox to catch any violations.

To write some code, I cloned the necessary projects, branched, and made some changes. A few tools were missing from my development environment, which I installed as I went, and I did not encounter any setup-related issues when running the projects.

Once the changes are done, the next step is to submit them for code review.

Gerrit is used for code review. It used to be written in Python but was rewritten in Java; Wikipedia says[5] this was done to make it easier to run on different Linux systems, but it gives no source, and a quick search online did not turn up any historical discussion - not that it matters here, I was just curious. More about Gerrit is written at [6]. Gerrit distinguishes between core reviewers and regular reviewers, something I have not seen in other tools, where teams enforce the distinction manually. In those cases it is not that people try to bypass the core reviewer requirement (even though the system does not enforce it), but it makes it hard to identify open reviews that are missing core reviewers, so it is nice that Gerrit has this built in.

There is a handy tool, git-review[7], that makes working with Gerrit easier - to submit a code review, just run git review instead of following the instructions in Gerrit's quick intro[6].

Once a code review is submitted, Zuul[8] gets involved. You don't have to interact with it much, but Zuul is the one that tells you if tests fail or PEP 8 guidelines are violated. This should have been checked locally before submitting the review, but there are always more integration tests to run, and other issues might not show up in a local environment. As many patches are submitted within the same time frame, Zuul queues them and tests queued patches together, according to their place in the queue, to catch conflicts between patches before they get merged to master. Then, when the code review is done - usually requiring approval from 2 core reviewers - Zuul merges the changes to master.
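The queueing idea can be sketched in a few lines. This is a toy simplification of my own, not Zuul's actual gating algorithm (which also restarts testing behind evicted patches): each patch is tested on top of master plus every patch ahead of it in the queue, so conflicts between not-yet-merged patches surface before merging.

```python
def gate(base, queue, run_tests):
    """Test each queued patch on top of everything ahead of it.

    base:      list of changes already on master
    queue:     patches in queue order
    run_tests: callable taking the combined list of changes,
               returning True when tests pass
    Returns the merged state and per-patch (patch, passed) results.
    """
    merged = list(base)
    results = []
    for patch in queue:
        candidate = merged + [patch]
        passed = run_tests(candidate)
        results.append((patch, passed))
        if passed:
            # Only passing patches become part of the state that
            # later patches in the queue are tested against.
            merged = candidate
    return merged, results


# Toy test predicate: any state containing "bad" fails.
merged, results = gate(["m1"], ["a", "bad", "b"],
                       lambda state: "bad" not in state)
```

Here "bad" fails when tested on top of "a", so it is not merged, while "b" is then tested without it and lands cleanly.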

That's all for now. If I encounter something new, or realize I misunderstood something about the workflow and tools, I will share it in further posts.

Hello Outreachy

Posted on Sun 29 April 2018 • Tagged with outreachy, openstack

Hello World, this is my first blog post on my journey as an Outreachy intern on the OpenStack "Extend Redfish hardware provisioning tooling for Python" project.