Monday, May 11, 2015
Wednesday, September 25, 2013
One interesting problem I ran into recently was that the hardware clock on a system being re-provisioned was off. Waaaay off. The hardware had a time setting of 2001. As the provisioning neared completion, the system was supposed to register with an RHN Satellite Server that uses SSL certificates for its communication. The certificate is valid from 2012 to 2014, so the registration failed, and the customer ended up wondering why the provisioning had failed.
The standard solution to this is to make sure the hardware has the right time set before we try provisioning. That requires getting console access, navigating through the UEFI/BIOS menus, setting the time, and so on.
A more elegant solution is to use a kickstart templating engine, e.g., Cobbler or Razor, and write a snippet that sets the hardware clock before the installation begins. That way the clock is correct by the time the registration happens. The server or engine that provides the kickstart can insert a timestamp into the kickstart; then, in a %pre script, you can validate the hardware clock against that timestamp (the timestamp in the kickstart should be OLDER than the hardware clock) and either set the hardware clock or halt the installation process.
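A minimal sketch of what such a %pre check could look like. The KS_TIMESTAMP value is a placeholder that the templating engine (Cobbler, Razor, etc.) would fill in at kickstart-generation time; on a real install you would read the RTC with hwclock rather than date.

```shell
#!/bin/sh
# Hypothetical %pre snippet: compare the clock to the moment the
# kickstart was generated. The literal value below is illustrative;
# the templating engine would insert the real generation timestamp.
KS_TIMESTAMP="2013-09-25 00:00:00"

ks_epoch=$(date -d "$KS_TIMESTAMP" +%s)   # kickstart generation time
hw_epoch=$(date +%s)                      # system time; a real %pre could use: hwclock --get

# The kickstart timestamp should be OLDER than the hardware clock.
if [ "$hw_epoch" -lt "$ks_epoch" ]; then
    echo "hardware clock predates kickstart timestamp - clock is wrong" >&2
    # either fix it:   hwclock --set --date "$KS_TIMESTAMP"
    # or bail out:     exit 1
fi
```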
Technically, anything that uses certificates or crypto keys that use the current time will benefit from this kind of solution.
Friday, August 16, 2013
UEFI on ARM is very controversial at the moment, since Microsoft has decreed that Secure Boot on ARM will not allow custom or third-party keys. Way to play nice with the rest of the playground...
Thursday, August 15, 2013
In researching how UEFI works, I found it useful to first learn how GPT works. GPT is the GUID Partition Table, which is designed as a replacement for MBR partitioning. All of this stems from the size and functional limitations of BIOS and MBR, which were designed in an era when 16 bits was a lot and a few tens of MiB was outrageously expensive.
So, GPT uses larger offsets and different on-disk locations for where it writes information about the partitions. This enables support for massive disks which we'll see on newegg and amazon shelves in, say, oh, the next few years. :-D
In addition to this, GPT has some backwards compatibility with MBR partition tables that has odd side effects if you don't know about GPT. Basically, the old MBR partition table is populated with a single large partition of an unfamiliar type (the "protective MBR"), so MBR-only tools see one big unknown partition. This can easily lead the uninitiated to wipe out the MBR and install a new MBR partition table.
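To see where that "one large unknown partition" comes from: GPT writes a single MBR partition entry of type 0xEE spanning the disk, and that type byte lives at offset 0x1C2 (entry 1 starts at 0x1BE; the type byte is 4 bytes in). A small sketch that fakes just that byte in a scratch file and reads it back the way an MBR tool would:

```shell
# Build a fake 512-byte sector and set the partition-type byte the way
# a GPT protective MBR does, then read it back.
img=$(mktemp)
dd if=/dev/zero of="$img" bs=512 count=1 2>/dev/null
# 0xEE is octal 356; offset 450 = 0x1C2, the type byte of MBR entry 1
printf '\356' | dd of="$img" bs=1 seek=450 conv=notrunc 2>/dev/null
type_byte=$(od -An -tx1 -j450 -N1 "$img" | tr -d ' ')
echo "partition type: 0x$type_byte"   # 0xee = GPT protective partition
```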
Start here for info on GPT: Wikipedia Article on GPT
Once you think you understand GPT, play a little with gdisk. (Keep in mind I am a Linux consumer, I will rarely show or demo anything that is particular to another OS.)
After you think you understand GPT, time to move on to UEFI. Start here: Wikipedia Article on UEFI
I'll share more as I learn more with regard to UEFI.
Sunday, January 20, 2013
Today, more interesting conversations and sessions:
1. Automated testing and QA - conversations about AutoQA, RobotFramework, Autotest, Cucumber, and more. Watch that space for more to come.
2. Spot's grandiose ideas:
a. Fedora Badging via systemd and fedmsg. The idea is an incentive project to motivate users to test applications by awarding them badges for running applications; badges might be redeemable for real-world rewards, e.g., a Raspberry Pi.
b. Fedora App store - build an infrastructure that provides users with an app-store interface for software, as opposed to the current mechanism of PackageKit updates. Third-party apps might be made available via COPRs (see yesterday's post).
3. Formulas - For the moment, this seems to be Ansible playbooks that can be used to quickly and efficiently create a system with a "personality".
4. GPG smart card authentication capabilities - Herlo has a github repo with instructions on setting this up and the scripts necessary to accomplish it. It's an interesting alternative to the CAC authentication that's popular with government agencies now.
An interesting conversation that is ongoing, and will take some serious soul searching and long, late-night conversations, related to Spot's talks and ideas and revolved around the future of Fedora. Given the shift of the marketplace/user community/world toward mobile platforms, where does that leave desktop-focused Linux distributions? This runs us into the existential question: what is Fedora, really? Many of the conversations focused on Red Hat's relationship to the project - Fedora as an R&D platform for RHEL and other Red Hat products.
An interesting, and definitely contentious, suggestion was that the project focus on providing a smaller-footprint platform - a "core" somewhat like Fedora Core but somehow different - that has the basic functionality on which things like an app store, desktop, images, a mobile device interface, etc., can be installed. Those pieces would all still potentially be part of the Fedora Project, whereas today the premier product or offering from the project is the distribution. In essence, this changes the vision from an RPM-based yum-repo distribution to a core OS, usable in many spaces (think mobile devices - smart phones and tablets), with the ability to build app stores, formulas, servers, cloud images, etc., on top of the offering.
Oh yeah, something I almost forgot about from yesterday - a peer-to-peer installation mechanism. The idea here is that if we have to install thousands of systems at once, network bandwidth is going to be a huge bottleneck. If the installer can share out the packages being installed to other install processes, we can reduce the bandwidth and load on the authoritative media source.
The use case is a datacenter composed of multi-system pizza boxes. Hardware vendors are beginning to see increased interest in 1U and 2U chassis with multiple systems inside. A few folks from some hardware companies mentioned to me that they are building 1U chassis holding 48 quad-core systems, and they are working on figuring out how to install a plant with cabinets of these systems. In other words, thousands of systems in a single cabinet.
Saturday, January 19, 2013
Wow, so many interesting and fascinating ideas. Here's what I've learned about and will be investigating further over the next few months:
1. Pulp 2.0 - Extensible past RPM content, right now puppet modules can be pushed via pulp.
2. Ansible - Fireball mode uses ZeroMQ, allows large scalability and fast messaging.
3. COPRs - Collection Of Package Repos, method of building and distributing packages without needing a Koji infrastructure. Interesting for third party software.
4. OpenLMI - Interesting idea: standardization of management tools into a larger framework, unless I have completely misunderstood it.
Lots of things that are very relevant to the work I do and will be doing in the next few years. The lightning talks and presentations were very interesting. As always, it's been great to see all the Fedora friends in person!!
More coming tomorrow.
Saturday, January 28, 2012
Sigul uses a CA to generate SSL certs for the server, bridge, and clients to authenticate and encrypt communications. The server itself provides GPG keys for signing packages. In this case, I am going to set up a CA for sigul that is separate from the CA we use for koji. The reasoning here is that koji's CA is used often to create end-user certs for access into koji. That means it's exposed often to an admin, either directly via the CLI or indirectly via a webapp or other utility, whenever user certs are created.
The sigul CA should be kept fairly isolated, in my opinion, since it's only used to add new server, bridge, and client instances, and those additions should be fairly rare. Exposing the sigul CA often, as happens when new end-user certs are created, opens up opportunities to create certs that could give rogue sigul clients the ability to get unauthorized RPMs signed with our keys.
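Sigul's own tooling keeps its certs in NSS databases, but the shape of a dedicated CA is the same either way. Here's a generic openssl sketch of the idea (the file names, subjects, and lifetimes are made up for illustration):

```shell
# Work in a scratch directory
tmp=$(mktemp -d); cd "$tmp"

# 1. The dedicated sigul CA: self-signed, long-lived, kept offline/isolated
openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
    -keyout sigul-ca.key -out sigul-ca.crt -subj "/CN=Sigul CA"

# 2. A key and CSR for one component (the bridge shown here;
#    server and client certs are created the same way)
openssl req -newkey rsa:2048 -nodes \
    -keyout bridge.key -out bridge.csr -subj "/CN=sigul-bridge"

# 3. Sign the CSR with the sigul CA - the only operation this CA
#    ever performs, and only when a new instance is added
openssl x509 -req -in bridge.csr -CA sigul-ca.crt -CAkey sigul-ca.key \
    -CAcreateserial -days 365 -out bridge.crt
```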
The bridge setup is pretty much spot on from the Seneca Sigul Setup link above. One thing you may have to do is change the sigul user's default shell in order to create the db as the sigul user, using the defaults from the Fedora install of the packages:
usermod -s /bin/bash sigul

Next, the server setup. The first problem I need to resolve is that the server we're using for sigul is an EL5 system. The python-sqlalchemy module that ships with EL5 is 0.3.11. There is an updated version in EPEL that also has a slightly different name - python-sqlalchemy0.5-0.5.8. Not sure if this is what's causing this error, but I suspect so:
# sigul_server_create_db
Traceback (most recent call last):
  File "/usr/share/sigul/server_create_db.py", line 21, in ?
    import server_common
  File "/usr/share/sigul/server_common.py", line 107, in ?
    sa.Column('name', sa.Text, nullable=False,
AttributeError: 'module' object has no attribute 'Text'

I've installed the EPEL python-sqlalchemy, and just doing that did not solve the issue. I also cannot uninstall the python-sqlalchemy provided with the OS. I am pretty sure the issue here is that the OS version of sqlalchemy is missing functionality that the sigul tools need.
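A quick way to check which sqlalchemy the interpreter actually imports, and whether it has the Text type the sigul scripts use. On EL5 the interpreter would be plain python; python3 here is just so the snippet runs on a modern box.

```shell
# Feature-detect sqlalchemy rather than trusting the package name:
# sigul needs sa.Text, which the 0.3.x shipped with EL5 lacks.
result=$(python3 - <<'PY'
try:
    import sqlalchemy as sa
    print(sa.__version__, hasattr(sa, "Text"))
except ImportError:
    print("sqlalchemy not installed")
PY
)
echo "$result"
```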
To be continued...