Vladimir Melnik


CloudStack Management Cluster

My new CloudStack-driven cloud is managed by a cluster of 8 virtual machines running on 2 different physical hosts. I can shut down either of these hosts at any moment without affecting the cloud-management service.

Synchronizing records of occupied IP-addresses between two CloudStack databases

Sometimes it’s necessary to maintain 2 different CloudStack setups (let’s call them Cloud-A and Cloud-B) sharing the same IP-address ranges. For example, we might need that when we’re building a new CloudStack-driven environment and are going to move all the virtual infrastructure from the old setup to the new one. As we can’t just move everything in a couple of hours (let’s say we have about a thousand VMs), we have to let 2 different CloudStack-driven virtual datacenters use the same networks. We should understand that we might have to maintain 2 different CloudStack setups for a few days, weeks or even months, so we have to take care of the situation when someone deploys a new VM or assigns a new IP-address to a currently running VM. Both CloudStack setups should know that the IP-address has been allocated; otherwise we might end up in a situation when both CloudStacks assign the same IP-address to their VMs and we’ll be facing major trouble.

We can solve it with a simple script that looks up which IP-addresses are in the “Allocated” state in Cloud-A and makes them “Allocated” to a certain account in Cloud-B. Then the script does the same in the opposite direction, marking as “Allocated” in Cloud-A the addresses that are really assigned to some instances in Cloud-B. And, of course, it would be great if the script also marked as “Free” the IP-addresses that aren’t being used anymore.

As a very first approach, such a script might look like this:
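This is a simplified illustration rather than the production script: it assumes the stock “cloud” MySQL schema, where the user_ip_address table keeps each address’s state in the state column and its owner in account_id, and the DSNs, credentials and the reserved account ID below are placeholders.

#!/usr/bin/env perl

use strict;
use warnings;

use DBI;

# The account that "holds" the addresses allocated in the other cloud
my $reserved_account = 42;

my $dbh_a = DBI->connect('DBI:mysql:database=cloud;host=db-a', 'user', 'password-a', { RaiseError => 1 });
my $dbh_b = DBI->connect('DBI:mysql:database=cloud;host=db-b', 'user', 'password-b', { RaiseError => 1 });

sync($dbh_a, $dbh_b);   # A's real allocations become reservations in B...
sync($dbh_b, $dbh_a);   # ...and vice versa

sub sync {

    my ($src, $dst) = @_;

    # What addresses are really allocated in the source cloud
    # (our own reservations don't count)?
    my %src_allocated = map { $_->[0] => 1 } @{ $src->selectall_arrayref(
        q{SELECT public_ip_address FROM user_ip_address
           WHERE state = 'Allocated' AND account_id != ?},
        undef, $reserved_account
    ) };

    # Reserve them in the destination cloud if they're still free there
    my $reserve = $dst->prepare(
        q{UPDATE user_ip_address
             SET state = 'Allocated', account_id = ?, allocated = NOW()
           WHERE public_ip_address = ? AND state = 'Free'}
    );
    $reserve->execute($reserved_account, $_) foreach (keys(%src_allocated));

    # Release our own reservations that aren't backed by the source cloud anymore
    my $release = $dst->prepare(
        q{UPDATE user_ip_address
             SET state = 'Free', account_id = NULL, allocated = NULL
           WHERE public_ip_address = ?}
    );
    foreach my $reservation (@{ $dst->selectall_arrayref(
        q{SELECT public_ip_address FROM user_ip_address
           WHERE state = 'Allocated' AND account_id = ?},
        undef, $reserved_account
    ) }) {
        $release->execute($reservation->[0]) unless ($src_allocated{$reservation->[0]});
    }

}

The trick with the dedicated “reservation” account makes the script’s own markings distinguishable from real allocations, so it can safely release them once the other side frees the address.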

I’m going to add some functionality (for example, notifications about IP-addresses that already have the same allocated/free state in both clouds), so you’re welcome to get the latest revision on GitHub.

MonkeyMan’s vocabularies

Each element of the ACS-driven infrastructure is represented as a Moose-sugared Perl object: it has attributes and executes methods. All kinds of elements consume (inherit) the same Moose role (MonkeyMan::CloudStack::API::Roles::Element), which is what makes a class an element.

To get to know what it shall do to perform some work, an element looks up its vocabulary (MonkeyMan::CloudStack::API::Vocabulary); the vocabulary configures the element like DNA. 🙂
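To give an idea of the pattern (the names below are simplified and hypothetical, the real role and vocabulary classes are much richer), the relationship between the role, the element and its vocabulary looks roughly like this:

package MonkeyMan::CloudStack::API::Roles::Element;

use Moose::Role;

# Every class that wants to be an element must provide its "DNA"
requires 'vocabulary_tree';

# The role derives all the API-related behaviour from the vocabulary,
# so the element class itself contains no API-specific logic at all
sub compose_request {
    my ($self, $action, %parameters) = @_;
    my $vocabulary = $self->vocabulary_tree;
    return {
        command => $vocabulary->{actions}->{$action}->{command},
        %parameters
    };
}

package My::Domain;    # a hypothetical element class

use Moose;

with 'MonkeyMan::CloudStack::API::Roles::Element';

sub vocabulary_tree {
    {
        name    => 'domain',
        actions => {
            list => { command => 'listDomains' }
        }
    }
}

package main;

# The element knows how to talk about domains only because its
# vocabulary says so:
my $request = My::Domain->new->compose_request(list => (id => 13));
# $request is now { command => 'listDomains', id => 13 }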

See MonkeyMan::CloudStack::API::Element::Domain.

Yes, that’s what MonkeyMan knows about the Domain infrastructure element in the ACS-driven cloud. And when I needed to teach it how to handle another infrastructure element, an account, I just added it as a separate vocabulary:

See MonkeyMan::CloudStack::API::Element::Account.

MonkeyMan is a good student, it’s such a joy to teach him a few new tricks 🙂
dammit. he is alive.

My music teacher

I started to play music in the same year when I first heard Amy Jade Winehouse, so I consider her my teacher of music. I had no teachers besides her. I started to play to write a song for her, to say how grateful I am for showing me things I had never thought about before. I started to play, and very soon after that (in a year) my teacher expelled me to the street, so I’ve been wandering everywhere with my guitar and amp, playing on the streets of my city and in places where no one ever plays. Snow, rain, wind or sun – nothing can stop me. I can walk playing my tunes all day long (and I usually do it at least once a week), and I feel so grateful to the teacher for giving me these tunes. She guides me and, I believe, she’ll do it forever (please!).

How to find out where exactly the packet is being lost

It’s a pretty common occurrence: you have 2 hosts pinging each other and some packets are being lost. Sometimes you need to make sure that the routing device between these hosts really receives a request, forwards it to the target, receives the reply and forwards it back to the request’s initiator. If you’re lucky enough to be able to run tcpdump on this intermediate device (if it runs some Unix-like OS, e.g. Linux or FreeBSD), you can wrap it in a script that will analyze each transit packet to find out what exactly is going wrong.

If you watch the tcpdump output with the naked eye, you’ll see the following pattern:

…which repeats…

…and repeats…

And if you look carefully at this pattern, you’ll see that there are 4 obvious phases: the request came in, the request went out, the reply came in and the reply went out.
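For illustration, here is what one round trip might look like if tcpdump listens on both interfaces of the intermediate device (something like tcpdump -l -n -i any icmp with a reasonably recent tcpdump; the addresses, id and seq numbers are made up):

12:00:00.000100 eth0  In  IP 10.13.0.2 > 10.13.1.2: ICMP echo request, id 4242, seq 13, length 64
12:00:00.000150 eth1  Out IP 10.13.0.2 > 10.13.1.2: ICMP echo request, id 4242, seq 13, length 64
12:00:00.000900 eth1  In  IP 10.13.1.2 > 10.13.0.2: ICMP echo reply, id 4242, seq 13, length 64
12:00:00.000950 eth0  Out IP 10.13.1.2 > 10.13.0.2: ICMP echo reply, id 4242, seq 13, length 64

When one of the four lines is missing for some seq number, you know exactly on which leg the packet is being lost.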

So you can sit and watch the packets being received and transmitted, but that can get really boring, so it’s better to run a script that watches the pattern for you.
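The full version is linked below; a minimal sketch of the logic might look like this (it expects the output of something like tcpdump -l -n -i any icmp on STDIN and assumes the four-phase pattern shown above, that is, every request and every reply has to be seen twice, coming in and going out):

#!/usr/bin/env perl

use strict;
use warnings;

my %phases_seen;    # seq-number => how many lines we've seen for it
my $last_seq;

while (my $line = <STDIN>) {

    # We only need the essentials of each line, something like
    # "... IP 10.13.0.2 > 10.13.1.2: ICMP echo request, id 4242, seq 13, ..."
    next unless ($line =~ /ICMP echo (request|reply), id \d+, seq (\d+)/);
    my ($type, $seq) = ($1, $2);

    $phases_seen{$seq}++;

    # A new request's seq-number means the previous one should have been
    # completed: 2 request lines (in + out) and 2 reply lines (in + out)
    if (($type eq 'request') && (!defined($last_seq) || $seq != $last_seq)) {
        if (defined($last_seq)) {
            alert("seq $last_seq: seen only $phases_seen{$last_seq} of 4 phases")
                if ($phases_seen{$last_seq} != 4);
            alert("seq $seq is unexpected (seq " . ($last_seq + 1) . " has been expected)")
                if ($seq != $last_seq + 1);
            delete($phases_seen{$last_seq});
        }
        $last_seq = $seq;
    }

}

sub alert {
    my ($message) = @_;
    printf("%s ALERT: %s\n", scalar(localtime), $message);
}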

It reads the output of tcpdump and prints an alert message when the usual pattern gets broken: when some request or reply is absent, or when the seq-number is unexpected, it prints something like this:
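In the case of the sketch above it would look like the following lines (the timestamps are made up, and the real script’s messages are somewhat more detailed):

Sun Mar 13 13:13:13 2016 ALERT: seq 41: seen only 3 of 4 phases
Sun Mar 13 13:13:14 2016 ALERT: seq 43 is unexpected (seq 42 has been expected)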

The latest version is available at GitHub: https://gist.github.com/anonymous/2e8b6883c93326de280124c077424cc6.

Walkers

It happens: some dude, say, walks along, then looks around and sees he’s wandered somewhere wrong. He wonders how to get out, and then recalls that, while walking, he kept forgetting where he wanted to get to, so he kept choosing entirely different paths; that’s how he ended up in the wrong place. Another one goes hop, hop, and briskly gets exactly where he needed to be, not late and not even particularly dusty. And yet another remembered where he was going, knew he wasn’t going the right way at all, but trudged on quite confidently, because he really wanted to go and see what was there. He came to look, looked and looked, and went blind right there; now he just stands where he arrived.

For everyone else, bicycles were invented long ago.

How to test a TCP-connection

Sometimes it’s not enough to estimate the packet loss ratio; you also need to make sure that TCP-connections are stable. Run the “server” instance (sockping -m server -p 65513) on one host and run the “client” (sockping -m client -h 13.13.13.13 -p 65513) on another one; the “client” will connect to the “server” and then keep sending probe messages and expecting replies. If the TCP-session is broken, the “server” reports it to STDERR.
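The real tool is in the gist mentioned below; a minimal sketch of the same idea might look like this (the option handling is trimmed down, and the probe format is made up):

#!/usr/bin/env perl

use strict;
use warnings;

use Getopt::Long;
use IO::Socket::INET;

GetOptions(
    'm|mode=s' => \my $mode,
    'h|host=s' => \my $host,
    'p|port=i' => \my $port
);
die("Usage: $0 -m server|client [-h host] -p port\n")
    unless (defined($mode) && defined($port));

if ($mode eq 'server') {

    my $listener = IO::Socket::INET->new(
        LocalPort => $port, Proto => 'tcp', Listen => 5, ReuseAddr => 1
    ) or die("Can't listen: $!");

    while (my $client = $listener->accept) {
        # Echo every probe back; a broken session shows up as an EOF
        while (defined(my $probe = <$client>)) {
            print {$client} $probe;
        }
        warn(scalar(localtime) . " The TCP-session is broken\n");
    }

} else {

    # Don't let a broken pipe kill us before we can report it
    local $SIG{PIPE} = 'IGNORE';

    my $server = IO::Socket::INET->new(
        PeerAddr => $host, PeerPort => $port, Proto => 'tcp'
    ) or die("Can't connect: $!");

    for (my $seq = 0; ; $seq++) {
        print {$server} "PROBE $seq\n"
            or die(scalar(localtime) . " The TCP-session is broken\n");
        my $reply = <$server>;
        die(scalar(localtime) . " The TCP-session is broken\n")
            unless (defined($reply) && ($reply eq "PROBE $seq\n"));
        sleep(1);
    }

}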

Saved to Gist.

How other people affect our lives

What if people influence each other much more than we tend to believe? Have you ever assumed that some events in other people’s lives, some traits of their destinies, can occasionally seep into our own lifeways? What if the more we interact with certain people and the more impact they have on us, the more circumstances of their lives recur in our own lives? And what if each person (even the ones we meet in early childhood) leaves some imprint on our future lives? And what if it happens not because we’ve “learned” something from these people or intentionally “got” something from them, but just because the reality works this way, so we repeat even things that we consider undesirable? What if we always inherit some random (are they really random?) things from their scenarios even if we don’t want that? What do you think?

How to check all the sensors (a Nagios plugin)


Ever fancied a Nagios plugin to check all the sensors on the host without any hassle? Try this one: it collects all the sensors’ input values and compares them to their thresholds (the script obtains the threshold values from the system by itself). The plugin throws a warning when the ratio of the input value to the threshold value is 0.8 or more (you can change that with the -w option), and it yells about a critical state if the ratio is equal to or greater than 1 (of course, you can change that too with the -c option, although I wouldn’t suggest you do that).

Oh, I almost forgot to add: you need to have the lm_sensors utility installed.
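The plugin itself is a shell script (see the gist below), but its core logic can be sketched as follows: parse the output of sensors -u, match every *_input value with the corresponding *_max threshold and exit with the usual Nagios codes. The sketch hard-codes the ratios and ignores the *_crit thresholds for brevity:

#!/usr/bin/env perl

use strict;
use warnings;

my $warning_ratio  = 0.8;
my $critical_ratio = 1.0;

my ($chip, $label, %input, %threshold);

# The "-u" switch makes lm_sensors print raw feature names and values,
# which are much easier to parse than the human-readable output
open(my $sensors, '-|', 'sensors -u') or die("Can't run sensors: $!");
while (my $line = <$sensors>) {
    chomp($line);
    if ($line =~ /^(\S+)$/) {                           # a chip's name
        $chip = $1;
    } elsif ($line =~ /^(\S.*):$/) {                    # a sensor's label
        $label = $1;
    } elsif ($line =~ /^\s+\S+_input:\s+([\d.]+)/) {    # the current value
        $input{"$chip/$label"} = $1;
    } elsif ($line =~ /^\s+\S+_max:\s+([\d.]+)/) {      # its threshold
        $threshold{"$chip/$label"} = $1;
    }
}
close($sensors);

my $worst = 0;
my @alerts;
foreach my $sensor (sort(keys(%input))) {
    next unless ($threshold{$sensor} && ($threshold{$sensor} > 0));
    my $ratio = $input{$sensor} / $threshold{$sensor};
    if ($ratio >= $warning_ratio) {
        if    ($ratio >= $critical_ratio) { $worst = 2; }
        elsif ($worst < 1)                { $worst = 1; }
        push(@alerts, sprintf('%s = %s (%d%% of %s)',
            $sensor, $input{$sensor}, $ratio * 100, $threshold{$sensor}));
    }
}

printf("%s: %s\n",
    ('OK', 'WARNING', 'CRITICAL')[$worst],
    (@alerts ? join(', ', @alerts) : 'all sensors are within their thresholds'));
exit($worst);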

Here are some examples:

check_all_sensors.sh
This command will check each sensor on each chip. It will raise the critical status when the input value is equal to or greater than 100% of the threshold value; if the input value is greater than or equal to 80% of the threshold value, the warning state will be raised. All threshold values are obtained from the system.

check_all_sensors.sh -c 90% -w 50%
It’s almost the same, but the critical ratio is 90% and the warning ratio is 50%.

check_all_sensors.sh -C zaloopa -S '/^Temperature [0-9]+$/' -c 90 -w 75
Only the temperature sensors on the zaloopa chip will be checked. The critical status will be raised when the absolute input value is greater than or equal to 90 degrees; if it’s greater than or equal to 75 degrees, the warning state will be thrown.

Also saved it to Gist.