I wrote an article on writing (external) Domain Specific Languages in Ruby over at the Red Hat Developers Blog.
Monday, December 05, 2016
I had the pleasure to be invited to present the Hawkular eco-system at GrafanaCon 2016.
The venue for this 2-day conference was certainly not your everyday conference venue in a hotel ballroom or cinema, but a lot more heavy-metal:
And thus the sessions of the first day all happened in the on-board theatre, which used to be one of the elevators that moved the aircraft from the hangar to the flight deck.
Former aircraft elevator
The sessions were kicked off by Torkel Odegaard, the creator of Grafana, who gave some numbers about the growth and versions of Grafana and the community of users and contributors.
In the next session Kevin from Fermilab talked about lasers and how they are connected to Grafana (hint: monitoring of the huge infrastructure that monitors the collider experiments).
And then came what probably most had been waiting for: the official launch of Grafana 4 by Torkel. This included a demonstration of some new features like alerting, where you can define alerts directly on a graph, including a visual value picker and a simulation mode. The announcements continued later with the renaming of Raintank to GrafanaLabs and the completion of the stack with Intel Snap, which announced its version 1.0, followed by Graphite doing the same.
I am not going through all the sessions here, but I still want to mention the presentation by Kyle Brandt, creator of the Bosun alert engine. His talk was less a product presentation than a philosophical one about getting communication right: if one engineer sets up an alert trigger and another one has to work on the fired alert, it is important that the latter gets enough context to be able to quickly react to the alert.
The afterparty in the evening happened in a nearby bowling place. Those balls are a lot larger and heavier than the ones we use in Germany for Kegeln (nine-pin bowling).
The second day had a format with two parallel tracks and was a lot more "how-to" like; it included a good presentation by Brian Brazil of Prometheus fame and a nice (hi)story of monitoring at Sony PlayStation. My talk took place in the afternoon at 3pm. It went well and I got some good questions and feedback.
As this was the first day with better weather, I also toured the flight deck and the bridge of the Intrepid.
Bridge of the Intrepid
Space Shuttle Enterprise
View from the bridge
All talks were recorded and will be put online once the video team has edited in the slides. My slides are available in the meantime from http://www.pilhuhn.de/GrafanaCon2016.pdf. I will update this post once the recordings are online.
Tuesday, October 18, 2016
Hawkular has had a UI for a while with the possibility to set up alert triggers. As the name suggests, those triggers are used to define the conditions under which an alert is to be fired.
The other day I was doing some testing and a colleague asked me if I had already defined some triggers. I thought that I neither wanted to log into ManageIQ right then nor pass JSON structures via curl commands.
As I had done some DSL work for metrics recently, I thought: why not set up a DSL for trigger definitions? This is work in progress right now; here are two examples.
Set up a threshold trigger to fire when the value of _myvalue_ is > 3:
define trigger "MyTrigger" enabled ( threshold "myvalue" > 3 ) auto-disable
Set up a trigger on availability when it is reported as DOWN. The trigger is not enabled.
define trigger "MyTrigger" ( availability "mymetric" is DOWN )
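To illustrate how such a definition could map to a data structure, here is a minimal, hand-rolled parser sketch in Ruby. This is purely illustrative: the tokenizer, the helper name parse_trigger and the shape of the resulting hash are my own assumptions, not the actual HawkFX code or the Hawkular-Alerts wire format.

```ruby
# Sketch: parse a trigger definition into a Ruby hash.
# NOTE: the hash layout below is hypothetical, not the Hawkular-Alerts format.
def parse_trigger(input)
  # tokens are quoted strings, comparison operators, or bare words/parentheses
  tokens = input.scan(/"[^"]*"|[><=]|\S+/)
  unless tokens.shift == 'define' && tokens.shift == 'trigger'
    raise ArgumentError, "expected 'define trigger'"
  end
  trigger = { name: tokens.shift.delete('"'), enabled: false, conditions: [] }
  until tokens.empty?
    case tokens.first
    when 'enabled'      then trigger[:enabled] = true;      tokens.shift
    when 'auto-disable' then trigger[:auto_disable] = true; tokens.shift
    when '('
      tokens.shift                      # consume '('
      kind   = tokens.shift             # 'threshold' or 'availability'
      metric = tokens.shift.delete('"')
      case kind
      when 'threshold'
        op    = tokens.shift            # e.g. '>'
        value = tokens.shift.to_f
        trigger[:conditions] << { type: :threshold, metric: metric, op: op, value: value }
      when 'availability'
        tokens.shift                    # consume 'is'
        trigger[:conditions] << { type: :availability, metric: metric, state: tokens.shift }
      end
      tokens.shift                      # consume ')'
    else
      raise ArgumentError, "unexpected token #{tokens.first}"
    end
  end
  trigger
end
```

A real implementation would of course use a proper grammar, but the sketch shows how little structure the DSL actually needs to carry.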
At the moment it is a very crude integration via entry points in the main menu, and the DSL itself is also far from ready. I consider this an experimentation space. If it turns out successful, it may be possible to take the grammar and integrate it directly into Hawkular-Alerts, so that one can directly POST a document with a DSL fragment, which then gets turned into the internal representation.
If you are looking for code, this is available in the alert_insert branch of HawkFX.
Tuesday, September 27, 2016
Computed metrics are something I wanted to do for a very long time, already back in RHQ, but I never really got around to it and sort of forgot about it again.
Lately I found a post that contained a DSL to do exactly this (actually you should read that post not because of the DSL, but because of the idea behind it).
After seeing this, I got an idea of what to do and how to include it in HawkFX, my pet project, an explorer for Hawkular.
HawkFx with the input window for formulae, that shows a formula and also a parser error.
The orange chart shows Non-Heap used, the reddish one the heap usage of a JVM.
Formulas are written in a DSL with a Lisp-like prefix notation, e.g. as in the following (I've shortened the metric IDs for readability, more on them below):
(+ metric( "MI~...Heap Used" , "max") metric( "MI~...NonHeap Used", "max"))
to sum up two metrics (see also the screenshot below). The 'metric' element gets two parameters: the metric ID, and which of the aggregates that the server sends should be taken (in this case the max value) - this comes from the fact that we request the server to put the values into 120 buckets.
Or if you have the total amount of memory you could also subtract the used memory to get a graph of the remaining:
(- 1000000 metric( "MI~...NonHeap Used", "max"))
You could also get the total wait time for responses at a point in time when you multiply the average wait time with the number of visitors:
(* metric("MI~..ResponseTime","avg") metric("MI~..NumberVisitors","sum"))
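Conceptually, evaluating such a formula means walking the prefix expression and combining the bucketed data series point by point. The following is a small Ruby sketch of that idea; the names eval_formula, combine and the buckets lookup are hypothetical and for illustration only, and in HawkFX the values would come from bucketed Hawkular-Metrics queries rather than a hash.

```ruby
# Sketch: evaluate a prefix-notation formula against bucketed series.
# `buckets` maps [metric_id, aggregate] to an array of bucket values.
def tokenize(src)
  src.scan(/"[^"]*"|[(),]|[^\s(),]+/)
end

# Combine two operands point-wise; a plain number acts as a constant series.
def combine(op, a, b)
  a = Array.new(b.size, a) if a.is_a?(Numeric) && b.is_a?(Array)
  b = Array.new(a.size, b) if b.is_a?(Numeric) && a.is_a?(Array)
  return a.send(op, b) if a.is_a?(Numeric)
  a.zip(b).map { |x, y| x.send(op, y) }
end

def eval_formula(tokens, buckets)
  token = tokens.shift
  case token
  when '('
    op    = tokens.shift               # one of + - * /
    left  = eval_formula(tokens, buckets)
    right = eval_formula(tokens, buckets)
    tokens.shift                       # consume closing ')'
    combine(op, left, right)
  when 'metric'
    tokens.shift                       # consume '('
    id  = tokens.shift.delete('"')
    tokens.shift                       # consume ','
    agg = tokens.shift.delete('"')
    tokens.shift                       # consume ')'
    buckets.fetch([id, agg])
  else
    token.to_f                         # a literal number
  end
end
```

The literal-number case is what makes the subtraction example above work: the constant is broadcast into a series so it can be combined with the bucketed values.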
Computed total memory usage
Summing up the metrics for 'Heap Used' and 'NonHeap Used' as shown above would then give you a nice graph of the total memory consumption of a JVM:
The green chart now shows the combined memory usage of Heap and Non-Heap, which is computed from the other two series. Orange and red are as above.
On metric IDs
Metric IDs are the IDs under which a metric is stored inside Hawkular. The example here comes from an installation of Hawkular-services in Docker. If you feed your metrics into Hawkular-Metrics directly, the IDs will look like the ones you are using.
ID (upper) and path fields (lower) for a selected item in the tree
I have just pushed an update to HawkFX that provides the ID and path in their own fields at the bottom of the main window, so you can copy&paste them.
I will talk more about the parser in an upcoming article. For now it is a personal playground to better understand what is doable here. If this turns out to be successful, I can imagine the DSL being incorporated directly into Hawkular-Metrics, so that the rules are available to all metrics clients.
It would of course be cool to have an editor for the formulas that allows you to interactively pick metric IDs etc., but I doubt that I will get to this any time soon.
Thursday, September 15, 2016
My contribution to the Comic-Collab by Schlogger, this time on the topic "Gespalten" (split).
Schaf und Verstand, Schoolpeppers
Friday, July 15, 2016
My contribution to the Comic-Collab by Schlogger, this time on the topic "Sabotage".
Also taking part in July:
Mic At Six
Schisslaweng, GoboPictures, Rainer Unsinn
Monday, June 27, 2016
As you may know, we have started to create the Hawkular-Services distribution, which we try to build weekly. This distribution comes without an embedded Cassandra, without a default user, and also without a UI (we plan on re-adding a basic UI).
But fear not.
Running Hawkular-services is pretty easy via Docker.
Let me start with a drawing of what I want to do here:
Setup of Hawkular-services via Docker
In this scenario I run a Docker daemon (which is extremely easy these days on a Mac thanks to DockerForMac (Beta)). On the daemon I run a Hawkular-services container, which talks to a Cassandra container over the Docker-internal network. On top of that I have two WildFly10 containers running ("HawkFly"), which have been instrumented with the Hawkular-agent.
To set up linking and data volumes I am using docker-compose. The following is the docker-compose.yml file used (for the moment all images are on my personal account):
# set up the wildfly with embedded hawkular-agent
hawkfly:
  image: "pilhuhn/hawkfly:latest"
  ports:
    - "8081:8080"
  links:
    - hawkular
# The hawkular-server
hawkular:
  image: "pilhuhn/hawkular-services:latest"
  ports:
    - "8080:8080"
    - "8443:8443"
    - "9990:9990"
  volumes:
    - /tmp/opt/data:/opt/data
  links:
    - myCassandra
  environment:
    - HAWKULAR_BACKEND=remote
    - CASSANDRA_NODES=myCassandra
# The used Cassandra container
myCassandra:
  image: cassandra:3.7
  environment:
    - CASSANDRA_START_RPC=true
  volumes:
    - /tmp/opt/data:/opt/data
To get started, save the file as docker-compose.yml and then run:
$ docker-compose up hawkular
This first starts the Cassandra container and then the Hawkular one. If they do not yet exist on the system, they are pulled from DockerHub.
After Hawkular has started you can also start the HawkFly:
$ docker-compose up hawkfly
Right now, if you directly ran docker-compose up hawkfly, the agent would not work, as the Hawkular server is not yet up and the agent would just stop. We will add some re-try logic to the agent pretty soon.
I have pushed a new version 0.19.2 of HawkFly that has the retry mechanism. Now it is possible to get the full combo going by only running
$ docker-compose up hawkfly
Running without docker-compose
On my RHEL 7 box, there is Docker support, but no docker-compose available. Luckily docker-compose is more or less a wrapper around individual docker commands. The following sequence gets me going (you have to be root to do this):
mkdir -p /var/run/hawkular/cassandra
mkdir -p /var/run/hawkular/hawkular
chcon -Rt svirt_sandbox_file_t /var/run/hawkular
docker run --detach --name myCassandra -e CASSANDRA_START_RPC=true \
  -v /var/run/hawkular/cassandra:/var/lib/cassandra cassandra:3.7
sleep 10
docker run --detach -v /var/run/hawkular/hawkular:/opt/data \
  -e HAWKULAR_BACKEND=remote -e CASSANDRA_NODES=myCassandra \
  -p 8080:8080 -p 8443:8443 --link myCassandra:myCassandra \
  pilhuhn/hawkular-services:latest
There is an open pull request to build the Hawkular-Services Docker image as part of a release and make it available via DockerHub on the official Hawkular account.
With this PR you can do
$ mvn install
$ cd docker-dist
$ mvn docker:build docker:start
to get your own container built and run together with the C* one.
Right now the default user/password, and whether the agent inside the hawkular container should be enabled, are set at image build time. Going forward we need to find a way to pass those at the time of the first start. The same applies (probably even more) to SSL certificates.
Storing them inside the container itself does not work going forward, as they would be lost when a newer version of the image is pulled and a new container is constructed from the newer image.