TestOps Infrastructure From Beginning to End: A Practical Approach (Part 2)
The first part of the article was more on the theoretical side; now we move on to the technical part. Our company has its own set of tools, which we have been developing for over a decade, and they work best for our projects and our customers. The scheme stays the same if you use tools of your own choice for writing automation scripts and a reporting tool for visualization. You can replicate this approach: it is a working concept that we have verified ourselves in production.
January 08, 2019
Carina/Zafira Logging
Zafira is a Java-based web application built on the Spring Framework. It aggregates all the test results from Carina with the help of the Zafira Listener Agent. All the test results are published in a web interface where you see graphs and test results. You can interact with the dashboard, report known issues, watch what is happening with a test in real time over a VNC session, and get a video recording.

ZafiraLogAppender pushes every logging event to RabbitMQ with a correlation ID of the form testRunId_testId (for tracking in the Zafira reporting tool).
Concerning real-time reporting: say you have a running test that clicks buttons and types text, and you want to watch what is happening with it in real time through a web interface. For these cases we have ZafiraLogAppender, an implementation of a Log4j appender that publishes every event coming into the logger to a RabbitMQ exchange. Zafira in turn connects to this logs exchange, and you see all the changes in real time: a test pushes a button, and this information appears in Zafira; a test goes to a new URL, and it appears in the web reporting tool.
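As a rough illustration, wiring such an appender into Log4j looks like any other appender registration. The appender class and parameter names below are assumptions for the sketch; check the Carina/Zafira documentation for the exact ones:

```xml
<!-- Illustrative log4j.xml fragment; class and parameter names are
     assumptions, consult the Carina/Zafira docs for the real ones. -->
<appender name="zafira" class="com.qaprosoft.zafira.log.ZafiraLogAppender">
  <param name="RabbitHost" value="rabbitmq"/>
  <param name="RabbitPort" value="5672"/>
  <param name="RoutingKey" value="logs"/>
</appender>

<root>
  <priority value="info"/>
  <appender-ref ref="zafira"/>
</root>
```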

Concerning historical data, we use the Elastic Stack (Elasticsearch, Logstash, Kibana). In our case we use only two of its components, since we do not actually need visualization through Kibana (we handle that with our own reporting tool); the goal was to set up Logstash and Elasticsearch. The working principle is the following: to be able to look through test logs after some time has passed, we set up a pipeline in Logstash that consumes an exchange from RabbitMQ and writes every message, keyed by its correlation ID (that is, the test run ID and test ID), into Elasticsearch. When you log into the Zafira web UI a week, a day, or an hour later, you have the context you need: knowing the test ID or test run ID, you query Elasticsearch with an HTTP request and get all the logs you need.
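For example, fetching the logs of one test then boils down to a single search request against the index named after the correlation ID (the placeholders and the size parameter below are illustrative):

```
GET http://elasticsearch:9200/<testRunId>_<testId>/_search?size=1000
```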
Logstash Pipeline Configuration
The configuration is the following: create a custom Logstash config file and configure the pipeline in it.
The pipeline has an input, the point you read from; below you can see the configuration for RabbitMQ: host, port, username, and password.
And an output: in this case Logstash writes into Elasticsearch.
Since we run this ELK stack in Docker, there is a prebuilt image that we use. In this case the Elasticsearch host is also local for Logstash, because Elasticsearch and Logstash run in the same container.
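A minimal Docker Compose fragment for such a setup might look like this. The sebp/elk community image bundles Elasticsearch and Logstash in one container; the image name, port, and paths here are assumptions, adjust them to your stack:

```yaml
# Illustrative fragment, not an exact production config
elk:
  image: sebp/elk
  ports:
    - "9200:9200"   # Elasticsearch REST API for the reporting tool
  volumes:
    - ./logstash/pipeline.conf:/etc/logstash/conf.d/pipeline.conf
```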

RabbitMQ configuration:
                host => "rabbitmq"
                port => "5672"
                exchange => "logs"
                metadata_enabled => true
                user => "qps"
                password => "secret"
                durable => true
                subscription_retry_interval_seconds => 5
Elasticsearch output configuration:
                hosts => ["localhost"]
                index => '%{[@metadata][rabbitmq_properties][correlation-id]}'
                document_type => 'test'
Here are a few points to bear in mind. Since you access Elasticsearch not through Kibana, which sits in the container, but through an external REST client, you additionally need to adjust the cross-origin request (CORS) policy in the Elasticsearch YAML. The key parameter here is http.cors.enabled: true. You can then list the hosts that are allowed to send requests, or put a star to accept requests regardless of the origin host.

Elasticsearch CORS configuration (elasticsearch.yml):
http.cors.enabled : true
http.cors.allow-origin : "*"
http.cors.allow-methods : OPTIONS, HEAD, GET, POST, PUT, DELETE
http.cors.allow-headers : "*"
Now a few words about video streaming. If you have worked with Selenoid, you know it has two great capabilities: enableVNC and enableVideo. enableVNC runs a special container that opens a standard or predefined port; through any browser or VNC client you can connect to the host port, watch the testing session in real time and, moreover, interact with it through the VNC session if something goes wrong with the test and you want to double-check it.
Carina/Zafira – VNC Streaming (Web)
The process in brief: you use a testing framework, or Carina itself, and set enableVNC=true in the desired capabilities. This runs Aerokube's Selenoid container with VNC support and lets you see what is going on in a test session. Unfortunately, by default Selenoid does not support addressing this session via the secure WebSocket protocol. So we additionally use NGiNX as the entry point from the outside. Of course, if we build for production we want it to be secure on both the HTTP level and the WebSocket level. It won't work otherwise: if plain HTTP is used for a web service, the browser raises an alert, and the interaction has to go through a secured protocol with pre-installed certificates.
In this setup, Zafira or your test automation reporting tool talks to NGiNX over a secure protocol, and NGiNX proxies the traffic into your environment over the unsecured WebSocket. As a client we use the remote framebuffer (RFB) protocol.

I'll say a few words about this config. We use a hub that aggregates several Selenoid instances, called Go Grid Router (GGR). If you run a huge number of tests, a single machine will not be enough, so whether you use Amazon or a local infrastructure of several servers, there is an entry point that distributes requests among the nodes. In the GGR quota you should additionally specify the VNC parameters: they tell GGR how to proxy the traffic from the VNC entry point to a specific Selenoid node (very important!). The NGiNX configuration lets us access hostname/vnc over a secured socket via the WSS protocol and proxies the traffic to GGR, which uses the unsecured protocol.

GGR quota configuration:
<?xml version="1.0"?>
<qa:browsers xmlns:qa="urn:config.gridrouter.qatools.ru">
  <browser name="chrome" defaultVersion="67.0">
    <version number="66.0">
      <region name="stage">
        <host name="host" port="4446" vnc="ws://host:4446/vnc"/>
      </region>
    </version>
  </browser>
</qa:browsers>
NGiNX configuration:
location /vnc {
      proxy_pass http://ggr_server;
      proxy_http_version 1.1;
      proxy_set_header Upgrade $http_upgrade;
      proxy_set_header Connection "upgrade";
      proxy_set_header Host $http_host;
      add_header Access-Control-Allow-Origin *;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-NginX-Proxy true;
}
Carina/Zafira – VNC Streaming (Mobile)
A few words about VNC streaming for mobile devices. For our mobile automation, we use our own build of a Docker container that has everything needed for running Appium tests. In this picture you can see that when a real mobile device interacts with a server, it runs a corresponding container with ADB, Java, Appium, and STF.
STF is a visualization service that lets you interact with mobile devices through a web interface; it can connect to your real mobile device. Unfortunately, it uses a raw TCP protocol, so you additionally have to install a proxy (websockify) that translates the traffic from WebSocket to TCP.
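A minimal sketch of such a bridge, assuming websockify is available in the container; both port numbers here are assumptions, use whatever your streaming endpoint actually listens on:

```
# Bridge WebSocket clients on :6080 to the device's raw TCP endpoint on :5900
websockify 6080 localhost:5900
```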

So what happens next? Just like with Selenoid, we enable VNC for the tests with the enableVNC=true capability, which starts the websockify proxy. The traffic still runs over an unsecured protocol, so we proxy the output through NGiNX and serve it over WSS. As in the scheme, an RFB client connects to the URL and shows in real time what is going on on the mobile device under test.
Video Recording
All the logs are available both in real time and in historical form. As for video, a connected VNC session is real time only, but it often happens that you analyze tests later: you want to go through five or six failed tests and watch a recording of what happened. All the enterprise platforms, BrowserStack for example, record video so that you can see what happened and why the tests failed. Selenoid has a standard desired capability called enableVideo: Selenoid runs an additional container that proxies the whole VNC traffic and records a video in MP4 format.
Video Recording (Web)
To make the recorded video accessible from outside via the web, you should set up NGiNX and Selenoid so that they mount a volume onto the same folder.

We set up NGiNX so that when you request hostname/video, the video name points to the specific folder where Selenoid puts the recording.

You know the session ID, since your automated tests initialize the WebDriver, and with the session ID you can correlate the recorded session with the test for which it was recorded.

For Selenoid we need to override the video output directory variable (OVERRIDE_VIDEO_OUTPUT_DIR); it is described in the Selenoid documentation. Secondly, everything placed in the container under /opt/selenoid/video should be mapped to the host machine: $PWD/selenoid/video.

Selenoid configuration:
    network_mode: bridge
    image: aerokube/selenoid:latest-release
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - $PWD/selenoid/:/etc/selenoid/
      - $PWD/selenoid/video/:/opt/selenoid/video/
    environment:
      - OVERRIDE_VIDEO_OUTPUT_DIR=$PWD/selenoid/video/
NGiNX Configuration:
    image: nginx
    volumes:
      - ./selenoid/video:/usr/share/nginx/video

location /video {
        root   /usr/share/nginx;
}
When setting up NGiNX in Docker Compose, we also declare that the Selenoid video volume is shared into the container at /usr/share/nginx/video. The NGiNX config itself assumes that /video statically serves the resources placed in the corresponding folder.
Video Recording (Mobile)
Since version 1.8.0, Appium provides options that let the user record a testing session of a real application both on a real mobile device and on an emulator.
You need to set only a few parameters that instruct Appium to start recording; there are settings for the recording session, such as video resolution or bitrate, and you should also provide credentials for your FTP server.

By default, when recording stops, the video is placed on the FTP server: as soon as we tell Appium to stop recording, it uses the previously supplied credentials to push the video to the FTP server.
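For illustration, the upload options passed when stopping the recording look roughly like this; the field names follow Appium's screen-recording upload options, while the host, path, and credentials are made up:

```json
{
  "remotePath": "ftp://ftp-host/video/<sessionId>.mp4",
  "user": "qps",
  "pass": "secret",
  "method": "PUT"
}
```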

The FTP server is set up from one of the many public Docker images available; the process is described in the infrastructure documentation. Appium pushes to FTP, and the scheme is quite similar to the web one: NGiNX and FTP point to the same folder, so you can use the usual scheme host/video/sessionID.mp4 to access the recorded video through your reporting tool. You know the session ID, so you request it and see what was happening during this particular test run.
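Schematically, the shared-folder setup can look like this; the image name and paths are assumptions, and any public FTP image with a configurable home directory will do:

```yaml
# Illustrative fragment: FTP and NGiNX share the same video folder
ftp:
  image: stilliard/pure-ftpd     # example public FTP image, assumption
  volumes:
    - ./video:/home/ftpusers/qps/video
nginx:
  image: nginx
  volumes:
    - ./video:/usr/share/nginx/video
```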

In conclusion, from my own production experience I can say that it is possible to build a cost-effective TestOps infrastructure using only open-source products.