elasticsearch – eyeveebee https://eyeveebee.dev

Top 3 recommendations to keep your Elasticsearch cluster healthy https://eyeveebee.dev/top-3-recommendations-to-keep-your-elasticsearch-cluster-healthy Tue, 21 Dec 2021 17:00:00 +0000

Working as Elastic support engineers, we see a few tools that are very useful to have in place when we need to help our customers troubleshoot their Elasticsearch clusters, or to monitor them and keep them healthy. Let's review the top 3 with a few examples.

1. Know your REST APIs

Knowing the Elasticsearch REST APIs is very useful for keeping your cluster healthy. Not only can they help you troubleshoot, but also prevent issues. If you want something more human-readable, have a look at the CAT APIs.

The first step in keeping the cluster healthy is to keep it in green health. With a simple call to the cluster health API:

GET /_cluster/health

We’ll get an overview of our cluster status.

{
  "cluster_name": "eyeveebee-prod-cluster",
  "status": "red",
  "timed_out": false,
  "number_of_nodes": 30,
  "number_of_data_nodes": 27,
  "active_primary_shards": 15537,
  "active_shards": 26087,
  "relocating_shards": 0,
  "initializing_shards": 0,
  "unassigned_shards": 1,
  "delayed_unassigned_shards": 0,
  "number_of_pending_tasks": 209,
  "number_of_in_flight_fetch": 0,
  "task_max_waiting_in_queue_millis": 235940,
  "active_shards_percent_as_number": 99.99616762873977
}

In the case above, we have a cluster in red status. We can see we have one unassigned shard, which is causing the cluster to be red. The health of a cluster is that of the worst shard of the worst index. Therefore, at least one index, and at least one of its shards, will be red.

It’s important that we keep our cluster in green health. Kibana alerts can help us here, notifying us when the cluster becomes yellow (missing at least one replica shard) or red (missing at least one primary shard).

To further investigate what index is red, we can use the CAT indices API:

GET _cat/indices?v&s=health:desc,index&h=health,status,index,docs.count,pri,rep

Where we could locate the red index.

health status index                    docs.count pri rep
red    open   eventlogs-000007                      1   1
green  open   .apm-agent-configuration          0   1   1
...

With the CAT shards API we can have a look at the shards of the red index ‘eventlogs-000007’:

GET _cat/shards?v&s=state:asc,node,index&h=index,shard,prirep,state,docs,node

Where we would be able to determine that indeed we are missing a primary shard, which is ‘UNASSIGNED’.

index                                                                 shard prirep state         docs node
eventlogs-000007                                                      0     p      UNASSIGNED         
.apm-agent-configuration                                              0     p      STARTED          0 instance-0000000012
...

Finally, we can use the Cluster Allocation explain API to find the reason why.

GET _cluster/allocation/explain
{
  "index": "eventlogs-000007",
  "shard": 0,
  "primary": true
}

We would get an explanation similar to the following, which should allow us to get to the root cause.

{
  "index" : "eventlogs-000007",
  "shard" : 0,
  "primary" : false,
  "current_state" : "unassigned",
  "unassigned_info" : {
    "reason" : "NODE_LEFT",
    "at" : "2021-12-08T17:00:53.596Z",
    "details" : "node_left [gyv9cseHQyWD-FjLTfSnvA]",
    "last_allocation_status" : "no_attempt"
  },
  "can_allocate" : "no",
  "allocate_explanation" : "cannot allocate because allocation is not permitted to any of the nodes",
  "node_allocation_decisions" : [
    {
      "node_id" : "-PYVqLCLTSKjriA6UZthuw",
      "node_name" : "instance-0000000012",
      "transport_address" : "10.43.1.6:19294",
      "node_attributes" : {
        "xpack.installed" : "true",
        "data" : "hot",
        "transform.node" : "true"
      },
      "node_decision" : "no",
      "deciders" : [
        {
          "decider" : "same_shard",
          "decision" : "NO",
          "explanation" : "the node is above the high watermark cluster setting [cluster.routing.allocation.disk.watermark.high=90.0%], having less than the minimum required [90.0%] free space, actual free: [9.13%]"
        }
      ]
    },
    ...

In this case, we ran out of storage on our nodes and shards cannot be assigned. Something we could further confirm using the CAT allocation API.
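
For example, a call like the following (the column selection here is just one reasonable choice) sorts nodes by disk usage so we can quickly spot the ones close to or above the watermarks:

GET _cat/allocation?v&s=disk.percent:desc&h=node,shards,disk.indices,disk.used,disk.avail,disk.total,disk.percent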

We can prevent hitting the disk watermarks with Kibana alerts for disk usage threshold. If we keep our nodes around 75% storage, we will have room for growth and keep our cluster healthier. Starting at 85% storage used (default value), we’ll have shard allocation limitations.
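
To double-check which watermark values are in effect on a given cluster, we can ask for the cluster settings including the defaults (the filter_path is optional and only trims the response):

GET _cluster/settings?include_defaults=true&filter_path=*.cluster.routing.allocation.disk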

We can't stress enough the importance of planning for data retention. Index Lifecycle Management (ILM) and the use of data tiers are of great help to keep storage in check. It's also worth reading the documentation on sizing your shards, which is key to keeping a healthy cluster.
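
As an illustration only, a minimal ILM policy that rolls over the hot index and deletes data after 30 days could look like the sketch below (the policy name, rollover thresholds, and retention are made-up values to adapt to your own use case):

# Hypothetical policy: rollover in the hot phase, delete after 30 days
PUT _ilm/policy/my-logs-policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {
            "max_size": "50gb",
            "max_age": "7d"
          }
        }
      },
      "delete": {
        "min_age": "30d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}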

Don't forget to use Elasticsearch snapshots, with snapshot lifecycle management, so you have backups for scenarios where you could otherwise lose data, and to set up your cluster for high availability.
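
A minimal snapshot lifecycle policy sketch, assuming a snapshot repository named my-backups is already registered (the policy name, schedule, and retention values below are only examples):

# Hypothetical nightly snapshot policy
PUT _slm/policy/nightly-snapshots
{
  "schedule": "0 30 1 * * ?",
  "name": "<nightly-snap-{now/d}>",
  "repository": "my-backups",
  "config": {
    "indices": ["*"],
    "include_global_state": true
  },
  "retention": {
    "expire_after": "30d",
    "min_count": 5,
    "max_count": 50
  }
}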

We also observe in the cluster health output above that the number of pending tasks looks a bit high. We could look further at those with a call to the CAT pending tasks API:

GET /_cat/pending_tasks?v

To find out what tasks we have pending.

insertOrder timeInQueue priority source 
412717 5.2s NORMAL restore_snapshot[2021.12.17-.ds-logs-2021.12.09-000030-trdkfuewwkjaca] 
412718 2s NORMAL ilm-execute-cluster-state-steps [{"phase":"cold","action":"searchable_snapshot","name":"wait-for-index-color"} => {"phase":"cold","action":"searchable_snapshot","name":"copy-execution-state"}]
...

Or we could use jq to aggregate the results of the pending cluster tasks API and more easily investigate which tasks are pending.

curl --silent --compressed  'https://localhost:9200/_cluster/pending_tasks' | jq '.tasks[].source' -cMr  | sed -e 's/\[.*//' | sort | uniq -c

Which could give us a better idea of what is causing pending tasks:

  1 restore_snapshot
 17 delete-index
183 ilm-execute-cluster-state-steps
  2 node-join
  4 update task state
  2 update-settings

This is just to showcase that, by knowing the available REST APIs, we can get very helpful information to assess our cluster's health and adjust our architecture accordingly.

Finally, have a peek at Elastic's support diagnostics. Those are the REST API calls that we use at Elastic Support to help our customers keep their clusters healthy. Or have a look at our blog “Why does Elastic support keep asking for diagnostic files”, which explains the underlying reasons.

2. Take Advantage of the Stack Monitoring & Alerting

The second recommendation is to plan for a separate monitoring cluster when in production. The REST APIs give us current information, but we are missing the historical data. If we send that data to a monitoring cluster, it will help us investigate incidents, forecast capacity, and more.
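
One common way to do this is to collect the monitoring data with Metricbeat's Elasticsearch module and ship it to the dedicated monitoring deployment. A minimal sketch, where the hosts, credentials, and monitoring endpoint are placeholders to replace with your own:

# modules.d/elasticsearch-xpack.yml – collect monitoring data from the production nodes
- module: elasticsearch
  xpack.enabled: true
  period: 10s
  hosts: ["http://localhost:9200"]

# metricbeat.yml – send the collected data to the separate monitoring cluster
output.elasticsearch:
  hosts: ["https://my-monitoring-cluster.example.com:9200"]
  username: "remote_monitoring_user"
  password: "${MONITORING_PASSWORD}"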

Kibana alerts for the Elastic Stack monitoring will notify us of potential issues.

One example we see a lot in Elastic support, where monitoring comes in handy, is node hot-spotting.

Suppose we have a cluster showing high CPU during ingestion on one or just a few of the data nodes while the others sit idle, and the affected nodes keep changing over time. We can use monitoring to confirm our suspicions.

Let’s have a look at the Kibana Stack Monitoring UI for our cluster. Out of 3 nodes, 1 is showing high CPU usage.

We could investigate further by going to the indices tab. We might find, as in this case, that during the window when we see high CPU usage on one node, we have an index ‘log-201998’ with a much higher ingest rate than the rest.

If this index has one primary shard, and it’s the only one with a high ingest rate, we could assign 3 primary shards, so the load is balanced between the 3 data instances we have in this example.
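
Since the number of primary shards can't be changed in place on an existing index (we would need the split API or a reindex), in practice we would set this for the next indices, for example through an index template. A minimal sketch, where the template name and index pattern are hypothetical:

# Hypothetical template so new log-* indices get 3 primary shards
PUT _index_template/my-logs-template
{
  "index_patterns": ["log-*"],
  "template": {
    "settings": {
      "number_of_shards": 3,
      "number_of_replicas": 1
    }
  }
}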

For bigger clusters with more than one hot index, the situation might not be so straightforward. We might need to limit the number of shards of those indices that can end up on each cluster node; check our docs on avoiding node hotspots.
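
For example, a per-index setting like the following caps how many shards of that index can land on any single node (the index name and the value 1 are illustrative; setting it too low can leave shards unassigned):

# Hypothetical example: at most one shard of this index per node
PUT /log-201998/_settings
{
  "index.routing.allocation.total_shards_per_node": 1
}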

Having a monitoring cluster will be of great help.

3. Proactively check Logs

One last recommendation is to proactively review the logs.

We can use Filebeat's Elasticsearch module to ingest our logs into the monitoring cluster we discussed in the previous section. We can even use the stack's log categorization capabilities to discover anything abnormal and alert us.
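
A minimal sketch of enabling the module, assuming Filebeat runs on each Elasticsearch node and the logs live in the default locations (adjust var.paths if not):

# Enable the Elasticsearch module
filebeat modules enable elasticsearch

# modules.d/elasticsearch.yml – ship the server logs
- module: elasticsearch
  server:
    enabled: true
    # var.paths: ["/var/log/elasticsearch/*.log", "/var/log/elasticsearch/*_server.json"]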

One example we see a lot with our customers is wrong mapping data types in indices.

Depending on how we configure our index mappings, we might be losing documents that come with conflicting types. If we check our cluster logs, we would see those errors and be able to act.

Let's take the example of a document field that sometimes has a numeric value in the source, and sometimes an alphanumeric one. If we use the default dynamic mappings, and we first ingest a document with a numeric value in a field we'll call “key”:

POST my-test-index/_doc
{
  "key": 0
}

Elasticsearch will interpret this field as a number, of type long.

GET my-test-index/_mapping/field/key
{
  "my-test-index" : {
    "mappings" : {
      "key" : {
        "full_name" : "key",
        "mapping" : {
          "key" : {
            "type" : "long"
          }
        }
      }
    }
  }
}

If the next document comes with, let's say, a UUID:

POST my-test-index/_doc
{
  "key": "123e4567-e89b-12d3-a456-426614174000"
}

In Kibana we would see the error.

{
  "error" : {
    "root_cause" : [
      {
        "type" : "mapper_parsing_exception",
        "reason" : "failed to parse field [key] of type [long] in document with id '4Wtyz30BtaU7QP7QuSQY'. Preview of field's value: '123e4567-e89b-12d3-a456-426614174000'"
      }
    ],
    "type" : "mapper_parsing_exception",
    "reason" : "failed to parse field [key] of type [long] in document with id '4Wtyz30BtaU7QP7QuSQY'. Preview of field's value: '123e4567-e89b-12d3-a456-426614174000'",
    "caused_by" : {
      "type" : "illegal_argument_exception",
      "reason" : "For input string: \"123e4567-e89b-12d3-a456-426614174000\""
    }
  },
  "status" : 400
}

This status code 400 is a non-retriable error: it means Elasticsearch will not index the document, and clients like Logstash or Elastic Agent won't retry.

If we search for our documents on the index, we only have the first one.

GET my-test-index/_search?filter_path=hits.total,hits.hits._source
{
  "hits" : {
    "total" : {
      "value" : 1,
      "relation" : "eq"
    },
    "hits" : [
      {
        "_source" : {
          "key" : 0
        }
      }
    ]
  }
}
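
One way to avoid this particular conflict, if we know “key” can hold arbitrary strings, is to map it explicitly as a keyword before indexing any data. A minimal sketch, using a hypothetical new index name:

# Hypothetical index with an explicit mapping for "key"
PUT my-test-index-v2
{
  "mappings": {
    "properties": {
      "key": {
        "type": "keyword"
      }
    }
  }
}

With this mapping, both the numeric and the UUID values are indexed as strings, and neither document is rejected.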

The error we see in Kibana would appear in our logs too. It's very common to find it in Logstash logs (which we can also ingest using Filebeat's Logstash module), and it could go unnoticed unless we check them.

Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"my-test-index-0000001", :routing=>nil, :_type=>"_doc"}, #<LogStash::Event:0x5662a9f3>], :response=>{"index"=>{"_index"=>"my-test-index-0000001", "_type"=>"_doc", "_id"=>"Qmt6z30BtaU7QP7Q4SXE", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse field [key] of type [long] in document with id 'Qmt6z30BtaU7QP7Q4SXE'

As a bonus, if we ingest logs, we will also have slow logs at hand in case we need to troubleshoot any search or ingest slowness.
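
For reference, search slow logs are enabled per index with thresholds like the ones below (the index name and thresholds are arbitrary examples; there are equivalent settings for the indexing slow log):

# Hypothetical slow log thresholds
PUT /my-test-index/_settings
{
  "index.search.slowlog.threshold.query.warn": "10s",
  "index.search.slowlog.threshold.query.info": "5s",
  "index.search.slowlog.threshold.fetch.warn": "1s"
}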

Bienvenido a los nuevos aires de búsqueda https://eyeveebee.dev/bienvenido-a-los-nuevos-aires-de-busqueda Thu, 15 Jul 2021 20:00:00 +0000


Like most modern organizations, your teams probably use more than 10 cloud-based applications every day, yet spend too many hours a day searching for the information they need across all of them. With the out-of-the-box capabilities of Elastic Workplace Search, you'll see how easy it is to put relevant content at your teams' fingertips with unified search across all the applications they rely on to get work done.

Presented at ElasticON Solutions Series Latin America 2021

The video recording is available here.

Te damos la bienvenida a una nueva forma de realizar búsquedas https://eyeveebee.dev/te-damos-la-bienvenida-a-una-nueva-forma-de-realizar-busquedas Tue, 22 Jun 2021 20:00:00 +0000


Like most modern organizations, your teams probably use more than 10 cloud-based applications every day, but spend too much time searching for the information they need across all of them. Thanks to the built-in features of Elastic Workplace Search, you'll see how simple it is to put relevant content within your teams' reach with unified search across all the applications they use to get their work done.

Presented at ElasticON Solutions Series EMEA 2021

The video recording is available here.

Deploying Elasticsearch and Kibana on Kubernetes with the Elastic Operator / ECK https://eyeveebee.dev/deploying-elasticsearch-and-kibana-on-kubernetes-with-the-elastic-operator-eck Thu, 14 Nov 2019 12:00:00 +0000


Managing an Elasticsearch deployment on Kubernetes can be challenging. Orchestrating a deployment or upgrading are not simple tasks. Our operator will help you easily manage simple or complex deployments like hot/warm/cold.

In this talk, Janko Strassburg and Imma Valls, Sr. Support Engineers at Elastic, will demonstrate how to use the new operator, Elastic Cloud on Kubernetes (ECK), to automate deployments and manage an Elasticsearch cluster.

Code available here.

Workshop Monitorizando con el Stack Elastic: Elige tu propia aventura! https://eyeveebee.dev/monitorizando-con-el-stack-elastic-elige-tu-propia-aventura Sun, 06 Oct 2019 09:00:00 +0000



In this workshop, we’ll give you the tools to start monitoring your infrastructure and applications using the Elastic Stack. Just bring your laptop with docker-compose installed, and we will guide you through the steps to start collecting and visualizing some logs and metrics.

And come prepared to Choose Your Own Adventure!

We will dive into:

  • System metrics: Collect infrastructure metrics.
  • Application logs: Collect structured logs in a central location.
  • Uptime monitoring: Ping services and actively monitor their availability and response time.

GitHub Repo: https://github.com/immavalls/elastic-stack-workshop

Software Crafters Barcelona 2019: https://softwarecrafters.barcelona/2019/index.html

Desplegando Elasticsearch y Kibana en Kubernetes con el Operator de Elastic (ECK) https://eyeveebee.dev/desplegando-elasticsearch-y-kibana-en-kubernetes-con-el-operator-de-elastic-eck Sun, 15 Sep 2019 10:00:00 +0000


Deploying Elasticsearch and Kibana on Kubernetes has never been so easy! Elastic Cloud on Kubernetes (ECK) is the operator developed by Elastic that lets you roll out deployments, upgrade versions, and scale them declaratively.

Video recording here

Creating frozen indices with the Elasticsearch Freeze index API https://eyeveebee.dev/creating-frozen-indices-with-the-elasticsearch-freeze-index-api Thu, 21 Mar 2019 08:32:00 +0000 First published on https://www.elastic.co/blog/creating-frozen-indices-with-the-elasticsearch-freeze-index-api

First, some context

Hot-warm architectures are often used when we want to get the most out of our hardware. They are particularly useful when we have time-based data, like logs, metrics, and APM data. Most of these setups rely on the fact that this data is read-only (after ingest) and that indices can be time- (or size-) based, so they can be easily deleted based on our desired retention period. In this architecture, we categorize Elasticsearch nodes into two types: ‘hot’ and ‘warm’.

Hot nodes hold the most recent data and thus handle all indexing load. Since recent data is usually the most frequently queried, these nodes will be the most powerful in our cluster: fast storage, high memory and CPU. But that extra power gets expensive, so it doesn’t make sense to store older data that isn’t queried as often on a hot node.

On the other hand, warm nodes will be the ones dedicated to long-term storage in a more cost-efficient way. Data on the warm nodes is not as likely to be queried often and data within the cluster will move from hot to warm nodes based on our planned retention (achieved through shard allocation filtering), while still being available online for queries.
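
As a quick sketch of how that filtering is typically wired up, assuming nodes are started with a custom node.attr.data attribute set to hot or warm (the attribute name and index name are illustrative):

# elasticsearch.yml on a warm node
node.attr.data: warm

# Move an older index to the warm nodes
PUT /logs-2019.02.01/_settings
{
  "index.routing.allocation.require.data": "warm"
}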

Starting with Elastic Stack 6.3, we’ve been building new features to enhance hot-warm architectures and simplify working with time-based data.

Data rollups were first introduced in version 6.3 to save storage. In time-series data, we want fine-grained detail for the most recent data, but it is very unlikely that we will need the same for historical data, where we will typically look at datasets as a whole. This is where rollups come in, and starting with version 6.5 we can create, manage, and visualize rollup data in Kibana.

Shortly after, we added source-only snapshots. These minimal snapshots will provide a significant reduction of snapshot storage, with the tradeoff of having to reindex data if we want to restore and query. This has been available since version 6.5.

In version 6.6, we released two powerful features, Index Lifecycle Management (ILM) and Frozen Indices.

ILM provides the means to automate the management of your indices over time. It simplifies moving indices from hot to warm, allows deleting indices when they are too old, and automates force merging indices down to a single segment.

And for the rest of this blog, we’ll talk about frozen indices.

Why freeze an index?

One of the biggest pain points with “old” data is that, regardless of age, indices still have a significant memory footprint. Even if we place them on cold nodes, they still use heap.

A possible solution could be to close the index. If we close an index, it won't require memory, but we will need to re-open it to run a search. Reopening indices incurs an operational cost and also requires the heap they were using before being closed.
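
For context, closing and reopening are single API calls (using a hypothetical index name):

POST /my-old-index/_close
POST /my-old-index/_open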

On each node, there is a memory (heap) to storage ratio that will limit the amount of storage available per node. It may vary from as low as 1:8 (memory:data) for memory intensive scenarios, to something close to 1:100 for less demanding memory use cases.

This is where frozen indices come in. What if we could have indices that are still open — keeping them searchable — but do not occupy heap? We could add more storage to data nodes that hold frozen indices, and break the 1:100 ratio, understanding the tradeoff that searches might be slower.

When we freeze an index, it becomes read-only and its transient data structures are dropped from memory. In turn, when we run a query on frozen indices, we will have to load the data structures to memory. Searching a frozen index doesn’t have to be slow. Lucene heavily depends on the filesystem cache which might have enough capacity to retain significant portions of your index in memory. In such a case searches are comparable in speed per shard. Yet, a frozen index is still throttled such that only one frozen shard is executing per node at the same time. This aspect might slow down searches compared to unfrozen indices.

How freezing works

Frozen indices are searched through a dedicated, search-throttled threadpool. By default it uses a single thread, to ensure that frozen indices are loaded into memory one at a time. If concurrent searches happen, they queue up, which adds additional protection to prevent nodes from running out of memory.

So, in a hot-warm architecture, we will now be able to transition indices from hot to warm, and then be able to freeze them before archiving or deleting them, allowing us to reduce our hardware requirements.

Before frozen indices, to reduce infrastructure cost we had to snapshot and archive our data, which added a significant operational cost: we would have to restore the data if we needed to search it again. Now, we can keep our historical data available for search without a significant memory overhead. And if we need to write again to an already frozen index, we can just unfreeze it.

How to freeze an Elasticsearch index

Frozen indices are easy to implement in your cluster, so let’s get started on how to use the Freeze index API and how to search on frozen indices.

First, we’ll start by creating some sample data on a test index.

POST /sampledata/_doc
{
    "name": "Jane",
    "lastname": "Doe"
}
POST /sampledata/_doc
{
    "name": "John",
    "lastname": "Doe"
}

And then check that our data has been ingested. This should return two hits:

GET /sampledata/_search

As a best practice, before you freeze an index it's recommended to first run a force_merge. This ensures that each shard has only a single segment on disk. It also provides much better compression and simplifies the data structures we need when running an aggregation or a sorted search request on the frozen index. Running searches on a frozen index with multiple segments can have a significant performance overhead, up to multiple orders of magnitude.

POST /sampledata/_forcemerge?max_num_segments=1

The next step is to just invoke a freeze on our index via the Freeze index API endpoint.

POST /sampledata/_freeze

Searching Frozen Indices

Now that it's frozen, you'll see that regular searches won't return results from it. The reason is that, to limit memory consumption per node, frozen indices are search-throttled, and searches skip throttled indices by default to prevent accidental slowdowns caused by targeting a frozen index by mistake. To include them, we explicitly add ignore_throttled=false to the request.

GET /sampledata/_search?ignore_throttled=false 
{ 
  "query": { 
    "match": { 
      "name": "jane" 
    } 
  } 
}

Now we can check the status of our new index, by running the following request:

GET _cat/indices/sampledata?v&h=health,status,index,pri,rep,docs.count,store.size

This will return a result similar to the following, with the index status being ‘open’:

health status index      pri rep docs.count store.size
green  open   sampledata   5   1          2     17.8kb

As mentioned above, we must protect the cluster from running out of memory, so there is a limit on the number of frozen indices we can concurrently load for search on a node. The number of threads in the search-throttled threadpool defaults to 1, with a default queue of 100. This means that if we run more than one request, they will be queued, up to a hundred of them. We can monitor the threadpool status, to check queues and rejections, with the following request:

GET _cat/thread_pool/search_throttled?v&h=node_name,name,active,rejected,queue,completed&s=node_name

Which should return a response similar to:

node_name             name             active rejected queue completed
instance-0000000000   search_throttled      0        0     0        25
instance-0000000001   search_throttled      0        0     0        22
instance-0000000002   search_throttled      0        0     0         0

Frozen indices might be slower, but they can be pre-filtered in a very efficient manner. It is also recommended to set the request parameter pre_filter_shard_size to 1.

GET /sampledata/_search?ignore_throttled=false&pre_filter_shard_size=1
{
 "query": {
   "match": {
     "name": "jane"
   }
 }
}

This will not add significant overhead to the query, and it lets us take advantage of the common scenario where not all shards match. For example, when searching a date range over time-series indices, shards whose data falls outside the range can be skipped entirely.

How to write to a frozen Elasticsearch index

What will happen if we try to write on an already frozen index? Let’s go for it and find out.

POST /sampledata/_doc
{
  "name": "Janie",
  "lastname": "Doe"
}

What happened? Frozen indices are read-only, so writing is blocked. We can check this in the index settings:

GET /sampledata/_settings?flat_settings=true

Which will return:

{
 "sampledata" : {
   "settings" : {
     "index.blocks.write" : "true",
     "index.frozen" : "true",
     ....
   }
 }
}

We have to use the Unfreeze index API, invoking the unfreeze endpoint on the index.

POST /sampledata/_unfreeze

And now we’ll be able to create a third document and search for it.

POST /sampledata/_doc
{
 "name": "Janie",
 "lastname": "Doe"
}
GET /sampledata/_search
{
 "query": {
   "match": {
     "name": "janie"
   }
 }
}

Unfreezing should be done only in exceptional situations. And remember to always run a `force_merge` before freezing the index again, to ensure optimal performance.

Using frozen indices in Kibana

To begin with, we will need to load some sample data, like the sample flight data.

Click on the “Add” button for Sample flight data.

[Screenshot: adding the Sample flight data set in Kibana]

We should now be able to see the loaded data by clicking the “View data” button. The dashboard will be similar to this one.

[Screenshot: the sample flight data dashboard]

Now we can test freezing the index:

POST /kibana_sample_data_flights/_forcemerge?max_num_segments=1
POST /kibana_sample_data_flights/_freeze

And if we go back to our dashboard, we’ll notice that the data has apparently “disappeared”.

[Screenshot: the flight dashboard showing no data after freezing the index]

We need to tell Kibana to allow searches on frozen indices, which is disabled by default.

Go to Kibana Management, choose Advanced Settings. In the Search section, you will find that “Search in frozen indices” is disabled. Toggle to enable and save the changes.

[Screenshot: the “Search in frozen indices” setting in Kibana Advanced Settings]

And the flight’s dashboard will show the data again.

Wrapping up

Frozen indices are a very powerful tool in hot-warm architectures. They enable a more cost-effective solution for increased retention while keeping data searchable online. I recommend that you test your search latency with your own hardware and data, to come up with the right sizing and expected search latency for your frozen indices.

Check out the Elasticsearch documentation to learn more about the Freeze index API. And as always, if you have any questions, reach out on our Discuss forums. Happy freezing!

Consejos de los expertos para actualizar el Stack ELK https://eyeveebee.dev/consejos-de-los-expertos-para-actualizar-el-stack-elk Fri, 15 Mar 2019 09:57:00 +0000


With every release, whether minor or major, the Elastic Stack / ELK Stack brings powerful new features to all of its products (Elasticsearch, Kibana, Beats, and Logstash). Upgrading the Elastic Stack to the latest version lets you take advantage of these improvements and new features. However, software upgrades can be a daunting task, especially when the software in question serves critical applications. But it doesn't have to be. Join our webinar to learn how to make upgrading hassle-free by following a few simple tips.

Barcelona Elastic Meetup – Release 6.5 features https://eyeveebee.dev/another-talk-about Thu, 31 Jan 2019 20:00:00 +0000

Barcelona Elastic Meetup January 31 2019 – Release 6.5 features: Canvas, Kibana spaces, Rollups and InfraUI.
