
  1. Use cases
  2. Behavior
  3. Configuration in ephemeral mode
  4. Use of ephemeral mode
  5. Configuration in chunked mode
  6. Use of chunked mode
  7. Advanced configuration
  8. Accelerator with datalog

Warp 10 Accelerator

Use cases

The Warp 10 Accelerator is an option of the standalone version of Warp 10 that adds an in-memory cache to the instance. That cache covers a certain period of time and can be used to store data for ultra-fast access.

You may need it if you often make these kinds of requests:

  • Fetching a lot of "fresh" data on lots of GTS. Reading a long time period of a single GTS is fast, but reading a short period on 100000 GTS is slow because it requires opening lots of files.
  • Fetching the last datapoint of a large number of GTS.

This feature has been available since release 2.5.0.

Behavior

The Accelerator cache behaves much like the in-memory backend:

  • It stores the data for each GTS in chunks which each cover a certain time period. The total number of chunks for each GTS is defined at configuration time. The most recent chunk covers the current time, which means that no data which should end up in a future chunk will be stored. Also, as chunks become too old (i.e. they exceed the chunk count threshold when the most recent chunk changes), they are discarded and their content is no longer available. If you FETCH data between two timestamps and both the start and end timestamps fall within the time range covered by the chunks, the FETCH will use the accelerator.
  • It can work in ephemeral mode: only the last recorded value is kept, regardless of the chunk count and length. When you FETCH a single datapoint with MAXLONG as the end timestamp, FETCH will use the accelerator.

The Accelerator comes with a set of useful WarpScript functions (see the example after this list):

  • The ACCEL.REPORT function returns information about the Warp 10 Accelerator, such as its configuration and whether or not it was used for the last FETCH operation.
  • The ACCEL.NOCACHE function disables accessing the in-memory data for update, fetch and delete operations.
  • The ACCEL.CACHE function enables accessing the in-memory data for update, fetch and delete operations.
  • The ACCEL.NOPERSIST function disables accessing the persistent (disk based) data for update, fetch and delete operations.
  • The ACCEL.PERSIST function enables accessing the persistent (disk based) data for update, fetch and delete operations.
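
For instance, the following sketch (assuming a valid read token in 'YourReadToken' and an existing 'temperature' GTS) bypasses the in-memory cache for a single FETCH, forcing it to read from persistent storage, then restores the default behavior:

// Disable the in-memory cache for the rest of this execution
ACCEL.NOCACHE
{ 'token' 'YourReadToken' 'class' 'temperature' 'labels' { } 'end' MAXLONG 'count' 1 } FETCH DROP
// Re-enable the in-memory cache
ACCEL.CACHE

Running ACCEL.REPORT afterwards lets you confirm whether the cache was used for the last FETCH.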

The Accelerator can preload data from disk when your Warp 10 instance starts.

Configuration in ephemeral mode

In the conf.d/30-in-memory.conf file, you need to enable the accelerator and choose the preloading options:

// accelerator in ephemeral mode with preloading
accelerator = true
accelerator.ephemeral = true
accelerator.preload = true
accelerator.preload.poolsize = 8
accelerator.preload.batchsize = 1000

Restart your instance. Depending on the number of GTS and the type of storage, startup may take noticeably longer. Preloading millions of GTS from a spinning disk is not a good idea.

Run this WarpScript on your instance:

ACCEL.REPORT

The result should contain:

"status": true
"chunkcount": 1,
"chunkspan": 9223372036854775807,

Ephemeral mode is a special mode with a single chunk of MAXLONG span. If status is false, the accelerator is not correctly configured.
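
As a quick sanity check, a minimal sketch (assuming the report map exposes the 'status' key shown above) makes the script fail when the accelerator is not active:

// Push the report, extract the 'status' flag and fail if it is not true
ACCEL.REPORT 'status' GET ASSERT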

Use of ephemeral mode

Run a WarpScript to fetch the last datapoint of one or several GTS on your instance, drop the result, and look at the ACCEL.REPORT result:

{ 'token' 'YourReadToken' 'class' 'temperature' 'labels' { } 'end' MAXLONG 'count' 1 } FETCH DROP

ACCEL.REPORT

If the request did use the in-memory accelerator, the result should contain:

"accelerated": true

Any other count or end value won't use the accelerator.
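
For instance, a request asking for more than one datapoint should not be accelerated in ephemeral mode; a sketch with the same hypothetical token and class:

{ 'token' 'YourReadToken' 'class' 'temperature' 'labels' { } 'end' MAXLONG 'count' 10 } FETCH DROP
ACCEL.REPORT

The report should then contain "accelerated": false.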

Configuration in chunked mode

This mode keeps the recent history in memory. If you need a five-minute history from now, you can define 6 chunks of 1 minute: every minute, data starts being written to a new chunk and the oldest one is removed. Since the most recent chunk only covers the current, partially elapsed minute, 6 chunks are needed to guarantee at least 5 full minutes of history. With the default time unit of microseconds, one minute is 60000000 time units, hence the chunk.length value below.

In the conf.d/30-in-memory.conf file, you need to enable the accelerator and choose the preloading options:

// 5 minutes accelerator with preloading
accelerator = true
accelerator.ephemeral = false
accelerator.chunk.length = 60000000
accelerator.chunk.count = 6
accelerator.preload = true
accelerator.preload.poolsize = 8
accelerator.preload.batchsize = 1000

Restart your instance. Depending on the number of GTS and the type of storage, startup may take noticeably longer. Preloading millions of GTS from a spinning disk is not a good idea.

Run this WarpScript on your instance:

ACCEL.REPORT

The result should contain:

"status": true
"chunkcount": 6,
"chunkspan": 60000000,

If status is false, the accelerator is not correctly configured.

Use of chunked mode

Run a WarpScript to fetch the last two minutes of data of one or several GTS on your instance, drop the result, and look at the ACCEL.REPORT result:

{ 'token' 'YourReadToken' 'class' 'temperature' 'labels' { } 'end' NOW 'timespan' 2 m } FETCH DROP
ACCEL.REPORT

To use the chunked accelerator, the FETCH request must specify time bounds with end, start, or timespan. If the time bounds fall within the time range covered by the configured chunks, you will see "accelerated": true in the ACCEL.REPORT output.

In the example configuration with 6 chunks of 1 minute:

  • A fetch between now and 5 minutes ago will be accelerated.
  • A fetch between now and 6 minutes ago will never be accelerated.
  • A fetch between now and 5.5 minutes ago may be accelerated: there is one chance in two that the start timestamp still falls within the oldest chunk.
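
A sketch contrasting the first two cases, assuming the 6 × 1 minute configuration above and the same hypothetical read token:

// Both bounds fall within the covered range: should be accelerated
{ 'token' 'YourReadToken' 'class' 'temperature' 'labels' { } 'end' NOW 'timespan' 5 m } FETCH DROP ACCEL.REPORT
// The start timestamp is older than the oldest chunk: not accelerated
{ 'token' 'YourReadToken' 'class' 'temperature' 'labels' { } 'end' NOW 'timespan' 6 m } FETCH DROP ACCEL.REPORT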

Advanced configuration

By default, reads, writes and deletes are done both in persistent storage and in the accelerator cache. This behavior can be adjusted with the accelerator.default.write, accelerator.default.read and accelerator.default.delete configuration keys, and fine-tuned request per request within the current WarpScript execution with the ACCEL.NOCACHE, ACCEL.CACHE, ACCEL.NOPERSIST and ACCEL.PERSIST functions.
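
For example, the following sketch (again assuming a valid read token) serves a FETCH from the in-memory cache only, without touching persistent storage, then restores the default behavior:

// Disable access to persistent (disk based) data for the rest of this execution
ACCEL.NOPERSIST
{ 'token' 'YourReadToken' 'class' 'temperature' 'labels' { } 'end' NOW 'timespan' 2 m } FETCH DROP
// Re-enable access to persistent data
ACCEL.PERSIST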

Memory management of the chunks is done by an internal garbage collector (not the JVM one). By default, it tries to shrink the allocated buffers in each chunk every chunkspan period. This collection activity can be limited and fine-tuned with accelerator.gcperiod and accelerator.gc.maxalloc. Do not configure these parameters unless you fully understand the underlying code: they are highly use-case dependent, and the default values work out of the box.

Accelerator with datalog

The accelerator feature is compatible with datalog and load balancing. You may enable the accelerator on one instance or on both.