Reporting

xivo_replic does not replicate call data

After a ‘no space left on device’ error and a restart of the containers, it can happen that the data from XiVO is no longer replicated. The xivocc_replic_1 container logs show the following error:

xivo_replic_1       | liquibase.exception.LockException: Could not acquire change log lock.  Currently locked by fe80:0:0:0:42:acff:fe11:8%eth0 (fe80:0:0:0:42:acff:fe11:8%eth0) since 4/10/17 3:28 PM
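
You can watch for this error with the same logs command used elsewhere in this guide (the service name xivo_replic matches the one used in the restart step below):

    xivocc-dcomp logs -tf --tail=100 xivo_replic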

The error is caused by a stale lock in the Liquibase changelog lock table of the xivo_stats database. Follow the steps below to fix the issue:

Warning

This problem should not normally happen. Make sure you understand what you are doing; do not follow this procedure blindly.

  1. With the xivo_stats, xivo_replic and pack_reporting containers stopped, check whether there is still an active lock (a sketch of the surrounding commands follows this list):

    xivo_stats=# select * from databasechangeloglock;
     id | locked |       lockgranted       |                            lockedby
    ----+--------+-------------------------+-----------------------------------------------------------------
      1 | t      | 2017-04-10 15:28:10.684 | fe80:0:0:0:42:acff:fe11:8%eth0 (fe80:0:0:0:42:acff:fe11:8%eth0)
    
  2. If so, delete the lock:

    xivo_stats=# truncate databasechangeloglock;
    xivo_stats=# select * from databasechangeloglock;
     id | locked | lockgranted | lockedby
    ----+--------+-------------+----------
    
  3. Restart the containers:

    xivocc-dcomp start xivo_replic
    xivocc-dcomp start pack_reporting
    xivocc-dcomp start xivo_stats
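
A minimal sketch of the surrounding commands for step 1, assuming xivocc-dcomp supports the usual docker-compose stop subcommand and that PostgreSQL runs in a container named xivocc_pgxivocc_1 with a postgres superuser (adjust the names to your deployment):

    # stop the services that use the xivo_stats database
    xivocc-dcomp stop xivo_replic
    xivocc-dcomp stop pack_reporting
    xivocc-dcomp stop xivo_stats

    # open a psql session on the xivo_stats database
    # (container and user names are assumptions)
    docker exec -it xivocc_pgxivocc_1 psql -U postgres xivo_stats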
    

Now, if the xivo_stats container is stuck in state Exit (126), try a docker rm -v {xivo_stats container id} followed by a xivocc-dcomp up -d, as sketched below.
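
For example, a sketch of that recovery sequence (the grep filter is only illustrative):

    # find the id of the stuck xivo_stats container
    docker ps -a | grep xivo_stats

    # remove it together with its anonymous volumes
    docker rm -v {xivo_stats container id}

    # recreate and start the containers
    xivocc-dcomp up -d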

Totem panels (Elastic, Logstash and Kibana)

To debug the ELK stack, bear in mind the Data flow.

Logs

  • Logstash:

    xivocc-dcomp logs -tf --tail=10 logstash
    
  • Elasticsearch:

    xivocc-dcomp logs -tf --tail=10 elasticsearch
    
  • Kibana:

    xivocc-dcomp logs -tf --tail=10 kibana
    

When looking at the Logstash logs, you should normally see:

  • the SQL query being run every minute:

    logstash_1 | 2019-09-18T11:59:00.475644639Z [2019-09-18T13:59:00,475][INFO ][logstash.inputs.jdbc     ] (0.016828s) SELECT count(*) AS "count" FROM (select
    logstash_1 | 2019-09-18T11:59:00.475688759Z     ql.id as id,
    logstash_1 | 2019-09-18T11:59:00.475694836Z     cast(time as timestamp) as queuetime,
    logstash_1 | 2019-09-18T11:59:00.475698712Z     ql.callid,
    ...
    logstash_1 | 2019-09-18T11:59:00.475757232Z from
    logstash_1 | 2019-09-18T11:59:00.475760377Z     queue_log as ql
    ...
    logstash_1 | 2019-09-18T11:59:00.475782243Z where
    logstash_1 | 2019-09-18T11:59:00.475785317Z     ql.id > 20503
    logstash_1 | 2019-09-18T11:59:00.475788795Z order by ql.id asc) AS "t1" LIMIT 1
    
  • in the SQL query you should see the where condition ql.id > SOME_ID changing between runs, here from 20503 to 20602 (a complementary check on the source table follows this list):

    logstash_1 | 2019-09-18T12:00:00.475644639Z [2019-09-18T14:00:00,475][INFO ][logstash.inputs.jdbc     ] (0.016828s) SELECT count(*) AS "count" FROM (select
    logstash_1 | 2019-09-18T12:00:00.475688759Z     ql.id as id,
    logstash_1 | 2019-09-18T12:00:00.475694836Z     cast(time as timestamp) as queuetime,
    logstash_1 | 2019-09-18T12:00:00.475698712Z     ql.callid,
    ...
    logstash_1 | 2019-09-18T12:00:00.475757232Z from
    logstash_1 | 2019-09-18T12:00:00.475760377Z     queue_log as ql
    ...
    logstash_1 | 2019-09-18T12:00:00.475782243Z where
    logstash_1 | 2019-09-18T12:00:00.475785317Z     ql.id > 20602
    logstash_1 | 2019-09-18T12:00:00.475788795Z order by ql.id asc) AS "t1" LIMIT 1
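
If the id in the where condition stops changing, you can compare it against the newest row of the source table. A minimal check, assuming you have a psql session open on the database that holds queue_log:

    -- highest id currently in queue_log; when Logstash is caught up,
    -- this should be close to the id used in its where condition
    select max(id) from queue_log;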
    

How can I know if data is sent to Elasticsearch?

See the Check the status section.
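
In addition, you can ask Elasticsearch itself about its indices and document counts. A minimal check, assuming Elasticsearch listens on the default port 9200 of the XiVO CC host:

    # overall cluster state
    curl -s 'http://localhost:9200/_cluster/health?pretty'

    # per-index document counts; docs.count should grow as calls are processed
    curl -s 'http://localhost:9200/_cat/indices?v'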