Re-Calculation of CRC checksum using path-info-db-consistency-check

Hi everyone!
We’re currently copying datasets to new network drives and would like to know how to efficiently validate the existing checksums of the datasets in path-info-db.

I saw these lines in the DSS service.properties file:

path-info-db-consistency-check.label = Path Info DB consistency check
path-info-db-consistency-check.dataset-types = .*
path-info-db-consistency-check.class = ch.systemsx.cisd.openbis.dss.generic.server.plugins.standard.DataSetAndPathInfoDBConsistencyCheckProcessingPlugin

The documentation describes this as a processing task that checks the consistency between the data store and the meta information stored in the PathInfoDB, and sends out an email which contains all differences found.

  1. How do I re-trigger this service once I have mounted the new disk?
  2. Is the consistency check output saved in a specific log file, or will any inconsistency found during the check be logged in “datastore_server_log.txt”?

Thanks,
Filip

Hi Filip,

The output of that maintenance task is stored in the default DSS log file, which by default is datastore_server_log.txt. To control when the task runs, you can specify a run-schedule, which accepts a Unix cron-like schedule:

maintenance-plugins = data-set-and-path-info-db-consistency-check-task [, … ]
data-set-and-path-info-db-consistency-check-task.class = ch.systemsx.cisd.etlserver.path.DataSetAndPathInfoDBConsistencyCheckTask
data-set-and-path-info-db-consistency-check-task.run-schedule = cron: 0 0 22 5 * *
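If you want a one-off run right after mounting the new disk rather than a recurring schedule, maintenance tasks can, to the best of my knowledge, be configured to execute a single time at DSS startup; the execute-only-once flag below is an assumption, so please verify it against the docs for your openBIS version:

```
# Sketch: run the consistency check once when the DSS starts
# (execute-only-once is assumed to be supported by your openBIS version;
# remove or comment this block out again after the run)
maintenance-plugins = data-set-and-path-info-db-consistency-check-task
data-set-and-path-info-db-consistency-check-task.class = ch.systemsx.cisd.etlserver.path.DataSetAndPathInfoDBConsistencyCheckTask
data-set-and-path-info-db-consistency-check-task.execute-only-once = true
```

After the run completes you would restore your regular run-schedule configuration and restart the DSS.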

All other parameters are described here:
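As for your second question: since the reports land in the same DSS log, a plain grep is enough to pull them out. A small self-contained sketch; the stand-in log lines and the exact message wording are assumptions, so grep for whatever tag your datastore_server_log.txt actually uses:

```shell
# Extract consistency-check output from the DSS log.
log=datastore_server_log.txt

# Stand-in log content, only so this example is self-contained;
# a real datastore_server_log.txt will look different.
printf '%s\n' \
  'INFO  [DataSetAndPathInfoDBConsistencyCheckTask] - check finished' \
  'ERROR [NOTIFY.DataSetAndPathInfoDBConsistencyCheckTask] - inconsistencies found' \
  > "$log"

# Print every line produced by the consistency-check task.
grep 'DataSetAndPathInfoDBConsistencyCheckTask' "$log"
```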

Thanks, Richard, really helpful answer! I’ll get back with a detailed report of how I implemented it soon, so please keep this thread open.