Article Directory

- 1. Modify the `.env` configuration in the root directory of the Sentry program
- 2. Soft-clean the data
- 3. Clean up PostgreSQL
- 4. Add a scheduled cleanup with `crontab`; adjust the schedule to your data volume
- 5. If the step 3 database cleanup takes too long to finish, create an empty `nodestore_node` table directly, then run the step 3 cleanup again
1. Modify the `.env` configuration in the root directory of the Sentry program
SENTRY_EVENT_RETENTION_DAYS=14
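This edit can also be scripted with `sed`; a minimal sketch, demonstrated on a temporary file (in practice, point it at the `.env` in the Sentry root and run `docker-compose up -d` afterwards so the new value takes effect):

```shell
# Demo on a temporary copy; replace "$env_file" with the real .env path.
env_file=$(mktemp)
printf 'SENTRY_EVENT_RETENTION_DAYS=90\n' > "$env_file"

# Pin the retention to 14 days in place.
sed -i 's/^SENTRY_EVENT_RETENTION_DAYS=.*/SENTRY_EVENT_RETENTION_DAYS=14/' "$env_file"

grep SENTRY_EVENT_RETENTION_DAYS "$env_file"
```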
2. Soft-clean the data
Enter the worker container:
docker exec -it sentry_onpremise_worker_1 /bin/bash
Choose how many days of data to keep. `cleanup` removes PostgreSQL rows with the `DELETE` command, but `DELETE`, `UPDATE`, and similar operations only mark the affected rows as dead tuples; they do not actually release disk space:
sentry cleanup --days 14
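To see how many dead rows the cleanup left behind, you can query PostgreSQL's statistics view (a sketch; run it inside `psql` in the postgres container):

```sql
-- Dead vs. live tuples per table; cleanup's DELETEs show up as n_dead_tup
-- until a vacuum reclaims them.
SELECT relname, n_live_tup, n_dead_tup
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 10;
```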
3. Clean up PostgreSQL
Enter the PostgreSQL container:
docker exec -it sentry_onpremise_postgres_1 /bin/bash
Run cleanup:
vacuumdb -U postgres -d postgres -v -f --analyze
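Note that `-f` runs `VACUUM FULL`, which rewrites each table and holds an exclusive lock on it for the duration. If you only need dead rows made reusable, rather than disk space returned to the operating system, a plain vacuum is far less disruptive; a sketch inside `psql`:

```sql
-- Reclaims dead tuples for reuse without VACUUM FULL's exclusive lock;
-- disk space is not returned to the OS, but bloat stops growing.
VACUUM (VERBOSE, ANALYZE) nodestore_node;
```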
4. Add a scheduled cleanup with `crontab`; adjust the schedule to your data volume
0 16 * * 5 cd /App/sentry && { time docker-compose run --rm worker cleanup --days 14; } &> /tmp/sentry-cleanup.log
0 16 * * 6 { time docker exec -i $(docker ps --format "{{.Names}}" | grep postgres) vacuumdb -U postgres -d postgres -v -f --analyze; } &> /tmp/sentry-vacuumdb.log
5. If the step 3 database cleanup takes too long to finish, create an empty `nodestore_node` table directly, then run the step 3 cleanup again
Enter the PostgreSQL container:
docker exec -it sentry_onpremise_postgres_1 /bin/bash
Log in to the PostgreSQL database:
su - postgres
psql
You can check the disk space each table occupies before and after dropping the table; `nodestore_node` is usually the table taking the most disk space:
SELECT
table_schema || '.' || table_name AS table_full_name,
pg_size_pretty(pg_total_relation_size('"' || table_schema || '"."' || table_name || '"')) AS size
FROM information_schema.tables
ORDER BY
pg_total_relation_size('"' || table_schema || '"."' || table_name || '"') DESC limit 10;
Creating a new table and swapping it in by renaming lets you delete the data without interrupting the running Sentry service. The operation can take a long time, so run it in the background with `tmux` or `screen`:
ALTER TABLE nodestore_node RENAME TO nodestore_node_old;
CREATE TABLE nodestore_node (LIKE nodestore_node_old INCLUDING ALL);
ALTER TABLE nodestore_node_old DISABLE TRIGGER ALL;
DROP TABLE nodestore_node_old CASCADE;
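A variant of the steps above: wrapping the rename-and-recreate in a single transaction means there is effectively no moment at which `nodestore_node` is missing, while the slow `DROP` is deferred until convenient (a sketch of the same technique):

```sql
BEGIN;
ALTER TABLE nodestore_node RENAME TO nodestore_node_old;
CREATE TABLE nodestore_node (LIKE nodestore_node_old INCLUDING ALL);
COMMIT;

-- Later, once nothing references the old table, drop it in the background:
ALTER TABLE nodestore_node_old DISABLE TRIGGER ALL;
DROP TABLE nodestore_node_old CASCADE;
```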