- Create a local postgres database: `sudo -u postgres psql`, then `create database hh;`
- Initialize the database structure: `sudo -u postgres psql hh < structure.sql`
- Dump `api_operations` table data into a local TSV file: `psql -h apisnoop-db-host -U apisnoop-db-user hh -tAc "\copy (select * from api_operations) to '/tmp/api_operations.tsv'"`
- Dump `audit_events` table data into a local TSV file: `psql -h apisnoop-db-host -U apisnoop-db-user hh -tAc "\copy (select * from audit_events) to '/tmp/audit_events.tsv'"`
- Restore `api_operations` table data locally: `sudo -u postgres psql hh -tAc "\copy api_operations from '/tmp/api_operations.tsv'"`
- Restore `audit_events` table data locally: `sudo -u postgres psql hh -tAc "\copy audit_events from '/tmp/audit_events.tsv'"`
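If you want a quick sanity check that the restore worked, something like the sketch below counts the rows in both tables. It is not part of this repository; the connect string and the use of the `lib/pq` driver are assumptions, so adjust them to your local setup.

```go
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/lib/pq"
)

func main() {
	// Hypothetical local connect string; adjust it to match your setup.
	db, err := sql.Open("postgres", "host=/var/run/postgresql dbname=hh sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	for _, table := range []string{"api_operations", "audit_events"} {
		var n int
		// Table names come from the fixed list above, so plain concatenation is safe here.
		if err := db.QueryRow("select count(*) from " + table).Scan(&n); err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s: %d rows\n", table, n)
	}
}
```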
Note that all mass-update scripts only update records whose `op_id` is null, so they can be called iteratively.
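The pattern behind that guarantee looks roughly like the sketch below. It is not the actual rmatch code; the `id` primary-key column and the idea of matching one event at a time are assumptions for illustration only.

```go
// Illustrative only; not part of this repository.
package sketch

import "database/sql"

// applyMatch shows the guard that makes the mass updates safe to re-run: it
// assigns opID to a single audit event, but only while that event's op_id is
// still null, so rows matched by an earlier pass are never touched again.
func applyMatch(db *sql.DB, eventID int64, opID string) (bool, error) {
	res, err := db.Exec(
		`update audit_events set op_id = $1 where id = $2 and op_id is null`,
		opID, eventID)
	if err != nil {
		return false, err
	}
	n, err := res.RowsAffected()
	return n == 1, err
}
```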
- Run `make`, then `time sudo -u postgres ./rmatch`. That will update the `op_id` column of the `audit_events` table.
  - You can use `CONN='....'` to specify your own custom connect string; see example connect strings in `lib.go` and the sketch after this list.
  - You can use `ANALYSIS=1` to get information about how many records had 0, 1, 2, ... matches.
  - You can use `DBG=1` to get a lot more verbose output.
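A rough idea of how these environment variables are typically wired up is sketched below. The real handling (and the real default connect strings) lives in `lib.go`; the default shown here is only an example.

```go
// Illustrative only; see lib.go and rmatch for the actual logic.
package sketch

import (
	"database/sql"
	"os"

	_ "github.com/lib/pq"
)

var (
	analysis = os.Getenv("ANALYSIS") == "1" // report how many records had 0, 1, 2, ... matches
	debug    = os.Getenv("DBG") == "1"      // much more verbose output
)

func openDB() (*sql.DB, error) {
	conn := os.Getenv("CONN")
	if conn == "" {
		// Example connect string only; see lib.go for the project's real examples.
		conn = "host=/var/run/postgresql dbname=hh sslmode=disable"
	}
	return sql.Open("postgres", conn)
}
```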
- Generate an SQL file to be run on the original database: `make`, then `time sudo -u postgres ./gensql > update.sql` (a sketch of the generated output follows this list).
- Run the script on the original database: `psql -h apisnoop-db-host -U apisnoop-db-user hh < update.sql`
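The generated file is plain SQL, and conceptually the generator does something like the sketch below. This is a guess at the shape of the output, not a description of gensql's actual implementation; the `match` type, its fields, and the `id` column are hypothetical.

```go
// Illustrative only; the statements actually produced by gensql may differ.
package sketch

import (
	"fmt"
	"io"
)

// match pairs a locally computed op_id with the audit event it belongs to.
type match struct {
	eventID int64
	opID    string
}

// writeUpdateSQL emits one guarded UPDATE per locally matched event, so the
// resulting file only touches rows that are still unmatched on the remote.
func writeUpdateSQL(w io.Writer, matches []match) {
	for _, m := range matches {
		// Quoting is simplified here; real output must escape opID properly.
		fmt.Fprintf(w,
			"update audit_events set op_id = '%s' where id = %d and op_id is null;\n",
			m.opID, m.eventID)
	}
}
```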
You can also generate TSV dumps from your local database and restore them on the remote, but the above solution is faster and better:
- Dump `api_operations` table data into a local TSV file: `sudo -u postgres psql hh -tAc "\copy (select * from api_operations) to '/tmp/new_api_operations.tsv'"`
- Dump `audit_events` table data into a local TSV file: `sudo -u postgres psql hh -tAc "\copy (select * from audit_events) to '/tmp/new_audit_events.tsv'"`