greenplum-db / gpbackup

GPDB Backup Utility

License: Apache License 2.0

Makefile 0.23% Go 93.07% Shell 5.24% PLpgSQL 0.73% Python 0.72%
greenplum-backup greenplum-restore greenplum-export greenplum-import greenplum-data-migration

gpbackup's Introduction

Greenplum Backup

gpbackup and gprestore are Go utilities for performing Greenplum Database backups. They are currently under active development.

Pre-Requisites

The project requires Go version 1.11 or higher. Follow the official Go documentation for installation, usage, and configuration instructions. The project also depends on sqlite3, which is installed by default on many platforms; if it is not present on your system, you must install it.

Downloading

go get github.com/greenplum-db/gpbackup/...

This will place the code in $GOPATH/src/github.com/greenplum-db/gpbackup.

Building and installing binaries

Make the gpbackup directory your current working directory and run:

make depend
make build

The build target will put the gpbackup and gprestore binaries in $HOME/go/bin.

This will also attempt to copy gpbackup_helper to the Greenplum segments (retrieving hostnames from gp_segment_configuration). Pay attention to the output, as it indicates whether this operation was successful.

make build_linux cross-compiles on macOS, producing Linux binaries.

make install will scp the gpbackup_helper binary (used with the --single-data-file flag) to all hosts.

Validation and code quality

Test setup

For Greenplum Database 6 or higher, several tests require the dummy_seclabel Greenplum contrib module. This module exists only to support regression testing of the SECURITY LABEL statement and is not intended for production use. Use the following commands to install the module.

pushd $(find ~/workspace/gpdb -name dummy_seclabel)
    make install
    gpconfig -c shared_preload_libraries -v dummy_seclabel
    gpstop -ra
    gpconfig -s shared_preload_libraries | grep dummy_seclabel
popd

Test execution

NOTE: The integration and end_to_end tests require a running Greenplum Database instance.

To run all tests except end-to-end (linters, unit, and integration), use

make test

To run only unit tests, use

make unit

To run only integration tests, use

make integration

Integration test requirements

  • Running GPDB instance
  • GPDB's gpcloud extension, installed via make -C gpcontrib/gpcloud/ install
  • GPDB configured with --with-perl

To run end-to-end tests (requires a running GPDB instance), use

make end_to_end

We provide the following targets to help developers ensure their code fits Go standard formatting guidelines.

To run a linting tool that checks for basic coding errors, use

make lint

This target runs gometalinter.

Note: The lint target will fail if code is not formatted properly.

To automatically format your code and add/remove imports, use

make format

This target runs goimports and gofmt. We will only accept code that has been formatted using this target or an equivalent gofmt call.

Running the utilities

The basic command for gpbackup is

gpbackup --dbname <your_db_name>

The basic command for gprestore is

gprestore --timestamp <YYYYMMDDHHMMSS>

Run --help with either command for a complete list of options.

Cleaning up

To remove the compiled binaries and other generated files, run

make clean

More Information

The Greenplum Backup wiki for this project has several articles providing a more in-depth explanation of certain aspects of gpbackup and gprestore.

How to Contribute

See CONTRIBUTING.md file.

Code Formatting

We use goimports to format Go code; see https://godoc.org/golang.org/x/tools/cmd/goimports. The following command formats the gpbackup codebase, excluding the vendor directory, and lists the files it updates.

goimports -w -l $(find . -type f -name '*.go' -not -path "./vendor/*")

Troubleshooting

Dummy Security Label module is not installed or configured

If you see errors like the one below in many integration tests, review the Test setup instructions in the Validation and code quality section above:

SECURITY LABEL FOR dummy ON TYPE public.testtype IS 'unclassified';
      Expected
          <pgx.PgError>: {
              Severity: "ERROR",
              Code: "22023",
              Message: "security label provider \"dummy\" is not loaded",

Tablespace already exists

If you see errors indicating that the test_tablespace tablespace already exists (below), execute psql postgres -c 'DROP TABLESPACE test_tablespace' to clean up the environment and rerun the tests.

    CREATE TABLESPACE test_tablespace LOCATION '/tmp/test_dir'
    Expected
        <pgx.PgError>: {
            Severity: "ERROR",
            Code: "42710",
            Message: "tablespace \"test_tablespace\" already exists",

gpbackup's People

Contributors

adam8157, ajr-vmware, bmdoil, chrishajas, chrisyuan, danielgustafsson, eshkinkot, gp-releng, hidva, hughcapet, innerlife0, jimmyyih, jmcatamney, khuddlefish, kmacoskey, kyeap-vmware, larham, lisaoakley, nadeemg, pengzhout, professor, roicos, schubert, shivzone, soumyadeep2007, tm-drtina, tom-meyer, water32, xenophex


gpbackup's Issues

gprestore fails with error[CRITICAL]:-FATAL: semctl(196632, 0, SETVAL, 0) failed: Invalid argument (pg_sema.c:151) (SQLSTATE XX000)

There are two clusters with the same segment configuration. The database is successfully backed up to an S3 bucket from the first cluster.
Both clusters use the same versions of Greenplum, gpbackup/gprestore, and the gpbackup S3 plugin.
The database itself seems to be working fine on both clusters.
While trying to restore to the second cluster, the error is always the same:

20220502:12:40:55 gprestore:gpadmin:greenplum-master-staging:008094-[INFO]:-gpbackup version = 1.20.3
20220502:12:40:55 gprestore:gpadmin:greenplum-master-staging:008094-[INFO]:-gprestore version = 1.20.3
20220502:12:40:55 gprestore:gpadmin:greenplum-master-staging:008094-[INFO]:-Greenplum Database Version = 6.16.0 build commit:5650be2b79197fed564dca8d734d10f2a76b876c Open Source
20220502:12:40:55 gprestore:gpadmin:greenplum-master-staging:008094-[DEBUG]:-Gathering information on backup directories
20220502:12:40:55 gprestore:gpadmin:greenplum-master-staging:008094-[DEBUG]:-Metadata will be restored from /opt/data/master/gpseg-1/backups/20220404/20220404005001/gpbackup_20220404005001_metadata.sql
20220502:12:40:55 gprestore:gpadmin:greenplum-master-staging:008094-[CRITICAL]:-FATAL: semctl(196632, 0, SETVAL, 0) failed: Invalid argument (pg_sema.c:151) (SQLSTATE XX000)
github.com/greenplum-db/gp-common-go-libs/gplog.FatalOnError
	/tmp/go/pkg/mod/github.com/greenplum-db/[email protected]/gplog/gplog.go:310
github.com/greenplum-db/gp-common-go-libs/dbconn.MustSelectString
	/tmp/go/pkg/mod/github.com/greenplum-db/[email protected]/dbconn/dbconn.go:355
github.com/greenplum-db/gpbackup/restore.ValidateDatabaseExistence
	/tmp/gpbackup/restore/validate.go:214
github.com/greenplum-db/gpbackup/restore.DoSetup
	/tmp/gpbackup/restore/restore.go:99
main.main.func1
	/tmp/gpbackup/gprestore.go:22
github.com/spf13/cobra.(*Command).execute
	/tmp/go/pkg/mod/github.com/spf13/[email protected]/command.go:830
github.com/spf13/cobra.(*Command).ExecuteC
	/tmp/go/pkg/mod/github.com/spf13/[email protected]/command.go:914
github.com/spf13/cobra.(*Command).Execute
	/tmp/go/pkg/mod/github.com/spf13/[email protected]/command.go:864
main.main
	/tmp/gpbackup/gprestore.go:27
runtime.main
	/usr/lib/go-1.18/src/runtime/proc.go:250
runtime.goexit

This worked just fine earlier. Now it seems that it doesn't matter which command-line options are used (--data-only, --create-db, etc.); the same error occurs every time. I tried with different database contents (even with an empty metadata file) and the result is always the same.

I looked into the sources for the ValidateDatabaseExistence part. The SQL used there seems to work fine against the database, and I can connect to the database externally or from the command line with psql.

I also tried to look into the Greenplum sources that the error points to (pg_sema.c), but there doesn't seem to be such a file in the repository.

Acquiring locks runs out of memory

The acquiring locks behavior allocates too much memory.

[INFO]:-Acquiring ACCESS SHARE locks on tables
Locks acquired: 119569 / 1050208 ... 11.39%
[CRITICAL]:-pq: out of shared memory

Sorry, I could not provide more logs at this point; however, I think this is enough.

After backup, restore fails with ERROR: role "xxx" does not exist (SQLSTATE 42704): Error encountered while creating schema public

Hello:
I get a restore error after a full backup of an online Greenplum cluster. Can you help me? Thanks very much.

OS: CentOS Linux release 7.5.1804 (Core)

Greenplum cluster version: 6.10.1

gpbackup version : 1.20.1

gprestore version: 1.20.1

20210128:12:43:55 gprestore:gpadmin:mdw:009837-[DEBUG]:-Restore Command: [/home/gpadmin/backup_restore/bin/gprestore --timestamp 20210121132251 --backup-dir /data/gpdb/backup --debug]
20210128:12:43:55 gprestore:gpadmin:mdw:009837-[INFO]:-Restore Key = 20210121132251
20210128:12:43:55 gprestore:gpadmin:mdw:009837-[INFO]:-gpbackup version = 1.20.1
20210128:12:43:55 gprestore:gpadmin:mdw:009837-[INFO]:-gprestore version = 1.20.1
20210128:12:43:55 gprestore:gpadmin:mdw:009837-[INFO]:-Greenplum Database Version = 6.10.1 build commit:efba04ce26ebb29b535a255a5e95d1f5ebfde94e
20210128:12:43:55 gprestore:gpadmin:mdw:009837-[DEBUG]:-Gathering information on backup directories
20210128:12:43:55 gprestore:gpadmin:mdw:009837-[DEBUG]:-Verifying backup directories exist
20210128:12:43:57 gprestore:gpadmin:mdw:009837-[DEBUG]:-Metadata will be restored from /data/gpdb/backup/gpseg-1/backups/20210121/20210121132251/gpbackup_20210121132251_metadata.sql
20210128:12:43:57 gprestore:gpadmin:mdw:009837-[INFO]:-Restoring pre-data metadata
20210128:12:43:57 gprestore:gpadmin:mdw:009837-[CRITICAL]:-ERROR: role "sourcer" does not exist (SQLSTATE 42704): Error encountered while creating schema public
github.com/greenplum-db/gpbackup/restore.RestoreSchemas
        /tmp/build/3e49593f/go/src/github.com/greenplum-db/gpbackup/restore/wrappers.go:333
github.com/greenplum-db/gpbackup/restore.restorePredata
        /tmp/build/3e49593f/go/src/github.com/greenplum-db/gpbackup/restore/restore.go:279
github.com/greenplum-db/gpbackup/restore.DoRestore
        /tmp/build/3e49593f/go/src/github.com/greenplum-db/gpbackup/restore/restore.go:142
main.main.func1
        /tmp/build/3e49593f/go/src/github.com/greenplum-db/gpbackup/gprestore.go:23
github.com/spf13/cobra.(*Command).execute
        /tmp/build/3e49593f/go/pkg/mod/github.com/spf13/[email protected]/command.go:830
github.com/spf13/cobra.(*Command).ExecuteC
        /tmp/build/3e49593f/go/pkg/mod/github.com/spf13/[email protected]/command.go:914
github.com/spf13/cobra.(*Command).Execute
        /tmp/build/3e49593f/go/pkg/mod/github.com/spf13/[email protected]/command.go:864
main.main
        /tmp/build/3e49593f/go/src/github.com/greenplum-db/gpbackup/gprestore.go:27
runtime.main
        /usr/local/go/src/runtime/proc.go:203
runtime.goexit
        /usr/local/go/src/runtime/asm_amd64.s:1357
20210128:12:43:57 gprestore:gpadmin:mdw:009837-[INFO]:-Found neither /usr/local/greenplum-db-6.10.1/bin/gp_email_contacts.yaml nor /home/gpadmin/gp_email_contacts.yaml
20210128:12:43:57 gprestore:gpadmin:mdw:009837-[INFO]:-Email containing gprestore report /data/gpdb/backup/gpseg-1/backups/20210121/20210121132251/gprestore_20210121132251_20210128124355_report will not be sent
20210128:12:43:57 gprestore:gpadmin:mdw:009837-[DEBUG]:-Beginning cleanup
20210128:12:43:57 gprestore:gpadmin:mdw:009837-[DEBUG]:-Cleanup complete

NoSuchKey: The specified key does not exist

  • gpbackup exception:
20210516:20:04:54 gpbackup:gpadmin:gp6mdw:036189-[DEBUG]:-Worker 2: COPY fis.pca_pca_model_change_log TO PROGRAM 'gzip -c -1 | /usr/local/greenplum-db-6.16.0/bin/gpbackup_s3_plugin backup_data /tmp/20210516200002_s3_backup.yml <SEG_DATA_DIR>/backups/20210516/20210516200002/gpbackup_<SEGID>_20210516200002_864333.gz' WITH CSV DELIMITER ',' ON SEGMENT IGNORE EXTERNAL PARTITIONS;
20210516:20:06:35 gpbackup:gpadmin:gp6mdw:036189-[CRITICAL]:-ERROR: command error message: 20210516:20:05:22 gpbackup_s3_plugin:gpadmin:gp6sdw5:011663-[ERROR]:-NoSuchKey: The specified key does not exist.
        status code: 404, request id: 167F8A84E7318F00, host id:  (seg16 10.13.0.63:40000 pid=508) (SQLSTATE 2F000)
github.com/greenplum-db/gpbackup/backup.backupDataForAllTables
        /tmp/build/3e49593f/go/src/github.com/greenplum-db/gpbackup/backup/data.go:223
github.com/greenplum-db/gpbackup/backup.backupData
        /tmp/build/3e49593f/go/src/github.com/greenplum-db/gpbackup/backup/backup.go:280
github.com/greenplum-db/gpbackup/backup.DoBackup
        /tmp/build/3e49593f/go/src/github.com/greenplum-db/gpbackup/backup/backup.go:160
main.main.func1
        /tmp/build/3e49593f/go/src/github.com/greenplum-db/gpbackup/gpbackup.go:23
github.com/spf13/cobra.(*Command).execute
        /tmp/build/3e49593f/go/pkg/mod/github.com/spf13/[email protected]/command.go:830
github.com/spf13/cobra.(*Command).ExecuteC
        /tmp/build/3e49593f/go/pkg/mod/github.com/spf13/[email protected]/command.go:914
github.com/spf13/cobra.(*Command).Execute
        /tmp/build/3e49593f/go/pkg/mod/github.com/spf13/[email protected]/command.go:864
main.main
        /tmp/build/3e49593f/go/src/github.com/greenplum-db/gpbackup/gpbackup.go:27
runtime.main
        /usr/local/go/src/runtime/proc.go:203
runtime.goexit
        /usr/local/go/src/runtime/asm_amd64.s:1357
  • pg_log:
2021-05-16 20:05:22.268876 CST,"gpadmin","F6_BDC",p508,th-2048743296,"10.13.0.65","55388",2021-05-16 20:00:02 CST,0,con1347551,cmd98,seg16,,dx23343414,,sx1,"LOG","00000","read err msg from pipe, len:173 msg:20210516:20:05:22 gpbackup_s3_plugin:gpadmin:gp6sdw5:011663-[ERROR]:-NoSuchKey: The specified key does not exist.
        status code: 404, request id: 167F8A84E7318F00, host id:
",,,,,,,0,,,,

gprestore 022515-[CRITICAL]:-Version string empty

env:
master Greenplum Version: 'PostgreSQL 9.4.20 (Greenplum Database 6.0.0-beta.3 build on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 6.4.0, 64-bit
go version go1.17.7 linux/amd64

gpbackup executed successfully, but gprestore failed

 gpbackup --backup-dir /opt/greenplum --dbname crp  --jobs 8 --with-stats
     #  Backup completed successfully
 gprestore --backup-dir /opt/greenplum/ --timestamp 20220222164920 --redirect-db archivedb --with-stats  --jobs 8
     # gprestore:gpadmin:xxx:022515-[CRITICAL]:-Version string empty

`gprestore --skip-index-build` option

Hi, Team,

It seems that gprestore does not provide a parallel index build option, and restoring a backup can therefore be very time-consuming.

Please provide a --skip-index-build option that skips building indexes and redirects the unexecuted index creation commands to an output file in $HOME/gpAdminLogs for users to execute later.

Thanks.

`--include-table` and `--include-table-file` cannot be used together

These two parameters are marked as mutually exclusive, but the options implementation is prepared for both to be present: the parameters from the file are appended to the parameter list, so there is no need to prohibit this combination.

func setFiltersFromFile(initialFlags *pflag.FlagSet, filterFlag string, filterFileFlag string) ([]string, error) {
	filters, err := initialFlags.GetStringArray(filterFlag)
	if err != nil {
		return nil, err
	}
	// values obtained from file filterFileFlag are copied to values in filterFlag
	// values are mutually exclusive so this is not an overwrite, it is a "fresh" setting
	filename, err := initialFlags.GetString(filterFileFlag)
	if err != nil {
		return nil, err
	}
	if filename != "" {
		filterLines, err := iohelper.ReadLinesFromFile(filename)
		if err != nil {
			return nil, err
		}
		// copy any values for flag filterFileFlag into global flag for filterFlag
		for _, fqn := range filterLines {
			if fqn != "" {
				filters = append(filters, fqn)          // This appends filter to options
				err = initialFlags.Set(filterFlag, fqn) // This appends to the slice underlying the flag.
				if err != nil {
					return nil, err
				}
			}
		}
		if err != nil {
			return nil, err
		}
	}
	return filters, nil
}

A similar issue exists for --exclude-table / --exclude-table-file, --include-schema / --include-schema-file, and --exclude-schema / --exclude-schema-file.

options.CheckExclusiveFlags(flags, options.INCLUDE_SCHEMA, options.INCLUDE_SCHEMA_FILE, options.INCLUDE_RELATION, options.INCLUDE_RELATION_FILE)
options.CheckExclusiveFlags(flags, options.EXCLUDE_SCHEMA, options.EXCLUDE_SCHEMA_FILE, options.INCLUDE_SCHEMA, options.INCLUDE_SCHEMA_FILE)
options.CheckExclusiveFlags(flags, options.EXCLUDE_SCHEMA, options.EXCLUDE_SCHEMA_FILE, options.EXCLUDE_RELATION, options.INCLUDE_RELATION, options.EXCLUDE_RELATION_FILE, options.INCLUDE_RELATION_FILE)
options.CheckExclusiveFlags(flags, options.INCLUDE_SCHEMA, options.INCLUDE_RELATION, options.INCLUDE_RELATION_FILE)
options.CheckExclusiveFlags(flags, options.EXCLUDE_SCHEMA, options.INCLUDE_SCHEMA)
options.CheckExclusiveFlags(flags, options.EXCLUDE_SCHEMA, options.EXCLUDE_RELATION, options.INCLUDE_RELATION, options.EXCLUDE_RELATION_FILE, options.INCLUDE_RELATION_FILE)

It would be necessary to create multiple checks for each line in the snippets above; otherwise, no changes should be necessary.

options.CheckExclusiveFlags(flags, options.INCLUDE_SCHEMA, options.INCLUDE_SCHEMA_FILE, options.INCLUDE_RELATION, options.INCLUDE_RELATION_FILE)
// would become
options.CheckExclusiveFlags(flags, options.INCLUDE_SCHEMA, options.INCLUDE_RELATION)
options.CheckExclusiveFlags(flags, options.INCLUDE_SCHEMA, options.INCLUDE_RELATION_FILE)
options.CheckExclusiveFlags(flags, options.INCLUDE_SCHEMA_FILE, options.INCLUDE_RELATION)
options.CheckExclusiveFlags(flags, options.INCLUDE_SCHEMA_FILE, options.INCLUDE_RELATION_FILE)

etc.

I'm willing to create a PR for this if you want me to.
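
For illustration, here is a minimal Go sketch of the pairwise expansion described above. The helper name is hypothetical; it only assumes that CheckExclusiveFlags accepts the flag set plus a variadic list of flag names, as in the calls shown earlier.

package options // sketch only; it would sit alongside the existing flag-validation code

import "github.com/spf13/pflag"

// CheckExclusiveFlagsPairwise is a hypothetical helper: rather than passing two
// whole flag groups to a single CheckExclusiveFlags call (which also forbids
// combining flags within the same group, such as --include-table with
// --include-table-file), it emits one check per cross-group pair.
func CheckExclusiveFlagsPairwise(flags *pflag.FlagSet, groupA, groupB []string) {
	for _, a := range groupA {
		for _, b := range groupB {
			CheckExclusiveFlags(flags, a, b)
		}
	}
}

A call such as CheckExclusiveFlagsPairwise(flags, []string{INCLUDE_SCHEMA, INCLUDE_SCHEMA_FILE}, []string{INCLUDE_RELATION, INCLUDE_RELATION_FILE}) would then replace the first check in the snippet above while still allowing --include-table and --include-table-file to be used together.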

Backup solution for massive databases

When the data volume exceeds 10 TB, the efficiency of a database backup is still relatively low. The solution I thought of is incremental backups for AO tables and full backups for normal heap tables. But how do I make gpbackup skip AO tables during a full backup and target only AO tables during an incremental backup?

Restore with different number of segments.

We backed up a database with 14 segments and would like to restore to one with 16. Is there a supported way to do this? (Using gpbackup-s3-plugin)

It's failing on restore, trying to find segment 14 (0-indexed).

Sorry if this isn't the right avenue for this issue.

I've looked through the documentation and googled to no avail.

incremental backup failed after full backup

After I used the following command to take a full backup:

./gpbackup --dbname whistleSrc --backup-dir /data/gpdb/backup --debug

The operation succeeded. However, when I try to do an incremental backup, it fails; the error information follows:

[INFO]: Incremental backup database for whistleSrc.
20201208:11:32:54 gpbackup:gpadmin:node1:1683340-[DEBUG]:-Backup Command: [./gpbackup --dbname whistleSrc --backup-dir /data/gpdb/backup --debug --leaf-partition-data --incremental]
20201208:11:32:54 gpbackup:gpadmin:node1:1683340-[INFO]:-gpbackup version = 1.20.1
20201208:11:32:54 gpbackup:gpadmin:node1:1683340-[INFO]:-Greenplum Database Version = 6.10.1 build commit:efba04ce26ebb29b535a255a5e95d1f5ebfde94e
20201208:11:32:54 gpbackup:gpadmin:node1:1683340-[INFO]:-Starting backup of database whistleSrc
20201208:11:32:54 gpbackup:gpadmin:node1:1683340-[DEBUG]:-Validating Tables and Schemas exist in Database
20201208:11:32:54 gpbackup:gpadmin:node1:1683340-[DEBUG]:-Creating backup directories
20201208:11:32:55 gpbackup:gpadmin:node1:1683340-[DEBUG]:-Getting database size
20201208:11:32:55 gpbackup:gpadmin:node1:1683340-[INFO]:-Backup Timestamp = 20201208113254
20201208:11:32:55 gpbackup:gpadmin:node1:1683340-[INFO]:-Backup Database = whistleSrc
20201208:11:32:55 gpbackup:gpadmin:node1:1683340-[DEBUG]:-Backup Parameters: {compression: gzip, plugin executable: None, backup section: All Sections, object filtering: None, includes statistics: No, data file format: Multiple Data Files Per Segment, incremental: True, incremental backup set:, }
20201208:11:32:55 gpbackup:gpadmin:node1:1683340-[CRITICAL]:-There was no matching previous backup found with the flags provided. Please take a full backup.
github.com/greenplum-db/gp-common-go-libs/gplog.FatalOnError
        /tmp/build/3e49593f/go/pkg/mod/github.com/greenplum-db/[email protected]/gplog/gplog.go:310
github.com/greenplum-db/gpbackup/backup.GetLatestMatchingBackupTimestamp
        /tmp/build/3e49593f/go/src/github.com/greenplum-db/gpbackup/backup/incremental.go:56
github.com/greenplum-db/gpbackup/backup.GetTargetBackupTimestamp
        /tmp/build/3e49593f/go/src/github.com/greenplum-db/gpbackup/backup/incremental.go:39
github.com/greenplum-db/gpbackup/backup.DoBackup
        /tmp/build/3e49593f/go/src/github.com/greenplum-db/gpbackup/backup/backup.go:108
main.main.func1
        /tmp/build/3e49593f/go/src/github.com/greenplum-db/gpbackup/gpbackup.go:23
github.com/spf13/cobra.(*Command).execute
        /tmp/build/3e49593f/go/pkg/mod/github.com/spf13/[email protected]/command.go:830
github.com/spf13/cobra.(*Command).ExecuteC
        /tmp/build/3e49593f/go/pkg/mod/github.com/spf13/[email protected]/command.go:914
github.com/spf13/cobra.(*Command).Execute
        /tmp/build/3e49593f/go/pkg/mod/github.com/spf13/[email protected]/command.go:864
main.main
        /tmp/build/3e49593f/go/src/github.com/greenplum-db/gpbackup/gpbackup.go:27
runtime.main
        /usr/local/go/src/runtime/proc.go:203
runtime.goexit
        /usr/local/go/src/runtime/asm_amd64.s:1357

gpbackup version: 1.20.1
Who can help me? Thanks!

`gpbackup --include-schema` contains default privilege on other schemas not on the list

Hi,

It seems that gpbackup --include-schema additionally backs up DEFAULT PRIVILEGE commands for schemas not on the list.

[gpadmin@gpdb ~]$ psql -d test -c 'ALTER DEFAULT PRIVILEGES FOR ROLE aaa IN SCHEMA xxx GRANT ALL ON TABLES TO public;'
ALTER DEFAULT PRIVILEGES
[gpadmin@gpdb ~]$ 
[gpadmin@gpdb ~]$ gpbackup --include-schema public --dbname test
20210713:15:34:03 gpbackup:gpadmin:gpdb:001594-[INFO]:-gpbackup version = 1.20.4
20210713:15:34:03 gpbackup:gpadmin:gpdb:001594-[INFO]:-Greenplum Database Version = 6.16.2 build commit:950f103fe23180ed0148552ef08cd62cbbf5a681 Open Source
20210713:15:34:03 gpbackup:gpadmin:gpdb:001594-[INFO]:-Starting backup of database test
20210713:15:34:03 gpbackup:gpadmin:gpdb:001594-[INFO]:-Backup Timestamp = 20210713153403
20210713:15:34:03 gpbackup:gpadmin:gpdb:001594-[INFO]:-Backup Database = test
20210713:15:34:03 gpbackup:gpadmin:gpdb:001594-[INFO]:-Gathering table state information
20210713:15:34:03 gpbackup:gpadmin:gpdb:001594-[INFO]:-Acquiring ACCESS SHARE locks on tables
20210713:15:34:03 gpbackup:gpadmin:gpdb:001594-[INFO]:-Gathering additional table metadata
20210713:15:34:03 gpbackup:gpadmin:gpdb:001594-[INFO]:-Getting partition definitions
20210713:15:34:03 gpbackup:gpadmin:gpdb:001594-[INFO]:-Getting storage information
20210713:15:34:03 gpbackup:gpadmin:gpdb:001594-[INFO]:-Getting child partitions with altered schema
20210713:15:34:03 gpbackup:gpadmin:gpdb:001594-[WARNING]:-No tables in backup set contain data. Performing metadata-only backup instead.
20210713:15:34:03 gpbackup:gpadmin:gpdb:001594-[INFO]:-Metadata will be written to /data/master/gpseg-1/backups/20210713/20210713153403/gpbackup_20210713153403_metadata.sql
20210713:15:34:03 gpbackup:gpadmin:gpdb:001594-[INFO]:-Writing global database metadata
20210713:15:34:04 gpbackup:gpadmin:gpdb:001594-[INFO]:-Global database metadata backup complete
20210713:15:34:04 gpbackup:gpadmin:gpdb:001594-[INFO]:-Writing pre-data metadata
20210713:15:34:04 gpbackup:gpadmin:gpdb:001594-[INFO]:-Pre-data metadata metadata backup complete
20210713:15:34:04 gpbackup:gpadmin:gpdb:001594-[INFO]:-Writing post-data metadata
20210713:15:34:04 gpbackup:gpadmin:gpdb:001594-[INFO]:-Post-data metadata backup complete
20210713:15:34:05 gpbackup:gpadmin:gpdb:001594-[INFO]:-Found neither /usr/local/greenplum-db-6.16.2/bin/gp_email_contacts.yaml nor /home/gpadmin/gp_email_contacts.yaml
20210713:15:34:05 gpbackup:gpadmin:gpdb:001594-[INFO]:-Email containing gpbackup report /data/master/gpseg-1/backups/20210713/20210713153403/gpbackup_20210713153403_report will not be sent
20210713:15:34:05 gpbackup:gpadmin:gpdb:001594-[INFO]:-Backup completed successfully
[gpadmin@gpdb ~]$ 

Then the gprestore process encounters an error and fails.

[gpadmin@gpdb ~]$ gprestore --timestamp 20210713153403 --create-db --redirect-db test2
20210713:15:34:48 gprestore:gpadmin:gpdb:001623-[INFO]:-Restore Key = 20210713153403
20210713:15:34:48 gprestore:gpadmin:gpdb:001623-[INFO]:-gpbackup version = 1.20.4
20210713:15:34:48 gprestore:gpadmin:gpdb:001623-[INFO]:-gprestore version = 1.20.4
20210713:15:34:48 gprestore:gpadmin:gpdb:001623-[INFO]:-Greenplum Database Version = 6.16.2 build commit:950f103fe23180ed0148552ef08cd62cbbf5a681 Open Source
20210713:15:34:48 gprestore:gpadmin:gpdb:001623-[INFO]:-Creating database
20210713:15:34:50 gprestore:gpadmin:gpdb:001623-[INFO]:-Database creation complete for: test2
20210713:15:34:50 gprestore:gpadmin:gpdb:001623-[INFO]:-Restoring pre-data metadata
Pre-data objects restored:  4 / 4 [===============================================] 100.00% 0s
20210713:15:34:50 gprestore:gpadmin:gpdb:001623-[INFO]:-Pre-data metadata restore complete
20210713:15:34:50 gprestore:gpadmin:gpdb:001623-[INFO]:-Restoring post-data metadata
Post-data objects restored:  0 / 1 [-------------------------------------------------]   0.00%
20210713:15:34:50 gprestore:gpadmin:gpdb:001623-[CRITICAL]:-ERROR: schema "xxx" does not exist (SQLSTATE 3F000)
20210713:15:34:50 gprestore:gpadmin:gpdb:001623-[INFO]:-Found neither /usr/local/greenplum-db-6.16.2/bin/gp_email_contacts.yaml nor /home/gpadmin/gp_email_contacts.yaml
20210713:15:34:50 gprestore:gpadmin:gpdb:001623-[INFO]:-Email containing gprestore report /data/master/gpseg-1/backups/20210713/20210713153403/gprestore_20210713153403_20210713153448_report will not be sent
[gpadmin@gpdb ~]$ 

Maybe it is a bug to be fixed.

Best Regards.

zstd compression support

Please add support for zstd compression.
Below is an example of compressing a CSV file with the zstd and gzip archivers.
For zstd, there is one run with a single thread and one with the thread count equal to the number of processors.

-rw-------. 1 gpadmin gpadmin 9479745682 Apr 27 20:28 gpbackup.csv
-rw-rw-r--. 1 gpadmin gpadmin 1248764524 Jun  5 18:43 gpzip.gz
-rw-rw-r--. 1 gpadmin gpadmin  993988596 Jun  5 18:37 zstd.zst
time gzip -5 -cv gpbackup.csv > gpzip.gz
gpbackup.csv : 86.8%

real	3m5.661s
user	3m0.858s
sys	0m4.779s
time zstd -c -3 gpbackup.csv > zstd.zst
gpbackup.csv : 10.49%   (9479745682 => 993988596 bytes, /*stdout*\)

real	0m41.388s
user	0m39.149s
sys	0m5.474s
time zstd -d zstd.zst > gpbackup_0
zstd.zst            : 9479745682 bytes

real	0m24.706s
user	0m13.423s
sys	0m9.628s
time zstd -c -3 -T0 gpbackup.csv > zstd.zst
gpbackup.csv : 10.49%   (9479745682 => 993988596 bytes, /*stdout*\)

real	0m8.244s
user	0m39.601s
sys	0m4.734s
time zstd -d -T0 zstd.zst > gpbackup_0
zstd.zst            : 9479745682 bytes

real	0m20.831s
user	0m13.396s
sys     0m7.433s
lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                64
On-line CPU(s) list:   0-63
Thread(s) per core:    2
Core(s) per socket:    16
Socket(s):             2
NUMA node(s):          2
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 79
Model name:            Intel(R) Xeon(R) CPU E5-2683 v4 @ 2.10GHz
Stepping:              1
CPU MHz:               2100.000
CPU max MHz:           2100,0000
CPU min MHz:           1200,0000
BogoMIPS:              4190.27
Virtualization:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              40960K
NUMA node0 CPU(s):     0-15,32-47
NUMA node1 CPU(s):     16-31,48-63
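
As a rough illustration of what zstd support could look like on the compression side, here is a minimal Go sketch using the third-party github.com/klauspost/compress/zstd package. This is an assumption for illustration only; gpbackup does not currently use this library, and the function name is hypothetical.

package main

import (
	"io"
	"os"

	"github.com/klauspost/compress/zstd"
)

// compressToZstd streams src into dst through a zstd encoder. The default
// encoder level is roughly comparable to `zstd -3`, and by default the
// encoder uses GOMAXPROCS workers, similar to `zstd -T0` in the benchmark above.
func compressToZstd(dst io.Writer, src io.Reader) error {
	enc, err := zstd.NewWriter(dst, zstd.WithEncoderLevel(zstd.SpeedDefault))
	if err != nil {
		return err
	}
	if _, err := io.Copy(enc, src); err != nil {
		enc.Close()
		return err
	}
	return enc.Close() // flushes remaining data and writes the final frame
}

func main() {
	in, err := os.Open("gpbackup.csv")
	if err != nil {
		panic(err)
	}
	defer in.Close()

	out, err := os.Create("zstd.zst")
	if err != nil {
		panic(err)
	}
	defer out.Close()

	if err := compressToZstd(out, in); err != nil {
		panic(err)
	}
}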

go get github.com/greenplum-db/gpbackup fails

Maybe I'm doing something incorrectly; I'm following the project's README.

ls ~/go/bin/src/github.com/greenplum-db
<<nothing in there>>

go get github.com/greenplum-db/gpbackup
can't load package: package github.com/greenplum-db/gpbackup: no buildable Go source files in /Users/pivotal/go/bin/src/github.com/greenplum-db/gpbackup

ls ~/go/bin/src/github.com/greenplum-db
gpbackup 

Which SQL does gpbackup block

When I execute gpbackup, I find that TRUNCATE operations on tables are blocked until gpbackup finishes.

Will TRUNCATE on a temporary table also be blocked? What other operations will be blocked?

If the backup takes a long time, the blocked SQL will have a large impact on the online business.

[Screenshot: 2020-10-30 18-18-22]

After full backup, restore to an identical Greenplum cluster

Hello:
After I make a full backup from one Greenplum cluster, I then do a full restore with gprestore in another Greenplum cluster that is identical to the first, but I get the following error:

20201214:11:11:41 gprestore:gpadmin:mdw:029088-[DEBUG]:-Restore Command: [bin/gprestore --timestamp 20201214101943 --backup-dir /data/gpdb/backup --debug]
20201214:11:11:41 gprestore:gpadmin:mdw:029088-[INFO]:-Restore Key = 20201214101943
20201214:11:11:41 gprestore:gpadmin:mdw:029088-[INFO]:-gpbackup version = 1.20.1
20201214:11:11:41 gprestore:gpadmin:mdw:029088-[INFO]:-gprestore version = 1.20.1
20201214:11:11:41 gprestore:gpadmin:mdw:029088-[INFO]:-Greenplum Database Version = 6.10.1 build commit:efba04ce26ebb29b535a255a5e95d1f5ebfde94e
20201214:11:11:41 gprestore:gpadmin:mdw:029088-[DEBUG]:-Gathering information on backup directories
20201214:11:11:41 gprestore:gpadmin:mdw:029088-[DEBUG]:-Verifying backup directories exist
20201214:11:11:45 gprestore:gpadmin:mdw:029088-[DEBUG]:-Backup directory /data/gpdb/backup/gpseg0/backups/20201214/20201214101943 missing or inaccessible on segment 0 on host sdw01 with error exit status 1:
20201214:11:11:45 gprestore:gpadmin:mdw:029088-[DEBUG]:-Command was: ssh -o StrictHostKeyChecking=no gpadmin@sdw01 test -d /data/gpdb/backup/gpseg0/backups/20201214/20201214101943
20201214:11:11:45 gprestore:gpadmin:mdw:029088-[DEBUG]:-Backup directory /data/gpdb/backup/gpseg1/backups/20201214/20201214101943 missing or inaccessible on segment 1 on host sdw01 with error exit status 1:
20201214:11:11:45 gprestore:gpadmin:mdw:029088-[DEBUG]:-Command was: ssh -o StrictHostKeyChecking=no gpadmin@sdw01 test -d /data/gpdb/backup/gpseg1/backups/20201214/20201214101943
20201214:11:11:45 gprestore:gpadmin:mdw:029088-[DEBUG]:-Backup directory /data/gpdb/backup/gpseg2/backups/20201214/20201214101943 missing or inaccessible on segment 2 on host sdw01 with error exit status 1:
20201214:11:11:45 gprestore:gpadmin:mdw:029088-[DEBUG]:-Command was: ssh -o StrictHostKeyChecking=no gpadmin@sdw01 test -d /data/gpdb/backup/gpseg2/backups/20201214/20201214101943
20201214:11:11:45 gprestore:gpadmin:mdw:029088-[DEBUG]:-Backup directory /data/gpdb/backup/gpseg3/backups/20201214/20201214101943 missing or inaccessible on segment 3 on host sdw01 with error exit status 1:
20201214:11:11:45 gprestore:gpadmin:mdw:029088-[DEBUG]:-Command was: ssh -o StrictHostKeyChecking=no gpadmin@sdw01 test -d /data/gpdb/backup/gpseg3/backups/20201214/20201214101943
20201214:11:11:45 gprestore:gpadmin:mdw:029088-[DEBUG]:-Backup directory /data/gpdb/backup/gpseg4/backups/20201214/20201214101943 missing or inaccessible on segment 4 on host sdw01 with error exit status 1:
20201214:11:11:45 gprestore:gpadmin:mdw:029088-[DEBUG]:-Command was: ssh -o StrictHostKeyChecking=no gpadmin@sdw01 test -d /data/gpdb/backup/gpseg4/backups/20201214/20201214101943
20201214:11:11:45 gprestore:gpadmin:mdw:029088-[DEBUG]:-Backup directory /data/gpdb/backup/gpseg5/backups/20201214/20201214101943 missing or inaccessible on segment 5 on host sdw01 with error exit status 1:
20201214:11:11:45 gprestore:gpadmin:mdw:029088-[DEBUG]:-Command was: ssh -o StrictHostKeyChecking=no gpadmin@sdw01 test -d /data/gpdb/backup/gpseg5/backups/20201214/20201214101943
20201214:11:11:45 gprestore:gpadmin:mdw:029088-[DEBUG]:-Backup directory /data/gpdb/backup/gpseg6/backups/20201214/20201214101943 missing or inaccessible on segment 6 on host sdw02 with error exit status 1:
20201214:11:11:45 gprestore:gpadmin:mdw:029088-[DEBUG]:-Command was: ssh -o StrictHostKeyChecking=no gpadmin@sdw02 test -d /data/gpdb/backup/gpseg6/backups/20201214/20201214101943
20201214:11:11:45 gprestore:gpadmin:mdw:029088-[DEBUG]:-Backup directory /data/gpdb/backup/gpseg7/backups/20201214/20201214101943 missing or inaccessible on segment 7 on host sdw02 with error exit status 1:
20201214:11:11:45 gprestore:gpadmin:mdw:029088-[DEBUG]:-Command was: ssh -o StrictHostKeyChecking=no gpadmin@sdw02 test -d /data/gpdb/backup/gpseg7/backups/20201214/20201214101943
20201214:11:11:45 gprestore:gpadmin:mdw:029088-[DEBUG]:-Backup directory /data/gpdb/backup/gpseg8/backups/20201214/20201214101943 missing or inaccessible on segment 8 on host sdw02 with error exit status 1:
20201214:11:11:45 gprestore:gpadmin:mdw:029088-[DEBUG]:-Command was: ssh -o StrictHostKeyChecking=no gpadmin@sdw02 test -d /data/gpdb/backup/gpseg8/backups/20201214/20201214101943
20201214:11:11:45 gprestore:gpadmin:mdw:029088-[DEBUG]:-Backup directory /data/gpdb/backup/gpseg9/backups/20201214/20201214101943 missing or inaccessible on segment 9 on host sdw02 with error exit status 1:
20201214:11:11:45 gprestore:gpadmin:mdw:029088-[DEBUG]:-Command was: ssh -o StrictHostKeyChecking=no gpadmin@sdw02 test -d /data/gpdb/backup/gpseg9/backups/20201214/20201214101943
20201214:11:11:45 gprestore:gpadmin:mdw:029088-[DEBUG]:-Backup directory /data/gpdb/backup/gpseg10/backups/20201214/20201214101943 missing or inaccessible on segment 10 on host sdw02 with error exit status 1:
20201214:11:11:45 gprestore:gpadmin:mdw:029088-[DEBUG]:-Command was: ssh -o StrictHostKeyChecking=no gpadmin@sdw02 test -d /data/gpdb/backup/gpseg10/backups/20201214/20201214101943
20201214:11:11:45 gprestore:gpadmin:mdw:029088-[DEBUG]:-Backup directory /data/gpdb/backup/gpseg11/backups/20201214/20201214101943 missing or inaccessible on segment 11 on host sdw02 with error exit status 1:
20201214:11:11:45 gprestore:gpadmin:mdw:029088-[DEBUG]:-Command was: ssh -o StrictHostKeyChecking=no gpadmin@sdw02 test -d /data/gpdb/backup/gpseg11/backups/20201214/20201214101943
20201214:11:11:45 gprestore:gpadmin:mdw:029088-[CRITICAL]:-Backup directories missing or inaccessible on 12 segments. See /home/gpadmin/gpAdminLogs/gprestore_20201214.log for a complete list of errors.
github.com/greenplum-db/gp-common-go-libs/cluster.LogFatalClusterError
        /tmp/build/3e49593f/go/pkg/mod/github.com/greenplum-db/[email protected]/cluster/cluster.go:398
github.com/greenplum-db/gp-common-go-libs/cluster.(*Cluster).CheckClusterError
        /tmp/build/3e49593f/go/pkg/mod/github.com/greenplum-db/[email protected]/cluster/cluster.go:380
github.com/greenplum-db/gpbackup/restore.VerifyBackupDirectoriesExistOnAllHosts
        /tmp/build/3e49593f/go/src/github.com/greenplum-db/gpbackup/restore/remote.go:26
github.com/greenplum-db/gpbackup/restore.BackupConfigurationValidation
        /tmp/build/3e49593f/go/src/github.com/greenplum-db/gpbackup/restore/wrappers.go:145
github.com/greenplum-db/gpbackup/restore.DoSetup
        /tmp/build/3e49593f/go/src/github.com/greenplum-db/gpbackup/restore/restore.go:85
main.main.func1
        /tmp/build/3e49593f/go/src/github.com/greenplum-db/gpbackup/gprestore.go:22
github.com/spf13/cobra.(*Command).execute
        /tmp/build/3e49593f/go/pkg/mod/github.com/spf13/[email protected]/command.go:830
github.com/spf13/cobra.(*Command).ExecuteC
        /tmp/build/3e49593f/go/pkg/mod/github.com/spf13/[email protected]/command.go:914
github.com/spf13/cobra.(*Command).Execute
        /tmp/build/3e49593f/go/pkg/mod/github.com/spf13/[email protected]/command.go:864
main.main
        /tmp/build/3e49593f/go/src/github.com/greenplum-db/gpbackup/gprestore.go:27
runtime.main
        /usr/local/go/src/runtime/proc.go:203
runtime.goexit
        /usr/local/go/src/runtime/asm_amd64.s:1357
20201214:11:11:45 gprestore:gpadmin:mdw:029088-[INFO]:-Found neither /usr/local/greenplum-db-6.10.1/bin/gp_email_contacts.yaml nor /home/gpadmin/gp_email_contacts.yaml
20201214:11:11:45 gprestore:gpadmin:mdw:029088-[INFO]:-Email containing gprestore report /data/gpdb/backup/gpseg-1/backups/20201214/20201214101943/gprestore_20201214101943_20201214111141_report will not be sent
20201214:11:11:45 gprestore:gpadmin:mdw:029088-[DEBUG]:-Beginning cleanup
20201214:11:11:45 gprestore:gpadmin:mdw:029088-[DEBUG]:-Cleanup complete

Question: what can I do to take a full backup from one Greenplum cluster and then restore it to another identical one?
Who can help me? Thanks very much!

How to specify GreenplumDB host

Hi !

I've got a question regarding execution of gpbackup/gprestore tools.

Is it possible to specify the Greenplum Database host?

I want to collect and store the backup from a different VM; is that possible?

gpbackup and GPCC integration support

Hi, Team,

Our customer suggests that GPCC could integrate DB backup functionality such as housekeeping or backup/restore.

Maybe your team can consider it.

Regards.

value of distribution key doesn't belong to segment with ID 9, it belongs to segment with ID 10 when backup in greenplum 5.28.0 and restore in greenplum 6.20.1

I use gpbackup 1.23 to back up data in Greenplum 5.28.0 and gprestore 1.23 to restore the data into Greenplum 6.20.1. Both clusters have the same number of segments. I got the error "value of distribution key doesn't belong to segment with ID 9, it belongs to segment with ID 10", like #425.

my table:

create table imp_req_lib(
       imp_id CHARACTER(6) default  '' not null,
       imp_type  CHARACTER(1) default  '' not null,
       imp_state  CHARACTER(1) default  '' not null,
       ......
)  DISTRIBUTED BY (imp_id );

my data :
('I01238','0','0',........)

How can I resolve this problem? Should I upgrade 5.28.0 to the newest 5.x, like 5.28.12? Thanks for the help.

value of distribution key doesn't belong to segment with ID xxx, it belongs to segment with ID xxx

When I restore the data with v1.19.0, I get the error below, even though both clusters have the same number of segments and the same segment map (contents, dbids).

[gpadmin@gp6mdw greenplum]$ gprestore --backup-dir /gpbackup/20201019191245 --timestamp 20201019191245 --create-db
20201019:21:27:45 gprestore:gpadmin:gp6mdw:011474-[INFO]:-Restore Key = 20201019191245
20201019:21:27:45 gprestore:gpadmin:gp6mdw:011474-[INFO]:-Creating database
20201019:21:27:51 gprestore:gpadmin:gp6mdw:011474-[INFO]:-Database creation complete for: "F6_BDC"
20201019:21:27:51 gprestore:gpadmin:gp6mdw:011474-[WARNING]:-This backup set was taken on a version of Greenplum prior to 6.x. This restore will use the legacy hash operators when loading data.
20201019:21:27:51 gprestore:gpadmin:gp6mdw:011474-[WARNING]:-To use the new Greenplum 6.x default hash operators, these tables will need to be redistributed.
20201019:21:27:51 gprestore:gpadmin:gp6mdw:011474-[WARNING]:-For more information, refer to the migration guide located as https://docs.greenplum.org/latest/install_guide/migrate.html.
20201019:21:27:51 gprestore:gpadmin:gp6mdw:011474-[INFO]:-Restoring pre-data metadata
Pre-data objects restored:  1826 / 1826 [===========================================] 100.00% 21m56s
20201019:21:49:48 gprestore:gpadmin:gp6mdw:011474-[INFO]:-Pre-data metadata restore complete
Tables restored:  12 / 1492 [>-------------------------------------------------------------]   0.80%20201019:21:51:39 gprestore:gpadmin:gp6mdw:011474-[ERROR]:-Error loading data into table fis.fis2_pca_model: COPY fis2_pca_model, line 1: "1395T0011102,1,XSM,WCxxxx Lexington SCSI SOS Card,0,1,0,20,400,77,14,237278-001,D02 ,XXXX    ,HPQ,Le...": ERROR: value of distribution key doesn't belong to segment with ID 2, it belongs to segment with ID 0  (seg2 10.13.0.23:40002 pid=55527) (SQLSTATE 23000)

20201019:21:51:39 gprestore:gpadmin:gp6mdw:011474-[ERROR]:-Encountered 1 error(s) during table data restore; see log file /home/gpadmin/gpAdminLogs/gprestore_20201019.log for a list of table errors.
20201019:21:51:39 gprestore:gpadmin:gp6mdw:011474-[INFO]:-Data restore complete
20201019:21:51:39 gprestore:gpadmin:gp6mdw:011474-[INFO]:-Restoring post-data metadata
Post-data objects restored:  328 / 328 [=============================================] 100.00% 7m20s
20201019:21:58:59 gprestore:gpadmin:gp6mdw:011474-[INFO]:-Post-data metadata restore complete
20201019:21:58:59 gprestore:gpadmin:gp6mdw:011474-[INFO]:-Found neither /usr/local/greenplum-db-6.11.2/bin/gp_email_contacts.yaml nor /home/gpadmin/gp_email_contacts.yaml
20201019:21:58:59 gprestore:gpadmin:gp6mdw:011474-[INFO]:-Email containing gprestore report /gpbackup/20201019191245/gpseg-1/backups/20201019/20201019191245/gprestore_20201019191245_20201019212745_report will not be sent
  • fis2_pca_model
CREATE TABLE fis.fis2_pca_model
(
    model character(12) NOT NULL,
    inuse integer NOT NULL,
    fis_code character(3) NOT NULL,
    ...
    CONSTRAINT fis2_pca_model_pkey PRIMARY KEY (model)
)
WITH (
    OIDS = FALSE
)
TABLESPACE pg_default
DISTRIBUTED BY (model)
;

Supporting data restoration to another GPDB cluster with different configuration.

Hi, Team,

Is it possible to make gprestore support data restoration through the mdw node?

Or perhaps restoring data through the gpfdist utility when the target GPDB cluster does not have the same configuration as the original cluster?

Usually users only want to restore some data to another cluster for testing.

But the current gpbackup and gprestore only allow backup/restore from/to GPDB clusters with the same segment configuration, as mentioned in the documentation:

A backup created with gpbackup can only be restored to a Greenplum Database cluster with the same number of segment instances as the source cluster. If you run gpexpand to add segments to the cluster, backups you made before starting the expand cannot be restored after the expansion has completed.

In the legacy backup utility gpcrondump, backup files can be restored to a GPDB cluster with different segment configuration by feeding backup files through mdw.

So it would be helpful to make gprestore support similar functionality.

Regards.

Restore index

When recovering data with gprestore, I found that it takes a lot of time to recover the index data, more than 2 hours.

Could I configure index recovery as an option? The operation is as follows:

# Pre-data: create database, tables, functions, views
gprestore --plugin-config /opt/greenplum/config/s3_backup.yml --timestamp 20201101200000 --create-db --jobs 4 --metadata-only
# Table restores: import data by copy
gprestore --plugin-config /opt/greenplum/config/s3_backup.yml --timestamp 20201101200000 --jobs 4 --on-error-continue --data-only
# Post-data: add constraint
gprestore --plugin-config /opt/greenplum/config/s3_backup.yml --timestamp 20201101200000 --jobs 4 --on-error-continue --constraint-only
# Post-data: create index
gprestore --plugin-config /opt/greenplum/config/s3_backup.yml --timestamp 20201101200000 --jobs 4 --on-error-continue --index-only

Or how can I avoid being affected by index creation?

gprestore does not support parallel restoration of indexes

In the process of restoring data using gprestore, when the --jobs option is specified, the tablespace setting of an index may be incorrect when the index is rebuilt.

$ gprestore --plugin-config /opt/greenplum/config/s3_backup.yml --timestamp 20210430155944 --create-db --jobs 4 --on-error-continue
...
20210502:09:48:33 gprestore:gpadmin:gp6mdw:029875-[INFO]:-Restoring post-data metadata
20210502:09:51:02 gprestore:gpadmin:gp6mdw:029875-[DEBUG]:-Post-data objects restored:  10% (41/404)
20210502:10:09:55 gprestore:gpadmin:gp6mdw:029875-[DEBUG]:-Post-data objects restored:  20% (81/404)
20210502:10:10:16 gprestore:gpadmin:gp6mdw:029875-[DEBUG]:-Post-data objects restored:  30% (122/404)
20210502:10:10:16 gprestore:gpadmin:gp6mdw:029875-[DEBUG]:-Post-data objects restored:  30% (123/404)
20210502:10:10:34 gprestore:gpadmin:gp6mdw:029875-[DEBUG]:-Post-data objects restored:  50% (202/404)
20210502:10:10:43 gprestore:gpadmin:gp6mdw:029875-[DEBUG]:-Post-data objects restored:  60% (243/404)
20210502:10:11:03 gprestore:gpadmin:gp6mdw:029875-[DEBUG]:-Post-data objects restored:  70% (283/404)
20210502:10:11:20 gprestore:gpadmin:gp6mdw:029875-[DEBUG]:-Error encountered when executing statement: ALTER INDEX dw.idx_dw_fact_cpu_sno_parts_sno SET TABLESPACE tbs_ssd01; Error was: ERROR: relation "dw.idx_dw_fact_cpu_sno_parts_sno" does not exist (SQLSTATE 42P01)
20210502:10:11:28 gprestore:gpadmin:gp6mdw:029875-[DEBUG]:-Error encountered when executing statement: ALTER INDEX dw.idx_dw_fact_cpu_sn_sno SET TABLESPACE tbs_ssd01; Error was: ERROR: relation "dw.idx_dw_fact_cpu_sn_sno" does not exist (SQLSTATE 42P01)
20210502:10:11:52 gprestore:gpadmin:gp6mdw:029875-[DEBUG]:-Post-data objects restored:  80% (324/404)
20210502:10:11:52 gprestore:gpadmin:gp6mdw:029875-[DEBUG]:-Error encountered when executing statement: ALTER INDEX dw.idx_dw_fact_pca_rep_mcbsno SET TABLESPACE tbs_ssd01; Error was: ERROR: relation "dw.idx_dw_fact_pca_rep_mcbsno" does not exist (SQLSTATE 42P01)
20210502:10:12:09 gprestore:gpadmin:gp6mdw:029875-[DEBUG]:-Error encountered when executing statement: ALTER INDEX dw.idx_dw_fact_sn_info_orig_mcbsno SET TABLESPACE tbs_ssd01; Error was: ERROR: relation "dw.idx_dw_fact_sn_info_orig_mcbsno" does not exist (SQLSTATE 42P01)
20210502:10:13:33 gprestore:gpadmin:gp6mdw:029875-[DEBUG]:-Error encountered when executing statement: ALTER INDEX dwfqas.idx_dwfqas_fact_fqas_lr_sn_mcbsno SET TABLESPACE tbs_ssd01; Error was: ERROR: relation "dwfqas.idx_dwfqas_fact_fqas_lr_sn_mcbsno" does not exist (SQLSTATE 42P01)
20210502:10:13:33 gprestore:gpadmin:gp6mdw:029875-[DEBUG]:-Error encountered when executing statement: ALTER INDEX dwfqas.idx_dwfqas_fact_fqas_lr_sn_old_sn SET TABLESPACE tbs_ssd01; Error was: ERROR: relation "dwfqas.idx_dwfqas_fact_fqas_lr_sn_old_sn" does not exist (SQLSTATE 42P01)
20210502:10:13:33 gprestore:gpadmin:gp6mdw:029875-[DEBUG]:-Error encountered when executing statement: ALTER INDEX dwfqas.idx_dwfqas_fact_fqas_lr_sn_orgmcbsno SET TABLESPACE tbs_ssd01; Error was: ERROR: relation "dwfqas.idx_dwfqas_fact_fqas_lr_sn_orgmcbsno" does not exist (SQLSTATE 42P01)
20210502:10:13:36 gprestore:gpadmin:gp6mdw:029875-[DEBUG]:-Error encountered when executing statement: ALTER INDEX dwfqas.idx_dwfqas_fact_fqas_pca_sn_customer SET TABLESPACE tbs_ssd01; Error was: ERROR: relation "dwfqas.idx_dwfqas_fact_fqas_pca_sn_customer" does not exist (SQLSTATE 42P01)
20210502:10:13:36 gprestore:gpadmin:gp6mdw:029875-[DEBUG]:-Error encountered when executing statement: ALTER INDEX dwfqas.idx_dwfqas_fact_fqas_pca_sn_family SET TABLESPACE tbs_ssd01; Error was: ERROR: relation "dwfqas.idx_dwfqas_fact_fqas_pca_sn_family" does not exist (SQLSTATE 42P01)
20210502:10:13:44 gprestore:gpadmin:gp6mdw:029875-[DEBUG]:-Error encountered when executing statement: ALTER INDEX dwfqas.idx_dwfqas_fact_fqas_pca_sn_invoice SET TABLESPACE tbs_ssd01; Error was: ERROR: relation "dwfqas.idx_dwfqas_fact_fqas_pca_sn_invoice" does not exist (SQLSTATE 42P01)
20210502:10:13:51 gprestore:gpadmin:gp6mdw:029875-[DEBUG]:-Error encountered when executing statement: ALTER INDEX dwfqas.idx_dwfqas_fact_fqas_pca_sn_mcbsno SET TABLESPACE tbs_ssd01; Error was: ERROR: relation "dwfqas.idx_dwfqas_fact_fqas_pca_sn_mcbsno" does not exist (SQLSTATE 42P01)
20210502:10:13:51 gprestore:gpadmin:gp6mdw:029875-[DEBUG]:-Error encountered when executing statement: ALTER INDEX dwfqas.idx_dwfqas_fact_fqas_pca_sn_model SET TABLESPACE tbs_ssd01; Error was: ERROR: relation "dwfqas.idx_dwfqas_fact_fqas_pca_sn_model" does not exist (SQLSTATE 42P01)
20210502:10:13:54 gprestore:gpadmin:gp6mdw:029875-[DEBUG]:-Error encountered when executing statement: ALTER INDEX dwfqas.idx_dwfqas_fact_fqas_pca_sn_orgdn SET TABLESPACE tbs_ssd01; Error was: ERROR: relation "dwfqas.idx_dwfqas_fact_fqas_pca_sn_orgdn" does not exist (SQLSTATE 42P01)
20210502:10:14:00 gprestore:gpadmin:gp6mdw:029875-[DEBUG]:-Error encountered when executing statement: ALTER INDEX dwfqas.idx_dwfqas_fact_fqas_pca_sn_orgmcbsno SET TABLESPACE tbs_ssd01; Error was: ERROR: relation "dwfqas.idx_dwfqas_fact_fqas_pca_sn_orgmcbsno" does not exist (SQLSTATE 42P01)
20210502:10:14:08 gprestore:gpadmin:gp6mdw:029875-[DEBUG]:-Post-data objects restored:  90% (364/404)
20210502:10:14:21 gprestore:gpadmin:gp6mdw:029875-[DEBUG]:-Error encountered when executing statement: ALTER INDEX dwfqas.idx_dwfqas_fact_pca_sn_carton SET TABLESPACE tbs_ssd01; Error was: ERROR: relation "dwfqas.idx_dwfqas_fact_pca_sn_carton" does not exist (SQLSTATE 42P01)
20210502:10:15:59 gprestore:gpadmin:gp6mdw:029875-[ERROR]:-Encountered 15 errors during metadata restore; see log file /home/gpadmin/gpAdminLogs/gprestore_20210502.log for a list of failed statements.
20210502:10:15:59 gprestore:gpadmin:gp6mdw:029875-[INFO]:-Post-data metadata restore complete
...

Incorrect metadata.sql exported by gpbackup

When using gprestore to restore the data exported by gpbackup, I encountered the following problems:

# [CRITICAL]:-ERROR: relation "data_entity_pkey" already exists (SQLSTATE 42P07)
# ALTER TABLE ONLY manager.data_entity ADD CONSTRAINT data_entity_pkey PRIMARY KEY (data_name);
# [CRITICAL]:-ERROR: relation "panelsn_pkey" already exists (SQLSTATE 42P07)
# ALTER TABLE ONLY mes.cab_selfcheck ADD CONSTRAINT panelsn_pkey PRIMARY KEY (panelid, machine_id);

At first, I suspected that the problem was caused by our own irregular usage. But after I checked the specific table creation statements, I did not find any problem with the table constraint definitions, so I suspect that gpbackup has a potential bug.

CREATE TABLE mes.cab_selfcheck
(
    panelid character varying(64) NOT NULL,
    programname character varying(64),
    line_id character varying(20),
    machine_id character varying(20) NOT NULL,
    plan_qty integer,
    actual_qty integer,
    bomlist_time timestamp without time zone,
    pcbcomponenttrace_time timestamp without time zone,
    cdt timestamp without time zone,
    CONSTRAINT cab_selfcheck_pkey PRIMARY KEY (panelid, machine_id)
)
WITH (
    OIDS = FALSE
)
TABLESPACE pg_default
DISTRIBUTED BY (panelid)
;

ALTER TABLE mes.cab_selfcheck
    OWNER to bdcenter;

metadata.sql

...
ALTER TABLE ONLY manager.data_entity ADD CONSTRAINT data_entity_pkey PRIMARY KEY (data_name);

ALTER TABLE ONLY manager.data_entity_gptable ADD CONSTRAINT data_entity_pkey PRIMARY KEY (database_name, schema_name, table_name);
...

Error when Downloading

go get github.com/greenplum-db/gpbackup/...
# github.com/greenplum-db/gpbackup/restore
/usr/local/go/workspace/src/github.com/greenplum-db/gpbackup/restore/data.go:46:11: undefined: pgx.PgError
/usr/local/go/workspace/src/github.com/greenplum-db/gpbackup/restore/data.go:47:48: undefined: pgx.PgError

gpbackup stuck on `idle in transaction`

Bug Report

Greenplum version or build

  • GP 6.20.5 - 6.23.0
  • gpbackup 1.25.0

Actual behavior

[Screenshot: 2022-07-19 15-23-25]

When using the latest version of gpbackup, 1.25.0, to back up the database, the backup process is blocked. Checking the SQL being executed in the database shows that the SQL executed by gpbackup is in the idle in transaction state. I don't understand why the transaction is not committed at the end.

Check:

select pid,
       datname,
       usename,
       client_addr,
       application_name,
       query,
       waiting,
       backend_xid,
       xact_stay,
       query_stay,
       state,
       waiting_reason
from
  (select pid,
          datname,
          usename,
          client_addr,
          application_name,
          waiting,
          waiting_reason,
          backend_start,
          backend_xid,
          xact_start,
          age(now(), xact_start) as xact_stay,
          query_start,
          age(now(), query_start) as query_stay,
          query,
          state
   from pg_stat_activity
   where state != 'idle' 
    -- and usename != 'pgexporter' and usename != 'replicator'
    -- and now()-query_start > interval '1 second'
    -- and wait_event not in ('WalSenderMain', 'WalSenderWaitForWAL')
  ) idleconnections
order by query_stay asc
-- limit 10;

Result:

[Screenshot: 2022-07-19 15-20-49]

data-1658215049134.csv

`gpbackup` does not copy `gpbackup_history.yaml` to smdw host

Hi,

One of our subscribed customers previously performed a high availability test on the mdw/smdw nodes.

They found that the gpbackup_history.yaml file "disappears" on smdw, which is a serious problem because they use the DDBoost plugin and the backup file management tool gpbackup_manager relies entirely on this file.

Can the gpbackup team fix this bug?

Thanks.

Wrong version check on greenplum/gpbackup

The actual problem: if you compile Greenplum, say version 5.10.0, and then try to run gpbackup on it, gpbackup shows an error that the version has to be older than 5.0.0 and does not work. Is it possible to fix this in Greenplum, or in gpbackup?

gprestore. Error using options --redirect-schema with --metadata-only together.

Hello. When using Greenplum 5.28, I encountered unexpected behavior in the gprestore utility, version 1.20.1. If both --redirect-schema and --metadata-only are specified together, gprestore returns an error like
"gprestore:<user>:<master hostname>:025223-[CRITICAL]:-The following flags may not be specified together: truncate-table, metadata-only, incremental, redirect-schema".
I checked the same thing with gprestore version 1.17 (from gpbackup 5.25), and it works correctly. There are also no restrictions on using these options together in the documentation (https://gpdb.docs.pivotal.io/backup-restore/1-20/utility_guide/ref/gprestore.html).
Please tell me whether this is a bug or whether gprestore has some hidden limitation on using --redirect-schema and --metadata-only together.

gpbackup build failing with version=1.15.0+dev.39.gc8ab236

When building gpbackup, the process fails with the error 'warning: pattern "all" matched no module dependencies' when running "make depend" on Ubuntu 16.04.

I have built gpbackup many times successfully in the past; the last time, though, was on 1 Nov with version=1.15.0+dev.3.gffbfe2a, so some change to the Makefile since that date seems to have caused this error.

Is there a way to specify the gpbackup version when installing so that I can continue to use the previous gpbackup version?

`gpbackup --include-schema` backup does not contain created extensions

Hi,

It seems that gpbackup cannot back up the plpythonu extension when the --include-schema option is specified.

Test below:

  1. Back up the specified schema(s) of a database with plpythonu enabled.
[gpadmin@gpdb ~]$ createdb test
[gpadmin@gpdb ~]$ psql -d test -c 'create extension plpythonu;'
CREATE EXTENSION
[gpadmin@gpdb ~]$ psql -d test -c 'create schema xxx;'
CREATE SCHEMA
[gpadmin@gpdb ~]$ 
[gpadmin@gpdb ~]$ gpbackup --include-schema xxx --dbname test
20210713:15:18:38 gpbackup:gpadmin:gpdb:001410-[INFO]:-gpbackup version = 1.20.4
20210713:15:18:38 gpbackup:gpadmin:gpdb:001410-[INFO]:-Greenplum Database Version = 6.16.2 build commit:950f103fe23180ed0148552ef08cd62cbbf5a681 Open Source
20210713:15:18:38 gpbackup:gpadmin:gpdb:001410-[INFO]:-Starting backup of database test
20210713:15:18:38 gpbackup:gpadmin:gpdb:001410-[INFO]:-Backup Timestamp = 20210713151838
20210713:15:18:38 gpbackup:gpadmin:gpdb:001410-[INFO]:-Backup Database = test
20210713:15:18:38 gpbackup:gpadmin:gpdb:001410-[INFO]:-Gathering table state information
20210713:15:18:38 gpbackup:gpadmin:gpdb:001410-[INFO]:-Acquiring ACCESS SHARE locks on tables
20210713:15:18:38 gpbackup:gpadmin:gpdb:001410-[INFO]:-Gathering additional table metadata
20210713:15:18:39 gpbackup:gpadmin:gpdb:001410-[INFO]:-Getting partition definitions
20210713:15:18:39 gpbackup:gpadmin:gpdb:001410-[INFO]:-Getting storage information
20210713:15:18:39 gpbackup:gpadmin:gpdb:001410-[INFO]:-Getting child partitions with altered schema
20210713:15:18:39 gpbackup:gpadmin:gpdb:001410-[WARNING]:-No tables in backup set contain data. Performing metadata-only backup instead.
20210713:15:18:39 gpbackup:gpadmin:gpdb:001410-[INFO]:-Metadata will be written to /data/master/gpseg-1/backups/20210713/20210713151838/gpbackup_20210713151838_metadata.sql
20210713:15:18:39 gpbackup:gpadmin:gpdb:001410-[INFO]:-Writing global database metadata
20210713:15:18:39 gpbackup:gpadmin:gpdb:001410-[INFO]:-Global database metadata backup complete
20210713:15:18:39 gpbackup:gpadmin:gpdb:001410-[INFO]:-Writing pre-data metadata
20210713:15:18:39 gpbackup:gpadmin:gpdb:001410-[INFO]:-Pre-data metadata metadata backup complete
20210713:15:18:39 gpbackup:gpadmin:gpdb:001410-[INFO]:-Writing post-data metadata
20210713:15:18:39 gpbackup:gpadmin:gpdb:001410-[INFO]:-Post-data metadata backup complete
20210713:15:18:40 gpbackup:gpadmin:gpdb:001410-[INFO]:-Found neither /usr/local/greenplum-db-6.16.2/bin/gp_email_contacts.yaml nor /home/gpadmin/gp_email_contacts.yaml
20210713:15:18:40 gpbackup:gpadmin:gpdb:001410-[INFO]:-Email containing gpbackup report /data/master/gpseg-1/backups/20210713/20210713151838/gpbackup_20210713151838_report will not be sent
20210713:15:18:40 gpbackup:gpadmin:gpdb:001410-[INFO]:-Backup completed successfully
[gpadmin@gpdb ~]$ 
  2. After restoring (into test2):
[gpadmin@gpdb ~]$ psql -d test -c 'select * from pg_extension'
  extname  | extowner | extnamespace | extrelocatable | extversion | extconfig | extcondition 
-----------+----------+--------------+----------------+------------+-----------+--------------
 plpgsql   |       10 |           11 | f              | 1.0        |           | 
 plpythonu |       10 |           11 | f              | 1.0        |           | 
(2 rows)

[gpadmin@gpdb ~]$ psql -d test2 -c 'select * from pg_extension'
 extname | extowner | extnamespace | extrelocatable | extversion | extconfig | extcondition 
---------+----------+--------------+----------------+------------+-----------+--------------
 plpgsql |       10 |           11 | f              | 1.0        |           | 
(1 row)

[gpadmin@gpdb ~]$ 
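A possible workaround in the meantime (a sketch, assuming the restore target is the test2 database shown above and that plpython is installed on the target cluster) is to create the extension manually on the target database before restoring the schema:

psql -d test2 -c 'CREATE EXTENSION IF NOT EXISTS plpythonu;'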

Maybe this issue should be fixed.

Best Regards.

gprestore makes no progress and may be hung

Hello:

I used gpbackup to back up my Greenplum cluster data and then restored it with gprestore in debug mode, but after more than 2 hours there was no log output to my console. I then used pstack to analyze the process; the output is the following:

[root@master1 ~]# /home/gpadmin/backup_restore/bin/gprestore --version
gprestore version 1.20.2
[root@master1 ~]# ps aux | grep gpadmin | grep gprestore
gpadmin   8702  0.0  0.1 1007804 38300 ?       Sl   15:42   0:05 /home/gpadmin/backup_restore/bin/gprestore --timestamp 20210121114808 --backup-dir /data/gpdb/backup --debug
[root@master1 ~]# pstack 8702
Thread 21 (Thread 0x7fd47e67f700 (LWP 8703)):
#0  runtime.futex () at /usr/local/go/src/runtime/sys_linux_amd64.s:536
#1  0x000000000042cb14 in runtime.futexsleep (addr=0xe21c90 <runtime.sched+272>, val=0, ns=60000000000) at /usr/local/go/src/runtime/os_linux.go:50
#2  0x000000000040c73e in runtime.notetsleep_internal (n=0xe21c90 <runtime.sched+272>, ns=60000000000, ~r2=<optimized out>) at /usr/local/go/src/runtime/lock_futex.go:193
#3  0x000000000040c811 in runtime.notetsleep (n=0xe21c90 <runtime.sched+272>, ns=60000000000, ~r2=<optimized out>) at /usr/local/go/src/runtime/lock_futex.go:216
#4  0x000000000043b66e in runtime.sysmon () at /usr/local/go/src/runtime/proc.go:4316
#5  0x0000000000433c23 in runtime.mstart1 () at /usr/local/go/src/runtime/proc.go:1201
#6  0x0000000000433b3e in runtime.mstart () at /usr/local/go/src/runtime/proc.go:1167
#7  0x0000000000401a83 in runtime/cgo(.text) ()
#8  0x00007fd47e67f700 in ?? ()
#9  0x0000000000000000 in ?? ()
Thread 20 (Thread 0x7fd47de7e700 (LWP 8704)):
#0  runtime.epollwait () at /usr/local/go/src/runtime/sys_linux_amd64.s:673
#1  0x000000000042c950 in runtime.netpoll (block=true, ~r1=...) at /usr/local/go/src/runtime/netpoll_epoll.go:71
#2  0x0000000000436115 in runtime.findrunnable (gp=0xc000036500, inheritTime=false) at /usr/local/go/src/runtime/proc.go:2372
#3  0x0000000000436dee in runtime.schedule () at /usr/local/go/src/runtime/proc.go:2524
#4  0x000000000043712d in runtime.park_m (gp=0xc00012cc00) at /usr/local/go/src/runtime/proc.go:2610
#5  0x000000000045a10b in runtime.mcall () at /usr/local/go/src/runtime/asm_amd64.s:318
#6  0x0000000000000000 in ?? ()
Thread 19 (Thread 0x7fd47d67d700 (LWP 8705)):
#0  runtime.futex () at /usr/local/go/src/runtime/sys_linux_amd64.s:536
#1  0x000000000042ca96 in runtime.futexsleep (addr=0xc000058bc8, val=0, ns=-1) at /usr/local/go/src/runtime/os_linux.go:44
#2  0x000000000040c5bf in runtime.notesleep (n=0xc000058bc8) at /usr/local/go/src/runtime/lock_futex.go:151
#3  0x0000000000435020 in runtime.stopm () at /usr/local/go/src/runtime/proc.go:1928
#4  0x000000000043613f in runtime.findrunnable (gp=0xc00003af00, inheritTime=false) at /usr/local/go/src/runtime/proc.go:2391
#5  0x0000000000436dee in runtime.schedule () at /usr/local/go/src/runtime/proc.go:2524
#6  0x0000000000437816 in runtime.goexit0 (gp=0xc00012cf00) at /usr/local/go/src/runtime/proc.go:2727
#7  0x000000000045a10b in runtime.mcall () at /usr/local/go/src/runtime/asm_amd64.s:318
#8  0x0000000000000000 in ?? ()
Thread 18 (Thread 0x7fd47ce7c700 (LWP 8706)):
#0  runtime.futex () at /usr/local/go/src/runtime/sys_linux_amd64.s:536
#1  0x000000000042ca96 in runtime.futexsleep (addr=0xe3df40 <runtime.sig>, val=0, ns=-1) at /usr/local/go/src/runtime/os_linux.go:44
#2  0x000000000040c696 in runtime.notetsleep_internal (n=0xe3df40 <runtime.sig>, ns=-1, ~r2=<optimized out>) at /usr/local/go/src/runtime/lock_futex.go:174
#3  0x000000000040c89c in runtime.notetsleepg (n=0xe3df40 <runtime.sig>, ns=-1, ~r2=<optimized out>) at /usr/local/go/src/runtime/lock_futex.go:228
#4  0x000000000044492c in os/signal.signal_recv (~r0=<optimized out>) at /usr/local/go/src/runtime/sigqueue.go:147
#5  0x000000000087bd22 in os/signal.loop () at /usr/local/go/src/os/signal/signal_unix.go:23
#6  0x000000000045c1f1 in runtime.goexit () at /usr/local/go/src/runtime/asm_amd64.s:1357
#7  0x0000000000000000 in ?? ()
Thread 17 (Thread 0x7fd477fff700 (LWP 8707)):
#0  runtime.futex () at /usr/local/go/src/runtime/sys_linux_amd64.s:536
#1  0x000000000042ca96 in runtime.futexsleep (addr=0xe3de58 <runtime.newmHandoff+24>, val=0, ns=-1) at /usr/local/go/src/runtime/os_linux.go:44
#2  0x000000000040c5bf in runtime.notesleep (n=0xe3de58 <runtime.newmHandoff+24>) at /usr/local/go/src/runtime/lock_futex.go:151
#3  0x0000000000434f42 in runtime.templateThread () at /usr/local/go/src/runtime/proc.go:1906
#4  0x0000000000433c23 in runtime.mstart1 () at /usr/local/go/src/runtime/proc.go:1201
#5  0x0000000000433b3e in runtime.mstart () at /usr/local/go/src/runtime/proc.go:1167
#6  0x0000000000401a83 in runtime/cgo(.text) ()
#7  0x00007fd477fff700 in ?? ()
#8  0x0000000000000000 in ?? ()
Thread 16 (Thread 0x7fd4777fe700 (LWP 8708)):
#0  runtime.futex () at /usr/local/go/src/runtime/sys_linux_amd64.s:536
#1  0x000000000042ca96 in runtime.futexsleep (addr=0xc00012a148, val=0, ns=-1) at /usr/local/go/src/runtime/os_linux.go:44
#2  0x000000000040c5bf in runtime.notesleep (n=0xc00012a148) at /usr/local/go/src/runtime/lock_futex.go:151
#3  0x0000000000435020 in runtime.stopm () at /usr/local/go/src/runtime/proc.go:1928
#4  0x000000000043613f in runtime.findrunnable (gp=0xc00003af00, inheritTime=false) at /usr/local/go/src/runtime/proc.go:2391
#5  0x0000000000436dee in runtime.schedule () at /usr/local/go/src/runtime/proc.go:2524
#6  0x000000000043712d in runtime.park_m (gp=0xc000000180) at /usr/local/go/src/runtime/proc.go:2610
#7  0x000000000045a10b in runtime.mcall () at /usr/local/go/src/runtime/asm_amd64.s:318
#8  0x0000000000000000 in ?? ()
Thread 15 (Thread 0x7fd476ffd700 (LWP 8709)):
#0  runtime.futex () at /usr/local/go/src/runtime/sys_linux_amd64.s:536
#1  0x000000000042ca96 in runtime.futexsleep (addr=0xc00012a4c8, val=0, ns=-1) at /usr/local/go/src/runtime/os_linux.go:44
#2  0x000000000040c5bf in runtime.notesleep (n=0xc00012a4c8) at /usr/local/go/src/runtime/lock_futex.go:151
#3  0x0000000000435020 in runtime.stopm () at /usr/local/go/src/runtime/proc.go:1928
#4  0x000000000043613f in runtime.findrunnable (gp=0xc000036500, inheritTime=false) at /usr/local/go/src/runtime/proc.go:2391
#5  0x0000000000436dee in runtime.schedule () at /usr/local/go/src/runtime/proc.go:2524
#6  0x000000000043712d in runtime.park_m (gp=0xc000000180) at /usr/local/go/src/runtime/proc.go:2610
#7  0x000000000045a10b in runtime.mcall () at /usr/local/go/src/runtime/asm_amd64.s:318
#8  0x0000000000000000 in ?? ()
Thread 14 (Thread 0x7fd4767fc700 (LWP 8712)):
#0  runtime.futex () at /usr/local/go/src/runtime/sys_linux_amd64.s:536
#1  0x000000000042ca96 in runtime.futexsleep (addr=0xc0000864c8, val=0, ns=-1) at /usr/local/go/src/runtime/os_linux.go:44
#2  0x000000000040c5bf in runtime.notesleep (n=0xc0000864c8) at /usr/local/go/src/runtime/lock_futex.go:151
#3  0x0000000000435020 in runtime.stopm () at /usr/local/go/src/runtime/proc.go:1928
#4  0x000000000043613f in runtime.findrunnable (gp=0xc00003af00, inheritTime=false) at /usr/local/go/src/runtime/proc.go:2391
#5  0x0000000000436dee in runtime.schedule () at /usr/local/go/src/runtime/proc.go:2524
#6  0x000000000043712d in runtime.park_m (gp=0xc00009be00) at /usr/local/go/src/runtime/proc.go:2610
#7  0x000000000045a10b in runtime.mcall () at /usr/local/go/src/runtime/asm_amd64.s:318
#8  0x0000000000000000 in ?? ()
Thread 13 (Thread 0x7fd475ffb700 (LWP 8716)):
#0  runtime.futex () at /usr/local/go/src/runtime/sys_linux_amd64.s:536
#1  0x000000000042ca96 in runtime.futexsleep (addr=0xc00012a848, val=0, ns=-1) at /usr/local/go/src/runtime/os_linux.go:44
#2  0x000000000040c5bf in runtime.notesleep (n=0xc00012a848) at /usr/local/go/src/runtime/lock_futex.go:151
#3  0x0000000000435020 in runtime.stopm () at /usr/local/go/src/runtime/proc.go:1928
#4  0x000000000043613f in runtime.findrunnable (gp=0xc000038a00, inheritTime=false) at /usr/local/go/src/runtime/proc.go:2391
#5  0x0000000000436dee in runtime.schedule () at /usr/local/go/src/runtime/proc.go:2524
#6  0x000000000043712d in runtime.park_m (gp=0xc000001080) at /usr/local/go/src/runtime/proc.go:2610
#7  0x000000000045a10b in runtime.mcall () at /usr/local/go/src/runtime/asm_amd64.s:318
#8  0x0000000000000000 in ?? ()
Thread 12 (Thread 0x7fd4757fa700 (LWP 8718)):
#0  runtime.futex () at /usr/local/go/src/runtime/sys_linux_amd64.s:536
#1  0x000000000042ca96 in runtime.futexsleep (addr=0xc000058f48, val=0, ns=-1) at /usr/local/go/src/runtime/os_linux.go:44
#2  0x000000000040c5bf in runtime.notesleep (n=0xc000058f48) at /usr/local/go/src/runtime/lock_futex.go:151
#3  0x0000000000435020 in runtime.stopm () at /usr/local/go/src/runtime/proc.go:1928
#4  0x000000000043613f in runtime.findrunnable (gp=0xc000036500, inheritTime=false) at /usr/local/go/src/runtime/proc.go:2391
#5  0x0000000000436dee in runtime.schedule () at /usr/local/go/src/runtime/proc.go:2524
#6  0x000000000043712d in runtime.park_m (gp=0xc00009b200) at /usr/local/go/src/runtime/proc.go:2610
#7  0x000000000045a10b in runtime.mcall () at /usr/local/go/src/runtime/asm_amd64.s:318
#8  0x0000000000000000 in ?? ()
Thread 11 (Thread 0x7fd474ff9700 (LWP 8720)):
#0  runtime.futex () at /usr/local/go/src/runtime/sys_linux_amd64.s:536
#1  0x000000000042ca96 in runtime.futexsleep (addr=0xc0002ea148, val=0, ns=-1) at /usr/local/go/src/runtime/os_linux.go:44
#2  0x000000000040c5bf in runtime.notesleep (n=0xc0002ea148) at /usr/local/go/src/runtime/lock_futex.go:151
#3  0x0000000000435020 in runtime.stopm () at /usr/local/go/src/runtime/proc.go:1928
#4  0x000000000043613f in runtime.findrunnable (gp=0xc000036500, inheritTime=false) at /usr/local/go/src/runtime/proc.go:2391
#5  0x0000000000436dee in runtime.schedule () at /usr/local/go/src/runtime/proc.go:2524
#6  0x000000000043712d in runtime.park_m (gp=0xc000384f00) at /usr/local/go/src/runtime/proc.go:2610
#7  0x000000000045a10b in runtime.mcall () at /usr/local/go/src/runtime/asm_amd64.s:318
#8  0x0000000000000000 in ?? ()
Thread 10 (Thread 0x7fd45ffff700 (LWP 8722)):
#0  runtime.futex () at /usr/local/go/src/runtime/sys_linux_amd64.s:536
#1  0x000000000042ca96 in runtime.futexsleep (addr=0xc000304148, val=0, ns=-1) at /usr/local/go/src/runtime/os_linux.go:44
#2  0x000000000040c5bf in runtime.notesleep (n=0xc000304148) at /usr/local/go/src/runtime/lock_futex.go:151
#3  0x0000000000435020 in runtime.stopm () at /usr/local/go/src/runtime/proc.go:1928
#4  0x000000000043613f in runtime.findrunnable (gp=0xc000036500, inheritTime=false) at /usr/local/go/src/runtime/proc.go:2391
#5  0x0000000000436dee in runtime.schedule () at /usr/local/go/src/runtime/proc.go:2524
#6  0x000000000043712d in runtime.park_m (gp=0xc00009a900) at /usr/local/go/src/runtime/proc.go:2610
#7  0x000000000045a10b in runtime.mcall () at /usr/local/go/src/runtime/asm_amd64.s:318
#8  0x0000000000000000 in ?? ()
Thread 9 (Thread 0x7fd45f7fe700 (LWP 8726)):
#0  runtime.futex () at /usr/local/go/src/runtime/sys_linux_amd64.s:536
#1  0x000000000042ca96 in runtime.futexsleep (addr=0xc0000879c8, val=0, ns=-1) at /usr/local/go/src/runtime/os_linux.go:44
#2  0x000000000040c5bf in runtime.notesleep (n=0xc0000879c8) at /usr/local/go/src/runtime/lock_futex.go:151
#3  0x0000000000435020 in runtime.stopm () at /usr/local/go/src/runtime/proc.go:1928
#4  0x000000000043613f in runtime.findrunnable (gp=0xc00003af00, inheritTime=false) at /usr/local/go/src/runtime/proc.go:2391
#5  0x0000000000436dee in runtime.schedule () at /usr/local/go/src/runtime/proc.go:2524
#6  0x000000000043712d in runtime.park_m (gp=0xc00009be00) at /usr/local/go/src/runtime/proc.go:2610
#7  0x000000000045a10b in runtime.mcall () at /usr/local/go/src/runtime/asm_amd64.s:318
#8  0x0000000000000000 in ?? ()
Thread 8 (Thread 0x7fd45effd700 (LWP 8729)):
#0  runtime.futex () at /usr/local/go/src/runtime/sys_linux_amd64.s:536
#1  0x000000000042ca96 in runtime.futexsleep (addr=0xc0000599c8, val=0, ns=-1) at /usr/local/go/src/runtime/os_linux.go:44
#2  0x000000000040c5bf in runtime.notesleep (n=0xc0000599c8) at /usr/local/go/src/runtime/lock_futex.go:151
#3  0x0000000000435020 in runtime.stopm () at /usr/local/go/src/runtime/proc.go:1928
#4  0x000000000043613f in runtime.findrunnable (gp=0xc00003af00, inheritTime=false) at /usr/local/go/src/runtime/proc.go:2391
#5  0x0000000000436dee in runtime.schedule () at /usr/local/go/src/runtime/proc.go:2524
#6  0x000000000043712d in runtime.park_m (gp=0xc00009be00) at /usr/local/go/src/runtime/proc.go:2610
#7  0x000000000045a10b in runtime.mcall () at /usr/local/go/src/runtime/asm_amd64.s:318
#8  0x0000000000000000 in ?? ()
Thread 7 (Thread 0x7fd45e7fc700 (LWP 8734)):
#0  runtime.futex () at /usr/local/go/src/runtime/sys_linux_amd64.s:536
#1  0x000000000042ca96 in runtime.futexsleep (addr=0xc000059d48, val=0, ns=-1) at /usr/local/go/src/runtime/os_linux.go:44
#2  0x000000000040c5bf in runtime.notesleep (n=0xc000059d48) at /usr/local/go/src/runtime/lock_futex.go:151
#3  0x0000000000435020 in runtime.stopm () at /usr/local/go/src/runtime/proc.go:1928
#4  0x000000000043613f in runtime.findrunnable (gp=0xc000034000, inheritTime=false) at /usr/local/go/src/runtime/proc.go:2391
#5  0x0000000000436dee in runtime.schedule () at /usr/local/go/src/runtime/proc.go:2524
#6  0x000000000043712d in runtime.park_m (gp=0xc00009b980) at /usr/local/go/src/runtime/proc.go:2610
#7  0x000000000045a10b in runtime.mcall () at /usr/local/go/src/runtime/asm_amd64.s:318
#8  0x0000000000000000 in ?? ()
Thread 6 (Thread 0x7fd45dffb700 (LWP 8736)):
#0  runtime.futex () at /usr/local/go/src/runtime/sys_linux_amd64.s:536
#1  0x000000000042ca96 in runtime.futexsleep (addr=0xc0003a4148, val=0, ns=-1) at /usr/local/go/src/runtime/os_linux.go:44
#2  0x000000000040c5bf in runtime.notesleep (n=0xc0003a4148) at /usr/local/go/src/runtime/lock_futex.go:151
#3  0x0000000000435020 in runtime.stopm () at /usr/local/go/src/runtime/proc.go:1928
#4  0x000000000043613f in runtime.findrunnable (gp=0xc000034000, inheritTime=false) at /usr/local/go/src/runtime/proc.go:2391
#5  0x0000000000436dee in runtime.schedule () at /usr/local/go/src/runtime/proc.go:2524
#6  0x000000000043712d in runtime.park_m (gp=0xc000000180) at /usr/local/go/src/runtime/proc.go:2610
#7  0x000000000045a10b in runtime.mcall () at /usr/local/go/src/runtime/asm_amd64.s:318
#8  0x0000000000000000 in ?? ()
Thread 5 (Thread 0x7fd45d7fa700 (LWP 8741)):
#0  runtime.futex () at /usr/local/go/src/runtime/sys_linux_amd64.s:536
#1  0x000000000042ca96 in runtime.futexsleep (addr=0xc000087d48, val=0, ns=-1) at /usr/local/go/src/runtime/os_linux.go:44
#2  0x000000000040c5bf in runtime.notesleep (n=0xc000087d48) at /usr/local/go/src/runtime/lock_futex.go:151
#3  0x0000000000435020 in runtime.stopm () at /usr/local/go/src/runtime/proc.go:1928
#4  0x000000000043613f in runtime.findrunnable (gp=0xc000036500, inheritTime=false) at /usr/local/go/src/runtime/proc.go:2391
#5  0x0000000000436dee in runtime.schedule () at /usr/local/go/src/runtime/proc.go:2524
#6  0x000000000043712d in runtime.park_m (gp=0xc00012cf00) at /usr/local/go/src/runtime/proc.go:2610
#7  0x000000000045a10b in runtime.mcall () at /usr/local/go/src/runtime/asm_amd64.s:318
#8  0x0000000000000000 in ?? ()
Thread 4 (Thread 0x7fd45cff9700 (LWP 8744)):
#0  runtime.futex () at /usr/local/go/src/runtime/sys_linux_amd64.s:536
#1  0x000000000042cb14 in runtime.futexsleep (addr=0xe25420 <runtime.timers+288>, val=0, ns=199991693) at /usr/local/go/src/runtime/os_linux.go:50
#2  0x000000000040c73e in runtime.notetsleep_internal (n=0xe25420 <runtime.timers+288>, ns=199991693, ~r2=<optimized out>) at /usr/local/go/src/runtime/lock_futex.go:193
#3  0x000000000040c89c in runtime.notetsleepg (n=0xe25420 <runtime.timers+288>, ns=199991693, ~r2=<optimized out>) at /usr/local/go/src/runtime/lock_futex.go:228
#4  0x000000000044cf51 in runtime.timerproc (tb=0xe25400 <runtime.timers+256>) at /usr/local/go/src/runtime/time.go:311
#5  0x000000000045c1f1 in runtime.goexit () at /usr/local/go/src/runtime/asm_amd64.s:1357
#6  0x0000000000e25400 in runtime.timers ()
#7  0x0000000000000000 in ?? ()
Thread 3 (Thread 0x7fd44bfff700 (LWP 8747)):
#0  runtime.futex () at /usr/local/go/src/runtime/sys_linux_amd64.s:536
#1  0x000000000042ca96 in runtime.futexsleep (addr=0xc0003c4148, val=0, ns=-1) at /usr/local/go/src/runtime/os_linux.go:44
#2  0x000000000040c5bf in runtime.notesleep (n=0xc0003c4148) at /usr/local/go/src/runtime/lock_futex.go:151
#3  0x0000000000435020 in runtime.stopm () at /usr/local/go/src/runtime/proc.go:1928
#4  0x000000000043613f in runtime.findrunnable (gp=0xc00003af00, inheritTime=false) at /usr/local/go/src/runtime/proc.go:2391
#5  0x0000000000436dee in runtime.schedule () at /usr/local/go/src/runtime/proc.go:2524
#6  0x000000000043712d in runtime.park_m (gp=0xc000000180) at /usr/local/go/src/runtime/proc.go:2610
#7  0x000000000045a10b in runtime.mcall () at /usr/local/go/src/runtime/asm_amd64.s:318
#8  0x0000000000000000 in ?? ()
Thread 2 (Thread 0x7fd44b7fe700 (LWP 8926)):
#0  runtime.futex () at /usr/local/go/src/runtime/sys_linux_amd64.s:536
#1  0x000000000042ca96 in runtime.futexsleep (addr=0xc000098848, val=0, ns=-1) at /usr/local/go/src/runtime/os_linux.go:44
#2  0x000000000040c5bf in runtime.notesleep (n=0xc000098848) at /usr/local/go/src/runtime/lock_futex.go:151
#3  0x0000000000435020 in runtime.stopm () at /usr/local/go/src/runtime/proc.go:1928
#4  0x000000000043613f in runtime.findrunnable (gp=0xc000036500, inheritTime=false) at /usr/local/go/src/runtime/proc.go:2391
#5  0x0000000000436dee in runtime.schedule () at /usr/local/go/src/runtime/proc.go:2524
#6  0x0000000000437327 in runtime.goschedImpl (gp=0xc000384f00) at /usr/local/go/src/runtime/proc.go:2625
#7  0x0000000000437574 in runtime.gopreempt_m (gp=0xc000384f00) at /usr/local/go/src/runtime/proc.go:2653
#8  0x0000000000446f09 in runtime.newstack () at /usr/local/go/src/runtime/stack.go:1038
#9  0x000000000045a26f in runtime.morestack () at /usr/local/go/src/runtime/asm_amd64.s:449
#10 0x0000000000000000 in ?? ()
Thread 1 (Thread 0x7fd48108f740 (LWP 8702)):
#0  runtime.futex () at /usr/local/go/src/runtime/sys_linux_amd64.s:536
#1  0x000000000042ca96 in runtime.futexsleep (addr=0xe22748 <runtime.m0+328>, val=0, ns=-1) at /usr/local/go/src/runtime/os_linux.go:44
#2  0x000000000040c5bf in runtime.notesleep (n=0xe22748 <runtime.m0+328>) at /usr/local/go/src/runtime/lock_futex.go:151
#3  0x0000000000435688 in runtime.stoplockedm () at /usr/local/go/src/runtime/proc.go:2068
#4  0x0000000000436fb5 in runtime.schedule () at /usr/local/go/src/runtime/proc.go:2469
#5  0x000000000043712d in runtime.park_m (gp=0xc00009a600) at /usr/local/go/src/runtime/proc.go:2610
#6  0x000000000045a10b in runtime.mcall () at /usr/local/go/src/runtime/asm_amd64.s:318
#7  0x000000000045a024 in runtime.rt0_go () at /usr/local/go/src/runtime/asm_amd64.s:220
#8  0x0000000000000000 in ?? ()
[root@master1 ~]#

Can anyone help? Is the process hung?
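One diagnostic worth trying (a sketch: by default a Go binary dumps all goroutine stacks and exits when it receives SIGQUIT, which is usually more informative than pstack's OS-thread view; note this kills the process, and it will not work if gprestore traps SIGQUIT itself):

# on master1, against the gprestore pid shown above
kill -QUIT 8702
# the goroutine dump is written to gprestore's stderr (the terminal or log it was started from)

The resulting stacks show which gpbackup/gprestore functions the goroutines are blocked in, rather than only Go runtime frames.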

gprestore is interrupted when restoring indexes

When restoring indexes, the following errors appear. The related logs and SQL are as follows:

  • gprestore log
20201018:14:06:34 gprestore:gpadmin:gp6mdw:024511-[DEBUG]:-Restore Command: [gprestore --backup-dir /gpbackup/20201016185432 --timestamp 20201016185716 --metadata-only --create-db --jobs 8]
20201018:14:06:34 gprestore:gpadmin:gp6mdw:024511-[INFO]:-Restore Key = 20201016185716
20201018:14:06:35 gprestore:gpadmin:gp6mdw:024511-[DEBUG]:-Metadata will be restored from /gpbackup/20201016185432/gpseg-1/backups/20201016/20201016185716/gpbackup_20201016185716_metadata.sql
20201018:14:06:35 gprestore:gpadmin:gp6mdw:024511-[INFO]:-Creating database
20201018:14:06:37 gprestore:gpadmin:gp6mdw:024511-[INFO]:-Database creation complete for: "F6_BDC"
20201018:14:06:38 gprestore:gpadmin:gp6mdw:024511-[WARNING]:-This backup set was taken on a version of Greenplum prior to 6.x. This restore will use the legacy hash operators when loading data.
20201018:14:06:38 gprestore:gpadmin:gp6mdw:024511-[WARNING]:-To use the new Greenplum 6.x default hash operators, these tables will need to be redistributed.
20201018:14:06:38 gprestore:gpadmin:gp6mdw:024511-[WARNING]:-For more information, refer to the migration guide located as https://docs.greenplum.org/latest/install_guide/migrate.html.
20201018:14:06:38 gprestore:gpadmin:gp6mdw:024511-[INFO]:-Restoring pre-data metadata
20201018:14:07:02 gprestore:gpadmin:gp6mdw:024511-[DEBUG]:-Pre-data objects restored:  10% (199/1981)
20201018:14:07:27 gprestore:gpadmin:gp6mdw:024511-[DEBUG]:-Pre-data objects restored:  20% (397/1981)
20201018:14:07:59 gprestore:gpadmin:gp6mdw:024511-[DEBUG]:-Pre-data objects restored:  30% (595/1981)
20201018:14:08:24 gprestore:gpadmin:gp6mdw:024511-[DEBUG]:-Pre-data objects restored:  40% (793/1981)
20201018:14:08:56 gprestore:gpadmin:gp6mdw:024511-[DEBUG]:-Pre-data objects restored:  50% (991/1981)
20201018:14:12:40 gprestore:gpadmin:gp6mdw:024511-[DEBUG]:-Pre-data objects restored:  60% (1189/1981)
20201018:14:14:33 gprestore:gpadmin:gp6mdw:024511-[DEBUG]:-Pre-data objects restored:  70% (1387/1981)
20201018:14:16:08 gprestore:gpadmin:gp6mdw:024511-[DEBUG]:-Pre-data objects restored:  80% (1585/1981)
20201018:14:17:03 gprestore:gpadmin:gp6mdw:024511-[DEBUG]:-Pre-data objects restored:  90% (1783/1981)
20201018:14:17:35 gprestore:gpadmin:gp6mdw:024511-[DEBUG]:-Pre-data objects restored:  100% (1981/1981)
20201018:14:17:35 gprestore:gpadmin:gp6mdw:024511-[INFO]:-Pre-data metadata restore complete
20201018:14:17:35 gprestore:gpadmin:gp6mdw:024511-[INFO]:-Restoring post-data metadata
20201018:14:17:35 gprestore:gpadmin:gp6mdw:024511-[DEBUG]:-Error encountered when executing statement: _ssd01;

CREATE INDEX fact_shipping_info_wanglin2_mcbsno_pack_date ON dw.fact_shipping_info_wanglin2 USING btree (mcbsno, pack Error was: ERROR: syntax error at or near "_ssd01" (SQLSTATE 42601)
20201018:14:17:35 gprestore:gpadmin:gp6mdw:024511-[DEBUG]:-Error encountered when executing statement: _ssd01;

CREATE INDEX fact_shipping_info_rma_mcbsno_rma_no ON dw.fact_shipping_info_rma USING btree (mcbsno, r Error was: ERROR: syntax error at or near "_ssd01" (SQLSTATE 42601)
20201018:14:17:35 gprestore:gpadmin:gp6mdw:024511-[DEBUG]:-Error encountered when executing statement: (id);


CREATE INDEX fact_shipping_info_lr_combine_mcbsno_rma_no ON dw.fact_shipping_info_lr_combine USING btree (mcbsno, r Error was: ERROR: syntax error at or near "id" (SQLSTATE 42601)
20201018:14:17:35 gprestore:gpadmin:gp6mdw:024511-[DEBUG]:-Error encountered when executing statement: (cdt);

CREATE INDEX idx_dw_customer_datasource_customer ON dw.customer_datasource USING btree (cus Error was: ERROR: syntax error at or near "cdt" (SQLSTATE 42601)
20201018:14:17:35 gprestore:gpadmin:gp6mdw:024511-[DEBUG]:-Error encountered when executing statement: _ssd01;

CREATE INDEX fact_shipping_info_lr_not_combine_mcbsno_rma_no ON dw.fact_shipping_info_lr_not_combine USING btree (mcbsno, r Error was: ERROR: syntax error at or near "_ssd01" (SQLSTATE 42601)
20201018:14:17:35 gprestore:gpadmin:gp6mdw:024511-[DEBUG]:-Error encountered when executing statement: _ssd01;

CREATE INDEX fact_shipping_info_pca_combine_mcbsno_rma_no ON dw.fact_shipping_info_pca_combine USING btree (mcbsno, r Error was: ERROR: syntax error at or near "_ssd01" (SQLSTATE 42601)
20201018:14:17:35 gprestore:gpadmin:gp6mdw:024511-[DEBUG]:-Error encountered when executing statement: _date);

CREATE INDEX i_cdt ON fis.pca_pca_lot USING btree Error was: ERROR: syntax error at or near "_date" (SQLSTATE 42601)
20201018:14:17:35 gprestore:gpadmin:gp6mdw:024511-[DEBUG]:-Error encountered when executing statement: _ssd01;

CREATE INDEX idx_dw_dim_cpu_dn_dn ON dw.dim_cpu_dn USING btre Error was: ERROR: syntax error at or near "_ssd01" (SQLSTATE 42601)
20201018:14:17:35 gprestore:gpadmin:gp6mdw:024511-[CRITICAL]:-ERROR: syntax error at or near "_ssd01" (SQLSTATE 42601)
github.com/greenplum-db/gpbackup/restore.ExecuteStatements
        /tmp/build/3e49593f/go/src/github.com/greenplum-db/gpbackup/restore/parallel.go:79
github.com/greenplum-db/gpbackup/restore.ExecuteRestoreMetadataStatements
        /tmp/build/3e49593f/go/src/github.com/greenplum-db/gpbackup/restore/wrappers.go:280
github.com/greenplum-db/gpbackup/restore.restorePostdata
        /tmp/build/3e49593f/go/src/github.com/greenplum-db/gpbackup/restore/restore.go:348
github.com/greenplum-db/gpbackup/restore.DoRestore
        /tmp/build/3e49593f/go/src/github.com/greenplum-db/gpbackup/restore/restore.go:150
main.main.func1
        /tmp/build/3e49593f/go/src/github.com/greenplum-db/gpbackup/gprestore.go:23
github.com/spf13/cobra.(*Command).execute
        /tmp/build/3e49593f/go/pkg/mod/github.com/spf13/[email protected]/command.go:830
github.com/spf13/cobra.(*Command).ExecuteC
        /tmp/build/3e49593f/go/pkg/mod/github.com/spf13/[email protected]/command.go:914
github.com/spf13/cobra.(*Command).Execute
        /tmp/build/3e49593f/go/pkg/mod/github.com/spf13/[email protected]/command.go:864
main.main
        /tmp/build/3e49593f/go/src/github.com/greenplum-db/gpbackup/gprestore.go:27
runtime.main
        /usr/local/go/src/runtime/proc.go:203
runtime.goexit
        /usr/local/go/src/runtime/asm_amd64.s:1357
20201018:14:17:35 gprestore:gpadmin:gp6mdw:024511-[INFO]:-Found neither /usr/local/greenplum-db-6.11.2/bin/gp_email_contacts.yaml nor /home/gpadmin/gp_email_contacts.yaml
20201018:14:17:35 gprestore:gpadmin:gp6mdw:024511-[INFO]:-Email containing gprestore report /gpbackup/20201016185432/gpseg-1/backups/20201016/20201016185716/gprestore_20201016185716_20201018140634_report will not be sent
20201018:14:17:35 gprestore:gpadmin:gp6mdw:024511-[DEBUG]:-Beginning cleanup
20201018:14:17:35 gprestore:gpadmin:gp6mdw:024511-[DEBUG]:-Cleanup complete
  • sql
CREATE INDEX fact_shipping_info_lr_combine_mcbsno_rma_no ON dw.fact_shipping_info_lr_combine USING btree (mcbsno, rma_no);
ALTER INDEX dw.fact_shipping_info_lr_combine_mcbsno_rma_no SET TABLESPACE tbs_ssd01;

CREATE INDEX fact_shipping_info_lr_not_combine_mcbsno_rma_no ON dw.fact_shipping_info_lr_not_combine USING btree (mcbsno, rma_no);
ALTER INDEX dw.fact_shipping_info_lr_not_combine_mcbsno_rma_no SET TABLESPACE tbs_ssd01;

CREATE INDEX fact_shipping_info_pca_combine_mcbsno_rma_no ON dw.fact_shipping_info_pca_combine USING btree (mcbsno, rma_no);
ALTER INDEX dw.fact_shipping_info_pca_combine_mcbsno_rma_no SET TABLESPACE tbs_ssd01;

CREATE INDEX fact_shipping_info_rma_mcbsno_rma_no ON dw.fact_shipping_info_rma USING btree (mcbsno, rma_no);
ALTER INDEX dw.fact_shipping_info_rma_mcbsno_rma_no SET TABLESPACE tbs_ssd01;

CREATE INDEX fact_shipping_info_wanglin2_mcbsno_pack_date ON dw.fact_shipping_info_wanglin2 USING btree (mcbsno, pack_date);

CREATE INDEX i_cdt ON fis.pca_pca_lot USING btree (cdt);

CREATE INDEX idx_dw_customer_datasource_customer ON dw.customer_datasource USING btree (customer);
ALTER INDEX dw.idx_dw_customer_datasource_customer SET TABLESPACE tbs_ssd01;

CREATE INDEX idx_dw_dim_cpu_dn_dn ON dw.dim_cpu_dn USING btree (dn);
ALTER INDEX dw.idx_dw_dim_cpu_dn_dn SET TABLESPACE tbs_ssd01;
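If the immediate goal is just to finish the restore, a manual workaround sketch (database name taken from the log above; the file path is a placeholder) is to copy the CREATE INDEX / ALTER INDEX ... SET TABLESPACE pairs listed above into a file and apply them directly with psql:

psql -d F6_BDC -f /tmp/failed_postdata.sql   # file contains the statements listed above

The underlying issue still looks like gprestore splitting post-data statements at the wrong points and should be fixed in the tool itself.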

Incremental backups are same size (or bigger) as full backup

Hello! I'm using Greenplum 6.14 and the latest Backup and Restore (1.20.3).
Here is my problem, step by step:

  1. I created a test database with 1,000 tables filled with random row data
  2. gpbackup --dbname test_db_100g --jobs 8 --leaf-partition-data
  3. Added another 300 tables to the database
  4. gpbackup --dbname test_db_100g --jobs 8 --leaf-partition-data --incremental
    Then I checked the size of both backups - the incremental backup is bigger than the full (first) backup.
    I decided to try another approach and change existing tables instead of adding new ones. So:
  5. The same two steps as in the first example
  6. Changed the content of 300 random rows
  7. gpbackup --dbname test_db_100g --jobs 8 --leaf-partition-data --incremental
    And I get the same result, nothing changed. My full backup is 36 MB and the incremental backup is 36 MB as well.
    Am I doing something wrong?
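One thing worth checking (a hedged diagnostic: incremental backups can only skip unchanged append-optimized tables, while heap tables are always written in full, so a test database made of heap tables will produce incrementals roughly the size of a full backup) is the storage type of the test tables:

psql -d test_db_100g -c "SELECT n.nspname, c.relname, c.relstorage
    FROM pg_class c JOIN pg_namespace n ON n.oid = c.relnamespace
    WHERE c.relkind = 'r' AND n.nspname NOT IN ('pg_catalog','information_schema','gp_toolkit')
    ORDER BY 1, 2;"

relstorage 'h' means heap; 'a' and 'c' mean append-optimized row/column storage, which is what --incremental is able to skip.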

P.S. Sorry for bad english :)

gpbackup on Solaris

When I build gpbackup on Solaris, I get an error like the following:

[root@a686a8c2-73cf-4796-bb62-e7caca687b9f ~/gocode/src/github.com/greenplum-db/gpbackup]# make build
GO111MODULE=on  go build -mod=readonly -tags 'gpbackup' -o /root/gocode/bin/gpbackup -ldflags "-X github.com/greenplum-db/gpbackup/backup.version=1.17.0+dev.72.g4ecd0d9"
GO111MODULE=on  go build -mod=readonly -tags 'gprestore' -o /root/gocode/bin/gprestore -ldflags "-X github.com/greenplum-db/gpbackup/restore.version=1.17.0+dev.72.g4ecd0d9"
GO111MODULE=on  go build -mod=readonly -tags 'gpbackup_helper' -o /root/gocode/bin/gpbackup_helper -ldflags "-X github.com/greenplum-db/gpbackup/helper.version=1.17.0+dev.72.g4ecd0d9"
# github.com/greenplum-db/gpbackup/helper
helper/helper.go:127:9: undefined: syscall.Mkfifo
make: *** [Makefile:77: build] Error 2

After some searching, I found that some system calls are not present in Go's syscall package on Solaris [1]. Here is my fix; however, it may not be suitable for Linux.

diff --git a/go.mod b/go.mod
index 12c6f08..f28b22d 100644
--- a/go.mod
+++ b/go.mod
@@ -20,6 +20,7 @@ require (
        github.com/sergi/go-diff v1.1.0
        github.com/spf13/cobra v0.0.5
        github.com/spf13/pflag v1.0.5
+       golang.org/x/sys v0.0.0-20200223170610-d5e6a3e2c0ae
        golang.org/x/tools v0.0.0-20200214225126-5916a50871fb
        gopkg.in/cheggaaa/pb.v1 v1.0.28
        gopkg.in/yaml.v2 v2.2.8
diff --git a/helper/helper.go b/helper/helper.go
index 93517c2..7604d22 100644
--- a/helper/helper.go
+++ b/helper/helper.go
@@ -14,6 +14,7 @@ import (
        "sync"
        "syscall"

+       "golang.org/x/sys/unix"
        "github.com/greenplum-db/gp-common-go-libs/gplog"
        "github.com/greenplum-db/gp-common-go-libs/operating"
        "github.com/greenplum-db/gpbackup/utils"
@@ -124,7 +125,7 @@ func InitializeGlobals() {
  */

 func createPipe(pipe string) error {
-       err := syscall.Mkfifo(pipe, 0777)
+       err := unix.Mkfifo(pipe, 0777)
        return err
 }

Is there any way to support Solaris?

[1] https://golang.org/cl/14643

gprestore failed gpadmin:gpmaster:015116-[CRITICAL]:-Version string empty

PostgreSQL 9.4.24 (Greenplum Database 6.10.1 build commit:efba04ce26ebb29b535a255a5e95d1f5ebfde94e Open Source) on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 6.4.0, 64-bit compiled on Sep 15 2020 03:51:36
I can use the gpbackup tool to execute the backup operation, but gprestore fails.
[gpadmin@gpmaster go]$ gprestore --backup-dir /home/gpadmin/backup --timestamp 20210201103637
20210201:10:46:00 gprestore:gpadmin:gpmaster:015116-[INFO]:-Restore Key = 20210201103637
20210201:10:46:00 gprestore:gpadmin:gpmaster:015116-[CRITICAL]:-Version string empty
20210201:10:46:00 gprestore:gpadmin:gpmaster:015116-[INFO]:-Found neither /usr/local/greenplum-db-6.10.1/bin/gp_email_contacts.yaml nor /home/gpadmin/gp_email_contacts.yaml
20210201:10:46:00 gprestore:gpadmin:gpmaster:015116-[INFO]:-Email containing gprestore report /home/gpadmin/backup/gpseg-1/backups/20210201/20210201103637/gprestore_20210201103637_20210201104600_report will not be sent

20210201:10:46:00 gprestore:gpadmin:gpmaster:015116-[CRITICAL]:-Version string empty --- I don't know what this means, or why the restore does not run...
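A first diagnostic (a sketch based on the backup-set layout shown in the log above; the config and report files written by gpbackup normally record the gpbackup and database versions used for the backup, and an empty value there would explain this message) is to inspect the backup metadata files:

ls /home/gpadmin/backup/gpseg-1/backups/20210201/20210201103637/
grep -i version /home/gpadmin/backup/gpseg-1/backups/20210201/20210201103637/gpbackup_20210201103637_config.yaml
grep -i version /home/gpadmin/backup/gpseg-1/backups/20210201/20210201103637/gpbackup_20210201103637_report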

gpbackup blocks some unrelated SQL in GPDB 6.16.0

This week I upgraded my Greenplum cluster from 5.28 to 6.16.0. When I ran my regular gpbackup job today, I found that it blocked some SQL statements, even though those statements operate on different tables.

    {
        "datname": "F3_BDC",
        "blocking_clientip": "10.13.0.65",
        "blocking_pid": 40043,
        "current_statement": "COPY report_query.log_query_metadata TO PROGRAM 'gzip -c -1 | /usr/local/greenplum-db-6.16.0/bin/gpbackup_s3_plugin backup_data /tmp/20210509203413_s3_backup.yml /backups/20210509/20210509203413/gpbackup__20210509203413_896693.gz' WITH CSV DELIMITER ',' ON SEGMENT IGNORE EXTERNAL PARTITIONS;",
        "blocked_pid": 47956,
        "blocking_application": "gpbackup_20210509203413",
        "blocked_statement": "TRUNCATE TABLE fis.pca_smt_log_sync",
        "blocked_application": "Kettle",
        "blocked_clientip": "192.168.2.15",
        "blocked_user": "bdcenter",
        "blocking_user": "gpadmin"
    }
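This behaviour is most likely caused by table locks rather than by the COPY itself: gpbackup acquires ACCESS SHARE locks on every table in the backup set up front (see the "Acquiring ACCESS SHARE locks on tables" step in its log), and TRUNCATE needs an ACCESS EXCLUSIVE lock, which conflicts with ACCESS SHARE even when the table is not currently being copied. A hedged way to confirm this against the pids above:

psql -d F3_BDC -c "SELECT pid, relation::regclass AS table, mode, granted
    FROM pg_locks WHERE pid IN (40043, 47956) ORDER BY pid;"

If the blocked session is waiting for AccessExclusiveLock on fis.pca_smt_log_sync while the gpbackup session holds AccessShareLock on it, the blocking is expected for the duration of the backup.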

gpbackup version 1.18.1 caused 'password authentication failed' error

Dear,

gpbackup version 1.18.1 causes a 'password authentication failed' error when I set the localhost access method to md5. I am sure the PGPASSWORD is correct. Below is the error log:

[gpadmin@mdw bin]$ gpbackup --dbname gpdw --backup-dir /data/mybackup/gpdw_db --leaf-partition-data --verbose
20211227:18:08:01 gpbackup:gpadmin:mdw:104524-[DEBUG]:-Backup Command: [gpbackup --dbname gpdw --backup-dir /data/mybackup/gpdw_db --leaf-partition-data --verbose]
20211227:18:08:01 gpbackup:gpadmin:mdw:104524-[CRITICAL]:-FATAL: password authentication failed for user "gpadmin" (SQLSTATE 28P01) (mdw:5432)
github.com/greenplum-db/gp-common-go-libs/gplog.FatalOnError
        /tmp/build/3e49593f/go/pkg/mod/github.com/greenplum-db/[email protected]/gplog/gplog.go:310
github.com/greenplum-db/gp-common-go-libs/dbconn.(*DBConn).MustConnect
        /tmp/build/3e49593f/go/pkg/mod/github.com/greenplum-db/[email protected]/dbconn/dbconn.go:187
github.com/greenplum-db/gpbackup/utils.CheckGpexpandRunning
        /tmp/build/3e49593f/go/src/github.com/greenplum-db/gpbackup/utils/gpexpand_sensor.go:34
github.com/greenplum-db/gpbackup/backup.DoSetup
        /tmp/build/3e49593f/go/src/github.com/greenplum-db/gpbackup/backup/backup.go:45
main.main.func1
        /tmp/build/3e49593f/go/src/github.com/greenplum-db/gpbackup/gpbackup.go:22
github.com/spf13/cobra.(*Command).execute
        /tmp/build/3e49593f/go/pkg/mod/github.com/spf13/[email protected]/command.go:830
github.com/spf13/cobra.(*Command).ExecuteC
        /tmp/build/3e49593f/go/pkg/mod/github.com/spf13/[email protected]/command.go:914
github.com/spf13/cobra.(*Command).Execute
        /tmp/build/3e49593f/go/pkg/mod/github.com/spf13/[email protected]/command.go:864
main.main
        /tmp/build/3e49593f/go/src/github.com/greenplum-db/gpbackup/gpbackup.go:27
runtime.main
        /usr/local/go/src/runtime/proc.go:203
runtime.goexit
        /usr/local/go/src/runtime/asm_amd64.s:1357
20211227:18:08:01 gpbackup:gpadmin:mdw:104524-[DEBUG]:-Beginning cleanup
20211227:18:08:01 gpbackup:gpadmin:mdw:104524-[DEBUG]:-Cleanup complete
[gpadmin@mdw bin]$ gpbackup --version
gpbackup version 1.18.1

This bug doesn't occur on the master branch. I tried to debug and fix it but haven't made any progress. Please give me a hand.
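One hedged workaround while this is investigated (an assumption based on the stack trace, which fails inside utils.CheckGpexpandRunning, i.e. on an extra connection the utility opens before the main one, so a password source that covers every database may help where a narrowly scoped entry does not) is a wildcard ~/.pgpass entry:

# on mdw as gpadmin; host, port, and password are placeholders for your environment
echo 'mdw:5432:*:gpadmin:<password>' >> ~/.pgpass
chmod 600 ~/.pgpass

This is only a sketch of a workaround, not a fix for the regression itself.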

Thanks very much,
Chris

`gpbackup` sendmail step always fails when running in `crontab`

Hi, Team,

When executing a gpbackup / gprestore command directly with gp_email_contacts.yaml set up, we receive an email after the command finishes.
https://gpdb.docs.pivotal.io/backup-restore/1-20/admin_guide/managing/backup-gpbackup.html#topic_qwd_d5d_tbb

However, when putting the same command in crontab (running under the gpadmin account), sending the mail always fails with the following error.

20210716:02:08:29 gpbackup:gpadmin:gpdb:157501-[INFO]:-/home/gpadmin/gp_email_contacts.yaml list found, /data/master/gpseg-1/backups/20210716/20210716020827/gpbackup_20210716020827_report will be sent
20210716:02:08:29 gpbackup:gpadmin:gpdb:157501-[WARNING]:-Unable to send email report: bash: line 68: sendmail: command not found

Can you fix this issue in the gpbackup source code?
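The error itself points at a PATH problem rather than a gpbackup bug: cron jobs run with a minimal PATH that usually omits /usr/sbin, where sendmail typically lives. A sketch of a crontab entry that works around it (the schedule, database name, log path, and greenplum_path.sh location are placeholders for your environment):

# add to 'crontab -e' for gpadmin: extend PATH so sendmail is found, then run the backup
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
0 2 * * * . /usr/local/greenplum-db/greenplum_path.sh && gpbackup --dbname <your_db_name> >> /home/gpadmin/gpbackup_cron.log 2>&1

Alternatively, adding the directory that actually contains sendmail on your host to PATH has the same effect.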

Regards.

SignatureDoesNotMatch: The request signature we calculated does not match the signature you provided.

  • Command:
gpbackup --dbname F3_BDC --plugin-config /opt/greenplum/config/s3_backup.yml --leaf-partition-data --exclude-schema bsi_old --jobs 4 --quiet
  • Exception:
20201030:17:15:57 gpbackup:gpadmin:mdw:013535-[CRITICAL]:-ERROR: Error from segment 9: ERROR:  command error message: 2020/10/30 17:15:42 SignatureDoesNotMatch: The request signature we calculated does not match the signature you provided. Check your key and signing method. (SQLSTATE 22P04)
github.com/greenplum-db/gpbackup/backup.BackupDataForAllTables
	/tmp/build/5f8239f8/go/src/github.com/greenplum-db/gpbackup/backup/data.go:167
github.com/greenplum-db/gpbackup/backup.backupData
	/tmp/build/5f8239f8/go/src/github.com/greenplum-db/gpbackup/backup/backup.go:321
github.com/greenplum-db/gpbackup/backup.DoBackup
	/tmp/build/5f8239f8/go/src/github.com/greenplum-db/gpbackup/backup/backup.go:181
main.main.func1
	/tmp/build/5f8239f8/go/src/github.com/greenplum-db/gpbackup/gpbackup.go:23
github.com/greenplum-db/gpbackup/vendor/github.com/spf13/cobra.(*Command).execute
	/tmp/build/5f8239f8/go/src/github.com/greenplum-db/gpbackup/vendor/github.com/spf13/cobra/command.go:766
github.com/greenplum-db/gpbackup/vendor/github.com/spf13/cobra.(*Command).ExecuteC
	/tmp/build/5f8239f8/go/src/github.com/greenplum-db/gpbackup/vendor/github.com/spf13/cobra/command.go:852
github.com/greenplum-db/gpbackup/vendor/github.com/spf13/cobra.(*Command).Execute
	/tmp/build/5f8239f8/go/src/github.com/greenplum-db/gpbackup/vendor/github.com/spf13/cobra/command.go:800
main.main
	/tmp/build/5f8239f8/go/src/github.com/greenplum-db/gpbackup/gpbackup.go:27
runtime.main
	/usr/local/go/src/runtime/proc.go:198
runtime.goexit
	/usr/local/go/src/runtime/asm_amd64.s:2361
  • Snapshot: a screenshot of the error was attached to the original issue.
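Two things worth ruling out first (a hedged checklist: SignatureDoesNotMatch from S3 is most commonly caused by wrong or stale credentials, a secret key whose special characters were mangled in the YAML, or clock skew on the host that signs the request, here the host for segment 9):

# compare clocks across all hosts; the hostfile path is a placeholder
gpssh -f /home/gpadmin/hostfile_all -e 'date -u'
# confirm the credentials in the plugin config on the master are the current ones
grep -E 'aws_access_key_id|aws_secret_access_key' /opt/greenplum/config/s3_backup.yml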

Header checksum does not match. Expected 0x0 and found 0xD49F4AA2 (SQLSTATE 22P04)

Greenplum version or build

  • GP: 5.28.1

OS version and uname -a

  • Docker Container: Centos7
  • Docker Host: Linux mdw 4.15.0-117-generic # 118-Ubuntu SMP Fri Sep 4 20:02:41 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux

Actual behavior

When I performed a backup recently, I found the following error:

20201115:23:33:36 gpbackup:gpadmin:mdw:061595-[DEBUG]:-Writing data for table mes.pcbcomponenttrace_1_prt_60 to file
20201115:23:33:43 gpbackup:gpadmin:mdw:061595-[CRITICAL]:-ERROR: Error from segment 15: ERROR:  Header checksum does not match.  Expected 0x0 and found 0xD49F4AA2 (SQLSTATE 22P04)
github.com/greenplum-db/gpbackup/backup.BackupDataForAllTables
        /tmp/build/5f8239f8/go/src/github.com/greenplum-db/gpbackup/backup/data.go:167
github.com/greenplum-db/gpbackup/backup.backupData
        /tmp/build/5f8239f8/go/src/github.com/greenplum-db/gpbackup/backup/backup.go:321
github.com/greenplum-db/gpbackup/backup.DoBackup
        /tmp/build/5f8239f8/go/src/github.com/greenplum-db/gpbackup/backup/backup.go:181
main.main.func1
        /tmp/build/5f8239f8/go/src/github.com/greenplum-db/gpbackup/gpbackup.go:23
github.com/greenplum-db/gpbackup/vendor/github.com/spf13/cobra.(*Command).execute
        /tmp/build/5f8239f8/go/src/github.com/greenplum-db/gpbackup/vendor/github.com/spf13/cobra/command.go:766
github.com/greenplum-db/gpbackup/vendor/github.com/spf13/cobra.(*Command).ExecuteC
        /tmp/build/5f8239f8/go/src/github.com/greenplum-db/gpbackup/vendor/github.com/spf13/cobra/command.go:852
github.com/greenplum-db/gpbackup/vendor/github.com/spf13/cobra.(*Command).Execute
        /tmp/build/5f8239f8/go/src/github.com/greenplum-db/gpbackup/vendor/github.com/spf13/cobra/command.go:800
main.main
        /tmp/build/5f8239f8/go/src/github.com/greenplum-db/gpbackup/gpbackup.go:27
runtime.main
        /usr/local/go/src/runtime/proc.go:198
runtime.goexit
        /usr/local/go/src/runtime/asm_amd64.s:2361

I found a similar problem on the official forum and then performed the following operations according to its instructions. But after completing them, the md5 value of the file on the primary segment is still different from the one on the mirror.

# select count(1) from mes.pcbcomponenttrace_1_prt_60

ERROR:  Header checksum does not match.  Expected 0x0 and found 0xD49F4AA2  (seg15 slice1 10.12.0.41:40003 pid=23862)
DETAIL:  
Append-Only storage header kind 0 unknown
Scan of Append-Only Row-Oriented relation 'pcbcomponenttrace_1_prt_60'. Append-Only segment file 'base/189710/34812804.1', block header offset in file = 9439816, bufferCount 2960
SQL state: XX001
[gpadmin@mdw greenplum]$ ssh sdw4 md5sum /disk4/gpdata/gpsegment/primary/gpseg15/base/189710/34812804.1
# d680dfa823728175df51e846b450f810
[gpadmin@mdw greenplum]$ ssh sdw3 md5sum /disk4/gpdata/gpsegment/mirror/gpseg15/base/189710/34812804.1
# 8294b4c4a354f9caaaa9edc9e0996eb8
# sdw4
[gpadmin@sdw4 greenplum]$ pg_ctl -D /disk4/gpdata/gpsegment/primary/gpseg15 stop -m fast
# mdw
[gpadmin@mdw greenplum]$ gprecoverseg -a
[gpadmin@mdw greenplum]$ gprecoverseg -r -a
[gpadmin@mdw greenplum]$ ssh sdw4 md5sum /disk4/gpdata/gpsegment/primary/gpseg15/base/189710/34812804.1
d680dfa823728175df51e846b450f810  /disk4/gpdata/gpsegment/primary/gpseg15/base/189710/34812804.1
[gpadmin@mdw greenplum]$ ssh sdw3 md5sum /disk4/gpdata/gpsegment/mirror/gpseg15/base/189710/34812804.1
8294b4c4a354f9caaaa9edc9e0996eb8  /disk4/gpdata/gpsegment/mirror/gpseg15/base/189710/34812804.1

Exception when restoring timestamp field across major versions

When restoring backup data from Greenplum 5 to Greenplum 6, one table failed to import; the error message is as follows:

20210502:02:06:50 gprestore:gpadmin:gp6mdw:008984-[ERROR]:-Error loading data into table dw.fact_pca_yield_unit: COPY fact_pca_yield_unit, line 37486852, column test_date: "2017-12-12-3B0": ERROR: invalid input syntax for type timestamp: "2017-12-12-3B0"  (seg8 10.13.0.54:40000 pid=15339) (SQLSTATE 22007)

I suspect that the timestamp value has an abnormal format, so I tried to retrieve the data for that time period. But because the data volume is very large, I could not locate the exact record in the end.

Regarding the above error, I have two questions:

  • How can I quickly locate the specific record from the reported line number 37486852?
  • Why does Greenplum 5 accept this abnormal timestamp format? Is there a potential bug?
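For the first question, a sketch (the file name and path are placeholders following the gpbackup naming scheme gpbackup_<contentID>_<timestamp>_<oid>.gz visible elsewhere in these logs; the error came from seg8, and the COPY line number should correspond to a line in that segment's data file for the table):

# on the seg8 primary host, inside <SEG_DATA_DIR>/backups/<YYYYMMDD>/<timestamp>/
zcat gpbackup_8_<timestamp>_<table_oid>.gz | sed -n '37486852{p;q}'
# or search for the bad value directly
zcat gpbackup_8_<timestamp>_<table_oid>.gz | grep -n '2017-12-12-3B0'

The table OID for fact_pca_yield_unit can be looked up in the backup's TOC/metadata files on the master.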

Improve the retry mechanism for upload failures

Recently, the exception MultipartUpload: upload multipart failed has been happening quite often when doing a full database backup. My only option is to re-run the backup of the entire database, which is a huge waste of resources. gpbackup version: 1.20

20210110:22:03:50 gpbackup:gpadmin:mdw:000885-[DEBUG]:-COPY mes.trace_components_new_1_prt_trace_components_9 TO PROGRAM 'gzip -c -1 | /usr/local/greenplum-db-5.28.3/bin/gpbackup_s3_plugin backup_data /tmp/20210110200000_s3_backup.yml <SEG_DATA_DIR>/backups/20210110/20210110200000/gpbackup_<SEGID>_20210110200000_78715128.gz' WITH CSV DELIMITER ',' ON SEGMENT IGNORE EXTERNAL PARTITIONS;
20210110:22:04:52 gpbackup:gpadmin:mdw:000885-[CRITICAL]:-ERROR: Error from segment 16: ERROR:  command error message: 20210110:22:04:33 gpbackup_s3_plugin:gpadmin:sdw5:024156-[ERROR]:-MultipartUpload: upload multipart failed (SQLSTATE 22P04)
github.com/greenplum-db/gpbackup/backup.backupDataForAllTables
        /tmp/build/3e49593f/go/src/github.com/greenplum-db/gpbackup/backup/data.go:167
github.com/greenplum-db/gpbackup/backup.backupData
        /tmp/build/3e49593f/go/src/github.com/greenplum-db/gpbackup/backup/backup.go:276
github.com/greenplum-db/gpbackup/backup.DoBackup
        /tmp/build/3e49593f/go/src/github.com/greenplum-db/gpbackup/backup/backup.go:160
main.main.func1
        /tmp/build/3e49593f/go/src/github.com/greenplum-db/gpbackup/gpbackup.go:23
github.com/spf13/cobra.(*Command).execute
        /tmp/build/3e49593f/go/pkg/mod/github.com/spf13/[email protected]/command.go:830
github.com/spf13/cobra.(*Command).ExecuteC
        /tmp/build/3e49593f/go/pkg/mod/github.com/spf13/[email protected]/command.go:914
github.com/spf13/cobra.(*Command).Execute
        /tmp/build/3e49593f/go/pkg/mod/github.com/spf13/[email protected]/command.go:864
main.main
        /tmp/build/3e49593f/go/src/github.com/greenplum-db/gpbackup/gpbackup.go:27
runtime.main
        /usr/local/go/src/runtime/proc.go:203
runtime.goexit
        /usr/local/go/src/runtime/asm_amd64.s:1357
