
Greenfield Storage Provider is a storage infrastructure for Greenfield decentralized storage platform.

License: GNU Lesser General Public License v3.0

Makefile 0.06% Go 99.35% Shell 0.57% Dockerfile 0.03%


Greenfield Storage Provider

Greenfield Storage Provider (abbreviated SP) is a storage service infrastructure provider. It uses Greenfield as the ledger and the single source of truth. Each SP can and will respond to users' requests to write (upload) and read (download) data, and serves as the gatekeeper for user rights and authentication.

Disclaimer

The software and related documentation are under active development; everything is subject to change without notice, and nothing is ready for production use. The code and security audit have not been fully completed, and the project is not ready for any bug bounty. We advise you to be careful and experiment on the network at your own risk. Stay safe out there.

SP Core

SPs store the objects' real data, i.e. the payload data. Each SP runs its own object storage system. Similar to Amazon S3 and other object storage systems, the objects stored on SPs are immutable. Users may delete and re-create an object (under a different ID, or under the same ID subject to certain publicly declared settings), but they cannot modify it.

SPs have to register themselves first by depositing on the Greenfield blockchain as their "Service Stake". Greenfield validators then go through a dedicated governance procedure to vote on SP elections. SPs are encouraged to advertise their information and prove their capability to the community, as SPs have to provide a professional storage system with a high-quality SLA.

SPs provide publicly accessible APIs for users to upload, download, and manage data. These APIs are very similar to the Amazon S3 APIs, so existing developers should feel familiar enough to write code against them. SPs also provide REST APIs to one another and form a white-listed P2P network to ensure data availability and redundancy. There will also be a P2P-based upload/download network across SPs and user-end client software to facilitate easy connections and fast data download, similar to BitTorrent.

Among the multiple SPs that one object is stored on, one SP is the "Primary SP", while the others are "Secondary SPs".

When users want to write an object into Greenfield, they or the client software they use must specify the primary SP. The primary SP serves as the sole SP for downloading the data. Users can change the primary SP for their objects later if they are not satisfied with its service.

Quick Start

Note: Requires Go 1.20+

Compile SP

Compilation dependencies:

  • Golang: SP is written in Go; install Go 1.20 or later.
  • Buf: A new way of working with Protocol Buffers. SP uses Buf to manage proto files.
  • protoc-gen-gocosmos: Protocol Buffers for Go with Gadgets. SP uses this protobuf compiler plugin to generate pb.go files.
  • mockgen: A mocking framework for the Go programming language, used in unit tests.
  • jq: A command-line JSON processor. Install jq using your operating system's package manager.
# clone source code
git clone https://github.com/bnb-chain/greenfield-storage-provider.git

cd greenfield-storage-provider

# install dependent tools: buf, protoc-gen-gocosmos and mockgen
make install-tools

# compile sp
make build

# move to build directory
cd build

# execute gnfd-sp binary file
./gnfd-sp version

# show the gnfd-sp version information
Greenfield Storage Provider
    __                                                       _     __
    _____/ /_____  _________ _____ ____     ____  _________ _   __(_)___/ /__  _____
    / ___/ __/ __ \/ ___/ __  / __  / _ \   / __ \/ ___/ __ \ | / / / __  / _ \/ ___/
    (__  ) /_/ /_/ / /  / /_/ / /_/ /  __/  / /_/ / /  / /_/ / |/ / / /_/ /  __/ /
    /____/\__/\____/_/   \__,_/\__, /\___/  / .___/_/   \____/|___/_/\__,_/\___/_/
    /____/       /_/

Version : vx.x.x
Branch  : master
Commit  : 342930b89466c15653af2f3695cfc72f6466d4b8
Build   : go1.20.3 darwin arm64 2023-06-20 10:31

# show the gnfd-sp help info
./gnfd-sp -h

Note

If you've already run the make install-tools command but make build fails with one of the following error messages:

# error message 1
buf: command not found
# run the following command, assuming Go is installed in /usr/local/go/bin; other systems are similar
GO111MODULE=on GOBIN=/usr/local/go/bin go install github.com/bufbuild/buf/cmd/buf@latest

# error message 2
Failure: plugin gocosmos: could not find protoc plugin for name gocosmos - please make sure protoc-gen-gocosmos is installed and present on your $PATH
# run the following command, assuming Go is installed in /usr/local/go/bin; other systems are similar
GO111MODULE=on GOBIN=/usr/local/go/bin go install github.com/cosmos/gogoproto/protoc-gen-gocosmos@latest

# to run SP's unit tests, also install mockgen, again assuming Go is installed in /usr/local/go/bin
GO111MODULE=on GOBIN=/usr/local/go/bin go install go.uber.org/mock/mockgen@latest

The above errors occur when the Go environment is not set up correctly. For more information, look up GOROOT, GOPATH and GOBIN.
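To diagnose this, you can check whether the directory that go install writes binaries to is actually on your PATH. A minimal sketch, assuming the same /usr/local/go/bin location used in the commands above:

```shell
# Determine where `go install` puts binaries: $GOBIN if set;
# /usr/local/go/bin is the assumed fallback used throughout this guide.
GOBIN=${GOBIN:-/usr/local/go/bin}

# Warn if that directory is missing from PATH.
case ":$PATH:" in
  *":$GOBIN:"*) echo "$GOBIN is on PATH" ;;
  *)            echo "not on PATH; run: export PATH=\$PATH:$GOBIN" ;;
esac
```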

SP Dependencies

If you want to start SP in local mode or testnet mode, you must prepare the SPDB, BSDB and PieceStore dependencies.

SPDB and BSDB

SP uses SPDB and BSDB to store metadata such as object info and object integrity hashes. Both are currently backed by an RDBMS.

You can currently use MySQL or MariaDB to store metadata. The following RDBMSs are supported:

  1. MySQL
  2. MariaDB

More database types, such as PostgreSQL or NewSQL, will be supported in the future.
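Before starting SP, the schemas referenced by the config must exist. A minimal sketch, using the hypothetical database names spdb and bsdb (match them to the Database fields in the [SpDB] and [BsDB] config sections):

```sql
-- hypothetical schema names; use whatever you put in [SpDB].Database and [BsDB].Database
CREATE DATABASE IF NOT EXISTS spdb;
CREATE DATABASE IF NOT EXISTS bsdb;
```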

PieceStore

Greenfield is a decentralized data storage system that uses object storage as its main data storage layer. SP encapsulates data storage as PieceStore, which provides common interfaces to be compatible with multiple data storage systems. Therefore, if you want to run an SP or test its functionality, you must set up a data storage system.

The following lists the supported data storage systems:

  1. AWS S3: An object storage service that can be used in production environments.
  2. MinIO: An AWS S3-compatible object storage system that can be used in production environments.
  3. POSIX Filesystem: The local filesystem is used for experiencing the basic features of SP and understanding how SP works. Piece data created by SP cannot be accessed over the network and can only be used on a single machine.
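For a quick single-machine trial, the PieceStore section of the config might be sketched as below. The Storage value 'file', the path, and the IAMType value are assumptions for illustration; check the PieceStore documentation for the exact values your version accepts.

```toml
[PieceStore]
Shards = 0

[PieceStore.Store]
# 'file' is assumed to select the local POSIX filesystem backend
Storage = 'file'
# local directory that will hold piece data (hypothetical path)
BucketURL = './piecestore-data'
# assumed value; see the PieceStore doc
IAMType = 'AWS'
```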

Install Dependencies

Install MySQL in CentOS

  1. Install MySQL yum package
# 1. Download MySQL yum package
wget http://repo.mysql.com/mysql57-community-release-el7-10.noarch.rpm

# 2. Install MySQL source
rpm -Uvh mysql57-community-release-el7-10.noarch.rpm

# 3. Install public key
rpm --import https://repo.mysql.com/RPM-GPG-KEY-mysql-2022

# 4. Install MySQL server
yum install -y mysql-community-server

# 5. Start MySQL
systemctl start mysqld.service

# 6. Check whether the startup is successful
systemctl status mysqld.service

# 7. Get temporary password
grep 'temporary password' /var/log/mysqld.log 

# 8. Login MySQL through temporary password
# After you log in with the temporary password, do not perform any other operations. Otherwise, an error will occur. In this case, you need to change the password
mysql -uroot -p

# 9. change MySQL password rules
mysql> set global validate_password_policy=0;
mysql> set global validate_password_length=1;
mysql> ALTER USER 'root'@'localhost' IDENTIFIED BY 'yourpassword';

Configuration

Make configuration template

# dump default configuration
./gnfd-sp config.dump
# optional
Env = ''
# optional
AppID = ''
# optional
Server = []
# optional
GRPCAddress = ''

[SpDB]
# required
User = ''
# required
Passwd = ''
# required
Address = ''
# required
Database = ''
# optional
ConnMaxLifetime = 0
# optional
ConnMaxIdleTime = 0
# optional
MaxIdleConns = 0
# optional
MaxOpenConns = 0

[BsDB]
# required
User = ''
# required
Passwd = ''
# required
Address = ''
# required
Database = ''
# optional
ConnMaxLifetime = 0
# optional
ConnMaxIdleTime = 0
# optional
MaxIdleConns = 0
# optional
MaxOpenConns = 0

[PieceStore]
# required
Shards = 0

[PieceStore.Store]
# required
Storage = ''
# optional
BucketURL = ''
# optional
MaxRetries = 0
# optional
MinRetryDelay = 0
# optional
TLSInsecureSkipVerify = false
# required
IAMType = ''

[Chain]
# required
ChainID = ''
# required
ChainAddress = []
# optional
SealGasLimit = 0
# optional
SealFeeAmount = 0
# optional
RejectSealGasLimit = 0
# optional
RejectSealFeeAmount = 0
# optional
DiscontinueBucketGasLimit = 0
# optional
DiscontinueBucketFeeAmount = 0
# optional
CreateGlobalVirtualGroupGasLimit = 0
# optional
CreateGlobalVirtualGroupFeeAmount = 0
# optional
CompleteMigrateBucketGasLimit = 0
# optional
CompleteMigrateBucketFeeAmount = 0

[SpAccount]
# required
SpOperatorAddress = ''
# required
OperatorPrivateKey = ''
# required
SealPrivateKey = ''
# required
ApprovalPrivateKey = ''
# required
GcPrivateKey = ''
# required
BlsPrivateKey = ''

[Endpoint]
# required
ApproverEndpoint = ''
# required
ManagerEndpoint = ''
# required
DownloaderEndpoint = ''
# required
ReceiverEndpoint = ''
# required
MetadataEndpoint = ''
# required
UploaderEndpoint = ''
# required
P2PEndpoint = ''
# required
SignerEndpoint = ''
# required
AuthenticatorEndpoint = ''

[Approval]
# optional
BucketApprovalTimeoutHeight = 0
# optional
ObjectApprovalTimeoutHeight = 0
# optional
ReplicatePieceTimeoutHeight = 0

[Bucket]
# optional
AccountBucketNumber = 0
# optional
FreeQuotaPerBucket = 0
# optional
MaxListReadQuotaNumber = 0
# optional
MaxPayloadSize = 0

[Gateway]
# required
DomainName = ''
# required
HTTPAddress = ''

[Executor]
# optional
MaxExecuteNumber = 0
# optional
AskTaskInterval = 0
# optional
AskReplicateApprovalTimeout = 0
# optional
AskReplicateApprovalExFactor = 0.0
# optional
ListenSealTimeoutHeight = 0
# optional
ListenSealRetryTimeout = 0
# optional
MaxListenSealRetry = 0

[P2P]
# optional
P2PPrivateKey = ''
# optional
P2PAddress = ''
# optional
P2PAntAddress = ''
# optional
P2PBootstrap = []
# optional
P2PPingPeriod = 0

[Parallel]
# optional
GlobalCreateBucketApprovalParallel = 0
# optional
GlobalCreateObjectApprovalParallel = 0
# optional
GlobalMaxUploadingParallel = 0
# optional
GlobalUploadObjectParallel = 0
# optional
GlobalReplicatePieceParallel = 0
# optional
GlobalSealObjectParallel = 0
# optional
GlobalReceiveObjectParallel = 0
# optional
GlobalGCObjectParallel = 0
# optional
GlobalGCZombieParallel = 0
# optional
GlobalGCMetaParallel = 0
# optional
GlobalRecoveryPieceParallel = 0
# optional
GlobalMigrateGVGParallel = 0
# optional
GlobalBackupTaskParallel = 0
# optional
GlobalDownloadObjectTaskCacheSize = 0
# optional
GlobalChallengePieceTaskCacheSize = 0
# optional
GlobalBatchGcObjectTimeInterval = 0
# optional
GlobalGcObjectBlockInterval = 0
# optional
GlobalGcObjectSafeBlockDistance = 0
# optional
GlobalSyncConsensusInfoInterval = 0
# optional
UploadObjectParallelPerNode = 0
# optional
ReceivePieceParallelPerNode = 0
# optional
DownloadObjectParallelPerNode = 0
# optional
ChallengePieceParallelPerNode = 0
# optional
AskReplicateApprovalParallelPerNode = 0
# optional
QuerySPParallelPerNode = 0
# required
DiscontinueBucketEnabled = false
# optional
DiscontinueBucketTimeInterval = 0
# required
DiscontinueBucketKeepAliveDays = 0
# optional
LoadReplicateTimeout = 0
# optional
LoadSealTimeout = 0

[Task]
# optional
UploadTaskSpeed = 0
# optional
DownloadTaskSpeed = 0
# optional
ReplicateTaskSpeed = 0
# optional
ReceiveTaskSpeed = 0
# optional
SealObjectTaskTimeout = 0
# optional
GcObjectTaskTimeout = 0
# optional
GcZombieTaskTimeout = 0
# optional
GcMetaTaskTimeout = 0
# optional
SealObjectTaskRetry = 0
# optional
ReplicateTaskRetry = 0
# optional
ReceiveConfirmTaskRetry = 0
# optional
GcObjectTaskRetry = 0
# optional
GcZombieTaskRetry = 0
# optional
GcMetaTaskRetry = 0

[Monitor]
# required
DisableMetrics = false
# required
DisablePProf = false
# required
DisableProbe = false
# required
MetricsHTTPAddress = ''
# required
PProfHTTPAddress = ''
# required
ProbeHTTPAddress = ''

[Rcmgr]
# optional
DisableRcmgr = false

[Log]
# optional
Level = ''
# optional
Path = ''

[Metadata]
# required
IsMasterDB = false
# optional
BsDBSwitchCheckIntervalSec = 0

[BlockSyncer]
# required
Modules = []
# optional
BsDBWriteAddress = ''
# required
Workers = 0

[APIRateLimiter]
# Every line should represent one gateway route entry; the comment after each line must state which route name it represents.
# Most APIs have a QPS number, provided by the QA team. That usually means the max QPS for the whole 4-instance gateway cluster.
# Choosing a RateLimit value is a sophisticated question that must take many factors into account:
# 1. For most query APIs, set the rate limit to up to 1/4 of the max QPS, as this config applies to a single gateway instance.
# 2. Avoid setting a rate limit that is too large or too small.
# 3. For upload/download APIs, a rate limit is a difficult protection mechanism, because the performance of upload/download interactions usually depends on how large the processed file is.
# 4. We tentatively set 50~75 as the rate limit for the download/upload APIs and can adjust once we have more experience.
# 5. The rate limit config will be upgraded in the next version to use HTTP methods and virtual-host/path style as part of the matching keys.

# optional
PathPattern = [
    {Key = "/auth/request_nonce", Method = "GET", Names = ["GetRequestNonce"]}, 
    {Key = "/auth/update_key", Method = "POST", Names = ["UpdateUserPublicKey"]}, 
    {Key = "/permission/.+/[^/]*/.+", Method = "GET", Names = ["VerifyPermission"]},
    {Key = "/greenfield/admin/v1/get-approval", Method = "GET", Names = ["GetApproval"]},
    {Key = "/greenfield/admin/v1/challenge", Method = "GET", Names = ["GetChallengeInfo"]},
    {Key = "/greenfield/receiver/v1/replicate-piece", Method = "PUT", Names = ["ReplicateObjectPiece"]},
    {Key = "/greenfield/recovery/v1/get-piece", Method = "GET", Names = ["RecoveryPiece"]},
    {Key = "/greenfield/migrate/v1/notify-migrate-swap-out-task", Method = "POST", Names = ["NotifyMigrateSwapOut"]},
    {Key = "/greenfield/migrate/v1/migrate-piece", Method = "GET", Names = ["MigratePiece"]},
    {Key = "/greenfield/migrate/v1/migration-bucket-approval", Method = "GET", Names = ["MigrationBucketApproval"]},
    {Key = "/greenfield/migrate/v1/get-swap-out-approval", Method = "GET", Names = ["SwapOutApproval"]},
    {Key = "/download/[^/]*/.+", Method = "GET", Names = ["DownloadObjectByUniversalEndpoint"]},{Key = "/download", Method = "GET", Names = ["DownloadObjectByUniversalEndpoint"]},
    {Key = "/view/[^/]*/.+", Method = "GET", Names = ["ViewObjectByUniversalEndpoint"]},{Key = "/view", Method = "GET", Names = ["ViewObjectByUniversalEndpoint"]},
    {Key = "/status", Method = "GET", Names = ["GetStatus"]},
    {Key = "/.+/.+[?]offset.*", Method = "POST", Names = ["ResumablePutObject"]},
    {Key = "/.+/.+[?]upload-context.*", Method = "GET", Names = ["QueryResumeOffset"]},
    {Key = "/.+/.+[?]upload-progress.*", Method = "GET", Names = ["QueryUploadProgress"]},
    {Key = "/.+/.+[?]bucket-meta.*", Method = "GET", Names = ["GetBucketMeta"]},
    {Key = "/.+/.+[?]object-meta.*", Method = "GET", Names = ["GetObjectMeta"]},
    {Key = "/.+/.+[?]object-policies.*", Method = "GET", Names = ["ListObjectPolicies"]},
    {Key = "/.+[?]read-quota.*", Method = "GET", Names = ["GetBucketReadQuota"]},
    {Key = "/.+[?]list-read-quota.*", Method = "GET", Names = ["listBucketReadRecord"]},
    {Key = "/[?].*group-query.*", Method = "GET", Names = ["getGroupList"]},
    {Key = "/[?].*objects-query.*", Method = "GET", Names = ["listObjectsByIDs"]},
    {Key = "/[?].*buckets-query.*", Method = "GET", Names = ["listBucketsByIDs"]},
    {Key = "/[?].*verify-id.*", Method = "GET", Names = ["verifyPermissionByID"]},
    {Key = "/[?].*user-groups.*", Method = "GET", Names = ["getUserGroups"]},
    {Key = "/[?].*group-members.*", Method = "GET", Names = ["getGroupMembers"]},
    {Key = "/[?].*owned-groups.*", Method = "GET", Names = ["getUserOwnedGroups"]},
    
    {Key = "/.+/$", Method = "GET", Names = ["ListObjectsByBucket"]},
    {Key = "/.+/.+", Method = "GET", Names = ["ListObjectsByBucket"]},
    {Key = "/.+/.+", Method = "PUT", Names = ["PutObject"]},
    {Key = "/$", Method = "GET", Names = ["GetUserBuckets"]},

]

NameToLimit = [
    {Name = "GetRequestNonce", RateLimit = 100, RatePeriod = 'S'}, # requestNonceRouterName 3000qps
    {Name = "UpdateUserPublicKey", RateLimit = 100, RatePeriod = 'S'}, # updateUserPublicKeyRouterName 4000qps
    {Name = "VerifyPermission", RateLimit = 100, RatePeriod = 'S'}, # verifyPermissionRouterName  1200qps
    {Name = "GetApproval", RateLimit = 35, RatePeriod = 'S'}, # approvalRouterName  150qps
    {Name = "GetChallengeInfo", RateLimit = 20, RatePeriod = 'S'}, # getChallengeInfoRouterName, no test data
    {Name = "ReplicateObjectPiece", RateLimit = 1000, RatePeriod = 'S'},  # replicateObjectPieceRouterName, no test data. Internal API among sps, no rate limit is needed.
    {Name = "RecoveryPiece", RateLimit = 1000, RatePeriod = 'S'}, # recoveryPieceRouterName, no test data. Internal API among sps, no rate limit is needed.
    {Name = "NotifyMigrateSwapOut", RateLimit = 10, RatePeriod = 'S'},  # notifyMigrateSwapOutRouterName, no test data. Internal API among sps, no rate limit is needed.
    {Name = "MigratePiece", RateLimit = 10, RatePeriod = 'S'}, # migratePieceRouterName, no test data
    {Name = "MigrationBucketApproval", RateLimit = 10, RatePeriod = 'S'}, # migrationBucketApprovalName, no test data
    {Name = "SwapOutApproval", RateLimit = 10, RatePeriod = 'S'}, # swapOutApprovalName, no test data
    {Name = "DownloadObjectByUniversalEndpoint", RateLimit = 50, RatePeriod = 'S'}, # downloadObjectByUniversalEndpointName, 50qps
    {Name = "ViewObjectByUniversalEndpoint", RateLimit = 50, RatePeriod = 'S'}, # viewObjectByUniversalEndpointName, 50qps
    {Name = "GetStatus", RateLimit = 200, RatePeriod = 'S'},# getStatusRouterName, 2000qps
    {Name = "ResumablePutObject", RateLimit = 30, RatePeriod = 'S'}, # resumablePutObjectRouterName , test data is same as putObject object 10qps
    {Name = "QueryResumeOffset", RateLimit = 30, RatePeriod = 'S'},  # queryResumeOffsetName, test data is same as putObject object 10qps
    {Name = "QueryUploadProgress", RateLimit = 50, RatePeriod = 'S'}, # queryUploadProgressRouterName, test data is same as putObject object 10qps
    {Name = "GetBucketMeta", RateLimit = 100, RatePeriod = 'S'}, # getBucketMetaRouterName, 400qps
    {Name = "GetObjectMeta", RateLimit = 100, RatePeriod = 'S'}, # getObjectMetaRouterName, 400qps
    {Name = "ListObjectPolicies", RateLimit = 200, RatePeriod = 'S'}, # listObjectPoliciesRouterName, 2000qps
    {Name = "GetBucketReadQuota", RateLimit = 200, RatePeriod = 'S'}, # getBucketReadQuotaRouterName
    {Name = "ListBucketReadRecord", RateLimit = 100, RatePeriod = 'S'}, # listBucketReadRecordRouterName
    {Name = "GetGroupList", RateLimit = 200, RatePeriod = 'S'}, # getGroupListRouterName, similar to getUserGroupsRouterName, 2000qps
    {Name = "ListObjectsByIDs", RateLimit = 200, RatePeriod = 'S'}, # listObjectsByIDsRouterName, 1200qps
    {Name = "ListBucketsByIDs", RateLimit = 200, RatePeriod = 'S'}, # listBucketsByIDsRouterName, 2000qps
    {Name = "VerifyPermissionByID", RateLimit = 200, RatePeriod = 'S'}, # verifyPermissionByIDRouterName, 1200qps
    {Name = "GetUserGroups", RateLimit = 200, RatePeriod = 'S'}, # getUserGroupsRouterName, 2000qps
    {Name = "GetGroupMembers", RateLimit = 200, RatePeriod = 'S'}, # getGroupMembersRouterName, 2000qps
    {Name = "GetUserOwnedGroups", RateLimit = 200, RatePeriod = 'S'}, # getUserOwnedGroupsRouterName, 2000qps
    
    {Name = "ListObjectsByBucket", RateLimit = 75, RatePeriod = 'S'}, # listObjectsByBucketRouterName, 300qps
    {Name = "GetObject", RateLimit = 75, RatePeriod = 'S'}, # getObjectRouterName, 100 qps
    {Name = "PutObject", RateLimit = 75, RatePeriod = 'S'}, # putObjectRouterName, 100 qps
    {Name = "GetUserBuckets", RateLimit = 75, RatePeriod = 'S'}] # getUserBucketsRouterName, 1000 qps

# optional
HostPattern = []
# optional
APILimits = []

[APIRateLimiter.IPLimitCfg]
# optional
On = false
# optional
RateLimit = 0
# optional
RatePeriod = ''

[Manager]
# optional
EnableLoadTask = false
# optional
SubscribeSPExitEventIntervalSec = 0
# optional
SubscribeSwapOutExitEventIntervalSec = 0
# optional
SubscribeBucketMigrateEventIntervalSec = 0
# optional
GVGPreferSPList = []
# optional
SPBlackList = []

App info

These fields are optional and can be left at their defaults. For example:

GRPCAddress = '0.0.0.0:9333'

Database

To configure [SpDB] and [BsDB], you have to fill in the database user name, password, address and database name in these fields.
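For example, with the local MySQL instance set up earlier (the password, address and database names below are placeholders):

```toml
[SpDB]
User = 'root'
Passwd = 'yourpassword'
Address = '127.0.0.1:3306'
Database = 'spdb'

[BsDB]
User = 'root'
Passwd = 'yourpassword'
Address = '127.0.0.1:3306'
Database = 'bsdb'
```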

PieceStore

To configure [PieceStore] and [PieceStore.Store], you can read the details in this doc.

Chain info

  • The ChainID of the testnet is greenfield_5600-1.
  • ChainAddress is the RPC endpoint of the testnet; you can find RPC info here.
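Putting these together, a testnet [Chain] fragment might look like the sketch below; the RPC URL is a placeholder, not a real endpoint:

```toml
[Chain]
ChainID = 'greenfield_5600-1'
# placeholder; substitute a real testnet RPC endpoint
ChainAddress = ['https://your-greenfield-testnet-rpc:443']
```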

SpAccount

These private keys are generated during wallet setup.

Endpoint

[Endpoint] specifies the URLs of the different services.

For a single-machine host (not recommended):

[Endpoint]
ApproverEndpoint = ''
ManagerEndpoint = ''
DownloaderEndpoint = ''
ReceiverEndpoint = ''
MetadataEndpoint = ''
UploaderEndpoint = ''
P2PEndpoint = ''
SignerEndpoint = ''
AuthenticatorEndpoint = ''

For K8S cluster:

[Endpoint]
ApproverEndpoint = 'manager:9333'
ManagerEndpoint = 'manager:9333'
DownloaderEndpoint = 'downloader:9333'
ReceiverEndpoint = 'receiver:9333'
MetadataEndpoint = 'metadata:9333'
UploaderEndpoint = 'uploader:9333'
P2PEndpoint = 'p2p:9333'
SignerEndpoint = 'signer:9333'
AuthenticatorEndpoint = 'localhost:9333'

P2P

  • P2PPrivateKey and node_id are generated by ./gnfd-sp p2p.create.key -n 1

  • P2PAntAddress is your load balancer address, in ip:port form. If you don't have a load balancer address, you should have a public IP and use it in P2PAddress.

  • P2PBootstrap can be left empty.
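Putting the points above together, a [P2P] fragment might be sketched as follows; the port number and addresses are illustrative assumptions:

```toml
[P2P]
# output of ./gnfd-sp p2p.create.key -n 1
P2PPrivateKey = '<private key generated above>'
# listen address (port is an assumed choice)
P2PAddress = '0.0.0.0:9933'
# load balancer address, or public ip:port if you have no load balancer
P2PAntAddress = '1.2.3.4:9933'
P2PBootstrap = []
```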

Gateway

[Gateway]
DomainName = 'region.sp-name.com'

The correct configuration should not include the protocol prefix https://.
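A complete [Gateway] fragment therefore looks like the sketch below; the HTTP listen address is an assumption:

```toml
[Gateway]
DomainName = 'region.sp-name.com'
# address the gateway listens on (assumed port)
HTTPAddress = '0.0.0.0:9033'
```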

BlockSyncer

Here is the block_syncer config. BsDBWriteAddress can be the same as the BsDB Address above. To enhance performance, you can set the write database address here and the corresponding read database address in [BsDB].

Modules = ['epoch','bucket','object','payment','group','permission','storage_provider','prefix_tree', 'virtual_group','sp_exit_events','object_id_map','general']
BsDBWriteAddress = 'localhost:3306'
Workers = 50

Start

# start sp
./gnfd-sp --config ${config_file_path}
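For long-running deployments you will likely want a process supervisor instead of a bare shell. A minimal systemd unit sketch, with all paths as assumptions:

```ini
[Unit]
Description=Greenfield Storage Provider
After=network-online.target

[Service]
# hypothetical install location; adjust to where you copied the build output
ExecStart=/opt/gnfd-sp/gnfd-sp --config /opt/gnfd-sp/config.toml
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
```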

Join Greenfield Testnet

Run Testnet SP Node

Document

Related Projects

  • Greenfield: The Golang implementation of the Greenfield Blockchain.
  • Greenfield-Go-SDK: The Greenfield Go SDK, used to interact with SP, Greenfield and Tendermint.
  • Greenfield Cmd: The Greenfield client command-line tool, supporting commands to make requests to Greenfield.
  • Greenfield-Common: The Greenfield common package.
  • Reed-Solomon: The Reed-Solomon erasure coding package in pure Go, with speeds exceeding 1GB/s per CPU core.
  • Juno: The Cosmos Hub blockchain data aggregator and exporter package.

Contribution

Thank you for considering helping out with the source code! We welcome contributions from anyone on the internet and are grateful for even the smallest of fixes!

If you'd like to contribute to Greenfield Storage Provider, please fork, fix, commit and send a pull request for the maintainers to review and merge into the main code base. If you wish to submit more complex changes, please check with the core devs first through a GitHub issue (a Discord channel is coming soon) to ensure those changes are in line with the general philosophy of the project and/or to get early feedback. This can make your efforts much lighter, and our review and merge procedures quick and simple.

License

The greenfield storage provider library (i.e. all code outside the cmd directory) is licensed under the GNU Lesser General Public License v3.0, also included in our repository in the COPYING.LESSER file.

The greenfield storage provider binaries (i.e. all code inside the cmd directory) are licensed under the GNU General Public License v3.0, also included in our repository in the COPYING file.


greenfield-storage-provider's Issues

Unable to build from source

Need some help to build from source.

$ go version
go version go1.19.6 linux/amd64

$ make build
bash +x ./build.sh
./build.sh: line 12: buf: command not found
service/challenge/challenge.go:15:2: no required module provides package github.com/bnb-chain/greenfield-storage-provider/service/challenge/types; to add it:
	go get github.com/bnb-chain/greenfield-storage-provider/service/challenge/types
service/downloader/downloader.go:15:2: no required module provides package github.com/bnb-chain/greenfield-storage-provider/service/downloader/types; to add it:
	go get github.com/bnb-chain/greenfield-storage-provider/service/downloader/types
service/metadata/client/metadata_client.go:10:2: no required module provides package github.com/bnb-chain/greenfield-storage-provider/service/metadata/types; to add it:
	go get github.com/bnb-chain/greenfield-storage-provider/service/metadata/types
service/p2p/p2p.go:19:2: no required module provides package github.com/bnb-chain/greenfield-storage-provider/service/p2p/types; to add it:
	go get github.com/bnb-chain/greenfield-storage-provider/service/p2p/types
service/receiver/client/receiver_client.go:10:2: no required module provides package github.com/bnb-chain/greenfield-storage-provider/service/receiver/types; to add it:
	go get github.com/bnb-chain/greenfield-storage-provider/service/receiver/types
service/signer/client/signer_client.go:13:2: no required module provides package github.com/bnb-chain/greenfield-storage-provider/service/signer/types; to add it:
	go get github.com/bnb-chain/greenfield-storage-provider/service/signer/types
service/tasknode/task_node.go:19:2: no required module provides package github.com/bnb-chain/greenfield-storage-provider/service/tasknode/types; to add it:
	go get github.com/bnb-chain/greenfield-storage-provider/service/tasknode/types
store/sqldb/database.go:7:2: no required module provides package github.com/bnb-chain/greenfield-storage-provider/service/types; to add it:
	go get github.com/bnb-chain/greenfield-storage-provider/service/types
service/uploader/client/uploader_client.go:11:2: no required module provides package github.com/bnb-chain/greenfield-storage-provider/service/uploader/types; to add it:
	go get github.com/bnb-chain/greenfield-storage-provider/service/uploader/types
build failed Ooooooh!!!

Provide the API version in an endpoint

It would be great if you provide a get_info endpoint where you get the API version and maybe other data.

Use case:

  • You have a library/SDK compatible with v0.1.0 and the node gets updated to v0.1.2. The SDK that connects to the v0.1.2 should present a warning or directly not work on the new version.

SP process terminates abnormally

System information

Version : v1.0.1
Branch : master
Commit : ae61ba0
Build : go1.20.3 linux amd64 2023-10-20 21:23

Actual behaviour

SP ran normally for a couple of hours, then 'dial tcp 0.0.0.0:9333: connect: connection refused' error occurred.

Backtrace

{"t":"2023-10-23T19:22:59.057Z","l":"info","caller":"blocksyncer/blocksyncer_indexer.go:228","msg":"0x7a00004347b5dff922301945741a6f5f27642348613986e50dd38d36df2fcd73"}
{"t":"2023-10-23T19:22:59.066Z","l":"info","caller":"blocksyncer/blocksyncer_indexer.go:208","msg":"height :1600417 tx count:3 sql count:13"}
{"t":"2023-10-23T19:22:59.066Z","l":"info","caller":"blocksyncer/blocksyncer_indexer.go:211","msg":"total cost: 17"}
{"t":"2023-10-23T19:22:59.875Z","l":"error","caller":"gfspvgmgr/virtual_group_manager.go:293","msg":"failed to refresh due to current sp is not in sp list"}
{"t":"2023-10-23T19:23:01.553Z","l":"info","caller":"blocksyncer/blocksyncer_indexer.go:228","msg":"0x83c3a096b48206f26e6252017d5e22bb05e6e0aaf5851c6aba99a730f1c50fc5"}
{"t":"2023-10-23T19:23:01.560Z","l":"info","caller":"blocksyncer/blocksyncer_indexer.go:208","msg":"height :1600418 tx count:1 sql count:9"}
{"t":"2023-10-23T19:23:01.560Z","l":"info","caller":"blocksyncer/blocksyncer_indexer.go:211","msg":"total cost: 11"}
{"t":"2023-10-23T19:23:01.878Z","l":"error","caller":"gfspvgmgr/virtual_group_manager.go:293","msg":"failed to refresh due to current sp is not in sp list"}
{"t":"2023-10-23T19:23:03.857Z","l":"error","caller":"gfspvgmgr/virtual_group_manager.go:293","msg":"failed to refresh due to current sp is not in sp list"}
{"t":"2023-10-23T19:23:03.872Z","l":"error","caller":"gnfd/gnfd_service.go:207","msg":"failed to query storage provider","error":"rpc error: code = Unknown desc = StorageProvider does not exist: unknown request"}
{"t":"2023-10-23T19:23:03.872Z","l":"error","caller":"manager/manager.go:153","msg":"failed to new bucket migrate scheduler","error":"rpc error: code = Unknown desc = StorageProvider does not exist: unknown request"}
{"t":"2023-10-23T19:23:04.002Z","l":"info","caller":"blocksyncer/blocksyncer_indexer.go:228","msg":"0xb865c584b466664662831e778792682144a0b52bd519c1b91c9bef3000fb236e"}
{"t":"2023-10-23T19:23:04.014Z","l":"info","caller":"blocksyncer/blocksyncer_indexer.go:208","msg":"height :1600419 tx count:3 sql count:19"}
{"t":"2023-10-23T19:23:04.014Z","l":"info","caller":"blocksyncer/blocksyncer_indexer.go:211","msg":"total cost: 22"}
{"t":"2023-10-23T19:23:05.782Z","l":"info","caller":"gfspapp/app_lifecycle.go:93","msg":"succeed to stop service","service_name":"approver"}
{"t":"2023-10-23T19:23:05.782Z","l":"info","caller":"gfspapp/app_lifecycle.go:93","msg":"succeed to stop service","service_name":"authenticator"}
{"t":"2023-10-23T19:23:05.782Z","l":"info","caller":"gfspapp/app_lifecycle.go:93","msg":"succeed to stop service","service_name":"downloader"}
{"t":"2023-10-23T19:23:05.782Z","l":"info","caller":"gfspapp/app_lifecycle.go:93","msg":"succeed to stop service","service_name":"taskexecutor"}
{"t":"2023-10-23T19:23:05.782Z","l":"error","caller":"gater/gater.go:63","msg":"failed to listen","error":"http: Server closed"}
{"t":"2023-10-23T19:23:05.782Z","l":"info","caller":"gfspapp/app_lifecycle.go:93","msg":"succeed to stop service","service_name":"gateway"}
{"t":"2023-10-23T19:23:05.782Z","l":"info","caller":"gfspapp/app_lifecycle.go:93","msg":"succeed to stop service","service_name":"manager"}
{"t":"2023-10-23T19:23:05.782Z","l":"info","caller":"gfspapp/app_lifecycle.go:93","msg":"succeed to stop service","service_name":"p2p"}
{"t":"2023-10-23T19:23:05.782Z","l":"info","caller":"gfspapp/app_lifecycle.go:93","msg":"succeed to stop service","service_name":"receiver"}
{"t":"2023-10-23T19:23:05.782Z","l":"info","caller":"gfspapp/app_lifecycle.go:93","msg":"succeed to stop service","service_name":"signer"}
{"t":"2023-10-23T19:23:05.782Z","l":"info","caller":"gfspapp/app_lifecycle.go:93","msg":"succeed to stop service","service_name":"metadata"}
{"t":"2023-10-23T19:23:05.782Z","l":"info","caller":"gfspapp/app_lifecycle.go:93","msg":"succeed to stop service","service_name":"uploader"}
{"t":"2023-10-23T19:23:05.782Z","l":"info","caller":"gfspapp/app_lifecycle.go:93","msg":"succeed to stop service","service_name":"blocksyncer"}
{"t":"2023-10-23T19:23:05.782Z","l":"error","caller":"metrics/metrics.go:79","msg":"failed to listen and serve","error":"http: Server closed"}
{"t":"2023-10-23T19:23:05.782Z","l":"info","caller":"gfspapp/app_lifecycle.go:93","msg":"succeed to stop service","service_name":"metrics"}
{"t":"2023-10-23T19:23:05.782Z","l":"error","caller":"pprof/pprof.go:65","msg":"failed to listen and serve","error":"http: Server closed"}
{"t":"2023-10-23T19:23:05.782Z","l":"info","caller":"gfspapp/app_lifecycle.go:93","msg":"succeed to stop service","service_name":"pprof"}
{"t":"2023-10-23T19:23:05.782Z","l":"error","caller":"gfspapp/app_lifecycle.go:84","msg":"service while stopping service, killing instance manually"}
{"t":"2023-10-23T19:23:05.878Z","l":"error","caller":"gfspvgmgr/virtual_group_manager.go:293","msg":"failed to refresh due to current sp is not in sp list"}
{"t":"2023-10-23T19:23:05.929Z","l":"error","caller":"gfspclient/manager.go:194","msg":"client failed to query manager's task stats","error":"rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial tcp 0.0.0.0:9333: connect: connection refused\""}
{"t":"2023-10-23T19:23:06.559Z","l":"info","caller":"blocksyncer/blocksyncer_indexer.go:228","msg":"0x69cecfd50f41e9f69391e9d881f6edb032e9047e6fd0af51d6980a34a92ef3b7"}
{"t":"2023-10-23T19:23:06.567Z","l":"info","caller":"blocksyncer/blocksyncer_indexer.go:208","msg":"height :1600420 tx count:2 sql count:11"}
{"t":"2023-10-23T19:23:06.567Z","l":"info","caller":"blocksyncer/blocksyncer_indexer.go:211","msg":"total cost: 14"}
{"t":"2023-10-23T19:23:07.872Z","l":"error","caller":"gfspvgmgr/virtual_group_manager.go:293","msg":"failed to refresh due to current sp is not in sp list"}
{"t":"2023-10-23T19:23:08.903Z","l":"error","caller":"gnfd/gnfd_service.go:207","msg":"failed to query storage provider","error":"rpc error: code = Unknown desc = StorageProvider does not exist: unknown request"}
{"t":"2023-10-23T19:23:08.903Z","l":"error","caller":"manager/manager.go:153","msg":"failed to new bucket migrate scheduler","error":"rpc error: code = Unknown desc = StorageProvider does not exist: unknown request"}
{"t":"2023-10-23T19:23:08.927Z","l":"error","caller":"gfspclient/manager.go:194","msg":"client failed to query manager's task stats","error":"rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial tcp 0.0.0.0:9333: connect: connection refused\""}
{"t":"2023-10-23T19:23:09.047Z","l":"info","caller":"blocksyncer/blocksyncer_indexer.go:228","msg":"0xe3dc99dc1ed61ce70a48cefb8e495f52a848085355497ee8105cfabcbdf0f505"}
{"t":"2023-10-23T19:23:09.054Z","l":"info","caller":"blocksyncer/blocksyncer_indexer.go:208","msg":"height :1600421 tx count:1 sql count:3"}
{"t":"2023-10-23T19:23:09.054Z","l":"info","caller":"blocksyncer/blocksyncer_indexer.go:211","msg":"total cost: 10"}
{"t":"2023-10-23T19:23:09.878Z","l":"error","caller":"gfspvgmgr/virtual_group_manager.go:293","msg":"failed to refresh due to current sp is not in sp list"}
{"t":"2023-10-23T19:23:09.953Z","l":"error","caller":"manager/manager.go:510","msg":"reset gc object task","old_task_key":"GCObject-start33033-end34033-time1698076305"}
{"t":"2023-10-23T19:23:09.953Z","l":"error","caller":"manager/manager.go:512","msg":"reset gc object task","new_task_key":"GCObject-start33033-end34033-time1698076305"}
{"t":"2023-10-23T19:23:09.953Z","l":"error","caller":"manager/manager.go:510","msg":"reset gc object task","old_task_key":"GCObject-start33033-end34033-time1698076305"}
{"t":"2023-10-23T19:23:09.953Z","l":"error","caller":"manager/manager.go:512","msg":"reset gc object task","new_task_key":"GCObject-start33033-end34033-time1698076305"}
{"t":"2023-10-23T19:23:09.953Z","l":"debug","caller":"manager/manager.go:674","msg":"add gc object task to backup set","task_key":"GCObject-start33033-end34033-time1698076305","task_limit":"tasks:1 tasks_low_priority:1 "}
{"t":"2023-10-23T19:23:09.953Z","l":"debug","caller":"manager/manager.go:563","msg":"only one task for picking"}
{"t":"2023-10-23T19:23:09.953Z","l":"info","caller":"manager/manager.go:739","msg":"retry push gc object task to queue after dispatching","error":null}
{"t":"2023-10-23T19:23:09.953Z","l":"debug","caller":"manager/manage_task.go:67","msg":"dispatch task to executor","key_info":"key[GCObject-start33033-end34033-time1698076305], type[GCObjectTask], priority[0], limit[tasks:1 tasks_low_priority:1 ], start_block[33033], end_block[34033], current_block[0], last_deleted_object_id[0], create[1698076305], update[1698088989], timeout[300], retry[1], max_retry[0], runner[127.0.0.1], error[<nil>]"}
{"t":"2023-10-23T19:23:09.953Z","l":"debug","caller":"gfspapp/manage_server.go:193","msg":"succeed to dispatch task","task_key":"GCObject-start33033-end34033-time1698076305","info":"key[GCObject-start33033-end34033-time1698076305], type[GCObjectTask], priority[0], limit[tasks:1 tasks_low_priority:1 ], start_block[33033], end_block[34033], current_block[0], last_deleted_object_id[0], create[1698076305], update[1698088989], timeout[300], retry[1], max_retry[0], runner[127.0.0.1], error[<nil>]"}
{"t":"2023-10-23T19:23:09.953Z","l":"debug","caller":"gfspapp/manage_server.go:175","msg":"succeed to response ask task","task_key":"GCObject-start33033-end34033-time1698076305"}
{"t":"2023-10-23T19:23:09.954Z","l":"debug","caller":"gfsprcmgr/scope.go:452","msg":"begin to reserve resources","span_id":0,"reserved_stat":"memory reserved [0], task reserved[h: 0, m: 0, l: 0]","alloc_stat":"memory reserved [0], task reserved[h: 0, m: 0, l: 1]"}
{"t":"2023-10-23T19:23:09.954Z","l":"debug","caller":"gfsprcmgr/scope.go:452","msg":"begin to reserve resources","span_id":202,"reserved_stat":"memory reserved [0], task reserved[h: 0, m: 0, l: 0]","alloc_stat":"memory reserved [0], task reserved[h: 0, m: 0, l: 1]"}
{"t":"2023-10-23T19:23:09.954Z","l":"debug","caller":"gfsprcmgr/scope.go:452","msg":"begin to reserve resources","span_id":11,"reserved_stat":"memory reserved [0], task reserved[h: 0, m: 0, l: 0]","alloc_stat":"memory reserved [0], task reserved[h: 0, m: 0, l: 1]"}
{"t":"2023-10-23T19:23:09.954Z","l":"debug","caller":"gfsprcmgr/scope.go:455","msg":"end to reserve resources","span_id":11,"reserved_stat":"memory reserved [0], task reserved[h: 0, m: 0, l: 1]","alloced_stat":"memory reserved [0], task reserved[h: 0, m: 0, l: 1]"}
{"t":"2023-10-23T19:23:09.954Z","l":"debug","caller":"gfsprcmgr/scope.go:455","msg":"end to reserve resources","span_id":202,"reserved_stat":"memory reserved [0], task reserved[h: 0, m: 0, l: 1]","alloced_stat":"memory reserved [0], task reserved[h: 0, m: 0, l: 1]"}
{"t":"2023-10-23T19:23:09.954Z","l":"debug","caller":"gfsprcmgr/scope.go:455","msg":"end to reserve resources","span_id":0,"reserved_stat":"memory reserved [0], task reserved[h: 0, m: 0, l: 1]","alloced_stat":"memory reserved [0], task reserved[h: 0, m: 0, l: 1]"}
{"t":"2023-10-23T19:23:09.954Z","l":"error","caller":"gfspclient/metadata.go:50","msg":"failed to list deleted objects by block number range","task_key":"GCObject-start33033-end34033-time1698076305","error":"rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial tcp 0.0.0.0:9333: connect: connection refused\""}
{"t":"2023-10-23T19:23:09.955Z","l":"error","caller":"executor/execute_task.go:260","msg":"failed to query deleted object list","task_key":"GCObject-start33033-end34033-time1698076305","task_info":"key[GCObject-start33033-end34033-time1698076305], type[GCObjectTask], priority[0], limit[tasks:1 tasks_low_priority:1 ], start_block[33033], end_block[34033], current_block[0], last_deleted_object_id[0], create[1698076305], update[1698088989], timeout[300], retry[1], max_retry[0], runner[127.0.0.1], error[<nil>]","error":"code_space:\"GfSpClient\" http_status_code:404 inner_code:98001 description:\"failed to list deleted objects by block number range, error: rpc error: code = Unavailable desc = connection error: desc = \\\"transport: Error while dialing: dial tcp 0.0.0.0:9333: connect: connection refused\\\"\" "}
{"t":"2023-10-23T19:23:09.955Z","l":"error","caller":"gfspclient/manager.go:136","msg":"client failed to report task","task_key":"GCObject-start33033-end34033-time1698076305","error":"rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial tcp 0.0.0.0:9333: connect: connection refused\""}
{"t":"2023-10-23T19:23:09.955Z","l":"debug","caller":"executor/executor.go:248","msg":"finished to report task","task_key":"GCObject-start33033-end34033-time1698076305","task_info":"key[GCObject-start33033-end34033-time1698076305], type[GCObjectTask], priority[0], limit[tasks:1 tasks_low_priority:1 ], start_block[33033], end_block[34033], current_block[0], last_deleted_object_id[0], create[1698076305], update[1698088989], timeout[300], retry[1], max_retry[0], runner[127.0.0.1], error[code_space:\"GfSpClient\" http_status_code:404 inner_code:98001 description:\"failed to list deleted objects by block number range, error: rpc error: code = Unavailable desc = connection error: desc = \\\"transport: Error while dialing: dial tcp 0.0.0.0:9333: connect: connection refused\\\"\" ]","error":"code_space:\"GfSpClient\" http_status_code:404 inner_code:98001 description:\"client failed to report task, error: rpc error: code = Unavailable desc = connection error: desc = \\\"transport: Error while dialing: dial tcp 0.0.0.0:9333: connect: connection refused\\\"\" "}
{"t":"2023-10-23T19:23:09.955Z","l":"debug","caller":"executor/execute_task.go:240","msg":"gc object task report progress","task_key":"GCObject-start33033-end34033-time1698076305","task_info":"key[GCObject-start33033-end34033-time1698076305], type[GCObjectTask], priority[0], limit[tasks:1 tasks_low_priority:1 ], start_block[33033], end_block[34033], current_block[0], last_deleted_object_id[0], create[1698076305], update[1698088989], timeout[300], retry[1], max_retry[0], runner[127.0.0.1], error[code_space:\"GfSpClient\" http_status_code:404 inner_code:98001 description:\"failed to list deleted objects by block number range, error: rpc error: code = Unavailable desc = connection error: desc = \\\"transport: Error while dialing: dial tcp 0.0.0.0:9333: connect: connection refused\\\"\" ]","error":"code_space:\"GfSpClient\" http_status_code:404 inner_code:98001 description:\"client failed to report task, error: rpc error: code = Unavailable desc = connection error: desc = \\\"transport: Error while dialing: dial tcp 0.0.0.0:9333: connect: connection refused\\\"\" "}
{"t":"2023-10-23T19:23:09.955Z","l":"debug","caller":"executor/execute_task.go:252","msg":"gc object task","task_key":"GCObject-start33033-end34033-time1698076305","task_info":"key[GCObject-start33033-end34033-time1698076305], type[GCObjectTask], priority[0], limit[tasks:1 tasks_low_priority:1 ], start_block[33033], end_block[34033], current_block[0], last_deleted_object_id[0], create[1698076305], update[1698088989], timeout[300], retry[1], max_retry[0], runner[127.0.0.1], error[code_space:\"GfSpClient\" http_status_code:404 inner_code:98001 description:\"failed to list deleted objects by block number range, error: rpc error: code = Unavailable desc = connection error: desc = \\\"transport: Error while dialing: dial tcp 0.0.0.0:9333: connect: connection refused\\\"\" ]","is_succeed":false,"response_end_block_id":0,"waiting_gc_object_number":0,"has_gc_object_number":0,"try_again_later":false,"task_is_canceled":false,"has_no_object":false,"error":"code_space:\"GfSpClient\" http_status_code:404 inner_code:98001 description:\"failed to list deleted objects by block number range, error: rpc error: code = Unavailable desc = connection error: desc = \\\"transport: Error while dialing: dial tcp 0.0.0.0:9333: connect: connection refused\\\"\" "}
{"t":"2023-10-23T19:23:09.955Z","l":"debug","caller":"executor/executor.go:231","msg":"finished to handle task","task_key":"GCObject-start33033-end34033-time1698076305"}
{"t":"2023-10-23T19:23:09.955Z","l":"debug","caller":"gfsprcmgr/scope.go:75","msg":"begin to release resources","span_id":0,"reserved_stat":"memory reserved [0], task reserved[h: 0, m: 0, l: 1]","release_stat":"memory reserved [0], task reserved[h: 0, m: 0, l: 1]"}
{"t":"2023-10-23T19:23:09.955Z","l":"debug","caller":"gfsprcmgr/scope.go:78","msg":"end to release resources","span_id":0,"reserved_stat":"memory reserved [0], task reserved[h: 0, m: 0, l: 0]","release_stat":"memory reserved [0], task reserved[h: 0, m: 0, l: 1]"}
{"t":"2023-10-23T19:23:09.955Z","l":"error","caller":"gfspclient/manager.go:70","msg":"client failed to ask task","error":"rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial tcp 0.0.0.0:9333: connect: connection refused\""}
{"t":"2023-10-23T19:23:09.955Z","l":"error","caller":"executor/executor.go:130","msg":"failed to ask task","remaining":"memory:7730940928 tasks:10240 tasks_high_priority:128 tasks_medium_priority:1024 tasks_low_priority:16 ","error":"code_space:\"GfSpClient\" http_status_code:404 inner_code:98001 description:\"client failed to ask task, error: rpc error: code = Unavailable desc = connection error: desc = \\\"transport: Error while dialing: dial tcp 0.0.0.0:9333: connect: connection refused\\\"\" "}
{"t":"2023-10-23T19:23:09.955Z","l":"error","caller":"gfspclient/manager.go:136","msg":"client failed to report task","error":"rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial tcp 0.0.0.0:9333: connect: connection refused\""}
{"t":"2023-10-23T19:23:09.955Z","l":"debug","caller":"executor/executor.go:248","msg":"finished to report task","task_info":"key[GCObject-start33033-end34033-time1698076305], type[GCObjectTask], priority[0], limit[tasks:1 tasks_low_priority:1 ], start_block[33033], end_block[34033], current_block[0], last_deleted_object_id[0], create[1698076305], update[1698088989], timeout[300], retry[1], max_retry[0], runner[127.0.0.1], error[code_space:\"GfSpClient\" http_status_code:404 inner_code:98001 description:\"failed to list deleted objects by block number range, error: rpc error: code = Unavailable desc = connection error: desc = \\\"transport: Error while dialing: dial tcp 0.0.0.0:9333: connect: connection refused\\\"\" ]","error":"code_space:\"GfSpClient\" http_status_code:404 inner_code:98001 description:\"client failed to report task, error: rpc error: code = Unavailable desc = connection error: desc = \\\"transport: Error while dialing: dial tcp 0.0.0.0:9333: connect: connection refused\\\"\" "}
{"t":"2023-10-23T19:23:10.005Z","l":"error","caller":"gfspclient/manager.go:70","msg":"client failed to ask task","error":"rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial tcp 0.0.0.0:9333: connect: connection refused\""}
{"t":"2023-10-23T19:23:10.005Z","l":"error","caller":"executor/executor.go:130","msg":"failed to ask task","remaining":"memory:7730940928 tasks:10240 tasks_high_priority:128 tasks_medium_priority:1024 tasks_low_priority:16 ","error":"code_space:\"GfSpClient\" http_status_code:404 inner_code:98001 description:\"client failed to ask task, error: rpc error: code = Unavailable desc = connection error: desc = \\\"transport: Error while dialing: dial tcp 0.0.0.0:9333: connect: connection refused\\\"\" "}
{"t":"2023-10-23T19:23:10.007Z","l":"error","caller":"gfspclient/manager.go:70","msg":"client failed to ask task","error":"rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial tcp 0.0.0.0:9333: connect: connection refused\""}
{"t":"2023-10-23T19:23:10.

[ BlockSyncer ] Failed to get block from map

Run command:

./gnfd-sp --config /path-to-config/config.toml

Error Log:

{"t":"2023-06-21T07:40:26.523Z","l":"error","caller":"parser/worker.go:86","msg":"error while process block","height":1,"err":"failed to get block from map need retry"} 
{"t":"2023-06-21T07:40:29.530Z","l":"warn","caller":"blocksyncer/blocksyncer_indexer.go:89","msg":"failed to get map data height: 1"} 
{"t":"2023-06-21T07:40:29.530Z","l":"info","caller":"parser/worker.go:121","msg":"processing block","height":1} 
{"t":"2023-06-21T07:40:29.530Z","l":"error","caller":"parser/worker.go:86","msg":"error while process block","height":1,"err":"failed to get block from map need retry"}


How to Fix?

diff --git a/modular/blocksyncer/blocksyncer_indexer.go b/modular/blocksyncer/blocksyncer_indexer.go
index a60e723..d93655b 100644
--- a/modular/blocksyncer/blocksyncer_indexer.go
+++ b/modular/blocksyncer/blocksyncer_indexer.go
@@ -78,7 +78,7 @@ func (i *Impl) Process(height uint64) error {
        flagAny := i.GetCatchUpFlag().Load()
        flag := flagAny.(int64)
        heightKey := fmt.Sprintf("%s-%d", i.GetServiceName(), height)
-       if flag == -1 || flag >= int64(height) {
+       if flag < -1 || flag > int64(height) {
                blockAny, okb := blockMap.Load(heightKey)
                eventsAny, oke := eventMap.Load(heightKey)
                txsAny, okt := txMap.Load(heightKey)

make build
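The retry error in the log comes from `Process` taking the map-lookup branch before the block fetcher has stored anything under that height's key. A stripped-down sketch of that gate, using the original condition quoted in the diff (the key format and function names here are simplified assumptions, not the actual implementation):

```go
package main

import (
	"fmt"
	"sync"
)

// blockMap stands in for the indexer's shared cache of fetched blocks.
var blockMap sync.Map

// process mirrors the gate in Impl.Process: when the catch-up flag says the
// block should already be cached, a missing map entry yields a retryable error.
func process(flag, height int64) error {
	key := fmt.Sprintf("blocksyncer-%d", height) // assumed key shape for illustration
	if flag == -1 || flag >= height {            // condition before the patch
		if _, ok := blockMap.Load(key); !ok {
			return fmt.Errorf("failed to get block from map need retry")
		}
	}
	return nil
}

func main() {
	// flag == -1 (still catching up) while height 1 is not yet in the map,
	// so process keeps returning the retry error seen in the log above.
	fmt.Println(process(-1, 1))
}
```

If the producer never stores the block under the expected key, this branch loops on the same retry error at every height, which matches the repeated `height: 1` lines in the log.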

make build error

OS version:

NAME="Ubuntu"
VERSION="20.04 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal

go version:

go version go1.20.3 linux/amd64

starting step:

make install-tools

make build && cd build

get errors:

bash +x ./build.sh
Failure: plugin gogofaster: could not find protoc plugin for name gogofaster - please make sure protoc-gen-gogofaster is installed and present on your $PATH
service/auth/auth.go:13:2: no required module provides package github.com/bnb-chain/greenfield-storage-provider/service/auth/types; to add it:
        go get github.com/bnb-chain/greenfield-storage-provider/service/auth/types
service/challenge/challenge.go:15:2: no required module provides package github.com/bnb-chain/greenfield-storage-provider/service/challenge/types; to add it:
        go get github.com/bnb-chain/greenfield-storage-provider/service/challenge/types
service/downloader/downloader.go:15:2: no required module provides package github.com/bnb-chain/greenfield-storage-provider/service/downloader/types; to add it:
        go get github.com/bnb-chain/greenfield-storage-provider/service/downloader/types
service/metadata/client/metadata_client.go:11:2: no required module provides package github.com/bnb-chain/greenfield-storage-provider/service/metadata/types; to add it:
        go get github.com/bnb-chain/greenfield-storage-provider/service/metadata/types
service/p2p/p2p.go:19:2: no required module provides package github.com/bnb-chain/greenfield-storage-provider/service/p2p/types; to add it:
        go get github.com/bnb-chain/greenfield-storage-provider/service/p2p/types
service/receiver/client/receiver_client.go:10:2: no required module provides package github.com/bnb-chain/greenfield-storage-provider/service/receiver/types; to add it:
        go get github.com/bnb-chain/greenfield-storage-provider/service/receiver/types
service/signer/client/signer_client.go:13:2: no required module provides package github.com/bnb-chain/greenfield-storage-provider/service/signer/types; to add it:
        go get github.com/bnb-chain/greenfield-storage-provider/service/signer/types
service/tasknode/task_node.go:19:2: no required module provides package github.com/bnb-chain/greenfield-storage-provider/service/tasknode/types; to add it:
        go get github.com/bnb-chain/greenfield-storage-provider/service/tasknode/types
store/sqldb/database.go:12:2: no required module provides package github.com/bnb-chain/greenfield-storage-provider/service/types; to add it:
        go get github.com/bnb-chain/greenfield-storage-provider/service/types
service/uploader/client/uploader_client.go:10:2: no required module provides package github.com/bnb-chain/greenfield-storage-provider/service/uploader/types; to add it:
        go get github.com/bnb-chain/greenfield-storage-provider/service/uploader/types
build failed

go get:

go get github.com/bnb-chain/greenfield-storage-provider/service/challenge/types

output:
go: github.com/bnb-chain/greenfield-storage-provider/service/auth/types: no matching versions for query "upgrade"
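The `go get` suggestion in the error output is a dead end here: the `service/*/types` packages are generated locally from protobuf definitions, so there are no published module versions to fetch. The first line of the failure is the real cause: `protoc-gen-gogofaster` is not on `$PATH`, so code generation never produced those packages. A small diagnostic that mirrors the lookup `protoc` performs (the remediation message is a suggestion, not an exact command from the repo docs):

```go
package main

import (
	"fmt"
	"os/exec"
)

// diagnose checks the same thing protoc does when invoked with the gogofaster
// plugin: an executable named protoc-gen-gogofaster must exist on $PATH.
func diagnose() string {
	if _, err := exec.LookPath("protoc-gen-gogofaster"); err != nil {
		return "protoc-gen-gogofaster not found on PATH; run `make install-tools` and ensure $(go env GOPATH)/bin is on PATH"
	}
	return "protoc-gen-gogofaster found"
}

func main() {
	fmt.Println(diagnose())
}
```

Once the plugin resolves, rerun `make build` so the generated `types` packages exist before compilation.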

no good addresses

Hello, what could be causing this problem? The error is: "no good addresses".

SQL syntax error

gnfd-sp version:

Version : v0.2.3-hf.2
Branch  : HEAD
Commit  : 399e72bc698bdb143b995d92d460550bcb41b283
Build   : go1.20.6 linux amd64 2023-08-02 09:04

SP node stops syncing the latest network block

logs:

Aug 04 23:49:19 10-7-46-85 gnfd-sp[311626]: {"t":"2023-08-04T23:49:18.183+0800","l":"error","caller":"sqlclient/client.go:108","msg":"error sql","err":"Error 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'INSERT INTO `global_virtual_group_families` (`global_virtual_group_family_id`,`p' at line 1","elapsed":0.000112003,"sql":"INSERT INTO `global_virtual_groups` (`global_virtual_group_id`,`family_id`,`primary_sp_id`,`secondary_sp_ids`,`stored_size`,`virtual_payment_address`,`total_deposit`,`create_at`,`create_tx_hash`,`create_time`,`update_at`,`update_tx_hash`,`update_time`,`removed`) VALUES (1,1,6,'1,2,3,4,5,7',0,'<binary>','<binary>',45591,'<binary>',1690965783,45591,'<binary>',1690965783,false) ON DUPLICATE KEY UPDATE `global_virtual_group_id`=VALUES(`global_virtual_group_id`),`family_id`=VALUES(`family_id`),`primary_sp_id`=VALUES(`primary_sp_id`),`secondary_sp_ids`=VALUES(`secondary_sp_ids`),`stored_size`=VALUES(`stored_size`),`virtual_payment_address`=VALUES(`virtual_payment_address`),`total_deposit`=VALUES(`total_deposit`),`create_at`=VALUES(`create_at`),`create_tx_hash`=VALUES(`create_tx_hash`),`create_time`=VALUES(`create_time`),`update_at`=VALUES(`update_at`),`update_tx_hash`=VALUES(`update_tx_hash`),`update_time`=VALUES(`update_time`),`removed`=VALUES(`removed`);   INSERT INTO `global_virtual_group_families` (`global_virtual_group_family_id`,`primary_sp_id`,`global_virtual_group_ids`,`virtual_payment_address`,`create_at`,`create_tx_hash`,`create_time`,`update_at`,`update_tx_hash`,`update_time`,`removed`) VALUES (1,6,NULL,'<binary>',45591,'<binary>',1690965783,45591,'<binary>',1690965783,false) ON DUPLICATE KEY UPDATE 
`global_virtual_group_family_id`=VALUES(`global_virtual_group_family_id`),`primary_sp_id`=VALUES(`primary_sp_id`),`global_virtual_group_ids`=VALUES(`global_virtual_group_ids`),`virtual_payment_address`=VALUES(`virtual_payment_address`),`create_at`=VALUES(`create_at`),`create_tx_hash`=VALUES(`create_tx_hash`),`create_time`=VALUES(`create_time`),`update_at`=VALUES(`update_at`),`update_tx_hash`=VALUES(`update_tx_hash`),`update_time`=VALUES(`update_time`),`removed`=VALUES(`removed`);   INSERT INTO `global_virtual_groups` (`global_virtual_group_id`,`family_id`,`primary_sp_id`,`secondary_sp_ids`,`stored_size`,`virtual_payment_address`,`total_deposit`,`create_at`,`create_tx_hash`,`create_time`,`update_at`,`update_tx_hash`,`update_time`,`removed`) VALUES (2,2,7,'1,2,3,4,5,6',0,'<binary>','<binary>',45591,'<binary>',1690965783,45591,'<binary>',1690965783,false) ON DUPLICATE KEY UPDATE `global_virtual_group_id`=VALUES(`global_virtual_group_id`),`family_id`=VALUES(`family_id`),`primary_sp_id`=VALUES(`primary_sp_id`),`secondary_sp_ids`=VALUES(`secondary_sp_ids`),`stored_size`=VALUES(`stored_size`),`virtual_payment_address`=VALUES(`virtual_payment_address`),`total_deposit`=VALUES(`total_deposit`),`create_at`=VALUES(`create_at`),`create_tx_hash`=VALUES(`create_tx_hash`),`create_time`=VALUES(`create_time`),`update_at`=VALUES(`update_at`),`update_tx_hash`=VALUES(`update_tx_hash`),`update_time`=VALUES(`update_time`),`removed`=VALUES(`removed`);   INSERT INTO `global_virtual_group_families` (`global_virtual_group_family_id`,`primary_sp_id`,`global_virtual_group_ids`,`virtual_payment_address`,`create_at`,`create_tx_hash`,`create_time`,`update_at`,`update_tx_hash`,`update_time`,`removed`) VALUES (2,7,NULL,'<binary>',45591,'<binary>',1690965783,45591,'<binary>',1690965783,false) ON DUPLICATE KEY UPDATE 
`global_virtual_group_family_id`=VALUES(`global_virtual_group_family_id`),`primary_sp_id`=VALUES(`primary_sp_id`),`global_virtual_group_ids`=VALUES(`global_virtual_group_ids`),`virtual_payment_address`=VALUES(`virtual_payment_address`),`create_at`=VALUES(`create_at`),`create_tx_hash`=VALUES(`create_tx_hash`),`create_time`=VALUES(`create_time`),`update_at`=VALUES(`update_at`),`update_tx_hash`=VALUES(`update_tx_hash`),`update_time`=VALUES(`update_time`),`removed`=VALUES(`removed`);   INSERT INTO `global_virtual_groups` (`global_virtual_group_id`,`family_id`,`primary_sp_id`,`secondary_sp_ids`,`stored_size`,`virtual_payment_address`,`total_deposit`,`create_at`,`create_tx_hash`,`create_time`,`update_at`,`update_tx_hash`,`update_time`,`removed`) VALUES (3,3,5,'1,2,3,4,6,7',0,'<binary>','<binary>',45591,'<binary>',1690965783,45591,'<binary>',1690965783,false) ON DUPLICATE KEY UPDATE `global_virtual_group_id`=VALUES(`global_virtual_group_id`),`family_id`=VALUES(`family_id`),`primary_sp_id`=VALUES(`primary_sp_id`),`secondary_sp_ids`=VALUES(`secondary_sp_ids`),`stored_size`=VALUES(`stored_size`),`virtual_payment_address`=VALUES(`virtual_payment_address`),`total_deposit`=VALUES(`total_deposit`),`create_at`=VALUES(`create_at`),`create_tx_hash`=VALUES(`create_tx_hash`),`create_time`=VALUES(`create_time`),`update_at`=VALUES(`update_at`),`update_tx_hash`=VALUES(`update_tx_hash`),`update_time`=VALUES(`update_time`),`removed`=VALUES(`removed`);   INSERT INTO `global_virtual_group_families` (`global_virtual_group_family_id`,`primary_sp_id`,`global_virtual_group_ids`,`virtual_payment_address`,`create_at`,`create_tx_hash`,`create_time`,`update_at`,`update_tx_hash`,`update_time`,`removed`) VALUES (3,5,NULL,'<binary>',45591,'<binary>',1690965783,45591,'<binary>',1690965783,false) ON DUPLICATE KEY UPDATE 
`global_virtual_group_family_id`=VALUES(`global_virtual_group_family_id`),`primary_sp_id`=VALUES(`primary_sp_id`),`global_virtual_group_ids`=VALUES(`global_virtual_group_ids`),`virtual_payment_address`=VALUES(`virtual_payment_address`),`create_at`=VALUES(`create_at`),`create_tx_hash`=VALUES(`create_tx_hash`),`create_time`=VALUES(`create_time`),`update_at`=VALUES(`update_at`),`update_tx_hash`=VALUES(`update_tx_hash`),`update_time`=VALUES(`update_time`),`removed`=VALUES(`removed`);   INSERT INTO `epoch` (`one_row_id`,`block_height`,`block_hash`,`update_time`) VALUES (true,45591,'<binary>',1690965783) ON DUPLICATE KEY UPDATE `block_height`=VALUES(`block_height`),`block_hash`=VALUES(`block_hash`),`update_time`=VALUES(`update_time`);   ","rows":0}
Aug 04 23:49:19 10-7-46-85 gnfd-sp[311626]: {"t":"2023-08-04T23:49:18.184+0800","l":"error","caller":"parser/worker.go:86","msg":"error while process block","height":45591,"err":"Error 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'INSERT INTO `global_virtual_group_families` (`global_virtual_group_family_id`,`p' at line 1"}
Aug 04 23:49:19 10-7-46-85 gnfd-sp[311626]:
Aug 04 23:49:19 10-7-46-85 gnfd-sp[311626]: 2023/08/04 23:49:19 /home/runner/work/greenfield-storage-provider/greenfield-storage-provider/store/bsdb/block.go:21
Aug 04 23:49:19 10-7-46-85 gnfd-sp[311626]: [0.483ms] [rows:1] SELECT block_height FROM `epoch` LIMIT 1
Aug 04 23:49:19 10-7-46-85 gnfd-sp[311626]:
Aug 04 23:49:19 10-7-46-85 gnfd-sp[311626]: 2023/08/04 23:49:19 /home/runner/work/greenfield-storage-provider/greenfield-storage-provider/store/bsdb/block.go:21
Aug 04 23:49:19 10-7-46-85 gnfd-sp[311626]: [0.470ms] [rows:1] SELECT block_height FROM `epoch` LIMIT 1
Aug 04 23:49:19 10-7-46-85 gnfd-sp[311626]:
Aug 04 23:49:19 10-7-46-85 gnfd-sp[311626]: 2023/08/04 23:49:19 /home/runner/work/greenfield-storage-provider/greenfield-storage-provider/store/bsdb/block.go:21
Aug 04 23:49:19 10-7-46-85 gnfd-sp[311626]: [0.402ms] [rows:1] SELECT block_height FROM `epoch` LIMIT 1
Aug 04 23:49:19 10-7-46-85 gnfd-sp[311626]:
Aug 04 23:49:19 10-7-46-85 gnfd-sp[311626]: 2023/08/04 23:49:19 /home/runner/work/greenfield-storage-provider/greenfield-storage-provider/store/bsdb/block.go:21
Aug 04 23:49:19 10-7-46-85 gnfd-sp[311626]: [0.451ms] [rows:1] SELECT block_height FROM `epoch` LIMIT 1
Aug 04 23:49:19 10-7-46-85 gnfd-sp[311626]:
Aug 04 23:49:19 10-7-46-85 gnfd-sp[311626]: 2023/08/04 23:49:19 /home/runner/work/greenfield-storage-provider/greenfield-storage-provider/store/bsdb/block.go:21
Aug 04 23:49:19 10-7-46-85 gnfd-sp[311626]: [0.352ms] [rows:1] SELECT block_height FROM `epoch` LIMIT 1
Aug 04 23:49:19 10-7-46-85 gnfd-sp[311626]:
Aug 04 23:49:19 10-7-46-85 gnfd-sp[311626]: 2023/08/04 23:49:19 /home/runner/work/greenfield-storage-provider/greenfield-storage-provider/store/bsdb/block.go:21
Aug 04 23:49:19 10-7-46-85 gnfd-sp[311626]: [0.422ms] [rows:1] SELECT block_height FROM `epoch` LIMIT 1
Aug 04 23:49:19 10-7-46-85 gnfd-sp[311626]:
Aug 04 23:49:19 10-7-46-85 gnfd-sp[311626]: 2023/08/04 23:49:19 /home/runner/work/greenfield-storage-provider/greenfield-storage-provider/store/bsdb/block.go:21
Aug 04 23:49:19 10-7-46-85 gnfd-sp[311626]: [0.279ms] [rows:1] SELECT block_height FROM `epoch` LIMIT 1
Aug 04 23:49:19 10-7-46-85 gnfd-sp[311626]:
Aug 04 23:49:19 10-7-46-85 gnfd-sp[311626]: [... the same SELECT trace repeats with similar sub-millisecond timings ...]
Aug 04 23:49:19 10-7-46-85 gnfd-sp[311626]: {"t":"2023-08-04T23:49:18.973+0800","l":"error","caller":"metadata/metadata_sp_exit_service.go:164","msg":"failed to list migrate bucket events due to request block id exceed current block syncer block height"}
Aug 04 23:49:19 10-7-46-85 gnfd-sp[311626]: {"t":"2023-08-04T23:49:18.973+0800","l":"error","caller":"gfspclient/metadata.go:647","msg":"client failed to list migrate bucket events","error":"rpc error: code = Unknown desc = code_space:\"metadata\" http_status_code:400 inner_code:90005 description:\"request block height exceed latest height\" "}
Aug 04 23:49:19 10-7-46-85 gnfd-sp[311626]: {"t":"2023-08-04T23:49:18.973+0800","l":"error","caller":"manager/bucket_migrate_scheduler.go:416","msg":"failed to list migrate bucket events","block_id":45591,"error":"code_space:\"GfSpClient\" http_status_code:404 inner_code:98001 description:\"server slipped away, try again later\" "}
Aug 04 23:49:19 10-7-46-85 gnfd-sp[311626]: {"t":"2023-08-04T23:49:19.013+0800","l":"error","caller":"metadata/metadata_sp_exit_service.go:405","msg":"failed to list sp exit events due to request block id exceed current block syncer block height"}
Aug 04 23:49:19 10-7-46-85 gnfd-sp[311626]: {"t":"2023-08-04T23:49:19.013+0800","l":"error","caller":"gfspclient/metadata.go:685","msg":"client failed to list sp exit events","error":"rpc error: code = Unknown desc = code_space:\"metadata\" http_status_code:400 inner_code:90005 description:\"request block height exceed latest height\" "}
Aug 04 23:49:19 10-7-46-85 gnfd-sp[311626]: {"t":"2023-08-04T23:49:19.013+0800","l":"error","caller":"manager/sp_exit_scheduler.go:332","msg":"failed to subscribe sp exit event","error":"code_space:\"GfSpClient\" http_status_code:404 inner_code:98001 description:\"server slipped away, try again later\" "}
