
Privacy-Preserving Computing Platform: an open-source privacy computing platform built by a team of cryptography experts, supporting secure multi-party computation, federated learning, private set intersection, private information retrieval, and more.

Home Page: https://docs.primihub.com/

License: Apache License 2.0

Starlark 2.22% Dockerfile 0.13% Shell 0.47% Python 19.82% C++ 77.27% C 0.02% Makefile 0.07%
federated-learning private-information-retrieval private-set-intersection pir psi hacktoberfest mpc fl privacy-preserving multi-party-computation

primihub's Introduction


An open-source privacy-preserving computation platform built by a team of cryptography experts


Privacy-Preserving Computation

Data creates greater value when it is allowed to flow. As the digital economy keeps growing rapidly, the demand for data interconnection keeps rising, covering everything from confidential government data and core business data down to personal information. In the past two years, China has also enacted the Data Security Law and the Personal Information Protection Law. How to let data circulate securely has therefore become a problem that must be solved.

Privacy-preserving computation bridges data circulation and privacy regulations by making data "usable but not visible": a collection of techniques that enable analysis and computation on data without exposing the data itself. As an important frontier technology for data circulation, it is already widely applied in finance, healthcare, telecommunications, government affairs, and other industries.

PrimiHub

If you are interested in privacy-preserving computation and want to experience it first-hand, give PrimiHub a try! It is an open-source privacy computing platform built by a team of cryptography experts: secure and reliable, ready to use out of the box, independently developed, and feature-rich.

Features

  • Open source: fully open source and free
  • Easy to install: one-command deployment with Docker
  • Ready to use: offers a web UI, a command line, and a Python SDK
  • Feature-rich: supports private information retrieval, private set intersection, joint statistics, data resource management, and more
  • Flexible configuration: supports custom extensions to syntax, semantics, security protocols, and more
  • Independently developed: built on secure multi-party computation, federated learning, homomorphic encryption, trusted computing, and other privacy-preserving techniques
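To give a flavor of one of the techniques listed above, here is a toy additively homomorphic (Paillier-style) sketch in Python. This is purely illustrative, not PrimiHub's implementation; the function names are hypothetical and the default primes are far too small for real use.

```python
import math
import random

def keygen(p=1_000_003, q=1_000_033):
    """Generate a toy Paillier key pair (toy primes; real keys need >=1024 bits)."""
    n = p * q
    n2 = n * n
    g = n + 1                                           # standard choice of generator
    lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)   # lcm(p-1, q-1)
    mu = pow((pow(g, lam, n2) - 1) // n, -1, n)         # inverse of L(g^lam mod n^2)
    return (n, g), (lam, mu)

def encrypt(pk, m):
    """c = g^m * r^n mod n^2 for a random r coprime to n."""
    n, g = pk
    n2 = n * n
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return pow(g, m, n2) * pow(r, n, n2) % n2

def decrypt(pk, sk, c):
    """m = L(c^lam mod n^2) * mu mod n, where L(u) = (u-1)/n."""
    n, _ = pk
    lam, mu = sk
    n2 = n * n
    return (pow(c, lam, n2) - 1) // n * mu % n
```

The homomorphic property is that multiplying two ciphertexts modulo n^2 yields an encryption of the sum of the plaintexts, which is what lets parties aggregate values without decrypting them.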

Quick Start

We recommend deploying PrimiHub with Docker to start your privacy-computing journey.

# Step 1: clone the repository
git clone https://github.com/primihub/primihub.git
# Step 2: start the containers
cd primihub && docker-compose up -d
# Step 3: enter a container
docker exec -it primihub-node0 bash
# Step 4: run a private set intersection (PSI) task
./primihub-cli --task_config_file="example/psi_ecdh_task_conf.json"
I20230616 13:40:10.683375    28 cli.cc:524] all node has finished
I20230616 13:40:10.683745    28 cli.cc:598] SubmitTask time cost(ms): 1419
# View the result
cat data/result/psi_result.csv
"intersection_row"
X3
...
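The ECDH-PSI task above relies on commutative blinding: each party raises the other's hashed items to its own secret exponent, so only common items produce matching doubly blinded values. A minimal sketch in Python, assuming a plain prime field rather than an elliptic-curve group and hypothetical function names:

```python
import hashlib
import random

P = 2**127 - 1  # a Mersenne prime; real ECDH-PSI uses an elliptic-curve group

def h(item: str) -> int:
    """Hash an item into the multiplicative group mod P."""
    v = int.from_bytes(hashlib.sha256(item.encode()).digest(), "big") % P
    return v or 1  # avoid the degenerate zero element

def psi(client_items, server_items):
    """Return the intersection without either side seeing the other's raw set."""
    a = random.randrange(2, P - 1)  # client's secret exponent
    b = random.randrange(2, P - 1)  # server's secret exponent
    # Client sends H(x)^a; server blinds again, producing H(x)^(ab).
    doubly_client = {pow(pow(h(x), a, P), b, P): x for x in client_items}
    # Server sends H(y)^b; client blinds again: H(y)^(ba) = H(y)^(ab).
    doubly_server = {pow(pow(h(y), b, P), a, P) for y in server_items}
    return {item for blinded, item in doubly_client.items() if blinded in doubly_server}
```

Because exponentiation commutes, the doubly blinded values coincide exactly on the common items, which is what ends up in a result file like psi_result.csv.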

PSI

Private set intersection example | Try the CLI online

Beyond that, PrimiHub offers several other ways to use it, suited to different audiences.

Questions / Help / Bugs

If you run into any problems while using PrimiHub and need our help, click here to file an issue.

You are also welcome to add our WeChat assistant and join the "PrimiHub Open Source Community" WeChat group, where you can talk directly with core developers, cryptography experts, and industry leaders, and get prompt replies and first-hand privacy-computing news.


License

This code is released under the Apache 2.0 license; see the LICENSE file.

primihub's People

Contributors

barry-ljf, bye-legumes, cccrick, commonye1, fuxingbit, happytianli, helloprimihub, hobo0cn, keepmoving-zxy, linuxsuren, lx-1234, lzw9560, mi-sery, ni-kafruit, peachkk, phoenix20162016, ppppbamzy, qixiaoye, summerborn12, victory-bot-sys, writer-x, xenooooooooo, xuefeng-xu, xujiangyu, y981431999, yankaili2006, yongganhangxing, zhangll225, zjj614


primihub's Issues

Failed to open file: /tmp/FL/hetero_xgb/train/train_breast_cancer_host.csv

Put value, key string is : bafkreifx4adrh3g3623y4cxmsrsgqfgrlddwh7vnz7uoz4jrvjqbiv67iu

Put value success, value length:320

I20230410 11:50:59.483922 1 service.cc:237] << Put meta: {

"data_type": 0,

"data_url": "node1:172.28.1.11:50051:/tmp/FL/homo_lr_test.data",

"description": "homo_lr_test",

"driver_type": "CSV",

"id": "bafkreifgem32so6wuhtmef5joiczlrasybffuibuiph5xgtxcz7pn6adbu",

"schema": "{"1000025":[],"5":[],"1":[],"1":[],"1":[],"2":[],"1":[],"3":[],"1":[],"1":[]}",

"visibility": 1

}

Failed to open file: /tmp/FL/hetero_xgb/train/train_breast_cancer_host.csv

Put value, key string is : bafkreifgem32so6wuhtmef5joiczlrasybffuibuiph5xgtxcz7pn6adbu

Put value success, value length:318

The task of PSI failed in docker-compose

I used the demo code in the doc: https://docs.primihub.com/docs/advance-usage/create-tasks/psi-task

./primihub-cli --task_type=3 --params="clientData:STRING:0:psi_client_data,serverData:STRING:0:psi_server_data,clientIndex:INT32:0:0,serverIndex:INT32:0:1,psiType:INT32:0:0,outputFullFilename:STRING:0:/data/result/psi_result.csv" --input_datasets="clientData,serverData"

node0_primihub | Could not create logging file: No such file or directory
node0_primihub | COULD NOT CREATE A LOGGINGFILE 20220703-025230.1!Could not create logging file: No such file or directory
node0_primihub | COULD NOT CREATE A LOGGINGFILE 20220703-025230.1!E20220703 02:52:30.917066 26 service.cc:242] 🔍 ⏱️ Timeout while searching meta list.

The log shows that the demo task timed out.

Starting a node fails with: terminate called after throwing an instance of 'primihub::service::Error'

Following the node startup tutorial, the program fails with the following error:

$ ./bazel-bin/node --node_id=node0 --service_port=50050 --config=./config/node0.yaml
terminate called after throwing an instance of 'primihub::service::Error'
Aborted

All three nodes produce the same output.
The bootstrap node prints the following:

[*] Listening on: 0.0.0.0 with port: 4001
2022-08-31T11:16:03.914Z	DEBUG	basichost	basic/basic_host.go:313	failed to fetch local IPv6 address	{"error": "no route found for ::"}
2022-08-31T11:16:03.920Z	INFO	bootsrap	src/main.go:64	Host created. We are:QmP2C45o2vZfy1JXWFZDUEzrQCigMtd4r3nesvArV8dFKd
2022-08-31T11:16:03.920Z	INFO	bootsrap	src/main.go:65	[/ip4/172.17.0.2/tcp/4001 /ip4/127.0.0.1/tcp/4001]

[*] Your Bootstrap ID Is: /ip4/0.0.0.0/tcp/4001/ipfs/QmP2C45o2vZfy1JXWFZDUEzrQCigMtd4r3nesvArV8dFKd

2022-08-31T11:16:03.920Z	INFO	dht/RtRefreshManager	rtrefresh/rt_refresh_manager.go:279	starting refreshing cpl 0 with key CIQAAADGAMAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA (routing table size was 0)
2022-08-31T11:16:03.921Z	WARN	dht/RtRefreshManager	rtrefresh/rt_refresh_manager.go:136	failed when refreshing routing table2 errors occurred:
	* failed to query for self, err=failed to find any peer in table
	* failed to refresh cpl=0, err=failed to find any peer in table


2022-08-31T11:16:03.921Z	DEBUG	basichost	basic/basic_host.go:313	failed to fetch local IPv6 address	{"error": "no route found for ::"}
2022-08-31T11:16:03.923Z	INFO	dht/RtRefreshManager	rtrefresh/rt_refresh_manager.go:279	starting refreshing cpl 0 with key CIQAAAAML4AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA (routing table size was 0)
2022-08-31T11:16:03.923Z	WARN	dht/RtRefreshManager	rtrefresh/rt_refresh_manager.go:199	failed when refreshing routing table	{"error": "2 errors occurred:\n\t* failed to query for self, err=failed to find any peer in table\n\t* failed to refresh cpl=0, err=failed to find any peer in table\n\n"}
2022-08-31T11:16:03.924Z	DEBUG	basichost	basic/basic_host.go:313	failed to fetch local IPv6 address	{"error": "no route found for ::"}
2022-08-31T11:16:08.921Z	DEBUG	basichost	basic/basic_host.go:313	failed to fetch local IPv6 address	{"error": "no route found for ::"}
2022-08-31T11:16:13.921Z	DEBUG	basichost	basic/basic_host.go:313	failed to fetch local IPv6 address	{"error": "no route found for ::"}
2022-08-31T11:16:18.921Z	DEBUG	basichost	basic/basic_host.go:313	failed to fetch local IPv6 address	{"error": "no route found for ::"}
2022-08-31T11:16:23.921Z	DEBUG	basichost	basic/basic_host.go:313	failed to fetch local IPv6 address	{"error": "no route found for ::"}
2022-08-31T11:16:28.921Z	DEBUG	basichost	basic/basic_host.go:313	failed to fetch local IPv6 address	{"error": "no route found for ::"}
2022-08-31T11:16:33.921Z	DEBUG	basichost	basic/basic_host.go:313	failed to fetch local IPv6 address	{"error": "no route found for ::"}
2022-08-31T11:16:38.921Z	DEBUG	basichost	basic/basic_host.go:313	failed to fetch local IPv6 address	{"error": "no route found for ::"}
2022-08-31T11:16:43.921Z	DEBUG	basichost	basic/basic_host.go:313	failed to fetch local IPv6 address	{"error": "no route found for ::"}
2022-08-31T11:16:48.921Z	DEBUG	basichost	basic/basic_host.go:313	failed to fetch local IPv6 address	{"error": "no route found for ::"}
2022-08-31T11:16:53.921Z	DEBUG	basichost	basic/basic_host.go:313	failed to fetch local IPv6 address	{"error": "no route found for ::"}
2022-08-31T11:16:58.181Z	DEBUG	upgrader	[email protected]/listener.go:109	listener <stream.Listener[TCP] /ip4/0.0.0.0/tcp/4001> got connection: /ip4/172.17.0.2/tcp/4001 <---> /ip4/172.17.0.1/tcp/33768
2022-08-31T11:16:58.181Z	DEBUG	upgrader	[email protected]/listener.go:109	listener <stream.Listener[TCP] /ip4/0.0.0.0/tcp/4001> got connection: /ip4/172.17.0.2/tcp/4001 <---> /ip4/172.17.0.1/tcp/33766
2022-08-31T11:16:58.183Z	DEBUG	upgrader	[email protected]/listener.go:133	listener <stream.Listener[TCP] /ip4/0.0.0.0/tcp/4001> accepted connection: <stream.Conn[TCP] /ip4/172.17.0.2/tcp/4001 (QmP2C45o2vZfy1JXWFZDUEzrQCigMtd4r3nesvArV8dFKd) <-> /ip4/172.17.0.1/tcp/33766 (12D3KooWALSPVcLiy4h4umM8XZ39GsNQebqdbMF2ZusTzryPH78e)>
2022-08-31T11:16:58.183Z	DEBUG	swarm2	[email protected]/swarm_listen.go:103	swarm listener accepted connection: <stream.Conn[TCP] /ip4/172.17.0.2/tcp/4001 (QmP2C45o2vZfy1JXWFZDUEzrQCigMtd4r3nesvArV8dFKd) <-> /ip4/172.17.0.1/tcp/33766 (12D3KooWALSPVcLiy4h4umM8XZ39GsNQebqdbMF2ZusTzryPH78e)>
2022-08-31T11:16:58.183Z	DEBUG	upgrader	[email protected]/listener.go:125	accept upgrade error: failed to negotiate stream multiplexer: EOF (/ip4/172.17.0.2/tcp/4001 <--> /ip4/172.17.0.1/tcp/33768)
2022-08-31T11:16:58.185Z	DEBUG	basichost	basic/basic_host.go:414	protocol negotiation took 102.531µs
2022-08-31T11:16:58.185Z	DEBUG	basichost	basic/basic_host.go:414	protocol negotiation took 155.844µs
2022-08-31T11:16:58.185Z	DEBUG	net/identify	identify/id.go:407	/ipfs/id/1.0.0 sent message to 12D3KooWALSPVcLiy4h4umM8XZ39GsNQebqdbMF2ZusTzryPH78e /ip4/172.17.0.1/tcp/33766
2022-08-31T11:16:58.185Z	DEBUG	net/identify	identify/id.go:439	/ipfs/id/1.0.0 received message from 12D3KooWALSPVcLiy4h4umM8XZ39GsNQebqdbMF2ZusTzryPH78e /ip4/172.17.0.1/tcp/33766
2022-08-31T11:16:58.185Z	DEBUG	net/identify	identify/id.go:635	QmP2C45o2vZfy1JXWFZDUEzrQCigMtd4r3nesvArV8dFKd received listen addrs for 12D3KooWALSPVcLiy4h4umM8XZ39GsNQebqdbMF2ZusTzryPH78e: [/ip4/127.0.0.1/tcp/8886]
2022-08-31T11:16:58.185Z	DEBUG	dht	[email protected]/dht.go:639	peer found	{"peer": "12D3KooWALSPVcLiy4h4umM8XZ39GsNQebqdbMF2ZusTzryPH78e"}
2022-08-31T11:16:58.185Z	DEBUG	dht	[email protected]/dht.go:639	peer found	{"peer": "12D3KooWALSPVcLiy4h4umM8XZ39GsNQebqdbMF2ZusTzryPH78e"}
2022-08-31T11:16:58.185Z	DEBUG	dht	[email protected]/dht_net.go:116	handling message	{"from": "12D3KooWALSPVcLiy4h4umM8XZ39GsNQebqdbMF2ZusTzryPH78e", "type": 4, "key": "EiAymLH8QKpDId0a6U46J7FWYnA8it1mnbY4MFCZiAnD1A=="}
2022-08-31T11:16:58.185Z	DEBUG	dht	[email protected]/dht.go:639	peer found	{"peer": "12D3KooWALSPVcLiy4h4umM8XZ39GsNQebqdbMF2ZusTzryPH78e"}
2022-08-31T11:16:58.185Z	DEBUG	dht	[email protected]/dht_net.go:133	handled message	{"from": "12D3KooWALSPVcLiy4h4umM8XZ39GsNQebqdbMF2ZusTzryPH78e", "type": 4, "key": "EiAymLH8QKpDId0a6U46J7FWYnA8it1mnbY4MFCZiAnD1A==", "time": 0.000222812}
2022-08-31T11:16:58.185Z	DEBUG	dht	[email protected]/dht_net.go:159	responded to message	{"from": "12D3KooWALSPVcLiy4h4umM8XZ39GsNQebqdbMF2ZusTzryPH78e", "type": 4, "key": "EiAymLH8QKpDId0a6U46J7FWYnA8it1mnbY4MFCZiAnD1A==", "time": 0.000267972}
2022-08-31T11:16:58.186Z	DEBUG	swarm2	[email protected]/swarm.go:336	[QmP2C45o2vZfy1JXWFZDUEzrQCigMtd4r3nesvArV8dFKd] opening stream to peer [12D3KooWALSPVcLiy4h4umM8XZ39GsNQebqdbMF2ZusTzryPH78e]
2022-08-31T11:16:58.186Z	DEBUG	dht	[email protected]/dht.go:639	peer found	{"peer": "12D3KooWALSPVcLiy4h4umM8XZ39GsNQebqdbMF2ZusTzryPH78e"}
2022-08-31T11:16:58.186Z	DEBUG	dht	[email protected]/query.go:426	PEERS CLOSER -- worker for: 12D3KooWALSPVcLiy4h4umM8XZ39GsNQebqdbMF2ZusTzryPH78e found self
2022-08-31T11:16:58.186Z	DEBUG	dht	[email protected]/query.go:505	not connected. dialing.
2022-08-31T11:16:58.186Z	DEBUG	basichost	basic/basic_host.go:782	host QmP2C45o2vZfy1JXWFZDUEzrQCigMtd4r3nesvArV8dFKd dialing QmdSyhb8eR9dDSR5jjnRoTDBwpBCSAjT7WueKJ9cQArYoA
2022-08-31T11:16:58.186Z	DEBUG	swarm2	[email protected]/swarm_dial.go:241	dialing peer	{"from": "QmP2C45o2vZfy1JXWFZDUEzrQCigMtd4r3nesvArV8dFKd", "to": "QmdSyhb8eR9dDSR5jjnRoTDBwpBCSAjT7WueKJ9cQArYoA"}
2022-08-31T11:16:58.187Z	DEBUG	swarm2	[email protected]/swarm_dial.go:266	network for QmP2C45o2vZfy1JXWFZDUEzrQCigMtd4r3nesvArV8dFKd finished dialing QmdSyhb8eR9dDSR5jjnRoTDBwpBCSAjT7WueKJ9cQArYoA
2022-08-31T11:16:58.187Z	DEBUG	dht	[email protected]/query.go:513	error connecting: no good addresses
2022-08-31T11:16:58.187Z	DEBUG	dht	[email protected]/dht.go:656	peer stopped dht	{"peer": "QmdSyhb8eR9dDSR5jjnRoTDBwpBCSAjT7WueKJ9cQArYoA"}
2022-08-31T11:16:58.187Z	INFO	dht/RtRefreshManager	rtrefresh/rt_refresh_manager.go:279	starting refreshing cpl 0 with key CIQAAANQVMAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA (routing table size was 1)
2022-08-31T11:16:58.187Z	DEBUG	dht	net/message_manager.go:303	error reading message	{"error": "EOF", "retrying": true}
2022-08-31T11:16:58.187Z	DEBUG	swarm2	[email protected]/swarm.go:336	[QmP2C45o2vZfy1JXWFZDUEzrQCigMtd4r3nesvArV8dFKd] opening stream to peer [12D3KooWALSPVcLiy4h4umM8XZ39GsNQebqdbMF2ZusTzryPH78e]
2022-08-31T11:16:58.187Z	DEBUG	swarm2	[email protected]/limiter.go:201	[limiter] clearing all peer dials: QmdSyhb8eR9dDSR5jjnRoTDBwpBCSAjT7WueKJ9cQArYoA
2022-08-31T11:16:58.188Z	DEBUG	dht	[email protected]/dht.go:639	peer found	{"peer": "12D3KooWALSPVcLiy4h4umM8XZ39GsNQebqdbMF2ZusTzryPH78e"}
2022-08-31T11:16:58.188Z	DEBUG	dht	[email protected]/query.go:426	PEERS CLOSER -- worker for: 12D3KooWALSPVcLiy4h4umM8XZ39GsNQebqdbMF2ZusTzryPH78e found self
2022-08-31T11:16:58.188Z	DEBUG	dht	[email protected]/query.go:505	not connected. dialing.
2022-08-31T11:16:58.188Z	DEBUG	basichost	basic/basic_host.go:782	host QmP2C45o2vZfy1JXWFZDUEzrQCigMtd4r3nesvArV8dFKd dialing QmdSyhb8eR9dDSR5jjnRoTDBwpBCSAjT7WueKJ9cQArYoA
2022-08-31T11:16:58.188Z	DEBUG	swarm2	[email protected]/swarm_dial.go:241	dialing peer	{"from": "QmP2C45o2vZfy1JXWFZDUEzrQCigMtd4r3nesvArV8dFKd", "to": "QmdSyhb8eR9dDSR5jjnRoTDBwpBCSAjT7WueKJ9cQArYoA"}
2022-08-31T11:16:58.188Z	DEBUG	swarm2	[email protected]/swarm_dial.go:266	network for QmP2C45o2vZfy1JXWFZDUEzrQCigMtd4r3nesvArV8dFKd finished dialing QmdSyhb8eR9dDSR5jjnRoTDBwpBCSAjT7WueKJ9cQArYoA
2022-08-31T11:16:58.188Z	DEBUG	dht	[email protected]/query.go:513	error connecting: no good addresses
2022-08-31T11:16:58.188Z	DEBUG	dht	[email protected]/dht.go:656	peer stopped dht	{"peer": "QmdSyhb8eR9dDSR5jjnRoTDBwpBCSAjT7WueKJ9cQArYoA"}
2022-08-31T11:16:58.188Z	INFO	dht/RtRefreshManager	rtrefresh/rt_refresh_manager.go:286	finished refreshing cpl 0, routing table size is now 1
2022-08-31T11:16:58.188Z	DEBUG	dht	net/message_manager.go:303	error reading message	{"error": "EOF", "retrying": true}
2022-08-31T11:16:58.189Z	DEBUG	swarm2	[email protected]/swarm.go:336	[QmP2C45o2vZfy1JXWFZDUEzrQCigMtd4r3nesvArV8dFKd] opening stream to peer [12D3KooWALSPVcLiy4h4umM8XZ39GsNQebqdbMF2ZusTzryPH78e]
2022-08-31T11:16:58.189Z	DEBUG	swarm2	[email protected]/limiter.go:201	[limiter] clearing all peer dials: QmdSyhb8eR9dDSR5jjnRoTDBwpBCSAjT7WueKJ9cQArYoA
2022-08-31T11:16:58.189Z	DEBUG	dht	[email protected]/dht.go:639	peer found	{"peer": "12D3KooWALSPVcLiy4h4umM8XZ39GsNQebqdbMF2ZusTzryPH78e"}
2022-08-31T11:16:58.189Z	DEBUG	dht	[email protected]/query.go:426	PEERS CLOSER -- worker for: 12D3KooWALSPVcLiy4h4umM8XZ39GsNQebqdbMF2ZusTzryPH78e found self
2022-08-31T11:16:58.189Z	DEBUG	dht	[email protected]/query.go:505	not connected. dialing.
2022-08-31T11:16:58.189Z	DEBUG	basichost	basic/basic_host.go:782	host QmP2C45o2vZfy1JXWFZDUEzrQCigMtd4r3nesvArV8dFKd dialing QmdSyhb8eR9dDSR5jjnRoTDBwpBCSAjT7WueKJ9cQArYoA
2022-08-31T11:16:58.189Z	DEBUG	swarm2	[email protected]/swarm_dial.go:241	dialing peer	{"from": "QmP2C45o2vZfy1JXWFZDUEzrQCigMtd4r3nesvArV8dFKd", "to": "QmdSyhb8eR9dDSR5jjnRoTDBwpBCSAjT7WueKJ9cQArYoA"}
2022-08-31T11:16:58.189Z	DEBUG	swarm2	[email protected]/swarm_dial.go:266	network for QmP2C45o2vZfy1JXWFZDUEzrQCigMtd4r3nesvArV8dFKd finished dialing QmdSyhb8eR9dDSR5jjnRoTDBwpBCSAjT7WueKJ9cQArYoA
2022-08-31T11:16:58.189Z	DEBUG	dht	[email protected]/query.go:513	error connecting: no good addresses
2022-08-31T11:16:58.190Z	DEBUG	dht	[email protected]/dht.go:656	peer stopped dht	{"peer": "QmdSyhb8eR9dDSR5jjnRoTDBwpBCSAjT7WueKJ9cQArYoA"}
2022-08-31T11:16:58.190Z	DEBUG	dht/RtRefreshManager	rtrefresh/rt_refresh_manager.go:265	not running refresh for cpl 0 as time since last refresh not above interval
2022-08-31T11:16:58.190Z	DEBUG	swarm2	[email protected]/limiter.go:201	[limiter] clearing all peer dials: QmdSyhb8eR9dDSR5jjnRoTDBwpBCSAjT7WueKJ9cQArYoA

I tried the 3rd item in Docs - FAQ, but it did not help.

How can the above problem be resolved?

Does primihub support clearing all configurations of accounts?

Is your feature request related to a problem? Please describe.
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]

Describe the solution you'd like
A clear and concise description of what you want to happen.

Describe alternatives you've considered
A clear and concise description of any alternative solutions or features you've considered.

Additional context
Add any other context or screenshots about the feature request here.

Failed to run Python SDK due to No module named 'opt_paillier_c2py'

I followed the instructions from this document. But I still got the following error message:

vscode ➜ /workspaces/primihub/python (develop ✗) $ python primihub/tests/test_disxgb_en.py
Traceback (most recent call last):
  File "/workspaces/primihub/python/primihub/tests/test_disxgb_en.py", line 25, in <module>
    from primihub.examples.disxgb_en import xgb_host_logic, xgb_guest_logic
  File "/workspaces/primihub/python/primihub/examples/disxgb_en.py", line 20, in <module>
    from primihub.primitive.opt_paillier_c2py_warpper import *
  File "/workspaces/primihub/python/primihub/primitive/opt_paillier_c2py_warpper.py", line 1, in <module>
    import opt_paillier_c2py
ModuleNotFoundError: No module named 'opt_paillier_c2py'

How to use version 1.6.2 to do Homo LR Demo

I used version 1.6.2 to run the "Homo LR Demo". The demo code is as follows:

import primihub as ph
from primihub.FL.model.logistic_regression.homo_lr import run_host_party, run_guest_party,run_arbiter_party
from primihub.client import primihub_cli as cli
cli.init(config={"node": "192.168.146.136:50050", "cert": ""})
cli.async_remote_execute((run_host_party, ), (run_guest_party, ))

But the error log from Host (50051) is as follows:

[image]

Request to have a development guide

Please consider adding the following basic content:

  • How to set up a development environment?
  • How to compile this project?
  • Which branch should we use when creating a PR?
  • Is there a code style guide?
  • others

Unable to run node

There is no error in the process of compiling the code, but an error is reported when running the node, as shown below:
image

How to build and install on Linux arm64

Describe the bug
How do I compile in a Linux arm64 environment? Is it supported at the moment?

Expected behavior
A clear and concise description of what you expected to happen.

Log output
If applicable, post the log output to help explain your problem.

Environment (please complete the following information):

  • OS: [Linux]
  • Arch: [arm64]
  • Version of Primihub:

Additional context
Add any other context about the problem here.

table[data_pr] is missing, and table[data_project]'s columns `organ_id`, `organ_num`, `resource_organ_ids`, `auth_resource_num`, `user_id` do not exist

As the title says, Init.sql and ddl.sql do not match in the following sections:


-- Records of data_pr


INSERT INTO data_pr (id, project_id, resource_id, is_authed, is_del, create_date, update_date) VALUES (1, 1, 8, 1, 0, '2022-04-27 18:39:08.000', '2022-04-27 18:39:08.000');
INSERT INTO data_pr (id, project_id, resource_id, is_authed, is_del, create_date, update_date) VALUES (2, 1, 9, 1, 0, '2022-04-27 18:39:08.000', '2022-04-27 18:39:08.000');


-- Records of data_project


INSERT INTO data_project (project_id, project_name, project_desc, organ_id, organ_num, resource_num, resource_organ_ids, auth_resource_num, user_id, is_del, create_date, update_date) VALUES (1, '短视频', '短视频', 1000, 0, 2, '', 2, 1005, 0, '2022-04-27 18:39:08.000', '2022-04-27 18:39:08.000');

python3-dev is installed, but an error occurs

I have installed python3-dev, but when I compile with bazel, some errors tell me to install python2-dev. I don't know why; is python3 not enough?

bazel-out/k8-fastbuild/bin/external/local_config_python/_python3/_python3_include/object.h:430: error: undefined reference to '_Py_Dealloc'
external/pybind11/include/pybind11/detail/../detail/common.h:983: error: undefined reference to 'PyExc_TypeError'
external/pybind11/include/pybind11/detail/../detail/common.h:983: error: undefined reference to 'PyErr_SetString'
external/pybind11/include/pybind11/detail/../detail/common.h:987: error: undefined reference to 'PyExc_RuntimeError'
external/pybind11/include/pybind11/detail/../detail/common.h:987: error: undefined reference to 'PyErr_SetString'
external/pybind11/include/pybind11/detail/../detail/common.h:1046: error: undefined reference to 'PyErr_Fetch'
external/pybind11/include/pybind11/detail/../detail/common.h:1047: error: undefined reference to 'PyErr_Restore'
external/pybind11/include/pybind11/detail/../detail/../pytypes.h:384: error: undefined reference to 'PyErr_Fetch'
external/pybind11/include/pybind11/detail/../detail/../pytypes.h:396: error: undefined reference to 'PyErr_Restore'
external/pybind11/include/pybind11/detail/../detail/../pytypes.h:448: error: undefined reference to 'PyErr_Fetch'
external/pybind11/include/pybind11/detail/../detail/../pytypes.h:449: error: undefined reference to 'PyErr_NormalizeException'
external/pybind11/include/pybind11/detail/../detail/../pytypes.h:451: error: undefined reference to 'PyException_SetTraceback'
external/pybind11/include/pybind11/detail/../detail/../pytypes.h:457: error: undefined reference to 'PyErr_SetString'
external/pybind11/include/pybind11/detail/../detail/../pytypes.h:459: error: undefined reference to 'PyErr_Fetch'
external/pybind11/include/pybind11/detail/../detail/../pytypes.h:460: error: undefined reference to 'PyErr_NormalizeException'
external/pybind11/include/pybind11/detail/../detail/../pytypes.h:462: error: undefined reference to 'PyException_SetCause'
external/pybind11/include/pybind11/detail/../detail/../pytypes.h:463: error: undefined reference to 'PyException_SetContext'
external/pybind11/include/pybind11/detail/../detail/../pytypes.h:464: error: undefined reference to 'PyErr_Restore'
external/pybind11/include/pybind11/detail/../detail/../pytypes.h:522: error: undefined reference to 'PyObject_HasAttrString'
external/pybind11/include/pybind11/detail/../detail/../pytypes.h:546: error: undefined reference to 'PyObject_GetAttrString'
external/pybind11/include/pybind11/detail/../detail/../pytypes.h:576: error: undefined reference to 'PyObject_SetAttrString'
external/pybind11/include/pybind11/detail/../detail/../pytypes.h:613: error: undefined reference to 'PyUnicode_FromString'
external/pybind11/include/pybind11/detail/../detail/../pytypes.h:618: error: undefined reference to 'PyDict_GetItemWithError'
external/pybind11/include/pybind11/detail/../detail/../pytypes.h:620: error: undefined reference to 'PyErr_Occurred'

An Error occured after command "docker-compose up"

The error message is shown below

$ docker-compose up
ERROR: The Compose file './docker-compose.yml' is invalid because:
Unsupported config option for services: 'node1'
Unsupported config option for networks: 'testing_net'

I tried re-downloading the whole primihub repository and executing the command again, but the error still occurs.

my docker version

$ docker version
Client:
 Version:           20.10.7
 API version:       1.41
 Go version:        go1.13.8
 Git commit:        20.10.7-0ubuntu5~18.04.3
 Built:             Mon Nov  1 01:04:14 2021
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server:
 Engine:
  Version:          20.10.7
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.13.8
  Git commit:       20.10.7-0ubuntu5~18.04.3
  Built:            Fri Oct 22 00:57:37 2021
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.5.5-0ubuntu3~18.04.2
  GitCommit:
 runc:
  Version:          1.0.1-0ubuntu2~18.04.1
  GitCommit:
 docker-init:
  Version:          0.19.0
  GitCommit:

my docker-compose version

$ sudo docker-compose version
docker-compose version 1.17.1, build unknown
docker-py version: 2.5.1
CPython version: 2.7.17
OpenSSL version: OpenSSL 1.1.1  11 Sep 2018

my linux's information

yabin@server-virtual-machine:~/primihub$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 18.04.6 LTS
Release:        18.04
Codename:       bionic

I am not familiar with Docker, so please forgive me if this is a naive question, and please give me some guidance. Thanks.

Running sh deploy.sh reports an error

Describe the bug
A clear and concise description of what the bug is.
? Network docker-deploy_primihub_net Error 0.0s
failed to create network docker-deploy_primihub_net: Error response from daemon: Failed to Setup IP tables: Unable to enable SKIP DNAT rule: (iptables failed: iptables --wait -t nat -I DOCKER -i br-226f4c00e324 -j RETURN: iptables: No chain/target/match by that name.
(exit status 1))


“docker-compose” start error

Two containers failed to start after using 'docker-compose' to start the service. Is there something wrong with my operation?

Name Command State Ports
node0_primihub /bin/bash -c ./primihub-no ... Up 0.0.0.0:10120->12120/tcp,:::10120->12120/tcp,0.0.0.0:10121->12121/tcp,:::10121->12121/tcp,0.0.0.0:8050->50050/tcp,:::8050->50050/tcp
node1_primihub /bin/bash -c ./primihub-no ... Exit 139
node2_primihub /bin/bash -c ./primihub-no ... Exit 139
simple_bootstrap_node /app/simple-bootstrap-node Up 0.0.0.0:4001->4001/tcp,:::4001->4001/tcp
$ docker-compose up

Log info as follows.

Attaching to simple_bootstrap_node, node0_primihub, node1_primihub, node2_primihub
node0_primihub           | 22.06.08 03:35:56.298739  Warning   Soralog  Group 'network' for logger 'DialerImpl' is not found. Fallback group will be used (it is group 'libp2p' right now).
node0_primihub           | 12D3KooWSNHsV84Lp9q8PtzdkXLk4dd4eA6vdw1Hwjkmj4xN5BHB * started
simple_bootstrap_node    | [*] Listening on: 0.0.0.0 with port: 4001
simple_bootstrap_node    | 2022-06-08T03:35:44.503Z DEBUG   basichost   basic/basic_host.go:313 failed to fetch local IPv6 address  {"error": "no route found for ::"}
simple_bootstrap_node    | 2022-06-08T03:35:44.513Z INFO    bootsrap    src/main.go:64  Host created. We are:QmdSyhb8eR9dDSR5jjnRoTDBwpBCSAjT7WueKJ9cQArYoA
simple_bootstrap_node    | 2022-06-08T03:35:44.513Z INFO    bootsrap    src/main.go:65  [/ip4/172.28.1.13/tcp/4001 /ip4/127.0.0.1/tcp/4001]
simple_bootstrap_node    |
simple_bootstrap_node    | [*] Your Bootstrap ID Is: /ip4/0.0.0.0/tcp/4001/ipfs/QmdSyhb8eR9dDSR5jjnRoTDBwpBCSAjT7WueKJ9cQArYoA
simple_bootstrap_node    |
simple_bootstrap_node    | 2022-06-08T03:35:44.515Z DEBUG   basichost   basic/basic_host.go:313 failed to fetch local IPv6 address  {"error": "no route found for ::"}
simple_bootstrap_node    | 2022-06-08T03:35:44.519Z DEBUG   basichost   basic/basic_host.go:313 failed to fetch local IPv6 address  {"error": "no route found for ::"}
simple_bootstrap_node    | 2022-06-08T03:35:44.520Z INFO    dht/RtRefreshManager    rtrefresh/rt_refresh_manager.go:279 starting refreshing cpl 0 with key CIQAAADLAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA (routing table size was 0)
simple_bootstrap_node    | 2022-06-08T03:35:44.520Z WARN    dht/RtRefreshManager    rtrefresh/rt_refresh_manager.go:136 failed when refreshing routing table2 errors occurred:
simple_bootstrap_node    |  * failed to query for self, err=failed to find any peer in table
simple_bootstrap_node    |  * failed to refresh cpl=0, err=failed to find any peer in table
simple_bootstrap_node    |
simple_bootstrap_node    |
simple_bootstrap_node    | 2022-06-08T03:35:49.516Z DEBUG   basichost   basic/basic_host.go:313 failed to fetch local IPv6 address  {"error": "no route found for ::"}
simple_bootstrap_node    | 2022-06-08T03:35:54.515Z DEBUG   basichost   basic/basic_host.go:313 failed to fetch local IPv6 address  {"error": "no route found for ::"}
simple_bootstrap_node    | 2022-06-08T03:35:56.301Z DEBUG   upgrader    [email protected]/listener.go:109 listener <stream.Listener[TCP] /ip4/0.0.0.0/tcp/4001> got connection: /ip4/172.28.1.13/tcp/4001 <---> /ip4/172.28.1.10/tcp/44086
simple_bootstrap_node    | 2022-06-08T03:35:56.302Z DEBUG   upgrader    [email protected]/listener.go:109 listener <stream.Listener[TCP] /ip4/0.0.0.0/tcp/4001> got connection: /ip4/172.28.1.13/tcp/4001 <---> /ip4/172.28.1.10/tcp/44088
simple_bootstrap_node    | 2022-06-08T03:35:56.308Z DEBUG   upgrader    [email protected]/listener.go:133 listener <stream.Listener[TCP] /ip4/0.0.0.0/tcp/4001> accepted connection: <stream.Conn[TCP] /ip4/172.28.1.13/tcp/4001 (QmdSyhb8eR9dDSR5jjnRoTDBwpBCSAjT7WueKJ9cQArYoA) <-> /ip4/172.28.1.10/tcp/44086 (12D3KooWSNHsV84Lp9q8PtzdkXLk4dd4eA6vdw1Hwjkmj4xN5BHB)>
simple_bootstrap_node    | 2022-06-08T03:35:56.308Z DEBUG   swarm2  [email protected]/swarm_listen.go:103swarm listener accepted connection: <stream.Conn[TCP] /ip4/172.28.1.13/tcp/4001 (QmdSyhb8eR9dDSR5jjnRoTDBwpBCSAjT7WueKJ9cQArYoA) <-> /ip4/172.28.1.10/tcp/44086 (12D3KooWSNHsV84Lp9q8PtzdkXLk4dd4eA6vdw1Hwjkmj4xN5BHB)>
simple_bootstrap_node    | 2022-06-08T03:35:56.310Z DEBUG   upgrader    [email protected]/listener.go:125 accept upgrade error: failed to negotiate stream multiplexer: read tcp4 172.28.1.13:4001->172.28.1.10:44088: read: connection reset by peer (/ip4/172.28.1.13/tcp/4001 <--> /ip4/172.28.1.10/tcp/44088)
simple_bootstrap_node    | 2022-06-08T03:35:56.353Z DEBUG   net/identify    identify/id.go:439  /ipfs/id/1.0.0 received message from 12D3KooWSNHsV84Lp9q8PtzdkXLk4dd4eA6vdw1Hwjkmj4xN5BHB /ip4/172.28.1.10/tcp/44086
simple_bootstrap_node    | 2022-06-08T03:35:56.353Z DEBUG   basichost   basic/basic_host.go:414 protocol negotiation took 165.211µs
simple_bootstrap_node    | 2022-06-08T03:35:56.353Z DEBUG   net/identify    identify/id.go:635  QmdSyhb8eR9dDSR5jjnRoTDBwpBCSAjT7WueKJ9cQArYoA received listen addrs for 12D3KooWSNHsV84Lp9q8PtzdkXLk4dd4eA6vdw1Hwjkmj4xN5BHB: [/ip4/172.28.1.10/tcp/8880]
simple_bootstrap_node    | 2022-06-08T03:35:56.354Z DEBUG   net/identify    identify/id.go:407  /ipfs/id/1.0.0 sent message to 12D3KooWSNHsV84Lp9q8PtzdkXLk4dd4eA6vdw1Hwjkmj4xN5BHB /ip4/172.28.1.10/tcp/44086
simple_bootstrap_node    | 2022-06-08T03:35:56.354Z DEBUG   dht [email protected]/dht.go:639    peer found  {"peer": "12D3KooWSNHsV84Lp9q8PtzdkXLk4dd4eA6vdw1Hwjkmj4xN5BHB"}
simple_bootstrap_node    | 2022-06-08T03:35:56.354Z DEBUG   dht [email protected]/dht.go:639    peer found  {"peer": "12D3KooWSNHsV84Lp9q8PtzdkXLk4dd4eA6vdw1Hwjkmj4xN5BHB"}
simple_bootstrap_node    | 2022-06-08T03:35:56.354Z DEBUG   net/identify    identify/obsaddr.go:397 added own observed listen addr  {"observed": "/ip4/172.28.1.13/tcp/4001"}
simple_bootstrap_node    | 2022-06-08T03:35:56.353Z DEBUG   basichost   basic/basic_host.go:414 protocol negotiation took 148.767µs
simple_bootstrap_node    | 2022-06-08T03:35:56.354Z DEBUG   swarm2  [email protected]/swarm.go:336    [QmdSyhb8eR9dDSR5jjnRoTDBwpBCSAjT7WueKJ9cQArYoA] opening stream to peer [12D3KooWSNHsV84Lp9q8PtzdkXLk4dd4eA6vdw1Hwjkmj4xN5BHB]
simple_bootstrap_node    | 2022-06-08T03:35:56.356Z DEBUG   dht [email protected]/dht.go:639    peer found  {"peer": "12D3KooWSNHsV84Lp9q8PtzdkXLk4dd4eA6vdw1Hwjkmj4xN5BHB"}
simple_bootstrap_node    | 2022-06-08T03:35:56.356Z DEBUG   dht [email protected]/query.go:426  PEERS CLOSER -- worker for: 12D3KooWSNHsV84Lp9q8PtzdkXLk4dd4eA6vdw1Hwjkmj4xN5BHB found self
simple_bootstrap_node    | 2022-06-08T03:35:56.357Z DEBUG   dht [email protected]/query.go:505  not connected. dialing.
simple_bootstrap_node    | 2022-06-08T03:35:56.358Z DEBUG   basichost   basic/basic_host.go:782 host QmdSyhb8eR9dDSR5jjnRoTDBwpBCSAjT7WueKJ9cQArYoA dialing QmP2C45o2vZfy1JXWFZDUEzrQCigMtd4r3nesvArV8dFKd
simple_bootstrap_node    | 2022-06-08T03:35:56.358Z DEBUG   swarm2  [email protected]/swarm_dial.go:241   dialing peer    {"from": "QmdSyhb8eR9dDSR5jjnRoTDBwpBCSAjT7WueKJ9cQArYoA", "to": "QmP2C45o2vZfy1JXWFZDUEzrQCigMtd4r3nesvArV8dFKd"}
simple_bootstrap_node    | 2022-06-08T03:35:56.359Z DEBUG   swarm2  [email protected]/swarm_dial.go:266   network for QmdSyhb8eR9dDSR5jjnRoTDBwpBCSAjT7WueKJ9cQArYoA finished dialing QmP2C45o2vZfy1JXWFZDUEzrQCigMtd4r3nesvArV8dFKd
simple_bootstrap_node    | 2022-06-08T03:35:56.359Z DEBUG   dht [email protected]/query.go:513  error connecting: no good addresses
simple_bootstrap_node    | 2022-06-08T03:35:56.360Z DEBUG   dht [email protected]/dht.go:656    peer stopped dht    {"peer": "QmP2C45o2vZfy1JXWFZDUEzrQCigMtd4r3nesvArV8dFKd"}
simple_bootstrap_node    | 2022-06-08T03:35:56.360Z INFO    dht/RtRefreshManager    rtrefresh/rt_refresh_manager.go:279 starting refreshing cpl 0 with key CIQAAAHN4AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA (routing table size was 1)
simple_bootstrap_node    | 2022-06-08T03:35:56.361Z DEBUG   dht net/message_manager.go:303  error reading message   {"error": "EOF", "retrying": true}
simple_bootstrap_node    | 2022-06-08T03:35:56.362Z DEBUG   swarm2  [email protected]/swarm.go:336    [QmdSyhb8eR9dDSR5jjnRoTDBwpBCSAjT7WueKJ9cQArYoA] opening stream to peer [12D3KooWSNHsV84Lp9q8PtzdkXLk4dd4eA6vdw1Hwjkmj4xN5BHB]
simple_bootstrap_node    | 2022-06-08T03:35:56.362Z DEBUG   dht [email protected]/dht.go:639    peer found  {"peer": "12D3KooWSNHsV84Lp9q8PtzdkXLk4dd4eA6vdw1Hwjkmj4xN5BHB"}
simple_bootstrap_node    | 2022-06-08T03:35:56.362Z DEBUG   dht [email protected]/dht_net.go:116    handling message    {"from": "12D3KooWSNHsV84Lp9q8PtzdkXLk4dd4eA6vdw1Hwjkmj4xN5BHB", "type": 4, "key": "EiAZs2CCfEj5wYqyRadsDXQbzN6gxva2+VQN6aV68G8S1w=="}
simple_bootstrap_node    | 2022-06-08T03:35:56.362Z DEBUG   swarm2  [email protected]/limiter.go:201  [limiter] clearing all peer dials: QmP2C45o2vZfy1JXWFZDUEzrQCigMtd4r3nesvArV8dFKd
simple_bootstrap_node    | 2022-06-08T03:35:56.363Z DEBUG   dht [email protected]/dht.go:639    peer found  {"peer": "12D3KooWSNHsV84Lp9q8PtzdkXLk4dd4eA6vdw1Hwjkmj4xN5BHB"}
simple_bootstrap_node    | 2022-06-08T03:35:56.362Z DEBUG   dht [email protected]/dht_net.go:133    handled message {"from": "12D3KooWSNHsV84Lp9q8PtzdkXLk4dd4eA6vdw1Hwjkmj4xN5BHB", "type": 4, "key": "EiAZs2CCfEj5wYqyRadsDXQbzN6gxva2+VQN6aV68G8S1w==", "time": 0.001424729}
simple_bootstrap_node    | 2022-06-08T03:35:56.364Z DEBUG   dht [email protected]/dht_net.go:159    responded to message    {"from": "12D3KooWSNHsV84Lp9q8PtzdkXLk4dd4eA6vdw1Hwjkmj4xN5BHB", "type": 4, "key": "EiAZs2CCfEj5wYqyRadsDXQbzN6gxva2+VQN6aV68G8S1w==", "time": 0.001655538}
simple_bootstrap_node    | 2022-06-08T03:35:56.364Z DEBUG   dht [email protected]/query.go:426  PEERS CLOSER -- worker for: 12D3KooWSNHsV84Lp9q8PtzdkXLk4dd4eA6vdw1Hwjkmj4xN5BHB found self
simple_bootstrap_node    | 2022-06-08T03:35:56.364Z DEBUG   dht [email protected]/query.go:505  not connected. dialing.
simple_bootstrap_node    | 2022-06-08T03:35:56.364Z DEBUG   basichost   basic/basic_host.go:782 host QmdSyhb8eR9dDSR5jjnRoTDBwpBCSAjT7WueKJ9cQArYoA dialing QmP2C45o2vZfy1JXWFZDUEzrQCigMtd4r3nesvArV8dFKd
simple_bootstrap_node    | 2022-06-08T03:35:56.366Z DEBUG   swarm2  [email protected]/swarm_dial.go:241   dialing peer    {"from": "QmdSyhb8eR9dDSR5jjnRoTDBwpBCSAjT7WueKJ9cQArYoA", "to": "QmP2C45o2vZfy1JXWFZDUEzrQCigMtd4r3nesvArV8dFKd"}
simple_bootstrap_node    | 2022-06-08T03:35:56.366Z DEBUG   swarm2  [email protected]/swarm_dial.go:266   network for QmdSyhb8eR9dDSR5jjnRoTDBwpBCSAjT7WueKJ9cQArYoA finished dialing QmP2C45o2vZfy1JXWFZDUEzrQCigMtd4r3nesvArV8dFKd
simple_bootstrap_node    | 2022-06-08T03:35:56.367Z DEBUG   dht [email protected]/query.go:513  error connecting: no good addresses
simple_bootstrap_node    | 2022-06-08T03:35:56.367Z DEBUG   dht [email protected]/dht.go:656    peer stopped dht    {"peer": "QmP2C45o2vZfy1JXWFZDUEzrQCigMtd4r3nesvArV8dFKd"}
simple_bootstrap_node    | 2022-06-08T03:35:56.368Z INFO    dht/RtRefreshManager    rtrefresh/rt_refresh_manager.go:286 finished refreshing cpl 0, routing table size is now 1
simple_bootstrap_node    | 2022-06-08T03:35:56.368Z INFO    dht/RtRefreshManager    rtrefresh/rt_refresh_manager.go:279 starting refreshing cpl 1 with key CIQAAACNUUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA (routing table size was 1)
simple_bootstrap_node    | 2022-06-08T03:35:56.369Z DEBUG   dht net/message_manager.go:303  error reading message   {"error": "EOF", "retrying": true}
simple_bootstrap_node    | 2022-06-08T03:35:56.369Z DEBUG   swarm2  [email protected]/swarm.go:336    [QmdSyhb8eR9dDSR5jjnRoTDBwpBCSAjT7WueKJ9cQArYoA] opening stream to peer [12D3KooWSNHsV84Lp9q8PtzdkXLk4dd4eA6vdw1Hwjkmj4xN5BHB]
simple_bootstrap_node    | 2022-06-08T03:35:56.368Z DEBUG   swarm2  [email protected]/limiter.go:201  [limiter] clearing all peer dials: QmP2C45o2vZfy1JXWFZDUEzrQCigMtd4r3nesvArV8dFKd
simple_bootstrap_node    | 2022-06-08T03:35:56.412Z DEBUG   dht [email protected]/dht.go:639    peer found  {"peer": "12D3KooWSNHsV84Lp9q8PtzdkXLk4dd4eA6vdw1Hwjkmj4xN5BHB"}
simple_bootstrap_node    | 2022-06-08T03:35:56.412Z DEBUG   dht [email protected]/query.go:426  PEERS CLOSER -- worker for: 12D3KooWSNHsV84Lp9q8PtzdkXLk4dd4eA6vdw1Hwjkmj4xN5BHB found self
simple_bootstrap_node    | 2022-06-08T03:35:56.413Z DEBUG   dht [email protected]/query.go:505  not connected. dialing.
simple_bootstrap_node    | 2022-06-08T03:35:56.413Z DEBUG   basichost   basic/basic_host.go:782 host QmdSyhb8eR9dDSR5jjnRoTDBwpBCSAjT7WueKJ9cQArYoA dialing QmP2C45o2vZfy1JXWFZDUEzrQCigMtd4r3nesvArV8dFKd
simple_bootstrap_node    | 2022-06-08T03:35:56.413Z DEBUG   swarm2  [email protected]/swarm_dial.go:241   dialing peer    {"from": "QmdSyhb8eR9dDSR5jjnRoTDBwpBCSAjT7WueKJ9cQArYoA", "to": "QmP2C45o2vZfy1JXWFZDUEzrQCigMtd4r3nesvArV8dFKd"}
simple_bootstrap_node    | 2022-06-08T03:35:56.413Z DEBUG   swarm2  [email protected]/swarm_dial.go:266   network for QmdSyhb8eR9dDSR5jjnRoTDBwpBCSAjT7WueKJ9cQArYoA finished dialing QmP2C45o2vZfy1JXWFZDUEzrQCigMtd4r3nesvArV8dFKd
simple_bootstrap_node    | 2022-06-08T03:35:56.413Z DEBUG   dht [email protected]/query.go:513  error connecting: no good addresses
simple_bootstrap_node    | 2022-06-08T03:35:56.413Z DEBUG   dht [email protected]/dht.go:656    peer stopped dht    {"peer": "QmP2C45o2vZfy1JXWFZDUEzrQCigMtd4r3nesvArV8dFKd"}
simple_bootstrap_node    | 2022-06-08T03:35:56.413Z INFO    dht/RtRefreshManager    rtrefresh/rt_refresh_manager.go:286 finished refreshing cpl 1, routing table size is now 1
simple_bootstrap_node    | 2022-06-08T03:35:56.413Z INFO    dht/RtRefreshManager    rtrefresh/rt_refresh_manager.go:279 starting refreshing cpl 2 with key CIQAAAFIOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA (routing table size was 1)
simple_bootstrap_node    | 2022-06-08T03:35:56.413Z DEBUG   dht net/message_manager.go:303  error reading message   {"error": "EOF", "retrying": true}
simple_bootstrap_node    | 2022-06-08T03:35:56.414Z DEBUG   swarm2  [email protected]/swarm.go:336    [QmdSyhb8eR9dDSR5jjnRoTDBwpBCSAjT7WueKJ9cQArYoA] opening stream to peer [12D3KooWSNHsV84Lp9q8PtzdkXLk4dd4eA6vdw1Hwjkmj4xN5BHB]
simple_bootstrap_node    | 2022-06-08T03:35:56.414Z DEBUG   swarm2  [email protected]/limiter.go:201  [limiter] clearing all peer dials: QmP2C45o2vZfy1JXWFZDUEzrQCigMtd4r3nesvArV8dFKd
simple_bootstrap_node    | 2022-06-08T03:35:56.416Z DEBUG   dht [email protected]/dht.go:639    peer found  {"peer": "12D3KooWSNHsV84Lp9q8PtzdkXLk4dd4eA6vdw1Hwjkmj4xN5BHB"}
simple_bootstrap_node    | 2022-06-08T03:35:56.416Z DEBUG   dht [email protected]/query.go:426  PEERS CLOSER -- worker for: 12D3KooWSNHsV84Lp9q8PtzdkXLk4dd4eA6vdw1Hwjkmj4xN5BHB found self
simple_bootstrap_node    | 2022-06-08T03:35:56.416Z DEBUG   dht [email protected]/query.go:505  not connected. dialing.
simple_bootstrap_node    | 2022-06-08T03:35:56.416Z DEBUG   basichost   basic/basic_host.go:782 host QmdSyhb8eR9dDSR5jjnRoTDBwpBCSAjT7WueKJ9cQArYoA dialing QmP2C45o2vZfy1JXWFZDUEzrQCigMtd4r3nesvArV8dFKd
simple_bootstrap_node    | 2022-06-08T03:35:56.416Z DEBUG   swarm2  [email protected]/swarm_dial.go:241   dialing peer    {"from": "QmdSyhb8eR9dDSR5jjnRoTDBwpBCSAjT7WueKJ9cQArYoA", "to": "QmP2C45o2vZfy1JXWFZDUEzrQCigMtd4r3nesvArV8dFKd"}
simple_bootstrap_node    | 2022-06-08T03:35:56.417Z DEBUG   swarm2  [email protected]/swarm_dial.go:266   network for QmdSyhb8eR9dDSR5jjnRoTDBwpBCSAjT7WueKJ9cQArYoA finished dialing QmP2C45o2vZfy1JXWFZDUEzrQCigMtd4r3nesvArV8dFKd
simple_bootstrap_node    | 2022-06-08T03:35:56.417Z DEBUG   swarm2  [email protected]/limiter.go:201  [limiter] clearing all peer dials: QmP2C45o2vZfy1JXWFZDUEzrQCigMtd4r3nesvArV8dFKd
simple_bootstrap_node    | 2022-06-08T03:35:56.417Z DEBUG   dht [email protected]/query.go:513  error connecting: no good addresses
simple_bootstrap_node    | 2022-06-08T03:35:56.417Z DEBUG   dht [email protected]/dht.go:656    peer stopped dht    {"peer": "QmP2C45o2vZfy1JXWFZDUEzrQCigMtd4r3nesvArV8dFKd"}
simple_bootstrap_node    | 2022-06-08T03:35:56.417Z INFO    dht/RtRefreshManager    rtrefresh/rt_refresh_manager.go:286 finished refreshing cpl 2, routing table size is now 1
simple_bootstrap_node    | 2022-06-08T03:35:59.516Z DEBUG   basichost   basic/basic_host.go:313 failed to fetch local IPv6 address  {"error": "no route found for ::"}
node1_primihub           | 12D3KooWE44B6gcnvtjuVs845hPMupL8FFASQ62xZfH7GKfHHAPf * started
node1_primihub           | 22.06.08 03:35:59.794552  Warning   Soralog  Group 'network' for logger 'DialerImpl' is not found. Fallback group will be used (it is group 'libp2p' right now).
simple_bootstrap_node    | 2022-06-08T03:35:59.802Z DEBUG   upgrader    [email protected]/listener.go:109 listener <stream.Listener[TCP] /ip4/0.0.0.0/tcp/4001> got connection: /ip4/172.28.1.13/tcp/4001 <---> /ip4/172.28.1.11/tcp/46892
simple_bootstrap_node    | 2022-06-08T03:35:59.807Z DEBUG   upgrader    [email protected]/listener.go:125 accept upgrade error: failed to negotiate stream multiplexer: read tcp4 172.28.1.13:4001->172.28.1.11:46892: read: connection reset by peer (/ip4/172.28.1.13/tcp/4001 <--> /ip4/172.28.1.11/tcp/46892)
simple_bootstrap_node    | 2022-06-08T03:35:59.808Z DEBUG   upgrader    [email protected]/listener.go:109 listener <stream.Listener[TCP] /ip4/0.0.0.0/tcp/4001> got connection: /ip4/172.28.1.13/tcp/4001 <---> /ip4/172.28.1.11/tcp/46894
simple_bootstrap_node    | 2022-06-08T03:35:59.814Z DEBUG   upgrader    [email protected]/listener.go:133 listener <stream.Listener[TCP] /ip4/0.0.0.0/tcp/4001> accepted connection: <stream.Conn[TCP] /ip4/172.28.1.13/tcp/4001 (QmdSyhb8eR9dDSR5jjnRoTDBwpBCSAjT7WueKJ9cQArYoA) <-> /ip4/172.28.1.11/tcp/46894 (12D3KooWE44B6gcnvtjuVs845hPMupL8FFASQ62xZfH7GKfHHAPf)>
simple_bootstrap_node    | 2022-06-08T03:35:59.814Z DEBUG   swarm2  [email protected]/swarm_listen.go:103swarm listener accepted connection: <stream.Conn[TCP] /ip4/172.28.1.13/tcp/4001 (QmdSyhb8eR9dDSR5jjnRoTDBwpBCSAjT7WueKJ9cQArYoA) <-> /ip4/172.28.1.11/tcp/46894 (12D3KooWE44B6gcnvtjuVs845hPMupL8FFASQ62xZfH7GKfHHAPf)>
simple_bootstrap_node    | 2022-06-08T03:35:59.820Z DEBUG   basichost   basic/basic_host.go:414 protocol negotiation took 587.69µs
simple_bootstrap_node    | 2022-06-08T03:35:59.821Z DEBUG   net/identify    identify/id.go:407  /ipfs/id/1.0.0 sent message to 12D3KooWE44B6gcnvtjuVs845hPMupL8FFASQ62xZfH7GKfHHAPf /ip4/172.28.1.11/tcp/46894
simple_bootstrap_node    | 2022-06-08T03:35:59.823Z DEBUG   basichost   basic/basic_host.go:414 protocol negotiation took 265.176µs
simple_bootstrap_node    | 2022-06-08T03:35:59.826Z DEBUG   dht [email protected]/dht.go:639    peer found  {"peer": "12D3KooWE44B6gcnvtjuVs845hPMupL8FFASQ62xZfH7GKfHHAPf"}
simple_bootstrap_node    | 2022-06-08T03:35:59.826Z DEBUG   dht [email protected]/dht_net.go:116    handling message    {"from": "12D3KooWE44B6gcnvtjuVs845hPMupL8FFASQ62xZfH7GKfHHAPf", "type": 4, "key": "EiBKug9Eveess7uvcexKWa6n3lItjbzZWrKnZjgDbmmsow=="}
simple_bootstrap_node    | 2022-06-08T03:35:59.826Z DEBUG   dht [email protected]/dht_net.go:133    handled message {"from": "12D3KooWE44B6gcnvtjuVs845hPMupL8FFASQ62xZfH7GKfHHAPf", "type": 4, "key": "EiBKug9Eveess7uvcexKWa6n3lItjbzZWrKnZjgDbmmsow==", "time": 0.000543401}
simple_bootstrap_node    | 2022-06-08T03:35:59.827Z DEBUG   dht [email protected]/dht_net.go:159    responded to message    {"from": "12D3KooWE44B6gcnvtjuVs845hPMupL8FFASQ62xZfH7GKfHHAPf", "type": 4, "key": "EiBKug9Eveess7uvcexKWa6n3lItjbzZWrKnZjgDbmmsow==", "time": 0.000898703}
simple_bootstrap_node    | 2022-06-08T03:35:59.868Z DEBUG   net/identify    identify/id.go:439  /ipfs/id/1.0.0 received message from 12D3KooWE44B6gcnvtjuVs845hPMupL8FFASQ62xZfH7GKfHHAPf /ip4/172.28.1.11/tcp/46894
simple_bootstrap_node    | 2022-06-08T03:35:59.869Z DEBUG   net/identify    identify/id.go:635  QmdSyhb8eR9dDSR5jjnRoTDBwpBCSAjT7WueKJ9cQArYoA received listen addrs for 12D3KooWE44B6gcnvtjuVs845hPMupL8FFASQ62xZfH7GKfHHAPf: [/ip4/172.28.1.11/tcp/8881]
simple_bootstrap_node    | 2022-06-08T03:35:59.869Z DEBUG   dht [email protected]/dht.go:639    peer found  {"peer": "12D3KooWE44B6gcnvtjuVs845hPMupL8FFASQ62xZfH7GKfHHAPf"}
simple_bootstrap_node    | 2022-06-08T03:35:59.869Z DEBUG   dht [email protected]/dht.go:639    peer found  {"peer": "12D3KooWSNHsV84Lp9q8PtzdkXLk4dd4eA6vdw1Hwjkmj4xN5BHB"}
simple_bootstrap_node    | 2022-06-08T03:35:59.870Z DEBUG   net/identify    identify/obsaddr.go:397 added own observed listen addr  {"observed": "/ip4/172.28.1.13/tcp/4001"}
simple_bootstrap_node    | 2022-06-08T03:35:59.870Z DEBUG   dht [email protected]/dht.go:639    peer found  {"peer": "12D3KooWE44B6gcnvtjuVs845hPMupL8FFASQ62xZfH7GKfHHAPf"}
simple_bootstrap_node    | 2022-06-08T03:35:59.871Z DEBUG   swarm2  [email protected]/swarm.go:336    [QmdSyhb8eR9dDSR5jjnRoTDBwpBCSAjT7WueKJ9cQArYoA] opening stream to peer [12D3KooWE44B6gcnvtjuVs845hPMupL8FFASQ62xZfH7GKfHHAPf]
simple_bootstrap_node    | 2022-06-08T03:35:59.871Z DEBUG   dht net/message_manager.go:303  error reading message   {"error": "EOF", "retrying": true}
simple_bootstrap_node    | 2022-06-08T03:35:59.872Z DEBUG   swarm2  [email protected]/swarm.go:336    [QmdSyhb8eR9dDSR5jjnRoTDBwpBCSAjT7WueKJ9cQArYoA] opening stream to peer [12D3KooWSNHsV84Lp9q8PtzdkXLk4dd4eA6vdw1Hwjkmj4xN5BHB]
simple_bootstrap_node    | 2022-06-08T03:35:59.874Z DEBUG   dht [email protected]/dht.go:639    peer found  {"peer": "12D3KooWSNHsV84Lp9q8PtzdkXLk4dd4eA6vdw1Hwjkmj4xN5BHB"}
simple_bootstrap_node    | 2022-06-08T03:35:59.875Z DEBUG   dht [email protected]/query.go:426  PEERS CLOSER -- worker for: 12D3KooWSNHsV84Lp9q8PtzdkXLk4dd4eA6vdw1Hwjkmj4xN5BHB found self
simple_bootstrap_node    | 2022-06-08T03:35:59.875Z DEBUG   dht [email protected]/query.go:505  not connected. dialing.
simple_bootstrap_node    | 2022-06-08T03:35:59.875Z DEBUG   basichost   basic/basic_host.go:782 host QmdSyhb8eR9dDSR5jjnRoTDBwpBCSAjT7WueKJ9cQArYoA dialing QmP2C45o2vZfy1JXWFZDUEzrQCigMtd4r3nesvArV8dFKd
simple_bootstrap_node    | 2022-06-08T03:35:59.875Z DEBUG   swarm2  [email protected]/swarm_dial.go:241   dialing peer    {"from": "QmdSyhb8eR9dDSR5jjnRoTDBwpBCSAjT7WueKJ9cQArYoA", "to": "QmP2C45o2vZfy1JXWFZDUEzrQCigMtd4r3nesvArV8dFKd"}
simple_bootstrap_node    | 2022-06-08T03:35:59.877Z DEBUG   swarm2  [email protected]/swarm_dial.go:266   network for QmdSyhb8eR9dDSR5jjnRoTDBwpBCSAjT7WueKJ9cQArYoA finished dialing QmP2C45o2vZfy1JXWFZDUEzrQCigMtd4r3nesvArV8dFKd
simple_bootstrap_node    | 2022-06-08T03:35:59.877Z DEBUG   dht [email protected]/query.go:513  error connecting: no good addresses
simple_bootstrap_node    | 2022-06-08T03:35:59.877Z DEBUG   dht [email protected]/dht.go:656    peer stopped dht    {"peer": "QmP2C45o2vZfy1JXWFZDUEzrQCigMtd4r3nesvArV8dFKd"}
simple_bootstrap_node    | 2022-06-08T03:35:59.877Z DEBUG   swarm2  [email protected]/limiter.go:201  [limiter] clearing all peer dials: QmP2C45o2vZfy1JXWFZDUEzrQCigMtd4r3nesvArV8dFKd
simple_bootstrap_node    | 2022-06-08T03:35:59.879Z DEBUG   dht [email protected]/dht.go:639    peer found  {"peer": "12D3KooWE44B6gcnvtjuVs845hPMupL8FFASQ62xZfH7GKfHHAPf"}
simple_bootstrap_node    | 2022-06-08T03:35:59.879Z DEBUG   dht [email protected]/query.go:426  PEERS CLOSER -- worker for: 12D3KooWE44B6gcnvtjuVs845hPMupL8FFASQ62xZfH7GKfHHAPf found self
simple_bootstrap_node    | 2022-06-08T03:35:59.879Z DEBUG   dht/RtRefreshManager    rtrefresh/rt_refresh_manager.go:265 not running refresh for cpl 0 as time since last refresh not above interval
simple_bootstrap_node    | 2022-06-08T03:35:59.880Z DEBUG   dht/RtRefreshManager    rtrefresh/rt_refresh_manager.go:265 not running refresh for cpl 1 as time since last refresh not above interval
simple_bootstrap_node    | 2022-06-08T03:35:59.880Z DEBUG   dht/RtRefreshManager    rtrefresh/rt_refresh_manager.go:265 not running refresh for cpl 2 as time since last refresh not above interval
simple_bootstrap_node    | 2022-06-08T03:35:59.880Z INFO    dht/RtRefreshManager    rtrefresh/rt_refresh_manager.go:279 starting refreshing cpl 3 with key CIQAAAAJAMAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA (routing table size was 2)
simple_bootstrap_node    | 2022-06-08T03:35:59.881Z DEBUG   dht net/message_manager.go:303  error reading message   {"error": "EOF", "retrying": true}
simple_bootstrap_node    | 2022-06-08T03:35:59.882Z DEBUG   swarm2  [email protected]/swarm.go:336    [QmdSyhb8eR9dDSR5jjnRoTDBwpBCSAjT7WueKJ9cQArYoA] opening stream to peer [12D3KooWE44B6gcnvtjuVs845hPMupL8FFASQ62xZfH7GKfHHAPf]
simple_bootstrap_node    | 2022-06-08T03:35:59.883Z DEBUG   dht net/message_manager.go:303  error reading message   {"error": "EOF", "retrying": true}
simple_bootstrap_node    | 2022-06-08T03:35:59.883Z DEBUG   swarm2  [email protected]/swarm.go:336    [QmdSyhb8eR9dDSR5jjnRoTDBwpBCSAjT7WueKJ9cQArYoA] opening stream to peer [12D3KooWSNHsV84Lp9q8PtzdkXLk4dd4eA6vdw1Hwjkmj4xN5BHB]
simple_bootstrap_node    | 2022-06-08T03:35:59.885Z DEBUG   dht [email protected]/dht.go:639    peer found  {"peer": "12D3KooWE44B6gcnvtjuVs845hPMupL8FFASQ62xZfH7GKfHHAPf"}
simple_bootstrap_node    | 2022-06-08T03:35:59.886Z DEBUG   dht [email protected]/query.go:426  PEERS CLOSER -- worker for: 12D3KooWE44B6gcnvtjuVs845hPMupL8FFASQ62xZfH7GKfHHAPf found self
simple_bootstrap_node    | 2022-06-08T03:35:59.886Z DEBUG   dht [email protected]/query.go:505  not connected. dialing.
simple_bootstrap_node    | 2022-06-08T03:35:59.886Z DEBUG   basichost   basic/basic_host.go:782 host QmdSyhb8eR9dDSR5jjnRoTDBwpBCSAjT7WueKJ9cQArYoA dialing QmP2C45o2vZfy1JXWFZDUEzrQCigMtd4r3nesvArV8dFKd
simple_bootstrap_node    | 2022-06-08T03:35:59.887Z DEBUG   swarm2  [email protected]/swarm_dial.go:241   dialing peer    {"from": "QmdSyhb8eR9dDSR5jjnRoTDBwpBCSAjT7WueKJ9cQArYoA", "to": "QmP2C45o2vZfy1JXWFZDUEzrQCigMtd4r3nesvArV8dFKd"}
simple_bootstrap_node    | 2022-06-08T03:35:59.888Z DEBUG   swarm2  [email protected]/swarm_dial.go:266   network for QmdSyhb8eR9dDSR5jjnRoTDBwpBCSAjT7WueKJ9cQArYoA finished dialing QmP2C45o2vZfy1JXWFZDUEzrQCigMtd4r3nesvArV8dFKd
simple_bootstrap_node    | 2022-06-08T03:35:59.888Z DEBUG   dht [email protected]/query.go:513  error connecting: no good addresses
simple_bootstrap_node    | 2022-06-08T03:35:59.888Z DEBUG   dht [email protected]/dht.go:656    peer stopped dht    {"peer": "QmP2C45o2vZfy1JXWFZDUEzrQCigMtd4r3nesvArV8dFKd"}
simple_bootstrap_node    | 2022-06-08T03:35:59.888Z DEBUG   swarm2  [email protected]/limiter.go:201  [limiter] clearing all peer dials: QmP2C45o2vZfy1JXWFZDUEzrQCigMtd4r3nesvArV8dFKd
simple_bootstrap_node    | 2022-06-08T03:35:59.890Z DEBUG   dht [email protected]/dht.go:639    peer found  {"peer": "12D3KooWSNHsV84Lp9q8PtzdkXLk4dd4eA6vdw1Hwjkmj4xN5BHB"}
simple_bootstrap_node    | 2022-06-08T03:35:59.891Z DEBUG   dht [email protected]/query.go:426  PEERS CLOSER -- worker for: 12D3KooWSNHsV84Lp9q8PtzdkXLk4dd4eA6vdw1Hwjkmj4xN5BHB found self
simple_bootstrap_node    | 2022-06-08T03:35:59.892Z INFO    dht/RtRefreshManager    rtrefresh/rt_refresh_manager.go:286 finished refreshing cpl 3, routing table size is now 2
simple_bootstrap_node    | 2022-06-08T03:35:59.894Z INFO    dht/RtRefreshManager    rtrefresh/rt_refresh_manager.go:279 starting refreshing cpl 4 with key CIQAAACQXEAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA (routing table size was 2)
simple_bootstrap_node    | 2022-06-08T03:35:59.895Z DEBUG   dht net/message_manager.go:303  error reading message   {"error": "EOF", "retrying": true}
simple_bootstrap_node    | 2022-06-08T03:35:59.895Z DEBUG   swarm2  [email protected]/swarm.go:336    [QmdSyhb8eR9dDSR5jjnRoTDBwpBCSAjT7WueKJ9cQArYoA] opening stream to peer [12D3KooWE44B6gcnvtjuVs845hPMupL8FFASQ62xZfH7GKfHHAPf]
simple_bootstrap_node    | 2022-06-08T03:35:59.896Z DEBUG   swarm2  [email protected]/swarm.go:336    [QmdSyhb8eR9dDSR5jjnRoTDBwpBCSAjT7WueKJ9cQArYoA] opening stream to peer [12D3KooWSNHsV84Lp9q8PtzdkXLk4dd4eA6vdw1Hwjkmj4xN5BHB]
simple_bootstrap_node    | 2022-06-08T03:35:59.944Z DEBUG   dht [email protected]/dht.go:639    peer found  {"peer": "12D3KooWSNHsV84Lp9q8PtzdkXLk4dd4eA6vdw1Hwjkmj4xN5BHB"}
simple_bootstrap_node    | 2022-06-08T03:35:59.944Z DEBUG   dht [email protected]/query.go:426  PEERS CLOSER -- worker for: 12D3KooWSNHsV84Lp9q8PtzdkXLk4dd4eA6vdw1Hwjkmj4xN5BHB found self
simple_bootstrap_node    | 2022-06-08T03:35:59.946Z DEBUG   dht [email protected]/query.go:505  not connected. dialing.
simple_bootstrap_node    | 2022-06-08T03:35:59.946Z DEBUG   basichost   basic/basic_host.go:782 host QmdSyhb8eR9dDSR5jjnRoTDBwpBCSAjT7WueKJ9cQArYoA dialing QmP2C45o2vZfy1JXWFZDUEzrQCigMtd4r3nesvArV8dFKd
simple_bootstrap_node    | 2022-06-08T03:35:59.959Z DEBUG   swarm2  [email protected]/swarm_dial.go:241   dialing peer    {"from": "QmdSyhb8eR9dDSR5jjnRoTDBwpBCSAjT7WueKJ9cQArYoA", "to": "QmP2C45o2vZfy1JXWFZDUEzrQCigMtd4r3nesvArV8dFKd"}
simple_bootstrap_node    | 2022-06-08T03:35:59.960Z DEBUG   dht [email protected]/dht.go:639    peer found  {"peer": "12D3KooWE44B6gcnvtjuVs845hPMupL8FFASQ62xZfH7GKfHHAPf"}
simple_bootstrap_node    | 2022-06-08T03:35:59.961Z DEBUG   swarm2  [email protected]/swarm_dial.go:266   network for QmdSyhb8eR9dDSR5jjnRoTDBwpBCSAjT7WueKJ9cQArYoA finished dialing QmP2C45o2vZfy1JXWFZDUEzrQCigMtd4r3nesvArV8dFKd
simple_bootstrap_node    | 2022-06-08T03:35:59.961Z DEBUG   dht [email protected]/query.go:513  error connecting: no good addresses
simple_bootstrap_node    | 2022-06-08T03:35:59.961Z DEBUG   dht [email protected]/dht.go:656    peer stopped dht    {"peer": "QmP2C45o2vZfy1JXWFZDUEzrQCigMtd4r3nesvArV8dFKd"}
simple_bootstrap_node    | 2022-06-08T03:35:59.962Z DEBUG   dht [email protected]/query.go:426  PEERS CLOSER -- worker for: 12D3KooWE44B6gcnvtjuVs845hPMupL8FFASQ62xZfH7GKfHHAPf found self
simple_bootstrap_node    | 2022-06-08T03:35:59.962Z INFO    dht/RtRefreshManager    rtrefresh/rt_refresh_manager.go:286 finished refreshing cpl 4, routing table size is now 2
simple_bootstrap_node    | 2022-06-08T03:35:59.962Z DEBUG   swarm2  [email protected]/limiter.go:201  [limiter] clearing all peer dials: QmP2C45o2vZfy1JXWFZDUEzrQCigMtd4r3nesvArV8dFKd
node2_primihub           | 12D3KooWC7NFZFTVoikHVgLLspi8dPmzHTfMQ6hfeDpMxYpfc7Rc * started
node2_primihub           | 22.06.08 03:36:00.577739  Warning   Soralog  Group 'network' for logger 'DialerImpl' is not found. Fallback group will be used (it is group 'libp2p' right now).
simple_bootstrap_node    | 2022-06-08T03:36:00.582Z DEBUG   upgrader    [email protected]/listener.go:109 listener <stream.Listener[TCP] /ip4/0.0.0.0/tcp/4001> got connection: /ip4/172.28.1.13/tcp/4001 <---> /ip4/172.28.1.12/tcp/48692
simple_bootstrap_node    | 2022-06-08T03:36:00.583Z DEBUG   upgrader    [email protected]/listener.go:109 listener <stream.Listener[TCP] /ip4/0.0.0.0/tcp/4001> got connection: /ip4/172.28.1.13/tcp/4001 <---> /ip4/172.28.1.12/tcp/48694
simple_bootstrap_node    | 2022-06-08T03:36:00.586Z DEBUG   upgrader    [email protected]/listener.go:125 accept upgrade error: failed to negotiate stream multiplexer: read tcp4 172.28.1.13:4001->172.28.1.12:48694: read: connection reset by peer (/ip4/172.28.1.13/tcp/4001 <--> /ip4/172.28.1.12/tcp/48694)
simple_bootstrap_node    | 2022-06-08T03:36:00.588Z DEBUG   upgrader    [email protected]/listener.go:133 listener <stream.Listener[TCP] /ip4/0.0.0.0/tcp/4001> accepted connection: <stream.Conn[TCP] /ip4/172.28.1.13/tcp/4001 (QmdSyhb8eR9dDSR5jjnRoTDBwpBCSAjT7WueKJ9cQArYoA) <-> /ip4/172.28.1.12/tcp/48692 (12D3KooWC7NFZFTVoikHVgLLspi8dPmzHTfMQ6hfeDpMxYpfc7Rc)>
simple_bootstrap_node    | 2022-06-08T03:36:00.588Z DEBUG   swarm2  [email protected]/swarm_listen.go:103swarm listener accepted connection: <stream.Conn[TCP] /ip4/172.28.1.13/tcp/4001 (QmdSyhb8eR9dDSR5jjnRoTDBwpBCSAjT7WueKJ9cQArYoA) <-> /ip4/172.28.1.12/tcp/48692 (12D3KooWC7NFZFTVoikHVgLLspi8dPmzHTfMQ6hfeDpMxYpfc7Rc)>
simple_bootstrap_node    | 2022-06-08T03:36:00.592Z DEBUG   basichost   basic/basic_host.go:414 protocol negotiation took 320.622µs
simple_bootstrap_node    | 2022-06-08T03:36:00.593Z DEBUG   net/identify    identify/id.go:407  /ipfs/id/1.0.0 sent message to 12D3KooWC7NFZFTVoikHVgLLspi8dPmzHTfMQ6hfeDpMxYpfc7Rc /ip4/172.28.1.12/tcp/48692
simple_bootstrap_node    | 2022-06-08T03:36:00.593Z DEBUG   basichost   basic/basic_host.go:414 protocol negotiation took 63.254µs
simple_bootstrap_node    | 2022-06-08T03:36:00.636Z DEBUG   dht [email protected]/dht.go:639    peer found  {"peer": "12D3KooWC7NFZFTVoikHVgLLspi8dPmzHTfMQ6hfeDpMxYpfc7Rc"}
simple_bootstrap_node    | 2022-06-08T03:36:00.637Z DEBUG   dht [email protected]/dht_net.go:116    handling message    {"from": "12D3KooWC7NFZFTVoikHVgLLspi8dPmzHTfMQ6hfeDpMxYpfc7Rc", "type": 4, "key": "EiCY4wLIbOYwye5IGvJYTqeR4doGTFQXP+vmoalKXwXjFA=="}
simple_bootstrap_node    | 2022-06-08T03:36:00.638Z DEBUG   dht [email protected]/dht_net.go:133    handled message {"from": "12D3KooWC7NFZFTVoikHVgLLspi8dPmzHTfMQ6hfeDpMxYpfc7Rc", "type": 4, "key": "EiCY4wLIbOYwye5IGvJYTqeR4doGTFQXP+vmoalKXwXjFA==", "time": 0.001920409}
simple_bootstrap_node    | 2022-06-08T03:36:00.638Z DEBUG   dht [email protected]/dht_net.go:159    responded to message    {"from": "12D3KooWC7NFZFTVoikHVgLLspi8dPmzHTfMQ6hfeDpMxYpfc7Rc", "type": 4, "key": "EiCY4wLIbOYwye5IGvJYTqeR4doGTFQXP+vmoalKXwXjFA==", "time": 0.001959497}
simple_bootstrap_node    | 2022-06-08T03:36:00.639Z DEBUG   net/identify    identify/id.go:439  /ipfs/id/1.0.0 received message from 12D3KooWC7NFZFTVoikHVgLLspi8dPmzHTfMQ6hfeDpMxYpfc7Rc /ip4/172.28.1.12/tcp/48692
simple_bootstrap_node    | 2022-06-08T03:36:00.639Z DEBUG   net/identify    identify/id.go:635  QmdSyhb8eR9dDSR5jjnRoTDBwpBCSAjT7WueKJ9cQArYoA received listen addrs for 12D3KooWC7NFZFTVoikHVgLLspi8dPmzHTfMQ6hfeDpMxYpfc7Rc: [/ip4/172.28.1.12/tcp/8882]
simple_bootstrap_node    | 2022-06-08T03:36:00.639Z DEBUG   dht [email protected]/dht.go:639    peer found  {"peer": "12D3KooWC7NFZFTVoikHVgLLspi8dPmzHTfMQ6hfeDpMxYpfc7Rc"}
simple_bootstrap_node    | 2022-06-08T03:36:00.639Z DEBUG   dht [email protected]/dht.go:639    peer found  {"peer": "12D3KooWSNHsV84Lp9q8PtzdkXLk4dd4eA6vdw1Hwjkmj4xN5BHB"}
simple_bootstrap_node    | 2022-06-08T03:36:00.639Z DEBUG   net/identify    identify/obsaddr.go:397 added own observed listen addr  {"observed": "/ip4/172.28.1.13/tcp/4001"}
simple_bootstrap_node    | 2022-06-08T03:36:00.640Z DEBUG   dht [email protected]/dht.go:639    peer found  {"peer": "12D3KooWE44B6gcnvtjuVs845hPMupL8FFASQ62xZfH7GKfHHAPf"}
simple_bootstrap_node    | 2022-06-08T03:36:00.640Z DEBUG   dht [email protected]/dht.go:639    peer found  {"peer": "12D3KooWC7NFZFTVoikHVgLLspi8dPmzHTfMQ6hfeDpMxYpfc7Rc"}
simple_bootstrap_node    | 2022-06-08T03:36:00.641Z DEBUG   swarm2  [email protected]/swarm.go:336    [QmdSyhb8eR9dDSR5jjnRoTDBwpBCSAjT7WueKJ9cQArYoA] opening stream to peer [12D3KooWC7NFZFTVoikHVgLLspi8dPmzHTfMQ6hfeDpMxYpfc7Rc]
simple_bootstrap_node    | 2022-06-08T03:36:00.642Z DEBUG   swarm2  [email protected]/swarm.go:336    [QmdSyhb8eR9dDSR5jjnRoTDBwpBCSAjT7WueKJ9cQArYoA] opening stream to peer [12D3KooWSNHsV84Lp9q8PtzdkXLk4dd4eA6vdw1Hwjkmj4xN5BHB]
simple_bootstrap_node    | 2022-06-08T03:36:00.645Z DEBUG   dht net/message_manager.go:303  error reading message   {"error": "EOF", "retrying": true}
simple_bootstrap_node    | 2022-06-08T03:36:00.650Z DEBUG   swarm2  [email protected]/swarm.go:336    [QmdSyhb8eR9dDSR5jjnRoTDBwpBCSAjT7WueKJ9cQArYoA] opening stream to peer [12D3KooWE44B6gcnvtjuVs845hPMupL8FFASQ62xZfH7GKfHHAPf]
simple_bootstrap_node    | 2022-06-08T03:36:00.649Z DEBUG   dht [email protected]/dht.go:639    peer found  {"peer": "12D3KooWSNHsV84Lp9q8PtzdkXLk4dd4eA6vdw1Hwjkmj4xN5BHB"}
simple_bootstrap_node    | 2022-06-08T03:36:00.650Z DEBUG   dht [email protected]/query.go:426  PEERS CLOSER -- worker for: 12D3KooWSNHsV84Lp9q8PtzdkXLk4dd4eA6vdw1Hwjkmj4xN5BHB found self
simple_bootstrap_node    | 2022-06-08T03:36:00.653Z DEBUG   dht [email protected]/query.go:505  not connected. dialing.
simple_bootstrap_node    | 2022-06-08T03:36:00.654Z DEBUG   basichost   basic/basic_host.go:782 host QmdSyhb8eR9dDSR5jjnRoTDBwpBCSAjT7WueKJ9cQArYoA dialing QmP2C45o2vZfy1JXWFZDUEzrQCigMtd4r3nesvArV8dFKd
simple_bootstrap_node    | 2022-06-08T03:36:00.657Z DEBUG   swarm2  [email protected]/swarm_dial.go:241   dialing peer    {"from": "QmdSyhb8eR9dDSR5jjnRoTDBwpBCSAjT7WueKJ9cQArYoA", "to": "QmP2C45o2vZfy1JXWFZDUEzrQCigMtd4r3nesvArV8dFKd"}
simple_bootstrap_node    | 2022-06-08T03:36:00.654Z DEBUG   dht [email protected]/dht.go:639    peer found  {"peer": "12D3KooWE44B6gcnvtjuVs845hPMupL8FFASQ62xZfH7GKfHHAPf"}
simple_bootstrap_node    | 2022-06-08T03:36:00.658Z DEBUG   dht [email protected]/query.go:426  PEERS CLOSER -- worker for: 12D3KooWE44B6gcnvtjuVs845hPMupL8FFASQ62xZfH7GKfHHAPf found self
simple_bootstrap_node    | 2022-06-08T03:36:00.658Z DEBUG   swarm2  [email protected]/swarm_dial.go:266   network for QmdSyhb8eR9dDSR5jjnRoTDBwpBCSAjT7WueKJ9cQArYoA finished dialing QmP2C45o2vZfy1JXWFZDUEzrQCigMtd4r3nesvArV8dFKd
simple_bootstrap_node    | 2022-06-08T03:36:00.658Z DEBUG   dht [email protected]/query.go:513  error connecting: no good addresses
simple_bootstrap_node    | 2022-06-08T03:36:00.658Z DEBUG   dht [email protected]/dht.go:656    peer stopped dht    {"peer": "QmP2C45o2vZfy1JXWFZDUEzrQCigMtd4r3nesvArV8dFKd"}
simple_bootstrap_node    | 2022-06-08T03:36:00.658Z DEBUG   swarm2  [email protected]/limiter.go:201  [limiter] clearing all peer dials: QmP2C45o2vZfy1JXWFZDUEzrQCigMtd4r3nesvArV8dFKd
simple_bootstrap_node    | 2022-06-08T03:36:00.704Z DEBUG   dht [email protected]/dht.go:639    peer found  {"peer": "12D3KooWC7NFZFTVoikHVgLLspi8dPmzHTfMQ6hfeDpMxYpfc7Rc"}
simple_bootstrap_node    | 2022-06-08T03:36:00.704Z DEBUG   dht [email protected]/query.go:426  PEERS CLOSER -- worker for: 12D3KooWC7NFZFTVoikHVgLLspi8dPmzHTfMQ6hfeDpMxYpfc7Rc found self
simple_bootstrap_node    | 2022-06-08T03:36:00.704Z DEBUG   dht/RtRefreshManager    rtrefresh/rt_refresh_manager.go:265 not running refresh for cpl 0 as time since last refresh not above interval
simple_bootstrap_node    | 2022-06-08T03:36:00.704Z DEBUG   dht/RtRefreshManager    rtrefresh/rt_refresh_manager.go:265 not running refresh for cpl 1 as time since last refresh not above interval
simple_bootstrap_node    | 2022-06-08T03:36:00.704Z DEBUG   dht/RtRefreshManager    rtrefresh/rt_refresh_manager.go:265 not running refresh for cpl 2 as time since last refresh not above interval
simple_bootstrap_node    | 2022-06-08T03:36:00.704Z DEBUG   dht/RtRefreshManager    rtrefresh/rt_refresh_manager.go:265 not running refresh for cpl 3 as time since last refresh not above interval
simple_bootstrap_node    | 2022-06-08T03:36:00.704Z DEBUG   dht/RtRefreshManager    rtrefresh/rt_refresh_manager.go:265 not running refresh for cpl 4 as time since last refresh not above interval
simple_bootstrap_node    | 2022-06-08T03:36:04.517Z DEBUG   basichost   basic/basic_host.go:313 failed to fetch local IPv6 address  {"error": "no route found for ::"}
node0_primihub           | 22.06.08 03:35:56.309982  Error     Plaintext  received_pid=QmdSyhb8eR9dDSR5jjnRoTDBwpBCSAjT7WueKJ9cQArYoA, p.value()=QmP2C45o2vZfy1JXWFZDUEzrQCigMtd4r3nesvArV8dFKd
node0_primihub           | 22.06.08 03:35:56.310044  Error     Plaintext  error happened while establishing a Plaintext session: Received peer id doesn't match actual peer id
node0_primihub           | 22.06.08 03:35:56.311873  Info      IdentifyMsgProcessor  successfully written an identify message to peer QmdSyhb8eR9dDSR5jjnRoTDBwpBCSAjT7WueKJ9cQArYoA, /ip4/172.28.1.13/tcp/4001
node0_primihub           | 22.06.08 03:35:56.355272  Info      IdentifyMsgProcessor  received an identify message from peer QmdSyhb8eR9dDSR5jjnRoTDBwpBCSAjT7WueKJ9cQArYoA, /ip4/172.28.1.13/tcp/4001
node0_primihub           | 22.06.08 03:35:59.837392  Info      IdentifyMsgProcessor  successfully written an identify message to peer 12D3KooWE44B6gcnvtjuVs845hPMupL8FFASQ62xZfH7GKfHHAPf, /ip4/172.28.1.11/tcp/35668
node0_primihub           | 22.06.08 03:35:59.880427  Info      IdentifyMsgProcessor  received an identify message from peer 12D3KooWE44B6gcnvtjuVs845hPMupL8FFASQ62xZfH7GKfHHAPf, /ip4/172.28.1.11/tcp/35668
node0_primihub           | 22.06.08 03:36:00.669372  Info      IdentifyMsgProcessor  successfully written an identify message to peer 12D3KooWC7NFZFTVoikHVgLLspi8dPmzHTfMQ6hfeDpMxYpfc7Rc, /ip4/172.28.1.12/tcp/36522
node0_primihub           | 22.06.08 03:36:00.711606  Info      IdentifyMsgProcessor  received an identify message from peer 12D3KooWC7NFZFTVoikHVgLLspi8dPmzHTfMQ6hfeDpMxYpfc7Rc, /ip4/172.28.1.12/tcp/36522
node0_primihub           | Put value, key string is : bafkreidgl73i4jxg4anhieb55p4gy7rav462ilzzqughc4mutqwsjzr6wi
node0_primihub           | Could not create logging file: No such file or directory
node0_primihub           | COULD NOT CREATE A LOGGINGFILE 20220608-033606.1!I20220608 03:36:06.328670     1 service.cc:179] << Put meta: {
node0_primihub           |     "data_type": 0,
node0_primihub           |     "data_url": "node0:172.28.1.10:50050:/tmp/train_party_0.csv",
node0_primihub           |     "description": "train_party_0",
node0_primihub           |     "driver_type": "CSV",
node0_primihub           |     "id": "bafkreidgl73i4jxg4anhieb55p4gy7rav462ilzzqughc4mutqwsjzr6wi",
node0_primihub           |     "schema": "{\"x_0\":[],\"x_1\":[],\"x_2\":[],\"x_3\":[],\"x_4\":[],\"x_5\":[],\"x_6\":[],\"x_7\":[],\"x_8\":[],\"x_9\":[],\"x_10\":[],\"y\":[]}",
node0_primihub           |     "visibility": 1
node0_primihub           | }
node0_primihub           | Put value success, value length:351
node0_primihub           | I20220608 03:36:06.342710     1 service.cc:179] << Put meta: {
node0_primihub           |     "data_type": 0,
node0_primihub           |     "data_url": "node0:172.28.1.10:50050:/tmp/test_party_0.csv",
node0_primihub           |     "description": "test_party_0",
node0_primihub           |     "driver_type": "CSV",
node0_primihub           |     "id": "bafkreihsbjgozy7bokmekzyixvhtn7zb5ukn6j46ttfbdjrybpiuykqera",
node0_primihub           |     "schema": "{\"x_0\":[],\"x_1\":[],\"x_2\":[],\"x_3\":[],\"x_4\":[],\"x_5\":[],\"x_6\":[],\"x_7\":[],\"x_8\":[],\"x_9\":[],\"x_10\":[],\"y\":[]}",
node0_primihub           |     "visibility": 1
node0_primihub           | }
node0_primihub           | Put value, key string is : bafkreihsbjgozy7bokmekzyixvhtn7zb5ukn6j46ttfbdjrybpiuykqera
node0_primihub           | Put value success, value length:349
node0_primihub           | I20220608 03:36:06.346393     1 node.cc:140]  💻 Node listening on port: 50050
simple_bootstrap_node    | 2022-06-08T03:36:09.517Z DEBUG   basichost   basic/basic_host.go:313 failed to fetch local IPv6 address  {"error": "no route found for ::"}
node1_primihub           | 22.06.08 03:35:59.807301  Error     Plaintext  received_pid=QmdSyhb8eR9dDSR5jjnRoTDBwpBCSAjT7WueKJ9cQArYoA, p.value()=QmP2C45o2vZfy1JXWFZDUEzrQCigMtd4r3nesvArV8dFKd
node1_primihub           | 22.06.08 03:35:59.807358  Error     Plaintext  error happened while establishing a Plaintext session: Received peer id doesn't match actual peer id
node1_primihub           | 22.06.08 03:35:59.824921  Info      IdentifyMsgProcessor  received an identify message from peer QmdSyhb8eR9dDSR5jjnRoTDBwpBCSAjT7WueKJ9cQArYoA, /ip4/172.28.1.13/tcp/4001
node1_primihub           | 22.06.08 03:35:59.828697  Info      IdentifyMsgProcessor  successfully written an identify message to peer QmdSyhb8eR9dDSR5jjnRoTDBwpBCSAjT7WueKJ9cQArYoA, /ip4/172.28.1.13/tcp/4001
node1_primihub           | 22.06.08 03:35:59.838287  Info      IdentifyMsgProcessor  received an identify message from peer 12D3KooWSNHsV84Lp9q8PtzdkXLk4dd4eA6vdw1Hwjkmj4xN5BHB, /ip4/172.28.1.10/tcp/8880
node1_primihub           | 22.06.08 03:35:59.839191  Info      IdentifyMsgProcessor  successfully written an identify message to peer 12D3KooWSNHsV84Lp9q8PtzdkXLk4dd4eA6vdw1Hwjkmj4xN5BHB, /ip4/172.28.1.10/tcp/8880
node1_primihub           | 22.06.08 03:35:59.920763  Warning   KademliaExecutor  FindPeer#1: Result from 12D3KooWSNHsV84Lp9q8PtzdkXLk4dd4eA6vdw1Hwjkmj4xN5BHB is failed: stream reset; active 1, in queue 0
node1_primihub           | 22.06.08 03:36:00.669104  Info      IdentifyMsgProcessor  received an identify message from peer 12D3KooWC7NFZFTVoikHVgLLspi8dPmzHTfMQ6hfeDpMxYpfc7Rc, /ip4/172.28.1.12/tcp/56458
node1_primihub           | 22.06.08 03:36:00.669402  Info      IdentifyMsgProcessor  successfully written an identify message to peer 12D3KooWC7NFZFTVoikHVgLLspi8dPmzHTfMQ6hfeDpMxYpfc7Rc, /ip4/172.28.1.12/tcp/56458
node1_primihub           | Put value, key string is : bafkreiftpzzziaapza3t7t4vmnk3wrzreecebs25fhqq5kw65wwb2n52ke
node1_primihub           | Could not create logging file: No such file or directory
node1_primihub           | COULD NOT CREATE A LOGGINGFILE 20220608-033609.1!I20220608 03:36:09.820542     1 service.cc:179] << Put meta: {
node1_primihub           |     "data_type": 0,
node1_primihub           |     "data_url": "node1:172.28.1.11:50050:/tmp/train_party_1.csv",
node1_primihub           |     "description": "train_party_1",
node1_primihub           |     "driver_type": "CSV",
node1_primihub           |     "id": "bafkreiftpzzziaapza3t7t4vmnk3wrzreecebs25fhqq5kw65wwb2n52ke",
node1_primihub           |     "schema": "{\"x_0\":[],\"x_1\":[],\"x_2\":[],\"x_3\":[],\"x_4\":[],\"x_5\":[],\"x_6\":[],\"x_7\":[],\"x_8\":[],\"x_9\":[],\"x_10\":[],\"y\":[]}",
node1_primihub           |     "visibility": 1
node1_primihub           | }
node1_primihub           | Put value success, value length:351
node1_primihub           | I20220608 03:36:09.828379     1 service.cc:179] << Put meta: {
node1_primihub           |     "data_type": 0,
node1_primihub           |     "data_url": "node1:172.28.1.11:50050:/tmp/test_party_1.csv",
node1_primihub           |     "description": "test_party_1",
node1_primihub           |     "driver_type": "CSV",
node1_primihub           |     "id": "bafkreiaal743zyopwvelxpdpvka4ij4lkka4zbpvvfzhzl35j7ppkrhd3y",
node1_primihub           |     "schema": "{\"x_0\":[],\"x_1\":[],\"x_2\":[],\"x_3\":[],\"x_4\":[],\"x_5\":[],\"x_6\":[],\"x_7\":[],\"x_8\":[],\"x_9\":[],\"x_10\":[],\"y\":[]}",
node1_primihub           |     "visibility": 1
node1_primihub           | }
node1_primihub           | Put value, key string is : bafkreiaal743zyopwvelxpdpvka4ij4lkka4zbpvvfzhzl35j7ppkrhd3y
node1_primihub           | Put value success, value length:349
node1_primihub           | Failed to open file: /tmp/breast-cancer-wisconsin.data
node2_primihub           | Could not create logging file: No such file or directory
node2_primihub           | COULD NOT CREATE A LOGGINGFILE 20220608-033610.1!I20220608 03:36:10.614406     1 service.cc:179] << Put meta: {
node2_primihub           |     "data_type": 0,
node2_primihub           |     "data_url": "node2:172.28.1.12:50050:/tmp/train_party_2.csv",
node2_primihub           |     "description": "train_party_2",
node2_primihub           |     "driver_type": "CSV",
node2_primihub           |     "id": "bafkreig4v5ik5pq4z7u5gouuwow26acdtq45p3ufflqlbl7oj722b36iue",
node2_primihub           |     "schema": "{\"x_0\":[],\"x_1\":[],\"x_2\":[],\"x_3\":[],\"x_4\":[],\"x_5\":[],\"x_6\":[],\"x_7\":[],\"x_8\":[],\"x_9\":[],\"x_10\":[],\"y\":[]}",
node2_primihub           |     "visibility": 1
node2_primihub           | }
node2_primihub           | 22.06.08 03:36:00.586024  Error     Plaintext  received_pid=QmdSyhb8eR9dDSR5jjnRoTDBwpBCSAjT7WueKJ9cQArYoA, p.value()=QmP2C45o2vZfy1JXWFZDUEzrQCigMtd4r3nesvArV8dFKd
node2_primihub           | 22.06.08 03:36:00.586075  Error     Plaintext  error happened while establishing a Plaintext session: Received peer id doesn't match actual peer id
node2_primihub           | 22.06.08 03:36:00.595407  Info      IdentifyMsgProcessor  successfully written an identify message to peer QmdSyhb8eR9dDSR5jjnRoTDBwpBCSAjT7WueKJ9cQArYoA, /ip4/172.28.1.13/tcp/4001
node2_primihub           | 22.06.08 03:36:00.595685  Info      IdentifyMsgProcessor  received an identify message from peer QmdSyhb8eR9dDSR5jjnRoTDBwpBCSAjT7WueKJ9cQArYoA, /ip4/172.28.1.13/tcp/4001
node2_primihub           | 22.06.08 03:36:00.668776  Info      IdentifyMsgProcessor  successfully written an identify message to peer 12D3KooWE44B6gcnvtjuVs845hPMupL8FFASQ62xZfH7GKfHHAPf, /ip4/172.28.1.11/tcp/8881
node2_primihub           | 22.06.08 03:36:00.670341  Info      IdentifyMsgProcessor  received an identify message from peer 12D3KooWE44B6gcnvtjuVs845hPMupL8FFASQ62xZfH7GKfHHAPf, /ip4/172.28.1.11/tcp/8881
node2_primihub           | 22.06.08 03:36:00.670714  Info      IdentifyMsgProcessor  received an identify message from peer 12D3KooWSNHsV84Lp9q8PtzdkXLk4dd4eA6vdw1Hwjkmj4xN5BHB, /ip4/172.28.1.10/tcp/8880
node2_primihub           | 22.06.08 03:36:00.671108  Warning   KademliaExecutor  FindPeer#1: Result from 12D3KooWE44B6gcnvtjuVs845hPMupL8FFASQ62xZfH7GKfHHAPf is failed: stream reset; active 2, in queue 0
node2_primihub           | 22.06.08 03:36:00.671321  Info      IdentifyMsgProcessor  successfully written an identify message to peer 12D3KooWSNHsV84Lp9q8PtzdkXLk4dd4eA6vdw1Hwjkmj4xN5BHB, /ip4/172.28.1.10/tcp/8880
node2_primihub           | 22.06.08 03:36:00.713194  Warning   KademliaExecutor  FindPeer#1: Result from 12D3KooWSNHsV84Lp9q8PtzdkXLk4dd4eA6vdw1Hwjkmj4xN5BHB is failed: stream reset; active 1, in queue 0
node2_primihub           | Put value, key string is : bafkreig4v5ik5pq4z7u5gouuwow26acdtq45p3ufflqlbl7oj722b36iue
node2_primihub           | Put value success, value length:351
node2_primihub           | I20220608 03:36:10.627748     1 service.cc:179] << Put meta: {
node2_primihub           |     "data_type": 0,
node2_primihub           |     "data_url": "node2:172.28.1.12:50050:/tmp/test_party_2.csv",
node2_primihub           |     "description": "test_party_2",
node2_primihub           |     "driver_type": "CSV",
node2_primihub           |     "id": "bafkreiddalfshc4t3u6mjnp4n34mfcrze6fwons67g54j4p2meey56ebxe",
node2_primihub           |     "schema": "{\"x_0\":[],\"x_1\":[],\"x_2\":[],\"x_3\":[],\"x_4\":[],\"x_5\":[],\"x_6\":[],\"x_7\":[],\"x_8\":[],\"x_9\":[],\"x_10\":[],\"y\":[]}",
node2_primihub           |     "visibility": 1
node2_primihub           | }
node2_primihub           | Failed to open file: /tmp/breast-cancer-wisconsin-label.data
node2_primihub           | Put value, key string is : bafkreiddalfshc4t3u6mjnp4n34mfcrze6fwons67g54j4p2meey56ebxe
node2_primihub           | Put value success, value length:349
simple_bootstrap_node    | 2022-06-08T03:36:14.495Z DEBUG   basichost   basic/basic_host.go:313 failed to fetch local IPv6 address  {"error": "no route found for ::"}
node1_primihub exited with code 139
node2_primihub exited with code 139
simple_bootstrap_node    | 2022-06-08T03:36:19.495Z DEBUG   basichost   basic/basic_host.go:313 failed to fetch local IPv6 address  {"error": "no route found for ::"}
simple_bootstrap_node    | 2022-06-08T03:36:24.495Z DEBUG   basichost   basic/basic_host.go:313 failed to fetch local IPv6 address  {"error": "no route found for ::"}
simple_bootstrap_node    | 2022-06-08T03:36:29.501Z DEBUG   basichost   basic/basic_host.go:313 failed to fetch local IPv6 address  {"error": "no route found for ::"}
simple_bootstrap_node    | 2022-06-08T03:36:34.494Z DEBUG   basichost   basic/basic_host.go:313 failed to fetch local IPv6 address  {"error": "no route found for ::"}
simple_bootstrap_node    | 2022-06-08T03:36:39.495Z DEBUG   basichost   basic/basic_host.go:313 failed to fetch local IPv6 address  {"error": "no route found for ::"}
^CGracefully stopping... (press Ctrl+C again to force)
Killing node0_primihub         ... done
Killing simple_bootstrap_node  ... done
ERROR: 2
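For context on the two `exited with code 139` lines above: an exit status above 128 conventionally encodes 128 plus the number of the terminating signal, so 139 corresponds to signal 11 (SIGSEGV), meaning the node processes crashed rather than shut down cleanly. A small sketch illustrating the convention:

```python
import signal

def describe_exit_code(code):
    """Decode a shell/docker-compose exit status; codes above 128 mean 128 + signal number."""
    if code > 128:
        sig = signal.Signals(code - 128)
        return f"killed by signal {sig.value} ({sig.name})"
    return f"exited with status {code}"

print(describe_exit_code(139))  # 139 - 128 = signal 11
```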

Local build and startup, in a Linux environment

Cannot find @pybind11_bazel//:build_defs.bzl; @pybind11_bazel is not defined. I later manually copied build_defs.bzl into the bazel/ folder, but the same error is still reported. Why?

root@ubuntu:/home/~/Desktop/primihub# make linux_x86_64
bazel build --config=linux_x86_64 //:node //:cli //src/primihub/pybind_warpper:linkcontext //src/primihub/pybind_warpper:opt_paillier_c2py //:py_main
ERROR: Skipping '//src/primihub/pybind_warpper:opt_paillier_c2py': error loading package 'src/primihub/pybind_warpper': Unable to find package for @pybind11_bazel//:build_defs.bzl: The repository '@pybind11_bazel' could not be resolved: Repository '@pybind11_bazel' is not defined.
ERROR: error loading package 'src/primihub/pybind_warpper': Unable to find package for @pybind11_bazel//:build_defs.bzl: The repository '@pybind11_bazel' could not be resolved: Repository '@pybind11_bazel' is not defined.
INFO: Elapsed time: 0.106s
INFO: 0 processes.
FAILED: Build did NOT complete successfully (1 packages loaded)
currently loading: src/primihub/pybind_warpper
make: *** [Makefile:9: linux_x86_64] Error 1
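For reference, Bazel resolves `@pybind11_bazel` from a repository rule (such as `http_archive`) declared in the `WORKSPACE` file, so copying `build_defs.bzl` into `bazel/` cannot fix this error; the repository itself must be defined. A rough sketch of a check for whether a workspace file declares a given repository (the regex is a heuristic, not a Starlark parser, and the archive URL in the sample is illustrative):

```python
import re

def workspace_declares_repo(workspace_text, repo_name):
    """Heuristically check for a repository rule with name = "<repo_name>"."""
    pattern = rf'name\s*=\s*"{re.escape(repo_name)}"'
    return re.search(pattern, workspace_text) is not None

sample = '''
http_archive(
    name = "pybind11_bazel",
    strip_prefix = "pybind11_bazel-master",
    urls = ["https://github.com/pybind/pybind11_bazel/archive/master.zip"],
)
'''
```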

Docker deployment: service is inaccessible

Describe the bug
After the services start, the web page on port 30811 opens but returns a 404 error; the logs show that nacos failed to start, and nacos's port 8848 is unreachable.
This has been troubling me for several days; could you take a look?

Suggestion: provide a roadmap for this project

A clear roadmap is very important for an open-source community: users and contributors can see the community's long-term plan, and people might even have a chance to collaborate with the community on new features.

To make this easier, I have listed some examples:

Does deploying PrimiHub require a CPU that supports the AVX instruction set?

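Regarding the AVX question: a quick way to see whether the host CPU advertises AVX on Linux is to inspect the `flags` line of `/proc/cpuinfo` (a sketch for checking the host only; whether PrimiHub strictly requires AVX is not answered here):

```python
def cpu_has_avx(cpuinfo_path="/proc/cpuinfo"):
    """Return True if any 'flags' line in cpuinfo lists the avx feature."""
    try:
        with open(cpuinfo_path) as f:
            for line in f:
                if line.startswith("flags"):
                    return "avx" in line.split(":", 1)[1].split()
    except OSError:
        pass
    return False
```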

> What system are you using? If it is CentOS or Ubuntu, you can refer to [here](https://docs.primihub.com/docs/advance-usage/start/build) to set up the Python environment; for other systems, you can try installing Python via miniconda, see [here](https://m74hgjmt55.feishu.cn/docx/FmnSdtDLMoyGLMxffVqcqEZWnFg)

It is CentOS 8. I can currently test the MPC, PSI, and PIR features, but I cannot test FL.

Originally posted by @shuizhongmose in #471 (comment)

Failed to execute python: ImportError: cannot import name 'service_pb2' from 'src.primihub.protos'

When running Hetero XGB Training, the following error is reported: Failed to execute python: ImportError: cannot import name 'service_pb2' from 'src.primihub.protos'. How can this be resolved?
The error log is as follows:

/home/private/.cache/bazel/_bazel_private/70ce3da7f8a8c5326e7f2639e4c55098/execroot/primihub/bazel-out/k8-fastbuild/bin
I20230607 08:52:09.521046 272986 py_executor.cc:74] 
I20230607 08:52:09.521107 272986 py_executor.cc:75] start py main
E20230607 08:52:09.628433 272986 py_executor.cc:60] Failed to execute python: ImportError: cannot import name 'service_pb2' from 'src.primihub.protos' (/home/private/codes/mpc/primihub/python/primihub/client/ph_grpc/src/primihub/protos/__init__.py)

At:
  /home/private/codes/mpc/primihub/python/primihub/client/ph_grpc/service.py(27): <module>
  <frozen importlib._bootstrap>(219): _call_with_frames_removed
  <frozen importlib._bootstrap_external>(848): exec_module
  <frozen importlib._bootstrap>(686): _load_unlocked
  <frozen importlib._bootstrap>(975): _find_and_load_unlocked
  <frozen importlib._bootstrap>(991): _find_and_load
  /home/private/codes/mpc/primihub/python/primihub/client/ph_grpc/task.py(18): <module>
  <frozen importlib._bootstrap>(219): _call_with_frames_removed
  <frozen importlib._bootstrap_external>(848): exec_module
  <frozen importlib._bootstrap>(686): _load_unlocked
  <frozen importlib._bootstrap>(975): _find_and_load_unlocked
  <frozen importlib._bootstrap>(991): _find_and_load
  /home/private/codes/mpc/primihub/python/primihub/client/client.py(23): <module>
  <frozen importlib._bootstrap>(219): _call_with_frames_removed
  <frozen importlib._bootstrap_external>(848): exec_module
  <frozen importlib._bootstrap>(686): _load_unlocked
  <frozen importlib._bootstrap>(975): _find_and_load_unlocked
  <frozen importlib._bootstrap>(991): _find_and_load
  /home/private/codes/mpc/primihub/python/primihub/client/__init__.py(1): <module>
  <frozen importlib._bootstrap>(219): _call_with_frames_removed
  <frozen importlib._bootstrap_external>(848): exec_module
  <frozen importlib._bootstrap>(686): _load_unlocked
  <frozen importlib._bootstrap>(975): _find_and_load_unlocked
  <frozen importlib._bootstrap>(991): _find_and_load
  <frozen importlib._bootstrap>(219): _call_with_frames_removed
  <frozen importlib._bootstrap>(961): _find_and_load_unlocked
  <frozen importlib._bootstrap>(991): _find_and_load
  <frozen importlib._bootstrap>(219): _call_with_frames_removed
  <frozen importlib._bootstrap>(961): _find_and_load_unlocked
  <frozen importlib._bootstrap>(991): _find_and_load
  <frozen importlib._bootstrap>(219): _call_with_frames_removed
  <frozen importlib._bootstrap>(961): _find_and_load_unlocked
  <frozen importlib._bootstrap>(991): _find_and_load
  <frozen importlib._bootstrap>(219): _call_with_frames_removed
  <frozen importlib._bootstrap>(961): _find_and_load_unlocked
  <frozen importlib._bootstrap>(991): _find_and_load
  /home/private/codes/mpc/primihub/python/primihub/executor.py(20): <module>
  <frozen importlib._bootstrap>(219): _call_with_frames_removed
  <frozen importlib._bootstrap_external>(848): exec_module
  <frozen importlib._bootstrap>(686): _load_unlocked
  <frozen importlib._bootstrap>(975): _find_and_load_unlocked
  <frozen importlib._bootstrap>(991): _find_and_load
E20230607 08:52:09.628553 272986 py_executor.cc:92] py executor encoutes error when executing task
I20230607 08:52:09.683385 272978 fl_task.cc:114] py_main executes result code: 255
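The ImportError above typically means the generated protobuf/gRPC stubs (`service_pb2.py` and friends) are absent from the `protos` package, for example because proto compilation was skipped during the build. A minimal sketch that lists which expected generated modules are missing from a package directory (the module names are assumptions based on the error message):

```python
import pathlib

def missing_pb2_modules(package_dir, expected=("service_pb2", "service_pb2_grpc")):
    """Return the expected generated protobuf modules not present in package_dir."""
    pkg = pathlib.Path(package_dir)
    return [name for name in expected if not (pkg / f"{name}.py").exists()]
```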

The performance of PIR and PSI

Hello, PrimiHub team,
could you provide some information about the performance of PIR and PSI in PrimiHub under different data volumes and network bandwidths?
Thanks.

Incomplete PSI results

Describe the bug

  1. In the KKRT-PSI config file, the `serverIndex` and `psiTag` values are wrong; they should both be 1.
  2. Why are the data rows `A big company 8` and `A big company 9` missing from the intersection results of both ECDH-PSI and KKRT-PSI?

Log output

[baas@baas-node03 primihub]$ cat data/result/server/psi_result.csv                                  
 "intersection_row"
A big company 4
A big company 1
A big company 6
A big company 7
A big company 5
A big company 10
A big company 3
A big company 2
[baas@baas-node03 primihub]$ cat data/result/psi_result.csv                                         
 "intersection_row"
A big company 4
A big company 1
A big company 6
A big company 7
A big company 5
A big company 10
A big company 3
A big company 2
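To sanity-check which rows should appear, the expected intersection can be computed directly over the two input CSVs with an ordinary set intersection (a sketch; the file paths and key-column index are placeholders to be replaced with the actual party inputs):

```python
import csv

def csv_intersection(path_a, path_b, column=0, skip_header=True):
    """Intersect the values of one column across two CSV files."""
    def col(path):
        with open(path, newline="") as f:
            rows = list(csv.reader(f))
        if skip_header and rows:
            rows = rows[1:]
        return {row[column] for row in rows if row}
    return col(path_a) & col(path_b)
```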

Suggestion: release the Python SDK to the PyPI service

Background

Currently, it's pretty complex if someone wants to try the Python SDK. Usually, people need to handle many environment or version issues when they try to compile the source code. This process can take 30 minutes or more.

Suggestion

PyPI is the standard, very popular Python package index. I highly recommend releasing the Python SDK there, so users no longer need to worry about compiling it themselves.

I did some work on that; it should not be hard. Please also see this tutorial

After pulling the latest image, the page still returns 404 and nacos reports errors. Where is the problem?

nacos.log
org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'memoryMonitor' defined in URL [jar:file:/home/nacos/target/nacos-server.jar!/BOOT-INF/lib/nacos-config-2.0.4.jar!/com/alibaba/nacos/config/server/monitor/MemoryMonitor.class]: Unsatisfied dependency expressed through constructor parameter 0; nested exception is org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'asyncNotifyService': Unsatisfied dependency expressed through field 'dumpService'; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'externalDumpService': Invocation of init method failed; nested exception is ErrCode:500, ErrMsg:Nacos Server did not start because dumpservice bean construction failure :
No DataSource set
at org.springframework.beans.factory.support.ConstructorResolver.createArgumentArray(ConstructorResolver.java:769)
at org.springframework.beans.factory.support.ConstructorResolver.autowireConstructor(ConstructorResolver.java:218)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.autowireConstructor(AbstractAutowireCapableBeanFactory.java:1338)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1185)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:554)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:514)
at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:321)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:234)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:319)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:199)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:866)
at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:878)

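`No DataSource set` at Nacos startup generally means Nacos could not find its database configuration. In the common Docker setup, Nacos reads MySQL connection details from environment variables; a sketch that reports which of the usual variables are unset (the variable names follow the nacos-docker convention and should be checked against your deployment):

```python
import os

NACOS_DB_VARS = ("MYSQL_SERVICE_HOST", "MYSQL_SERVICE_DB_NAME",
                 "MYSQL_SERVICE_USER", "MYSQL_SERVICE_PASSWORD")

def missing_nacos_db_env(env=None):
    """Return the Nacos MySQL env vars that are unset or empty."""
    env = os.environ if env is None else env
    return [k for k in NACOS_DB_VARS if not env.get(k)]
```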

Example task runs successfully, but there are some errors in the log

Using the task command from README.md, the task appears to execute successfully, but I found some errors in the log. Is that normal?

docker run --network=host -it primihub/primihub-node:1.0.5  /app/primihub-cli  --server=127.0.0.1:8050
node0_primihub           | 22.06.08 10:51:01.448706  Error     Plaintext  received_pid=QmP2C45o2vZfy1JXWFZDUEzrQCigMtd4r3nesvArV8dFKd, p.value()=QmdSyhb8eR9dDSR5jjnRoTDBwpBCSAjT7WueKJ9cQArYoA
node0_primihub           | 22.06.08 10:51:01.449584  Error     Plaintext  error happened while establishing a Plaintext session: Received peer id doesn't match actual peer id

When using bazel 5.0.0, error: /bin/bash: $'\r': command not found

ERROR: /root/.cache/bazel/_bazel_hsx/c5a0b19753becc3200e2a5f87cfd2c2a/external/lib_function2/BUILD.bazel:10:8: Executing genrule @lib_function2//:function2-build failed: (Exit 127): linux-sandbox failed: error executing command
(cd /root/.cache/bazel/_bazel_hsx/c5a0b19753becc3200e2a5f87cfd2c2a/sandbox/linux-sandbox/33/execroot/main &&
exec env -
PATH=/home/hsx/.opam/4.10.0/bin:/home/hsx/.local/bin:/home/hsx/anaconda3/bin:/home/hsx/anaconda3/condabin:/home/hsx/anaconda3/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/usr/lib/wsl/lib:/snap/bin
TMPDIR=/tmp
/root/.cache/bazel/_bazel_hsx/install/c87283ec3a7822eea44f4cecb6db792e/linux-sandbox -t 15 -w /root/.cache/bazel/_bazel_hsx/c5a0b19753becc3200e2a5f87cfd2c2a/sandbox/linux-sandbox/33/execroot/main -w /tmp -w /run/shm -D -- /bin/bash -c 'source external/bazel_tools/tools/genrule/genrule-setup.sh;
set -x
FUNCTION2_ROOT=$(dirname external/lib_function2/CMakeLists.txt)
cp $FUNCTION2_ROOT/include/function2/function2.hpp bazel-out/k8-fastbuild/bin/external/lib_function2/function2/function2.hpp
')
1656642084.665200200: src/main/tools/linux-sandbox.cc:152: calling pipe(2)...
1656642084.665260800: src/main/tools/linux-sandbox.cc:171: calling clone(2)...
1656642084.665663100: src/main/tools/linux-sandbox.cc:180: linux-sandbox-pid1 has PID 15992
1656642084.665742300: src/main/tools/linux-sandbox-pid1.cc:641: Pid1Main started
1656642084.665917900: src/main/tools/linux-sandbox.cc:197: done manipulating pipes
1656642084.666076700: src/main/tools/linux-sandbox-pid1.cc:260: working dir: /root/.cache/bazel/_bazel_hsx/c5a0b19753becc3200e2a5f87cfd2c2a/sandbox/linux-sandbox/33/execroot/main
1656642084.666113400: src/main/tools/linux-sandbox-pid1.cc:292: writable: /root/.cache/bazel/_bazel_hsx/c5a0b19753becc3200e2a5f87cfd2c2a/sandbox/linux-sandbox/33/execroot/main
1656642084.666128100: src/main/tools/linux-sandbox-pid1.cc:292: writable: /tmp
1656642084.666140600: src/main/tools/linux-sandbox-pid1.cc:292: writable: /run/shm
1656642084.666237100: src/main/tools/linux-sandbox-pid1.cc:362: remount ro: /
1656642084.666252700: src/main/tools/linux-sandbox-pid1.cc:362: remount ro: /init
1656642084.666260300: src/main/tools/linux-sandbox-pid1.cc:362: remount ro: /dev
1656642084.666267100: src/main/tools/linux-sandbox-pid1.cc:362: remount ro: /dev/pts
1656642084.666274100: src/main/tools/linux-sandbox-pid1.cc:362: remount ro: /sys
1656642084.666280300: src/main/tools/linux-sandbox-pid1.cc:362: remount ro: /sys/fs/cgroup
1656642084.666289500: src/main/tools/linux-sandbox-pid1.cc:362: remount ro: /sys/fs/cgroup/unified
1656642084.666297000: src/main/tools/linux-sandbox-pid1.cc:362: remount ro: /sys/fs/cgroup/cpuset
1656642084.666303700: src/main/tools/linux-sandbox-pid1.cc:362: remount ro: /sys/fs/cgroup/cpu
1656642084.666310000: src/main/tools/linux-sandbox-pid1.cc:362: remount ro: /sys/fs/cgroup/cpuacct
1656642084.666315800: src/main/tools/linux-sandbox-pid1.cc:362: remount ro: /sys/fs/cgroup/blkio
1656642084.666322700: src/main/tools/linux-sandbox-pid1.cc:362: remount ro: /sys/fs/cgroup/memory
1656642084.666356800: src/main/tools/linux-sandbox-pid1.cc:362: remount ro: /sys/fs/cgroup/devices
1656642084.666364100: src/main/tools/linux-sandbox-pid1.cc:362: remount ro: /sys/fs/cgroup/freezer
1656642084.666370300: src/main/tools/linux-sandbox-pid1.cc:362: remount ro: /sys/fs/cgroup/net_cls
1656642084.666376800: src/main/tools/linux-sandbox-pid1.cc:362: remount ro: /sys/fs/cgroup/perf_event
1656642084.666383100: src/main/tools/linux-sandbox-pid1.cc:362: remount ro: /sys/fs/cgroup/net_prio
1656642084.666389100: src/main/tools/linux-sandbox-pid1.cc:362: remount ro: /sys/fs/cgroup/hugetlb
1656642084.666395300: src/main/tools/linux-sandbox-pid1.cc:362: remount ro: /sys/fs/cgroup/pids
1656642084.666401400: src/main/tools/linux-sandbox-pid1.cc:362: remount ro: /sys/fs/cgroup/rdma
1656642084.666407900: src/main/tools/linux-sandbox-pid1.cc:362: remount ro: /proc
1656642084.666413200: src/main/tools/linux-sandbox-pid1.cc:362: remount ro: /proc/sys/fs/binfmt_misc
1656642084.666422800: src/main/tools/linux-sandbox-pid1.cc:362: remount ro: /run
1656642084.666428600: src/main/tools/linux-sandbox-pid1.cc:362: remount ro: /run/lock
1656642084.666434500: src/main/tools/linux-sandbox-pid1.cc:362: remount rw: /run/shm
1656642084.666440600: src/main/tools/linux-sandbox-pid1.cc:362: remount ro: /run/user
1656642084.666467400: src/main/tools/linux-sandbox-pid1.cc:362: remount ro: /usr/lib/wsl/drivers
1656642084.666475700: src/main/tools/linux-sandbox-pid1.cc:362: remount ro: /usr/lib/wsl/lib
1656642084.666481200: src/main/tools/linux-sandbox-pid1.cc:362: remount ro: /mnt/wsl
1656642084.666488500: src/main/tools/linux-sandbox-pid1.cc:362: remount ro: /mnt/c
1656642084.666494800: src/main/tools/linux-sandbox-pid1.cc:362: remount ro: /mnt/d
1656642084.666500700: src/main/tools/linux-sandbox-pid1.cc:362: remount ro: /mnt/e
1656642084.666533900: src/main/tools/linux-sandbox-pid1.cc:362: remount ro: /mnt/f
1656642084.666550000: src/main/tools/linux-sandbox-pid1.cc:362: remount rw: /root/.cache/bazel/_bazel_hsx/c5a0b19753becc3200e2a5f87cfd2c2a/sandbox/linux-sandbox/33/execroot/main
1656642084.666581600: src/main/tools/linux-sandbox-pid1.cc:362: remount rw: /root/.cache/bazel/_bazel_hsx/c5a0b19753becc3200e2a5f87cfd2c2a/sandbox/linux-sandbox/33/execroot/main
1656642084.666594400: src/main/tools/linux-sandbox-pid1.cc:362: remount rw: /tmp
1656642084.666604000: src/main/tools/linux-sandbox-pid1.cc:362: remount rw: /run/shm
1656642084.666742500: src/main/tools/linux-sandbox-pid1.cc:451: calling fork...
1656642084.667313800: src/main/tools/linux-sandbox-pid1.cc:481: child started with PID 2
/bin/bash: $'\r': command not found
1656642084.675113200: src/main/tools/linux-sandbox-pid1.cc:498: wait returned pid=2, status=0x7f00
1656642084.675130100: src/main/tools/linux-sandbox-pid1.cc:516: child exited normally with code 127
1656642084.675748300: src/main/tools/linux-sandbox.cc:233: child exited normally with code 127
INFO: Elapsed time: 0.484s, Critical Path: 0.13s
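The `/bin/bash: $'\r': command not found` line indicates the script being executed has Windows (CRLF) line endings, a common situation when a repository is cloned under WSL with `core.autocrlf` enabled. A minimal sketch of reproducing and stripping them (the temp-file path is illustrative):

```shell
# Reproduce and fix the CRLF problem: a script saved with Windows line
# endings makes bash treat the stray carriage return as part of each command.
printf 'echo hello\r\n' > /tmp/crlf_demo.sh
sed -i 's/\r$//' /tmp/crlf_demo.sh   # strip the trailing CR from every line
bash /tmp/crlf_demo.sh               # now runs without $'\r' errors
```

Setting `git config --global core.autocrlf input` before cloning avoids reintroducing CRLF on the next checkout.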

Error when deploying the full PrimiHub privacy computing management platform

Describe the bug

[root@localhost docker-deploy]# sh deploy.sh
docker installed
Docker Compose version v2.6.0
docker-compose installed
1.6.4: Pulling from primihub/primihub-fusion
Digest: sha256:591e2d8e0f8e684fa4e6764ef200cb22d030f3e5e764ddbdb8a1264a930f49e2
Status: Image is up to date for primihub/primihub-fusion:1.6.4
docker.io/primihub/primihub-fusion:1.6.4
1.6.4: Pulling from primihub/primihub-platform
Digest: sha256:721bd58c64df3e61b80b6be1d8bc5b8f2ecd91e4a9e1309b379cd52a2f796906
Status: Image is up to date for primihub/primihub-platform:1.6.4
docker.io/primihub/primihub-platform:1.6.4
1.6.4: Pulling from primihub/primihub-web
Digest: sha256:224b169e706b18c8f6cdc532ccaf9902f30a0fc093f309f46bfdf1efc569b769
Status: Image is up to date for primihub/primihub-web:1.6.4
docker.io/primihub/primihub-web:1.6.4
1.6.4: Pulling from primihub/primihub-node
Digest: sha256:f23c659e71e5ba3964acaf04eb69b8bff231cca07022c2abbc679fbea31cdaa5
Status: Image is up to date for primihub/primihub-node:1.6.4
docker.io/primihub/primihub-node:1.6.4
v2.0.4: Pulling from primihub/nacos-server
Digest: sha256:6dfbd52d675f804d11f370b42a1f969f5d9557c56fe091a62de3051d3d115d6e
Status: Image is up to date for registry.cn-beijing.aliyuncs.com/primihub/nacos-server:v2.0.4
registry.cn-beijing.aliyuncs.com/primihub/nacos-server:v2.0.4
3.6.15-management: Pulling from primihub/rabbitmq
Digest: sha256:d1e6b70b28b9fe6c42ba86c059bd249677583bb62cb25f6f982649bd7ae6976b
Status: Image is up to date for registry.cn-beijing.aliyuncs.com/primihub/rabbitmq:3.6.15-management
registry.cn-beijing.aliyuncs.com/primihub/rabbitmq:3.6.15-management
7: Pulling from library/redis
Digest: sha256:92b8b307ee28ed74da17578064c73307ad41e43f422f0b7e4e91498b406c59e3
Status: Image is up to date for redis:7
docker.io/library/redis:7
5.7: Pulling from primihub/mysql
Digest: sha256:398f124948bb3d5789c0ac7c004d02e6d9a3ae95aa9804d7a3b33a344ff3c9cd
Status: Image is up to date for registry.cn-beijing.aliyuncs.com/primihub/mysql:5.7
registry.cn-beijing.aliyuncs.com/primihub/mysql:5.7
Using default tag: latest
latest: Pulling from primihub/loki
Digest: sha256:d69f377ecfdbb3f72086a180dcd7c2f02c795cf1867bbeb61606b42a8d41a557
Status: Image is up to date for registry.cn-beijing.aliyuncs.com/primihub/loki:latest
registry.cn-beijing.aliyuncs.com/primihub/loki:latest
[+] Running 0/0
⠿ Network docker-deploy_primihub_net Error 0.0s
failed to create network docker-deploy_primihub_net: Error response from daemon: Pool overlaps with other one on this address space

The run finally fails with this error:
⠿ Network docker-deploy_primihub_net Error 0.0s
failed to create network docker-deploy_primihub_net: Error response from daemon: Pool overlaps with other one on this address space
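"Pool overlaps with other one on this address space" means another Docker network already occupies the subnet this compose file requests. Two common remedies: remove unused networks with `docker network prune`, or pin the compose network to a free subnet. A sketch of the latter (the network key name and the 172.30.0.0/16 subnet are assumptions, chosen to avoid the 172.28.x.x range seen in these logs):

```yaml
# docker-compose.yml fragment (illustrative): give the network an
# explicit subnet that does not collide with existing Docker networks.
networks:
  primihub_net:
    driver: bridge
    ipam:
      config:
        - subnet: 172.30.0.0/16   # pick any unused private range
```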

build error

ERROR: error loading package '': Label '//3rdparty/bazel-rules-leveldb/bazel:repos.bzl' is invalid because '3rdparty/bazel-rules-leveldb/bazel' is not a package; perhaps you meant to put the colon here: '//:3rdparty/bazel-rules-leveldb/bazel/repos.bzl'?
INFO: Elapsed time: 0.505s
INFO: 0 processes.
FAILED: Build did NOT complete successfully (0 packages loaded)
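The invalid-label error usually means the `3rdparty/bazel-rules-leveldb` directory is empty because the git submodules were never fetched, so Bazel finds no package there. A sketch of the usual fix, run from the repository root:

```shell
# Fetch the vendored third-party Bazel rules before building.
git submodule update --init --recursive
```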

Running deploy.sh fails with: Error response from daemon: Invalid address 172.28.1.30: It does not belong to any of this network's subnets

Describe the bug
Error response from daemon: Invalid address 172.28.1.30: It does not belong to any of this network's subnets

[root@localhost docker-deploy]# sh deploy.sh
docker installed
Docker Compose version v2.6.0
docker-compose installed
1.6.4: Pulling from primihub/primihub-fusion
Digest: sha256:591e2d8e0f8e684fa4e6764ef200cb22d030f3e5e764ddbdb8a1264a930f49e2
Status: Image is up to date for primihub/primihub-fusion:1.6.4
docker.io/primihub/primihub-fusion:1.6.4
1.6.4: Pulling from primihub/primihub-platform
Digest: sha256:721bd58c64df3e61b80b6be1d8bc5b8f2ecd91e4a9e1309b379cd52a2f796906
Status: Image is up to date for primihub/primihub-platform:1.6.4
docker.io/primihub/primihub-platform:1.6.4
1.6.4: Pulling from primihub/primihub-web
Digest: sha256:224b169e706b18c8f6cdc532ccaf9902f30a0fc093f309f46bfdf1efc569b769
Status: Image is up to date for primihub/primihub-web:1.6.4
docker.io/primihub/primihub-web:1.6.4
1.6.4: Pulling from primihub/primihub-node
Digest: sha256:f23c659e71e5ba3964acaf04eb69b8bff231cca07022c2abbc679fbea31cdaa5
Status: Image is up to date for primihub/primihub-node:1.6.4
docker.io/primihub/primihub-node:1.6.4
v2.0.4: Pulling from primihub/nacos-server
Digest: sha256:6dfbd52d675f804d11f370b42a1f969f5d9557c56fe091a62de3051d3d115d6e
Status: Image is up to date for registry.cn-beijing.aliyuncs.com/primihub/nacos-server:v2.0.4
registry.cn-beijing.aliyuncs.com/primihub/nacos-server:v2.0.4
3.6.15-management: Pulling from primihub/rabbitmq
Digest: sha256:d1e6b70b28b9fe6c42ba86c059bd249677583bb62cb25f6f982649bd7ae6976b
Status: Image is up to date for registry.cn-beijing.aliyuncs.com/primihub/rabbitmq:3.6.15-management
registry.cn-beijing.aliyuncs.com/primihub/rabbitmq:3.6.15-management
7: Pulling from library/redis
Digest: sha256:92b8b307ee28ed74da17578064c73307ad41e43f422f0b7e4e91498b406c59e3
Status: Image is up to date for redis:7
docker.io/library/redis:7
5.7: Pulling from primihub/mysql
Digest: sha256:398f124948bb3d5789c0ac7c004d02e6d9a3ae95aa9804d7a3b33a344ff3c9cd
Status: Image is up to date for registry.cn-beijing.aliyuncs.com/primihub/mysql:5.7
registry.cn-beijing.aliyuncs.com/primihub/mysql:5.7
Using default tag: latest
latest: Pulling from primihub/loki
Digest: sha256:d69f377ecfdbb3f72086a180dcd7c2f02c795cf1867bbeb61606b42a8d41a557
Status: Image is up to date for registry.cn-beijing.aliyuncs.com/primihub/loki:latest
registry.cn-beijing.aliyuncs.com/primihub/loki:latest
[+] Running 0/3
⠦ Container redis Starting 0.6s
⠦ Container loki Starting 0.6s
⠦ Container mysql Starting 0.6s
Error response from daemon: Invalid address 172.28.1.30: It does not belong to any of this network's subnets
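"Invalid address 172.28.1.30" typically means the network already exists from an earlier deploy with a different subnet, so the containers' static IPs no longer fall inside it. Removing the stale network and redeploying usually clears it (the network name is taken from the logs above):

```shell
# Stop the stack, drop the stale network, then redeploy so compose
# recreates the network with the subnet the config expects.
docker compose down
docker network rm docker-deploy_primihub_net
sh deploy.sh
```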

docker-compose up 失败

OS:MacOS Monterey 12.3.1
Docker version 20.10.10, build b485636

(base)  ✘ zane@m1  ~/data/primihub   master  docker-compose up
Creating network "primihub_testing_net" with driver "bridge"
Pulling simple_bootstrap_node (primihub/simple-bootstrap-node:1.0)...
1.0: Pulling from primihub/simple-bootstrap-node
d07403d21a2a: Pull complete
Digest: sha256:3115ca655bb8a45ad2b6a83d8227b9fde272584abcc74ed814c46d413679ebed
Status: Downloaded newer image for primihub/simple-bootstrap-node:1.0
Pulling node2 (primihub/primihub-node:1.0.5)...
1.0.5: Pulling from primihub/primihub-node
d7bfe07ed847: Pull complete
3e832b6dc085: Pull complete
6e2baf3e12f2: Pull complete
b64c6dcc8c01: Pull complete
17b8d95d5b45: Pull complete
3145e5b0507f: Pull complete
9bbf991806d5: Pull complete
06de1ebb8232: Pull complete
8ad9830833d1: Pull complete
Digest: sha256:f0b11ee5ac73b22d731f6685143fc2e8ec41867a108974aab9e2b9e9203c921e
Status: Downloaded newer image for primihub/primihub-node:1.0.5
Creating simple_bootstrap_node ... done
Creating node2_primihub        ... done
Creating node1_primihub        ... done
Creating node0_primihub        ... done
Attaching to simple_bootstrap_node, node0_primihub, node1_primihub, node2_primihub
simple_bootstrap_node    | [*] Listening on: 0.0.0.0 with port: 4001
node0_primihub           | terminate called after throwing an instance of 'YAML::BadFile'
node0_primihub           |   what():  bad file: /app/primihub_node0.yaml
node0_primihub           | qemu: uncaught target signal 6 (Aborted) - core dumped
node2_primihub           | terminate called after throwing an instance of 'YAML::BadFile'
node2_primihub           |   what():  bad file: /app/primihub_node2.yaml
node2_primihub           | qemu: uncaught target signal 6 (Aborted) - core dumped
node1_primihub           | terminate called after throwing an instance of 'YAML::BadFile'
node1_primihub           |   what():  bad file: /app/primihub_node1.yaml
node1_primihub           | qemu: uncaught target signal 6 (Aborted) - core dumped
simple_bootstrap_node    | 2022-07-05T08:39:52.689Z	DEBUG	basichost	basic/basic_host.go:313	failed to fetch local IPv6 address	{"error": "no route found for ::"}
simple_bootstrap_node    | 2022-07-05T08:39:52.743Z	INFO	bootsrap	src/main.go:64	Host created. We are:QmdSyhb8eR9dDSR5jjnRoTDBwpBCSAjT7WueKJ9cQArYoA
simple_bootstrap_node    | 2022-07-05T08:39:52.744Z	DEBUG	basichost	basic/basic_host.go:313	failed to fetch local IPv6 address	{"error": "no route found for ::"}
simple_bootstrap_node    | 2022-07-05T08:39:52.747Z	INFO	bootsrap	src/main.go:65	[/ip4/172.28.1.13/tcp/4001 /ip4/127.0.0.1/tcp/4001]
simple_bootstrap_node    |
simple_bootstrap_node    | [*] Your Bootstrap ID Is: /ip4/0.0.0.0/tcp/4001/ipfs/QmdSyhb8eR9dDSR5jjnRoTDBwpBCSAjT7WueKJ9cQArYoA
simple_bootstrap_node    |
simple_bootstrap_node    | 2022-07-05T08:39:52.757Z	INFO	dht/RtRefreshManager	rtrefresh/rt_refresh_manager.go:279	starting refreshing cpl 0 with key CIQAAALIVMAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA (routing table size was 0)
simple_bootstrap_node    | 2022-07-05T08:39:52.758Z	WARN	dht/RtRefreshManager	rtrefresh/rt_refresh_manager.go:136	failed when refreshing routing table2 errors occurred:
simple_bootstrap_node    | 	* failed to query for self, err=failed to find any peer in table
simple_bootstrap_node    | 	* failed to refresh cpl=0, err=failed to find any peer in table
simple_bootstrap_node    |
simple_bootstrap_node    |
simple_bootstrap_node    | 2022-07-05T08:39:52.760Z	DEBUG	basichost	basic/basic_host.go:313	failed to fetch local IPv6 address	{"error": "no route found for ::"}
simple_bootstrap_node    | 2022-07-05T08:39:52.762Z	INFO	dht/RtRefreshManager	rtrefresh/rt_refresh_manager.go:279	starting refreshing cpl 0 with key CIQAAACPIYAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA (routing table size was 0)
simple_bootstrap_node    | 2022-07-05T08:39:52.762Z	WARN	dht/RtRefreshManager	rtrefresh/rt_refresh_manager.go:199	failed when refreshing routing table	{"error": "2 errors occurred:\n\t* failed to query for self, err=failed to find any peer in table\n\t* failed to refresh cpl=0, err=failed to find any peer in table\n\n"}
simple_bootstrap_node    | 2022-07-05T08:39:57.749Z	DEBUG	basichost	basic/basic_host.go:313	failed to fetch local IPv6 address	{"error": "no route found for ::"}
simple_bootstrap_node    | 2022-07-05T08:40:02.750Z	DEBUG	basichost	basic/basic_host.go:313	failed to fetch local IPv6 address	{"error": "no route found for ::"}
simple_bootstrap_node    | 2022-07-05T08:40:07.751Z	DEBUG	basichost	basic/basic_host.go:313	failed to fetch local IPv6 address	{"error": "no route found for ::"}
simple_bootstrap_node    | 2022-07-05T08:40:12.749Z	DEBUG	basichost	basic/basic_host.go:313	failed to fetch local IPv6 address	{"error": "no route found for ::"}
simple_bootstrap_node    | 2022-07-05T08:40:17.748Z	DEBUG	basichost	basic/basic_host.go:313	failed to fetch local IPv6 address	{"error": "no route found for ::"}
simple_bootstrap_node    | 2022-07-05T08:40:22.746Z	DEBUG	basichost	basic/basic_host.go:313	failed to fetch local IPv6 address	{"error": "no route found for ::"}
simple_bootstrap_node    | 2022-07-05T08:40:27.752Z	DEBUG	basichost	basic/basic_host.go:313	failed to fetch local IPv6 address	{"error": "no route found for ::"}
simple_bootstrap_node    | 2022-07-05T08:40:32.748Z	DEBUG	basichost	basic/basic_host.go:313	failed to fetch local IPv6 address	{"error": "no route found for ::"}
simple_bootstrap_node    | 2022-07-05T08:40:37.751Z	DEBUG	basichost	basic/basic_host.go:313	failed to fetch local IPv6 address	{"error": "no route found for ::"}
simple_bootstrap_node    | 2022-07-05T08:40:42.750Z	DEBUG	basichost	basic/basic_host.go:313	failed to fetch local IPv6 address	{"error": "no route found for ::"}
simple_bootstrap_node    | 2022-07-05T08:40:47.749Z	DEBUG	basichost	basic/basic_host.go:313	failed to fetch local IPv6 address	{"error": "no route found for ::"}
simple_bootstrap_node    | 2022-07-05T08:40:52.747Z	DEBUG	basichost	basic/basic_host.go:313	failed to fetch local IPv6 address	{"error": "no route found for ::"}
simple_bootstrap_node    | 2022-07-05T08:40:57.750Z	DEBUG	basichost	basic/basic_host.go:313	failed to fetch local IPv6 address	{"error": "no route found for ::"}
simple_bootstrap_node    | 2022-07-05T08:41:02.748Z	DEBUG	basichost	basic/basic_host.go:313	failed to fetch local IPv6 address	{"error": "no route found for ::"}
simple_bootstrap_node    | 2022-07-05T08:41:07.750Z	DEBUG	basichost	basic/basic_host.go:313	failed to fetch local IPv6 address	{"error": "no route found for ::"}
simple_bootstrap_node    | 2022-07-05T08:41:12.747Z	DEBUG	basichost	basic/basic_host.go:313	failed to fetch local IPv6 address	{"error": "no route found for ::"}
simple_bootstrap_node    | 2022-07-05T08:41:17.748Z	DEBUG	basichost	basic/basic_host.go:313	failed to fetch local IPv6 address	{"error": "no route found for ::"}
simple_bootstrap_node    | 2022-07-05T08:41:22.746Z	DEBUG	basichost	basic/basic_host.go:313	failed to fetch local IPv6 address	{"error": "no route found for ::"}
simple_bootstrap_node    | 2022-07-05T08:41:27.749Z	DEBUG	basichost	basic/basic_host.go:313	failed to fetch local IPv6 address	{"error": "no route found for ::"}
simple_bootstrap_node    | 2022-07-05T08:41:32.745Z	DEBUG	basichost	basic/basic_host.go:313	failed to fetch local IPv6 address	{"error": "no route found for ::"}
simple_bootstrap_node    | 2022-07-05T08:41:37.745Z	DEBUG	basichost	basic/basic_host.go:313	failed to fetch local IPv6 address	{"error": "no route found for ::"}
simple_bootstrap_node    | 2022-07-05T08:41:42.745Z	DEBUG	basichost	basic/basic_host.go:313	failed to fetch local IPv6 address	{"error": "no route found for ::"}
simple_bootstrap_node    | 2022-07-05T08:41:47.745Z	DEBUG	basichost	basic/basic_host.go:313	failed to fetch local IPv6 address	{"error": "no route found for ::"}
simple_bootstrap_node    | 2022-07-05T08:41:52.743Z	DEBUG	basichost	basic/basic_host.go:313	failed to fetch local IPv6 address	{"error": "no route found for ::"}
simple_bootstrap_node    | 2022-07-05T08:41:57.748Z	DEBUG	basichost	basic/basic_host.go:313	failed to fetch local IPv6 address	{"error": "no route found for ::"}
simple_bootstrap_node    | 2022-07-05T08:42:02.749Z	DEBUG	basichost	basic/basic_host.go:313	failed to fetch local IPv6 address	{"error": "no route found for ::"}
simple_bootstrap_node    | 2022-07-05T08:42:07.748Z	DEBUG	basichost	basic/basic_host.go:313	failed to fetch local IPv6 address	{"error": "no route found for ::"}
simple_bootstrap_node    | 2022-07-05T08:42:12.749Z	DEBUG	basichost	basic/basic_host.go:313	failed to fetch local IPv6 address	{"error": "no route found for ::"}
simple_bootstrap_node    | 2022-07-05T08:42:17.747Z	DEBUG	basichost	basic/basic_host.go:313	failed to fetch local IPv6 address	{"error": "no route found for ::"}
simple_bootstrap_node    | 2022-07-05T08:42:22.745Z	DEBUG	basichost	basic/basic_host.go:313	failed to fetch local IPv6 address	{"error": "no route found for ::"}
simple_bootstrap_node    | 2022-07-05T08:42:27.751Z	DEBUG	basichost	basic/basic_host.go:313	failed to fetch local IPv6 address	{"error": "no route found for ::"}
simple_bootstrap_node    | 2022-07-05T08:42:32.747Z	DEBUG	basichost	basic/basic_host.go:313	failed to fetch local IPv6 address	{"error": "no route found for ::"}
simple_bootstrap_node    | 2022-07-05T08:42:37.744Z	DEBUG	basichost	basic/basic_host.go:313	failed to fetch local IPv6 address	{"error": "no route found for ::"}
simple_bootstrap_node    | 2022-07-05T08:42:42.747Z	DEBUG	basichost	basic/basic_host.go:313	failed to fetch local IPv6 address	{"error": "no route found for ::"}
simple_bootstrap_node    | 2022-07-05T08:42:47.744Z	DEBUG	basichost	basic/basic_host.go:313	failed to fetch local IPv6 address	{"error": "no route found for ::"}
simple_bootstrap_node    | 2022-07-05T08:42:52.744Z	DEBUG	basichost	basic/basic_host.go:313	failed to fetch local IPv6 address	{"error": "no route found for ::"}
simple_bootstrap_node    | 2022-07-05T08:42:57.748Z	DEBUG	basichost	basic/basic_host.go:313	failed to fetch local IPv6 address	{"error": "no route found for ::"}
simple_bootstrap_node    | 2022-07-05T08:43:02.746Z	DEBUG	basichost	basic/basic_host.go:313	failed to fetch local IPv6 address	{"error": "no route found for ::"}
simple_bootstrap_node    | 2022-07-05T08:43:07.746Z	DEBUG	basichost	basic/basic_host.go:313	failed to fetch local IPv6 address	{"error": "no route found for ::"}
simple_bootstrap_node    | 2022-07-05T08:43:12.741Z	DEBUG	basichost	basic/basic_host.go:313	failed to fetch local IPv6 address	{"error": "no route found for ::"}
simple_bootstrap_node    | 2022-07-05T08:43:17.747Z	DEBUG	basichost	basic/basic_host.go:313	failed to fetch local IPv6 address	{"error": "no route found for ::"}
simple_bootstrap_node    | 2022-07-05T08:43:22.742Z	DEBUG	basichost	basic/basic_host.go:313	failed to fetch local IPv6 address	{"error": "no route found for ::"}
simple_bootstrap_node    | 2022-07-05T08:43:27.742Z	DEBUG	basichost	basic/basic_host.go:313	failed to fetch local IPv6 address	{"error": "no route found for ::"}
simple_bootstrap_node    | 2022-07-05T08:43:32.742Z	DEBUG	basichost	basic/basic_host.go:313	failed to fetch local IPv6 address	{"error": "no route found for ::"}
simple_bootstrap_node    | 2022-07-05T08:43:37.742Z	DEBUG	basichost	basic/basic_host.go:313	failed to fetch local IPv6 address	{"error": "no route found for ::"}
simple_bootstrap_node    | 2022-07-05T08:43:42.744Z	DEBUG	basichost	basic/basic_host.go:313	failed to fetch local IPv6 address	{"error": "no route found for ::"}
simple_bootstrap_node    | 2022-07-05T08:43:47.747Z	DEBUG	basichost	basic/basic_host.go:313	failed to fetch local IPv6 address	{"error": "no route found for ::"}
simple_bootstrap_node    | 2022-07-05T08:43:52.740Z	DEBUG	basichost	basic/basic_host.go:313	failed to fetch local IPv6 address	{"error": "no route found for ::"}
simple_bootstrap_node    | 2022-07-05T08:43:57.744Z	DEBUG	basichost	basic/basic_host.go:313	failed to fetch local IPv6 address	{"error": "no route found for ::"}
simple_bootstrap_node    | 2022-07-05T08:44:02.743Z	DEBUG	basichost	basic/basic_host.go:313	failed to fetch local IPv6 address	{"error": "no route found for ::"}
simple_bootstrap_node    | 2022-07-05T08:44:07.746Z	DEBUG	basichost	basic/basic_host.go:313	failed to fetch local IPv6 address	{"error": "no route found for ::"}
simple_bootstrap_node    | 2022-07-05T08:44:12.743Z	DEBUG	basichost	basic/basic_host.go:313	failed to fetch local IPv6 address	{"error": "no route found for ::"}
simple_bootstrap_node    | 2022-07-05T08:44:17.746Z	DEBUG	basichost	basic/basic_host.go:313	failed to fetch local IPv6 address	{"error": "no route found for ::"}
simple_bootstrap_node    | 2022-07-05T08:44:22.742Z	DEBUG	basichost	basic/basic_host.go:313	failed to fetch local IPv6 address	{"error": "no route found for ::"}
simple_bootstrap_node    | 2022-07-05T08:44:27.744Z	DEBUG	basichost	basic/basic_host.go:313	failed to fetch local IPv6 address	{"error": "no route found for ::"}
simple_bootstrap_node    | 2022-07-05T08:44:32.742Z	DEBUG	basichost	basic/basic_host.go:313	failed to fetch local IPv6 address	{"error": "no route found for ::"}
simple_bootstrap_node    | 2022-07-05T08:44:37.747Z	DEBUG	basichost	basic/basic_host.go:313	failed to fetch local IPv6 address	{"error": "no route found for ::"}
simple_bootstrap_node    | 2022-07-05T08:44:42.744Z	DEBUG	basichost	basic/basic_host.go:313	failed to fetch local IPv6 address	{"error": "no route found for ::"}
simple_bootstrap_node    | 2022-07-05T08:44:47.745Z	DEBUG	basichost	basic/basic_host.go:313	failed to fetch local IPv6 address	{"error": "no route found for ::"}
simple_bootstrap_node    | 2022-07-05T08:44:52.740Z	DEBUG	basichost	basic/basic_host.go:313	failed to fetch local IPv6 address	{"error": "no route found for ::"}
simple_bootstrap_node    | 2022-07-05T08:44:57.745Z	DEBUG	basichost	basic/basic_host.go:313	failed to fetch local IPv6 address	{"error": "no route found for ::"}
simple_bootstrap_node    | 2022-07-05T08:45:02.742Z	DEBUG	basichost	basic/basic_host.go:313	failed to fetch local IPv6 address	{"error": "no route found for ::"}
simple_bootstrap_node    | 2022-07-05T08:45:07.745Z	DEBUG	basichost	basic/basic_host.go:313	failed to fetch local IPv6 address	{"error": "no route found for ::"}
simple_bootstrap_node    | 2022-07-05T08:45:12.742Z	DEBUG	basichost	basic/basic_host.go:313	failed to fetch local IPv6 address	{"error": "no route found for ::"}
simple_bootstrap_node    | 2022-07-05T08:45:17.740Z	DEBUG	basichost	basic/basic_host.go:313	failed to fetch local IPv6 address	{"error": "no route found for ::"}
simple_bootstrap_node    | 2022-07-05T08:45:22.740Z	DEBUG	basichost	basic/basic_host.go:313	failed to fetch local IPv6 address	{"error": "no route found for ::"}
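`YAML::BadFile: /app/primihub_node0.yaml` means the node configuration files are not visible inside the containers, so each node aborts at startup (the `qemu: uncaught target signal` lines additionally show the x86 images are being emulated on the M1's ARM CPU, which this image version may not handle well). A sketch of the expected mount, assuming the YAML configs live in a `config` directory of the checkout (service name and paths are assumptions):

```yaml
# docker-compose.yml fragment (illustrative): each node needs its YAML
# config mounted at the exact path the binary aborts on.
services:
  node0:
    volumes:
      - ./config/primihub_node0.yaml:/app/primihub_node0.yaml
```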

/usr/include/c++/9/type_traits:883:12: error: default member initializer for 'grpc_core::XdsApi::Duration::nanos'

When building with g++ 9, I get this error: /usr/include/c++/9/type_traits:883:12: error: default member initializer for 'grpc_core::XdsApi::Duration::seconds' required before the end of its enclosing class


In file included from /usr/include/c++/9/bits/move.h:55,
from /usr/include/c++/9/bits/stl_pair.h:59,
from /usr/include/c++/9/bits/stl_algobase.h:64,
from /usr/include/c++/9/bits/stl_tree.h:63,
from /usr/include/c++/9/set:60,
from external/com_github_grpc_grpc/src/core/ext/xds/xds_api.h:26,
from external/com_github_grpc_grpc/src/core/ext/xds/xds_certificate_provider.h:24,
from external/com_github_grpc_grpc/src/core/ext/xds/xds_certificate_provider.cc:21:
/usr/include/c++/9/type_traits: In instantiation of 'struct std::is_constructible<grpc_core::XdsApi::RetryPolicy>':
/usr/include/c++/9/type_traits:2912:25: required from 'constexpr const bool std::is_constructible_v<grpc_core::XdsApi::RetryPolicy>'
/usr/include/c++/9/optional:604:66: required by substitution of 'template<class ... _Args, typename std::enable_if<is_constructible_v<grpc_core::XdsApi::RetryPolicy, _Args&& ...>, bool>::type > constexpr std::_Optional_base<grpc_core::XdsApi::RetryPolicy, true, true>::_Optional_base(std::in_place_t, _Args&& ...) [with _Args = {}; typename std::enable_if<is_constructible_v<grpc_core::XdsApi::RetryPolicy, _Args&& ...>, bool>::type = ]'
/usr/include/c++/9/type_traits:883:12: required from 'struct std::is_constructible<grpc_core::XdsApi::Route::RouteAction, const grpc_core::XdsApi::Route::RouteAction&>'
/usr/include/c++/9/type_traits:901:12: required from 'struct std::__is_copy_constructible_impl<grpc_core::XdsApi::Route::RouteAction, true>'
/usr/include/c++/9/type_traits:907:12: required from 'struct std::is_copy_constructible<grpc_core::XdsApi::Route::RouteAction>'
/usr/include/c++/9/type_traits:2918:25: required from 'constexpr const bool std::is_copy_constructible_v<grpc_core::XdsApi::Route::RouteAction>'
/usr/include/c++/9/variant:275:5: required from 'constexpr const bool std::__detail::__variant::_Traits<grpc_core::XdsApi::Route::UnknownAction, grpc_core::XdsApi::Route::RouteAction, grpc_core::XdsApi::Route::NonForwardingAction>::_S_copy_ctor'
/usr/include/c++/9/variant:1228:11: required from 'class std::variant<grpc_core::XdsApi::Route::UnknownAction, grpc_core::XdsApi::Route::RouteAction, grpc_core::XdsApi::Route::NonForwardingAction>'
external/com_github_grpc_grpc/src/core/ext/xds/xds_api.h:181:68: required from here
/usr/include/c++/9/type_traits:883:12: error: default member initializer for 'grpc_core::XdsApi::Duration::seconds' required before the end of its enclosing class
883 | struct is_constructible
| ^~~~~~~~~~~~~~~~
In file included from external/com_github_grpc_grpc/src/core/ext/xds/xds_certificate_provider.h:24,
from external/com_github_grpc_grpc/src/core/ext/xds/xds_certificate_provider.cc:21:
external/com_github_grpc_grpc/src/core/ext/xds/xds_api.h:56:21: note: defined here
56 | int64_t seconds = 0;
| ^~~~
In file included from /usr/include/c++/9/bits/move.h:55,
from /usr/include/c++/9/bits/stl_pair.h:59,
from /usr/include/c++/9/bits/stl_algobase.h:64,
from /usr/include/c++/9/bits/stl_tree.h:63,
from /usr/include/c++/9/set:60,
from external/com_github_grpc_grpc/src/core/ext/xds/xds_api.h:26,
from external/com_github_grpc_grpc/src/core/ext/xds/xds_certificate_provider.h:24,
from external/com_github_grpc_grpc/src/core/ext/xds/xds_certificate_provider.cc:21:
/usr/include/c++/9/type_traits:883:12: error: default member initializer for 'grpc_core::XdsApi::Duration::nanos' required before the end of its enclosing class
883 | struct is_constructible
| ^~~~~~~~~~~~~~~~
In file included from external/com_github_grpc_grpc/src/core/ext/xds/xds_certificate_provider.h:24,
from external/com_github_grpc_grpc/src/core/ext/xds/xds_certificate_provider.cc:21:
external/com_github_grpc_grpc/src/core/ext/xds/xds_api.h:57:19: note: defined here
57 | int32_t nanos = 0;
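This is a known incompatibility between older gRPC sources and the stricter `std::is_constructible` checks in libstdc++ 9: the trait is instantiated before `Duration`'s default member initializers are complete. The usual workarounds are building with a compiler version the pinned gRPC release supports, or upgrading the gRPC dependency. A sketch of pointing Bazel at another toolchain (the gcc-8 path and the `//:cli` target are assumptions; the target name is inferred from the `./bazel-bin/cli` invocation elsewhere on this page):

```shell
# Build with an alternative compiler accepted by the pinned gRPC.
# Bazel's C++ autoconfiguration honors CC; re-detection usually needs
# a `bazel clean --expunge` after changing it.
CC=/usr/bin/gcc-8 bazel build //:cli
```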

Failed to open file from primihub-platform

I launched primihub by docker-compose up , and the branch is develop.
I registered a resource through primihub-platform; the node receives it via gRPC, but opening the file fails as follows:

I20220710 03:08:52.751906     1 node.cc:196]  💻 Node listening on port: 50050

I20220710 03:10:41.090735    20 ds.cc:26] start to create dataset.

Failed to open file: /Users/shuming/data/upload/1/2022071011/e03ca5f4-eab6-49f5-83b9-80c2a949748c.csv

The resource is in attachment.
e03ca5f4-eab6-49f5-83b9-80c2a949748c.csv
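The path `/Users/shuming/data/upload/...` is a host (macOS) path; the node runs inside a container where that directory does not exist unless it is mounted. A sketch of exposing the upload directory to the node container (service name is an assumption; the paths come from the log above):

```yaml
# docker-compose.yml fragment (illustrative): mount the host upload
# directory into the node container at the same absolute path the
# platform registers, so the node can open the file it is told about.
services:
  node0:
    volumes:
      - /Users/shuming/data/upload:/Users/shuming/data/upload
```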

PIR and PSI operational issues

The default task runs successfully, but when I run the MPC, PSI, and PIR tasks following the documentation, the errors below are reported. How should the commands be modified?

default task:
image

MPC task:
image

PSI task:

image

PIR task:
image

Error occurred after running "docker-compose up"

The error messages are listed below:
ERROR: Version in "./docker-compose.yml" is unsupported. You might be seeing this error because you're using the wrong Compose file version. Either specify a supported version (e.g "2.2" or "3.3") and place your service definitions under the services key, or omit the version key and place your service definitions at the root of the file to use version 1.
For more on the Compose file format versions, see https://docs.docker.com/compose/compose-file/
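This error comes from an old docker-compose binary that does not understand the `version:` declared in the file. Either upgrade Compose, or lower the version key to one the installed binary supports:

```yaml
# docker-compose.yml fragment: declare a file-format version the
# installed docker-compose understands (e.g. "3.3" for Compose 1.x).
version: "3.3"
```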

I saw on the official WeChat account that PrimiHub implements ABY2.0, but I can only find ABY3-related code in the project. Is there an ABY2.0 implementation?


node connect failed

We have already deployed the nodes on different devices and made sure the required ports are open and valid, but the nodes do not seem to be able to communicate with each other. Please help, thank you.
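A quick way to confirm basic TCP reachability between nodes, before digging into PrimiHub itself, is to probe the gRPC port from the peer machine. The host and port below are placeholders; 50050 is the default node port seen elsewhere on this page:

```shell
# Probe a node's gRPC port using bash's /dev/tcp; prints "reachable"
# if a TCP connection can be opened within 3 seconds.
host=127.0.0.1 port=50050
if timeout 3 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
  echo reachable
else
  echo unreachable
fi
```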

Homo LR and Hetero XGB tasks fail

Hi all,
First of all, thanks for your work.
I want to learn more about PrimiHub and followed the steps at https://docs.primihub.com/docs/category/%E8%81%94%E9%82%A6%E5%AD%A6%E4%B9%A0, but I ran into some issues.

Homo LR
...
I20221125 09:23:33.820232 412549 service.cc:307] Found local meta: {
"data_type": 0,
"data_url": "node0:127.0.0.1:50050:data/FL/wisconsin.data",
"description": "breast_0",
"driver_type": "CSV",
"id": "bafkreie2sapjk7dbcb3oheynzt2kqpxq7rl7nz5c6hezailrqtomxmyjn4",
"schema": "{"1000025":[],"5":[],"1":[],"1":[],"1":[],"2":[],"1":[],"3":[],"1":[],"1":[],"2":[]}",
"visibility": 1
}
Get value request success
Get value request success
E20221125 09:23:34.952518 412549 service.cc:292] 🔍 ⏱️ Timeout while searching meta list.

Hetero XGB
...
2022-11-25 09:24:47.200 | ERROR | primihub.context:get_task_type:95 - Task type in all role must be the same.
2022-11-25 09:24:47.200 | ERROR | primihub.context:get_task_type:95 - Task type in all role must be the same.
E20221125 09:24:47.200563 412531 py_parser.cc:102] Failed to parse python: Unable to cast Python instance to C++ type (compile in debug mode for details)

I look forward to your help. Thanks.

When testing FL tasks with a locally compiled build, it always fails with "ImportError: /usr/local/lib/python3.8/lib-dynload/_posixsubprocess.cpython-38-x86_64-linux-gnu.so: undefined symbol: PyTuple_Type"

Problem description:
When testing FL tasks with a locally compiled build, it always fails with ImportError: /usr/local/lib/python3.8/lib-dynload/_posixsubprocess.cpython-38-x86_64-linux-gnu.so: undefined symbol: PyTuple_Type

What I have tried:
I have already recompiled Python and reinstalled all third-party packages, but the problem persists.

Logs:

  • Startup log:

image

  • Server-side error log:

image

Steps to reproduce:
1. Install the FL task environment:

cd primihub/python
pip install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple/
python setup.py install --user

2. In the compiled setup, start 3 nodes via start-server.sh.
3. Launch the task with: ./bazel-bin/cli --server="127.0.0.1:50050" --task_config_file="example/fl_hetero_xgb_task_conf.json"
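An `undefined symbol: PyTuple_Type` from a stdlib extension module almost always means the embedded interpreter was linked against a different libpython than the one whose `lib-dynload` modules it is loading (for example, a self-compiled Python 3.8 shadowing the system one). A quick consistency check on the interpreter itself:

```shell
# Verify that the python3 on PATH can itself import the failing module;
# if this works but the node still fails, the node binary is linking a
# different libpython than /usr/local/lib/python3.8.
python3 -c "import _posixsubprocess, sys; print(sys.executable)"
```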

Suggestion: add a CMakeLists.txt

While reading the code in CLion I installed the Bazel plugin, but it seems to have problems with network proxies. If the project shipped a CMakeLists.txt, CLion would work much better: code highlighting and jump-to-definition would be more convenient. VS Code has more or less the same issues. (It may also be a problem with my own development environment setup.)
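For Bazel projects, an alternative to maintaining a parallel CMakeLists.txt is generating a `compile_commands.json` for clangd-based IDE features; the Hedron compile-commands extractor is a commonly used tool for this (it is not currently wired into PrimiHub, so adding its WORKSPACE entry is a prerequisite):

```shell
# After adding hedron_compile_commands to the WORKSPACE, generate
# compile_commands.json for clangd-based indexing in CLion/VS Code.
bazel run @hedron_compile_commands//:refresh_all
```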

After running on a VM for a long time, it suddenly reports: kernel:NMI watchdog: BUG: soft lockup - CPU#3 stuck for 23s! [java:102195]

Describe the bug
docker logs -f primihub-node0

........
Message from syslogd@localhost at Apr 15 10:22:13 ...
kernel:NMI watchdog: BUG: soft lockup - CPU#3 stuck for 23s! [java:102195]

Message from syslogd@localhost at Apr 15 10:22:13 ...
kernel:NMI watchdog: BUG: soft lockup - CPU#1 stuck for 33s! [java:81590]

Message from syslogd@localhost at Apr 15 10:22:13 ...
kernel:NMI watchdog: BUG: soft lockup - CPU#2 stuck for 33s! [java:94258]

Message from syslogd@localhost at Apr 15 10:22:13 ...
kernel:NMI watchdog: BUG: soft lockup - CPU#0 stuck for 33s! [nginx:77370]

node can not work after docker compose(always restarting)

Describe the bug

git clone
cd primihub
docker compose up -d
docker ps

primihub-node0, primihub-node1 and primihub-node2 are always restarting

Expected behavior
hope node works

Log output
docker logs:
./primihub-node: error while loading shared libraries: libmysqlclient.so.21: cannot open shared object file: No such file or directory
./primihub-node: error while loading shared libraries: libmysqlclient.so.21: cannot open shared object file: No such file or directory
./primihub-node: error while loading shared libraries: libmysqlclient.so.21: cannot open shared object file: No such file or directory
./primihub-node: error while loading shared libraries: libmysqlclient.so.21: cannot open shared object file: No such file or directory
./primihub-node: error while loading shared libraries: libmysqlclient.so.21: cannot open shared object file: No such file or directory
./primihub-node: error while loading shared libraries: libmysqlclient.so.21: cannot open shared object file: No such file or directory
./primihub-node: error while loading shared libraries: libmysqlclient.so.21: cannot open shared object file: No such file or directory
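`libmysqlclient.so.21` is the MySQL 8 client library; the node binary was built against it but the runtime image does not provide it. A sketch of checking for it (the CentOS 7 package name is an assumption):

```shell
# See whether the dynamic linker can locate the MySQL 8 client library;
# prints the matching cache entries, or a note if it is absent.
ldconfig -p 2>/dev/null | grep libmysqlclient || echo "libmysqlclient not found"
# On CentOS 7, the MySQL community repo provides libmysqlclient.so.21:
#   yum install -y mysql-community-libs
```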

Environment (please complete the following information):

  • OS: CentOS Linux release 7.9.2009 (Core) (alicloud ecs)
  • Version of PrimiHub: github master branch
  • Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc aperfmperf eagerfpu pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 arat avx512vbmi pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq arch_capabilities
