polarismesh / polaris

Service Discovery and Governance Platform for Microservice and Distributed Architecture

Home Page: https://polarismesh.cn

License: Other

Go 97.72% Shell 1.39% Batchfile 0.17% PowerShell 0.52% Dockerfile 0.11% Smarty 0.04% Makefile 0.07%
traffic-control fault-tolerance microservice servicemesh service-discover load-balance rate-limit circuit-break service-register health-check

polaris's Introduction

Polaris: Service Discovery and Governance


English | 简体中文

README:

Visit Website to learn more

Introduction

Polaris is an open source system for service discovery and governance. It can be used to solve the problems of service management, traffic control, fault tolerance and configuration management in distributed and microservice architectures.

Functions:

  • service management: service discovery, service registration and health check
  • traffic control: customizable routing, load balancing, rate limiting and access control
  • fault tolerance: circuit breaking at the service, interface and instance level
  • config management: config version control, grayscale release and dynamic update

Features:

  • It is a one-stop solution that replaces a separate registry center, service mesh and config center.
  • It provides a multi-mode data plane, including SDK, development framework, Java agent and sidecar.
  • It integrates with the most frequently used frameworks, such as Spring Cloud, Dubbo and gRPC.
  • It supports the K8s service registry and automatic sidecar injection for proxy service mesh.

How to install

Visit Installation Guide to learn more

How to develop service

Polaris provides a multi-mode data plane, including SDK, development framework, Java agent and sidecar. You can select one or more modes to develop services according to your business requirements.

Use Polaris multi-language SDK and call Polaris Client API directly:
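As a rough illustration of this option, here is a minimal Go sketch of the consumer side, assuming the polaris-go SDK; the call names follow the polaris-go consumer examples and the namespace/service values are placeholders, so treat this as a sketch rather than the definitive API:

package main

import (
	"log"

	"github.com/polarismesh/polaris-go/api"
)

func main() {
	// Create a consumer API handle; it reads polaris.yaml from the working directory.
	consumer, err := api.NewConsumerAPI()
	if err != nil {
		log.Fatal(err)
	}
	defer consumer.Destroy()

	// Ask Polaris for one healthy instance of the target service.
	req := &api.GetOneInstanceRequest{}
	req.Namespace = "Test"
	req.Service = "dummy"
	resp, err := consumer.GetOneInstance(req)
	if err != nil {
		log.Fatal(err)
	}
	inst := resp.GetInstances()[0]
	log.Printf("resolved instance: %s:%d", inst.GetHost(), inst.GetPort())
}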

Use HTTP or RPC frameworks already integrating Polaris Java SDK:

Use HTTP or RPC frameworks already integrating Polaris Go SDK:

Use K8s service and sidecar:

How to integrate service gateway

You can integrate service gateways with Polaris service discovery and governance.

Chat group

Please scan the QR code to join the chat group.

polaris's People

Contributors

alexwanglei, andrewshan, chenyukang, chuntaojun, daheige, dependabot[bot], dhbin, edocevol, guiyangzhao, horizonzy, houseme, lepdou, liu-song, magederek, mi-cool, movebean, onecer, pemako, polaris-admin, qdsordinarydream, qnnn, ranchowang, reallovelei, samshan08, shichaoyuan, skyebefreeman, skyenought, skywli, wtifs, xiaohongxiedaima


polaris's Issues

Panic when registering a service that does not exist

Describe the bug
When I try to register a service instance, a panic is thrown when the registered service does not exist.

err log:
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0xaababb]

goroutine 114 [running]:
github.com/polarismesh/polaris-server/store/boltdbStore.(*serviceStore).GetSourceServiceToken(0xc00016cb90, 0xc0004048e8, 0x5, 0xc0004048f0, 0x7, 0x0, 0x0, 0x0)
/home/runner/work/polaris/polaris/store/boltdbStore/service.go:181 +0xbb
github.com/polarismesh/polaris-server/naming/batch.(*InstanceCtrl).batchVerifyInstances(0xc0000ce630, 0xc000482420, 0xc000482420, 0x0, 0x0, 0xc00004be00)
/home/runner/work/polaris/polaris/naming/batch/instance.go:385 +0x6ce
github.com/polarismesh/polaris-server/naming/batch.(*InstanceCtrl).registerHandler(0xc0000ce630, 0xc0000a3700, 0x1, 0x20, 0xc000063550, 0x10f9001)
/home/runner/work/polaris/polaris/naming/batch/instance.go:226 +0x2b5
github.com/polarismesh/polaris-server/naming/batch.(*InstanceCtrl).storeWorker(0xc0000ce630, 0xd3f838, 0xc000063540, 0x20)
/home/runner/work/polaris/polaris/naming/batch/instance.go:181 +0x24b
created by github.com/polarismesh/polaris-server/naming/batch.(*InstanceCtrl).Start
/home/runner/work/polaris/polaris/naming/batch/instance.go:93 +0x1b5

Environment

  • Version: v1.2.1
  • OS: linux

🔥 Extend Polaris Ecosystem

What is the feature you want to add?

Combine Polaris with surrounding ecosystem components.

Why do you want to add this feature?

I hope Polaris's capabilities can serve a wider ecosystem and improve the developer experience.

How to implement this feature?

Additional context

Community members who are willing to build this together can leave a comment in this issue claiming the task they want to take on, and then create a separate issue to start the task.

polaris-server crash when register api

Describe the bug
Following the steps in polaris-go, polaris-server crashes.

To Reproduce
Steps to reproduce the behavior.

All the configuration is default.

Using install.sh also crashes polaris-server, so I tried the commands manually. When the SDK tries to register a new API, it crashes with the following errors:

coder@pearl:~/polaris/polaris-standalone-release_v1.2.1.linux.amd64/polaris-server-release_v1.2.1.linux.amd64$ sudo ./polaris-server start
[INFO] load config from polaris-server.yaml
{Bootstrap:{Logger:{OutputPaths:[] ErrorOutputPaths:[] RotateOutputPath:log/polaris-server.log RotationMaxSize:500 RotationMaxAge:30 RotationMaxBackups:100 JSONEncoding:false LogGrpc:false Level:debug outputLevels: logCallers: stackTraceLevels:} StartInOrder:map[key:sz open:true] PolarisService:{EnableRegister:false ProbeAddress: Isolated:false Services:[0xc000067340 0xc000067380]}} APIServers:[{Name:httpserver Option:map[connLimit:map[maxConnLimit:5120 maxConnPerHost:128 openConnLimit:false purgeCounterExpired:5s purgeCounterInterval:10s whiteList:127.0.0.1] enablePprof:true listenIP:0.0.0.0 listenPort:8090] API:map[admin:{Enable:true Include:[]} client:{Enable:true Include:[discover register healthcheck]} console:{Enable:true Include:[default]}]} {Name:grpcserver Option:map[connLimit:map[maxConnLimit:5120 maxConnPerHost:128 openConnLimit:false] listenIP:0.0.0.0 listenPort:8091] API:map[client:{Enable:true Include:[discover register healthcheck]}]}] Cache:{Open:true Resources:[{Name:service Option:map[disableBusiness:false needMeta:true]} {Name:instance Option:map[disableBusiness:false needMeta:true]} {Name:routingConfig Option:map[]} {Name:rateLimitConfig Option:map[]} {Name:circuitBreakerConfig Option:map[]}]} Naming:{Auth:map[open:false] HealthCheck:{Open:true KvConnNum:0 KvServiceName: KvNamespace: KvPasswd: SlotNum:30 LocalHost: MaxIdle:20 IdleTimeout:120} Batch:map[deregister:map[concurrency:64 maxBatchCount:32 open:true queueSize:10240 waitTime:32ms] register:map[concurrency:64 maxBatchCount:32 open:true queueSize:10240 waitTime:32ms]]} Store:{Name:boltdbStore Option:map[path:./polaris.bolt]} Plugin:{CMDB:{Name: Option:map[]} RateLimit:{Name:token-bucket Option:map[api-limit:map[apis:[map[name:POST:/v1/naming/services rule:store-write] map[name:PUT:/v1/naming/services rule:store-write] map[name:POST:/v1/naming/services/delete rule:store-write] map[name:GET:/v1/naming/services rule:store-read] map[name:GET:/v1/naming/services/count rule:store-read] map[name:]] open:false rules:[map[limit:map[bucket:2000 open:true rate:1000] name:store-read] map[limit:map[bucket:1000 open:true rate:500] name:store-write]]] instance-limit:map[global:map[bucket:2 rate:2] open:true resource-cache-amount:1024] ip-limit:map[global:map[bucket:300 open:true rate:200] open:true resource-cache-amount:1024 white-list:[127.0.0.1]] remote-conf:false]} History:{Name:HistoryLogger Option:map[]} Statis:{Name:local Option:map[interval:60 outputPath:./statis]} DiscoverStatis:{Name:discoverLocal Option:map[interval:60 outputPath:./discover-statis]} ParsePassword:{Name: Option:map[]} Auth:{Name: Option:map[]} MeshResourceValidate:{Name: Option:map[]}}}
finish starting server
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0xaababb]

goroutine 41 [running]:
github.com/polarismesh/polaris-server/store/boltdbStore.(*serviceStore).GetSourceServiceToken(0xc0001eab80, 0xc0001e89f0, 0x5, 0xc0001e89f8, 0x4, 0x0, 0x0, 0x0)
        /home/runner/work/polaris/polaris/store/boltdbStore/service.go:181 +0xbb
github.com/polarismesh/polaris-server/naming/batch.(*InstanceCtrl).batchVerifyInstances(0xc0000d4000, 0xc00031ec60, 0xc00031ec60, 0x0, 0x0, 0xc000302e00)
        /home/runner/work/polaris/polaris/naming/batch/instance.go:385 +0x6ce
github.com/polarismesh/polaris-server/naming/batch.(*InstanceCtrl).registerHandler(0xc0000d4000, 0xc0000aa500, 0x1, 0x20, 0xc000067550, 0x10f9001)
        /home/runner/work/polaris/polaris/naming/batch/instance.go:226 +0x2b5
github.com/polarismesh/polaris-server/naming/batch.(*InstanceCtrl).storeWorker(0xc0000d4000, 0xd3f838, 0xc000067540, 0x4)
        /home/runner/work/polaris/polaris/naming/batch/instance.go:181 +0x24b
created by github.com/polarismesh/polaris-server/naming/batch.(*InstanceCtrl).Start
        /home/runner/work/polaris/polaris/naming/batch/instance.go:93 +0x1b5

Go code for registering a new API:

package main

import (
	"log"

	"github.com/polarismesh/polaris-go/api"
)

func main() {

	provider, err := api.NewProviderAPI()
	if nil != err {
		log.Fatal(err)
	}
	//defer provider.Destroy()

	request := &api.InstanceRegisterRequest{}
	request.Namespace = "Test"
	request.Service = "dummy"
	request.Host = "127.0.0.1"
	request.Port = 8093

	//set the instance ttl, server will set instance unhealthy when not receiving heartbeat after 2*ttl
	request.SetTTL(10)

	resp, err := provider.Register(request)
	if nil != err {
		log.Fatal(err)
	}
	log.Println(resp)
}

Expected behavior
Successfully register a new API.

Environment

  • Version:
    polaris-standalone-release_v1.2.1.linux.amd64
    polaris-go v1.0.0

  • OS:
    Linux pearl 5.8.0-1041-azure #44~20.04.1-Ubuntu SMP Fri Aug 20 20:41:09 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux

Additional context
Start with default config file polaris-server.yaml in release.

Logs are printed to different log files based on the type

What is the feature you want to add?

Logs are printed to different log files based on the type

example:
logs about health checking could go to polaris-health-check.log
logs about storage could go to polaris-store.log
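A minimal sketch of one way to do this in Go with zap (which polaris-server already logs through); the file names and log messages are illustrative only, not the project's actual logging setup:

package main

import "go.uber.org/zap"

// newScopedLogger builds a zap logger that writes to its own file, so each
// subsystem (health check, store, ...) gets a dedicated log.
func newScopedLogger(path string) (*zap.Logger, error) {
	cfg := zap.NewProductionConfig()
	cfg.OutputPaths = []string{path}
	return cfg.Build()
}

func main() {
	healthLog, err := newScopedLogger("polaris-health-check.log")
	if err != nil {
		panic(err)
	}
	storeLog, err := newScopedLogger("polaris-store.log")
	if err != nil {
		panic(err)
	}
	defer healthLog.Sync()
	defer storeLog.Sync()

	healthLog.Info("heartbeat received")
	storeLog.Info("instance persisted")
}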

Why do you want to add this feature?

When polaris-server runs for a long time it generates a large number of logs. If all types of logs are written to the same file, troubleshooting becomes cumbersome. Splitting the output files by category makes it easier to locate problems and to act on the logs.

How to implement this feature?

Additional context
Add any other context or screenshots about the feature request here.

support auto build and push docker image

What is the feature you want to add?

support auto build and push docker image

Why do you want to add this feature?

support auto build and push docker image

How to implement this feature?

use a GitHub Action to build and push the Docker image

Additional context
Add any other context or screenshots about the feature request here.

MemoryHealthChecker Query will panic

Describe the bug

func (r *MemoryHealthChecker) Query(request *plugin.QueryRequest) (*plugin.QueryResponse, error) {
	recordValue, ok := r.hbRecords.Load(request.InstanceId)
	if !ok {
		return &plugin.QueryResponse{
			LastHeartbeatSec: 0,
		}, nil
	}
	record := recordValue.(*HeartbeatRecord)
	return &plugin.QueryResponse{
		Server:           record.Server,
		LastHeartbeatSec: record.CurTimeSec,
	}, nil
}

This code makes polaris-server panic at: record := recordValue.(*HeartbeatRecord)

The correct code is record := recordValue.(HeartbeatRecord)
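More defensively, a comma-ok type assertion avoids the panic even if the stored type changes again. A minimal self-contained sketch, with HeartbeatRecord reduced to the fields this issue shows:

package main

import (
	"log"
	"sync"
)

// HeartbeatRecord is reduced to the fields referenced in this issue.
type HeartbeatRecord struct {
	Server     string
	CurTimeSec int64
}

func main() {
	var hbRecords sync.Map
	hbRecords.Store("instance-1", HeartbeatRecord{Server: "127.0.0.1", CurTimeSec: 1632500970})

	value, ok := hbRecords.Load("instance-1")
	if !ok {
		return
	}
	// Comma-ok assertion: a mismatched type yields ok=false instead of a panic.
	record, ok := value.(HeartbeatRecord)
	if !ok {
		log.Printf("unexpected record type %T", value)
		return
	}
	log.Println(record.Server, record.CurTimeSec)
}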

To Reproduce

  1. set the health check type to memory:
healthcheck:
  open: true
  service: polaris.checker
  slotNum: 30
  checkers:
    - name: heartbeatMemory
  2. run polaris-server
  3. register an instance and send heartbeats to polaris-server

Expected behavior
A clear and concise description of what you expected to happen.

Environment

  • Version: [e.g. v1.0.0]
  • OS: [e.g. CentOS8]

Additional context
Add any other context about the problem here.

If registering an instance reports that the service is not found, Polaris can auto-create a simple service record

What is the feature you want to add?

Automatically create the service from the information in the request when an instance is registered against a service that does not exist.

Why do you want to add this feature?

To provide the same registration experience as Eureka, ZooKeeper, Nacos, etc.

How to implement this feature?

Additional context
Add any other context or screenshots about the feature request here.

Hope Polaris provides Dockerfile script

What is the feature you want to add?

Hope Polaris provides Dockerfile script

Why do you want to add this feature?

Users can extend their own images based on the official Dockerfile defined by Polaris, or build images from it and push them to their own image registry.

How to implement this feature?

Additional context
Add any other context or screenshots about the feature request here.

Support multiple database storage types

What is the feature you want to add?

Support multiple database storage types

e.g. SQL Server, PostgreSQL, Oracle, Kingbase, Dameng ...

  • support SQL Server as a store backend
  • support PostgreSQL as a store backend
  • support Oracle as a store backend
  • support an embedded distributed KV store or database as a store backend

We hope Polaris's backend storage can support not only MySQL but also SQL Server, PostgreSQL, Oracle, Kingbase, Dameng and others; the SQL Server, PostgreSQL and Oracle integrations can be prioritized first.

Community members who are interested are welcome to participate; related discussion can take place in this issue.

Why do you want to add this feature?

  1. Support domestic database
  2. Meet the database selection needs of different users

How to implement this feature?

Additional context
Add any other context or screenshots about the feature request here.

add mac and windows installation support

What is the feature you want to add?
support standalone installation on macOS
support standalone installation on Windows

Why do you want to add this feature?
users want to quickly try out or debug Polaris on their laptops

How to implement this feature?
change the build.sh
change the workflow scripts

Additional context
Add any other context or screenshots about the feature request here.

Support reading remote circuit breaker configuration rules

What is the feature you want to add?

Support reading remote circuit breaker configuration rules

Why do you want to add this feature?

How to implement this feature?

Allow users to set different circuit breaker rules for different services in the console

Additional context
Add any other context or screenshots about the feature request here.

Add shell operation ability to polaris

What is the feature you want to add?

Allow users to interact with Polaris through shell commands, for example:

  1. View services or service instances
  2. View service governance rule information
  3. Perform operations and maintenance tasks on Polaris, such as forcing an internal cache refresh, restarting an apiserver, etc.

Why do you want to add this feature?

Provide command-line interaction with Polaris, offering another way to operate, maintain and explore it.

How to implement this feature?

Additional context
Add any other context or screenshots about the feature request here.

Run unit test failed

Describe the bug

Running circuitbreaker_test.go fails in Test_circuitBreakerStore_UpdateCircuitBreaker, and running platform_test.go fails in Test_platformStore_GetPlatforms.

To Reproduce

Just execute:

go test -timeout 30s -run ^Test_circuitBreakerStore_UpdateCircuitBreaker$ github.com/polarismesh/polaris-server/store/boltdb

Expected behavior

The unit tests should pass.

Environment

  • Version: 1.2.0

Additional context
Add any other context about the problem here.

Helmchart support with default namespace

What is the feature you want to add?
I would like to deploy the cluster to K8s with a Helm chart, and with a default namespace like "polarishmesh".

Why do you want to add this feature?

A Helm chart provides more flexibility for tweaking deployment variables.
A namespace other than "default" is more appropriate for a registry component.

How to implement this feature?

Additional context
Add any other context or screenshots about the feature request here.

RegisterInstance panics polaris-server if the service does not exist

Describe the bug
RegisterInstance panics polaris-server if the service does not exist.

func (ss *serviceStore) GetSourceServiceToken(name string, namespace string) (*model.Service, error) {

Both the returned service and error are nil in this case, and the caller does not handle it.
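A minimal sketch of the missing check in the caller, assuming only what this issue states (GetSourceServiceToken returns a nil service and a nil error when the service does not exist); the surrounding types and function names are illustrative, not the actual polaris-server code:

package main

import "fmt"

// Service stands in for the real model.Service; only the field this sketch needs.
type Service struct {
	Token string
}

// getSourceServiceToken mimics the behaviour described in this issue:
// it returns (nil, nil) when the service does not exist.
func getSourceServiceToken(name, namespace string) (*Service, error) {
	return nil, nil
}

func verifyInstance(name, namespace string) error {
	svc, err := getSourceServiceToken(name, namespace)
	if err != nil {
		return err
	}
	if svc == nil {
		// Without this check the caller reads svc.Token and dereferences a
		// nil pointer, which is the panic shown in the stack trace.
		return fmt.Errorf("service %s not found in namespace %s", name, namespace)
	}
	_ = svc.Token
	return nil
}

func main() {
	if err := verifyInstance("dummy", "Test"); err != nil {
		fmt.Println(err)
	}
}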

To Reproduce

Register an instance with a service name that does not exist in polaris-server.

Expected behavior

The server should respond with: service not exist.

Environment

  • Version: 1.2.1
  • OS: Mac OS

Additional context
Add any other context about the problem here.

Optimize the console data reading interface

What is the feature you want to add?

Optimize the console data reading interface

Why do you want to add this feature?

Currently the console reads data directly from the database. Some optimizations could be made: some data could be read from the cache instead, and the related SQL could be optimized.

How to implement this feature?

Additional context
Add any other context or screenshots about the feature request here.

Improve the data operation authority control of the polaris console

What is the feature you want to add?

Improve the data operation authority control of the polaris console

Why do you want to add this feature?

At present anyone can operate the console, which is a significant security problem. We therefore need to strengthen the console's control over data operation permissions, either by adding a built-in Polaris permission model or by implementing an LDAP-based auth plugin inside Polaris.

How to implement this feature?

Additional context
Add any other context or screenshots about the feature request here.

Optimize the build script

What is the feature you want to add?

Why do you want to add this feature?

How to implement this feature?

Additional context
Add any other context or screenshots about the feature request here.

install process fail for default configuration

Describe the bug
We used the default configuration to install the cluster version; the server reported an error on startup (screenshot omitted).

To Reproduce
install as the doc said

Expected behavior
start successfully

Environment

  • Version: v1.3.0
  • OS: CentOS8

Additional context
Add any other context about the problem here.

Support multiple cache update strategies

What is the feature you want to add?

Support multiple cache update strategies

Why do you want to add this feature?

With different persistent storage databases, the cache layer should support different cache update strategies, such as incremental update and full update.

How to implement this feature?

Add a cache strategy field to the cache, and have the update logic follow the configured strategy.

Additional context
Add any other context or screenshots about the feature request here.

No output after running ./tool/p.sh

❯ ./tool/install.sh
++ pwd
+ curpath=/Users/iceewei/projects/go_proj/src/github.com/polaris/polaris-server-release_v1.0.1..
+ '[' . == / ']'
++ pwd
++ dirname ./tool/install.sh
+ dir=/Users/iceewei/projects/go_proj/src/github.com/polaris/polaris-server-release_v1.0.1.././tool
+ cd /Users/iceewei/projects/go_proj/src/github.com/polaris/polaris-server-release_v1.0.1.././tool/..
++ pwd
+ workdir=/Users/iceewei/projects/go_proj/src/github.com/polaris/polaris-server-release_v1.0.1..
+ source tool/include
++ server_name=polaris-server
++ cmdline='./polaris-server start'
+ chmod 755 tool/check.sh tool/install.sh tool/p.sh tool/reload.sh tool/start.sh tool/stop.sh tool/uninstall.sh polaris-server
+ item='/Users/iceewei/projects/go_proj/src/github.com/polaris/polaris-server-release_v1.0.1../tool/check.sh >>/Users/iceewei/projects/go_proj/src/github.com/polaris/polaris-server-release_v1.0.1../log/check.log 2>&1'
++ crontab -l
++ grep '/Users/iceewei/projects/go_proj/src/github.com/polaris/polaris-server-release_v1.0.1../tool/check.sh >>/Users/iceewei/projects/go_proj/src/github.com/polaris/polaris-server-release_v1.0.1../log/check.log 2>&1'
++ grep -v '#'
++ wc -l
crontab: no crontab for iceewei
+ exist=' 0'
+ '[' ' 0' == 0 ']'
+ cd /Users/iceewei/projects/go_proj/src/github.com/polaris/polaris-server-release_v1.0.1..

❯ ./tool/p.sh

Support ARM 64-bit architectures

What is the feature you want to add?

  • Support ARM 64-bit architectures

Why do you want to add this feature?

  • Support ARM 64-bit architectures

How to implement this feature?

  • Cross-platform build

Additional context

Since Huawei is sanctioned, we run some services on the Huawei Kunpeng ARM64 platform to support Huawei. Others may need this as well.

The stand-alone version of Polaris failed to pull the service instance.

Describe the bug
Using the SDK to pull the list of service instances, only an empty list is returned.

To Reproduce
It can be reproduced using Polaris Go SDK.

Expected behavior
The registered instance can be pulled normally.

Environment

  • Version: polaris-standalone-release_v1.2.0
  • OS: All

Support giving aliases to a service, so users can access it from different contexts

What is the feature you want to add?
A Polaris service should be able to have multiple aliases, so users in any context can access the same service through a different name.

Why do you want to add this feature?
For some common services, such as a database or KV store, different teams want to see the service under a name from their own perspective, which makes it easier to manage.

How to implement this feature?

Additional context
Add any other context or screenshots about the feature request here.

Some minor issues after trying out Polaris

I saw the project introduction on Juejin; many thanks to the folks at Tencent for open-sourcing it. 👍 It felt great to use, and I hope to have the chance to take part in building the project ecosystem.

Below are some issues found during the trial.

  1. Polaris standalone crashes for no apparent reason about 20 minutes after startup

After startup I created a service without registering any instances, and then the server crashed.

Related logs:

stdout

[INFO] load config from polaris-server.yaml
{Bootstrap:{Logger:{OutputPaths:[] ErrorOutputPaths:[] RotateOutputPath:log/polaris-server.log RotationMaxSize:500 RotationMaxAge:30 RotationMaxBackups:100 JSONEncoding:false LogGrpc:false Level:debug outputLevels: logCallers: stackTraceLevels:} StartInOrder:map[key:sz open:true] PolarisService:{EnableRegister:false ProbeAddress: Isolated:false Services:[0xc000107340 0xc000107380]}} APIServers:[{Name:httpserver Option:map[connLimit:map[maxConnLimit:5120 maxConnPerHost:128 openConnLimit:false purgeCounterExpired:5s purgeCounterInterval:10s whiteList:127.0.0.1] enablePprof:true listenIP:0.0.0.0 listenPort:8090] API:map[admin:{Enable:true Include:[]} client:{Enable:true Include:[discover register healthcheck]} console:{Enable:true Include:[default]}]} {Name:grpcserver Option:map[connLimit:map[maxConnLimit:5120 maxConnPerHost:128 openConnLimit:false] listenIP:0.0.0.0 listenPort:8091] API:map[client:{Enable:true Include:[discover register healthcheck]}]}] Cache:{Open:true Resources:[{Name:service Option:map[disableBusiness:false needMeta:true]} {Name:instance Option:map[disableBusiness:false needMeta:true]} {Name:routingConfig Option:map[]} {Name:rateLimitConfig Option:map[]} {Name:circuitBreakerConfig Option:map[]}]} Naming:{Auth:map[open:false] HealthCheck:{Open:true KvConnNum:0 KvServiceName: KvNamespace: KvPasswd: SlotNum:30 LocalHost: MaxIdle:20 IdleTimeout:120} Batch:map[deregister:map[concurrency:64 maxBatchCount:32 open:true queueSize:10240 waitTime:32ms] register:map[concurrency:64 maxBatchCount:32 open:true queueSize:10240 waitTime:32ms]]} Store:{Name:boltdbStore Option:map[path:./polaris.bolt]} Plugin:{CMDB:{Name: Option:map[]} RateLimit:{Name:token-bucket Option:map[api-limit:map[apis:[map[name:POST:/v1/naming/services rule:store-write] map[name:PUT:/v1/naming/services rule:store-write] map[name:POST:/v1/naming/services/delete rule:store-write] map[name:GET:/v1/naming/services rule:store-read] map[name:GET:/v1/naming/services/count rule:store-read] map[name:]] open:false rules:[map[limit:map[bucket:2000 open:true rate:1000] name:store-read] map[limit:map[bucket:1000 open:true rate:500] name:store-write]]] instance-limit:map[global:map[bucket:2 rate:2] open:true resource-cache-amount:1024] ip-limit:map[global:map[bucket:300 open:true rate:200] open:true resource-cache-amount:1024 white-list:[127.0.0.1]] remote-conf:false]} History:{Name:HistoryLogger Option:map[]} Statis:{Name:local Option:map[interval:60 outputPath:./statis]} DiscoverStatis:{Name:discoverLocal Option:map[interval:60 outputPath:./discover-statis]} ParsePassword:{Name: Option:map[]} Auth:{Name: Option:map[]} MeshResourceValidate:{Name: Option:map[]}}}
finish starting server
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x16a5fdb]

goroutine 96 [running]:
github.com/polarismesh/polaris-server/store/boltdbStore.(*serviceStore).GetSourceServiceToken(0xc0001dcbe0, 0xc00030c890, 0x5, 0xc00030c898, 0x4, 0x0, 0x0, 0x0)
        /home/runner/work/polaris/polaris/store/boltdbStore/service.go:181 +0xbb
github.com/polarismesh/polaris-server/naming/batch.(*InstanceCtrl).batchVerifyInstances(0xc0002d4000, 0xc0000d2bd0, 0xc0000d2bd0, 0x0, 0x0, 0xc00007ae00)
        /home/runner/work/polaris/polaris/naming/batch/instance.go:385 +0x6ce
github.com/polarismesh/polaris-server/naming/batch.(*InstanceCtrl).registerHandler(0xc0002d4000, 0xc00029a400, 0x1, 0x20, 0xc000107550, 0x1cf2001)
        /home/runner/work/polaris/polaris/naming/batch/instance.go:226 +0x2b5
github.com/polarismesh/polaris-server/naming/batch.(*InstanceCtrl).storeWorker(0xc0002d4000, 0x1938c58, 0xc000107540, 0xc)
        /home/runner/work/polaris/polaris/naming/batch/instance.go:181 +0x24b
created by github.com/polarismesh/polaris-server/naming/batch.(*InstanceCtrl).Start
        /home/runner/work/polaris/polaris/naming/batch/instance.go:93 +0x1b5
polaris-server.log
2021-09-25T00:29:30.821075Z     debug   timewheel/timewheel.go:128      ckv task timewheel task start time:1632500970, use time:470ns, exec num:0
2021-09-25T00:29:31.819654Z     debug   timewheel/timewheel.go:128      ckv task timewheel task start time:1632500971, use time:397ns, exec num:0
2021-09-25T00:29:31.819618Z     info    cache/instance.go:129   [Cache][Instance] get more instances    {"update": 0, "delete": 0, "last": "1970-01-01T08:00:00.000000Z", "used": "10.961µs"}
2021-09-25T00:29:31.819654Z     debug   timewheel/timewheel.go:128      db task timewheel task start time:1632500971, use time:580ns, exec num:0
2021-09-25T00:29:31.819743Z     info    cache/service.go:129    [Cache][Service] get more services      {"update": 1, "delete": 0, "last": "2021-09-25T00:10:18.000000Z", "used": "154.389µs"}
2021-09-25T00:29:32.307459Z     info    grpcserver/server.go:376        receive request {"client-address": "127.0.0.1:65095", "user-agent": "grpc-go/1.22.0", "request-id": "52831770710", "method": "/v1.PolarisGRPC/ReportClient"}
2021-09-25T00:29:32.820542Z     debug   timewheel/timewheel.go:128      db task timewheel task start time:1632500972, use time:1.978µs, exec num:0
2021-09-25T00:29:32.820986Z     debug   timewheel/timewheel.go:128      ckv task timewheel task start time:1632500972, use time:1.324µs, exec num:0
2021-09-25T00:29:32.821241Z     info    cache/service.go:129    [Cache][Service] get more services      {"update": 1, "delete": 0, "last": "2021-09-25T00:10:18.000000Z", "used": "416.22µs"}
2021-09-25T00:29:32.821146Z     info    cache/instance.go:129   [Cache][Instance] get more instances    {"update": 0, "delete": 0, "last": "1970-01-01T08:00:00.000000Z", "used": "239.593µs"}
2021-09-25T00:29:32.830494Z     info    grpcserver/server.go:376        receive request {"client-address": "127.0.0.1:65096", "user-agent": "grpc-go/1.22.0", "request-id": "1379206009", "method": "/v1.PolarisGRPC/RegisterInstance"}
2021-09-25T00:29:32.842209Z     info    batch/instance.go:203   [Batch] Start batch creating instances count: 1
  2. Could the standalone version provide a docker-compose way to start?

Personally I feel that distributing one big zip that bundles components such as prometheus/pushgateway is not very elegant. Also, if the application crashes and you then stop everything with uninstall.sh, all data is lost. Starting via docker-compose would be more flexible.

  3. The API naming in the Go SDK is not very elegant

A call like api.NewProviderAPI() is a bit confusing: it is hard to tell at a glance which API it belongs to (especially since most applications integrating Polaris are servers, and most of them already have an api package for their own RPC or HTTP APIs). Following the style of zap.NewProduction() and gin.Default(), writing it as polaris.NewProviderAPI() would be more intuitive.

  4. Some small bugs in the documentation

https://polarismesh.cn/zh/doc/%E5%BF%AB%E9%80%9F%E5%85%A5%E9%97%A8/%E4%BD%BF%E7%94%A8polaris-go.html#%E9%85%8D%E7%BD%AE%E6%9C%8D%E5%8A%A1%E7%AB%AF%E5%9C%B0%E5%9D%80
The docs say to add a polaris.yml file in the application's working directory to configure the server address, but the actual file name is polaris.yaml.

  5. Unable to create a new namespace in the console?

The password is not checked in the MySQL config

Describe the bug
The MySQL password is not required to be set. My local MySQL password is empty.

 mac@apples-MacBook-Pro ~ mysql -uroot
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 1024
Server version: 8.0.26 Homebrew

Copyright (c) 2000, 2021, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql>

So I configured MySQL like this:

  name: defaultStore
  option:
    master:
      dbType: mysql
      dbUser: root
      dbPwd:
      dbAddr: 127.0.0.1
      dbName: polaris_server
      maxOpenConns: -1
      maxIdleConns: -1
      connMaxLifetime: 300 # 单位秒
      txIsolationLevel: 2 #LevelReadCommitted

err log:

err: Config Plugin defaultStore missing database param
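For reference, the MySQL driver commonly used in Go accepts an empty password in the DSN, so the store config check could treat dbPwd as optional rather than required. A small sketch, assuming github.com/go-sql-driver/mysql and the default MySQL port 3306 (the port is not shown in the config above):

package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/go-sql-driver/mysql"
)

func main() {
	// Values mirror the config above; an empty password yields "root:@tcp(...)/..."
	// which is a valid DSN for the driver.
	user, pwd, addr, name := "root", "", "127.0.0.1:3306", "polaris_server"
	dsn := fmt.Sprintf("%s:%s@tcp(%s)/%s", user, pwd, addr, name)

	db, err := sql.Open("mysql", dsn)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// sql.Open does not connect; Ping verifies the empty-password login works.
	if err := db.Ping(); err != nil {
		log.Fatal(err)
	}
	log.Println("connected with empty password")
}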

Support the server to record service event information

What is the feature you want to add?

Support the server to record service event information

Why do you want to add this feature?

By recording status change events for service instances, users have a place to observe what happened to an instance, and they can conveniently perform additional operations based on these events.

How to implement this feature?

Additional context
Add any other context or screenshots about the feature request here.

In single-machine Bolt storage mode, dirty data is left in the memory cache after instances are deleted

Describe the bug
In single-machine Bolt storage mode, dirty data is left in the memory cache after instances are deleted

To Reproduce

  1. start polaris-server in single-machine mode with the bolt store
  2. create some instances via the polaris-server open API
  3. query all instances via the polaris-server client open API
  4. delete some instances via the polaris-server open API
  5. wait some time
  6. query all instances via the polaris-server client open API
  7. observe that the deleted instances still appear in the query-all response until polaris-server is restarted

code line :

if !item.Valid {
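The handling expected around that line, sketched with illustrative names (this is not the actual polaris-server cache code): when the store marks an instance as no longer valid, the cache entry should be evicted rather than kept, otherwise deleted instances linger until a restart.

package main

import (
	"fmt"
	"sync"
)

// instance is a stand-in for the cached instance type; Valid mirrors the
// item.Valid flag referenced above.
type instance struct {
	ID    string
	Valid bool
}

// applyUpdate folds a batch of store results into the in-memory cache,
// evicting entries the store has marked invalid (i.e. deleted).
func applyUpdate(cache *sync.Map, items []instance) {
	for _, item := range items {
		if !item.Valid {
			cache.Delete(item.ID) // deleted in the store: drop it from the cache too
			continue
		}
		cache.Store(item.ID, item)
	}
}

func main() {
	var cache sync.Map
	applyUpdate(&cache, []instance{{ID: "a", Valid: true}, {ID: "b", Valid: true}})
	applyUpdate(&cache, []instance{{ID: "b", Valid: false}}) // "b" was deleted
	cache.Range(func(k, _ interface{}) bool {
		fmt.Println("cached:", k) // only "a" remains
		return true
	})
}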

Expected behavior

The cache information is updated correctly after the persistent store removes the instance

Environment

  • Version: [e.g. v1.2.2]
  • OS: tlinux2.0

Additional context
Add any other context about the problem here.

Unable to clean up the service cache of a namespace that has been deleted, causing a memory leak

What is the feature you want to add?

The service cache of a namespace that has been deleted cannot currently be cleaned up, which causes a memory leak.

Why do you want to add this feature?

It causes a memory leak.

How to implement this feature?

Pre-conditions

  1. When deleting a Namespace or Service resource, all child resources under it must already be cleaned up. For example, a Namespace can only be deleted once there is no Service left under it.

One possible fix (see the sketch below)

  1. ServiceCache is responsible for refreshing service data.
  2. On every ServiceCache update, pull the Namespace list and reconcile it against the ServiceCache name index, cleaning up dirty entries.
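A hedged sketch of the reconciliation in step 2, using illustrative names rather than the real polaris-server cache structures:

package main

import (
	"fmt"
	"sync"
)

// reconcileNamespaces drops cached service-name sets whose namespace no
// longer exists; existing is the freshly pulled namespace list.
func reconcileNamespaces(names *sync.Map, existing map[string]struct{}) {
	names.Range(func(key, _ interface{}) bool {
		ns := key.(string)
		if _, ok := existing[ns]; !ok {
			names.Delete(ns) // namespace was deleted: clean its service entries
		}
		return true
	})
}

func main() {
	var names sync.Map
	names.Store("Test", map[string]struct{}{"dummy": {}})
	names.Store("Deleted", map[string]struct{}{"old-svc": {}})

	reconcileNamespaces(&names, map[string]struct{}{"Test": {}})
	names.Range(func(k, _ interface{}) bool {
		fmt.Println("kept namespace:", k) // only Test remains
		return true
	})
}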

Additional context
Add any other context or screenshots about the feature request here.

feat: remove l5protocol port in polaris-server.yaml

What is the feature you want to add?
By default polaris-server.yaml should not expose the l5protocol port; users who want this feature would need to enable the config manually.

Why do you want to add this feature?
Most users won't use l5protocol, so it should not occupy a port on startup.

How to implement this feature?
Comment out the l5protocol config.

Additional context
Add any other context or screenshots about the feature request here.

install-darwin.sh does not check the ports used by Prometheus and pushgateway in its checkPort step

Describe the bug
install-darwin.sh does not check the ports used by Prometheus and pushgateway in its checkPort step.

To Reproduce

  1. start some process listening on ports 9090 and 9091
  2. run install-darwin.sh

Expected behavior
If 9090 or 9091 is already in use, install-darwin.sh should exit.

Environment

  • Version: [1.2.0]
  • OS: [e.g. Mac OS]

Additional context
Add any other context about the problem here.

Report monitoring metrics to Prometheus

What is the feature you want to add?

Report monitoring metrics to Prometheus.

Why do you want to add this feature?

Report monitoring metrics to Prometheus.
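One common way to do this in Go is with the official Prometheus client: register metrics and expose a /metrics endpoint for scraping. The metric name and port below are placeholders, not settings from polaris-server:

package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// registerRequests counts handled register requests; the name is illustrative.
var registerRequests = prometheus.NewCounter(prometheus.CounterOpts{
	Name: "polaris_register_requests_total",
	Help: "Number of instance register requests handled.",
})

func main() {
	prometheus.MustRegister(registerRequests)
	registerRequests.Inc() // call this wherever a register request is handled

	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe(":9100", nil))
}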

How to implement this feature?

Additional context
Add any other context or screenshots about the feature request here.

In the stand-alone boltdb storage mode, rate-limiting-related storage operations are not implemented

Describe the bug

In the stand-alone boltdb storage mode, the rate-limiting-related storage operations are not implemented.

To Reproduce

  1. Use boltdb storage mode to start stand-alone Polaris
  2. create a service
  3. create a rate limit rule and save it
  4. query the rate limit rule list for this service; it returns an empty list

Expected behavior
A clear and concise description of what you expected to happen.

Environment

  • Version: [e.g. v1.0.0]
  • OS: [e.g. CentOS8]

Additional context
Add any other context about the problem here.

Instance health status is not right.

Describe the bug
Currently the health check callback is only triggered by client heartbeat requests.
If the client does not send heartbeat requests, the check task is never triggered.

To Reproduce

  1. register an instance that uses health check.
  2. stop polaris.
  3. stop the client.
  4. start polaris.
  5. observe the instance health status; it is still healthy.

Environment

  • Version: v_1.2.1
  • OS: macOs

Fuzzy query does not work when using boltdb

Describe the bug
Fuzzy-querying the service list does not show any services, even though the services exist.

To Reproduce
Choose fuzzy query in the console when Polaris uses boltdb.

Expected behavior

Environment

  • Version: v_1.2.1
  • OS: mac

Due to incorrect use of boltdb, an internal boltdb deadlock occurs

Describe the bug

Due to incorrect use of boltdb, an internal boltdb deadlock occurs

  1. Only one write transaction is allowed at a time, and the boltdb tx object created by Create Transaction is not passed down to the lower layers for reuse
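For context, bolt serializes writers: a second write transaction opened while one is already in flight simply blocks. A minimal sketch of the safe pattern implied here (do all nested writes inside one db.Update, reusing the same tx), assuming the go.etcd.io/bbolt package and illustrative bucket/key names:

package main

import (
	"log"

	bolt "go.etcd.io/bbolt"
)

// saveRateLimit performs every write of one logical operation inside a single
// transaction. Calling db.Update again from inside this callback would wait
// forever for the outer write transaction to finish (a self-deadlock).
func saveRateLimit(db *bolt.DB, key, value []byte) error {
	return db.Update(func(tx *bolt.Tx) error {
		b, err := tx.CreateBucketIfNotExists([]byte("ratelimit"))
		if err != nil {
			return err
		}
		return b.Put(key, value)
	})
}

func main() {
	db, err := bolt.Open("./sketch.bolt", 0600, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
	if err := saveRateLimit(db, []byte("rule-1"), []byte(`{"limit":100}`)); err != nil {
		log.Fatal(err)
	}
}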

To Reproduce

  1. start polaris in standalone mode and use boltdb store plugin
  2. create one service
  3. create one RateLimitConfig
  4. polaris-server stops responding

Expected behavior

  1. The ratelimit_config should be created successfully

Environment

  • Version: [e.g. v1.3.0]
  • OS: [e.g. CentOS8]

Additional context

If a slice is empty, protobuf JSON marshalling leads to nil

Describe the bug

If a slice is empty, protobuf JSON marshalling leads to nil (the field is omitted from the output).

To Reproduce

type A struct  {
 arr []int
}

m := jsonpb.Marshaler{Indent: " "}
m.Marshal(&A{arr : make([]int, 0, 0)})

Expected behavior

type A struct  {
 arr []int
}

m := jsonpb.Marshaler{Indent: " ", EmitDefaults: true}
m.Marshal(&A{arr : make([]int, 0, 0)})

With EmitDefaults set to true, the empty slice is emitted as [] instead of being omitted.

Environment

  • Version: [e.g. v1.0.0]
  • OS: [e.g. CentOS8]

Additional context
Add any other context about the problem here.

add boltdb store

What is the feature you want to add?
boltdb store

Why do you want to add this feature?
to meet the requirement of a standalone installation

How to implement this feature?
implement a new store plugin to support it

Additional context
Add any other context or screenshots about the feature request here.

Optimize the metric data calculation of the server-side apiserver, and use Prometheus's own metric processing

What is the feature you want to add?

1. Optimize the metric data calculation of the server-side apiserver, and use Prometheus's own metric processing

Why do you want to add this feature?

Use Prometheus's own latency instrumentation to collect and calculate metrics, making the results more accurate and reliable.
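As an illustration of what Prometheus's own latency instrumentation looks like in Go, a histogram plus prometheus.NewTimer can replace hand-rolled delay calculations; the metric name, label, route and port here are placeholders, not the project's actual instrumentation:

package main

import (
	"log"
	"net/http"
	"time"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// apiLatency records request latency per method; name and label are illustrative.
var apiLatency = prometheus.NewHistogramVec(prometheus.HistogramOpts{
	Name:    "polaris_apiserver_request_seconds",
	Help:    "API server request latency in seconds.",
	Buckets: prometheus.DefBuckets,
}, []string{"method"})

func handleDiscover(w http.ResponseWriter, r *http.Request) {
	// NewTimer observes the elapsed time into the histogram when the handler returns.
	timer := prometheus.NewTimer(apiLatency.WithLabelValues("discover"))
	defer timer.ObserveDuration()

	time.Sleep(10 * time.Millisecond) // stand-in for real work
	w.WriteHeader(http.StatusOK)
}

func main() {
	prometheus.MustRegister(apiLatency)
	http.HandleFunc("/v1/discover", handleDiscover)
	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe(":9100", nil))
}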

How to implement this feature?

Additional context
Add any other context or screenshots about the feature request here.

build.sh fails on macOS because there is no realpath command

realpath: command not found
Adding this shell function works around it:

realpath() {
    [[ $1 = /* ]] && echo "$1" || echo "$PWD/${1#./}"
}

The way the environment variable check is written also has problems:

if [ -n "$GOOS" ] && [ -n "$GOARCH" ]
then
        folder_name="polaris-server-release_${version}.${GOOS}.${GOARCH}"
fi
