migtools / cpma
Control Plane Migration Assistant: Help manage OpenShift (OCP) cluster information.
License: Apache License 2.0
ETCD configuration is in /etc/etcd/etcd.conf
CRI-O runtime configuration is in /etc/crio/crio.conf
Both are INI files, and adding them to the download/transform causes an error when trying to unmarshal JSON.
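If pulling in an INI library is undesirable, a minimal key=value reader over the downloaded contents could look like this (a stdlib-only sketch; parseINI is a hypothetical helper, and real configs may need more INI features):

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseINI is a minimal INI reader: it skips comments and blank
// lines, tracks [section] headers, and returns section.key -> value.
func parseINI(content string) map[string]string {
	result := map[string]string{}
	section := ""
	scanner := bufio.NewScanner(strings.NewReader(content))
	for scanner.Scan() {
		line := strings.TrimSpace(scanner.Text())
		if line == "" || strings.HasPrefix(line, "#") || strings.HasPrefix(line, ";") {
			continue
		}
		if strings.HasPrefix(line, "[") && strings.HasSuffix(line, "]") {
			section = strings.Trim(line, "[]")
			continue
		}
		if k, v, ok := strings.Cut(line, "="); ok {
			key := strings.TrimSpace(k)
			if section != "" {
				key = section + "." + key
			}
			result[key] = strings.Trim(strings.TrimSpace(v), `"`)
		}
	}
	return result
}

func main() {
	conf := "# etcd settings\nETCD_NAME=\"default\"\nETCD_DATA_DIR=/var/lib/etcd\n"
	cfg := parseINI(conf)
	fmt.Println(cfg["ETCD_NAME"], cfg["ETCD_DATA_DIR"])
}
```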
A few OAuth providers require config maps that hold the certificate authority; we need to implement their creation.
This should be handled here https://github.com/fusor/cpma/blob/master/pkg/transform/oauth/ldap.go#L48, and the logic for creating config maps will be similar to secrets.
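A sketch of what that could look like, mirroring the secrets flow (GenConfigMap and the struct shapes here are hypothetical, not the project's actual types):

```go
package main

import "fmt"

// MetaData and ConfigMap mirror the minimal shape of a v1 ConfigMap
// manifest; field names follow the Kubernetes API, but this is a
// sketch, not the project's real types.
type MetaData struct {
	Name      string `yaml:"name"`
	Namespace string `yaml:"namespace"`
}

type ConfigMap struct {
	APIVersion string            `yaml:"apiVersion"`
	Kind       string            `yaml:"kind"`
	MetaData   MetaData          `yaml:"metadata"`
	Data       map[string]string `yaml:"data"`
}

// GenConfigMap is a hypothetical helper, analogous to secrets.GenSecret,
// that wraps a CA certificate in a ConfigMap in the openshift-config
// namespace so an identity provider can reference it by name.
func GenConfigMap(name, namespace, caCert string) *ConfigMap {
	return &ConfigMap{
		APIVersion: "v1",
		Kind:       "ConfigMap",
		MetaData:   MetaData{Name: name, Namespace: namespace},
		Data:       map[string]string{"ca.crt": caCert},
	}
}

func main() {
	cm := GenConfigMap("ldap-ca-configmap", "openshift-config", "-----BEGIN CERTIFICATE-----...")
	fmt.Println(cm.Kind, cm.MetaData.Name)
}
```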
Example:
{
"level": "error",
"msg": "Unable to read private key: ",
"time": "2019-06-06T16:31:59+02:00"
}
{
"level": "error",
"msg": "Unable to read private key: ",
"time": "2019-06-06T16:31:59+02:00"
}
{
"level": "error",
"msg": "Unable to read private key: ",
"time": "2019-06-06T16:31:59+02:00"
}
{
"level": "error",
"msg": "Unable to read private key: ",
"time": "2019-06-06T16:31:59+02:00"
}
We need to wrap these objects in an array ([{...},{...}]) or change the format of this file.
$ ./bin/cpma --help
Helps migration cluster configuration of a OCP 3.x cluster to OCP 4.x
Usage:
cpma [flags]
Flags:
--config string config file (default is $HOME/.cpma.yaml)
--debug set log level to debug
-h, --help help for cpma
-l, --local-only do not fetch files, use only local files.
-o, --output-dir string set the directory to store extracted configuration.
40/40 10:40:41 FATAL unable to read private key: open : no such file or directory
That's wrong, right?
Currently this tool does not work with any oc cluster up environments. That isn't a huge deal since we don't support oc cluster up, but for development it would be nice to test this against dev environments that use oc cluster up.
Would be nice if https://github.com/fusor/cpma/blob/master/ocp3/ocp3.go#L104 was configurable in my .cpma.yaml file.
The BindPassword attribute for LDAP can be a clear-text value, an env var, or a file reference. You can see the definitions below.
https://github.com/openshift/api/blob/a2de4d01c8717dc2c9fc6429080b26e9d8bb2373/osin/v1/types.go#L156
https://github.com/openshift/api/blob/6820a28f1dcf8f036f94a56164dfaeb842e6b28a/config/v1/types.go#L119
Right now we only handle BindPassword if it was defined as a clear-text value, and skip cases when it's something else: https://github.com/fusor/cpma/blob/master/pkg/transform/oauth/ldap.go#L56. The same problem exists for GitHub, GitLab, Google, and OpenID.
https://docs.openshift.com/container-platform/3.11/install_config/configuring_authentication.html#LDAPPasswordIdentityProvider
https://docs.openshift.com/container-platform/3.11/install_config/configuring_authentication.html#GitHub
This is just to track replacing the structs we added for CRI-O when possible.
This prevents us from using machine-config-operator
openshift/machine-config-operator#848
I also hit this issue and haven't been able to get client-go and apimachinery to play nice after adding machine-config-operator
# k8s.io/client-go/rest
../../../../pkg/mod/k8s.io/[email protected]+incompatible/rest/request.go:598:31: not enough arguments in call to watch.NewStreamWatcher
have (*versioned.Decoder)
want (watch.Decoder, watch.Reporter)
make: *** [Makefile:20: build] Error 2
I can work on this some more when the other issue gets fixed.
Although it might sound more natural to use SFTP to retrieve the nodes' configuration files, SSH is more flexible for handling advanced admin operations.
The use case is specifically when nodes are configured with a dedicated user that has sudo privileges and the admin (root) user is not allowed for ssh/sftp.
cmd, pkg, internal, ...:
https://github.com/golang-standards/project-layout#go-directories
There was an internal directory until a day or two ago. It looks like it was changed to 'network'
This issue is to track work to create a 'scenarios' folder similar to https://github.com/fusor/mig-controller/tree/master/docs/scenarios
The purpose of 'docs/scenarios' for CPMA is to create documentation on the high level use cases we will support along with any sample data needed to recreate those scenarios.
For example, this may include sample yamls or master-config.yaml snippets which have certain configurations to apply in OCP 3 so we may exercise CPMA tool and produce desired Reports/CRs to apply to OCP 4.x.
These engineering docs will help the community to understand the capabilities of the tool and serve as on-boarding, as well as help QE and Docs.
Add the following parameters which are supported in OCP4 OAuth CRD spec:
Mark as unsupported, only if value is set:
If we believe this problem can be described as a series of Translations, where something asks them to do their ETL work, I think it's important we settle on what we should call this fundamental type that represents the workflow. The names thus far:
We're looking for a name for a type that abstracts the entire workflow of translating a given OCP4 feature, and implements the whole ETL process. Inviting opinion on the existing names, or proposals for new ones.
Issue spawned from a previous PR: #113 (comment)
Currently the expected CRs and reports are mostly generated from structs.
Loading them from YAML and JSON files would save code and potentially make tests easier to maintain.
Currently the code is structured as fetch, decode, transform, dump. I'm proposing we model the workflow to match the established ETL pattern. "Component" here refers to the abstraction described in #106:
(e.g. a FileRetriever type that checks for a cached copy and, on a miss, downloads). These methods should be defined on each component we write. A runner should accept a list of these components and execute each of them in sequence (could be parallelized via channels, but let's not pre-optimize).
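The component-plus-runner idea above could be sketched like this (interface and names are hypothetical):

```go
package main

import "fmt"

// Component abstracts one unit of the ETL workflow: it extracts the
// config files it needs, transforms them, and loads (dumps) output.
type Component interface {
	Extract() error
	Transform() error
	Load() error
}

// Run executes each component in sequence, stopping on the first error.
func Run(components []Component) error {
	for _, c := range components {
		if err := c.Extract(); err != nil {
			return err
		}
		if err := c.Transform(); err != nil {
			return err
		}
		if err := c.Load(); err != nil {
			return err
		}
	}
	return nil
}

// oauthComponent is a stub standing in for a real OAuth translator;
// it records which steps ran, in order.
type oauthComponent struct{ steps []string }

func (o *oauthComponent) Extract() error   { o.steps = append(o.steps, "extract"); return nil }
func (o *oauthComponent) Transform() error { o.steps = append(o.steps, "transform"); return nil }
func (o *oauthComponent) Load() error      { o.steps = append(o.steps, "load"); return nil }

func main() {
	c := &oauthComponent{}
	if err := Run([]Component{c}); err != nil {
		panic(err)
	}
	fmt.Println(c.steps)
}
```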
Right now these are just spitting warnings. We need to ensure we're getting valid data we can transform here for both.
Since users will be prompted for values using survey, we need to remove all logs from the console, either by moving all info logs to debug level or by writing only to the cpma.log file.
What do others think about adding a local 'lint' target to our Makefile and have it be called each time 'build' is called?
Thinking this would help to surface basic issues before submitting a PR.
I've used the below and would suggest:
https://github.com/golangci/golangci-lint
Example in Makefile
${GOPATH}/bin/golangci-lint:
go get -u github.com/golangci/golangci-lint/cmd/golangci-lint
lint: ${GOPATH}/bin/golangci-lint
${GOPATH}/bin/golangci-lint run
https://github.com/fusor/cpma/blob/master/pkg/transform/secrets/secrets.go#L68
It's preferable to use an enum to represent something like this so it uses a strong type and won't accept a secretType that isn't part of the possible set (the compiler will stop it in its tracks).
Go doesn't have enums, but there are idiomatic ways they can be approximated:
type Direction int
const (
North Direction = iota
East
South
West
)
func (d Direction) String() string {
return [...]string{"North", "East", "South", "West"}[d]
}
ssia. E.g.:
$ cpma some.ocp3.paas.host.com
vs
$ cpma ./this/dir/here
and get rid of the -l/--local-only flag
Spawned from #128 (comment)
@jwmatthews pointed out that there are a set of types that are tightly coupled to OAuth that don't feel right to be in a generic secrets package:
https://github.com/fusor/cpma/blob/master/pkg/transform/secrets/secrets.go#L11-L24
This begs the question: should these be part of the oauth pkg, and furthermore, what actually belongs in a secrets pkg, and where should it be located?
It makes sense to me that we should have a generic secrets pkg for logic that works with them, but I'm not sure that it belongs under the transform dir. Separately, I think the above types should be in the oauth pkg.
It's not very clear where anything goes in the project. Pulling from my comment on another PR:
It's not clear what should go into something like the ocp/ocp.go file, for example, especially when there is an ocp3 and an ocp4 pkg. This needs more thought. This is probably something that deserves its own issue, and should be dealt with separately.
I thought someone mentioned it, but I cannot find the comment about it in PR #128.
Anyway, since lots of components are subsets of the same configuration file, shouldn't we rely on a helper each time we tap into a MasterConfig, NodeConfig, etc.?
For instance, these 2 blocks have the same use and both share configV1.MasterConfig:
https://github.com/fusor/cpma/blob/master/pkg/transform/oauth_transform.go#L57-L66
https://github.com/fusor/cpma/blob/master/pkg/transform/sdn_transform.go#L103-L110
Helpers could be something like:
https://github.com/fusor/cpma/blob/8c7523c9ec9a90959fae074a9ab02b6533f7f835/pkg/ocp3/ocp3.go#L24-L49
Your thoughts?
For Persistent Volumes:
Add Capacity if not already there
Add Phase (Bound, List, Available, etc.)
Add Driver (NFS, EBS, etc.)
(Note from John: assume we will need some logic to determine this; I'm not aware of a simple 'type' field we can just query on the PV spec.)
Add Node List to report
CPU
Memory
Consumed Memory, if possible; did not see an API response for this
Number of Pods running (this might be more difficult in that you may have to look at pods to determine what node they are running on)
List Masters if possible. Didn't see a way to list this in the API, but we get a list somehow for the settings survey?
Basic Auth support needs to be re-examined, I imagine this hasn't been tested against a real cluster yet.
https://github.com/fusor/cpma/blob/master/pkg/transform/oauth/basicauth.go#L45-L55
// TODO: Fetch cert and key
certFile := "456This is pretend content"
encoded := base64.StdEncoding.EncodeToString([]byte(certFile))
certSecret := secrets.GenSecret(certSecretName, encoded, "openshift-config", "basicauth")
keySecretName := p.Name + "-client-key-secret"
idP.BasicAuth.TLSClientKey.Name = keySecretName
keyFile := "123This is pretend content"
encoded = base64.StdEncoding.EncodeToString([]byte(keyFile))
keySecret := secrets.GenSecret(keySecretName, encoded, "openshift-config", "basicauth")
Leverage the work being done for the controller and implement end to end tests to
Spawned from the discussion on #113 (comment)
Quoting from that comment:
Have we considered the fact that more than one ocp4 feature may need to provide their configuration on the same CR, which means they'd need to edit a shared output file? Perhaps we need to share a data structure between transforms that represents the total file output, so that more than one Translator can mutate the same individual output file.
In the survey we're only checking to see if it's still the default value.
https://github.com/fusor/cpma/blob/master/pkg/env/env.go#L153
It seems there is no good way to tell if the value came from a default or config
spf13/viper#323
We should probably figure out if it came from default or config to fix the behavior.
GenYAML() is repeated twice, for secrets and oauth: https://github.com/fusor/cpma/blob/1aa6ac9fc61860ec87f9326cd38f339dcc748b56/pkg/transform/secrets/secrets.go#L88 and https://github.com/fusor/cpma/blob/55ef73f43a10785b6a76bd3a2e07a428d25603a0/pkg/transform/oauth/oauth.go#L159.
Maybe we should make it common to both; later we will need GenYAML() for more CRs.
I am having trouble seeing any reason why we need a timeout based event loop here: https://github.com/fusor/cpma/blob/master/cmd/root.go#L58.
When the results are successful, the cpma tool runs for 10 seconds, repeating the same behavior over and over again until it times out. I can't think of any good reason why we would need this loop. Am I missing something here?
Basically create an array of test cases to cover edge cases and easy cases as well. The actual struct can be tailored to your individual test needs. The important part is the array.
testCases := []struct {
name string
c Configuration
expected []string
expectederr bool
handlerFunc http.HandlerFunc
}{
...
}
Then create your test loop:
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
// get test server
serv := httptest.NewServer(tc.handlerFunc)
defer serv.Close()
// use the test server's url
dockerHubLoginURL = strings.Join([]string{serv.URL, "/v2/users/login/"}, "")
dockerHubRepoImages = strings.Join([]string{serv.URL,
"/v2/repositories/%v/?page_size=100"}, "")
dockerHubManifestURL = strings.Join([]string{serv.URL, "/v2/%v/manifests/%v"}, "")
// create the adapter we want to test
dha := DockerHubAdapter{Config: tc.c}
// test the GetImageNames method
output, err := dha.GetImageNames()
if tc.expectederr {
if !assert.Error(t, err) {
t.Fatal(err)
}
assert.NotEmpty(t, err.Error())
} else if err != nil {
t.Fatalf("unexpected error during test: %v\n", err)
}
errmsg := fmt.Sprintf("%s returned the wrong value", tc.name)
assert.Equal(t, tc.expected, output, errmsg)
})
}
As we build the functionality to assist with migrating Identity Management configuration, let's ensure we capture:
For Identity Management configuration in 3.x see:
https://docs.openshift.com/container-platform/3.11/install_config/configuring_authentication.html
Please test at least 3 different authentication provider configurations:
related PR - #29
Spawned from PR #128 (comment)
Remote config file path consts should be provided by cli/cpma-configfile values, with sensible defaults. I think this is something we should be able to handle with viper.
Spawned from #128 (comment)
I provided an example for how to accomplish this here: eriknelson@bdd651a
Specifically this is how I would convert Flush
into overridable behavior so it becomes testable: eriknelson@bdd651a
I'm not 100% satisfied with pkg level default functions; it doesn't feel like that's a sustainable model for dependency injection (DI). Something to consider, I was chatting with another kubernetes go developer about how people deal with DI in go and there have been some promising proposals in major projects that maybe we could leverage? kubernetes-sigs/controller-runtime#285
Pragmatically I do not think we should adopt this, but it's an interesting option to explore, should we choose to do so.
We should omit optional values if they are empty. For example:
identityProviders:
- name: my_remote_basic_auth_provider
challenge: true
login: true
mappingMethod: claim
provider:
apiVersion: v1
kind: BasicAuthPasswordIdentityProvider
url: https://www.example.com/remote-idp
ca: /path/to/ca.file # OPTIONAL
certFile: /path/to/client.crt # OPTIONAL
keyFile: /path/to/client.key # OPTIONAL
certFile, ca and keyFile can be empty; in that case we should not generate secrets for them or include them in the translated CR.
I checked out the project to my GOPATH and, using go 1.12.4, it will not build using make. Cloning to a location outside of my GOPATH does work. I understand there's some weirdness around go modules and go binary versions, so we should clearly communicate expectations in the README:
GO111MODULE="on"?
It seems like the naming of output CRDs should follow a certain pattern to be accepted as a valid manifest during manifest consumption on an OCP4 cluster. We need to validate these values against the DNS-1123 standard.
The Secret “htpasswd_auth-secret” is invalid: metadata.name: Invalid value: “htpasswd_auth-secret”: a DNS-1123 subdomain must consist of lower case alphanumeric characters, ‘-’ or ‘.’, and must start and end with an alphanumeric character (e.g. ‘example.com’, regex used for validation is ‘[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*’)
Similar to https://bugzilla.redhat.com/show_bug.cgi?id=1488366
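A sketch of validating and sanitizing generated names against the regex quoted in the error (SanitizeName is a hypothetical helper; a real fix might instead avoid producing '_' in generated names in the first place):

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// dns1123Subdomain is the validation regex quoted in the API error.
var dns1123Subdomain = regexp.MustCompile(`^[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*$`)

// invalidChars matches everything DNS-1123 subdomains forbid.
var invalidChars = regexp.MustCompile(`[^a-z0-9.-]`)

// SanitizeName lowercases a generated manifest name, replaces
// forbidden characters (such as '_') with '-', and trims separators
// so the result starts and ends with an alphanumeric character.
func SanitizeName(name string) string {
	name = strings.ToLower(name)
	name = invalidChars.ReplaceAllString(name, "-")
	return strings.Trim(name, "-.")
}

func main() {
	fixed := SanitizeName("htpasswd_auth-secret")
	fmt.Println(fixed, dns1123Subdomain.MatchString(fixed))
}
```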
I don't have any oauth or secret configurations in my ocp3 config, and that led to manifests being created that have empty data.
[dymurray@pups cpma]$ cat data/manifests/100_CPMA-cluster-config-oauth.yaml
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
name: cluster
namespace: openshift-config
spec:
identityProviders:
- null
[dymurray@pups cpma]$
[dymurray@pups cpma]$ ls data/manifests/
100_CPMA-cluster-config-oauth.yaml 100_CPMA-cluster-config-secret-.yaml
[dymurray@pups cpma]$ cat data/manifests/100_CPMA-cluster-config-secret-.yaml
apiVersion: ""
kind: ""
type: ""
metadata:
name: ""
namespace: ""
data: null
We should have some better error checking so that we aren't creating null manifests.
Spawned from #128 (comment)
Reading up some more about cobra, I think I was wrong regarding the above comment, where I stated that we shouldn't have multiple cmd files. It makes sense to me that for each command we should have a different cmd/X file (if we have a tool with different commands, e.g. cpma report, cpma transform).
Firstly, do we prefer to accept the output passed as a command vs. an option? Ex:
cpma report X
vs
cpma --output-type={report,transform}, defaulting to transform?
The first would result in a bunch of cmd/COMMAND.go files, and they would also share command line args, like output directory. So does that really make sense over the argument approach?
For maintainability reasons, I like the argument approach. It means we have a single entrypoint. If we think the UX of a command approach is the right choice, then multiple files totally make sense under cmd/COMMAND.go.
On top of all this, if we choose to go the command route, what does just cpma $OPTIONS do? Just list the help? Default to transform?
@jwmatthews, curious about your thoughts on this because you stated you liked the command approach. If I had to choose, I like the argument approach because of its maintainability. Admittedly, the command approach is much closer to what oc and kubectl do today, so I could see that as a point in its favor.
Spawned from PR #128
We have a generic requirement that most (all?) transforms are going to need to get remote file contents as part of their Extract methods, to be decoded and stored on an Extraction object. It makes sense that we have a dedicated part of the codebase whose only responsibility is to provide this feature. Additionally, we should be able to easily mock it with something that provides testdata-sourced contents so we can test Extract methods well.
If the above is true, we should move Fetch out of transform/transform.go and into its own file, and I would advocate a different pkg than transform. This is the kind of thing that typically ends up in a util package if it's a handful of functions, but if it grows to be more than that, I think it makes sense in its own pkg (name TBD).
It should handle file caching since most transforms will be requesting the same files from it, I know we do that today, but just speccing that here. Additionally, @gildub has pointed out that it's likely that we will need the ability to provide the host as an argument, since it's possible this may need to support a multi-host scenario.
Finally, it could possibly be useful to represent this concept with something like a FileManager struct rather than a handful of functions. I don't have a reason for this right now, but I've certainly seen it done for something like this in the past.
At the very least, Fetch as it is today should be moved out of that transform file and into something more focused on the concept of file retrieval, and it shouldn't be called from anywhere but transform Extract methods, since that's the only place that should be downloading remote files.
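A minimal sketch of such a FileManager with per-host caching and an injectable fetch function for mocking (all names hypothetical):

```go
package main

import "fmt"

// FileManager caches remote file contents per host+path so repeated
// requests from different transforms hit the network only once.
// Fetch is injected, which makes it trivial to mock with testdata.
type FileManager struct {
	Fetch func(host, path string) ([]byte, error)
	cache map[string][]byte
}

// Get returns the cached contents for host:path, fetching on a miss.
func (fm *FileManager) Get(host, path string) ([]byte, error) {
	if fm.cache == nil {
		fm.cache = map[string][]byte{}
	}
	key := host + ":" + path
	if data, ok := fm.cache[key]; ok {
		return data, nil
	}
	data, err := fm.Fetch(host, path)
	if err != nil {
		return nil, err
	}
	fm.cache[key] = data
	return data, nil
}

func main() {
	calls := 0
	fm := &FileManager{Fetch: func(host, path string) ([]byte, error) {
		calls++
		return []byte("masterConfig: ..."), nil
	}}
	fm.Get("master-0", "/etc/origin/master/master-config.yaml")
	fm.Get("master-0", "/etc/origin/master/master-config.yaml")
	fmt.Println(calls) // second call is served from the cache
}
```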
Some Q's to be answered:
Wanted to capture this while it's still fresh in my mind in case anyone else runs into this. I'm using https://github.com/fusor/origin3-dev/ for my example ocp3 cluster, which brings up an ec2 vm and runs oc cluster up
to launch a cluster via docker containers. The cluster's configuration files are stored in a non-default location on the vm, and are mounted into possibly different locations inside the containers.
What this means is that when I run the tool against the cluster, we read the master config and see that htpasswd is configured to be at /etc/origin/master/htpasswd. In reality, that file actually exists at /tmp/openshift.local.clusterup/kube-apiserver/htpasswd on the vm, but it's mounted into the container at /etc/origin/master/htpasswd.
I don't see how we can figure out the true location of the source files if the values in the master config differ from where they actually are on the machine.
@jwmatthews, is this just a product of our particular origin3-dev setup, and we're unlikely to encounter this scenario with the production environments we're migrating? Or is this something we need to address?
In master I'm seeing a series of funcs (fetch, decode, translate, dump, or FDTD) run per remote config file (master, node). I'd like to propose an alternative model because this doesn't sufficiently address the problem of configuring OCP4 features, like container runtimes, which require multiple input configuration files.
The unit of translation should be on an OCP4 feature basis, not on a remote config basis. Let's call the OCP4 features components. For example, there is an OAuth component, an SDN component, and a ContainerRuntime component.
Ideally, you should be able to define the FDTD functions for each "component", and I want to fetch/decode the N config files that are relevant to that component.
https://github.com/fusor/cpma/blob/master/Makefile#L25
Right now we're just checking the build and tests; it would be good practice to also enforce lint and gofmt with our CI runs, with the intention that all PRs should pass these before being merged. Devs can confirm their work will pass the checks by running make ci manually.
I don't believe the project is passing lint and/or gofmt currently, so as part of this issue we should repair the breakages.
CPMA should fail if one of the values prompted for by survey was left empty, in the same way it is done in openshift-installer.
Would it make sense to have unit tests for the report? I'm thinking about making sure a parameter that is not translated, and is therefore reported, appears on the report side.
Prompt to allow an insecure host key.
Using an insecure key could be done with HostKeyCallback: ssh.InsecureIgnoreHostKey() here ->
https://github.com/fusor/cpma/blob/master/pkg/io/remotehost/remotehost.go#L56
That's exactly what I thought.
The reason is that apparently it's the practice when using "github.com/spf13/cobra".
I disagree; cobra does not dictate how you call the rest of your app. Let's be clear here: I'm not debating the need for the root.go file. And if cobra requires it to be called root.go, that is also okay. What I'm against is this code living in root.go.
ocpOAuth := new(ocp.OAuthConfig)
ocpSDN := new(ocp.SDNConfig)
ocpOAuth.Add(source)
ocpSDN.Add(source)
configs = append(configs, ocpOAuth, ocpSDN)
for _, config := range configs {
config.Extract()
config.Transform()
ocp.DumpManifests(config.GenYAML())
}
I'm of the opinion this should be in the core of your application. IMO root.go is NOT the core of your application. It is merely the "UI" for this application. Basically, the struct that knows how to handle CLI parameters really doesn't care about what the core of the application needs.
Basically I know to look in cmd/
to see what parameters there are and how they are parsed. And I look to see how the application is started.
So to satisfy the request I'm making it is as simple as moving the above code to another file in the pkg/
directory, something like this:
type MigrationAssistant struct {
// the fields required to handle the transformations etc, I would defer to the POC that @eriknelson has been working on for this
}
func (m *MigrationAssistant) Start(params map[string]string) {
// take in the parameters you need to start this application
}
Then in root.go I would do the following:
var rootCmd = &cobra.Command{
Use: "cpma",
Short: "Helps migration cluster configuration of a OCP 3.x cluster to OCP 4.x",
Long: `Helps migration cluster configuration of a OCP 3.x cluster to OCP 4.x`,
Run: func(cmd *cobra.Command, args []string) {
env.InitConfig()
env.InitLogger()
// start the application
ma := MigrationAssistant{}
ma.Start(PASS_YOUR_PARAMS_HERE)
},
...
Do you see now? Basically root.go knows all about parsing parameters. It knows how to transform those CLI parameters into something that can be passed into the Start of the application. It's cleaner and easier to see where the stuff for cobra is and the stuff required by your application.
Add supported OCP3 master-config network parameters:
ingressIPNetworkCIDR: https://docs.openshift.com/container-platform/4.1/networking/ingress-operator.html#configuring-ingress
Report unsupported parameters:
externalIPNetworkCIDRs
HostSubnetLength
We should probably have better error handling than calling logrus.Fatal and logrus.Panic.
There are times where we're going to have to expect errors and deal with them gracefully.
One example off the top of my head is looking for files that might not exist. CRI-O was not the default runtime in OpenShift 3.x, but it was possible to use it. In order to transfer the container runtime configuration we'll need to at least try to fetch the file. If that fails, it shouldn't error the program out; instead we simply shouldn't create a CR for the runtime configuration.
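A sketch of that pattern: treat a missing optional file as "skip this CR" and only propagate unexpected errors (loadOptionalConfig is a hypothetical helper, not the project's real code):

```go
package main

import (
	"fmt"
	"os"
)

// loadOptionalConfig returns (nil, false, nil) when the file simply
// does not exist -- the caller should skip generating the CR -- and
// only propagates unexpected errors (permissions, I/O failures, ...).
func loadOptionalConfig(path string) ([]byte, bool, error) {
	data, err := os.ReadFile(path)
	if os.IsNotExist(err) {
		return nil, false, nil
	}
	if err != nil {
		return nil, false, err
	}
	return data, true, nil
}

func main() {
	// CRI-O was optional in OCP 3.x, so its config may be absent.
	if _, found, err := loadOptionalConfig("/etc/crio/crio.conf"); err != nil {
		panic(err)
	} else if !found {
		fmt.Println("crio.conf not present; skipping container runtime CR")
	}
}
```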
Looks like the p.Provider.Raw decodes at the least:
https://github.com/fusor/cpma/blob/master/pkg/transform/oauth/basicauth.go#L32
[jesusr@transam cpma{master}]$ make lint
pkg/ocp/ocp.go:20:6: exported type ConfigFile should have comment or be unexported
pkg/ocp/ocp.go:26:6: exported type OAuthTranslator should have comment or be unexported
pkg/ocp/ocp.go:33:6: exported type SDNTranslator should have comment or be unexported
pkg/ocp/ocp.go:39:6: exported type Translator should have comment or be unexported
pkg/ocp/ocp.go:50:1: exported function NewOAuthTranslator should have comment or be unexported
pkg/ocp/ocp.go:62:1: exported function NewSDNTranslator should have comment or be unexported
pkg/ocp/ocp.go:107:1: exported method OAuthTranslator.Load should have comment or be unexported
pkg/ocp/ocp.go:122:1: exported method SDNTranslator.Load should have comment or be unexported
pkg/ocp3/ocp3.go:17:6: exported type Master should have comment or be unexported
pkg/ocp3/ocp3.go:19:6: exported type Node should have comment or be unexported
pkg/ocp3/ocp3.go:25:1: comment on exported function MasterDecode should be of the form "MasterDecode ..."
pkg/ocp3/ocp3.go:38:1: comment on exported function NodeDecode should be of the form "NodeDecode ..."
pkg/ocp4/ocp4.go:8:6: exported type Manifests should have comment or be unexported
pkg/ocp4/ocp4.go:26:1: exported function OAuthManifest should have comment or be unexported
pkg/ocp4/ocp4.go:37:1: exported function SecretsManifest should have comment or be unexported
pkg/ocp4/ocp4.go:44:1: exported function SDNManifest should have comment or be unexported
pkg/ocp4/oauth/basicauth.go:12:6: exported type IdentityProviderBasicAuth should have comment or be unexported
pkg/ocp4/oauth/github.go:12:6: exported type IdentityProviderGitHub should have comment or be unexported
pkg/ocp4/oauth/gitlab.go:10:6: exported type IdentityProviderGitLab should have comment or be unexported
pkg/ocp4/oauth/google.go:10:6: exported type IdentityProviderGoogle should have comment or be unexported
pkg/ocp4/oauth/htpasswd.go:14:6: exported type IdentityProviderHTPasswd should have comment or be unexported
pkg/ocp4/oauth/keystone.go:15:6: exported type IdentityProviderKeystone should have comment or be unexported
pkg/ocp4/oauth/ldap.go:9:6: exported type IdentityProviderLDAP should have comment or be unexported
pkg/ocp4/oauth/oauth.go:27:1: comment on exported type OAuthCRD should be of the form "OAuthCRD ..." (with optional leading article)
pkg/ocp4/oauth/oauth.go:28:6: type name will be used as oauth.OAuthCRD by other packages, and that stutters; consider calling this CRD
pkg/ocp4/oauth/oauth.go:45:6: exported type MetaData should have comment or be unexported
pkg/ocp4/oauth/oauth.go:51:2: exported var APIVersion should have comment or be unexported
pkg/ocp4/oauth/openid.go:10:6: exported type IdentityProviderOpenID should have comment or be unexported
pkg/ocp4/oauth/requestheader.go:9:6: exported type IdentityProviderRequestHeader should have comment or be unexported
pkg/ocp4/sdn/sdn.go:65:1: exported function TranslateClusterNetworks should have comment or be unexported
pkg/ocp4/sdn/sdn.go:80:1: exported function SelectNetworkPlugin should have comment or be unexported
pkg/ocp4/secrets/secrets.go:11:6: exported type HTPasswdFileSecret should have comment or be unexported
pkg/ocp4/secrets/secrets.go:15:6: exported type KeystoneFileSecret should have comment or be unexported
pkg/ocp4/secrets/secrets.go:19:6: exported type LiteralSecret should have comment or be unexported
pkg/ocp4/secrets/secrets.go:22:6: exported type BasicAuthFileSecret should have comment or be unexported
pkg/ocp4/secrets/secrets.go:26:6: exported type Secret should have comment or be unexported
pkg/ocp4/secrets/secrets.go:34:6: exported type MetaData should have comment or be unexported
pkg/ocp4/secrets/secrets.go:39:5: exported var APIVersion should have comment or be unexported
pkg/ocp4/secrets/secrets.go:41:1: exported function GenSecret should have comment or be unexported
internal/io/io.go:10:5: exported var GetFile should have comment or be unexported
internal/io/io.go:24:9: if block ends with a return statement, so drop this else and outdent its block
Found 41 lint suggestions; failing.
make: *** [Makefile:31: lint] Error 1