hashicorp / terraform-ls

Terraform Language Server

License: Mozilla Public License 2.0

Go 62.61% HCL 26.03% Ruby 7.08% Makefile 0.65% Python 2.94% Shell 0.71%
terraform language-server hcl lsp

terraform-ls's Introduction

Terraform Language Server

The official Terraform language server (terraform-ls) maintained by HashiCorp provides IDE features to any LSP-compatible editor.

Current Status

Not all language features (from the LSP's or any other perspective) are available at the time of writing, but this is an active project with the aim of delivering smaller, incremental updates over time. You can review the LSP feature matrix.

We encourage you to browse existing issues and/or open a new issue if you experience a bug or have an idea for a feature.

Stability

We aim to communicate our intentions regarding breaking changes via semver. Relatedly, we may use pre-releases, such as MAJOR.MINOR.PATCH-beta1, to gather early feedback on certain features and changes.

We ask that you report bugs in any version, but especially in pre-releases if you decide to use them.

Installation

Some editors have built-in logic to install and update the language server automatically, so you may not need to worry about installing or updating the server yourself.

Read the installation page for installation instructions.

Usage

You will most likely interact with the language server through a client, i.e. an IDE or an IDE plugin.

Please follow the relevant guide for your IDE.

Contributing

Please refer to .github/CONTRIBUTING.md for more information on how to contribute to this project.

Credits

Telemetry

The server will collect data only if the client requests it during initialization, i.e. telemetry is opt-in by default.

Read more about telemetry.


terraform-ls's Issues

textDocument/complete: Complete provider names

Completing all providers is currently blocked on hashicorp/terraform#24261 but this could be initially implemented in a minimal way where we only complete init'd providers.


The idea is that after implementing this, we would be able to complete provider names in the label context, e.g.

provider "<HERE>" {
}

Relatedly we may need to decide whether/how to support completion in incomplete declarations, such as

provider "<HERE>

Support submodules

Hello, I have a Terraform deployment that uses submodules. I'm using the OpenStack provider, which is recognized in the root main.tf, but the Language Server does not recognize the OpenStack resources in any submodule.
On root: (screenshot)

On any submodule: (screenshot)

textDocument/complete: Complete supported first-class keywords

Since the MVP will support provider, resource and data source completion after #9 and #10, we should also be able to complete the following first-class keywords:

  • provider
  • resource
  • data

and insert appropriate snippets for each - i.e. labels and braces, with the cursor on the block type. A possible shape for these snippets is sketched below.

Related to snippets: #13
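
A possible shape for these snippets (illustrative only; the map and the chosen tab stops are an assumption, using standard LSP snippet syntax):

var keywordSnippets = map[string]string{
	// Tab stops ${1}/${2} cover the labels, ${0} is the final cursor position inside the block body.
	"provider": "provider \"${1:name}\" {\n  ${0}\n}",
	"resource": "resource \"${1:type}\" \"${2:name}\" {\n  ${0}\n}",
	"data":     "data \"${1:type}\" \"${2:name}\" {\n  ${0}\n}",
}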

Support custom plugin cache directory & `dev_overrides`

This was initially left out from implementation of #16 due to scope/complexity.

https://www.terraform.io/docs/configuration/providers.html#provider-plugin-cache

The LS currently won't be able to load the schema if the user chooses to set a custom cache directory, because the LS has no ability (yet) to parse Terraform's CLI config and read the plugin_cache_dir option there.

Plugin developers also use dev_overrides, in which case no plugins are cached anywhere. This means that the LS will opt to use the schema cache. We should probably attempt to source the schema locally when plugin_cache_dir or TF_PLUGIN_CACHE_DIR is set.

terraform/schema: Consider swapping mutex for semaphore

Currently, reading the schema will be blocked if the cached schema is being refreshed.
While avoiding simultaneous reading/writing of the schema is itself a good idea, we may not want operations relying on the schema to be blocked - they could instead just error out.

The following sequence of events would have to happen for this to be noticeable:

  • [0s] User runs terraform init to pick up some new providers defined in the config
  • [5s] init finishes downloading providers and stores them in the plugin cache, updates the lock file
  • [~5.01s] LS automatically detects this as event from file watcher and starts retrieving new schemas via terraform providers schema -json
    • the schema becomes locked at this point - nothing can read it
  • [5.5s] User tabs from terminal back into IDE and triggers completion
    • completion request tries to read schema by first acquiring lock, which turns into a waiting game
  • [9s] LS finishes retrieving new schemas, stores them and releases lock
  • [~9.01s] User actually gets to see their completion candidates, after a ~4 second delay

One may suggest we could use the old schema until the new one is in place, but that may be outdated/invalid for the current config, so returning an error early from a locked schema reader may be the better UX, as the user can just retry and send the request again.

We can achieve this by replacing the existing sync.RWMutex with golang.org/x/sync/semaphore, which allows us to check the lock status rather than just lock and unlock.
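
A minimal sketch of what this could look like, assuming a hypothetical Storage type holding the cached schemas (not the actual terraform-ls code):

package schema

import (
	"errors"

	"golang.org/x/sync/semaphore"
)

// ErrSchemaLocked is a hypothetical error returned when the schema is being
// refreshed, so callers can fail fast instead of waiting for the lock.
var ErrSchemaLocked = errors.New("schema is being refreshed, please retry")

type Storage struct {
	sem     *semaphore.Weighted
	schemas map[string]interface{} // placeholder for cached provider schemas
}

func NewStorage() *Storage {
	return &Storage{
		sem:     semaphore.NewWeighted(1),
		schemas: make(map[string]interface{}),
	}
}

// ProviderSchema errors out early if a refresh currently holds the semaphore,
// rather than blocking the caller (e.g. a completion request).
func (s *Storage) ProviderSchema(name string) (interface{}, error) {
	if !s.sem.TryAcquire(1) {
		return nil, ErrSchemaLocked
	}
	defer s.sem.Release(1)
	return s.schemas[name], nil
}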

langserver: Handle multiple workspaces correctly

It seems that language clients will initialize every new workspace. In Sublime LSP's case it means the following can happen:

  • user opens Sublime
  • user opens /var/first-folder and main.tf within that folder
    • Sublime sends initialize with rootUri set to /var/first-folder
  • user opens /var/second-folder and main.tf within that folder
    • Sublime sends initialize with rootUri set to /var/second-folder
    • which currently causes the LS to error out here, because we assume initialization happens in a process-wide context and the server was already initialized

This was reproduced with the server running in TCP mode, which may be an edge case of how the client is implemented - i.e. the client may just launch a new process for each workspace when it controls the process, but we should be able to handle multiple workspaces in TCP mode anyway.

Support workspace/didChangeWatchedFiles

Background

LSP method textDocument/didChange (which is already implemented) can be used to find out about changes made through the IDE, but sometimes files are changed outside of the IDE.

These can be detected through workspace/didChangeWatchedFiles which the LS doesn't implement yet.

Clients may choose to send pretty much arbitrary updates to the server and it is up to the server to filter them. Ideally though, the server should document what clients should watch, so it doesn't have to filter out unnecessary updates.

The server may also add to the list of watched globs, which is a feature described under #867.
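
A hedged sketch of such a handler, filtering irrelevant updates before re-indexing (the service wiring and the uri/walker helpers are assumptions based on snippets elsewhere in this list):

func (svc *service) DidChangeWatchedFiles(ctx context.Context, params lsp.DidChangeWatchedFilesParams) error {
	for _, change := range params.Changes {
		path, err := uri.PathFromURI(change.URI)
		if err != nil {
			continue
		}
		// Clients may send arbitrary updates; only react to Terraform-related files.
		if !strings.HasSuffix(path, ".tf") && !strings.HasSuffix(path, ".tf.json") {
			continue
		}
		// Re-index the module containing the changed file.
		walker.EnqueuePath(filepath.Dir(path))
	}
	return nil
}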

UX Impact

Users will see completion, go-to-definition, go-to-references etc. involving indexed but unopened modules remain accurate even when these modules change outside of the editor - e.g. when a git branch is switched or files are changed in another editor.

Proposal

Testing strategy

We should assess what we can and should test with regards to potential risk of breaking and ease of testing.

Generally I think we could apply two approaches:

  • standard unit testing wherever possible
  • E2E testing - essentially simulation of LSP client/server requests and responses - a test would be made of a sequence of requests such as
    • initialize
    • textDocument/didOpen
    • textDocument/completion -> compare expected response byte-to-byte

Watch provider changes to cache schema

Obtaining schema is expensive and can take ~1 second even on a smaller provider such as github.

We should consider caching the schema as part of initialize and invalidating the cache whenever the provider changes (as the schema it would output may have changed).

Possibly related to https://github.com/radeksimko/terraform-ls/issues/15 - we might be able to watch for .terraform directory changes, but we need to reflect the reality of LSP clients not supporting workspace/didChangeWatchedFiles, such as Sublime Text.

We might also want to account for users storing providers outside of the current working directory - i.e. we can just error out if there is no .terraform directory and no TF_PLUGIN_DIR ENV variable.
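
A hedged sketch of watching the plugin lock file using github.com/fsnotify/fsnotify (the choice of watcher library and the callback shape are assumptions):

package watcher

import (
	"log"

	"github.com/fsnotify/fsnotify"
)

// WatchPluginLockFile invalidates the schema cache whenever the plugin
// lock file changes. It returns the watcher so the caller can Close() it.
func WatchPluginLockFile(lockFilePath string, invalidate func()) (*fsnotify.Watcher, error) {
	w, err := fsnotify.NewWatcher()
	if err != nil {
		return nil, err
	}
	if err := w.Add(lockFilePath); err != nil {
		w.Close()
		return nil, err
	}
	go func() {
		for {
			select {
			case ev, ok := <-w.Events:
				if !ok {
					return
				}
				if ev.Op&(fsnotify.Write|fsnotify.Create|fsnotify.Remove) != 0 {
					log.Printf("plugin lock file changed (%s), invalidating schema cache", ev.Name)
					invalidate()
				}
			case err, ok := <-w.Errors:
				if !ok {
					return
				}
				log.Printf("watcher error: %s", err)
			}
		}
	}()
	return w, nil
}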

feat: go to definition on modules

It would be very helpful to have options to:

  1. go to variable definitions in the module source from module block parameters,
  2. go to the module directory from the module source statement.

Error autocompleting if document changes in-flight

Per LSP spec:

if a server detects a state change that invalidates the result of a request in execution the server can error these requests with ContentModified. If clients receive a ContentModified error, it generally should not show it in the UI for the end-user. Clients can resend the request if appropriate.

We might be able to leverage document versions and compare them at the beginning and right before the end of the completion handler.

This is especially important when completing because the operation may be expensive and time-consuming. Getting the github schema from Terraform can take around 1 second and could take much more for bigger providers, so the user (or anything outside of the editor) may continue modifying the document in the meantime, which then invalidates the list of completion candidates.


Resolving this will also allow us to block responding to a completion or formatting request until loading for the relevant root module is finished, as per #218. A rough sketch of the version-check approach follows.
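
A rough sketch of the version check (the handler wiring, filesystem API and completion helper are hypothetical; the error should be surfaced to the client as the LSP ContentModified error, code -32801):

func (svc *service) TextDocumentComplete(ctx context.Context, params lsp.CompletionParams) (lsp.CompletionList, error) {
	doc, err := svc.fs.GetDocument(params.TextDocument.URI) // hypothetical filesystem API
	if err != nil {
		return lsp.CompletionList{}, err
	}
	versionBefore := doc.Version()

	// Potentially expensive: may involve obtaining provider schemas.
	list, err := svc.completeAtPos(doc, params.Position)
	if err != nil {
		return lsp.CompletionList{}, err
	}

	// If the document changed while we were computing candidates,
	// report ContentModified instead of returning stale results.
	if doc.Version() != versionBefore {
		return lsp.CompletionList{}, errors.New("content modified during completion")
	}
	return list, nil
}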

Support custom-built providers

The LS is currently unable to find custom-built providers, because it requires $HOME or $USER to be set and we aren't passing these down from the LS environment to the Terraform binary.

textDocument/complete: Complete references to named values

Context

Currently the LS completes a block's known attributes and nested blocks per the schema and naively assumes values are static.

The Terraform language supports references to named values, which are generally formatted as dot-based addresses, e.g.

variable "example" {
  default = "example-value"
}
data "aws_availability_zones" "names" {
}

can be referenced as var.example or data.aws_availability_zones.names respectively elsewhere in the configuration.

Requirements

Completing these references requires the LS to understand a couple of things it doesn't understand today:

  • relationship between address format and block type
    • e.g. knowing that var.<name> belongs to variable block and evaluates to variable's value (which may come from its default)
  • context in which references are evaluated, for example
    • local values can be referenced between each other, but variables can not
    • backend blocks do not support any kind of interpolation
    • provider blocks have limited interpolation capabilities (e.g. via alias)
    • values have types and it would be suboptimal to suggest references that do not match the destination type (e.g. var.number for a string attribute)
  • whether the value is available locally within the configuration or from the state, e.g.
    • variable values would generally be local
    • data source and resource attributes would generally come from state, unless these attributes are referencing static values (such as variables)

How could we tackle this

This will likely require extensive design work to ensure stability and maintainability over time. There are a number of factors to consider, e.g.

  • How does #36 overlap with this problem
  • How will the functionality overlap with https://github.com/hashicorp/terraform/tree/master/addrs
    • we probably won't build a separate single library which decouples this logic yet, but should keep this option in mind as the risk of drifting from Terraform is real
  • We may need to keep (re-)parsing the whole workspace as opposed to just open files and hold the AST for the whole workspace in memory, as references are workspace-wide, not file-wide
    • we should resolve #15 first

For reasons above I would encourage discussing the design of the possible solution(s) here before attempting to raise a PR.

Implement server readiness

Per LSP spec:

  • shutdown method should make the server respond with InvalidRequest to any request other than exit.
  • initialized method should make the server actually ready for processing requests

textDocument/complete: Use "isIncomplete" property

Based on some anecdotal evidence and common sense, the server should not be sending more than ~100 suggestions for completion in one response - instead it should just send the first 100 best matches, set isIncomplete to true and let the client recompute (effectively forcing the user to keep typing to filter suggestions).

The LSP doesn't define a workflow for passing the letters for further filtering of the candidate list, so we will need to read them from the config ourselves - basically read anything to the left of the position until an "enclosing character". It may not be ideal in terms of performance, but for simplicity it's probably better to just leverage the HCL parser there to understand where the content ends.
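
A minimal sketch of the capping logic (the threshold and the assumption that items are already ranked are illustrative):

const maxCandidates = 100

func toCompletionList(items []lsp.CompletionItem) lsp.CompletionList {
	if len(items) <= maxCandidates {
		return lsp.CompletionList{IsIncomplete: false, Items: items}
	}
	// Send only the first N matches (assuming items are already ranked) and
	// mark the list as incomplete so the client re-queries as the user keeps typing.
	return lsp.CompletionList{
		IsIncomplete: true,
		Items:        items[:maxCandidates],
	}
}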

terraform/schema: Consider lock file location version-specific

As advised, the location of the lock file (currently <workspace-dir>/.terraform/plugins/OS_ARCH/lock.json) may change in 0.13, so we should version-guard the logic there and perhaps refuse to initialize the LS entirely (or at least refuse to read schemas) if >=0.13 is detected - until we know for sure where the lock file lives in 0.13+.

Add resource name suggestions to the template

It would be great if resource names were autocompleted. There is another extension, Terraform Autocomplete, that does this well:

(screenshot)

But otherwise the functionality in vscode-terraform is much more full-featured, so I would prefer to stick with this one. The two extensions also conflict a bit, so it doesn't work very well to have both installed.

filesystem: Support incremental updates

The current implementation updates file content in full (from the first to last byte), which may cost more resources than necessary. This is usually not noticeable if users open smaller files, but may become an issue for users with bigger files.

Keeping files small helps maintainability and is a good practice in general (and hence expected in most cases), but we may not want to hurt the UX for the edge cases with big files too much either.

Incremental updates are part of the LSP and both server and client have to opt-in via relevant capabilities:
https://microsoft.github.io/language-server-protocol/specifications/specification-current/#textDocument_synchronization_sc
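
A simplified sketch of applying a single range-based change to an in-memory file, assuming the LSP positions have already been translated into byte offsets:

// applyChange splices newText into content between startByte and endByte,
// avoiding a full-document rewrite.
func applyChange(content []byte, startByte, endByte int, newText string) []byte {
	out := make([]byte, 0, len(content)-(endByte-startByte)+len(newText))
	out = append(out, content[:startByte]...)
	out = append(out, newText...)
	out = append(out, content[endByte:]...)
	return out
}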

Support single files and parent (non-Terraform) folders

Single files

A restriction is currently in place that prevents the Language Server from communicating with an IDE (client) that opened a single file. As a result it forces the user to open whole folders.

This is done by failing initialization with the following error:

Editing a single file is not yet supported. Please open a directory.

Why

The restriction prevents the LS from having to deal with the potentially increased complexity in any logic that concerns the relationship between a file and its plugins (providers).

The LS has responsibility for:

  • finding compatible Terraform binary (happens during initialize via $PATH)
  • finding and storing Terraform version (happens during initialize)
  • retrieving and caching schema for inited providers (happens during initialize)
  • invalidating the cache and retrieving it again when providers change (happens any time via watching the lock file in a plugin folder)

The initialize method is called only once for every opened folder, which is mainly where the lower complexity comes from.

Parent (non-Terraform) folders

This also affects users who wish to open a folder higher up the filesystem hierarchy, e.g. a folder with all Terraform workspaces, as opposed to opening workspaces individually. We do not support such a case today either, but we don't actually raise any error in this case (yet?).

The same reasoning and complexity scope applies here too though and it's likely that both use cases have the same (or very similar) solution.

Local Modules

Users with locally stored modules (where the module's source is a path in the local filesystem) may fall into this category, as such modules tend not to be inited within their own folder (i.e. such modules often don't have their own .terraform folder). Supporting them will rely heavily on how their provider inheritance is set up - i.e. we need to understand where we can get the provider schemas from for every module.

Future

If we support single files, we need to consider the above happening at different times (probably during didOpen) and significantly more often. Hence we need to ensure this scales well in terms of resource consumption - e.g. by ensuring that we never watch/refresh the same plugin directory twice within the context of a running server process. We may also need to decouple the Terraform binary discovery and version handling, to ensure we only do it once per workspace.

Related

This limitation also helps us avoid bugs similar to juliosueiras/terraform-lsp#58

Proposal

  • walker: introduce more limited walking mode, such that it doesn't descend into lower directories (to prevent unexpected outcomes if user opens ~ home dir)
  • initialize handler: avoid indexing if rootUri is empty (single file was open) and store a flag to say "LS is in single file mode"
  • didOpen handler: index on-demand if "LS is in single file mode" - basically just call
    modPath, err := uri.PathFromURI(added.URI)
    if err != nil {
        jrpc2.ServerFromContext(ctx).Notify(ctx, "window/showMessage", &lsp.ShowMessageParams{
            Type: lsp.Warning,
            Message: fmt.Sprintf("Ignoring new workspace folder %s: %s."+
                " This is most likely bug, please report it.", added.URI, err),
        })
        continue
    }
    err = watcher.AddModule(modPath)
    if err != nil {
        svc.logger.Printf("failed to add module to watcher: %s", err)
        continue
    }
    walker.EnqueuePath(modPath)

Allow exposure of Terraform logs

As the LS executes Terraform, we should enable passthrough of TF_LOG and TF_LOG_PATH, but we may want to limit the use of TF_LOG without TF_LOG_PATH, as that would send logs to stderr and print them out as one long unreadable string.
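
A hedged sketch of the env passthrough (the filtering policy is one possible approach, not an agreed design; os and strings are from the standard library):

// terraformEnv returns the environment to pass to the Terraform binary.
// TF_LOG is only passed through when TF_LOG_PATH is also set, so logs end
// up in a file instead of polluting stderr.
func terraformEnv() []string {
	env := os.Environ()
	if os.Getenv("TF_LOG_PATH") != "" {
		return env
	}
	filtered := make([]string, 0, len(env))
	for _, kv := range env {
		if strings.HasPrefix(kv, "TF_LOG=") {
			continue
		}
		filtered = append(filtered, kv)
	}
	return filtered
}

// Usage (illustrative): cmd := exec.Command(tfPath, "providers", "schema", "-json"); cmd.Env = terraformEnv()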

docs: Document usage & installation better

Expand/improve existing documentation for common IDEs - perhaps create a dedicated USAGE.md and link it from the main README.

Installation steps may depend on #31 but there will likely always be two methods:

  • installing via go get
  • downloading a binary

with each method coming with its own upsides and downsides related to reproducibility and update workflow.

bug: "Type" autocomplete not working

I've just installed the Terraform 1.3.8 plugin for my VS Code 1.32.3, but the autocomplete for types is not happening.

I tested by typing the start of resource and tabbing part way through; it then highlights the type word, and when I start typing (either a or g) nothing is suggested.

If I manually type a valid type, e.g. google_compute_instance, a link to the Terraform docs page gets added (https://www.terraform.io/docs/providers/google/r/compute_instance.html in this case), but the autocomplete listing the available types is not working.

textDocument/complete: Complete resource+datasource names

Until hashicorp/terraform#24261 is resolved we'd only be able to complete resource names of providers which are already declared in the config.

That seems like a reasonable temporary limitation though, so it's still worth implementing even without hashicorp/terraform#24261

Example

provider "aws" {
  region = "us-west-2"
}

resource "<HERE>" "" {

}

We should also decide whether/how to complete incomplete declarations, such as

provider "aws" {
  region = "us-west-2"
}

resource "<HERE>

I'm not sure how the HCL parser behaves in this case ^

Finish snippet support for all types and nested blocks

Nested blocks currently don't support snippets at all:

func (p *providerBlock) completionItemForNestedBlock(name string, blockType *BlockType, pos hcl.Pos) lsp.CompletionItem {
	// snippetSupport := p.caps.Completion.CompletionItem.SnippetSupport
	return lsp.CompletionItem{
		Label:            name,
		Kind:             lsp.CIKField,
		InsertTextFormat: lsp.ITFPlainText,
		Detail:           schemaBlockDetail(blockType),
	}
}

and snippets for attributes don't support lists/sets/maps of primitive types:

func snippetForAttr(attr *tfjson.SchemaAttribute) string {
	switch attr.AttributeType {
	case cty.String:
		return `"${0:value}"`
	case cty.Bool:
		return `${0:false}`
	case cty.Number:
		return `${0:42}`
	}
	return ""
}
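
One possible extension (hypothetical, not the current implementation) that also covers lists, sets and maps with primitive element types:

func snippetForAttr(attr *tfjson.SchemaAttribute) string {
	t := attr.AttributeType
	switch {
	case t == cty.String:
		return `"${0:value}"`
	case t == cty.Bool:
		return `${0:false}`
	case t == cty.Number:
		return `${0:42}`
	case (t.IsListType() || t.IsSetType()) && t.ElementType() == cty.String:
		return `["${0:value}"]`
	case (t.IsListType() || t.IsSetType()) && t.ElementType() == cty.Number:
		return `[${0:42}]`
	case t.IsMapType() && t.ElementType() == cty.String:
		return `{ "${1:key}" = "${0:value}" }`
	}
	return ""
}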

Report progress for indexing operations

Background

The language server performs indexing of files, which involves parsing, decoding, running external commands or processing HTTP requests to an external server - all of which can take time. Currently these time-consuming operations run in the background and don't "block" the user, but until indexing is finished the user may experience inaccurate or incomplete IntelliSense data.

It is currently very difficult/impossible for the user to tell when the indexing has finished.

Indexing happens on the following occasions:

  • initialize
  • textDocument/didOpen
  • textDocument/didChange
  • textDocument/didChangeWatchedFiles
  • workspace/didChangeWorkspaceFolders

Proposal

  • Introduce synchronisation primitives as per #1056
  • Report progress on walking and job completion
    • initialize
    • textDocument/didChangeWatchedFiles
    • workspace/didChangeWorkspaceFolders

Implementation Notes

The LSP allows the server to report progress via window/workDoneProgress/* and $/progress.
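
A rough sketch of a progress report, reusing the jrpc2 notification pattern shown elsewhere in this issue list (the payload shape is simplified and the token is assumed to have been created via window/workDoneProgress/create beforehand):

func reportProgress(ctx context.Context, token, message string, percentage int) {
	srv := jrpc2.ServerFromContext(ctx)
	if srv == nil {
		return
	}
	// Errors are ignored here for brevity; a real implementation should log them.
	_ = srv.Notify(ctx, "$/progress", map[string]interface{}{
		"token": token,
		"value": map[string]interface{}{
			"kind":       "report",
			"message":    message,
			"percentage": percentage,
		},
	})
}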

parser: Support 1st class blocks without exposed schema

Current Situation

The LS currently parses attributes and nested blocks within blocks which have their schema exposed in JSON format via terraform providers schema -json. The following are examples which don't fall into that category:

  • variable
  • locals
  • output
  • module
  • terraform

and meta parameters such as

  • provisioners
  • backends
  • count
  • for_each
  • etc.

Parsing of these blocks may therefore need to be approached differently.

How we could tackle this

This will likely require extensive design work to ensure stability and maintainability over time. There are a number of factors to consider, e.g.

  • How will the functionality overlap with https://github.com/hashicorp/terraform/blob/master/configs/parser_config.go
    • we probably won't build a separate single library which decouples this logic yet, but should keep this option in mind as the risk of drifting from Terraform is real
  • Versioning of language parser (internal/terraform/lang)
    • The parser is currently constrained to >=0.12.0 and FindCompatibleParser only finds a single parser. We can use this scaffold to reflect the reality of syntax changing between TF versions
  • How could we leverage the existing HCL parsing logic in internal/terraform/lang/hcl*
    • It is currently coupled with tfjson structs, but we could try to turn these structs into compatible interfaces and have a similar static implementation ("schema dispenser") for the above blocks
  • How do we accommodate cases which have part of the schema dynamic (provider, resource, datasource) and part static (meta parameters like provisioner, count etc.)

For reasons above I would encourage discussing the design of the possible solution(s) here before attempting to raise a PR.

Check and constrain Terraform version

As the language server relies on Terraform output, it should define a range of versions it supports and provide reduced functionality if an unsupported version is found - e.g. only offer the autocomplete candidates we can provide without knowing the schema.
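
A hedged sketch of the version check using github.com/hashicorp/go-version (the exact constraint range is illustrative only):

package tfversion

import (
	goversion "github.com/hashicorp/go-version"
)

// supportedVersions is an illustrative constraint; the real range would be
// decided as part of this issue.
var supportedVersions = goversion.MustConstraints(goversion.NewConstraint(">= 0.12.0, < 2.0.0"))

// IsSupported reports whether the given Terraform version is within the
// supported range, so the LS can fall back to reduced functionality otherwise.
func IsSupported(raw string) (bool, error) {
	v, err := goversion.NewVersion(raw)
	if err != nil {
		return false, err
	}
	return supportedVersions.Check(v), nil
}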

Validate configuration and publish diagnostics

Currently the language server ignores any potential errors or warnings.

In order for completion (and other future features) to work, this is essential, as we need to expect that users will want to use LS features in incomplete and otherwise invalid configs, and HCL has the ability to deal with such scenarios.

That said, the user would benefit from feedback in the form of errors/warnings when their config is not up to scratch.

As part of this we need to assess what diagnostics to publish and when.

What

  • HCL parse errors
  • provider validation
  • anything else that terraform validate may run - perhaps consider even running that directly

terraform validate

terraform validate can parse the config and provide this data in a machine-readable JSON format (via the -json flag).
The only caveat is that it also reports what could be interpreted as false positives in the context of the language server.

This is because the command was originally designed to test preparedness for plan/apply and as such it will also report missing required variables that could be provided as ENV variables in CI or elsewhere - generally somewhere outside the context of an editor and language server.

For these reasons we may need to suppress such errors, or just find a different way of validating configs.

Relatedly, integrating validate at this point would set a new precedent of Terraform actually parsing the config, and Terraform's parsing logic may differ (hopefully it doesn't) from the language server's. It is therefore important to integrate validation with this in mind.
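
A hedged sketch of invoking terraform validate -json and decoding a subset of its diagnostics (the JSON fields shown are a subset and may vary across Terraform versions):

package validate

import (
	"bytes"
	"context"
	"encoding/json"
	"os/exec"
)

type pos struct {
	Line   int `json:"line"`
	Column int `json:"column"`
}

type Diagnostic struct {
	Severity string `json:"severity"` // "error" or "warning"
	Summary  string `json:"summary"`
	Detail   string `json:"detail"`
	Range    *struct {
		Filename string `json:"filename"`
		Start    pos    `json:"start"`
		End      pos    `json:"end"`
	} `json:"range"`
}

type Output struct {
	Valid       bool         `json:"valid"`
	Diagnostics []Diagnostic `json:"diagnostics"`
}

// Run executes `terraform validate -json` in workDir and decodes its output.
// validate exits non-zero when the config is invalid but still prints JSON,
// so decoding is attempted regardless of the exit code.
func Run(ctx context.Context, tfPath, workDir string) (*Output, error) {
	cmd := exec.CommandContext(ctx, tfPath, "validate", "-json")
	cmd.Dir = workDir
	var stdout bytes.Buffer
	cmd.Stdout = &stdout
	runErr := cmd.Run()

	var out Output
	if err := json.Unmarshal(stdout.Bytes(), &out); err != nil {
		if runErr != nil {
			return nil, runErr
		}
		return nil, err
	}
	return &out, nil
}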

When

  • on textDocument/didChange
    • this could be potentially resource-intensive depending on how often Language Clients send changes. Reparsing the configuration on every keystroke may not be a good idea.
    • may not provide the best UX as the user is in the process of crafting a valid config and so they are likely to be aware that it's not valid yet.
  • on textDocument/didSave, textDocument/willSave or textDocument/willSaveWaitUntil
    • this could leave the user puzzled while looking at an invalid config, wondering why it is invalid

Generally, regarding the "when" question, I think some user testing/surveying should happen so we know when users expect their configs to be valid and when they expect to receive feedback if a config is not valid.

Pass logger down to handlers

Handlers should be able to use the centrally configured logger and not have to import log. Perhaps each handler function should instead be a method on a struct which holds the *log.Logger, e.g.

func (h *Handler) Shutdown(ctx context.Context, vs lsp.None) error {
	h.log.Printf("Shutting down server ...")
	return nil
}

Support built-in functions

Background

Terraform configuration language has a number of built-in functions, some of which come from HCL.

Proposal - UX/LSP

Name Completion

When a user requests completion on the RHS (after "="), aside from other candidates (such as references), they're presented with function names compatible with the attribute type, i.e. functions which return string (or "any type") if the attribute is of type string.

(screenshot)

Argument Completion

When the user confirms the function name from completion or types the opening brace, they are provided with any references of the argument type, i.e. if the argument being completed is of string type, then any references of string type are provided.

(screenshot)

Hover

When the user hovers over the function name, they are provided with the signature of the function, brief description and link to the documentation for that function.

(screenshot)

Semantic Tokens

The name of a valid function, such as format, will be highlighted semantically as a function name; the name of an invalid function (such as frmt) will not. That way users can tell the difference between a typo-ed function name and a correct one just from the colours.

Implementation Notes

  • zclconf/go-cty
    • Introduce Description to the Spec and Parameter types of zclconf/go-cty/function
    • Backfill descriptions of functions & args in cty stdlib from docs
  • hashicorp/terraform
    • Backfill descriptions of functions & args in Terraform Core from docs to enable signatures to be exposed via CLI later
  • hashicorp/terraform-schema
    • Introduce Signature (see below) to terraform-schema to loosely mimic cty's function.Spec
    • Codify all TF functions for 0.12+ versions into terraform-schema/internal/funcs/functions.go using the Signature type; see prototype for finding return types
    • Expose FunctionsForVersion(v *version.Version) (map[string]Signature, error) from terraform-schema/schema
  • hashicorp/hcl-lang
    • Add Functions map[string]function.Function to PathContext in hcl-lang
    • Update PathDecoder.CandidatesAtPos in hcl-lang/decoder
    • Update PathDecoder.HoverAtPos in hcl-lang/decoder
    • Update PathDecoder.SemanticTokensInFile in hcl-lang/decoder
  • hashicorp/terraform-ls
type FuncSignature struct {
	Name            string
	Description     lang.MarkupContent
	Parameters      []FuncParameter
	ActiveParameter uint
}

type FuncParameter struct {
	Name        string
	Description lang.MarkupContent
}

type Signature struct {
	Params      []function.Parameter
	VarParam    *function.Parameter
	ReturnTypes []cty.Type
	Description string
}

Implement context passing/wrapping

Each handler should ideally be able to access the following via context:

  • virtual Filesystem
  • server readiness (whether it's ready to receive requests, ready to be initialized, or ready to be shut down)
  • client's LSP capabilities (e.g. whether it supports snippets)

Relatedly each handler should only be given the necessary access it requires - e.g.

  • initialize gets write access to client's LSP capabilities
  • initialized gets write access to server readiness
  • textDocument/didChange gets write access to the Filesystem
  • textDocument/* get read-only access to the relevant part of TextDocumentClientCapabilities, e.g. textDocument/completion gets CompletionClientCapabilities
  • shutdown gets write access to server readiness
  • exit gets capability to terminate the whole LS (process-wise)

This is not meant to implement any sort of ACL, but merely to make it easier to reason about what each handler does and should do with regards to the LSP spec.
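
A minimal sketch of the context wrapping (the keys, the Filesystem interface and the access-scoping helpers are hypothetical):

package handlers

import (
	"context"
	"errors"
)

// Filesystem is a placeholder for the virtual filesystem interface handlers would use.
type Filesystem interface {
	GetDocument(uri string) ([]byte, error)
}

type ctxKey string

const (
	ctxFilesystem ctxKey = "filesystem"
	ctxClientCaps ctxKey = "clientCapabilities"
)

// WithFilesystem grants a handler access to the virtual filesystem.
func WithFilesystem(ctx context.Context, fs Filesystem) context.Context {
	return context.WithValue(ctx, ctxFilesystem, fs)
}

// FilesystemFromContext fails loudly if a handler was not granted filesystem access.
func FilesystemFromContext(ctx context.Context) (Filesystem, error) {
	fs, ok := ctx.Value(ctxFilesystem).(Filesystem)
	if !ok {
		return nil, errors.New("no filesystem in context")
	}
	return fs, nil
}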

Exit server when client disconnects

There are LSP clients in the wild which unfortunately do not send shutdown or exit under certain conditions.

e.g. Sublime LSP doesn't correctly shut down any server if Sublime Text is closed when all matching files were already closed before (which in itself doesn't shut down the server either).

What happens on the server side in this case is the following:

2020/03/12 18:57:09 server.go:223: Checking request for "textDocument/didClose": {"textDocument":{"uri":"file:///private/var/workspace/tf-test/github/main.tf"}}
2020/03/12 18:57:09 rpc_logger.go:29: Incoming notification for "textDocument/didClose": {"textDocument":{"uri":"file:///private/var/workspace/tf-test/github/main.tf"}}

2020/03/12 18:57:11 server.go:384: Server signaled to stop with err=EOF
2020/03/12 18:57:11 server.go:136: Reading next request: EOF

When the server runs in stdin/stdout mode, it seems to actually exit automatically on EOF, but it would be useful to distinguish between a graceful shutdown via RPC and a forced EOF-triggered one.

When the server runs in TCP mode it does not exit.

Implement file logging

Define a sensible strategy for file logging in terms of:

  • log levels
  • file path (absolute vs relative; relative to where)
  • configuration (flags + default behaviour)
  • appending (and risking running out of disk space) vs. starting empty (and risking missing messages older than the lifetime of the LS process)
  • separation based on Language Clients (assuming that a server can be used by more than 1 client at a time)
  • separation based on Language Server instances (assuming that a server may run in multiple instances, even launched by the same Language Client)

We should consider either enabling file logging by default or creating a Troubleshooting section of the docs which explains clearly how to enable it when folks decide to report bugs.
