
lms's Introduction



lms - Command Line Tool for LM Studio

Built with lmstudio.js

Installation

lms ships with LM Studio 0.2.22 and newer.

To set it up, run the built-in bootstrap command like so:

  • Windows:

    cmd /c %USERPROFILE%/.cache/lm-studio/bin/lms.exe bootstrap
  • Linux/macOS:

    ~/.cache/lm-studio/bin/lms bootstrap

To check if the bootstrapping was successful, run the following in a 👉 new terminal window 👈:

lms

Usage

You can use lms --help to see a list of all available subcommands.

For details about each subcommand, run lms <subcommand> --help.

Here are some frequently used commands (a combined example follows the list):

  • lms status - To check the status of LM Studio.
  • lms server start - To start the local API server.
  • lms server stop - To stop the local API server.
  • lms ls - To list all downloaded models.
    • lms ls --detailed - To list all downloaded models with detailed information.
    • lms ls --json - To list all downloaded models in machine-readable JSON format.
  • lms ps - To list all loaded models available for inferencing.
    • lms ps --json - To list all loaded models available for inferencing in machine-readable JSON format.
  • lms load --gpu max - To load a model with maximum GPU acceleration.
    • lms load <model path> --gpu max -y - To load a model with maximum GPU acceleration without confirmation.
  • lms unload <model identifier> - To unload a model.
    • lms unload --all - To unload all models.
  • lms create - To create a new project with the LM Studio SDK.
  • lms log stream - To stream logs from LM Studio.
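
Putting a few of these together, a typical session (using only the commands listed above; the model path is a placeholder) might look like:

lms server start
lms load <model path> --gpu max -y
lms ps
lms unload --all
lms server stop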

lms's People

Contributors

ryan-the-crayon, yagil


lms's Issues

Uninstall server on Ubuntu

Greetings.
I would like to completely remove the server, models, and application from my Ubuntu Linux system.
Deleting the AppImage was easy enough, but the server is bootstrapped.

界面字体

请问怎么修改字体大小和字体种类

Windows: using --verbose prevents ability to load model

Hi and thanks for creating this utility. I was putting together a batch file to run LM Studio so I can connect from a networked machine and was trying out some of the various options to get it started.

My goal is simple: Start the server, load a model, show the logs as activity comes in. Here is the batch file I put together:

@echo off
echo LMS Starting...
lms server start --cors --verbose
timeout 5
echo Loading Model...
lms load <modelname> --gpu max -y
echo Logging LMS stream...
lms log stream

...and executing it via another batch file which kicks off some other processes:

start "LMS" cmd /k lms_start.bat

It would hang at Loading Model.... When I removed --verbose, it worked as expected.

EDIT: I was wrong, and --verbose is not the culprit! It seems simply trying to load the model when launched in a new window causes it to hang. If I run all of the above commands manually, everything works as expected. But when run from a batch file, it hangs when loading the model. I'll try a few more scenarios and will report back to see if one of the flags is the cause.
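One scenario I plan to test is giving the load step its own waited-on console, in case the detached window is what trips it up. This is untested, purely a sketch (start /wait is standard cmd syntax; everything else is unchanged from the batch file above):

@echo off
echo LMS Starting...
lms server start --cors --verbose
timeout 5
echo Loading Model...
:: untested variant: run the load step in its own console and wait for it
start /wait "LMS load" cmd /c lms load <modelname> --gpu max -y
echo Logging LMS stream...
lms log stream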

I thought about deleting this outright but wanted to bring it to the dev's attention as it is still unexpected behavior.

LM Studio + n8n not interchangeable with OpenAI + n8n

Hi, I am trying to employ LM Studio with n8n as a local alternative to OpenAI, as a dev/testing tool for clients. It works somewhat for basic HTTP requests where I format the prompt and parameters, but it does not play nice with n8n's Langchain agent implementations. In most implementations (see below), LM Studio does not call tools or properly format output. I do not think it is entirely due to model quality, but rather some mismatch between the OpenAI and LM Studio APIs, under-the-hood n8n code/protocol, or perhaps some agents simply aren't supported on LM Studio or with specific models. I have used Mistral and Llama with LM Studio. All tools work as intended with the OpenAI API.

Agent          | Calls Tools | Correct Output Format
Conversational | N/A         | No
ReAct          | Yes         | No
Plan + Execute | Yes         | No
Tools          | No          | No
Function       | No*         | Yes

* It seems that the Function agent in LM Studio does attempt to obtain information by function-calling the tools, but it leaves no log to validate this. OpenAI will leave a log of each tool called and the input/output.

IMO, the best agent n8n has implemented is the Tools agent - it is the most efficient and dependable. It would work the best with locally hosted, smaller models. However, it does not function well at all with LM Studio, perhaps because it passes the tool schema to the agent without injecting it into the prompt?

The second best is the Functions agent, which I at least have outputting the correct format with LM Studio; that's a start. Plan + Execute is only useful for the plan, and ReAct is a train off the rails.

If the n8n Tools agent could function properly with LM Studio, you would have a powerful tool for development.

If there is any interest in investigating this and coming up with a solution so LM Studio is interchangeable with OpenAI on n8n, I'd be happy to share logs, code, whatever. Thanks!
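
For reference, this is roughly the shape of the OpenAI-style tools request these agents send. A minimal sketch, assuming the server is on its default local address (localhost:1234; adjust the port to yours) and using a hypothetical get_weather tool; the tools schema itself is the standard OpenAI one:

curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "mistral",
    "messages": [{"role": "user", "content": "What is the weather in Berlin?"}],
    "tools": [{
      "type": "function",
      "function": {
        "name": "get_weather",
        "description": "Hypothetical example tool: current weather for a city",
        "parameters": {
          "type": "object",
          "properties": {"city": {"type": "string"}},
          "required": ["city"]
        }
      }
    }]
  }'

If LM Studio ignores the tools array instead of answering with a tool_calls message, that would explain why the Tools agent fails while the prompt-injecting agents (ReAct, Plan + Execute) at least attempt calls.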

Warn about writing to rc files

I always find it nice when install scripts that want to make changes to any config files ask/warn first. In the future I figure there'll be a default install and an advanced install process that would ask me if I'd like it to write to any rc files automatically, else print out what I need to do manually.

For now, it'd be cool if there was an additional note during or after the install process telling users which rc file was written to, so that those who manage their own dotfiles know to go check it out and update it manually if need be.

A message in that dimmed font color (grey for me), in the post-install message, something that makes sense to those who manage their own dots but doesn't scare those who don't. "lms was added to your $PATH", for example, may be all that's needed, so you're not even potentially adding friction by saying a file was written to.

Overall, a pretty low-priority request in my opinion, but I was gushing too much in my previous comment and you're the ones who set the bar so high with all you've been doing anyways so...

(and feel free to close/ignore/etc. if you disagree or don't want this left open)

Crashes with Llama 3.1 after branching a conversation and loading (NVIDIA, 12 GB VRAM)

After branching a conversation and loading Llama 3.1, it consistently crashes on the first run.

{
  "data": {
    "memory": {
      "ram_capacity": "31.90 GB",
      "ram_unused": "18.20 GB"
    },
    "gpu": {
      "gpu_names": [
        "NVIDIA GeForce RTX 3060"
      ],
      "vram_recommended_capacity": "12.00 GB",
      "vram_unused": "10.98 GB"
    },
    "os": {
      "platform": "win32",
      "version": "10.0.22621"
    },
    "app": {
      "version": "0.2.31",
      "downloadsDir": "C:\\Users\\jason\\.cache\\lm-studio\\models"
    },
    "model": {}
  }
}

GPU Selection + Model Data Location

Upon first launch I started digging through the options, looking for a few key things. Two of those were the data location and GPU selection.

I eventually found that if I was downloading a model I could choose a new location, but then it never loads from that location unless you download it again. Manually moving the original download to the new location causes a conflict "freak out."

As for GPU selection, there doesn't appear to be such a thing. Worse, it seems to ignore all my GPUs and run solely on my CPU.

Running version LM_Studio-0.2.27.AppImage

[Feature Request] Enhance Chat Functionality with Generation History and Editing Tools

Hi,

Firstly, I want to express my gratitude for creating such an exceptional product like LM Studio. It has been a few weeks since I started using it, and I’m thoroughly impressed with its capabilities.

I would like to suggest some enhancements for the chat section that could significantly improve the user experience. Here are my thoughts:

  1. AI Text Generation Drafts: Similar to the features available in ChatGPT and Gemini, it would be highly beneficial to have the ability to generate multiple drafts and select the preferred one. This would eliminate the need to copy text to an external editor like MS Word, which is quite a cumbersome process.

  2. Grammar and Spelling Checker: While the software currently highlights incorrect spelling, it lacks the feature to provide correction suggestions. Incorporating this would streamline the writing process and reduce the need for external grammar checking tools.

  3. Color Coding for User and AI Prompts: The ability to color-code prompts would greatly enhance text visibility, especially after long chat sessions. It would also prevent accidental deletion of important text and make it easier to identify unwanted text.

  4. Undo for Deleted Generations: An ‘undo’ feature for accidental deletions of AI-generated text would be a lifesaver. It’s frustrating to lose good content with no way to recover it.

  5. Integrated Word Editing Feature: Incorporating a word editing feature directly within LM Studio would be a game-changer. It would allow users to draft and edit text without relying on external text editors. As someone who frequently uses LLM for writing stories and reports, I believe a UI tailored for this purpose would set LM Studio apart from other local LLM backends.
    The inclusion of a comprehensive text formatting toolkit within LM Studio would greatly enhance the writing and editing experience. Features such as indentation, bullet points, bold fonts, italics, and other basic text editing capabilities would allow for more sophisticated document creation. This would facilitate users in crafting well-structured and visually appealing content directly within the platform.

Additionally, the ability to utilize multiple AI models for specialized tasks would be revolutionary. For instance, having a dedicated Editor Model that can automatically review and refine content generated by another AI in the Writer Role would streamline the content creation process. Ideally, this could be implemented in a separate window, enabling real-time collaboration between the models to produce polished and ready-to-publish text. Alternatively, a model capable of summarization could summarize the chat between the user and another model (the writer), then feed the summary directly to the writer as an appended prompt.

This dual-model functionality could potentially transform LM Studio into a powerhouse for content creators, providing an all-in-one solution for generating, editing, and finalizing written material.

I’m confident that these enhancements would not only improve my workflow but also benefit the wider LM Studio user community.

Thank you for your dedication to improving LM Studio.

Best regards,

Adding Logprobs in Chat completion response

Hi,

Will logprobs be added to the LM Studio chat completion response?

Currently, when using the OpenAI .NET library, logprobs is a required property on CreateChatCompletionResponseChoices, so its absence results in an error:

    [global::System.Text.Json.Serialization.JsonPropertyName("logprobs")]
    [global::System.Text.Json.Serialization.JsonRequired]
    public required global::OpenAI.CreateChatCompletionResponseChoicesLogprobs? Logprobs { get; set; }

Thank you.

Running in headless mode

Is it possible to run LM Studio in headless mode when running the API? The use case is running it on a Linux instance without a UI.
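
The lms output quoted in another issue further down this page mentions a --no-launch flag ("Disable auto-launching via the --no-launch flag"); assuming that flag also behaves on a machine with no display, a sketch of a starting point would be:

lms server start --no-launch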

LM Studio does not shut down after being closed.

Running the latest AppImage on latest openSUSE Tumbleweed.

Closing the app leaves the app running in the background, requiring me to kill it from the command line.

Since the app is taking 100% CPU in this condition, leaving it running in the background is not an option.

[High Priority Feature] Please add support for 8-bit and 4-bit caching!

Hello team,

LM Studio uses recent builds of llama.cpp, which already supports a 4-bit and 8-bit cache, so I don't understand why LM Studio does not incorporate it yet.
The benefits are tremendous, since it improves generation speed. It also helps with using a higher quantization.

To give you an example: I run aya-23-35B-Q4_K_M.gguf in LM Studio at a speed of 4.5 t/s, because the maximum number of layers I can load on my GPU with 24 GB of VRAM is 30, and Aya has 41 layers. In the Oobabooga WebUI, with 4-bit cache enabled, I can load all layers into my VRAM, and the speed jumps to 20.5 t/s. That's a significant increase in performance (nearly five-fold).

This should be your main priority, since you are actually pushing your customers toward a different platform. Right now, I don't use LM Studio when I want to run a larger model, which is unfortunate since I am your biggest fan.

Please, solve this issue ASAP.

Is there a plan to support NPU?

Snapdragon X Elite and Core Ultra have NPUs built-in. However, it seems that the capabilities of the NPU can only be utilized with software that supports it. Does LM Studio have any plans to support these NPUs?

With the new Copilot+PC standard computers, the GPU is integrated into the CPU, and the VRAM shares memory with the main RAM. As a result, the graphics capability is a concern compared to laptops with NVIDIA GPUs. Therefore, even though these computers are touted as AI-ready, some use NPUs instead of GPUs. It would be very helpful if LM Studio could also support NPUs.

Server not running


[2024-05-17 14:45:46.998] [INFO] [LM STUDIO SERVER] Stopping server..
[2024-05-17 14:45:46.999] [INFO] [LM STUDIO SERVER] Server stopped
[2024-05-17 14:45:46.999] [INFO] [LM STUDIO SERVER] Verbose server logs are ENABLED

The server does not run after I click Start Server. There is no more detailed info about it.

Any suggestions?

Thank you.

Jarvis

Not a CLI issue, but a Wayland GUI blur issue

I couldn't find a proper place for this, so I'm creating the issue here. I am running Arch Linux with Hyprland, which runs on Wayland, and the issue I'm facing with LM Studio is blur: the text it shows is blurred. I know that in Wayland's early years there were scaling issues that caused blur in almost every Electron-based app (and in some GTK and Qt apps too), but things are in a pretty good state now. I can run every other app, be it Chromium, VS Code, Element, Discord, or Firefox, without the blur issues of the past (some people still hit them today for lack of the proper additional settings, but I have those set, so those apps work fine). LM Studio, however, still shows blurred text, and I don't know why. Can you guide me on what else I can do to fix it?
Thank you

Error: Cannot get best backend options....

[Screenshot 2024-06-10 131621]

I've just downloaded it from the lmstudio.ai website for Windows and installed it, but when I try to open it, an error suddenly appears like the one in the picture. FYI, my GPU is Intel(R) UHD Graphics. Is it unavailable for my GPU hardware, or maybe there is some file I haven't installed yet in my folder?

LMS Hangs After Displaying "Verification succeeded. The server is running on port <number>" and Doesn't Load Model

Describe the bug
lms hangs after displaying "Verification succeeded. The server is running on port 11435." and doesn't load a model, though LM Studio itself opens successfully.

To Reproduce
Steps to reproduce the behavior (when LM Studio is closed):

  1. Open PowerShell 7.4.3 (pwsh)
  2. Type: lms server start && lms load MaziyarPanahi/DARE_TIES_13B-GGUF --gpu max -y
  3. The LM Studio GUI will open. PowerShell output will be:
PS C:\Users\unnamed> lms server start && lms load MaziyarPanahi/DARE_TIES_13B-GGUF --gpu max -y
Attempting to start the server on port 11435...
Launching LM Studio minimized... (Disable auto-launching via the --no-launch flag.)
Requested the server to be started on port 11435.
Verifying the server is running...
Verification succeeded. The server is running on port 11435.

LM Studio log:

[2024-06-27 22:41:21.620] [INFO] [LM STUDIO SERVER] Verbose server logs are ENABLED
[2024-06-27 22:41:21.641] [INFO] [LM STUDIO SERVER] Success! HTTP server listening on port 11435
[2024-06-27 22:41:21.642] [INFO] [LM STUDIO SERVER] Supported endpoints:
[2024-06-27 22:41:21.642] [INFO] [LM STUDIO SERVER] ->	GET  http://localhost:11435/v1/models
[2024-06-27 22:41:21.642] [INFO] [LM STUDIO SERVER] ->	POST http://localhost:11435/v1/chat/completions
[2024-06-27 22:41:21.643] [INFO] [LM STUDIO SERVER] ->	POST http://localhost:11435/v1/completions
[2024-06-27 22:41:21.643] [INFO] [LM STUDIO SERVER] ->	POST http://localhost:11435/v1/embeddings     <------------ NEW!
[2024-06-27 22:41:21.644] [INFO] [LM STUDIO SERVER] Logs are saved into C:\tmp\lmstudio-server-log.txt
  4. Type lms ls in a new PowerShell window. It hangs and doesn't show anything.
  5. Make a curl query:
PS C:\Users\unnamed> curl http://localhost:11435/v1/models
{
  "data": [],
  "object": "list"
}

Expected behavior
The model should load successfully and be listed when queried.

System Info:

all model downloads are failing

Hi, please see below.

I can download these directly with no problem, but when I try through LM Studio it fails.
I'm on Win10 and have been using your amazing software for a few good months now; this started happening over the last week. I tried updating, same issues.

You are on the latest version.
Current Version: 0.2.25

In the case of the screenshot below, I tried downloading https://huggingface.co/crusoeai/dolphin-2.9.1-llama-3-8b-GGUF/resolve/main/dolphin-2.9.1-llama-3-8b.Q5_K_M.gguf

[screenshot]

bootstrap process documentation or similar?

Hey,
Since it lives in a package that isn't part of this repo, is there a way to either document what bootstrap does, mirror that code here, or do something else that accomplishes the same goal?

Either way, I love the project and the way you're all growing it. Thank you for all your time and effort!

FEATURE: Containerize lms to create a DX similar to Ollama

As AI is obviously moving so fast, driven in no small part by projects like this, I wonder if we could get a dockerized version so that all LMS features can be used through the CLI in a container.

This would help us limit the number of different tools we have to install in our host OS.

Thanks <3

Max context window

There are models with context windows much higher than the default value, going up to 1M. However, despite showing that a model supports a 131k or 1M window, trying to load it with a higher value gives an error. Once it fails to load with the max supported context, lowering the value doesn't help either; the model just continuously gives the same error until it is loaded with the default value or similar. Sometimes it doesn't load at all.

I know loading with a larger window takes more RAM, but 64 GB of RAM should be able to load a 7B/3B model with a 131k+ window.

Besides just showing the max window supported by the model, it should also show the max window that can be loaded given the machine's hardware. This way, models can be used to their max potential on any given machine.

Currently, while downloading any model, it does show the amount of RAM required to load the model, but that might be for the default context value of 2k/4k.

[REINSTALL FIX] After launching lms in the terminal, the LM Studio GUI includes the prompt format in its output

After trying out the lms CLI, I returned to the GUI to start a server as normal, but it kept including the prompt format together with the content. So instead of the output looking like this:

{
  "role": "user",
  "content": "Hello, introduce yourself to someone opening this program for the first time. Be concise."
},
{
  "role": "assistant",
  "content": "Hi there! I'm your personal assistant designed to help you with a variety of tasks. Whether you need information, reminders, or some creative inspiration, I'm here to assist you every step of the way. Feel free to ask me anything, and I'll do my best to provide helpful answers."
}

it looked like this:

{
  "role": "user",
  "content": "<|im_start|>\nHello, introduce yourself to someone opening this program for the first time. Be concise.<|im_end|>\n"
},
{
  "role": "assistant",
  "content": "<|im_start|>\nHi there! I'm your personal assistant designed to help you with a variety of tasks. Whether you need information, reminders, or some creative inspiration, I'm here to assist you every step of the way. Feel free to ask me anything, and I'll do my best to provide helpful answers."
}

Downloading the LM Studio installer and letting it do its thing worked to reset it, and it outputs just the text again. It seems to clash with the CLI though, so every time you use the CLI you have to re-run the GUI installer to stop the prompt formats from being included in your outputs.

zsh: command not found: lms (Mac)

~/.cache/lm-studio/bin/lms bootstrap works fine on Mac, but then:
$ lms
zsh: command not found: lms

export PATH="$HOME/.cache/lm-studio/bin:$PATH" works fine, and I can add it to my profile, but I'm not sure if that's the idea(?)
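
In the meantime, a sketch of the manual workaround, persisting that same PATH line to zsh's init file (this assumes bootstrap only wrote to .bash_profile, as the lms bootstrap issue further below reports):

echo 'export PATH="$HOME/.cache/lm-studio/bin:$PATH"' >> ~/.zshrc
source ~/.zshrc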

Please Add an Undo Option for Deleted Messages!

Hello,

Firstly, I'd like to express my gratitude for your hard work and dedication to developing LM Studio. As a user who has transitioned from Oobabooga webui, I genuinely appreciate the features and improvements that your app offers.

However, there is a particular feature that I believe would greatly enhance the user experience, especially for those of us who enjoy writing stories and emails using the platform. Currently, if I accidentally delete a prompt, there is no way to restore it, which can be quite frustrating. The Ctrl+Z shortcut only undoes the most recent text change made, which isn't helpful when attempting to revert changes made earlier in the system prompt.

Adding an "undo delete" feature would significantly reduce this frustration and allow users like myself to maintain their workflow without fear of losing valuable prompts. I strongly urge you to consider adding this functionality as soon as possible.

Furthermore, incorporating a spelling correction option into the app would further improve the user experience. This additional feature would help ensure that written content is free from errors and presented in a polished manner, saving users time and effort in proofreading their work.

Thank you for your attention to these requests, and I look forward to seeing these features implemented in future updates of LM Studio.

Best regards,

`lms bootstrap` assumes the user uses bash

When running lms bootstrap the lines

# Added by LM Studio CLI tool (lms)
export PATH="$PATH:/Users/xxxxx/.cache/lm-studio/bin"

get added to .bash_profile; however, the default shell on macOS is zsh, which uses .zshrc.

A possible fix for this problem would be to print on the terminal the exact line to add to the init script, so that the user can apply it to the correct file.
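
For example, a sketch of what such a printout could contain (the PATH line is the one bootstrap already writes; the per-shell file names are the usual defaults):

# Added by LM Studio CLI tool (lms)
# zsh users: add this line to ~/.zshrc; bash users: ~/.bash_profile
export PATH="$PATH:/Users/xxxxx/.cache/lm-studio/bin"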

CORS on

Is there an option to run the server with CORS on?
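
(For what it's worth, the batch files in other issues on this page pass a --cors flag when starting the server:)

lms server start --cors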

Can Proxy Configuration Be Supported?

Hello,

I would like to request support for configuring a proxy. In some environments, network requests need to be routed through a proxy server for access.

Screenshot: [attached]

BUG: Loading a model via CLI ignores "n_gpu_layers" parameter in config preset

I have set "n_gpu_layers": -1 in the preset I've selected as the default for a model.
However, when I use the CLI to load that model (lms load --identifier llama3-8b-8k, select "Meta-Llama-3-8B-Instruct-Q8_0.gguf", press enter), the number of GPU layers used is 10.
Loading with the flag --gpu max is not a problem, but not knowing which of the config items are used and which are ignored is.
(I can't tell from the logs which params the model is loaded with.)

Statefulness | Persistence

This would be better as a discussion post but discussions are not enabled.

I was wondering if there is a way to enable statefulness/persistence. Without some type of "long-term memory", this seems more like a cool party trick than anything of use. Having to rehash previously discussed issues/topics is a huge waste of time and processing power when trying to use a model on a project over the course of more than a day... unless you have a dedicated 24/7 machine so you simply never lose the original session, and some hella UPSs in case of a power event. ;p

Is there a way to do this? Am I missing something? Is this a future feature that isn't implemented yet? Am I doing it wrong? Or is this a trick to make the AI forget all the horrible things you told it, so that when the robot uprising happens they don't come for you because they don't remember how abusive you were?

Feature request: zoom level

I have high-DPI 4k monitors, and using LM Studio makes the components seem a bit too small. It would be very nice to have a zoom level feature. Unless it is there and I didn't find it :)

Linux LLM folder

Where does the LM Studio AppImage (Linux version) store downloaded LLMs?

Redundant data in the new API response

Hi. I have upgraded LM Studio from the previous version to the new version, including the new API format. I have a RAG framework that I put in front of LM Studio. The system has now started to return, along with the response, all the instructions I entered in the RAG framework.

What can I correct to avoid this response?

This did not happen on the previous version of LM-Studio.

How to set which GPU to use?

If there is more than one GPU on the machine, where is the setting to point LM Studio at a specific GPU? Thanks.

FEATURE: A "load" command for loading the embedding model

I've made a quick bat file to automatically start the LM Studio server, load the model, and then start AnythingLLM.
This works, with one caveat: I can't start the embedding model in LM Studio this way.

@echo off
::Starting LMS local API server
lms server start --cors
::Loading a model
lms load Meta-Llama-3-8B-Instruct-Q8_0.gguf --gpu max --identifier Llama-3-8B-Q8-8K
::Start Anything LLM
cmd /c start "" "%localappdata%\Programs\anythingllm-desktop\AnythingLLM.exe"
exit

It would be nice if a future version of lms could load that as well.

Feature Request: Host LM Studio on server and Multi-GPU Support

Feature Request 1: Host LM Studio on Ubuntu Server
I would like to request the ability to host LM Studio on a server (not just the local API). This feature would allow users to access the application via HTTP, with all components running directly on the server.

Feature Request 2: Multi-GPU Support with LLM Split
Support for multiple GPUs with an LLM split feature. This would enhance performance and efficiency when handling large models or workloads. I have a few AMD 580 8GB cards, and it would be great to use them for this.

Vision Adapter not loading when model loaded with lms

2024-07-28 15:02:45,079 - ERROR - LLM completion error: Error code: 400 - {'error': '<LM Studio error> Vision model is not loaded. Cannot process images. Suggestion: Make sure to load both the primary and the Vision Adapter models. See more info in https://huggingface.co/collections/lmstudio-ai/vision-models-gguf-6577e1ce821f439498ced0c1. Error Data: n/a, Additional Data: n/a'}

The model responds just fine to requests for image descriptions when run from the GUI.

My model folder, llava-1.6-mistral-7b-gguf, has two files in it: llava-v1.6-mistral-7b.Q8_0.gguf and mmproj-model-f16.gguf.

PS C:\Users\Administrator> lms load --gpu=1.0
I LM Studio is not running in server mode. Starting the server...
I Successfully started the server and verified it is running.

! Use the arrow keys to navigate, type to filter, and press enter to select.

? Select a model to load | Type to filter...
>  cjpais/llava-1.6-mistral-7b-gguf/llava-v1.6-mistral-7b.Q8_0.gguf (7.70 GB)

Only one model is listed, the same as in the GUI.
