microsoft / semantic-kernel
Integrate cutting-edge LLM technology quickly and easily into your apps

Home Page: https://aka.ms/semantic-kernel

License: MIT License

Batchfile 0.01% Shell 0.04% C# 64.39% Makefile 0.02% Python 21.56% Jupyter Notebook 2.78% Java 11.05% Handlebars 0.14% PowerShell 0.02%
ai artificial-intelligence llm openai sdk

semantic-kernel's Introduction

Semantic Kernel

Status

  • Python
    Python package
  • .NET
NuGet package · dotnet Docker · dotnet Windows
  • Java
Java CICD Builds · Maven Central

Overview

License: MIT Discord

Semantic Kernel is an SDK that integrates Large Language Models (LLMs) like OpenAI, Azure OpenAI, and Hugging Face with conventional programming languages like C#, Python, and Java. Semantic Kernel achieves this by allowing you to define plugins that can be chained together in just a few lines of code.

What makes Semantic Kernel special, however, is its ability to automatically orchestrate plugins with AI. With Semantic Kernel planners, you can ask an LLM to generate a plan that achieves a user's unique goal. Afterwards, Semantic Kernel will execute the plan for the user.

Please star the repo to show your support for this project!

Orchestrating plugins with planner

Getting started with Semantic Kernel

The Semantic Kernel SDK is available in C#, Python, and Java. To get started, choose your preferred language below. See the Feature Matrix for a breakdown of feature parity between our currently supported languages.


The quickest way to get started with the basics is to get an API key from either OpenAI or Azure OpenAI and run one of the C#, Python, or Java console applications/scripts below.

For C#:

  1. Create a new console app.
  2. Add the Semantic Kernel NuGet package Microsoft.SemanticKernel.
  3. Copy the code from here into the app's Program.cs file.
  4. Replace the configuration placeholders for the API key and other params with your key and settings.
  5. Run with F5 or dotnet run.

For Python:

  1. Install the pip package: python -m pip install semantic-kernel.
  2. Create a new script, e.g. hello-world.py.
  3. Store your API key and settings in a .env file as described here.
  4. Copy the code from here into the hello-world.py script.
  5. Run the python script.
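Purely as an illustration of what step 3 amounts to, here is a minimal hand-rolled .env loader; the helper name is hypothetical and not part of the SDK, which ships its own settings helpers:

```python
from pathlib import Path

def load_dotenv_settings(path: str = ".env") -> dict:
    """Parse simple KEY=VALUE lines from a .env file, skipping blanks and comments."""
    settings = {}
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        settings[key.strip()] = value.strip().strip('"')
    return settings

# Example .env contents:
#   OPENAI_API_KEY="sk-..."
#   OPENAI_ORG_ID="org-..."
```

The script would then read its key and settings from the returned dict instead of hard-coding them.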

For Java:

  1. Clone the repository: git clone https://github.com/microsoft/semantic-kernel.git
    1. To access the latest Java code, clone and checkout the Java development branch: git clone -b java-development https://github.com/microsoft/semantic-kernel.git
  2. Follow the instructions here

Learning how to use Semantic Kernel

The fastest way to learn how to use Semantic Kernel is with our C# and Python Jupyter notebooks. These notebooks demonstrate how to use Semantic Kernel with code snippets that you can run with a push of a button.

Once you've finished the getting started notebooks, you can then check out the main walkthroughs on our Learn site. Each sample comes with a completed C# and Python project that you can run locally.

  1. 📖 Overview of the kernel
  2. 🔌 Understanding AI plugins
  3. 👄 Creating semantic functions
  4. 💽 Creating native functions
  5. ⛓️ Chaining functions together
  6. 🤖 Auto create plans with planner
  7. 💡 Create and run a ChatGPT plugin

Finally, refer to our API references for more details on the C# and Python APIs:

  • C# API reference
  • Python API reference (coming soon)
  • Java API reference (coming soon)

Join the community

We welcome your contributions and suggestions to the SK community! One of the easiest ways to participate is to engage in discussions in the GitHub repository. Bug reports and fixes are welcome!

For new features, components, or extensions, please open an issue and discuss with us before sending a PR. This is to avoid rejection as we might be taking the core in a different direction, but also to consider the impact on the larger ecosystem.

To learn more and get started:

Contributor Wall of Fame

semantic-kernel contributors

Code of Conduct

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.

License

Copyright (c) Microsoft Corporation. All rights reserved.

Licensed under the MIT license.

semantic-kernel's People

Contributors

adrianwyatt, alexchaomander, amsacha, awharrison-28, brunoborges, crickman, dehoward, dependabot[bot], dluc, dmytrostruk, dsgrieve, eavanvalkenburg, gitri-ms, glahaye, joe-braley, johnoliver, joowon-dm-snu, krzysztof318, lemillermicrosoft, markwallace-microsoft, milderhc, mkarle, moonbox3, rogerbarreto, sergeymenshykh, shawncal, stephentoub, taochenosu, teresaqhoang, web-flow

semantic-kernel's Issues

Can one use Skills with AddOpenAIChatCompletion?

Describe the bug
It looks like ChatCompletion was introduced in the latest check-in; however, it doesn't seem to support skills. Is that by design, or will it be enabled in the future?

Desktop (please complete the following information):

  • NuGet Package Version 0.9.61

Hooks for logging/replay of skills.

Since skills can be built out of other skills, I would like to get detailed logging information on what skills get called in the process of running a skill and what values they produced, both for understanding what happened and for replaying when unit testing skills. For example, given a skill calling out to another skill expensiveComputation like

The answer is {{expensiveComputation}}. What is the question?

I would like to have a way to get a log showing the value returned by the native skill expensiveComputation (the actual prompt is given to the ILogger, although it would also be nice to have a structured way to get it). Something like

{
    "skills": {
        "expensiveComputation": "42"
    },
    "prompt": "The answer is 42. What is the question?"
}

Although if expensiveComputation were a semantic skill with its own nested computations, then maybe there could be further nesting in the log. Furthermore, once this log existed it could be used to mock those skills so the computation wouldn't have to be redone when wanting to test other parts of the system. In addition to avoiding expensive computation, this could be particularly useful if those skills require filesystem or other access that isn't available when testing.

I do not think the specific log format is appropriate to this project. Instead, I was looking for the lowest impact way to adjust the APIs to allow for this functionality to be built in client code. It looks like PromptTemplateEngine is where the relevant work is done, but simply putting in my own version without modifying this project doesn't work as it relies on internal members of Block to perform its computations. One option would be to make those members public, but in the interest of limiting new externally visible APIs, I've made a branch which adds virtual methods to PromptTemplateEngine at the point where it computes the values of the blocks which could be used to either record the values computed or overriding the computation with a different value read from a log (and modifies KernelBuilder to make it easier to drop in a different PromptTemplateEngine implementation). These extension points could probably be used for other uses I haven't thought of as well.
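The extension point described above (hooks at the point where block values are computed) can be sketched in miniature; all names here are illustrative, not SK's actual API:

```python
class RecordingTemplateEngine:
    """Wraps block evaluation so each computed value can be recorded or replayed."""

    def __init__(self, replay_log=None):
        self.log = {}                  # records name -> value during a run
        self.replay_log = replay_log   # if set, values come from here instead

    def compute_block(self, name, fn):
        if self.replay_log is not None:   # replay mode: skip the real skill call
            value = self.replay_log[name]
        else:                             # record mode: run the skill and remember
            value = fn()
        self.log[name] = value
        return value

    def render(self, template, skills):
        out = template
        for name, fn in skills.items():
            out = out.replace("{{" + name + "}}", self.compute_block(name, fn))
        return out
```

Constructing the engine with replay_log={"expensiveComputation": "42"} would render the prompt without re-invoking the skill, which is the mocking scenario described above.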

If this solution is acceptable, I can make a PR, but the contributing guidelines requested I make a feature request to discuss first, and I'm not sure my solution is actually the best way to handle this, anyway.

The SSL connection could not be established

Describe the bug
In the Simple Chat Summary App, when I click the save button, /api/skills/funskill/invoke/joke returns UnknownError: Something went wrong: The SSL connection could not be established, see inner exception. Any help will be appreciated!


To Reproduce
Steps to reproduce the behavior:

  1. cd semantic-kernel/samples/dotnet/KernelHttpServer, run func start --csharp --verbose.
  2. cd semantic-kernel/samples/apps/chat-summary-webapp-react, run yarn start.
  3. Open the URL in a browser.
  4. Fill in the OpenAI Key and Model fields, and click save.

Expected behavior
Save success.

Desktop (please complete the following information):

  • OS: ubuntu 20.04

Additional context
when I click save button, log output:

[2023-03-29T07:29:01.470Z] Executing HTTP request: {
[2023-03-29T07:29:01.470Z]   requestId: "96b953fb-ee5d-4fd7-a7cf-7537e3769ab9",
[2023-03-29T07:29:01.470Z]   method: "POST",
[2023-03-29T07:29:01.470Z]   userAgent: "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/111.0.0.0 Safari/537.36",
[2023-03-29T07:29:01.470Z]   uri: "/api/skills/funskill/invoke/joke"
[2023-03-29T07:29:01.470Z] }
[2023-03-29T07:29:01.471Z] Executing 'Functions.InvokeFunction' (Reason='This function was programmatically called via the host APIs.', Id=5e416a89-b20c-403c-91aa-f32e3b9b50d1)
[2023-03-29T07:29:01.474Z] Embedding backend has not been supplied.
[2023-03-29T07:29:03.968Z] Executed 'Functions.InvokeFunction' (Succeeded, Id=5e416a89-b20c-403c-91aa-f32e3b9b50d1, Duration=2497ms)

import_semantic_skill_from_directory is not assigning config

Describe the bug
import_semantic_skill_from_directory is not reassigning the values with the result of the config.from_json(config_file.read()) call, resulting in an empty config.

An easy fix is to change line 45 to

config = config.from_json(config_file.read())
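The underlying pitfall is calling a method that returns a new populated object without assigning the result back. A stripped-down illustration with a hypothetical Config class, not SK's:

```python
import json

class Config:
    def __init__(self, description=""):
        self.description = description

    def from_json(self, text):
        # Returns a NEW populated instance; it does not modify self.
        data = json.loads(text)
        return Config(description=data.get("description", ""))

config = Config()
config.from_json('{"description": "Summarize text"}')   # bug: result discarded
assert config.description == ""                          # config is still empty

config = config.from_json('{"description": "Summarize text"}')  # fix: reassign
assert config.description == "Summarize text"
```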

To Reproduce
Steps to reproduce the behavior:

  1. Set a debugger here in the IDE.
  2. Run the following in Debug mode on IDE
import semantic_kernel as sk
from semantic_kernel.kernel_extensions.import_semantic_skill_from_directory import import_semantic_skill_from_directory

#create a kernel
kernel = sk.KernelBuilder.create_kernel()

#import skills
skills_directory = "../../skills" #path to skills directory
skill = import_semantic_skill_from_directory(kernel, skills_directory, "SummarizeSkill")
  3. Look at the config object. Notice there's no description or default backends.

Expected behavior
config object has info from the json file such as description and default_backends.


Desktop (please complete the following information):

  • OS: Windows
  • IDE: PyCharm

python: prompt_template_config need change

Describe the bug
Using the import_semantic_skill_from_directory(kernel, skill_dir, skill) function with a config that has no stop_sequence in it
will raise an error when the skill is actually used by the kernel.

I'll open a PR; this bug only needs a one-line change. See you at the PR.

To Reproduce
use import_semantic_skill_from_directory(kernel, skill_dir, skill)
and use skill with it.

Expected behavior
Even if the user didn't provide stop_sequence in config.json, the skill should run anyway.
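The expected behavior amounts to reading config.json defensively, with missing optional keys falling back to defaults; a sketch with illustrative key names and defaults:

```python
import json

DEFAULTS = {"stop_sequences": [], "temperature": 0.0, "max_tokens": 256}

def load_completion_config(text):
    """Merge user-provided config.json over defaults so optional keys never KeyError."""
    cfg = dict(DEFAULTS)
    cfg.update(json.loads(text))
    return cfg
```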


Desktop (please complete the following information):

  • OS: Mac M1
  • IDE: VS code
  • python 3.11


[Always] Unable to run sample app [Simple chat summary]: front page error

Describe the bug
[Simple chat summary] Setup page has an error: If you want to write it to the DOM, pass a string instead: filled="true" or filled={value.toString()} (at svg)

To Reproduce
Steps to reproduce the behavior:
Follow the doc:
https://learn.microsoft.com/en-us/semantic-kernel/samples/localapiservice
https://learn.microsoft.com/en-us/semantic-kernel/samples/simplechatsummary
The local API service is up, but when I visit http://localhost:3000, it doesn't work.


Screenshots
Screenshots showed: the local API service running, the setup page before and after save, the dependency versions, and the yarn start command running without error.

Desktop (please complete the following information):

  • OS: windows 10
  • IDE: VS Code


Python: Use Pydantic for configuration

We have a few configuration classes in the Python version, like BackendConfig, OpenAIConfig, AzureOpenAIConfig, etc. We perform operations like loading these configs from .env files using helper methods such as utils.settings::openai_settings_from_dot_env, and we have a class named Verify that validates config values (e.g. checking that a value is not None). Pydantic is a popular open-source library built for exactly these cases: all of the functionality mentioned above is built into it. I propose that we use pydantic for our configuration and replace the Verify class and helper functions like utils.settings::openai_settings_from_dot_env with pydantic models.

What are your thoughts on this?
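As a sketch of the proposal (field names are illustrative, not SK's actual config classes), a pydantic model gives type checking and missing-value errors for free:

```python
from typing import Optional
from pydantic import BaseModel, ValidationError

class OpenAIConfig(BaseModel):
    api_key: str                   # required; pydantic rejects a missing value
    org_id: Optional[str] = None   # optional field
    model: str = "text-davinci-003"

try:
    OpenAIConfig()                 # no api_key: raises, replacing manual Verify calls
except ValidationError as e:
    print("invalid config:", e.errors()[0]["loc"])
```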

"Too many inputs for model None. The max number of inputs is 1" when using AzureTextEmbeddings

Describe the bug

GenerateEmbeddingsAsync is designed to accept multiple text parts to optimize the number of round trips; however, it (today) only works with one text input. When you pass in more, you'll get this error message back from Azure OpenAI:

{
  "error": {
    "message": "Too many inputs for model None. The max number of inputs is 1.  We hope to increase the number of inputs per request soon. Please contact us through an Azure support request at: https://go.microsoft.com/fwlink/?linkid=2213926 for further questions.",
    "type": "invalid_request_error",
    "param": null,
    "code": null
  }
}

To Reproduce

var kernel = Kernel.Builder.Build();

kernel.Config.AddAzureOpenAIEmbeddingsBackend(
    "embedding",                    // config label
    "text-embedding-ada-002",     // Azure OpenAI *Deployment ID*
    "https://contoso.openai.azure.com/", // Azure OpenAI *Endpoint*
    "<Azure OpenAI key>");

BackendConfig embeddingsBackendCfg = kernel.Config.GetEmbeddingsBackend("embedding");
var embeddingGenerator = new AzureTextEmbeddings(
        embeddingsBackendCfg.AzureOpenAI.DeploymentName,
        embeddingsBackendCfg.AzureOpenAI.Endpoint,
        embeddingsBackendCfg.AzureOpenAI.APIKey,
        embeddingsBackendCfg.AzureOpenAI.APIVersion,
        kernel.Log);

var embedding = await embeddingGenerator.GenerateEmbeddingsAsync(
                         new List<string> { "Text 1 to vectorize", "text 2 to vectorize" }
                         );

Expected behavior
Add a GenerateEmbeddingAsync equivalent that takes a single string as input. For now, throw a NotSupportedException when Azure OpenAI is used in the GenerateEmbeddingsAsync method, clarifying that the developer should use the other method for the time being.
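Until the service accepts batches, a client-side workaround is to chunk the input list and issue one request per allowed batch size (currently 1). This sketch uses a stand-in generate function rather than the real client:

```python
import asyncio

async def generate_embeddings_batched(generate, texts, max_batch=1):
    """Call `generate` (the service call, which accepts a list) in chunks of the
    size the backend allows, then concatenate the per-chunk embedding lists."""
    results = []
    for i in range(0, len(texts), max_batch):
        results.extend(await generate(texts[i:i + max_batch]))
    return results

# Stand-in for the real service call, which today rejects len(batch) > 1:
async def fake_generate(batch):
    assert len(batch) == 1, "Azure OpenAI currently allows only one input"
    return [[0.0, 1.0]]  # dummy embedding vector

embeddings = asyncio.run(generate_embeddings_batched(
    fake_generate, ["Text 1 to vectorize", "text 2 to vectorize"]))
```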

Desktop (please complete the following information):

  • OS: Windows
  • IDE: Visual Studio
  • NuGet Package Version: 0.8

python-preview bug fix for `6-memory-and-embeddings`

Describe the bug
Some code needs fixing to run semantic-kernel/samples/notebooks/python/6-memory-and-embeddings.ipynb:
memories = await kernel.memory.search_async(memory_collection_name, ask, limit=5, min_relevance_score=0.77)
returns an empty list because code is missing in semantic-kernel/python/semantic_kernel/memory/semantic_text_memory.py,

and the output-printing parts should be changed.

To Reproduce
run semantic-kernel/samples/notebooks/python/6-memory-and-embeddings.ipynb

Expected behavior
semantic-kernel/samples/notebooks/python/6-memory-and-embeddings.ipynb runs well


Desktop (please complete the following information):

  • OS: Mac M1
  • IDE: VS code


skills are not working in PromptTemplate (python-3.9)

Describe the bug
After learning from the last issue that compatibility with 3.9 was not robust,
I reran the memory skill part and realized that it was not working properly.

This error is silent because PromptTemplateEngine uses the invoke_with_custom_input_async function, which contains try/except code for rendering blocks.

Originally it should show this error:

DelegateHandlers.get_handler(self._delegate_type)
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/Users/joowonkim/Repo/semantic-kernel-jw/semantic-kernel/python/semantic_kernel/orchestration/delegate_handlers.py", line 155, in get_handler
    value.__wrapped__, "_delegate_type"
AttributeError: 'staticmethod' object has no attribute '__wrapped__'

Similar to #168, semantic-kernel/python/semantic_kernel/orchestration/delegate_handlers.py needs a change.

To Reproduce
Run semantic-kernel/python/tests/memory.py.

Expected behavior
PromptTemplateEngine render blocks correctly


Desktop (please complete the following information):

  • OS: Mac M1
  • IDE: VS code
  • Python: 3.9.13


Missing assembly problem

Description
I am seeing a runtime exception in a c# Windows Form app when using the SK library:

{"Could not load file or assembly 'netstandard, Version=2.1.0.0, Culture=neutral, PublicKeyToken=cc7b13ffcd2ddd51' or one of its dependencies. The system cannot find the file specified.":"netstandard, Version=2.1.0.0, Culture=neutral, PublicKeyToken=cc7b13ffcd2ddd51"}

Here is the content of the FusionLog field of the exception:

"=== Pre-bind state information ===\r\nLOG: DisplayName = netstandard, Version=2.1.0.0, Culture=neutral, PublicKeyToken=cc7b13ffcd2ddd51\n (Fully-specified)\r\nLOG: Appbase = file:///C:/ToB/projects/RpcInvestigator/bin/x64/Debug/\r\nLOG: Initial PrivatePath = NULL\r\nCalling assembly : Microsoft.SemanticKernel, Version=0.9.61.1, Culture=neutral, PublicKeyToken=null.\r\n===\r\nLOG: This bind starts in default load context.\r\nLOG: Using application configuration file: C:\\ToB\\projects\\RpcInvestigator\\bin\\x64\\Debug\\RpcInvestigator.exe.Config\r\nLOG: Using host configuration file: \r\nLOG: Using machine configuration file from C:\\Windows\\Microsoft.NET\\Framework64\\v4.0.30319\\config\\machine.config.\r\nLOG: Redirect found in application configuration file: 2.1.0.0 redirected to 2.1.0.0.\r\nLOG: Post-policy reference: netstandard, Version=2.1.0.0, Culture=neutral, PublicKeyToken=cc7b13ffcd2ddd51\r\nLOG: Attempting download of new URL file:///C:/ToB/projects/RpcInvestigator/bin/x64/Debug/netstandard.DLL.\r\nLOG: Attempting download of new URL file:///C:/ToB/projects/RpcInvestigator/bin/x64/Debug/netstandard/netstandard.DLL.\r\nLOG: Attempting download of new URL file:///C:/ToB/projects/RpcInvestigator/bin/x64/Debug/netstandard.EXE.\r\nLOG: Attempting download of new URL file:///C:/ToB/projects/RpcInvestigator/bin/x64/Debug/netstandard/netstandard.EXE.\r\n"

Looking at RpcInvestigator.exe.config, I see an assemblyBinding entry at the bottom that was added somehow by the compilation process (it does not exist in my App.config):

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <startup>
    <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.8.1" />
  </startup>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <assemblyIdentity name="System.Runtime.CompilerServices.Unsafe" publicKeyToken="b03f5f7f11d50a3a" culture="neutral" />
        <bindingRedirect oldVersion="0.0.0.0-7.0.0.0" newVersion="7.0.0.0" />
      </dependentAssembly>
      <dependentAssembly>
        <assemblyIdentity name="System.Threading.Tasks.Extensions" publicKeyToken="cc7b13ffcd2ddd51" culture="neutral" />
        <bindingRedirect oldVersion="0.0.0.0-4.2.0.1" newVersion="4.2.0.1" />
      </dependentAssembly>
      <dependentAssembly>
        <assemblyIdentity name="Microsoft.Bcl.AsyncInterfaces" publicKeyToken="cc7b13ffcd2ddd51" culture="neutral" />
        <bindingRedirect oldVersion="0.0.0.0-8.0.0.0" newVersion="8.0.0.0" />
      </dependentAssembly>
      <dependentAssembly>
        <assemblyIdentity name="System.Memory" publicKeyToken="cc7b13ffcd2ddd51" culture="neutral" />
        <bindingRedirect oldVersion="0.0.0.0-4.0.1.2" newVersion="4.0.1.2" />
      </dependentAssembly>
      <dependentAssembly>
        <assemblyIdentity name="System.Collections.Immutable" publicKeyToken="b03f5f7f11d50a3a" culture="neutral" />
        <bindingRedirect oldVersion="0.0.0.0-8.0.0.0" newVersion="8.0.0.0" />
      </dependentAssembly>
      <dependentAssembly>
        <assemblyIdentity name="System.Threading.Channels" publicKeyToken="cc7b13ffcd2ddd51" culture="neutral" />
        <bindingRedirect oldVersion="0.0.0.0-8.0.0.0" newVersion="8.0.0.0" />
      </dependentAssembly>
      <dependentAssembly>
        <assemblyIdentity name="Google.Protobuf" publicKeyToken="a7d26565bac4d604" culture="neutral" />
        <bindingRedirect oldVersion="0.0.0.0-3.22.1.0" newVersion="3.22.1.0" />
      </dependentAssembly>
      <dependentAssembly>
        <assemblyIdentity name="Microsoft.Extensions.Logging.Abstractions" publicKeyToken="adb9793829ddae60" culture="neutral" />
        <bindingRedirect oldVersion="0.0.0.0-8.0.0.0" newVersion="8.0.0.0" />
      </dependentAssembly>
    </assemblyBinding>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <assemblyIdentity name="netstandard" publicKeyToken="cc7b13ffcd2ddd51" culture="neutral" />
        <bindingRedirect oldVersion="0.0.0.0-2.1.0.0" newVersion="2.1.0.0" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>

Here is my nuget reference in the csproj file:

<Reference Include="Microsoft.SemanticKernel">
      <HintPath>packages\Microsoft.SemanticKernel.0.9.61.1-preview\lib\netstandard2.1\Microsoft.SemanticKernel.dll</HintPath>
    </Reference>

To Reproduce
All I have done is install the SK NuGet package and add some of the sample code to my application. The exception happens every time I launch the app. I can push a branch to the RPC Investigator GitHub project if further repro is needed.

Desktop (please complete the following information):

  • OS: Windows 11
  • IDE: Visual Studio 2022 pro
  • NuGet Package Version 0.9.61.1-preview

Return BadRequest when using gpt-35-turbo model

Describe the bug
Following the same code in the README using the gpt-35-turbo model returned a bad request from Azure OpenAI. The reason is that best_of is not supported by the gpt-35-turbo model.

To Reproduce
Run the code below, using your own Azure OpenAI endpoint and API key:

using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.KernelExtensions;

var kernel = Kernel.Builder.Build();

// For Azure Open AI service endpoint and keys please see
// https://learn.microsoft.com/azure/cognitive-services/openai/quickstart?pivots=rest-api
kernel.Config.AddAzureOpenAICompletionBackend(
    "gpt-35-turbo",                   // Alias used by the kernel
    "gpt-35-turbo",                  // Azure OpenAI *Deployment ID*
    "https://<your openai endpoint>.openai.azure.com/", // Azure OpenAI *Endpoint*
    "...your Azure OpenAI Key..."        // Azure OpenAI *Key*
);

string skPrompt = @"
{{$input}}

Give me the TLDR in 5 words.
";

string textToSummarize = @"
1) A robot may not injure a human being or, through inaction,
allow a human being to come to harm.

2) A robot must obey orders given it by human beings except where
such orders would conflict with the First Law.

3) A robot must protect its own existence as long as such protection
does not conflict with the First or Second Law.
";

var tldrFunction = kernel.CreateSemanticFunction(skPrompt);

var summary = await kernel.RunAsync(textToSummarize, tldrFunction);

Console.WriteLine(summary);

// Output => Protect humans, follow orders, survive.

Expected behavior
Should return 200 status code.

Additional context

{"error":{"code":"BadRequest","message":"logprobs, best_of and echo parameters are not available on gpt-35-turbo model. Please remove the parameter and try again. For more details, see Azure OpenAI Service REST API reference
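A client-side mitigation is to drop the parameters the chat model rejects before sending the request; a sketch, with the parameter names taken from the error message above:

```python
# Parameters Azure OpenAI rejects for gpt-35-turbo, per the BadRequest message.
UNSUPPORTED_CHAT_PARAMS = {"logprobs", "best_of", "echo"}

def sanitize_request(payload, model):
    """Return a copy of the request body with parameters the model rejects removed."""
    if model.startswith("gpt-35-turbo") or model.startswith("gpt-3.5-turbo"):
        return {k: v for k, v in payload.items() if k not in UNSUPPORTED_CHAT_PARAMS}
    return dict(payload)
```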

Important Error Message from Azure OpenAI service is ignored

Describe the bug
In OpenAIClientAbstract.ExecutePostRequestAsync, when I send a large prompt, there is an error message in the response:

{
  "error": {
    "message": "This model's maximum context length is 4097 tokens, however you requested 7326 tokens (3230 in your prompt; 4096 for the completion). Please reduce your prompt; or completion length.",
    "type": "invalid_request_error",
    "param": null,
    "code": null
  }
}

In ExecutePostRequestAsync, you throw new AIException(AIException.ErrorCodes.InvalidRequest, $"The request is not valid, HTTP status: {response.StatusCode:G}"). The important information from error.message is lost; I can only get this message when I debug into the code.

Actually, the error message should be propagated to the outer scope.
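Propagating the service's message means parsing the error body before throwing; a Python sketch of the shape such handling could take (the exception type is hypothetical):

```python
import json

class AIServiceError(Exception):
    """Carries both the HTTP status and the service's own error.message."""
    def __init__(self, status, message):
        super().__init__(f"The request is not valid, HTTP status: {status}: {message}")

def raise_for_error(status, body):
    """Raise with the service's error.message included, falling back to the raw body."""
    if status >= 400:
        try:
            detail = json.loads(body)["error"]["message"]
        except (ValueError, KeyError, TypeError):
            detail = body  # body was not the expected JSON shape
        raise AIServiceError(status, detail)
```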

To Reproduce
Steps to reproduce the behavior:

  1. Give a very large ASK during your sample code.

Expected behavior
The error message from the OpenAI service should be propagated to the outer scope; otherwise users will never be able to find the root cause.

Screenshots
N/A

Desktop (please complete the following information):

  • OS: Windows
  • IDE: Visual Studio
  • NuGet Package Version: 0.8.11.1-preview

Additional context
N/A

Support PDF Document Connector

Add support to read text from an already-OCR'ed PDF file. This would expand the capability of DocumentSkills to read PDF files.

Happy to work on a PR :).

SharePoint List Connector

Add a new SharePoint List Connector. Happy to contribute a PR for this :)

Example feature SharePoint List Connector would unlock:
"Create a SharePoint List of action items with these fields: , , , from the last email I got from xxxx"

Allow custom semantic backend?

Hi folks, currently Semantic Kernel only allows us to use either OpenAI or Azure OpenAI as TextCompletion or Embedding backends. Is it possible for your team to add support for a customized TextCompletion backend? It would be helpful for developers who host their own GPT or AIGC service.

use case

// my customized semantic backend
ITextCompletionClient myGossipTextClientInstance

IBackendConfig myGossipTextClientConfig

// suggested API to add config to kernel
kernel.Config.AddTextCompletionBackend(myGossipTextClientConfig, myGossipTextClientInstance)
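The suggested extension amounts to accepting any implementation of a small interface. In Python terms, a Protocol-style sketch with hypothetical names:

```python
from typing import Protocol

class TextCompletionClient(Protocol):
    def complete(self, prompt: str) -> str: ...

class KernelConfig:
    """Registry that accepts any object satisfying the completion interface."""
    def __init__(self):
        self.backends = {}

    def add_text_completion_backend(self, label, client: TextCompletionClient):
        self.backends[label] = client

class MyGossipTextClient:          # a self-hosted backend, not OpenAI/Azure
    def complete(self, prompt):
        return "gossip: " + prompt

config = KernelConfig()
config.add_text_completion_backend("gossip", MyGossipTextClient())
```

The kernel would then route completion requests through whatever client was registered, regardless of who hosts it.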

Using Azure OpenAI for the GitHub repo sample gives errors

When trying to use Azure OpenAI for the GitHub repo example, I get an error saying the particular model is not available.

To Reproduce
Steps to reproduce the behavior:

  1. Go to the 'Completion Model' tab in the GitHub repo bot example
  2. Click on Azure OpenAI
  3. Enter the Azure OpenAI Key and the Endpoint
  4. It correctly connects and is able to enumerate the model names.
  5. However, selecting a model and clicking Save gives an error.

Expected behavior
The expected behavior is that this works. By the way, this example works fine when using an OpenAI key.

Additional context
It appears to me that somehow it is looking for the wrong name of the model. Also I did check that I am able to use the model successfully from within the Azure OpenAI service itself. I also waited for several hours to make sure that the model was successfully deployed in the Azure OpenAI service.

Can not start kernelhttpserver

Describe the bug
I followed the instructions for running KernelHttpServer but keep receiving this exception:
[2023-03-29T16:01:35.028Z] Found C:\Users\zy418\Documents\Visual Studio 2022\semantic-kernel\samples\dotnet\KernelHttpServer\KernelHttpServer.csproj. Using for user secrets file configuration.
[2023-03-29T16:01:37.317Z] A host error has occurred during startup operation 'a5a93bd4-cb49-4c0f-80b6-f299af6af6fb'.
[2023-03-29T16:01:37.319Z] Microsoft.Azure.WebJobs.Extensions.Http: Could not load file or assembly 'System.Net.Http.Formatting, Version=5.2.8.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35'. The system cannot find the file specified.
Value cannot be null. (Parameter 'provider')

To Reproduce
Just follow this document to start the kernel server.

Expected behavior
Server should run successfully


Desktop (please complete the following information):

  • OS: Windows
  • IDE: VS Code

- func tool version: -


MemoryRecord struct is not public visible. This blocks extending the storage system

I have a large number of text files already embedded; the embedding vectors are stored in a single file (embedding.jsonz). I would like to add these files to the memory system. However, it turns out I could not reuse the SemanticTextMemory class with my own IMemoryStore implementation.
The reason is that SemanticTextMemory expects the result from storage to be a MemoryRecord, which is an internal type; my storage system cannot create this type in my code.
Suggestions:

  1. Make the MemoryRecord class public.
  2. Re-design the memory system. The current memory system is heavily over-engineered, with too many nested generic interfaces. Since the storage system relies heavily on data serialization/deserialization and there are 5 key elements in total (collection, key, content, content-type, embedding), a simple and lightweight data structure would be much better.
    Thanks
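The simple and lightweight data structure asked for could be as small as a dataclass over the five elements listed; a sketch, not the SDK's internal MemoryRecord:

```python
from dataclasses import dataclass, field

@dataclass
class SimpleMemoryRecord:
    collection: str
    key: str
    content: str
    content_type: str = "text"
    embedding: list = field(default_factory=list)   # the embedding vector

# A record built from pre-computed embeddings, as in the scenario above:
record = SimpleMemoryRecord(
    collection="docs", key="file-001",
    content="already-embedded text", embedding=[0.12, -0.98])
```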

delegate_inference raise error in python-preview branch

Describe the bug
While running pytest . in semantic-kernel/python/tests:

@staticmethod
def infer_delegate_type(function) -> DelegateTypes:
    # Get the function signature
    function_signature = signature(function)
    awaitable = iscoroutinefunction(function)

    for name, value in DelegateInference.__dict__.items():
        if name.startswith("infer_") and hasattr(
>               value.__wrapped__, "_delegate_type"
        ):
E           AttributeError: 'staticmethod' object has no attribute '__wrapped__'

../semantic_kernel/orchestration/delegate_inference.py:240: AttributeError
======================================================================================= short test summary info =======================================================================================
FAILED test_text_skill.py::test_can_be_imported - AttributeError: 'staticmethod' object has no attribute '__wrapped__'
FAILED test_text_skill.py::test_can_be_imported_with_name - AttributeError: 'staticmethod' object has no attribute '__wrapped__'
===================================================================================== 2 failed, 9 passed in 0.23s =====================================================================================

To Reproduce
Steps to reproduce the behavior:

  1. Go to semantic-kernel/python/tests
  2. Run `pytest .`

Expected behavior
These tests should not raise an error.

Screenshots
[screenshot]

Desktop (please complete the following information):

  • OS: Mac M1
  • IDE: VS Code
  • Python version: 3.9.13


Improve Planner prompt to support invalid ASK

It appears that the Microsoft.SemanticKernel.CoreSkills.SemanticFunctionConstants.FunctionFlowFunctionDefinition prompt always attempts to generate a plan for a given goal, even if there are no appropriate skills available to fulfill it. This can result in some absurd plans being generated.

To address this, there are two potential options:

  1. Return an error message when there are no suitable skills available to fulfill the goal. This would help prevent the generation of ridiculous plans and provide feedback to the user that the goal cannot be accomplished with the current set of skills.
  2. Leave the plan verification task to the user.

Thanks.

I want to use `gpt-3.5-turbo` for NLP tasks

I sometimes use gpt-3.5-turbo for NLP tasks the same way I use text-davinci-003, because it's cheaper and feels like it performs much better than Curie.

But there are some problems with this. In the current version of python-preview, if I use the chat backend, it forces the template to record the user's conversation. This increases the number of tokens I use, which is costly. Also, if I'm dealing with long texts, it hits the token limit in no time, as in the example below.
[screenshot]

This can be solved by putting a memorization setting in the PromptTemplateConfig class and modifying semantic-kernel/python/semantic_kernel/orchestration/sk_function.py, as shown in the screenshot below.
I didn't open a PR, however, because I'm not sure whether it matches the direction Microsoft is heading.
[screenshot]

In the same vein, I'd like to see ChatCompletion imported via import_semantic_skill_from_directory the way it's done for text-davinci-003. Currently I'm importing skills the following way, and it feels unnatural; please let me know if I'm missing something.

def import_skills(
    kernel: sk.Kernel, skill_dir="./skills"
) -> Dict[str, sk.SKFunctionBase]:
    skills = {}

    for skill in os.listdir(skill_dir):
        if skill.endswith("Skill"):
            s = kernel.import_semantic_skill_from_directory(skill_dir, skill)
            skills[skill] = s

    skills["ChatSkills"] = {}
    skills["ChatSkills"][
        "ExtractInformationList"
    ] = extract_information.build_semantic_chat_function(kernel)

    return skills

I think using skprompt.yaml for the prompt template instead of skprompt.txt would allow for a much freer use of the model.

Allow dots in $variable names and provide a way to override or replace the ContextVariables class

We're exploring integrating SK with the Bot Framework SDK as a way to leverage Large Language Models (LLMs) for the creation of chat bots. SK seems like a perfect fit, but one rough edge is around how SK manages memory. The Bot Framework has a very rich state management system that breaks memory down into scopes. These scopes are extensible, but the 3 default scopes that are relevant here are:

  • conversation - variables are automatically remembered on a per conversation basis so if the bot is being used in 4 different chats there are 4 completely separate sets of variables being remembered.
  • user - variables are automatically remembered for each user across all their conversations. So if there are 10 of us in a chat we each have a private set of user variables that are the same across all 4 chats I'm in with the bot.
  • temp - variables are transient and only remembered for the turn. This more closely matches the current working memory for SK.
  • activity - a pseudo memory scope for referencing properties off the received activity.

To create the best developer experience for our community, it would be great if developers could just reference these scoped bags of state directly from the prompt; for example, {{$user.name}} could reference the name of the current user, {{$activity.text}} could reference the text of the user message the bot received, and {{$conversation.workItems}} could reference a list of work items being tracked by the bot.

The alternative is to make the developer manually copy every bot state variable they wish to reference in their prompts into working memory (ContextVariables) before they call into SK. That would obviously work, but it is a less than ideal developer experience.

To make SK "Bot Framework State Aware" I think 2 features would be needed:

  1. We would need to relax the requirement to not include dots in $variable names.
  2. We need some way of either replacing the ContextVariables class used by SK (make it an interface and include a clone() method) or letting us inherit from it and override the property getters and setters.

To go along with this on the Bot Framework side of things I would propose we map flat references like {{$history}} to our temp scope so we would turn that into {{$temp.history}} in our implementation of ContextVariables.
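A minimal sketch of what dotted variable resolution could look like (the `ScopedVariables` class below is hypothetical, not part of SK or the Bot Framework):

```python
class ScopedVariables:
    """Resolve names like 'user.name' against named scopes,
    with bare names falling back to a 'temp' scope."""

    def __init__(self, scopes: dict):
        self._scopes = scopes

    def get(self, name: str) -> str:
        # Split 'scope.key' at the first dot; bare names map to the temp scope.
        scope, sep, key = name.partition(".")
        if not sep:
            scope, key = "temp", name
        return self._scopes.get(scope, {}).get(key, "")

variables = ScopedVariables({
    "user": {"name": "Ada"},
    "conversation": {"workItems": "3 open"},
    "temp": {"history": "..."},
})
print(variables.get("user.name"))              # Ada
print(variables.get("conversation.workItems"))  # 3 open
```

Flat references like `history` resolve through the temp scope, matching the proposed mapping of `{{$history}}` to `{{$temp.history}}`.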

Possible Divide By Zero in volatile_memory_store.py

similarity_scores = (
    embedding.dot(embedding_array.T)
    / (linalg.norm(embedding) * linalg.norm(embedding_array, axis=1))
)[0]

Possible divide by zero if linalg.norm returns 0.

Possible fix:

        # Calculate the L2 norm (Euclidean norm) of the query embedding
        query_norm = linalg.norm(embedding)
        
        # Calculate the L2 norms of each embedding in the collection
        collection_norms = linalg.norm(embedding_array, axis=1)
        
        # Identify valid indices where both the query norm and collection norms are non-zero
        # This step helps to avoid division by zero issues when calculating cosine similarity
        valid_indices = (query_norm != 0) & (collection_norms != 0)
        
        # Initialize an array to store similarity scores, setting them to 0.0 by default
        similarity_scores = array([0.0] * len(embedding_collection))

        # If there are any valid indices (i.e., both query and collection norms are non-zero),
        # calculate the cosine similarity between the query embedding and the collection embeddings
        if valid_indices.any():
            similarity_scores[valid_indices] = (
                # Calculate the dot product between the query embedding and valid collection embeddings
                embedding.dot(embedding_array[valid_indices].T)
                # Normalize the dot product by multiplying the query norm and valid collection norms
                / (query_norm * collection_norms[valid_indices])
            )[0]
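As a quick, standalone sanity check of the guarded approach above (the `cosine_similarities` helper is ours, for illustration only):

```python
import numpy as np

def cosine_similarities(query: np.ndarray, collection: np.ndarray) -> np.ndarray:
    """Cosine similarity of `query` against each row of `collection`,
    returning 0.0 wherever either norm is zero instead of dividing by zero."""
    query_norm = np.linalg.norm(query)
    collection_norms = np.linalg.norm(collection, axis=1)
    scores = np.zeros(len(collection))
    # Only compute similarity where both norms are non-zero.
    valid = (query_norm != 0) & (collection_norms != 0)
    if valid.any():
        scores[valid] = (
            collection[valid] @ query / (query_norm * collection_norms[valid])
        )
    return scores

# The second row is a zero vector; its score stays 0.0 rather than NaN.
embeddings = np.array([[1.0, 0.0], [0.0, 0.0], [0.5, 0.5]])
print(cosine_similarities(np.array([1.0, 1.0]), embeddings))
```

With the guard in place, zero-norm entries simply score 0.0 and never enter the division.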

Request to Return Additional Information in CompleteAsync Method and Store it in ContextVariables

Currently, the CompleteAsync method only returns a response string. I would like to request that additional information such as status codes be returned as well.

I suggest that the CompleteAsync method in ITextCompletion should return some type T instead of a string to allow for this. This would enable us to gather more information about the response and make it easier to handle different scenarios.

Furthermore, I would like to propose that the ContextVariables class be updated to use a ConcurrentDictionary<string, object> instead of ConcurrentDictionary<string, string>. This would allow for the storage of variables of different types and make the class more flexible.

Please let me know if these changes are feasible and if there are any concerns or suggestions for improvement. Thank you for your time and consideration.

func start --csharp from Sample doesn't work

Describe the bug

PS C:\git\semantic-kernel\samples\dotnet\KernelHttpServer> func start --csharp

MSBuild version 17.5.0+6f08c67f3 for .NET
C:\git\semantic-kernel\dotnet\Directory.Build.targets : error : Could not resolve SDK "Microsoft.Build.CentralPackageVersions". Exactly one of the probing messages below indicates why we could not resolve the SDK. Investigate and resolve that message to correctly specify the SDK.
C:\git\semantic-kernel\dotnet\Directory.Build.targets : error : SDK resolver "Microsoft.DotNet.MSBuildWorkloadSdkResolver" returned null.
C:\git\semantic-kernel\dotnet\Directory.Build.targets : error : Unable to find package Microsoft.Build.CentralPackageVersions. No packages exist with this id in source(s): Microsoft Visual Studio Offline Packages
C:\git\semantic-kernel\dotnet\Directory.Build.targets : error : MSB4276: The default SDK resolver failed to resolve SDK "Microsoft.Build.CentralPackageVersions" because directory "C:\Program Files\dotnet\sdk\7.0.202\Sdks\Microsoft.Build.CentralPackageVersions\Sdk" did not exist.
C:\git\semantic-kernel\dotnet\Directory.Build.targets : error MSB4236: The SDK 'Microsoft.Build.CentralPackageVersions/2.1.3' specified could not be found. [C:\git\semantic-kernel\samples\dotnet\KernelHttpServer\KernelHttpServer.csproj]

Expected behavior
The function starts locally.

Desktop (please complete the following information):

  • OS: Windows 11
  • IDE: VS CODE

[Python] APIConnectionError with OpenAI API key on Windows

Describe the bug
I ran into this APIConnectionError when I tried to run the kick-off program on the first page of the Python version of semantic-kernel. Below is the error message:
Output: Error: ('ServiceError: OpenAI service failed to complete the prompt', APIConnectionError(message='Error communicating with OpenAI', http_status=None, request_id=None))

To Reproduce
Steps to reproduce the behavior:

  1. Install requirements.txt in the python folder
  2. Create test.py and paste the program into it, with my own OpenAI API key.
  3. Run python test.py
  4. See error

Expected behavior
The program connects to OpenAI and returns a text string as suggested.

Screenshots

import sys
sys.path.append("./python")  # nopep8
import semantic_kernel as sk
import asyncio

kernel = sk.create_kernel()

api_key = "my_api_key"
org_id = "my_org_id"
kernel.config.add_openai_completion_backend(
    "davinci-002", "text-davinci-002", api_key, org_id
)

sk_prompt = """
{{$input}}
Give me the TLDR in 5 words.
"""

text_to_summarize = """
    1) A robot may not injure a human being or, through inaction,
    allow a human being to come to harm.
    2) A robot must obey orders given it by human beings except where
    such orders would conflict with the First Law.
    3) A robot must protect its own existence as long as such protection
    does not conflict with the First or Second Law.
"""

tldr_function = sk.extensions.create_semantic_function(
    kernel,
    sk_prompt,
    max_tokens=200,
    temperature=0,
    top_p=0.5,
)

summary = asyncio.run(kernel.run_on_str_async(text_to_summarize, tldr_function))
print(f"Output: {summary}")

Desktop (please complete the following information):

  • OS: Windows
  • IDE: Anaconda Powershell

Additional context

  • Version of Python: tested both 3.9.16 and 3.10.10
  • Versions of selected packages:
    • numpy 1.24.2
    • openai 0.27.2
    • aiofiles 23.1.0
    • aiohttp 3.8.4

Errors in native functions not bubbling up via the SKContext

Describe the bug

Imagine I do have a semantic skill (named QuestionAnswer) using a template like this

Question: {{$INPUT}}

<Input>
{{MySkill1.GetQuestionContext $INPUT}}
{{MySkill2.GetQuestionContext $INPUT}}
</Input>

and I'm using this semantic skill like this:

var output = await kernel.RunAsync(myContext, qaSkill["QuestionAnswer"]);
if (!output.ErrorOccurred)
{
    Console.WriteLine(output);
}
else
{
    Console.WriteLine($"Error: {output.LastErrorDescription}");
}

When an error happens in one of the native functions, it does not bubble up via the SKContext instance. Today the error is only logged
(https://github.com/microsoft/semantic-kernel/blob/main/dotnet/src/SemanticKernel/TemplateEngine/Blocks/CodeBlock.cs#L134-L140), but it would be beneficial for applications using SK to understand that something went wrong.

To Reproduce

  1. Create two simple native skills and throw exception in them
  2. Use them in a simple template and test

Expected behavior
The outputted SKContext should show that an error has happened, and LastErrorDescription should show the error text. It's fine to keep completing the template, but the app using SK has to know, as the net result is that a different prompt is most likely created, generating unexpected output.
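The requested behavior can be sketched in a few lines (the `Context` class below is a hypothetical stand-in for SKContext, not SK's API):

```python
class Context:
    """Minimal stand-in for an execution context that records errors
    instead of only logging them."""
    def __init__(self):
        self.error_occurred = False
        self.last_error_description = ""

def render_block(func, context):
    """Run one template block; on failure, surface the error via the context."""
    try:
        return func()
    except Exception as exc:
        context.error_occurred = True
        context.last_error_description = f"{type(exc).__name__}: {exc}"
        return ""  # keep completing the template, but the caller can now check

ctx = Context()
result = render_block(lambda: 1 / 0, ctx)
print(ctx.error_occurred, ctx.last_error_description)
```

The template keeps rendering, but the caller can inspect the context afterwards instead of silently receiving a mutated prompt.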

Allow Custom httpClientHandler

Context

I deployed my own LLM models on Azure OpenAI; the endpoint uses a self-signed certificate.

Issues Description

When I send a completion request to my self-signed endpoint, Semantic Kernel returns an error message:

Error: The SSL connection could not be established, see inner exception.

I tried to trace the inner exception in debugging mode and got:

The remote certificate is invalid because of errors in the certificate chain: RevocationStatusUnknown

Related Code

In OpenAIClientAbstract.cs:

    internal OpenAIClientAbstract(ILogger? log = null, IDelegatingHandlerFactory? handlerFactory = null)
    {
        this.Log = log ?? this.Log;
        this._handlerFactory = handlerFactory ?? new DefaultHttpRetryHandlerFactory();

        this._httpClientHandler = new() { CheckCertificateRevocationList = true };
        this._retryHandler = this._handlerFactory.Create(this.Log);
        this._retryHandler.InnerHandler = this._httpClientHandler;

        this.HTTPClient = new HttpClient(this._retryHandler);
        this.HTTPClient.DefaultRequestHeaders.Add("User-Agent", HTTPUseragent);
    }

The line this._httpClientHandler = new() { CheckCertificateRevocationList = true }; means the default httpClientHandler is set to always check the certificate revocation list.

Feature Request

Allow passing a customized httpClientHandler, either from _handlerFactory or from kernelBuilder.

Chat completion support

I've been digging through the IKernel and function abstractions hoping to find a way to enable the gpt-3.5-turbo APIs (chat completion) and, more recently, the GPT-4 APIs, but given that ITextCompletion only takes a string as input, I haven't found a way to reasonably change the bits to enable the new behavior.

'KernelConfig' does not contain a definition for 'AddAzureOpenAICompletionBackend'

Getting this error when trying to run the notebooks in VS Code.

Error: (14,19): error CS1061: 'KernelConfig' does not contain a definition for 'AddAzureOpenAICompletionBackend' and no accessible extension method 'AddAzureOpenAICompletionBackend' accepting a first argument of type 'KernelConfig' could be found (are you missing a using directive or an assembly reference?)
(16,19): error CS1061: 'KernelConfig' does not contain a definition for 'AddOpenAICompletionBackend' and no accessible extension method 'AddOpenAICompletionBackend' accepting a first argument of type 'KernelConfig' could be found (are you missing a using directive or an assembly reference?)

The notebooks were running earlier, but suddenly they started throwing this error; I'm not sure what I'm missing. Please see the screenshot below.

[screenshot]

Using WebSearchEngineSkill in Planner not working

Describe the bug
I added the WebSearchEngineSkill to the planner (including it as a native skill). The plan comes out correctly; however, when executing the plan, the following error occurs:
An error occurred when attempting to access a disposed object named 'System.Net.Http.HttpClient'

To Reproduce
Steps to reproduce the behavior:

  1. Add the following to the RegisterNativeSkills
    if (ShouldLoad(nameof(WebSearchEngineSkill), skillsToLoad))
    {
        using BingConnector connector = new("BING_API_KEY");
        var webSearchSkill = new WebSearchEngineSkill(connector);
        _ = kernel.ImportSkill(webSearchSkill, nameof(WebSearchEngineSkill));
    }
  2. Create a plan with the following body
    { "value": "What's the tallest building in Europe?", "skills": ["WebSearchEngineSkill", "FunSkill", "summarizeskill"] }
  3. Invoke the plan

Expected behavior
The search result should be displayed.

Rank `skills` for `planner` to select appropriate skills

There are already a couple of similar (if not duplicated) skills for certain tasks, and as we move forward there are going to be more and more duplicated skills. This results in inconsistent performance.

I wonder if we can somehow enable the planner to score and keep updating how well a skill performs for a certain task, so that the next time it selects a skill, it knows which is the best to choose.
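One possible shape for this (a hypothetical `SkillScoreboard`, not an SK API) is a running average score per (task, skill) pair:

```python
from collections import defaultdict

class SkillScoreboard:
    """Track a running-average success score per (task, skill) pair so a
    planner could prefer the best-scoring skill next time."""
    def __init__(self):
        self._totals = defaultdict(float)
        self._counts = defaultdict(int)

    def record(self, task: str, skill: str, score: float) -> None:
        self._totals[(task, skill)] += score
        self._counts[(task, skill)] += 1

    def best_skill(self, task: str, skills: list) -> str:
        def average(skill: str) -> float:
            n = self._counts[(task, skill)]
            return self._totals[(task, skill)] / n if n else 0.0
        return max(skills, key=average)

board = SkillScoreboard()
board.record("translate", "TranslateV2", 0.2)
board.record("translate", "Translate", 0.9)
print(board.best_skill("translate", ["Translate", "TranslateV2"]))  # Translate
```

How the scores get produced (user feedback, automatic evaluation) is the harder open question.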

Consider Checking for Malformed {{ and }} brackets and Give Nice Error in prompt_template_engine.py

    # Update the end of the last block
    end_of_last_block = cursor + 1
    start_found = False
    # Move the cursor forward
    cursor += 1
    # If there is plain text after the last block, capture that as a text block
    if end_of_last_block < len(template):
        blocks.append(
            TextBlock(template[end_of_last_block : len(template)], self._log)
        )

Consider adding a check for unmatched '{{' and '}}' to give a nice error clue:

Possible solution if leading }} is valid:

    while cursor < len(template):
        # ... (previous code)

        # When '{{' is found
        if _get_char() == STARTER and _get_char(1) == STARTER:
            if start_found:
                raise TemplateException(
                    TemplateException.ErrorCodes.SyntaxError,
                    "Unmatched '{{' and '}}' brackets found in the template."
                )

            start_pos = cursor
            start_found = True

Or catch any mismatch:

    open_bracket_count = 0

    while cursor < len(template):

        # When '{{' is found
        if _get_char() == STARTER and _get_char(1) == STARTER:

            open_bracket_count += 1
        # When '}}' is found
        elif _get_char() == ENDER and _get_char(1) == ENDER:

            open_bracket_count -= 1

            if start_found:
                 # Logic

        # Move the cursor forward
        cursor += 1

    # Check for unmatched brackets
    if open_bracket_count != 0:
        raise TemplateException(
            TemplateException.ErrorCodes.SyntaxError,
            "Unmatched '{{' and '}}' brackets found in the template."
        )
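A standalone sketch of the mismatch check (the `check_brackets` helper below is hypothetical and simplified, scanning a plain string rather than using the engine's `_get_char`):

```python
STARTER, ENDER = "{", "}"

def check_brackets(template: str) -> None:
    """Raise ValueError if the '{{' and '}}' pairs in `template` are unbalanced."""
    open_bracket_count = 0
    cursor = 0
    while cursor < len(template) - 1:
        pair = template[cursor : cursor + 2]
        if pair == STARTER * 2:
            open_bracket_count += 1
            cursor += 2
        elif pair == ENDER * 2:
            open_bracket_count -= 1
            if open_bracket_count < 0:
                # A '}}' appeared before any matching '{{'.
                raise ValueError("Unmatched '}}' found in the template.")
            cursor += 2
        else:
            cursor += 1
    if open_bracket_count != 0:
        raise ValueError("Unmatched '{{' and '}}' brackets found in the template.")

check_brackets("Hello {{$name}}")  # balanced: no error
```

Running this check up front lets the engine fail with a clear syntax error instead of silently producing a malformed block list.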

Implement Planner Skill In Python

Feature Request:

Implement the planner skill. @alexchaomander said it wasn't being worked on internally, so I've started on it. Opening this as a tracker for semantic kernel planning skill implementation in the Python branch. I've been working on it and hoping to get a PR out soon, but wanted to make sure I was on the same page stylistically with the team.

Scope:

Current scope is planning skill feature parity with the main branch (although maybe a few commits behind)...

Azure OpenAI with GitHub repo sample show 500 when retrieving models

Describe the bug

When trying the example, if using Azure OpenAI endpoint it will return 500 when retrieving models.

Expected behavior

  • It should handle the 500 error and show a clear message to the user instead of failing while parsing the JSON.
  • I would want to know if there is an issue with my Azure OpenAI resource or how to solve it.

Screenshots

[screenshot]

Desktop (please complete the following information):

  • OS: Windows WSL Linux (Bash)

Additional context

  • Azure OpenAI pricing tier: Standard, created a few days ago in East US.
  • I have deployed 3 models, including davinci-003

More context in the next message of the discussion.

Request: TypeScript support (~C# parity)

Hi - we have a project using Next.JS and would love to leverage SK on the backend.
A full TypeScript port (ideally 1:1 parity with C#) would be truly valuable and appreciated.

As part of this, an NPM distribution of the TS SDK (i.e., https://registry.npmjs.org).

Happy to discuss/explain the scenarios more.
Thanks!

Add Support for Liquid Template Engine in Skills

Summary

The purpose of this feature proposition is to introduce support for the Liquid template engine in the Semantic Kernel framework, allowing developers to create dynamic and reusable prompts for skills. This will enhance the flexibility and maintainability of skills and make it easier for developers to create complex, context-aware AI capabilities.

Background

Currently, the Semantic Kernel framework allows developers to build skills using LLM AI prompts, native computer code, or a hybrid of both. However, constructing and maintaining dynamic prompts can become cumbersome and challenging as the complexity of skills increases. The Liquid template engine, created by Shopify, offers a powerful and flexible solution to this problem, enabling developers to create dynamic templates with control flow, iteration, and variable interpolation.

Proposed Solution

Integrate the Liquid template engine into the Semantic Kernel framework. There is an open source library called Fluid that can parse templates and render the output. This could be integrated into the Kernel natively, so that any skill could use Liquid templates.

Benefits

  • Flexibility: The Liquid template engine allows developers to create more dynamic and context-aware prompts by supporting conditional rendering, loops, and variable interpolation.
  • Maintainability: By using templates, developers can reduce code duplication and make it easier to update and manage prompts as requirements change.
  • Reusability: Liquid templates encourage the creation of reusable prompts that can be shared across different skills or skill versions. As more ChatGPT like "intelligences" come online, this would also make these somewhat "cross-platform".

I'm willing to take the first shot at this after some discussion!

I'm also aware that this has the potential to be a single Skill that other skills can put in the pipeline, would love to discuss the pros and cons of each approach (making this a skill vs integrating it into the Kernel).

Poetry Install crashes for python-preview branch.

Describe the bug
The poetry install command fails at debugpy installation. Possibly due to hashing error on 'debugpy/_vendored/pydevd/pydevd_attach_to_process/run_code_on_dllmain_x86.dll'.

To Reproduce
Steps to reproduce the behavior:

  1. Checkout python-preview branch: git checkout python-preview
  2. Change directory to python: cd python
  3. Install Poetry: pip install poetry
  4. Run Poetry installation: poetry install
  5. It installs most packages but stops and fails at debugpy.

Expected behavior
I was walking through the README.md tutorial for python setup in the branch. Expected the install to go through nicely.

Log
This is the log after a second install attempt.

Package operations: 7 installs, 0 updates, 0 removals

  • Installing debugpy (1.6.6)

  _WheelFileValidationError

  ["In C:\\Users\\peter\\AppData\\Local\\pypoetry\\Cache\\artifacts\\f3\\fe\\7e\\ae79971a5d18da24266b2396f07920898302fe5605d3467a4c20e44be7\\debugpy-1.6.6-cp310-cp310-win_amd64.whl, hash / size of debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_cython.cp310-win_amd64.pyd didn't match RECORD", "In C:\\Users\\peter\\AppData\\Local\\pypoetry\\Cache\\artifacts\\f3\\fe\\7e\\ae79971a5d18da24266b2396f07920898302fe5605d3467a4c20e44be7\\debugpy-1.6.6-cp310-cp310-win_amd64.whl, hash / size of debugpy/_vendored/pydevd/_pydevd_frame_eval/pydevd_frame_evaluator.cp310-win_amd64.pyd didn't match RECORD", "In C:\\Users\\peter\\AppData\\Local\\pypoetry\\Cache\\artifacts\\f3\\fe\\7e\\ae79971a5d18da24266b2396f07920898302fe5605d3467a4c20e44be7\\debugpy-1.6.6-cp310-cp310-win_amd64.whl, hash / size of debugpy/_vendored/pydevd/pydevd_attach_to_process/attach_amd64.dll didn't match RECORD", "In C:\\Users\\peter\\AppData\\Local\\pypoetry\\Cache\\artifacts\\f3\\fe\\7e\\ae79971a5d18da24266b2396f07920898302fe5605d3467a4c20e44be7\\debugpy-1.6.6-cp310-cp310-win_amd64.whl, hash / size of debugpy/_vendored/pydevd/pydevd_attach_to_process/attach_x86.dll didn't match RECORD", "In C:\\Users\\peter\\AppData\\Local\\pypoetry\\Cache\\artifacts\\f3\\fe\\7e\\ae79971a5d18da24266b2396f07920898302fe5605d3467a4c20e44be7\\debugpy-1.6.6-cp310-cp310-win_amd64.whl, hash / size of debugpy/_vendored/pydevd/pydevd_attach_to_process/inject_dll_amd64.exe didn't match RECORD", "In C:\\Users\\peter\\AppData\\Local\\pypoetry\\Cache\\artifacts\\f3\\fe\\7e\\ae79971a5d18da24266b2396f07920898302fe5605d3467a4c20e44be7\\debugpy-1.6.6-cp310-cp310-win_amd64.whl, hash / size of debugpy/_vendored/pydevd/pydevd_attach_to_process/inject_dll_x86.exe didn't match RECORD", "In C:\\Users\\peter\\AppData\\Local\\pypoetry\\Cache\\artifacts\\f3\\fe\\7e\\ae79971a5d18da24266b2396f07920898302fe5605d3467a4c20e44be7\\debugpy-1.6.6-cp310-cp310-win_amd64.whl, hash / size of 
debugpy/_vendored/pydevd/pydevd_attach_to_process/run_code_on_dllmain_amd64.dll didn't match RECORD", "In C:\\Users\\peter\\AppData\\Local\\pypoetry\\Cache\\artifacts\\f3\\fe\\7e\\ae79971a5d18da24266b2396f07920898302fe5605d3467a4c20e44be7\\debugpy-1.6.6-cp310-cp310-win_amd64.whl, hash / size of debugpy/_vendored/pydevd/pydevd_attach_to_process/run_code_on_dllmain_x86.dll didn't match RECORD"]

  at ~\AppData\Local\Programs\Python\Python310\lib\site-packages\installer\sources.py:289 in validate_record
      285│                         f"In {self._zipfile.filename}, hash / size of {item.filename} didn't match RECORD"
      286│                     )
      287│
      288│         if issues:
    → 289│             raise _WheelFileValidationError(issues)
      290│
      291│     def get_contents(self) -> Iterator[WheelContentElement]:
      292│         """Sequential access to all contents of the wheel (including dist-info files).
      293│

Desktop (please complete the following information):

  • OS [dxdiag]: Windows 10 Pro 64-bit Build 19044
  • Python: Python 3.10.6
  • Pip: 23.0.1
  • VSCode:
    • 1.76.2
    • ee2b180d582a7f601fa6ecfdad8d9fd269ab1884
    • x64

Unable to run `2-running-prompts-from-file.ipynb`

Describe the bug
Following the sample notebooks, I was unable to execute the joke skill in 2-running-prompts-from-file.ipynb, as it results in a BadRequest error.

To Reproduce
Steps to reproduce the behavior:

  1. Follow 0-AI-settings.ipynb to setup credentials. Choose Azure OpenAI.
  2. Execute 2-running-prompts-from-file.ipynb examples

Expected behavior
Should be able to execute the last example which generates a joke from AI

Screenshots
[screenshot]
Settings generated:
[screenshot]

Desktop (please complete the following information):

  • OS: MacOS Ventura 13.2.1
  • IDE: VS Code
  • NuGet Package Version: Microsoft.SemanticKernel, 0.8.48.1-preview

GitHub Repo Q&A Bot - FunctionNotFound: Function `textmemoryskill.recall` not found

Describe the bug

GitHub Repo Q&A Bot Sample app returns following error:

Something went wrong. Please try again.
Details: {Bad Request => FunctionNotFound: Function textmemoryskill.recall not found}

To Reproduce
Steps to reproduce the behavior:

  1. Start the API function
  2. Start github-qna-webapp-react

In the application:

  3. Provide Git repo details
  4. Ask a question in the bot window

Expected behavior
Bot responds without error

Screenshots

[screenshot]

Desktop (please complete the following information):

  • OS: Windows11
  • IDE: VS Code

Planner - Allow manually specifying context variables

If you create and execute a plan as follows (essentially what is in 5-using-the-planner.ipynb):

SKContext plan = await _kernel.RunAsync(ask, _planner["CreatePlan"]);
SKContext executionResults = plan;
int step = 1;
int maxSteps = 5;
while (!executionResults.Variables.ToPlan().IsComplete && step < maxSteps)
{
    executionResults.Variables.Set("foo", "bar");
    SKContext results = await _kernel.RunAsync(executionResults.Variables, _planner["ExecutePlan"]);
    ...
    executionResults = results;
    step++;
}

And the function that gets executed as part of that plan looks like this:

[SKFunction("A description of Baz")]
[SKFunctionInput(Description = "Input to Baz")]
[SKFunctionContextParameter(Name = "foo", Description = "A description of what foo does")]
public async Task<string> Baz(string input, SKContext context)

The only way foo is available in the passed context is if it was defined in the plan XML, something like

<function.Test.Baz foo="bar" />

The call executionResults.Variables.Set("foo", "bar"); does not result in foo being available in Baz.

I believe this is because FunctionFlowRunner executes the skill using a new set of variables built from the attributes (looking at this line: var result = await this._kernel.RunAsync(functionVariables, skillFunction); and tracing back to where functionVariables is built).

Allowing a developer to specify variables to persist would be useful for cases where there is data the developer doesn't want to expose to injection risk. As an example, it could be a user session identifier.
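The fix being asked for could be as small as merging the plan-step attribute variables with developer-specified persistent variables before each step runs; a sketch (the function name and merge semantics are hypothetical):

```python
def build_step_variables(attribute_vars: dict, persistent_vars: dict) -> dict:
    """Merge per-step variables parsed from the plan XML attributes with
    variables the developer asked to persist across steps.
    Attributes win on conflict, so the plan can still override."""
    merged = dict(persistent_vars)
    merged.update(attribute_vars)
    return merged

step_vars = build_step_variables(
    attribute_vars={"input": "John Doe"},
    persistent_vars={"foo": "bar", "sessionId": "abc-123"},
)
print(step_vars)  # {'foo': 'bar', 'sessionId': 'abc-123', 'input': 'John Doe'}
```

With such a merge, `executionResults.Variables.Set("foo", "bar")` would survive into the variables passed to each plan step.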

`Planner` selected a wrong skill when there are two similar skills within a `skills` folder

Describe the bug
When I run Example12_Planning.cs, the planner failed to complete. Below is the console log:

======== Planning - Create and Execute Poetry Plan ========
Original plan:
<goal>
Write a poem about John Doe, then translate it into Italian.
</goal>
<plan>
  <function.WriterSkill.ShortPoem input="John Doe" />
  <function.WriterSkill.TranslateV2 input="$SHORT_POEM" language="Italian" setContextVariable="TRANSLATED_POEM" />
</plan>
Step 1 - Execution results:
<goal>
Write a poem about John Doe, then translate it into Italian.
</goal><plan>
  <function.WriterSkill.TranslateV2 input="$SHORT_POEM" language="Italian" setContextVariable="TRANSLATED_POEM" />
</plan>
Step 2 - Execution results:
<goal>
Write a poem about John Doe, then translate it into Italian.
</goal><plan>
</plan>
Step 2 - COMPLETE!
Error: InvalidRequest: The request is not valid, HTTP status: BadRequest
Execution complete in 6437 ms!
fail: object[0]
      Function call fail during pipeline step 0: WriterSkill.TranslateV2. Error: InvalidRequest: The request is not valid, HTTP status: BadRequest

It appears to me that the prompt here is to translate the generated poem into Italian, but because the planner chose WriterSkill.TranslateV2 (which is the skill to translate from any language to English), it failed.

Expected behavior
The planner should select the WriterSkill.Translate skill instead of WriterSkill.TranslateV2 and execute successfully as follows:

======== Planning - Create and Execute Poetry Plan ========
Original plan:
<goal>
Write a poem about John Doe, then translate it into Italian.
</goal>
<plan>
  <function.WriterSkill.ShortPoem input="John Doe is a kind and generous man who loves to help others and make them smile."/>
  <function.WriterSkill.Translate language="Italian"/>
</plan>
Step 1 - Execution results:
<goal>
Write a poem about John Doe, then translate it into Italian.
</goal><plan>
  <function.WriterSkill.Translate language="Italian" />
</plan>
Step 2 - Execution results:
<goal>
Write a poem about John Doe, then translate it into Italian.
</goal><plan>
</plan>
Step 2 - COMPLETE!
John Doe è un uomo di grande cuore e grazia
Ha sempre un sorriso sul volto
Aiuta i poveri, i malati e i vecchi
Ma a volte la sua bontà lo mette nei guai
Come quando ha dato il suo cappotto a un tremante

Screenshots
[screenshot]

Desktop (please complete the following information):

  • OS: Ubuntu
  • IDE: VSCode

Internationalization support

I made a few tests, rewriting the sample skills in Brazilian Portuguese, but the results were not very good.
I believe the issue might be that the CoreSkills are all written in English.

There should be a way, when creating the Kernel, to specify the language the semantic core is to use, and it should switch the core skills to that language.

It should also be possible to provide alternative wordings for the core skills, to allow unsupported languages to be used.
