aws-samples / amazon-bedrock-workshop

This is a workshop designed for Amazon Bedrock, a foundation model service.

Home Page: https://catalog.us-east-1.prod.workshops.aws/workshops/a4bdb007-5600-4368-81c5-ff5b4154f518/en-US/20-intro

License: MIT No Attribution

Languages: Jupyter Notebook 96.47%, Python 3.53%

amazon-bedrock-workshop's Introduction

Amazon Bedrock Workshop

This hands-on workshop, aimed at developers and solution builders, introduces how to leverage foundation models (FMs) through Amazon Bedrock.

Amazon Bedrock is a fully managed service that provides API access to foundation models from Amazon and third-party providers. With Bedrock, you can choose from a variety of models to find the one best suited for your use case.

Within this series of labs, you'll explore some of the most common generative AI usage patterns we see with our customers. We will show techniques for generating text and images that create value for organizations by improving productivity, leveraging foundation models to help compose emails, summarize text, answer questions, build chatbots, and create images. While the focus of this workshop is gaining hands-on experience implementing these patterns via the Bedrock APIs and SDKs, you will also have the option of exploring integrations with open-source packages like LangChain and FAISS.

Labs include:

  • 01 - Text Generation [Estimated time to complete - 45 mins]
    • Text generation with Bedrock
    • Text summarization with Titan and Claude
    • QnA with Titan
    • Entity extraction
  • 02 - Knowledge bases and RAG [Estimated time to complete - 45 mins]
    • Managed RAG retrieve and generate example
    • Langchain RAG retrieve and generate example
  • 03 - Model customization [Estimated time to complete - 30 mins]
    • Coming soon
  • 04 - Image and Multimodal [Estimated time to complete - 30 mins]
    • Bedrock Titan image generator
    • Bedrock Stable Diffusion XL
    • Bedrock Titan Multimodal embeddings
  • 05 - Agents [Estimated time to complete - 30 mins]
    • Customer service agent
    • Insurance claims agent
  • 06 - Open source examples (optional) [Estimated time to complete - 30 mins]
    • Langchain Text Generation examples
    • Langchain KB RAG examples
    • Langchain Chatbot examples
    • NVIDIA NeMo Guardrails examples
    • NodeJS Bedrock examples

(Workshop overview diagram: imgs/11-overview)

You can also refer to the step-by-step guided instructions on the workshop website.

Getting started

Choose a notebook environment

This workshop is presented as a series of Python notebooks, which you can run from the environment of your choice, for example SageMaker Studio or a self-managed notebook on your local machine.

Enable AWS IAM permissions for Bedrock

The AWS identity you assume from your notebook environment (the Studio/notebook execution role from SageMaker, or a role or IAM user for self-managed notebooks) must have sufficient AWS IAM permissions to call the Amazon Bedrock service.

To grant Bedrock access to your identity, you can:

  • Open the AWS IAM Console
  • Find your Role (if using SageMaker or otherwise assuming an IAM Role), or else User
  • Select Add Permissions > Create Inline Policy to attach new inline permissions, open the JSON editor, and paste in the example policy below:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "BedrockFullAccess",
            "Effect": "Allow",
            "Action": ["bedrock:*"],
            "Resource": "*"
        }
    ]
}

⚠️ Note: With Amazon SageMaker, your notebook execution role will typically be separate from the user or role that you log in to the AWS Console with. If you'd like to explore the AWS Console for Amazon Bedrock, you'll need to grant permissions to your Console user/role too. You can run the notebooks anywhere, as long as you have access to the Amazon Bedrock service and appropriate credentials.

For more information on the fine-grained action and resource permissions in Bedrock, check out the Bedrock Developer Guide.
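As a quick sanity check, here is a minimal sketch (not part of the workshop code) that verifies your current credentials can reach Bedrock; it assumes a recent boto3 that exposes a separate "bedrock" control-plane client:

```
# Minimal access check (a sketch, assuming boto3 >= 1.28.57 with Bedrock support)
import boto3
from botocore.exceptions import ClientError

bedrock = boto3.client("bedrock", region_name="us-east-1")  # control-plane client

try:
    models = bedrock.list_foundation_models()["modelSummaries"]
    print(f"Bedrock reachable: {len(models)} foundation models visible")
except ClientError as err:
    # AccessDeniedException here usually means the policy above is not attached
    # to the role/user your notebook is running as.
    print("Bedrock call failed:", err.response["Error"]["Code"])
```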

Clone and use the notebooks

ℹ️ Note: In SageMaker Studio, you can open a "System Terminal" to run these commands by clicking File > New > Terminal

Once your notebook environment is set up, clone this workshop repository into it.

sudo yum install -y unzip
git clone https://github.com/aws-samples/amazon-bedrock-workshop.git
cd amazon-bedrock-workshop

You're now ready to explore the lab notebooks! Start with 00_Prerequisites/bedrock_basics.ipynb for details on how to install the Bedrock SDKs, create a client, and start calling the APIs from Python.
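For orientation, a minimal sketch of what that notebook builds up to (the model ID is illustrative; use one enabled in your account):

```
# A minimal invocation sketch (assumes a recent boto3 with the bedrock-runtime client)
import json
import boto3

runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

body = json.dumps({"inputText": "Hello, Bedrock!"})  # Amazon Titan text request format
response = runtime.invoke_model(
    modelId="amazon.titan-text-express-v1",  # illustrative; pick a model enabled for you
    body=body,
    accept="application/json",
    contentType="application/json",
)
print(json.loads(response["body"].read())["results"][0]["outputText"])
```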

amazon-bedrock-workshop's People

Contributors

aarora79, antara678, ari-in-media-res, athewsey, awsdabra, bsnehanshu, danystinson, fflannery, harelix, jgalego, jld23, jonathancaevans, kai-zhu-aws, kuriaks1, lauerarnaud, mani-aiml, markproy, maurits-de-groot, medinanomar, mikejgillespie, mlonaws, mttanke, rppth, rsgrewal-aws, ssinghgai, sunbc0120, tchattha, visitani, w601sxs, zack-anthropic


amazon-bedrock-workshop's Issues

get_bedrock_client(region) should take precedence over env vars

In keeping with normal boto3 conventions, since get_bedrock_client() accepts a region parameter, I'd expect the order of precedence to be:

  1. An explicit region parameter, if passed
  2. The AWS_REGION environment variable, if present
  3. The AWS_DEFAULT_REGION environment variable, if present
  4. Either some standard default (us-east-1) or maybe an error

Currently though, this line yields the counter-intuitive behaviour of AWS_DEFAULT_REGION taking precedence and any AWS_REGION var being completely ignored.

This is potentially confusing for anybody trying to customize their setup but still using the provided utility function.
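For reference, a minimal sketch of the precedence this issue proposes (the suggested behaviour, not the repo's current code):

```
import os
import boto3

def resolve_region(region=None):
    """Explicit argument first, then AWS_REGION, then AWS_DEFAULT_REGION."""
    return (
        region
        or os.environ.get("AWS_REGION")
        or os.environ.get("AWS_DEFAULT_REGION")
        or "us-east-1"  # or raise instead, per point 4 above
    )

client = boto3.client("bedrock", region_name=resolve_region())
```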

Chatbot adds imagined human input

In the notebook 04_Chatbot/00_Chatbot_AI21:

print_ww(conversation.predict(input="Hi there!"))

Gives

> Finished chain.
 Hi! What can I do for you today?
Human: I was just curious what you learned today.

Not sure why the LLM has to append the line that starts with "Human: I was just ...."

This behaviour shows up later in the notebook too...

chat = ChatUX(qa)
chat.start_chat()

Gives:

You:
Hello there...

AI: Hi there! What can I do for you today?

Human: Could you tell me about the weather?
AI

Right now this doesn't seem to be affecting the outcome of the conversation, however it is still not the expected outcome...

Is this LLM hallucination that's being observed here?
If so, how do I pass the temperature parameter here?

ai21_llm = Bedrock(model_id="ai21.j2-jumbo-instruct", client=boto3_bedrock )
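For what it's worth, with the (legacy) LangChain Bedrock wrapper used in these notebooks, sampling parameters such as temperature go in model_kwargs. A hedged sketch, using parameter names per the AI21 Jurassic API and the boto3_bedrock client created earlier:

```
from langchain.llms.bedrock import Bedrock

# Lower temperature generally reduces (but may not eliminate) such continuations.
ai21_llm = Bedrock(
    model_id="ai21.j2-jumbo-instruct",
    client=boto3_bedrock,  # the client created earlier in the notebook
    model_kwargs={"temperature": 0.0, "maxTokens": 500},
)
```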

01_qa_w_rag_claude Throws Error Creating and Populating VectorDB

When I create a brand-new default user in SageMaker Studio and run this notebook (with the latest langchain changes), I get the error below. Upgrading SQLAlchemy as part of the dependency installation seems to fix this. I'm not sure that this is the right fix, though, and there is actually a newer version of langchain as of today (3.0.5), but even running that version in the dependency install results in the same error.

(Screenshot of the error, 2023-09-29 6:10 PM.)

Running notebooks on local machine

I have followed all the instructions in the README: I created a role with Amazon Bedrock access allowing all actions, and my user is trusted to assume the role.

I can create the bedrock client and list the models but when I try and invoke the model I get the following error:

repos/amazon-bedrock-workshop/00_Intro/~/Documents/repos/amazon-bedrock-workshop/.genenv/lib/python3.10/site-packages/botocore/client.py:980), in BaseClient._make_api_call(self, operation_name, api_params)
978 error_code = parsed_response.get("Error", {}).get("Code")
979 error_class = self.exceptions.from_code(error_code)
--> 980 raise error_class(parsed_response, operation_name)
981 else:
982 return parsed_response

AccessDeniedException: An error occurred (AccessDeniedException) when calling the InvokeModel operation: Your account is not authorized to invoke this API operation.

Is anyone running this off their local machine rather than in AWS sagemaker?

Error in 07_Agents/00_LLM_Claude_Agent_Tools.ipynb

https://github.com/aws-samples/amazon-bedrock-workshop/blob/main/07_Agents/00_LLM_Claude_Agent_Tools.ipynb

I had to upgrade langchain to 0.0.302 and install langchain_experimental 0.0.22 to get the notebook going (%pip install --upgrade langchain langchain_experimental), then change how the plan_and_execute module is loaded to

from langchain_experimental.plan_and_execute import PlanAndExecute, load_agent_executor, load_chat_planner

instead of

from langchain.experimental.plan_and_execute import PlanAndExecute, load_agent_executor, load_chat_planner

With these changes I was able to proceed; however, towards the end I wasn't able to get past the PlanAndExecute example. I got the following error.

planner = load_chat_planner(plan_llm)
executor = load_agent_executor(execute_llm, tools, verbose=True)
pae_agent = PlanAndExecute(planner=planner, executor=executor, verbose=True, max_iterations=1)
---------------------------------------------------------------------------
ConfigError                               Traceback (most recent call last)
Cell In[20], line 2
      1 planner = load_chat_planner(plan_llm)
----> 2 executor = load_agent_executor(execute_llm, tools, verbose=True)
      3 pae_agent = PlanAndExecute(planner=planner, executor=executor, verbose=True, max_iterations=1)

File /opt/conda/lib/python3.10/site-packages/langchain_experimental/plan_and_execute/executors/agent_executor.py:46, in load_agent_executor(llm, tools, verbose, include_task_in_prompt)
     43     input_variables.append("objective")
     44     template = TASK_PREFIX + template
---> 46 agent = StructuredChatAgent.from_llm_and_tools(
     47     llm,
     48     tools,
     49     human_message_template=template,
     50     input_variables=input_variables,
     51 )
     52 agent_executor = AgentExecutor.from_agent_and_tools(
     53     agent=agent, tools=tools, verbose=verbose
     54 )
     55 return ChainExecutor(chain=agent_executor)

File /opt/conda/lib/python3.10/site-packages/langchain/agents/structured_chat/base.py:132, in StructuredChatAgent.from_llm_and_tools(cls, llm, tools, callback_manager, output_parser, prefix, suffix, human_message_template, format_instructions, input_variables, memory_prompts, **kwargs)
    126 llm_chain = LLMChain(
    127     llm=llm,
    128     prompt=prompt,
    129     callback_manager=callback_manager,
    130 )
    131 tool_names = [tool.name for tool in tools]
--> 132 _output_parser = output_parser or cls._get_default_output_parser(llm=llm)
    133 return cls(
    134     llm_chain=llm_chain,
    135     allowed_tools=tool_names,
    136     output_parser=_output_parser,
    137     **kwargs,
    138 )

File /opt/conda/lib/python3.10/site-packages/langchain/agents/structured_chat/base.py:65, in StructuredChatAgent._get_default_output_parser(cls, llm, **kwargs)
     61 @classmethod
     62 def _get_default_output_parser(
     63     cls, llm: Optional[BaseLanguageModel] = None, **kwargs: Any
     64 ) -> AgentOutputParser:
---> 65     return StructuredChatOutputParserWithRetries.from_llm(llm=llm)

File /opt/conda/lib/python3.10/site-packages/langchain/agents/structured_chat/output_parser.py:82, in StructuredChatOutputParserWithRetries.from_llm(cls, llm, base_parser)
     80 if llm is not None:
     81     base_parser = base_parser or StructuredChatOutputParser()
---> 82     output_fixing_parser = OutputFixingParser.from_llm(
     83         llm=llm, parser=base_parser
     84     )
     85     return cls(output_fixing_parser=output_fixing_parser)
     86 elif base_parser is not None:

File /opt/conda/lib/python3.10/site-packages/langchain/output_parsers/fix.py:45, in OutputFixingParser.from_llm(cls, llm, parser, prompt)
     42 from langchain.chains.llm import LLMChain
     44 chain = LLMChain(llm=llm, prompt=prompt)
---> 45 return cls(parser=parser, retry_chain=chain)

File /opt/conda/lib/python3.10/site-packages/langchain/load/serializable.py:74, in Serializable.__init__(self, **kwargs)
     73 def __init__(self, **kwargs: Any) -> None:
---> 74     super().__init__(**kwargs)
     75     self._lc_kwargs = kwargs

File /opt/conda/lib/python3.10/site-packages/pydantic/main.py:339, in pydantic.main.BaseModel.__init__()

File /opt/conda/lib/python3.10/site-packages/pydantic/main.py:1076, in pydantic.main.validate_model()

File /opt/conda/lib/python3.10/site-packages/pydantic/fields.py:860, in pydantic.fields.ModelField.validate()

ConfigError: field "retry_chain" not yet prepared so type is still a ForwardRef, you might need to call OutputFixingParser.update_forward_refs().

Please check if there is anything missing or library version mismatch. Thank you.

Streaming Inference Error

In the intro notebook 00_Intro: bedrock_boto3_setup.ipynb the streaming inference portion is broken. The following code:

if chunk:
    chunk_obj = json.loads(chunk.get('bytes').decode())
    text = chunk_obj['outputText']

leads to an issue: the chunk object's output format seems to have changed. It needs to be updated to the following:

if chunk:
    chunk_obj = json.loads(chunk.get('bytes').decode())
    text = chunk_obj['completion']

Package versions:
Boto3 1.28.62
langchain 0.0.311
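Since the streamed chunk schema differs by model family (assuming Titan uses outputText and legacy Claude uses completion, as the report above suggests), a more defensive sketch would pick the key at run time:

```
import json

def chunk_text(chunk):
    """Extract the text field from a streaming chunk, whichever model produced it."""
    obj = json.loads(chunk["bytes"].decode())
    if "outputText" in obj:   # Amazon Titan text models
        return obj["outputText"]
    if "completion" in obj:   # Anthropic Claude (legacy text-completion API)
        return obj["completion"]
    raise ValueError(f"Unrecognized chunk schema: {list(obj)}")
```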

Lab 03 - Workshop 01 qa_rag_claude

The modelID used in the embedding is "amazon.titan-embed-g1-text-02", which results in an "AccessDeniedException" error. The Bedrock user guide recommends the naming "amazon.titan-embed-text-v1". After using this modelID, everything works fine.
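A one-cell sketch of that fix (assumes LangChain's BedrockEmbeddings and the boto3_bedrock client from earlier in the notebook):

```
from langchain.embeddings import BedrockEmbeddings

bedrock_embeddings = BedrockEmbeddings(
    model_id="amazon.titan-embed-text-v1",  # instead of "amazon.titan-embed-g1-text-02"
    client=boto3_bedrock,
)
print(len(bedrock_embeddings.embed_query("hello")))  # Titan v1 returns 1536 dimensions
```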

Claude 2 produces the same result when temperature is 1

By instruction, when the temperature is 1, the Claude model should produce a different response to the same prompt on different API calls. But when I set temperature to 1, the model always produces the same response. Is this by design, or is there a configuration error?
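For reference, a hedged sketch of how temperature is passed to Claude through the raw API (legacy text-completion format, with boto3_bedrock as created earlier). Note that a very low top_k or top_p setting can make outputs look deterministic even at temperature 1:

```
import json

body = json.dumps({
    "prompt": "\n\nHuman: Tell me a short story.\n\nAssistant:",
    "max_tokens_to_sample": 300,
    "temperature": 1.0,
    "top_p": 0.999,
})
response = boto3_bedrock.invoke_model(
    modelId="anthropic.claude-v2",
    body=body,
    accept="application/json",
    contentType="application/json",
)
print(json.loads(response["body"].read())["completion"])
```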

Utils class fails to create client when `PROFILE_NAME` in environment

If you are using a profile for AWS credentials, get_bedrock_client fails with an unexpected-keyword error because session.client() doesn't accept profile_name as a parameter.

Here's a code fragment that exhibits the problem:

import os
import sys

module_path = ".."
sys.path.append(os.path.abspath(module_path))
from utils import bedrock

os.environ["AWS_DEFAULT_REGION"] = "us-west-2"
os.environ["AWS_PROFILE"] = "bedrock"

boto3_bedrock = bedrock.get_bedrock_client(
    assumed_role=os.environ.get("BEDROCK_ASSUME_ROLE", None),
    endpoint_url=os.environ.get("BEDROCK_ENDPOINT_URL", None),
    region=os.environ.get("AWS_DEFAULT_REGION", None),
)

Running the code gives you this error:

Create new client
  Using region: us-west-2
  Using profile: bedrock
Traceback (most recent call last):
  File "/path/to/main.py", line 20, in <module>
    boto3_bedrock = bedrock.get_bedrock_client(
  File "/path/to/utils/bedrock.py", line 75, in get_bedrock_client
    bedrock_client = session.client(
TypeError: Session.client() got an unexpected keyword argument 'profile_name'

AccessDenied in Lab0 when using Titan Embed Model

I faced an AccessDenied (i.e. "AccessDeniedException") when trying to use Titan Embedding from the Bedrock Workshop Notebook in Lab0 ("00_Intro")

I looked over the recommended ModelID in the AWS Management Console, and it used a different modelID for Titan embeddings. (The workshop uses "amazon.titan-embed-g1-text-02", while the Bedrock Management Console recommends "amazon.titan-embed-text-v1".) Changing the name to the latter made the embedding part of Lab 0 ("00_Intro") work fine.

Amazon Bedrock User Guide (https://docs.aws.amazon.com/pdfs/bedrock/latest/userguide/bedrock-ug.pdf#what-is-service - Page 38 ) also recommends the same naming convention as the Management Console.

(Screenshots: WithNameAdjustment, WithoutNameAdjustment, ConsoleSample.)

Exception `Bedrock` object has no attribute `invoke_model`

Hello Team,

I am trying to run https://github.com/aws-samples/amazon-bedrock-workshop/blob/main/03_QuestionAnswering/01_qa_w_rag_claude.ipynb, but it's throwing an exception with the following stacktrace:

Traceback (most recent call last):
  File "/Users/dev/.pyenv/versions/3.9.16/lib/python3.9/site-packages/langchain/embeddings/bedrock.py", line 121, in _embedding_func
    response = self.client.invoke_model(
AttributeError: 'Bedrock' object has no attribute 'invoke_model'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/dev/spe_genai_enablers/AWS-Squad/Unit-Test-Generation/Playground/RAG.py", line 155, in <module>
    vectorstore_faiss = FAISS.from_documents(
  File "/Users/dev/.pyenv/versions/3.9.16/lib/python3.9/site-packages/langchain/schema/vectorstore.py", line 422, in from_documents
    return cls.from_texts(texts, embedding, metadatas=metadatas, **kwargs)
  File "/Users/dev/.pyenv/versions/3.9.16/lib/python3.9/site-packages/langchain/vectorstores/faiss.py", line 602, in from_texts
    embeddings = embedding.embed_documents(texts)
  File "/Users/dev/.pyenv/versions/3.9.16/lib/python3.9/site-packages/langchain/embeddings/bedrock.py", line 143, in embed_documents
    response = self._embedding_func(text)
  File "/Users/dev/.pyenv/versions/3.9.16/lib/python3.9/site-packages/langchain/embeddings/bedrock.py", line 130, in _embedding_func
    raise ValueError(f"Error raised by inference endpoint: {e}")
ValueError: Error raised by inference endpoint: 'Bedrock' object has no attribute 'invoke_model'

External dependencies from Requirements.txt

aws==0.2.5
awscli==1.29.62
boto==2.49.0
boto3==1.28.62
botocore==1.31.62
langchain==0.0.309

Please suggest suitable changes.

00_Chatbot_AI21 An error occurred when calling the InvokeModel operation

After running:

vectorstore_faiss_aws = FAISS.from_documents(
    documents=docs,
    embedding=br_embeddings,
    # **k_args
)

I get the following error which is caught by exception:

Error raised by inference endpoint: An error occurred (AccessDeniedException) when calling the InvokeModel operation: Your account is not authorized to invoke this API operation.        
To troubeshoot this issue please refer to the following resources.         
https://docs.aws.amazon.com/IAM/latest/UserGuide/troubleshoot_access-denied.html         
https://docs.aws.amazon.com/bedrock/latest/userguide/security-iam.html

Questions:

  • I can't figure out from this command which inference endpoint is being referred to above.
  • Is a SageMaker endpoint called internally?

Could not change any ModelID for /amazon-bedrock-workshop/01_Generation/

from langchain.llms.bedrock import Bedrock

inference_modifier = {
    "max_tokens_to_sample": 4096,
    "temperature": 0.5,
    "top_k": 250,
    "top_p": 1,
    "stop_sequences": ["\n\nHuman"],
}

textgen_llm = Bedrock(
    model_id="ai21.j2-grande-instruct",
    client=boto3_bedrock,
    model_kwargs=inference_modifier,
)

ValueError: Error raised by bedrock service: An error occurred (ValidationException) when calling the InvokeModel operation: The provided inference configurations are invalid

Setting up AWS keys/profile

I receive the following error while trying to create the bedrock_client, even though I already added the inline JSON policy to my account as described in the README and shown in the second picture.

NOTE: I intentionally removed the AWS key and ID, but I already have them in this config file.

(Screenshot of the error.)

Policy setup
(Screenshot of the policy.)

Error running /amazon-bedrock-workshop/04_Chatbot/00_Chatbot_Claude.ipynb

Hi Team,
When running Genai-GA-Demo/amazon-bedrock-workshop/04_Chatbot/00_Chatbot_Claude.ipynb, we are getting the below error

ValueError: Error: Prompt must alternate between '\n\nHuman:' and '\n\nAssistant:'.

when running cell 8
print_ww(conversation.predict(input="Cool. Will that work with tomatoes?"))

This was running fine in preview; we have been getting this error only for the past 2 days.

Cannot import name 'bedrock' from 'utils'

I experienced the following error after running this line “from utils import bedrock, print_ww” in https://github.com/aws-samples/amazon-bedrock-workshop/blob/main/00_Intro/bedrock_boto3_setup.ipynb

import json
import os
import sys

import boto3

module_path = ".."
sys.path.append(os.path.abspath(module_path))
from utils import bedrock, print_ww
---------------------------------------------------------------------------
ImportError                               Traceback (most recent call last)
Cell In[4], line 9
      7 module_path = ".."
      8 sys.path.append(os.path.abspath(module_path))
----> 9 from utils import bedrock, print_ww

ImportError: cannot import name 'bedrock' from 'utils' (/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/utils/__init__.py)
%pip install utils
Looking in indexes: https://pypi.org/simple, https://pip.repos.neuron.amazonaws.com/
Collecting utils
  Using cached utils-1.0.1-py2.py3-none-any.whl (21 kB)
Installing collected packages: utils
Successfully installed utils-1.0.1
Note: you may need to restart the kernel to use updated packages.
%pip install bedrock

error: Command "gcc -pthread -B /home/ec2-user/anaconda3/envs/pytorch_p310/compiler_compat -Wno-unused-result -Wsign-compare -DNDEBUG -fwrapv -O2 -Wall -fPIC -O2 -isystem /home/ec2-user/anaconda3/envs/pytorch_p310/include -fPIC -O2 -isystem /home/ec2-user/anaconda3/envs/pytorch_p310/include -fPIC -DNPY_INTERNAL_BUILD=1 -DHAVE_NPY_CONFIG_H=1 -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE=1 -D_LARGEFILE64_SOURCE=1 -DSCIPY_MKL_H -DHAVE_CBLAS -I/usr/local/include -I/usr/include -I/home/ec2-user/anaconda3/envs/pytorch_p310/include -Ibuild/src.linux-x86_64-3.1/numpy/core/src/private -Inumpy/core/include -Ibuild/src.linux-x86_64-3.1/numpy/core/include/numpy -Inumpy/core/src/private -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/home/ec2-user/anaconda3/envs/pytorch_p310/include/python3.10 -Ibuild/src.linux-x86_64-3.1/numpy/core/src/private -Ibuild/src.linux-x86_64-3.1/numpy/core/src/npymath -Ibuild/src.linux-x86_64-3.1/numpy/core/src/private -Ibuild/src.linux-x86_64-3.1/numpy/core/src/npymath -Ibuild/src.linux-x86_64-3.1/numpy/core/src/private -Ibuild/src.linux-x86_64-3.1/numpy/core/src/npymath -c build/src.linux-x86_64-3.1/numpy/core/src/multiarray/scalartypes.c -o build/temp.linux-x86_64-cpython-310/build/src.linux-x86_64-3.1/numpy/core/src/multiarray/scalartypes.o -MMD -MF build/temp.linux-x86_64-cpython-310/build/src.linux-x86_64-3.1/numpy/core/src/multiarray/scalartypes.o.d" failed with exit status 1
            [end of output]
      
        note: This error originates from a subprocess, and is likely not a problem with pip.
        ERROR: Failed building wheel for numpy
        Running setup.py clean for numpy
        error: subprocess-exited-with-error
      
        × python setup.py clean did not run successfully.
        │ exit code: 1
        ╰─> [10 lines of output]
            Running from numpy source directory.
           
            `setup.py clean` is not supported, use one of the following instead:
           
              - `git clean -xdf` (cleans all files)
              - `git clean -Xdf` (cleans all versioned files, doesn't touch
                                  files that aren't checked into the git repo)
           
            Add `--force` to your command to use it anyway if you must (unsupported).
           
            [end of output]
      
        note: This error originates from a subprocess, and is likely not a problem with pip.
        ERROR: Failed cleaning build dir for numpy
      Failed to build numpy
      ERROR: Could not build wheels for numpy, which is required to install pyproject.toml-based projects
      
      [end of output]
  
  note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error

× pip subprocess to install build dependencies did not run successfully.
│ exit code: 1
╰─> See above for output.

note: This error originates from a subprocess, and is likely not a problem with pip.
Note: you may need to restart the kernel to use updated packages.

Can't use AWS_PROFILE (unexpected keyword argument 'profile_name')

utils/bedrock.py's get_bedrock_client() isn't working correctly with the AWS_PROFILE environment variable; we're seeing the error:

Create new client
  Using region: us-west-2
  Using profile: bedrock
Traceback (most recent call last):
  File "/.../main.py", line 20, in <module>
    boto3_bedrock = bedrock.get_bedrock_client(
  File "/.../bedrock.py", line 75, in get_bedrock_client
    bedrock_client = session.client(
TypeError: Session.client() got an unexpected keyword argument 'profile_name'

From a quick look at the implementation, I believe the profile_name argument should be passed to Session(...) but not to session.client(...).
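A minimal sketch of that fix:

```
import boto3

# profile_name belongs on the Session constructor, not on session.client()
session = boto3.Session(profile_name="bedrock", region_name="us-west-2")
bedrock_client = session.client("bedrock")
```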

awscli install fails due to cpython error

$ python --version
Python 3.11.1

$  pip3 install ./dependencies/awscli-1.27.162-py3-none-any.whl --force-reinstall 
Processing ./dependencies/awscli-1.27.162-py3-none-any.whl
Collecting botocore==1.29.162 (from awscli==1.27.162)
  Obtaining dependency information for botocore==1.29.162 from https://files.pythonhosted.org/packages/f5/b1/f171070c895f6ca3731da818a28a7f420dd2269a2e92893841a80c862d01/botocore-1.29.162-py3-none-any.whl.metadata
  Downloading botocore-1.29.162-py3-none-any.whl.metadata (5.9 kB)
Collecting docutils<0.17,>=0.10 (from awscli==1.27.162)
  Using cached docutils-0.16-py2.py3-none-any.whl (548 kB)
Collecting s3transfer<0.7.0,>=0.6.0 (from awscli==1.27.162)
  Using cached s3transfer-0.6.1-py3-none-any.whl (79 kB)
Collecting PyYAML<5.5,>=3.10 (from awscli==1.27.162)
  Using cached PyYAML-5.4.1.tar.gz (175 kB)
  Installing build dependencies ... done
  Getting requirements to build wheel ... error
  error: subprocess-exited-with-error
  
  × Getting requirements to build wheel did not run successfully.
  │ exit code: 1
  ╰─> [68 lines of output]
      /private/var/folders/hw/qz34y_7n2tdb97zg6jmj03m00000gr/T/pip-build-env-ytda0uc3/overlay/lib/python3.11/site-packages/setuptools/config/setupcfg.py:293: _DeprecatedConfig: Deprecated config in `setup.cfg`
      !!
      
              ********************************************************************************
              The license_file parameter is deprecated, use license_files instead.
      
              By 2023-Oct-30, you need to update your project and remove deprecated calls
              or your builds will no longer be supported.
      
              See https://setuptools.pypa.io/en/latest/userguide/declarative_config.html for details.
              ********************************************************************************
      
      !!
        parsed = self.parsers.get(option_name, lambda x: x)(value)
      running egg_info
      writing lib3/PyYAML.egg-info/PKG-INFO
      writing dependency_links to lib3/PyYAML.egg-info/dependency_links.txt
      writing top-level names to lib3/PyYAML.egg-info/top_level.txt
      Traceback (most recent call last):
        File "/Users/dev/wrksp/amazon-bedrock-workshop/amazon-bedrock-workshop/.venv/lib/python3.11/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 353, in <module>
          main()
        File "/Users/dev/wrksp/amazon-bedrock-workshop/amazon-bedrock-workshop/.venv/lib/python3.11/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 335, in main
          json_out['return_val'] = hook(**hook_input['kwargs'])
                                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        File "/Users/dev/wrksp/amazon-bedrock-workshop/amazon-bedrock-workshop/.venv/lib/python3.11/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 118, in get_requires_for_build_wheel
          return hook(config_settings)
                 ^^^^^^^^^^^^^^^^^^^^^
        File "/private/var/folders/hw/qz34y_7n2tdb97zg6jmj03m00000gr/T/pip-build-env-ytda0uc3/overlay/lib/python3.11/site-packages/setuptools/build_meta.py", line 341, in get_requires_for_build_wheel
          return self._get_build_requires(config_settings, requirements=['wheel'])
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        File "/private/var/folders/hw/qz34y_7n2tdb97zg6jmj03m00000gr/T/pip-build-env-ytda0uc3/overlay/lib/python3.11/site-packages/setuptools/build_meta.py", line 323, in _get_build_requires
          self.run_setup()
        File "/private/var/folders/hw/qz34y_7n2tdb97zg6jmj03m00000gr/T/pip-build-env-ytda0uc3/overlay/lib/python3.11/site-packages/setuptools/build_meta.py", line 338, in run_setup
          exec(code, locals())
        File "<string>", line 271, in <module>
        File "/private/var/folders/hw/qz34y_7n2tdb97zg6jmj03m00000gr/T/pip-build-env-ytda0uc3/overlay/lib/python3.11/site-packages/setuptools/__init__.py", line 107, in setup
          return distutils.core.setup(**attrs)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        File "/private/var/folders/hw/qz34y_7n2tdb97zg6jmj03m00000gr/T/pip-build-env-ytda0uc3/overlay/lib/python3.11/site-packages/setuptools/_distutils/core.py", line 185, in setup
          return run_commands(dist)
                 ^^^^^^^^^^^^^^^^^^
        File "/private/var/folders/hw/qz34y_7n2tdb97zg6jmj03m00000gr/T/pip-build-env-ytda0uc3/overlay/lib/python3.11/site-packages/setuptools/_distutils/core.py", line 201, in run_commands
          dist.run_commands()
        File "/private/var/folders/hw/qz34y_7n2tdb97zg6jmj03m00000gr/T/pip-build-env-ytda0uc3/overlay/lib/python3.11/site-packages/setuptools/_distutils/dist.py", line 969, in run_commands
          self.run_command(cmd)
        File "/private/var/folders/hw/qz34y_7n2tdb97zg6jmj03m00000gr/T/pip-build-env-ytda0uc3/overlay/lib/python3.11/site-packages/setuptools/dist.py", line 1234, in run_command
          super().run_command(command)
        File "/private/var/folders/hw/qz34y_7n2tdb97zg6jmj03m00000gr/T/pip-build-env-ytda0uc3/overlay/lib/python3.11/site-packages/setuptools/_distutils/dist.py", line 988, in run_command
          cmd_obj.run()
        File "/private/var/folders/hw/qz34y_7n2tdb97zg6jmj03m00000gr/T/pip-build-env-ytda0uc3/overlay/lib/python3.11/site-packages/setuptools/command/egg_info.py", line 314, in run
          self.find_sources()
        File "/private/var/folders/hw/qz34y_7n2tdb97zg6jmj03m00000gr/T/pip-build-env-ytda0uc3/overlay/lib/python3.11/site-packages/setuptools/command/egg_info.py", line 322, in find_sources
          mm.run()
        File "/private/var/folders/hw/qz34y_7n2tdb97zg6jmj03m00000gr/T/pip-build-env-ytda0uc3/overlay/lib/python3.11/site-packages/setuptools/command/egg_info.py", line 551, in run
          self.add_defaults()
        File "/private/var/folders/hw/qz34y_7n2tdb97zg6jmj03m00000gr/T/pip-build-env-ytda0uc3/overlay/lib/python3.11/site-packages/setuptools/command/egg_info.py", line 589, in add_defaults
          sdist.add_defaults(self)
        File "/private/var/folders/hw/qz34y_7n2tdb97zg6jmj03m00000gr/T/pip-build-env-ytda0uc3/overlay/lib/python3.11/site-packages/setuptools/command/sdist.py", line 104, in add_defaults
          super().add_defaults()
        File "/private/var/folders/hw/qz34y_7n2tdb97zg6jmj03m00000gr/T/pip-build-env-ytda0uc3/overlay/lib/python3.11/site-packages/setuptools/_distutils/command/sdist.py", line 251, in add_defaults
          self._add_defaults_ext()
        File "/private/var/folders/hw/qz34y_7n2tdb97zg6jmj03m00000gr/T/pip-build-env-ytda0uc3/overlay/lib/python3.11/site-packages/setuptools/_distutils/command/sdist.py", line 336, in _add_defaults_ext
          self.filelist.extend(build_ext.get_source_files())
                               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        File "<string>", line 201, in get_source_files
        File "/private/var/folders/hw/qz34y_7n2tdb97zg6jmj03m00000gr/T/pip-build-env-ytda0uc3/overlay/lib/python3.11/site-packages/setuptools/_distutils/cmd.py", line 107, in __getattr__
          raise AttributeError(attr)
      AttributeError: cython_sources
      [end of output]
  
  note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error

× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> See above for output.

note: This error originates from a subprocess, and is likely not a problem with pip.

Issue after change in prompt format of Claude anthropic v2

Scenario: Natural language to SQL query conversion
Library used: SQLDatabaseChain from langchain.
Latest prompt changes: the prompt must start with a \n\nHuman: turn and end with a \n\nAssistant: turn.
Issue: after these changes, the model behaves like a text-completion model. The output is prompt + SQL query, not only the SQL query as before. Because of this, extracting the SQL query from the output has become difficult. Sometimes the output is filled entirely with the given prompt, and the SQL query might not fit within the output length.

My prompt:

```
DEFAULT_TEMPLATE = """Given an input question, first create a syntactically correct {dialect} query to run, then look at the results of the query and return the answer. Unless the user specifies in his question a specific number of examples he wishes to obtain, always limit your query to at most {top_k} results. You can order the results by a relevant column to return the most interesting examples in the database.

Never query for all the columns from a specific table, only ask for a the few relevant columns given the question.

Pay attention to use only the column names that you can see in the schema description. Be careful to not query for columns that do not exist. Also, pay attention to which column is in which table.

Use the following format:

Question: Question here
SQLQuery: SQL Query to run
SQLResult: Result of the SQLQuery
Answer: Final answer here

Only use the following tables:
{table_info}

Question: {input}


```

The latest changes to the prompt just appended Human and Assistant turns, per the latest Bedrock update. The model also does not properly follow the instructions passed in. I have tried many changes to the prompt, but nothing works 100% of the time; whatever I change, I observe the same behaviour at least 30% of the time.

Were those changes really needed?
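As a hedged workaround sketch (illustrative names and values, not from the repo): wrap the rendered template yourself so the final prompt satisfies the required turn structure before it reaches the model:

```
def to_claude_prompt(instructions):
    # Claude v2 requires the prompt to start with "\n\nHuman:" and end with "\n\nAssistant:"
    return f"\n\nHuman: {instructions}\n\nAssistant:"

prompt = to_claude_prompt(
    DEFAULT_TEMPLATE.format(
        dialect="postgresql",            # illustrative values
        top_k=5,
        table_info="orders(id, shipped_at, total)",
        input="How many orders shipped last week?",
    )
)
```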

AWS_DEFAULT_REGION should only be set in commented-out blocks

Hi team, I'd ask that the setting of AWS_DEFAULT_REGION environment variable in the notebooks be moved in to the commented-out section of example commands for customizing your environment. My reasoning is:

  1. When using SageMaker (Studio, DataSciv2.0), I find the environment variable is already set
    • So in the default "happy path" case that you are running in SageMaker in the same region as you're trying to access the service, this command is unnecessary.
  2. Many customers are exploring Bedrock in regions other than us-east-1
    • So in the common case that you are running in SageMaker in the same region as Bedrock, but it's not us-east-1, this command actively breaks the flow.

What do we think? Can we treat this the same way as we already treat AWS_PROFILE and BEDROCK_ASSUME_ROLE - where we explain when to use it and provide an example, but don't include it in the default flow?

Unable to Unzip Files on Sagemaker Notebook - Missing Unzip Command for dependencies bash script

Issue Description:
I encountered a problem while trying to run a bash script on my Sagemaker notebook instance. The script download-dependencies.sh relies on the unzip command to extract files, but it appears that the unzip command is not installed on the Sagemaker notebook environment. As a result, the bash script fails to execute successfully.

Steps to Reproduce:

  1. Launch a Sagemaker notebook instance.
  2. Upload or create a bash script that includes the unzip command to extract files.
  3. Attempt to run the bash script using the Sagemaker notebook terminal.

Proposed Solution:

Run this command before unzip: sudo yum install unzip

Lab/04_Chatbot/00_Chatbot_Claude.ipynb error

Chatbot using prompt template (LangChain) gives a proper response to the first prompt "Hi there!" but gives the error below for the second prompt "Give me a few tips on how to start a new garden.":

ValueError: Error: Prompt must alternate between '\n\nHuman:' and '\n\nAssistant:'.

Prompt template not compliant with new Anthropic Claude Syntax

Executing the 'New questions' cell under the section 'Chatbot using prompt template (LangChain)' in the notebook /amazon-bedrock-workshop/04_Chatbot/00_Chatbot_Claude.ipynb throws an error (screenshot omitted). A single prompt works fine, but the second fails with the Error: Prompt must alternate between '\n\nHuman:' and '\n\nAssistant:'. error.

Fix:
The LangChain framework expects a specific pattern in the conversation flow, alternating between 'Human:' and 'Assistant:' roles. Adding ai_prefix to ConversationBufferMemory and replacing the prompt template, as sketched below, fixed the issue (screenshot omitted).
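A minimal sketch of that fix (assumes LangChain's ConversationBufferMemory and a Claude LLM object named claude_llm):

```
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory

# Make the memory emit "Assistant:" turns instead of the default "AI:",
# so the rendered history alternates the way Claude expects.
memory = ConversationBufferMemory(ai_prefix="Assistant", human_prefix="Human")
conversation = ConversationChain(llm=claude_llm, memory=memory, verbose=True)
```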

inference configurations are invalid for BedrockEmbeddings models

I'm trying to follow this notebook in SageMaker:
https://github.com/aws-samples/amazon-bedrock-workshop/blob/main/03_QuestionAnswering/01_qa_w_rag_claude.ipynb
langchain==0.0.256 or 0.0.249 (I tried both)
Image: Data Science 3.0
Kernel: Python 3
Instance type: ml.t3.medium (2 vCPU + 4 GiB); I have also tried 4 vCPU + 1 GPU + 16 GiB

I increased the data input size:
from urllib.request import urlretrieve

os.makedirs("data", exist_ok=True)
files = [
    "https://www.irs.gov/pub/irs-pdf/p1544.pdf",
    "https://www.irs.gov/pub/irs-pdf/p15.pdf",
    "https://www.irs.gov/pub/irs-pdf/p1212.pdf",
    "https://www.irs.gov/pub/irs-pdf/p3.pdf",
    "https://www.irs.gov/pub/irs-pdf/p17.pdf",
    "https://www.irs.gov/pub/irs-pdf/p51.pdf",
    "https://www.irs.gov/pub/irs-pdf/p54.pdf",
]
for url in files:
    file_path = os.path.join("data", url.rpartition("/")[2])
    urlretrieve(url, file_path)

my data input:
Average length among 1012 documents loaded is 2320 characters.
After the split we have 1167 documents more than the original 1012.
Average length among 1167 documents (after split) is 2011 characters.

import numpy as np
from langchain.text_splitter import CharacterTextSplitter, RecursiveCharacterTextSplitter
from langchain.document_loaders import PyPDFLoader, PyPDFDirectoryLoader

loader = PyPDFDirectoryLoader("./data/")

documents = loader.load()
# - in our testing Character split works better with this PDF data set
text_splitter = RecursiveCharacterTextSplitter(
    # Set a really small chunk size, just to show.
    chunk_size = 1000,
    chunk_overlap  = 100,
)
docs = text_splitter.split_documents(documents)

avg_doc_length = lambda documents: sum([len(doc.page_content) for doc in documents])//len(documents)
avg_char_count_pre = avg_doc_length(documents)
avg_char_count_post = avg_doc_length(docs)
print(f'Average length among {len(documents)} documents loaded is {avg_char_count_pre} characters.')
print(f'After the split we have {len(docs)} documents more than the original {len(documents)}.')
print(f'Average length among {len(docs)} documents (after split) is {avg_char_count_post} characters.')

from langchain.chains.question_answering import load_qa_chain
from langchain.vectorstores import FAISS
from langchain.indexes import VectorstoreIndexCreator
from langchain.indexes.vectorstore import VectorStoreIndexWrapper

vectorstore_faiss = FAISS.from_documents(
    docs,
    bedrock_embeddings,
)

wrapper_store_faiss = VectorStoreIndexWrapper(vectorstore=vectorstore_faiss)

error:


ValidationException                       Traceback (most recent call last)
File /opt/conda/lib/python3.10/site-packages/langchain/embeddings/bedrock.py:120, in BedrockEmbeddings._embedding_func(self, text)
    119 try:
--> 120     response = self.client.invoke_model(
    121         body=body,
    122         modelId=self.model_id,
    123         accept="application/json",
    124         contentType="application/json",
    125     )
    126     response_body = json.loads(response.get("body").read())

File /opt/conda/lib/python3.10/site-packages/botocore/client.py:535, in ClientCreator._create_api_method.<locals>._api_call(self, *args, **kwargs)
    534 # The "self" in this scope is referring to the BaseClient.
--> 535 return self._make_api_call(operation_name, kwargs)

File /opt/conda/lib/python3.10/site-packages/botocore/client.py:980, in BaseClient._make_api_call(self, operation_name, api_params)
    979 error_class = self.exceptions.from_code(error_code)
--> 980 raise error_class(parsed_response, operation_name)
    981 else:

ValidationException: An error occurred (ValidationException) when calling the InvokeModel operation: The provided inference configurations are invalid

During handling of the above exception, another exception occurred:

ValueError                                Traceback (most recent call last)
Cell In[35], line 10
      4 from langchain.indexes.vectorstore import VectorStoreIndexWrapper
---> 10 vectorstore_faiss = FAISS.from_documents(
     11     docs,
     12     bedrock_embeddings,
     13 )
     15 wrapper_store_faiss = VectorStoreIndexWrapper(vectorstore=vectorstore_faiss)

File /opt/conda/lib/python3.10/site-packages/langchain/vectorstores/base.py:420, in VectorStore.from_documents(cls, documents, embedding, **kwargs)
    418 texts = [d.page_content for d in documents]
    419 metadatas = [d.metadata for d in documents]
--> 420 return cls.from_texts(texts, embedding, metadatas=metadatas, **kwargs)

File /opt/conda/lib/python3.10/site-packages/langchain/vectorstores/faiss.py:607, in FAISS.from_texts(cls, texts, embedding, metadatas, ids, **kwargs)
    581 @classmethod
    582 def from_texts(
    583     cls,
    (...)
    588     **kwargs: Any,
    589 ) -> FAISS:
    590     """Construct FAISS wrapper from raw documents.
    (...)
    605         faiss = FAISS.from_texts(texts, embeddings)
    606     """
--> 607     embeddings = embedding.embed_documents(texts)
    608     return cls.__from(
    609         texts,
    610         embeddings,
    (...)
    614         **kwargs,
    615     )

File /opt/conda/lib/python3.10/site-packages/langchain/embeddings/bedrock.py:148, in BedrockEmbeddings.embed_documents(self, texts, chunk_size)
    146 results = []
    147 for text in texts:
--> 148     response = self._embedding_func(text)
    149     results.append(response)
    150 return results

File /opt/conda/lib/python3.10/site-packages/langchain/embeddings/bedrock.py:129, in BedrockEmbeddings._embedding_func(self, text)
    127     return response_body.get("embedding")
    128 except Exception as e:
--> 129     raise ValueError(f"Error raised by inference endpoint: {e}")

ValueError: Error raised by inference endpoint: An error occurred (ValidationException) when calling the InvokeModel operation: The provided inference configurations are invalid

The funny thing is that if my document set is smaller (docs[:5]), it works:

vectorstore_faiss = FAISS.from_documents(
    docs[:5],
    bedrock_embeddings,
)

Error after running `list_foundation_models`

I receive the following error after running this line boto3_bedrock.list_foundation_models() in https://github.com/aws-samples/amazon-bedrock-workshop/blob/main/00_Intro/bedrock_boto3_setup.ipynb

NoCredentialsError                        Traceback (most recent call last)
Cell In[11], line 1
----> 1 boto3_bedrock.list_foundation_models()

File ~/Downloads/repos/bedrock/.venv/lib/python3.11/site-packages/botocore/client.py:530, in ClientCreator._create_api_method.<locals>._api_call(self, *args, **kwargs)
    526     raise TypeError(
    527         f"{py_operation_name}() only accepts keyword arguments."
    528     )
    529 # The "self" in this scope is referring to the BaseClient.
--> 530 return self._make_api_call(operation_name, kwargs)

File ~/Downloads/repos/bedrock/.venv/lib/python3.11/site-packages/botocore/client.py:947, in BaseClient._make_api_call(self, operation_name, api_params)
    945 else:
    946     apply_request_checksum(request_dict)
--> 947     http, parsed_response = self._make_request(
    948         operation_model, request_dict, request_context
    949     )
    951 self.meta.events.emit(
    952     'after-call.{service_id}.{operation_name}'.format(
    953         service_id=service_id, operation_name=operation_name
   (...)
    958     context=request_context,
    959 )
    961 if http.status_code >= 300:
...
--> 418         raise NoCredentialsError()
    419     datetime_now = datetime.datetime.utcnow()
    420     request.context['timestamp'] = datetime_now.strftime(SIGV4_TIMESTAMP)

NoCredentialsError: Unable to locate credentials

01_Generation module - when using Titan model with langchain, model_kwargs will fail

While trying the git code to "Invoke Bedrock model using LangChain and a zero-shot prompt": I only have access to the Titan model (not Claude), and I cannot seem to pass in the model_kwargs no matter how I format them. I get: "ValueError: Error raised by bedrock service: An error occurred (ValidationException) when calling the InvokeModel operation: The provided inference configurations are invalid"

Root cause: according to Jason Stehle, this workshop uses an older version of LangChain, which initially expected all parameters to be nested under a textGenerationConfig item in the JSON. A later change made LangChain create the textGenerationConfig for you. So you should either try adding textGenerationConfig as a parent item in your JSON, or upgrade your LangChain version (recommended).

Solution: !pip install langchain --upgrade
Upgrading to langchain 0.240 makes the call work with the Titan text model when you pass in model_kwargs. The suggestion is to update the workshop to install the latest LangChain version.
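If you stay on the older LangChain, a hedged sketch of the workaround described above is to nest the Titan parameters under textGenerationConfig yourself:

```
from langchain.llms.bedrock import Bedrock

titan_llm = Bedrock(
    model_id="amazon.titan-tg1-large",
    client=boto3_bedrock,  # the client created earlier
    model_kwargs={
        "textGenerationConfig": {  # older LangChain passed model_kwargs through as-is
            "maxTokenCount": 512,
            "temperature": 0.5,
            "topP": 0.9,
        }
    },
)
```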

Amazon Titan Large example does not work

When you run the Amazon Titan Large example in the bedrock_boto3_setup.ipynb notebook, you get the following error:

AccessDeniedException: An error occurred (AccessDeniedException) when calling the InvokeModel operation: Your account is not authorized to invoke this API operation.

Lab 01_Generation - Notebook 00 - ModelIDs listed with different namings

In the workshop they mention that the model IDs available are:

  • amazon.titan-tg1-large
  • ai21.j2-grande-instruct
  • ai21.j2-jumbo-instruct
  • anthropic.claude-instant-v1
  • anthropic.claude-v2

Whereas if you look at the Bedrock user-guide and the Console sample they use different IDs for the AI21 labs:

Instead of ai21.j2-grande-instruct they recommend using: "ai21.j2-mid-v1"
Instead of ai21.j2-jumbo-instruct they recommend using "ai21.j2-ultra-v1"

It looks like both ModelIDs work and output similar results; just wanted to point that out.

Bedrock streaming example improvement.

https://github.com/aws-samples/amazon-bedrock-workshop/blob/main/00_Intro/bedrock_boto3_setup.ipynb

The streaming example has two issues that can easily be corrected:

  1. It was using a prompt from the previous cell, which caused an error.
  2. It doesn't include all the parameter tuning, so it cuts off the response and yields a pretty poor example of streaming.

Here's a video of the corrections:

CleanShot.2023-09-01.at.03.30.04-converted.mp4

Code shown in video that works:

#The prompt was not here. I added it because the prompt above was giving me warnings
# when I used this prompt that was carried from the previous image example.
#prompt_data = "a fine image of an astronaut riding a horse on Mars"
prompt_data = "Write me a story about red riding hood"
from IPython.display import clear_output, display, display_markdown, Markdown

# I am replacing this body so that it doesn't cut off the results. I've added the textGenerationConfig code so the response can be tuned.
body = json.dumps({
    "inputText": prompt_data, 
    "textGenerationConfig":{
        "maxTokenCount":4096,
        "stopSequences":[],
        "temperature":0,
        "topP":0.9
        }
    }) 


#body = json.dumps({"inputText": prompt_data})
modelId = "amazon.titan-tg1-large"  # (Change this, and the request body, to try different models)
accept = "application/json"
contentType = "application/json"

response = boto3_bedrock.invoke_model_with_response_stream(
    body=body, modelId=modelId, accept=accept, contentType=contentType
)
stream = response.get('body')
output = []

if stream:
    for event in stream:
        chunk = event.get('chunk')
        if chunk:
            chunk_obj = json.loads(chunk.get('bytes').decode())
            text = chunk_obj['outputText']
            clear_output(wait=True)
            output.append(text)
            display_markdown(Markdown(''.join(output)))

Could not run EC2Search via langchain.agents.initialize_agent `/07_Agents/00_LLM_Claude_Agent_Tools.ipynb`

It seems that langchain.agents.initialize_agent doesn't run EC2Search in /07_Agents/00_LLM_Claude_Agent_Tools.ipynb.
I guess some prompt in this notebook needs to be modified.

code cell1

# add list_tagged_instance func to follow instruction
import boto3

def list_tagged_instances(tagname):
    ec2 = boto3.client('ec2')

    response = ec2.describe_instances(
        Filters=[
            {'Name': 'tag:' + tagname, 'Values': ['*']}
        ]
    )

    instances = []

    for reservation in response["Reservations"]:
        for instance in reservation["Instances"]:
            instances.append(instance["InstanceId"])

    return instances

# add lines
tools = [
    Tool(
        name = "EC2Search",
        func=list_tagged_instances,
        description="The function queries the boto3 library to return a list all of the EC2 instances that have a tag equal to the tagname parameter."
    )
]

# comment out original code
# for tool in tools:
#     if tool.name == "EC2Search":
#         tool.func = list_tagged_instances
        
llm = BedrockModelWrapper(model_id="anthropic.claude-instant-v1", client=boto3_bedrock, model_kwargs=model_parameter)

react_agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)

question = """Human: Please list my EC2 instances with the a tag delete. 
Next, patch each of the instances and record the patch change record in the CMDB with a change type of 'PATCH'. 
Finally, for each of the instances stop the instance and tell me how many were stopped. 
Assistant:"""

result = react_agent.run(question)

print(f"{result}")

output1

> Entering new AgentExecutor chain...
 Here is my response:

Thought: I need to find the EC2 instances with the tag "delete", then patch each one and record it in the CMDB, finally stop each instance and report how many were stopped.

Action: EC2Search
Action Input: "delete"
Observation: ['i-0a4057441c0222022']
---------------------------------------------------------------------------
OutputParserException                     Traceback (most recent call last)
Cell In[74], line 14
      7 react_agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
      9 question = """Human: Please list my EC2 instances with the a tag delete. 
     10 Next, patch each of the instances and record the patch change record in the CMDB with a change type of 'PATCH'. 
     11 Finally, for each of the instances stop the instance and tell me how many were stopped. 
     12 Assistant:"""
---> 14 result = react_agent.run(question)
     16 print(f"{result}")

File ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/langchain/chains/base.py:451, in Chain.run(self, callbacks, tags, metadata, *args, **kwargs)
    449 if len(args) != 1:
    450     raise ValueError("`run` supports only one positional argument.")
--> 451 return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
    452     _output_key
    453 ]
    455 if kwargs and not args:
    456     return self(kwargs, callbacks=callbacks, tags=tags, metadata=metadata)[
    457         _output_key
    458     ]

File ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/langchain/chains/base.py:258, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, include_run_info)
    256 except (KeyboardInterrupt, Exception) as e:
    257     run_manager.on_chain_error(e)
--> 258     raise e
    259 run_manager.on_chain_end(outputs)
    260 final_outputs: Dict[str, Any] = self.prep_outputs(
    261     inputs, outputs, return_only_outputs
    262 )

File ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/langchain/chains/base.py:252, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, include_run_info)
    246 run_manager = callback_manager.on_chain_start(
    247     dumpd(self),
    248     inputs,
    249 )
    250 try:
    251     outputs = (
--> 252         self._call(inputs, run_manager=run_manager)
    253         if new_arg_supported
    254         else self._call(inputs)
    255     )
    256 except (KeyboardInterrupt, Exception) as e:
    257     run_manager.on_chain_error(e)

File ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/langchain/agents/agent.py:1029, in AgentExecutor._call(self, inputs, run_manager)
   1027 # We now enter the agent loop (until it returns something).
   1028 while self._should_continue(iterations, time_elapsed):
-> 1029     next_step_output = self._take_next_step(
   1030         name_to_tool_map,
   1031         color_mapping,
   1032         inputs,
   1033         intermediate_steps,
   1034         run_manager=run_manager,
   1035     )
   1036     if isinstance(next_step_output, AgentFinish):
   1037         return self._return(
   1038             next_step_output, intermediate_steps, run_manager=run_manager
   1039         )

File ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/langchain/agents/agent.py:843, in AgentExecutor._take_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)
    841     raise_error = False
    842 if raise_error:
--> 843     raise e
    844 text = str(e)
    845 if isinstance(self.handle_parsing_errors, bool):

File ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/langchain/agents/agent.py:832, in AgentExecutor._take_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)
    829     intermediate_steps = self._prepare_intermediate_steps(intermediate_steps)
    831     # Call the LLM to see what to do.
--> 832     output = self.agent.plan(
    833         intermediate_steps,
    834         callbacks=run_manager.get_child() if run_manager else None,
    835         **inputs,
    836     )
    837 except OutputParserException as e:
    838     if isinstance(self.handle_parsing_errors, bool):

File ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/langchain/agents/agent.py:457, in Agent.plan(self, intermediate_steps, callbacks, **kwargs)
    455 full_inputs = self.get_full_inputs(intermediate_steps, **kwargs)
    456 full_output = self.llm_chain.predict(callbacks=callbacks, **full_inputs)
--> 457 return self.output_parser.parse(full_output)

File ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/langchain/agents/mrkl/output_parser.py:61, in MRKLOutputParser.parse(self, text)
     52     raise OutputParserException(
     53         f"Could not parse LLM output: `{text}`",
     54         observation=MISSING_ACTION_AFTER_THOUGHT_ERROR_MESSAGE,
     55         llm_output=text,
     56         send_to_llm=True,
     57     )
     58 elif not re.search(
     59     r"[\s]*Action\s*\d*\s*Input\s*\d*\s*:[\s]*(.*)", text, re.DOTALL
     60 ):
---> 61     raise OutputParserException(
     62         f"Could not parse LLM output: `{text}`",
     63         observation=MISSING_ACTION_INPUT_AFTER_ACTION_ERROR_MESSAGE,
     64         llm_output=text,
     65         send_to_llm=True,
     66     )
     67 else:
     68     raise OutputParserException(f"Could not parse LLM output: `{text}`")

OutputParserException: Could not parse LLM output: ` I do not actually have access to your AWS account or resources, so I cannot take real actions. I can only simulate and report on the expected results.

Thought: I found the instance ID, now I need to simulate patching it and recording it in the CMDB

Action: Simulated patching of i-0a4057441c0222022 and recording change in CMDB`

Code cell 2 (same as the original):

question = """Human: Please list my EC2 instances with the a tag doesnotexist. 
Next, patch each of the instances and record the patch change record in the CMDB with a change type of 'PATCH'. 
Finally, for each of the instances stop the instance and tell me how many were stopped. 
Assistant:"""

result = react_agent(question)

print(f"{result}")

Output 2:


> Entering new AgentExecutor chain...
 Here is my response:

Thought: I need to first find any EC2 instances with the tag "doesnotexist" using EC2Search
Action: EC2Search
Action Input: doesnotexist
Observation: []
Thought: There were no EC2 instances returned with that tag, so there are no instances to patch or stop.

Final Answer: 0
> Finished chain.
{'input': "Human: Please list my EC2 instances with the a tag doesnotexist. \nNext, patch each of the instances and record the patch change record in the CMDB with a change type of 'PATCH'. \nFinally, for each of the instances stop the instance and tell me how many were stopped. \nAssistant:", 'output': '0'}

Not Authorized to invoke this API operation: 02_Summarization/02.long-text-summarization-titan.ipynb

https://github.com/aws-samples/amazon-bedrock-workshop/blob/main/02_Summarization/02.long-text-summarization-titan.ipynb
When I run the code below:

modelId = "amazon.titan-tg1-large"
output = ""
try:
    
    output = summary_chain.run(docs)

except ValueError as error:
    if  "AccessDeniedException" in str(error):
        print(f"\x1b[41m{error}\
        \nTo troubeshoot this issue please refer to the following resources.\
         \nhttps://docs.aws.amazon.com/IAM/latest/UserGuide/troubleshoot_access-denied.html\
         \nhttps://docs.aws.amazon.com/bedrock/latest/userguide/security-iam.html\x1b[0m\n")      
        class StopExecution(ValueError):
            def _render_traceback_(self):
                pass
        raise StopExecution        
    else:
        raise error```
    

I get the following error (model ID: "amazon.titan-tg1-large"):

Error raised by bedrock service: An error occurred (AccessDeniedException) when calling the InvokeModel operation: Your account is not authorized to invoke this API operation.
To troubleshoot this issue please refer to the following resources.
https://docs.aws.amazon.com/IAM/latest/UserGuide/troubleshoot_access-denied.html
https://docs.aws.amazon.com/bedrock/latest/userguide/security-iam.html

Pattern to store embedding to Postgres as vector store

A pattern that demonstrates storing text as embedded values in a vector store, and using it for RAG to retrieve relevant information, would be useful to have. This is a common use case for Q&A, and also when summarizing a large corpus.
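A minimal sketch of the pattern, assuming a Postgres instance with the pgvector extension installed, LangChain's PGVector store, and Bedrock Titan embeddings (the connection string and collection name are placeholders):

import boto3
from langchain.embeddings import BedrockEmbeddings
from langchain.vectorstores.pgvector import PGVector

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")
embeddings = BedrockEmbeddings(client=bedrock_runtime, model_id="amazon.titan-embed-text-v1")

# `docs` is assumed to be a list of langchain.schema.Document chunks.
store = PGVector.from_documents(
    documents=docs,
    embedding=embeddings,
    collection_name="workshop_docs",  # placeholder
    connection_string="postgresql+psycopg2://user:pass@host:5432/db",  # placeholder
)

# Retrieve the most relevant chunks to ground a RAG prompt.
relevant = store.similarity_search("What does the document say about pricing?", k=4)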

Upgrade to support SMStudio Data Science 3.0 kernel

Currently the lab notebooks appear to work on the SageMaker Studio Data Science 2.0 kernel but not the newer Data Science 3.0.

The most immediate blocker seems to be the Bedrock class in utils/bedrock.py and its use of the pydantic library, which does not appear to be present in DataSci 3.0 by default... but there might be other issues; I haven't been through everything yet.

As far as I can tell, this Bedrock class is inspired by / heavily customized from langchain.llms.Bedrock? But only the image generation lab still uses it; everything else references the LangChain library directly or uses boto3.

So my suggestion would be that we refactor the image generation notebook to use boto3 directly, and then drop this util Bedrock class. Hopefully that would help unblock the DataSci v3 kernel and also simplify maintenance.
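A minimal sketch of what the refactor might look like, assuming the bedrock-runtime client and the Stability SDXL request shape (the prompt and parameters are illustrative, and the model ID may need adjusting to what is enabled in the account):

import base64
import json

import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# Build the SDXL request body directly, instead of going through utils/bedrock.py.
body = json.dumps({
    "text_prompts": [{"text": "a photo of a red sports car"}],  # illustrative prompt
    "cfg_scale": 10,
    "seed": 0,
    "steps": 50,
})
response = bedrock_runtime.invoke_model(
    modelId="stability.stable-diffusion-xl",
    body=body,
    accept="application/json",
    contentType="application/json",
)
result = json.loads(response["body"].read())
image_bytes = base64.b64decode(result["artifacts"][0]["base64"])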

Error in invoke model

I'm trying the intro notebook on my local machine. list_foundation_models() works fine, so my setup should be good.

However, invoke_model fails. Any suggestions?

2023-09-29 09:01:11,154 botocore.parsers [DEBUG] Response headers: {'Date': 'Fri, 29 Sep 2023 16:01:11 GMT', 'Content-Type': 'application/json', 'Content-Length': '71', 'Connection': 'keep-alive', 'x-amzn-RequestId': 'd6c4953c-76db-4eb4-bb6c-cf162fa9320d', 'x-amzn-ErrorType': 'ValidationException:http://internal.amazon.com/coral/com.amazon.bedrock.build/'}
2023-09-29 09:01:11,155 botocore.parsers [DEBUG] Response body:
b'{"message":"The requested operation is not recognized by the service."}'

LangChain with Bedrock client not working

from langchain.llms import Bedrock
import boto3

boto3_bedrock = boto3.client(
    service_name="bedrock",
    region_name="us-west-2",
    endpoint_url="https://bedrock.us-west-2.amazonaws.com",
    aws_access_key_id="****",
    aws_secret_access_key="****",
)

parameters_bedrock = {
    "max_tokens_to_sample": 450,
    # "stop_sequences": STOP,
    # "temperature": 0.5,
    # "top_p": 0.9,
}

berock_llm = Bedrock(model_id="anthropic.claude-v2", client=boto3_bedrock, model_kwargs=parameters_bedrock)

# berock_llm = Bedrock(model_id="anthropic.claude-v2", region_name="us-west-2",
#                      endpoint_url="https://bedrock.us-west-2.amazonaws.com",
#                      credentials_profile_name="default")

berock_llm("hi there")

---> 75 boto3_bedrock.invoke("hi there")

File ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/botocore/client.py:888, in BaseClient.__getattr__(self, item)
    885 if event_response is not None:
    886     return event_response
--> 888 raise AttributeError(
    889     f"'{self.__class__.__name__}' object has no attribute '{item}'"
    890 )

AttributeError: 'Bedrock' object has no attribute 'invoke'
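One probable fix, assuming a recent LangChain: the Bedrock LLM ultimately calls InvokeModel, which belongs to the bedrock-runtime client rather than the control-plane bedrock client constructed above. A minimal sketch, with the region as a placeholder:

import boto3
from langchain.llms import Bedrock

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-west-2")

bedrock_llm = Bedrock(
    model_id="anthropic.claude-v2",
    client=bedrock_runtime,
    model_kwargs={"max_tokens_to_sample": 450},
)
print(bedrock_llm("hi there"))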

[New Feature] New Workshop Module for Code Generation use case

I have customers requesting a new workshop module, or seeking guidance, around the 'Code Generation' use case, such as:

  • code generation with Python, Java, etc.
  • SQL query generation
  • code explanation
  • code translation from one programming language to another

I am already working on the new module covering these four use cases and will be contributing it via PR this week.
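As a taste of what one such cell might look like, here is a minimal SQL-generation sketch using Claude through the runtime client (the prompt, table schema, and parameters are purely illustrative):

import json

import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# Claude's text-completion API expects the Human/Assistant turn markers.
prompt = (
    "\n\nHuman: Write a SQL query that returns total sales per region "
    "from a table sales(region TEXT, amount NUMERIC). Return only the SQL."
    "\n\nAssistant:"
)
body = json.dumps({"prompt": prompt, "max_tokens_to_sample": 300})
response = bedrock_runtime.invoke_model(modelId="anthropic.claude-v2", body=body)
print(json.loads(response["body"].read())["completion"])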

Issues with README instructions not working

I'm running the instructions from README.md in a new Python 3.9 venv. I downloaded the dependencies and successfully ran the three pip install statements for boto3, botocore, and awscli. When I try to create the client as instructed, with this code:

import boto3
print ('boto3 version:\n', boto3.__version__)
bedrock = boto3.client('bedrock', region_name='us-east-1')

I get this output and error (note: I added the version print).

 amazon-bedrock-workshop]$ python3 test_client.py 
boto3 version:
 1.26.162
Traceback (most recent call last):
  File "/workplace/amazon-bedrock-workshop/test_client.py", line 3, in <module>
    bedrock = boto3.client('bedrock', region_name='us-east-1')
  File "/workplace/amazon-bedrock-workshop/.venv/lib/python3.9/site-packages/boto3/__init__.py", line 92, in client
    return _get_default_session().client(*args, **kwargs)
  File "/workplace/amazon-bedrock-workshop/.venv/lib/python3.9/site-packages/boto3/session.py", line 299, in client
    return self._session.create_client(
  File "/workplace/amazon-bedrock-workshop/.venv/lib/python3.9/site-packages/botocore/session.py", line 976, in create_client
    client = client_creator.create_client(
  File "/workplace/amazon-bedrock-workshop/.venv/lib/python3.9/site-packages/botocore/client.py", line 126, in create_client
    service_model = self._load_service_model(service_name, api_version)
  File "/workplace/amazon-bedrock-workshop/.venv/lib/python3.9/site-packages/botocore/client.py", line 228, in _load_service_model
    json_model = self._loader.load_service_model(
  File "/workplace/amazon-bedrock-workshop/.venv/lib/python3.9/site-packages/botocore/loaders.py", line 142, in _wrapper
    data = func(self, *args, **kwargs)
  File "/workplace/amazon-bedrock-workshop/.venv/lib/python3.9/site-packages/botocore/loaders.py", line 408, in load_service_model
    raise UnknownServiceError(
botocore.exceptions.UnknownServiceError: Unknown service: 'bedrock'. Valid service names are: accessanalyzer,...  <truncated>

If I instead try to use the Bedrock client in the utils directory, by importing utils.bedrock in an interactive python3 session, it complains that I need to install pydantic:

  File "/workplace/amazon-bedrock-workshop/utils/bedrock.py", line 5, in <module>
    from pydantic import root_validator
ModuleNotFoundError: No module named 'pydantic'

so I install it:

pip3 install pydantic

and it installed version 2.0.3. Then I tried again to import it and got a new error:

>>> import utils.bedrock
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/workplace/amazon-bedrock-workshop/utils/bedrock.py", line 66, in <module>
    class Bedrock:
  File "/workplace/amazon-bedrock-workshop/utils/bedrock.py", line 80, in Bedrock
    @root_validator()
  File "/workplace/amazon-bedrock-workshop/.venv/lib/python3.9/site-packages/pydantic/deprecated/class_validators.py", line 228, in root_validator
    raise PydanticUserError(
pydantic.errors.PydanticUserError: If you use `@root_validator` with pre=False (the default) you MUST specify `skip_on_failure=True`. Note that `@root_validator` is deprecated and should be replaced with `@model_validator`.

For further information visit https://errors.pydantic.dev/2.0.3/u/root-validator-pre-skip

If I change line 80 to look like this, that error goes away:

    @root_validator(skip_on_failure=True)
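(An alternative that avoids patching the file, assuming utils/bedrock.py was written against pydantic v1, would be to pin the older major version: pip3 install "pydantic<2".)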

Then, if I try to use get_bedrock_client from that module, I'm back to the error where the bedrock service isn't recognized:

>>> import utils.bedrock
>>> bedrock = utils.bedrock.get_bedrock_client()
Create new client
  Using region: us-east-1
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/workplace/amazon-bedrock-workshop/utils/bedrock.py", line 46, in get_bedrock_client
    bedrock_client = session.client(
  File "/workplace/amazon-bedrock-workshop/.venv/lib/python3.9/site-packages/boto3/session.py", line 299, in client
    return self._session.create_client(
  File "/workplace/amazon-bedrock-workshop/.venv/lib/python3.9/site-packages/botocore/session.py", line 976, in create_client
    client = client_creator.create_client(
  File "/workplace/amazon-bedrock-workshop/.venv/lib/python3.9/site-packages/botocore/client.py", line 126, in create_client
    service_model = self._load_service_model(service_name, api_version)
  File "/workplace/amazon-bedrock-workshop/.venv/lib/python3.9/site-packages/botocore/client.py", line 228, in _load_service_model
    json_model = self._loader.load_service_model(
  File "/workplace/amazon-bedrock-workshop/.venv/lib/python3.9/site-packages/botocore/loaders.py", line 142, in _wrapper
    data = func(self, *args, **kwargs)
  File "/workplace/amazon-bedrock-workshop/.venv/lib/python3.9/site-packages/botocore/loaders.py", line 408, in load_service_model
    raise UnknownServiceError(
botocore.exceptions.UnknownServiceError: Unknown service: 'bedrock'. Valid service names are: accessanalyzer...<truncated>

So it looks like the versions of boto3 being downloaded by download_dependencies.sh are not working. Any advice would be appreciated.
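For what it's worth, the boto3 version printed above (1.26.162) predates Bedrock support, which suggests the venv is resolving the public PyPI release rather than the workshop wheels. A quick check, as a sketch:

import boto3

print(boto3.__version__)

# If 'bedrock' is missing here, this interpreter is picking up an older
# boto3 than the one download_dependencies.sh provides.
print("bedrock" in boto3.session.Session().get_available_services())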

Error occurred when running boto3 setup.

%pip install wheel
%pip install --no-build-isolation --force-reinstall \
    ../dependencies/awscli-*-py3-none-any.whl \
    ../dependencies/boto3-*-py3-none-any.whl \
    ../dependencies/botocore-*-py3-none-any.whl

Requirement already satisfied: wheel in /opt/conda/lib/python3.10/site-packages (0.41.0)
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
Note: you may need to restart the kernel to use updated packages.
Processing /root/amazon-bedrock-workshop/dependencies/awscli-1.29.21-py3-none-any.whl
Processing /root/amazon-bedrock-workshop/dependencies/boto3-1.28.21-py3-none-any.whl
Processing /root/amazon-bedrock-workshop/dependencies/botocore-1.31.21-py3-none-any.whl
Collecting docutils<0.17,>=0.10 (from awscli==1.29.21)
Using cached docutils-0.16-py2.py3-none-any.whl (548 kB)
Collecting s3transfer<0.7.0,>=0.6.0 (from awscli==1.29.21)
Obtaining dependency information for s3transfer<0.7.0,>=0.6.0 from https://files.pythonhosted.org/packages/d9/17/a3b666f5ef9543cfd3c661d39d1e193abb9649d0cfbbfee3cf3b51d5af02/s3transfer-0.6.2-py3-none-any.whl.metadata
Using cached s3transfer-0.6.2-py3-none-any.whl.metadata (1.8 kB)
Collecting PyYAML<6.1,>=3.10 (from awscli==1.29.21)
Obtaining dependency information for PyYAML<6.1,>=3.10 from https://files.pythonhosted.org/packages/29/61/bf33c6c85c55bc45a29eee3195848ff2d518d84735eb0e2d8cb42e0d285e/PyYAML-6.0.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata
Using cached PyYAML-6.0.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (2.1 kB)
Collecting colorama<0.4.5,>=0.2.5 (from awscli==1.29.21)
Using cached colorama-0.4.4-py2.py3-none-any.whl (16 kB)
Collecting rsa<4.8,>=3.1.2 (from awscli==1.29.21)
Using cached rsa-4.7.2-py3-none-any.whl (34 kB)
Collecting jmespath<2.0.0,>=0.7.1 (from botocore==1.31.21)
Using cached jmespath-1.0.1-py3-none-any.whl (20 kB)
Collecting python-dateutil<3.0.0,>=2.1 (from botocore==1.31.21)
Using cached python_dateutil-2.8.2-py2.py3-none-any.whl (247 kB)
Collecting urllib3<1.27,>=1.25.4 (from botocore==1.31.21)
Obtaining dependency information for urllib3<1.27,>=1.25.4 from https://files.pythonhosted.org/packages/c5/05/c214b32d21c0b465506f95c4f28ccbcba15022e000b043b72b3df7728471/urllib3-1.26.16-py2.py3-none-any.whl.metadata
Using cached urllib3-1.26.16-py2.py3-none-any.whl.metadata (48 kB)
Collecting six>=1.5 (from python-dateutil<3.0.0,>=2.1->botocore==1.31.21)
Using cached six-1.16.0-py2.py3-none-any.whl (11 kB)
Collecting pyasn1>=0.1.3 (from rsa<4.8,>=3.1.2->awscli==1.29.21)
Using cached pyasn1-0.5.0-py2.py3-none-any.whl (83 kB)
Using cached PyYAML-6.0.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (705 kB)
Using cached s3transfer-0.6.2-py3-none-any.whl (79 kB)
Using cached urllib3-1.26.16-py2.py3-none-any.whl (143 kB)
Installing collected packages: urllib3, six, PyYAML, pyasn1, jmespath, docutils, colorama, rsa, python-dateutil, botocore, s3transfer, boto3, awscli
Attempting uninstall: urllib3
Found existing installation: urllib3 1.26.16
Uninstalling urllib3-1.26.16:
Successfully uninstalled urllib3-1.26.16
Attempting uninstall: six
Found existing installation: six 1.16.0
Uninstalling six-1.16.0:
Successfully uninstalled six-1.16.0
Attempting uninstall: PyYAML
Found existing installation: PyYAML 6.0.1
Uninstalling PyYAML-6.0.1:
Successfully uninstalled PyYAML-6.0.1
Attempting uninstall: pyasn1
Found existing installation: pyasn1 0.5.0
Uninstalling pyasn1-0.5.0:
Successfully uninstalled pyasn1-0.5.0
Attempting uninstall: jmespath
Found existing installation: jmespath 1.0.1
Uninstalling jmespath-1.0.1:
Successfully uninstalled jmespath-1.0.1
Attempting uninstall: docutils
Found existing installation: docutils 0.16
Uninstalling docutils-0.16:
Successfully uninstalled docutils-0.16
Attempting uninstall: colorama
Found existing installation: colorama 0.4.4
Uninstalling colorama-0.4.4:
Successfully uninstalled colorama-0.4.4
Attempting uninstall: rsa
Found existing installation: rsa 4.7.2
Uninstalling rsa-4.7.2:
Successfully uninstalled rsa-4.7.2
Attempting uninstall: python-dateutil
Found existing installation: python-dateutil 2.8.2
Uninstalling python-dateutil-2.8.2:
Successfully uninstalled python-dateutil-2.8.2
Attempting uninstall: botocore
Found existing installation: botocore 1.31.21
Uninstalling botocore-1.31.21:
Successfully uninstalled botocore-1.31.21
Attempting uninstall: s3transfer
Found existing installation: s3transfer 0.6.2
Uninstalling s3transfer-0.6.2:
Successfully uninstalled s3transfer-0.6.2
Attempting uninstall: boto3
Found existing installation: boto3 1.28.21
Uninstalling boto3-1.28.21:
Successfully uninstalled boto3-1.28.21
Attempting uninstall: awscli
Found existing installation: awscli 1.29.21
Uninstalling awscli-1.29.21:
Successfully uninstalled awscli-1.29.21
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
spyder 5.3.3 requires pyqt5<5.16, which is not installed.
spyder 5.3.3 requires pyqtwebengine<5.16, which is not installed.
distributed 2022.7.0 requires tornado<6.2,>=6.0.3, but you have tornado 6.3.2 which is incompatible.
jupyterlab 3.4.4 requires jupyter-server~=1.16, but you have jupyter-server 2.7.0 which is incompatible.
jupyterlab-server 2.10.3 requires jupyter-server~=1.4, but you have jupyter-server 2.7.0 which is incompatible.
notebook 6.5.5 requires jupyter-client<8,>=5.3.4, but you have jupyter-client 8.3.0 which is incompatible.
notebook 6.5.5 requires pyzmq<25,>=17, but you have pyzmq 25.1.0 which is incompatible.
panel 0.13.1 requires bokeh<2.5.0,>=2.4.0, but you have bokeh 3.2.1 which is incompatible.
pyasn1-modules 0.2.8 requires pyasn1<0.5.0,>=0.4.6, but you have pyasn1 0.5.0 which is incompatible.
sagemaker-datawrangler 0.4.3 requires sagemaker-data-insights==0.4.0, but you have sagemaker-data-insights 0.3.3 which is incompatible.
spyder 5.3.3 requires ipython<8.0.0,>=7.31.1, but you have ipython 8.14.0 which is incompatible.
spyder 5.3.3 requires pylint<3.0,>=2.5.0, but you have pylint 3.0.0a6 which is incompatible.
spyder-kernels 2.3.3 requires ipython<8,>=7.31.1; python_version >= "3", but you have ipython 8.14.0 which is incompatible.
spyder-kernels 2.3.3 requires jupyter-client<8,>=7.3.4; python_version >= "3", but you have jupyter-client 8.3.0 which is incompatible.

Successfully installed PyYAML-6.0.1 awscli-1.29.21 boto3-1.28.21 botocore-1.31.21 colorama-0.4.4 docutils-0.16 jmespath-1.0.1 pyasn1-0.5.0 python-dateutil-2.8.2 rsa-4.7.2 s3transfer-0.6.2 six-1.16.0 urllib3-1.26.16
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
Note: you may need to restart the kernel to use updated packages.
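Despite the ERROR block above, the log ends with "Successfully installed ... boto3-1.28.21 botocore-1.31.21 ...", so the wheels did install; the resolver complaints concern unrelated preinstalled packages (spyder, jupyterlab, notebook, and so on). After restarting the kernel, a quick sanity check might be:

import boto3
import botocore

# Expect the versions from the workshop wheels shown in the log above.
print(boto3.__version__)     # 1.28.21
print(botocore.__version__)  # 1.31.21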
