langchain-chatglm3's Introduction

Intelligent-Image-Matting-System

1. System Features:

This project implements an intelligent image matting system using salient object detection. The detector is PVTANet, an asymmetric RGB-D salient object detection network of our own design, built on the Pyramid Vision Transformer (PVT). The network is briefly introduced below.

2. System Pipeline:

The matting code lives in the demo.py file.

The script, written in Python with the OpenCV library, composites two images: an original RGB image and an alpha image that encodes per-pixel transparency.

We selected several images covering different scenes. The original RGB images are in the /original_images folder; the alpha images, which are the saliency maps predicted by our detection network (PVTANet), are in the /alpha_images folder.

In an alpha image, each pixel carries an extra channel, commonly called the alpha channel, that records its transparency level. Alpha values typically range from 0 to 255, where 0 means fully transparent, 255 means fully opaque, and values in between give intermediate levels of transparency.
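The 0-to-255 semantics can be illustrated with the standard "over" alpha blend of a foreground pixel onto a background pixel (a minimal NumPy sketch of the textbook formula, not code from demo.py):

```python
import numpy as np

# A white foreground pixel and a black background pixel.
fg = np.array([255.0, 255.0, 255.0])
bg = np.array([0.0, 0.0, 0.0])

def composite(fg, bg, alpha):
    """Standard 'over' blend: alpha=0 -> background, alpha=255 -> foreground."""
    a = alpha / 255.0
    return a * fg + (1.0 - a) * bg

print(composite(fg, bg, 0))    # fully transparent: the background shows through
print(composite(fg, bg, 255))  # fully opaque: the foreground covers the background
print(composite(fg, bg, 128))  # partial transparency: a blend of the two
```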

The main matting pipeline is:

  1. Read the original RGB image with cv2.imread(), and read the alpha image in grayscale mode.
  2. Get the dimensions (height, width, channels) of the RGB image, and create an all-zero array img3 to hold the composite, the same size as the RGB image but with 4 channels.
  3. Copy the RGB image's pixel values into the first three channels of img3.
  4. Copy the alpha image's pixel values into the fourth channel of img3, turning img3 into an RGBA image carrying transparency information.
  5. Save the composited image to the target path with cv2.imwrite().
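The five steps above can be sketched as follows. Synthetic arrays stand in for the cv2.imread/cv2.imwrite calls so the snippet is self-contained, and the file paths in the comments are illustrative, not the repository's actual paths:

```python
import numpy as np

# In demo.py the inputs come from disk, e.g. (paths illustrative):
#   img   = cv2.imread('original_images/example.jpg')                      # H x W x 3
#   alpha = cv2.imread('alpha_images/example.png', cv2.IMREAD_GRAYSCALE)   # H x W
# Here we fabricate equivalent arrays so the snippet runs standalone.
h, w = 4, 5
img = np.random.randint(0, 256, (h, w, 3), dtype=np.uint8)
alpha = np.random.randint(0, 256, (h, w), dtype=np.uint8)

# Step 2: an all-zero H x W x 4 buffer for the composite.
img3 = np.zeros((h, w, 4), dtype=np.uint8)
# Step 3: copy the color channels.
img3[:, :, :3] = img
# Step 4: copy the predicted saliency map into the fourth (alpha) channel.
img3[:, :, 3] = alpha
# Step 5: cv2.imwrite('cutout_images/example.png', img3) would save the RGBA result.
```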

The matting results produced by our network are shown below; full results are in the /cutout_images folder.

[example matting result image]

As this figure shows, our network cleanly extracts the salient object from its complex surroundings.

3. A Brief Introduction to PVTANet

PVTANet is our proposed asymmetric salient object detection network based on the Pyramid Vision Transformer (PVT). We observed that mainstream symmetric architectures process RGB and depth features identically, ignoring the inherent differences between the two modalities; this hampers effective fusion and leads to information loss and degraded performance.

To better extract effective features from each modality, we use separate encoders: PVT extracts features from the RGB image, while a lightweight depth module of our own design extracts depth features. This module strengthens the understanding and use of depth information through feature interaction and a spatial self-attention mechanism.
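A minimal sketch of spatial self-attention over a depth feature map, in NumPy. This is purely illustrative: the actual module's projections, dimensions, and layer layout are not specified in this README, and a real implementation would use learned Q/K/V projections rather than the raw features:

```python
import numpy as np

def spatial_self_attention(feat):
    """feat: (N, C) array of N spatial positions with C channels each.
    Here Q = K = V = feat; a trained module would use learned projections."""
    d = feat.shape[1]
    scores = feat @ feat.T / np.sqrt(d)          # (N, N) position-to-position affinity
    scores -= scores.max(axis=1, keepdims=True)  # subtract row max for numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)      # softmax over spatial positions
    return attn @ feat                           # re-weighted depth features

depth_feat = np.random.randn(6, 8)   # 6 spatial positions, 8 channels
out = spatial_self_attention(depth_feat)
```

Each output position is a weighted mix of all positions, which is what lets the module relate spatially distant depth cues.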

To fuse RGB and depth features more effectively, we propose a flow-gradient fusion module that merges features from different levels progressively while preserving detail. We also introduce a depth attention module to ensure that depth features effectively guide the RGB features, fully exploiting the geometric cues in the depth data. The PVTANet architecture is shown below:

图4-1.png
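The depth-attention idea, with depth features gating the RGB features level by level, can be sketched as below. This is a hedged illustration only: the actual flow-gradient fusion module is more elaborate, and for simplicity this sketch assumes all levels share one spatial size (a real network would upsample between levels):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def depth_guided_fusion(rgb_feats, depth_feats):
    """Fuse per-level feature maps: a sigmoid map derived from the depth
    features gates the RGB features (depth attention), and the gated maps
    are accumulated progressively across levels."""
    fused = None
    for rgb, depth in zip(rgb_feats, depth_feats):       # coarse -> fine levels
        gated = rgb * sigmoid(depth)                     # depth attention gates RGB
        fused = gated if fused is None else fused + gated  # progressive merge
    return fused

levels_rgb = [np.random.randn(4, 4) for _ in range(3)]
levels_depth = [np.random.randn(4, 4) for _ in range(3)]
out = depth_guided_fusion(levels_rgb, levels_depth)
```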

During training we use PVT v2-b2 as the backbone and initialize it with its pretrained weights; momentum, weight decay, batch size, and learning rate are set to 0.9, 0.0005, 6, and 1e-10, respectively. The network is implemented in the PyTorch deep learning framework and accelerated on a single NVIDIA GeForce RTX 3080 GPU. The quantitative P-R curves on the six test datasets: P-R.png. Qualitative comparisons:

图4-6.png

langchain-chatglm3's People

Contributors

struggle1999

langchain-chatglm3's Issues

Deployment question

What do service.config and service.chatglm_service in the code refer to?

Error when running locally

raise ValueError(
ValueError: Environment variable OCR_AGENT must be set to an existing OCR agent module, not unstructured.partition.utils.ocr_models.tesseract_ocr.OCRAgentTesseract.

After starting the model, get_knowledeg_based_answer (knowledge-base answering) succeeds only once; the second call raises an error

ValidationError                           Traceback (most recent call last)
Cell In[2], line 7
      4 print('\n#############################################\n')
      6 application.knowledge_service.init_knowledge_base()
----> 7 result2 = application.get_knowledeg_based_answer('冠心病是什么原因引起的?可以吃什么药?')
      8 print('\n############################################\n')
      9 print('\nresult of knowledge base:\n')

Cell In[1], line 77, in LangChainApplication.get_knowledeg_based_answer(self, query, history_len, temperature, top_p, top_k, chat_history)
     74 knowledge_chain.return_source_documents = True
     76 #传入问题内容进行查询
---> 77 result = knowledge_chain({"query":query})
     78 return result

File /usr/local/miniconda3/envs/langchain/lib/python3.11/site-packages/langchain_core/_api/deprecation.py:148, in deprecated.<locals>.deprecate.<locals>.warning_emitting_wrapper(*args, **kwargs)
    146     warned = True
    147     emit_warning()
--> 148 return wrapped(*args, **kwargs)

File /usr/local/miniconda3/envs/langchain/lib/python3.11/site-packages/langchain/chains/base.py:378, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
    346 """Execute the chain.
    347 
    348 Args:
   (...)
    369         `Chain.output_keys`.
    370 """
    371 config = {
    372     "callbacks": callbacks,
    373     "tags": tags,
    374     "metadata": metadata,
    375     "run_name": run_name,
    376 }
--> 378 return self.invoke(
    379     inputs,
    380     cast(RunnableConfig, {k: v for k, v in config.items() if v is not None}),
    381     return_only_outputs=return_only_outputs,
    382     include_run_info=include_run_info,
    383 )

File /usr/local/miniconda3/envs/langchain/lib/python3.11/site-packages/langchain/chains/base.py:163, in Chain.invoke(self, input, config, **kwargs)
    161 except BaseException as e:
    162     run_manager.on_chain_error(e)
--> 163     raise e
    164 run_manager.on_chain_end(outputs)
    166 if include_run_info:

File /usr/local/miniconda3/envs/langchain/lib/python3.11/site-packages/langchain/chains/base.py:153, in Chain.invoke(self, input, config, **kwargs)
    150 try:
    151     self._validate_inputs(inputs)
    152     outputs = (
--> 153         self._call(inputs, run_manager=run_manager)
    154         if new_arg_supported
    155         else self._call(inputs)
    156     )
    158     final_outputs: Dict[str, Any] = self.prep_outputs(
    159         inputs, outputs, return_only_outputs
    160     )
    161 except BaseException as e:

File /usr/local/miniconda3/envs/langchain/lib/python3.11/site-packages/langchain/chains/retrieval_qa/base.py:145, in BaseRetrievalQA._call(self, inputs, run_manager)
    143 else:
    144     docs = self._get_docs(question)  # type: ignore[call-arg]
--> 145 answer = self.combine_documents_chain.run(
    146     input_documents=docs, question=question, callbacks=_run_manager.get_child()
    147 )
    149 if self.return_source_documents:
    150     return {self.output_key: answer, "source_documents": docs}

File /usr/local/miniconda3/envs/langchain/lib/python3.11/site-packages/langchain_core/_api/deprecation.py:148, in deprecated.<locals>.deprecate.<locals>.warning_emitting_wrapper(*args, **kwargs)
    146     warned = True
    147     emit_warning()
--> 148 return wrapped(*args, **kwargs)

File /usr/local/miniconda3/envs/langchain/lib/python3.11/site-packages/langchain/chains/base.py:600, in Chain.run(self, callbacks, tags, metadata, *args, **kwargs)
    595     return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
    596         _output_key
    597     ]
    599 if kwargs and not args:
--> 600     return self(kwargs, callbacks=callbacks, tags=tags, metadata=metadata)[
    601         _output_key
    602     ]
    604 if not kwargs and not args:
    605     raise ValueError(
    606         "`run` supported with either positional arguments or keyword arguments,"
    607         " but none were provided."
    608     )

File /usr/local/miniconda3/envs/langchain/lib/python3.11/site-packages/langchain_core/_api/deprecation.py:148, in deprecated.<locals>.deprecate.<locals>.warning_emitting_wrapper(*args, **kwargs)
    146     warned = True
    147     emit_warning()
--> 148 return wrapped(*args, **kwargs)

File /usr/local/miniconda3/envs/langchain/lib/python3.11/site-packages/langchain/chains/base.py:378, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
    346 """Execute the chain.
    347 
    348 Args:
   (...)
    369         `Chain.output_keys`.
    370 """
    371 config = {
    372     "callbacks": callbacks,
    373     "tags": tags,
    374     "metadata": metadata,
    375     "run_name": run_name,
    376 }
--> 378 return self.invoke(
    379     inputs,
    380     cast(RunnableConfig, {k: v for k, v in config.items() if v is not None}),
    381     return_only_outputs=return_only_outputs,
    382     include_run_info=include_run_info,
    383 )

File /usr/local/miniconda3/envs/langchain/lib/python3.11/site-packages/langchain/chains/base.py:163, in Chain.invoke(self, input, config, **kwargs)
    161 except BaseException as e:
    162     run_manager.on_chain_error(e)
--> 163     raise e
    164 run_manager.on_chain_end(outputs)
    166 if include_run_info:

File /usr/local/miniconda3/envs/langchain/lib/python3.11/site-packages/langchain/chains/base.py:153, in Chain.invoke(self, input, config, **kwargs)
    150 try:
    151     self._validate_inputs(inputs)
    152     outputs = (
--> 153         self._call(inputs, run_manager=run_manager)
    154         if new_arg_supported
    155         else self._call(inputs)
    156     )
    158     final_outputs: Dict[str, Any] = self.prep_outputs(
    159         inputs, outputs, return_only_outputs
    160     )
    161 except BaseException as e:

File /usr/local/miniconda3/envs/langchain/lib/python3.11/site-packages/langchain/chains/combine_documents/base.py:137, in BaseCombineDocumentsChain._call(self, inputs, run_manager)
    135 # Other keys are assumed to be needed for LLM prediction
    136 other_keys = {k: v for k, v in inputs.items() if k != self.input_key}
--> 137 output, extra_return_dict = self.combine_docs(
    138     docs, callbacks=_run_manager.get_child(), **other_keys
    139 )
    140 extra_return_dict[self.output_key] = output
    141 return extra_return_dict

File /usr/local/miniconda3/envs/langchain/lib/python3.11/site-packages/langchain/chains/combine_documents/stuff.py:244, in StuffDocumentsChain.combine_docs(self, docs, callbacks, **kwargs)
    242 inputs = self._get_inputs(docs, **kwargs)
    243 # Call predict on the LLM.
--> 244 return self.llm_chain.predict(callbacks=callbacks, **inputs), {}

File /usr/local/miniconda3/envs/langchain/lib/python3.11/site-packages/langchain/chains/llm.py:316, in LLMChain.predict(self, callbacks, **kwargs)
    301 def predict(self, callbacks: Callbacks = None, **kwargs: Any) -> str:
    302     """Format prompt with kwargs and pass to LLM.
    303 
    304     Args:
   (...)
    314             completion = llm.predict(adjective="funny")
    315     """
--> 316     return self(kwargs, callbacks=callbacks)[self.output_key]

File /usr/local/miniconda3/envs/langchain/lib/python3.11/site-packages/langchain_core/_api/deprecation.py:148, in deprecated.<locals>.deprecate.<locals>.warning_emitting_wrapper(*args, **kwargs)
    146     warned = True
    147     emit_warning()
--> 148 return wrapped(*args, **kwargs)

File /usr/local/miniconda3/envs/langchain/lib/python3.11/site-packages/langchain/chains/base.py:378, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
    346 """Execute the chain.
    347 
    348 Args:
   (...)
    369         `Chain.output_keys`.
    370 """
    371 config = {
    372     "callbacks": callbacks,
    373     "tags": tags,
    374     "metadata": metadata,
    375     "run_name": run_name,
    376 }
--> 378 return self.invoke(
    379     inputs,
    380     cast(RunnableConfig, {k: v for k, v in config.items() if v is not None}),
    381     return_only_outputs=return_only_outputs,
    382     include_run_info=include_run_info,
    383 )

File /usr/local/miniconda3/envs/langchain/lib/python3.11/site-packages/langchain/chains/base.py:163, in Chain.invoke(self, input, config, **kwargs)
    161 except BaseException as e:
    162     run_manager.on_chain_error(e)
--> 163     raise e
    164 run_manager.on_chain_end(outputs)
    166 if include_run_info:

File /usr/local/miniconda3/envs/langchain/lib/python3.11/site-packages/langchain/chains/base.py:153, in Chain.invoke(self, input, config, **kwargs)
    150 try:
    151     self._validate_inputs(inputs)
    152     outputs = (
--> 153         self._call(inputs, run_manager=run_manager)
    154         if new_arg_supported
    155         else self._call(inputs)
    156     )
    158     final_outputs: Dict[str, Any] = self.prep_outputs(
    159         inputs, outputs, return_only_outputs
    160     )
    161 except BaseException as e:

File /usr/local/miniconda3/envs/langchain/lib/python3.11/site-packages/langchain/chains/llm.py:126, in LLMChain._call(self, inputs, run_manager)
    121 def _call(
    122     self,
    123     inputs: Dict[str, Any],
    124     run_manager: Optional[CallbackManagerForChainRun] = None,
    125 ) -> Dict[str, str]:
--> 126     response = self.generate([inputs], run_manager=run_manager)
    127     return self.create_outputs(response)[0]

File /usr/local/miniconda3/envs/langchain/lib/python3.11/site-packages/langchain/chains/llm.py:138, in LLMChain.generate(self, input_list, run_manager)
    136 callbacks = run_manager.get_child() if run_manager else None
    137 if isinstance(self.llm, BaseLanguageModel):
--> 138     return self.llm.generate_prompt(
    139         prompts,
    140         stop,
    141         callbacks=callbacks,
    142         **self.llm_kwargs,
    143     )
    144 else:
    145     results = self.llm.bind(stop=stop, **self.llm_kwargs).batch(
    146         cast(List, prompts), {"callbacks": callbacks}
    147     )

File /usr/local/miniconda3/envs/langchain/lib/python3.11/site-packages/langchain_core/language_models/llms.py:633, in BaseLLM.generate_prompt(self, prompts, stop, callbacks, **kwargs)
    625 def generate_prompt(
    626     self,
    627     prompts: List[PromptValue],
   (...)
    630     **kwargs: Any,
    631 ) -> LLMResult:
    632     prompt_strings = [p.to_string() for p in prompts]
--> 633     return self.generate(prompt_strings, stop=stop, callbacks=callbacks, **kwargs)

File /usr/local/miniconda3/envs/langchain/lib/python3.11/site-packages/langchain_core/language_models/llms.py:803, in BaseLLM.generate(self, prompts, stop, callbacks, tags, metadata, run_name, run_id, **kwargs)
    788 if (self.cache is None and get_llm_cache() is None) or self.cache is False:
    789     run_managers = [
    790         callback_manager.on_llm_start(
    791             dumpd(self),
   (...)
    801         )
    802     ]
--> 803     output = self._generate_helper(
    804         prompts, stop, run_managers, bool(new_arg_supported), **kwargs
    805     )
    806     return output
    807 if len(missing_prompts) > 0:

File /usr/local/miniconda3/envs/langchain/lib/python3.11/site-packages/langchain_core/language_models/llms.py:670, in BaseLLM._generate_helper(self, prompts, stop, run_managers, new_arg_supported, **kwargs)
    668     for run_manager in run_managers:
    669         run_manager.on_llm_error(e, response=LLMResult(generations=[]))
--> 670     raise e
    671 flattened_outputs = output.flatten()
    672 for manager, flattened_output in zip(run_managers, flattened_outputs):

File /usr/local/miniconda3/envs/langchain/lib/python3.11/site-packages/langchain_core/language_models/llms.py:657, in BaseLLM._generate_helper(self, prompts, stop, run_managers, new_arg_supported, **kwargs)
    647 def _generate_helper(
    648     self,
    649     prompts: List[str],
   (...)
    653     **kwargs: Any,
    654 ) -> LLMResult:
    655     try:
    656         output = (
--> 657             self._generate(
    658                 prompts,
    659                 stop=stop,
    660                 # TODO: support multiple run managers
    661                 run_manager=run_managers[0] if run_managers else None,
    662                 **kwargs,
    663             )
    664             if new_arg_supported
    665             else self._generate(prompts, stop=stop)
    666         )
    667     except BaseException as e:
    668         for run_manager in run_managers:

File /usr/local/miniconda3/envs/langchain/lib/python3.11/site-packages/langchain_core/language_models/llms.py:1321, in LLM._generate(self, prompts, stop, run_manager, **kwargs)
   1315 for prompt in prompts:
   1316     text = (
   1317         self._call(prompt, stop=stop, run_manager=run_manager, **kwargs)
   1318         if new_arg_supported
   1319         else self._call(prompt, stop=stop, **kwargs)
   1320     )
-> 1321     generations.append([Generation(text=text)])
   1322 return LLMResult(generations=generations)

File /usr/local/miniconda3/envs/langchain/lib/python3.11/site-packages/pydantic/v1/main.py:341, in BaseModel.__init__(__pydantic_self__, **data)
    339 values, fields_set, validation_error = validate_model(__pydantic_self__.__class__, data)
    340 if validation_error:
--> 341     raise validation_error
    342 try:
    343     object_setattr(__pydantic_self__, '__dict__', values)

ValidationError: 1 validation error for Generation
text
  str type expected (type=type_error.str)
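The traceback ends in `Generation(text=text)` rejecting a non-string, so a likely cause is the custom LLM's `_call` returning something other than `str` on the second invocation; ChatGLM-style `chat()` APIs, for instance, often return a `(response, history)` tuple. A hypothetical guard (the helper name is ours, not from the repo):

```python
def ensure_str_output(raw):
    """Coerce a model's raw reply to the str that langchain's Generation expects."""
    if raw is None:
        return ""
    # Some chat APIs return a (response, history) tuple; keep only the response.
    if isinstance(raw, tuple) and raw:
        raw = raw[0]
    return raw if isinstance(raw, str) else str(raw)

# Inside a custom LLM's _call, one would return ensure_str_output(model_reply)
# instead of the raw reply, so the second invocation cannot hand langchain
# a tuple or None.
```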
