ImportError:
XLMRobertaConverter requires the protobuf library but it was not found in your environment. Checkout the instructions on the
installation page of its repo: https://github.com/protocolbuffers/protobuf/tree/master/python#installation and follow the ones
that match your environment.
INFO: Started server process [1]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:80 (Press CTRL+C to quit)
The `xla_device` argument has been deprecated in v4.4.0 of Transformers. It is ignored and you can safely remove it from your `config.json` file.
--- Running on CPU. If you're facing performance issues, you should consider switching to a CUDA device
INFO: 172.21.0.21:43244 - "GET /recommend HTTP/1.1" 200 OK
INFO: 172.21.0.21:43908 - "GET /recommend HTTP/1.1" 200 OK
INFO: 172.21.0.21:44016 - "POST /zero-shot/sample-records HTTP/1.1" 200 OK
INFO: 172.21.0.21:44074 - "POST /zero-shot/sample-records HTTP/1.1" 500 Internal Server Error
Some weights of the model checkpoint at joeddav/xlm-roberta-large-xnli were not used when initializing XLMRobertaForSequenceClassification: ['roberta.pooler.dense.weight', 'roberta.pooler.dense.bias']
- This IS expected if you are initializing XLMRobertaForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing XLMRobertaForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/uvicorn/protocols/http/h11_impl.py", line 366, in run_asgi
result = await app(self.scope, self.receive, self.send)
File "/usr/local/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py", line 75, in __call__
return await self.app(scope, receive, send)
File "/usr/local/lib/python3.10/site-packages/fastapi/applications.py", line 261, in __call__
await super().__call__(scope, receive, send)
File "/usr/local/lib/python3.10/site-packages/starlette/applications.py", line 119, in __call__
await self.middleware_stack(scope, receive, send)
File "/usr/local/lib/python3.10/site-packages/starlette/middleware/errors.py", line 181, in __call__
raise exc
File "/usr/local/lib/python3.10/site-packages/starlette/middleware/errors.py", line 159, in __call__
await self.app(scope, receive, _send)
File "/usr/local/lib/python3.10/site-packages/starlette/exceptions.py", line 87, in __call__
raise exc
File "/usr/local/lib/python3.10/site-packages/starlette/exceptions.py", line 76, in __call__
await self.app(scope, receive, sender)
File "/usr/local/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 21, in __call__
raise e
File "/usr/local/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 18, in __call__
await self.app(scope, receive, send)
File "/usr/local/lib/python3.10/site-packages/starlette/routing.py", line 659, in __call__
await route.handle(scope, receive, send)
File "/usr/local/lib/python3.10/site-packages/starlette/routing.py", line 259, in handle
await self.app(scope, receive, send)
File "/usr/local/lib/python3.10/site-packages/starlette/routing.py", line 61, in app
response = await func(request)
File "/usr/local/lib/python3.10/site-packages/fastapi/routing.py", line 227, in app
raw_response = await run_endpoint_function(
File "/usr/local/lib/python3.10/site-packages/fastapi/routing.py", line 162, in run_endpoint_function
return await run_in_threadpool(dependant.call, **values)
File "/usr/local/lib/python3.10/site-packages/starlette/concurrency.py", line 45, in run_in_threadpool
return await anyio.to_thread.run_sync(func, *args)
File "/usr/local/lib/python3.10/site-packages/anyio/to_thread.py", line 28, in run_sync
return await get_asynclib().run_sync_in_worker_thread(func, *args, cancellable=cancellable,
File "/usr/local/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 818, in run_sync_in_worker_thread
return await future
File "/usr/local/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 754, in run
result = context.run(func, *args)
File "/program/./app.py", line 45, in zero_shot_text
return_values = util.get_zero_shot_10_records(
File "/program/./util/util.py", line 152, in get_zero_shot_10_records
result = get_zero_shot_labels(
File "/program/./util/util.py", line 120, in get_zero_shot_labels
result = get_labels_for_text(
File "/program/./model_integration/controller.py", line 16, in get_labels_for_text
return generic.get_labels_for_text(
File "/program/./model_integration/models/generic.py", line 20, in get_labels_for_text
classifier = __get_classifier_with_web_socket_update(
File "/program/./model_integration/models/generic.py", line 63, in __get_classifier_with_web_socket_update
classifier = __get_classifier(config)
File "/program/./model_integration/models/generic.py", line 46, in __get_classifier
__classifier[config] = pipeline("zero-shot-classification", model=config)
File "/usr/local/lib/python3.10/site-packages/transformers/pipelines/__init__.py", line 598, in pipeline
tokenizer = AutoTokenizer.from_pretrained(
File "/usr/local/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py", line 546, in from_pretrained
return tokenizer_class_fast.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
File "/usr/local/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 1780, in from_pretrained
return cls._from_pretrained(
File "/usr/local/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 1915, in _from_pretrained
tokenizer = cls(*init_inputs, **init_kwargs)
File "/usr/local/lib/python3.10/site-packages/transformers/models/xlm_roberta/tokenization_xlm_roberta_fast.py", line 139, in __init__
super().__init__(
File "/usr/local/lib/python3.10/site-packages/transformers/tokenization_utils_fast.py", line 112, in __init__
fast_tokenizer = convert_slow_tokenizer(slow_tokenizer)
File "/usr/local/lib/python3.10/site-packages/transformers/convert_slow_tokenizer.py", line 1033, in convert_slow_tokenizer
return converter_class(transformer_tokenizer).converted()
File "/usr/local/lib/python3.10/site-packages/transformers/convert_slow_tokenizer.py", line 421, in __init__
requires_backends(self, "protobuf")
File "/usr/local/lib/python3.10/site-packages/transformers/utils/import_utils.py", line 761, in requires_backends
raise ImportError("".join(failed))
ImportError:
XLMRobertaConverter requires the protobuf library but it was not found in your environment. Checkout the instructions on the
installation page of its repo: https://github.com/protocolbuffers/protobuf/tree/master/python#installation and follow the ones
that match your environment.
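Every 500 on `/zero-shot/sample-records` above traces back to the same missing dependency: `transformers` needs `google.protobuf` to convert the slow XLM-R tokenizer to a fast one. A minimal startup guard (a sketch only; the helper name is an assumption, not part of this service) would surface the problem at boot instead of at the first request:

```python
import importlib.util

def protobuf_available() -> bool:
    # Mirrors the spirit of transformers' requires_backends() check:
    # is the google.protobuf package importable in this environment?
    return importlib.util.find_spec("google.protobuf") is not None

# Fail loudly at startup rather than letting the pipeline() call in
# generic.py raise deep inside tokenizer conversion on the first request.
if not protobuf_available():
    print("protobuf missing; install it in the image, e.g.: pip install protobuf")
```

In a containerized deployment like this one, the usual remedy is adding `protobuf` to the image's requirements before building; note that some older `transformers` releases are known to be incompatible with protobuf 4.x, so a pin such as `protobuf<=3.20.3` may be needed (version bound is an assumption, check your `transformers` release notes).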
INFO:     172.21.0.21:44864 - "GET /recommend HTTP/1.1" 200 OK
INFO:     172.21.0.21:44884 - "POST /zero-shot/sample-records HTTP/1.1" 500 Internal Server Error