
Error while trying to load nlu.load('embed_sentence.bert') #153

Open
arvindacodes opened this issue Oct 14, 2022 · 1 comment
@arvindacodes
I am trying to create a sentence similarity model using Spark NLP, but I am getting the two errors below.
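Once `embed_sentence.bert` loads, sentence similarity is typically computed as cosine similarity between the two sentence embedding vectors. A library-free sketch of that comparison step (the vectors here are toy placeholders, not real BERT outputs):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy placeholder vectors standing in for two sentence embeddings.
v1 = [0.1, 0.3, 0.5]
v2 = [0.1, 0.3, 0.5]
print(cosine_similarity(v1, v2))  # ≈ 1.0 for identical vectors
```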

sent_small_bert_L2_128 download started this may take some time.
Approximate size to download 16.1 MB
[OK!]

IllegalArgumentException Traceback (most recent call last)
File c:\users\ramesar2\appdata\local\programs\python\python38\lib\site-packages\nlu\pipe\component_resolution.py:276, in get_trained_component_for_nlp_model_ref(lang, nlu_ref, nlp_ref, license_type, model_configs)
274 if component.get_pretrained_model:
275 component = component.set_metadata(
--> 276 component.get_pretrained_model(nlp_ref, lang, model_bucket),
277 nlu_ref, nlp_ref, lang, False, license_type)
278 else:

File c:\users\ramesar2\appdata\local\programs\python\python38\lib\site-packages\nlu\components\embeddings\sentence_bert\BertSentenceEmbedding.py:13, in BertSentence.get_pretrained_model(name, language, bucket)
11 @staticmethod
12 def get_pretrained_model(name, language, bucket=None):
---> 13 return BertSentenceEmbeddings.pretrained(name,language,bucket)
14 .setInputCols('sentence')
15 .setOutputCol("sentence_embeddings")

File c:\users\ramesar2\appdata\local\programs\python\python38\lib\site-packages\sparknlp\annotator\embeddings\bert_sentence_embeddings.py:231, in BertSentenceEmbeddings.pretrained(name, lang, remote_loc)
230 from sparknlp.pretrained import ResourceDownloader
--> 231 return ResourceDownloader.downloadModel(BertSentenceEmbeddings, name, lang, remote_loc)

File c:\users\ramesar2\appdata\local\programs\python\python38\lib\site-packages\sparknlp\pretrained\resource_downloader.py:40, in ResourceDownloader.downloadModel(reader, name, language, remote_loc, j_dwn)
39 try:
---> 40 j_obj = _internal._DownloadModel(reader.name, name, language, remote_loc, j_dwn).apply()
41 except Py4JJavaError as e:

File c:\users\ramesar2\appdata\local\programs\python\python38\lib\site-packages\sparknlp\internal\__init__.py:317, in _DownloadModel.__init__(self, reader, name, language, remote_loc, validator)
316 def __init__(self, reader, name, language, remote_loc, validator):
--> 317 super(_DownloadModel, self).__init__("com.johnsnowlabs.nlp.pretrained." + validator + ".downloadModel", reader,
318 name, language, remote_loc)

File c:\users\ramesar2\appdata\local\programs\python\python38\lib\site-packages\sparknlp\internal\extended_java_wrapper.py:26, in ExtendedJavaWrapper.__init__(self, java_obj, *args)
25 self.sc = SparkContext._active_spark_context
---> 26 self._java_obj = self.new_java_obj(java_obj, *args)
27 self.java_obj = self._java_obj

File c:\users\ramesar2\appdata\local\programs\python\python38\lib\site-packages\sparknlp\internal\extended_java_wrapper.py:36, in ExtendedJavaWrapper.new_java_obj(self, java_class, *args)
35 def new_java_obj(self, java_class, *args):
---> 36 return self._new_java_obj(java_class, *args)

File c:\users\ramesar2\appdata\local\programs\python\python38\lib\site-packages\pyspark\ml\wrapper.py:69, in JavaWrapper._new_java_obj(java_class, *args)
68 java_args = [_py2java(sc, arg) for arg in args]
---> 69 return java_obj(*java_args)

File c:\users\ramesar2\appdata\local\programs\python\python38\lib\site-packages\py4j\java_gateway.py:1304, in JavaMember.__call__(self, *args)
1303 answer = self.gateway_client.send_command(command)
-> 1304 return_value = get_return_value(
1305 answer, self.gateway_client, self.target_id, self.name)
1307 for temp_arg in temp_args:

File c:\users\ramesar2\appdata\local\programs\python\python38\lib\site-packages\pyspark\sql\utils.py:134, in capture_sql_exception.<locals>.deco(*a, **kw)
131 if not isinstance(converted, UnknownException):
132 # Hide where the exception came from that shows a non-Pythonic
133 # JVM exception message.
--> 134 raise_from(converted)
135 else:

File <string>:3, in raise_from(e)

IllegalArgumentException: requirement failed: Was not found appropriate resource to download for request: ResourceRequest(sent_small_bert_L2_128,Some(en),public/models,4.0.2,3.3.0) with downloader: com.johnsnowlabs.nlp.pretrained.S3ResourceDownloader@c7c973f

During handling of the above exception, another exception occurred:

ValueError Traceback (most recent call last)
File c:\users\ramesar2\appdata\local\programs\python\python38\lib\site-packages\nlu\__init__.py:234, in load(request, path, verbose, gpu, streamlit_caching, m1_chip)
233 continue
--> 234 nlu_component = nlu_ref_to_component(nlu_ref)
235 # if we get a list of components, then the NLU reference is a pipeline, we do not need to check order

File c:\users\ramesar2\appdata\local\programs\python\python38\lib\site-packages\nlu\pipe\component_resolution.py:160, in nlu_ref_to_component(nlu_ref, detect_lang, authenticated)
159 else:
--> 160 resolved_component = get_trained_component_for_nlp_model_ref(lang, nlu_ref, nlp_ref, license_type, model_params)
162 if resolved_component is None:

File c:\users\ramesar2\appdata\local\programs\python\python38\lib\site-packages\nlu\pipe\component_resolution.py:287, in get_trained_component_for_nlp_model_ref(lang, nlu_ref, nlp_ref, license_type, model_configs)
286 except Exception as e:
--> 287 raise ValueError(f'Failure making component, nlp_ref={nlp_ref}, nlu_ref={nlu_ref}, lang={lang}, \n err={e}')
289 return component

ValueError: Failure making component, nlp_ref=sent_small_bert_L2_128, nlu_ref=embed_sentence.bert, lang=en,
err=requirement failed: Was not found appropriate resource to download for request: ResourceRequest(sent_small_bert_L2_128,Some(en),public/models,4.0.2,3.3.0) with downloader: com.johnsnowlabs.nlp.pretrained.S3ResourceDownloader@c7c973f

During handling of the above exception, another exception occurred:

Exception Traceback (most recent call last)
Cell In [16], line 2
1 import nlu
----> 2 pipe = nlu.load('embed_sentence.bert')
3 print("pipe",pipe)

File c:\users\ramesar2\appdata\local\programs\python\python38\lib\site-packages\nlu\__init__.py:249, in load(request, path, verbose, gpu, streamlit_caching, m1_chip)
247 print(e[1])
248 print(err)
--> 249 raise Exception(
250 f"Something went wrong during creating the Spark NLP model_anno_obj for your request = {request} Did you use a NLU Spell?")
251 # Complete Spark NLP Pipeline, which is defined as a DAG given by the starting Annotators
252 try:

Exception: Something went wrong during creating the Spark NLP model_anno_obj for your request = embed_sentence.bert Did you use a NLU Spell?
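For context, the `ResourceRequest(...)` string in the error above appears to encode what the downloader looked for; the last two fields are most likely the Spark NLP and Apache Spark versions. A minimal stdlib sketch of pulling those fields apart (the field order is an assumption based on the message format, not documented API):

```python
# Parse the ResourceRequest string from the error message above.
# Assumed field order: model name, language, bucket folder,
# Spark NLP version, Apache Spark version.
req = "ResourceRequest(sent_small_bert_L2_128,Some(en),public/models,4.0.2,3.3.0)"
inner = req[len("ResourceRequest("):-1]
name, lang, folder, sparknlp_ver, spark_ver = inner.split(",")
print(name)          # sent_small_bert_L2_128
print(sparknlp_ver)  # 4.0.2 (Spark NLP)
print(spark_ver)     # 3.3.0 (Apache Spark)
```

If that version pair does not match an installed pyspark / spark-nlp combination for which the model was published, the downloader finds no matching artifact on S3 and raises exactly this "Was not found appropriate resource" error, so checking the installed versions is a reasonable first step.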

@maziyarpanahi maziyarpanahi transferred this issue from JohnSnowLabs/spark-nlp Oct 14, 2022
@maziyarpanahi (Member)

Transferring here since it's the nlu package. (Just in case: you need to follow all of these steps correctly in order to use Apache Spark on Windows: https://nlp.johnsnowlabs.com/docs/en/install#windows-support)
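On Windows, a common cause of Spark failures is a missing winutils.exe / HADOOP_HOME setup, which the linked guide walks through. As a rough pre-flight check before retrying `nlu.load` (the helper name and exact checks here are illustrative, not taken from the guide):

```python
import os

def missing_spark_windows_prereqs(env=os.environ):
    """Hypothetical sanity check: report environment variables a typical
    Spark-on-Windows setup expects before starting a session."""
    problems = []
    if not env.get("JAVA_HOME"):
        problems.append("JAVA_HOME is not set")
    hadoop_home = env.get("HADOOP_HOME")
    if not hadoop_home:
        problems.append("HADOOP_HOME is not set")
    elif not os.path.exists(os.path.join(hadoop_home, "bin", "winutils.exe")):
        problems.append("winutils.exe not found under %HADOOP_HOME%\\bin")
    return problems

print(missing_spark_windows_prereqs({}))
```

An empty return value only means the basic environment variables look plausible; the full guide covers further steps (matching winutils build, permissions) that a check like this cannot verify.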
