
lithops-cloud / lithops


A multi-cloud framework for big data analytics and embarrassingly parallel jobs that provides a universal API for building parallel applications in the cloud ☁️🚀

Home Page: http://lithops.cloud

License: Apache License 2.0

Python 98.51% Dockerfile 0.97% Slim 0.09% Shell 0.43%
big-data big-data-analytics cloud-computing data-processing distributed kubernetes multicloud multiprocessing object-storage parallel python serverless serverless-computing serverless-functions

lithops's People

Contributors

abourramouss, aitorarjona, ayalaraanan, bystepii, cohen-j-omer, danielbraun89, dependabot[bot], geizaguirre, gerardparis, gfinol, gilv, idoyehe, josepsampe, kikemolina3, lachlanstuart, macarronesc, mmeinhardt, mpneuber, omerb01, otrack, pablogs98, rabernat, richardscottoz, rigazilla, roca-pol, sadekjb, testing-alt, tkchafin, tomwhite, usamasource


lithops's Issues

Avoid using CLI tool for installation scripts

Installation scripts like pywren/deploy_pywren.sh and runtime/build_runtime.sh both use the bx wsk CLI tool to create an action. We would like to explore how this can be avoided by using direct Python code to create actions. Such code already exists in https://github.com/pywren/pywren-ibm-cloud/blob/master/pywren/pywren_ibm_cloud/cf_connector.py. We should see how to use this code instead of depending on the CLI tool.
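For illustration, a minimal sketch of creating an OpenWhisk blackbox action directly over the REST API with the requests library. Parameter names here are illustrative, not the exact cf_connector.py interface, and the api_key is assumed to be in the usual user:password form:

import base64
import requests

def create_action(endpoint, namespace, api_key, action_name, action_bin, image_name):
    # PUT the action to the OpenWhisk REST API, overwriting if it already exists
    url = '{}/api/v1/namespaces/{}/actions/{}?overwrite=True'.format(
        endpoint, namespace, action_name)
    payload = {'exec': {'kind': 'blackbox',
                        'image': image_name,
                        'code': base64.b64encode(action_bin).decode('utf-8'),
                        'binary': True}}
    user, password = api_key.split(':')
    resp = requests.put(url, json=payload, auth=(user, password))
    resp.raise_for_status()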

IBM COS SSL validation fail

Hi,

I am trying to use pywren-ibm with IBM Cloud Functions and IBM COS.
After setting up an IBM Cloud account and editing the ~/.pywren_config file, I am able to run ./deploy_runtime,
but when I try to execute python3 test/testpywren.py init, I get the following error:

Unable to create bucket: SSL validation failed for https://s3-api.us-geo.objectstorage.softlayer.net/pywrenbuck/__pywren.test/test0 [Errno 2] No such file or directory

I also tried with the endpoint https://s3.us.cloud-object-storage.appdomain.cloud but still got the same error. How should I fix this SSL issue? Thanks.

enable configuration to control create actions

deployutil.py uses hard-coded values for timeout and memory:

cf_client.create_action(runtime_name, memory=512, timeout=600000,
                        code=action_bin, kind='blackbox', image=image_name)

We need to provide a way to configure these values using configuration parameters, e.g. as sketched below.
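A minimal sketch of the idea, assuming hypothetical action_memory and action_timeout keys in the ibm_cf section of the config:

# defaults match today's hard-coded values (MB and ms respectively)
RUNTIME_MEMORY_DEFAULT = 512
RUNTIME_TIMEOUT_DEFAULT = 600000

memory = config['ibm_cf'].get('action_memory', RUNTIME_MEMORY_DEFAULT)
timeout = config['ibm_cf'].get('action_timeout', RUNTIME_TIMEOUT_DEFAULT)

cf_client.create_action(runtime_name, memory=memory, timeout=timeout,
                        code=action_bin, kind='blackbox', image=image_name)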

partitioner operation

When I try to split a text file from COS into partitions, I think PyWren divides the byte ranges wrongly. I found the following code at line 103 of partitioner.py:

if chunk_size is not None and obj_size > chunk_size:
    size = 0
    while size < obj_size:
        brange = (size, size+chunk_size+CHUNK_THRESHOLD)
        size += chunk_size
        partition = {}
        partition['map_func_args'] = entry
        partition['data_byte_range'] = brange
        partitions.append(partition)
        total_partitions = total_partitions + 1

So I understand that, in addition to chunk_size, another variable is added to the range: CHUNK_THRESHOLD. Is it necessary? Or should CHUNK_THRESHOLD perhaps be added to the size variable instead?
Because of this addition, I get duplicated text in different chunks, as illustrated below.
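A small standalone illustration of the overlap, with made-up numbers rather than the real defaults:

chunk_size = 100
CHUNK_THRESHOLD = 10
obj_size = 250

size = 0
while size < obj_size:
    print((size, size + chunk_size + CHUNK_THRESHOLD))
    size += chunk_size

# prints (0, 110), (100, 210), (200, 310): bytes 100-110 and 200-210
# fall into two ranges each, so unless the worker discards the overlap,
# the same text appears in two chunks.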

issues with map() method on COS bucket

I have a function

def my_map(bucket, key, data_stream, ibm_cos).

I need to run this function on every object in the bucket (with a prefix).
I tried:

input_data = 'mybucket/a/b/c'
pw = pywren.ibm_cf_executor(config=config, runtime='my run time')
pw.map(my_map, input_data)
results = pw.get_result()

DEBUG:pywren_ibm_cloud.cf_connector:Executor ID b00873ee-d75a Function 00000 - Activation ID: 6a47f9a728b645dd87f9a728b655dd6a - Time: 0.283 seconds
DEBUG:pywren_ibm_cloud.executor:Executor ID b00873ee-d75a Activation 00000 complete
DEBUG:pywren_ibm_cloud.executor:Executor ID b00873ee-d75a Invocation done: 0.387 seconds
DEBUG:pywren_ibm_cloud.wren:Executor ID b00873ee-d75a Getting results
WARNING:root:there was an error pickling. The original exception: An error occurred (NoSuchBucket) when calling the ListObjectsV2 operation: The specified bucket does not exist.
The pickling exception: __init__() missing 1 required positional argument: 'operation_name'

Errors running testpywren.py

I am getting errors when running python testpywren.py.

I am able to run python testpywren.py init and I get the following output:

Uploading test files...
Upload file: __pywren.test/test0 - SUCCESS
Upload file: __pywren.test/test1 - SUCCESS
Upload file: __pywren.test/test2 - SUCCESS
Upload file: __pywren.test/test3 - SUCCESS
Upload file: __pywren.test/test4 - SUCCESS
Upload file: __pywren.test/result - SUCCESS
ALL DONE

I can verify that these files have been created from the dashboard.

However, when I run python testpywren.py, I get the following:

IBM Cloud Functions init for Namespace: [email protected]_dev
IBM Cloud Functions init for Host: https://us-south.functions.cloud.ibm.com
IBM Cloud Functions init for Runtime: ibmfunctions/action-python-v3.7 - 256MB (Installing...)
The filename, directory name, or volume label syntax is incorrect.
EThe filename, directory name, or volume label syntax is incorrect.
EThe filename, directory name, or volume label syntax is incorrect.
EThe filename, directory name, or volume label syntax is incorrect.
EThe filename, directory name, or volume label syntax is incorrect.
EThe filename, directory name, or volume label syntax is incorrect.
EThe filename, directory name, or volume label syntax is incorrect.
EThe filename, directory name, or volume label syntax is incorrect.
EThe filename, directory name, or volume label syntax is incorrect.
EThe filename, directory name, or volume label syntax is incorrect.
EThe filename, directory name, or volume label syntax is incorrect.
EThe filename, directory name, or volume label syntax is incorrect.

I pretty much copied and pasted the config variables from https://cloud.ibm.com/openwhisk/learn/api-key.


I made sure that I prefixed https:// to the endpoint:
endpoint : https://us-south.functions.cloud.ibm.com

I was also able to run the "hello world" example from https://console.bluemix.net/docs/openwhisk/openwhisk_actions.html#creating-python-actions

I am not sure what I am doing wrong, other than having entered the wrong config variables. My config looks like:

pywren: 
    storage_bucket: spif-pywren-bucket

ibm_cf:
    # Obtain all values from https://cloud.ibm.com/openwhisk/learn/api-key
    endpoint    : https://us-south.functions.cloud.ibm.com
    namespace   : [email protected]_dev
    api_key     : copied_from_above_link
   
ibm_cos:
    # Region endpoint example: https://s3.us-east.cloud-object-storage.appdomain.cloud
    endpoint   : https://s3.us-east.cloud-object-storage.appdomain.cloud
    # this is the preferred authentication method for IBM COS
    api_key    : copied_from_cos_dashboard
    # alternatively, you may use the HMAC authentication method
    # access_key : <ACCESS_KEY>
    # secret_key : <SECRET_KEY>

Partitioner to support more data types

The partitioner currently supports CSV. We need to extend the mechanism to make it more generic and support other formats, such as Apache Parquet.

Need functional tests

We need to add the ability to execute full functional tests and to add various test cases. A good start would be to run whatever is located in the examples of the project. This should be optional, of course. Running the functional tests will require a configured COS and CF account.

Cancel invocations

We need to understand, enable if needed, and document how to cancel a job.
I assume that for remote invocations we can simply kill the action; for client-side invocations we can simply press Ctrl-C...

@JosepSampe any inputs from your side how to make it right?

functions' args pattern

We should consider defining a strict pattern for each external function in pywren, so that code like the one below can be understood properly:

def my_function(list):
    return sum(list)

pw = pywren_ibm_cloud.ibm_cf_executor()
pw.map(my_function, [[1, 2, 3], [1, 2, 3]])
result = pw.get_result()
print(result)

Running on Private OpenWhisk Cluster

Hi,

Just came across this project, nice work!
Is it possible to use pywren-ibm-cloud with my own OpenWhisk cluster? I imagine it would require some modification, but I just want to make sure it is possible before I dive deep into the repo.

Thanks!

Delete for temp data fails when number of objects exceeds 1000

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/pywren_ibm_cloud-1.0.1-py3.6.egg/pywren_ibm_cloud/storage/cleaner.py", line 33, in clean_bucket
    clean_os_bucket(bucket, prefix, internal_storage)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/pywren_ibm_cloud-1.0.1-py3.6.egg/pywren_ibm_cloud/storage/cleaner.py", line 50, in clean_os_bucket
    internal_storage.delete_temporal_data(objects_to_delete)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/pywren_ibm_cloud-1.0.1-py3.6.egg/pywren_ibm_cloud/storage/storage.py", line 167, in delete_temporal_data
    return self.backend_handler.delete_objects(self.storage_bucket, key_list)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/pywren_ibm_cloud-1.0.1-py3.6.egg/pywren_ibm_cloud/storage/backends/cos.py", line 153, in delete_objects
    return self.cos_client.delete_objects(Bucket=bucket_name, Delete=delete_keys)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/ibm_cos_sdk_core-2.3.3-py3.6.egg/ibm_botocore/client.py", line 255, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/ibm_cos_sdk_core-2.3.3-py3.6.egg/ibm_botocore/client.py", line 545, in _make_api_call
    raise error_class(parsed_response, operation_name)
ibm_botocore.exceptions.ClientError: An error occurred (InvalidRequest) when calling the DeleteObjects operation: Maximum 1000 objects in request body exceeded
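A minimal sketch of a possible fix, batching the keys before calling delete_objects (the 1000-object cap is a standard S3/COS DeleteObjects limit):

MAX_DELETE_KEYS = 1000  # DeleteObjects accepts at most 1000 keys per request

def delete_objects_batched(cos_client, bucket_name, key_list):
    # issue one DeleteObjects request per batch of at most 1000 keys
    for i in range(0, len(key_list), MAX_DELETE_KEYS):
        batch = key_list[i:i + MAX_DELETE_KEYS]
        delete_keys = {'Objects': [{'Key': k} for k in batch]}
        cos_client.delete_objects(Bucket=bucket_name, Delete=delete_keys)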

When some activation fails, get_result fails too

Even when only one of the activations fails, the get_result function fails and throws this error:

wait(fut_list, executor_id, internal_storage, throw_except)

  File "/action/pywren_ibm_cloud/wait.py", line 69, in wait
  File "/action/pywren_ibm_cloud/wait.py", line 121, in _wait
  File "/action/pywren_ibm_cloud/wait.py", line 121, in <listcomp>
AttributeError: 'NoneType' object has no attribute 'done'

Is it supposed to work like this? I expected it to ignore the failed activations.

Thank you

partitioner's logic throws exception with 1 chunk operation

When I try to run a map_reduce() operation with a chunk_size bigger than the size of the file I'm working on, an exception is thrown
(for example, running map_reduce() with chunk_size=4MB on a file whose size is 2MB):

Traceback (most recent call last):
  File "/Users/omerbelh/IdeaProjects/WORK/metabolomics-cloudbutton/examples/pywren-reads-input-chucks.py", line 58, in <module>

    spectra = pw.get_result()
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/pywren_ibm_cloud-1.0.3-py3.6.egg/pywren_ibm_cloud/wren.py", line 338, in get_result
    WAIT_DUR_SEC=WAIT_DUR_SEC)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/pywren_ibm_cloud-1.0.3-py3.6.egg/pywren_ibm_cloud/wait.py", line 83, in wait
    pbar=pbar)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/pywren_ibm_cloud-1.0.3-py3.6.egg/pywren_ibm_cloud/wait.py", line 293, in _wait_storage
    pool.map(get_result, f_to_wait_on)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/multiprocessing/pool.py", line 260, in map
    return self._map_async(func, iterable, mapstar, chunksize).get()
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/multiprocessing/pool.py", line 608, in get
    raise self._value
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/multiprocessing/pool.py", line 119, in worker
    result = (True, func(*args, **kwds))
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/multiprocessing/pool.py", line 44, in mapstar
    return list(map(*args))
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/pywren_ibm_cloud-1.0.3-py3.6.egg/pywren_ibm_cloud/wait.py", line 287, in get_result
    f.result(throw_except=throw_except, internal_storage=internal_storage)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/pywren_ibm_cloud-1.0.3-py3.6.egg/pywren_ibm_cloud/future.py", line 289, in result
    reraise(*self._traceback)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/six.py", line 692, in reraise
    raise value.with_traceback(tb)
  File "pywren_ibm_cloud/action/jobrunner.py", line 213, in run_function
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/pywren_ibm_cloud-1.0.3-py3.6.egg/pywren_ibm_cloud/executor.py", line 373, in reduce_function_wrapper
    wait(fut_list, executor_id, internal_storage, download_results=True)
  File "/action/pywren_ibm_cloud/wait.py", line 83, in wait
  File "/action/pywren_ibm_cloud/wait.py", line 293, in _wait_storage
  File "/usr/local/lib/python3.6/multiprocessing/pool.py", line 288, in map
  File "/usr/local/lib/python3.6/multiprocessing/pool.py", line 670, in get
  File "/usr/local/lib/python3.6/multiprocessing/pool.py", line 119, in worker
  File "/usr/local/lib/python3.6/multiprocessing/pool.py", line 44, in mapstar
  File "/action/pywren_ibm_cloud/wait.py", line 287, in get_result
  File "/action/pywren_ibm_cloud/future.py", line 289, in result
  File "/usr/local/lib/python3.6/site-packages/six.py", line 692, in reraise
    raise value.with_traceback(tb)
  File "pywren_ibm_cloud/action/jobrunner.py", line 213, in run_function
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/pywren_ibm_cloud-1.0.3-py3.6.egg/pywren_ibm_cloud/partitioner.py", line 40, in object_processing_wrapper
    wsb = WrappedStreamingBodyPartition(sb, chunk_size, data_byte_range)
  File "/action/pywren_ibm_cloud/partitioner.py", line 266, in __init__
TypeError: 'NoneType' object is not subscriptable

Support cos:// for execution timeline plots

The current code supports a local directory as the destination for generating timeline plots:

pw = pywren.ibm_cf_executor()
futures = pw.map(my_map_function, range(200))
result = pw.get_result()
pw.create_timeline_plots(dst='/home/jsampe/desktop', name='testmap')

Map-Reduce (we need to return statuses from the reduce, for now):

pw = pywren.ibm_cf_executor()
futures = pw.map(my_map_function, range(200), my_reduce_function)
result = pw.get_result()
pw.create_timeline_plots(dst='/home/jsampe/desktop', name='testmapreduce',
                         run_statuses=result['run_statuses'],
                         invoke_statuses=result['invoke_statuses'])

We need to extend this and support cos:// as a destination folder, e.g. as sketched below.
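A minimal sketch of the idea, assuming a boto3-style COS client and a matplotlib figure object; this is not the project's actual plotting code:

import io

def save_timeline_plot(fig, dst, name, cos_client=None):
    # cos://bucket/prefix destinations are uploaded to object storage;
    # anything else is treated as a local directory, as today
    if dst.startswith('cos://'):
        bucket, _, prefix = dst[len('cos://'):].partition('/')
        buf = io.BytesIO()
        fig.savefig(buf, format='png')
        buf.seek(0)
        key = '{}/{}.png'.format(prefix, name) if prefix else '{}.png'.format(name)
        cos_client.put_object(Bucket=bucket, Key=key, Body=buf.read())
    else:
        fig.savefig('{}/{}.png'.format(dst, name))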

to pickle or not to pickle

The current implementation pickles all dependencies that are not listed in the default_preinstalls module.
We need to extend this by avoiding pickling libraries that are already part of the Docker runtime.

concurrency/massive function spawning

Hi,

I have 2 questions about concurrency.

In the paper (https://dl.acm.org/citation.cfm?id=3284029), massive function spawning is mentioned, and I am wondering how to enable/configure it as a user. I looked into the source code, and I assume the parameter invoke_pool_threads corresponds to the massive function spawning mechanism? For example, quoting from the paper:

For example, pretend that it is necessary to execute 1,000 functions. In this case, the massive function spawning mechanism would internally arrange 10 groups of 100 invocations, which would be processed by 10 remote invoker functions separately.

So in that example, say the original script needs to run 1000 functions; then, to achieve these "10 groups of 100 invocations", does the user specify invoke_pool_threads=100 and remote_invocation=True?

Generally, is concurrency enabled/configured using the parameter invoke_pool_threads? If not, which parameters are available?

please let me know, thanks!

gather futures from multiple executions

An issue that I noticed from the last commits:
PyWren can no longer run multiple executions like in this example:

def simple_map_function(x, y):
    return x + y

pw = pywren.ibm_cf_executor()
iterdata = [[1, 1], [2, 2]]
pw.map(simple_map_function, iterdata)
iterdata = [[3, 3], [4, 4]]
pw.map(simple_map_function, iterdata)
result = pw.get_result()

because the result is only the result of the last execution, instead of all of the above.

map() and map_reduce() are not consistent

Seems we need to refactor both methods. So far map_reduce() contains a lot of COS logic while map() doesn't. Since we don't always need reduce but do need the COS-related logic, we may call map_reduce(..., reduce=None). This is wrong.
We need map() to support COS as well, similar to the map_reduce() method.

Perhaps the best approach would be to call map() from map_reduce() and move the COS-related logic to map(). This would cover all the cases.

@JosepSampe need your input if this makes sense

deploy_runtime may choose wrong PyWren version

The local Python installation may contain previous PyWren releases. For example:

/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/pywren_ibm_cloud-1.0.1-py3.6.egg:
/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/pywren_ibm_cloud-1.0.2-py3.6.egg:

Running ./deploy_runtime will choose the most recent version according to version.py.

However, code that imports PyWren, e.g.
import pywren_ibm_cloud as pywren
import os

may pick up a previous version, and then an exception happens:

2019-01-01T07:26:46.115837198Z stdout: [INFO] main: Starting IBM Cloud Function execution
2019-01-01T07:26:46.115886537Z stdout: [INFO] wrenhandler: Starting handler
2019-01-01T07:26:46.116373172Z stdout: [ERROR] wrenhandler: There was an exception: ('WRONGVERSION', 'Pywren version mismatch', '1.0.2', '1.0.1')

In my case I simply deleted the previous version from site-packages and the issue was resolved.
However, this seems a problematic solution and we need a better approach.

bucket used by PyWren and bucket used as input data

Currently we extract the bucket_name from the provided path.
For example:

 pw.map(my_func, 'data/input/d.txt')

PyWren will use 'data' as the bucket name and 'input/d.txt' as the key.
However, the endpoint and credentials used are those defined in the cos section of the config. That section defines credentials for the bucket used by PyWren itself, set in 'storage_bucket'.

Thus, if the "data" bucket doesn't exist in the same region or has a different access key, an exception will be raised saying the bucket does not exist.

As a temporary workaround, we need to make sure that the bucket used by PyWren has the same endpoint and the same credentials as the bucket used for the input data in map().

NumPy import error

My concurrency limit is 1000; when I try to execute 1001 actions I get this error:

Traceback (most recent call last):
  File "/Applications/PyCharm.app/Contents/helpers/pydev/pydevd.py", line 1689, in <module>
    main()
  File "/Applications/PyCharm.app/Contents/helpers/pydev/pydevd.py", line 1683, in main
    globals = debugger.run(setup['file'], None, None, is_module)
  File "/Applications/PyCharm.app/Contents/helpers/pydev/pydevd.py", line 1083, in run
    pydev_imports.execfile(file, globals, locals)  # execute the script
  File "/Applications/PyCharm.app/Contents/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
    exec(compile(contents+"\n", file, 'exec'), glob, loc)
  File "/Users/idoye/Documents/PycharmProjects/Industrial_Project_234313/Monte_Carlo/finance_monte_carlo.py", line 93, in <module>
    result_obj = executor.map_reduce_execution(map_function, iterdata, reduce_function)
  File "/Users/idoye/Documents/PycharmProjects/Industrial_Project_234313/ExecuterWrapper/executorWrapper.py", line 52, in map_reduce_execution
    result_object, duration = self._pywren_execution(map_function, iterable_data, reduce_function)
  File "/Users/idoye/Documents/PycharmProjects/Industrial_Project_234313/ExecuterWrapper/executorWrapper.py", line 34, in _pywren_execution
    result_object = pw.get_result()
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/pywren_ibm_cloud-1.0.2-py3.6.egg/pywren_ibm_cloud/wren.py", line 267, in get_result
    verbose=verbose, timeout=timeout)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/pywren_ibm_cloud-1.0.2-py3.6.egg/pywren_ibm_cloud/wren.py", line 308, in _get_result
    throw_except=throw_except, verbose=verbose)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/pywren_ibm_cloud-1.0.2-py3.6.egg/pywren_ibm_cloud/future.py", line 278, in result
    reraise(*self._traceback)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/six.py", line 658, in reraise
    raise value.with_traceback(tb)
  File "pywren_ibm_cloud/action/jobrunner.py", line 165, in <module>
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/pywren_ibm_cloud-1.0.2-py3.6.egg/pywren_ibm_cloud/executor.py", line 451, in reduce_function_wrapper
    wait(fut_list, executor_id, internal_storage, throw_except)
  File "/action/pywren_ibm_cloud/wait.py", line 69, in wait
  File "/action/pywren_ibm_cloud/wait.py", line 186, in _wait
  File "/usr/local/lib/python3.6/multiprocessing/pool.py", line 288, in map
  File "/usr/local/lib/python3.6/multiprocessing/pool.py", line 670, in get
  File "/usr/local/lib/python3.6/multiprocessing/pool.py", line 119, in worker
  File "/usr/local/lib/python3.6/multiprocessing/pool.py", line 44, in mapstar
  File "/action/pywren_ibm_cloud/wait.py", line 185, in get_result
  File "/action/pywren_ibm_cloud/future.py", line 206, in result
AttributeError: Can't get attribute 'dtype' on <module 'numpy' from  '/usr/local/lib/python3.6/site-packages/numpy/__init__.py'>

and it seems that pywren is trying to import NumPy from the wrong folder... on my MacBook the NumPy installation folder is:
/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/


Is there any chance to add it to sys.path?

Process CSV files in chunks suggestion

Something I suggest to improve pywren's adaptability for large data handling projects:

CSV files are usually processed with packages like pandas, which uses its read_csv() method to create an instance of an efficient data handler called a DataFrame.
Source: https://pandas.pydata.org/pandas-docs/stable/reference/frame.html

Highlights of the features supported by this handler:

  • iterating over data lines
  • indexing data for optimizations purpose
  • computing data functions
  • dividing data into categorized groups

I think it would be significantly better if PyWren were integrated with this handler so that, for example, we could read CSV data from COS efficiently, in categorized-group chunks (or whatever the alternative is), instead of always pre-storing dedicated, manipulated data files in COS and then, after a simple logic change, having to change everything manually again. A sketch of such an integration follows.
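A minimal sketch of what this could look like, fetching only a byte range of a CSV object from COS and parsing it with pandas. It assumes the range is aligned to line boundaries upstream (e.g. by the partitioner) and that column names are supplied separately:

import io
import pandas as pd

def read_csv_range(cos_client, bucket, key, byte_range, column_names):
    # fetch only the requested slice of the object, then parse it
    start, end = byte_range
    resp = cos_client.get_object(Bucket=bucket, Key=key,
                                 Range='bytes={}-{}'.format(start, end))
    body = resp['Body'].read()
    return pd.read_csv(io.BytesIO(body), names=column_names, header=None)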

Support multiple PyWren executions on the same executor

We need to improve the executor class to allow executing multiple map functions on the same executor.

For now, we have to init an executor before each function run:

pw = pywren_ibm_cloud.ibm_cf_executor()
pw.map_reduce(map_function, iterdata, reduce_function)

We need to be able to run PyWren like below:

pw = pywren_ibm_cloud.ibm_cf_executor()
iterdata1 = ...
pw.map_reduce(map_function, iterdata1, reduce_function)
result1 = pw.get_result()
iterdata2 = ...
pw.map_reduce(map_function, iterdata2, reduce_function)
result2 = pw.get_result()

map reduce using url

When I try to run the map_reduce_url example, I get an error about chunk_size not being passed to the function split_object_from_url. I looked at the code, and it seems that the call to that function from the partitioner (executor.py line 524) mismatches the arguments.

enable configurable logging level via configuration

The current code contains statements like:

wrenlogging.ow_config(logging.DEBUG)
level = logging.DEBUG (in jobrunner)

We need to extend the configuration to control at runtime the logging level we want to have. This configuration will be used as the logging level for all PyWren components; see the sketch below.
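A minimal sketch of the idea, assuming a hypothetical log_level key in the pywren section of the config:

import logging

def setup_logging(config):
    # default to INFO when the key is absent or unrecognized
    level_name = config.get('pywren', {}).get('log_level', 'INFO')
    level = getattr(logging, str(level_name).upper(), logging.INFO)
    logging.basicConfig(level=level)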

Low bandwidth accessing to COS from some activations

I'm running an experiment that consists of 400 actions (in parallel) accessing COS to read 250MB each.

The problem is that, each time I run the experiment, there are always between 1 and 4 actions that, for some reason, get very low bandwidth. For example:
[plot: per-activation COS read times]
As shown in the plot, in that experiment there were 3 actions that took more than 330 seconds to read their 250MB, while the rest of the actions took 10 seconds.

I'm already investigating what causes the issue.

pickle may have issues with nested classes

PyWren uses pickle to serialize and deserialize the code. If the code contains nested classes, pickle may fail to properly deserialize the nested class and will look it up at the package level. This may produce an error such as: [ERROR] jobrunner: There was an exception: module '__main__' has no attribute 'XYZ'

Not sure what can be done; this issue is to document the current behavior.

Windows: signal.SIGALRM is not available

It looks like the signal module has limited compatibility with Windows; signal.SIGALRM and signal.alarm are both only available on *nix-based systems.

I am on Windows 10, Python 3.7.2.

I came across:

(.venv) D:\python\pywren-ibm>python testwren.py
IBM Cloud Functions init for Namespace: [email protected]_dev
IBM Cloud Functions init for Host: https://us-south.functions.cloud.ibm.com
IBM Cloud Functions init for Runtime: ibmfunctions/action-python-v3.7 - 256MB
IBM Cloud Functions executor created with ID 553e0bef-efa9
Executor ID 553e0bef-efa9 Uploading function and data - Total: 3.4KiB
Executor ID 553e0bef-efa9 Starting function invocation: hello_world() - Total: 1 activations
Executor ID 553e0bef-efa9 Getting results ...
EIBM Cloud Functions init for Namespace: [email protected]_dev
IBM Cloud Functions init for Host: https://us-south.functions.cloud.ibm.com
IBM Cloud Functions init for Runtime: ibmfunctions/action-python-v3.7 - 256MB
IBM Cloud Functions executor created with ID 42d6e074-849e
Executor ID 42d6e074-849e Uploading function and data - Total: 3.8KiB
Executor ID 42d6e074-849e Starting function invocation: simple_map_function() - Total: 4 activations
Executor ID 42d6e074-849e Getting results ...
EIBM Cloud Functions init for Namespace: [email protected]_dev
IBM Cloud Functions init for Host: https://us-south.functions.cloud.ibm.com
IBM Cloud Functions init for Runtime: ibmfunctions/action-python-v3.7 - 256MB
IBM Cloud Functions executor created with ID 2b65c1e6-b2e2
Executor ID 2b65c1e6-b2e2 Uploading function and data - Total: 4.2KiB
Executor ID 2b65c1e6-b2e2 Starting function invocation: simple_map_function() - Total: 4 activations
Executor ID 2b65c1e6-b2e2 Uploading function and data - Total: 9.3KiB
Executor ID 2b65c1e6-b2e2 Starting function invocation: simple_reduce_function() - Total: 1 activations
Executor ID 2b65c1e6-b2e2 Getting results ...
EIBM Cloud Functions init for Namespace: [email protected]_dev
IBM Cloud Functions init for Host: https://us-south.functions.cloud.ibm.com
IBM Cloud Functions init for Runtime: ibmfunctions/action-python-v3.7 - 256MB
IBM Cloud Functions executor created with ID d63f92b8-ba3e
Executor ID d63f92b8-ba3e Uploading function and data - Total: 4.5KiB
Executor ID d63f92b8-ba3e Starting function invocation: simple_map_function() - Total: 2 activations
Executor ID d63f92b8-ba3e Uploading function and data - Total: 4.5KiB
Executor ID d63f92b8-ba3e Starting function invocation: simple_map_function() - Total: 2 activations
Executor ID d63f92b8-ba3e Getting results ...
EIBM Cloud Functions init for Namespace: [email protected]_dev
IBM Cloud Functions init for Host: https://us-south.functions.cloud.ibm.com
IBM Cloud Functions init for Runtime: ibmfunctions/action-python-v3.7 - 256MB
IBM Cloud Functions executor created with ID e34103df-ebcc
Executor ID e34103df-ebcc Uploading function and data - Total: 15.1KiB
Executor ID e34103df-ebcc Starting function invocation: my_map_function_bucket() - Total: 8 activations
Executor ID e34103df-ebcc Uploading function and data - Total: 19.8KiB
Executor ID e34103df-ebcc Starting function invocation: my_reduce_function() - Total: 1 activations
Executor ID e34103df-ebcc Getting results ...
EIBM Cloud Functions init for Namespace: [email protected]_dev
IBM Cloud Functions init for Host: https://us-south.functions.cloud.ibm.com
IBM Cloud Functions init for Runtime: ibmfunctions/action-python-v3.7 - 256MB
IBM Cloud Functions executor created with ID cc2c15eb-3cfe
Executor ID cc2c15eb-3cfe Uploading function and data - Total: 15.6KiB
Executor ID cc2c15eb-3cfe Starting function invocation: my_map_function_bucket() - Total: 8 activations
Executor ID cc2c15eb-3cfe Uploading function and data - Total: 32.0KiB
Executor ID cc2c15eb-3cfe Starting function invocation: my_reduce_function() - Total: 6 activations
Executor ID cc2c15eb-3cfe Getting results ...
EIBM Cloud Functions init for Namespace: [email protected]_dev
IBM Cloud Functions init for Host: https://us-south.functions.cloud.ibm.com
IBM Cloud Functions init for Runtime: ibmfunctions/action-python-v3.7 - 256MB
IBM Cloud Functions executor created with ID 2fc1066a-1953
Executor ID 2fc1066a-1953 Uploading function and data - Total: 15.7KiB
Executor ID 2fc1066a-1953 Starting function invocation: my_map_function_bucket() - Total: 6 activations
Executor ID 2fc1066a-1953 Uploading function and data - Total: 19.9KiB
Executor ID 2fc1066a-1953 Starting function invocation: my_reduce_function() - Total: 1 activations
Executor ID 2fc1066a-1953 Getting results ...
EIBM Cloud Functions init for Namespace: [email protected]_dev
IBM Cloud Functions init for Host: https://us-south.functions.cloud.ibm.com
IBM Cloud Functions init for Runtime: ibmfunctions/action-python-v3.7 - 256MB
IBM Cloud Functions executor created with ID b9a7cae4-63a0
Executor ID b9a7cae4-63a0 Uploading function and data - Total: 16.2KiB
Executor ID b9a7cae4-63a0 Starting function invocation: my_map_function_bucket() - Total: 6 activations
Executor ID b9a7cae4-63a0 Uploading function and data - Total: 32.2KiB
Executor ID b9a7cae4-63a0 Starting function invocation: my_reduce_function() - Total: 6 activations
Executor ID b9a7cae4-63a0 Getting results ...
ERetrieving items' names from bucket: spif-pywren-bucket, prefix: __pywren.test
IBM Cloud Functions init for Namespace: [email protected]_dev
IBM Cloud Functions init for Host: https://us-south.functions.cloud.ibm.com
IBM Cloud Functions init for Runtime: ibmfunctions/action-python-v3.7 - 256MB
IBM Cloud Functions executor created with ID 65a2828a-bc0f
Executor ID 65a2828a-bc0f Uploading function and data - Total: 16.6KiB
Executor ID 65a2828a-bc0f Starting function invocation: my_map_function_key() - Total: 6 activations
Executor ID 65a2828a-bc0f Uploading function and data - Total: 20.8KiB
Executor ID 65a2828a-bc0f Starting function invocation: my_reduce_function() - Total: 1 activations
Executor ID 65a2828a-bc0f Getting results ...
ERetrieving items' names from bucket: spif-pywren-bucket, prefix: __pywren.test
IBM Cloud Functions init for Namespace: [email protected]_dev
IBM Cloud Functions init for Host: https://us-south.functions.cloud.ibm.com
IBM Cloud Functions init for Runtime: ibmfunctions/action-python-v3.7 - 256MB
IBM Cloud Functions executor created with ID f18fe21b-e595
Executor ID f18fe21b-e595 Uploading function and data - Total: 17.1KiB
Executor ID f18fe21b-e595 Starting function invocation: my_map_function_key() - Total: 6 activations
Executor ID f18fe21b-e595 Uploading function and data - Total: 33.1KiB
Executor ID f18fe21b-e595 Starting function invocation: my_reduce_function() - Total: 6 activations
Executor ID f18fe21b-e595 Getting results ...
EIBM Cloud Functions init for Namespace: [email protected]_dev
IBM Cloud Functions init for Host: https://us-south.functions.cloud.ibm.com
IBM Cloud Functions init for Runtime: ibmfunctions/action-python-v3.7 - 256MB
IBM Cloud Functions executor created with ID aa582a99-6927
Executor ID aa582a99-6927 Uploading function and data - Total: 17.6KiB
Executor ID aa582a99-6927 Starting function invocation: my_map_function_url() - Total: 5 activations
Executor ID aa582a99-6927 Uploading function and data - Total: 21.4KiB
Executor ID aa582a99-6927 Starting function invocation: my_reduce_function() - Total: 1 activations
Executor ID aa582a99-6927 Getting results ...
ERetrieving items' names from bucket: spif-pywren-bucket, prefix: __pywren.test
IBM Cloud Functions init for Namespace: [email protected]_dev
IBM Cloud Functions init for Host: https://us-south.functions.cloud.ibm.com
IBM Cloud Functions init for Runtime: ibmfunctions/action-python-v3.7 - 256MB
IBM Cloud Functions executor created with ID bec79da1-f923
Executor ID bec79da1-f923 Uploading function and data - Total: 16.3KiB
Executor ID bec79da1-f923 Starting function invocation: my_map_function_storage_handler() - Total: 6 activations
Executor ID bec79da1-f923 Uploading function and data - Total: 22.2KiB
Executor ID bec79da1-f923 Starting function invocation: my_reduce_function() - Total: 1 activations
Executor ID bec79da1-f923 Getting results ...
E
======================================================================
ERROR: test_call_async (__main__.TestPywren)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "testwren.py", line 108, in test_call_async
    result = pw.get_result()
  File "D:\python\pywren-ibm\.venv\lib\site-packages\pywren_ibm_cloud\wren.py", line 333, in get_result
    signal.signal(signal.SIGALRM, timeout_handler)
AttributeError: module 'signal' has no attribute 'SIGALRM'

======================================================================
ERROR: test_map (__main__.TestPywren)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "testwren.py", line 125, in test_map
    result = pw.get_result()
  File "D:\python\pywren-ibm\.venv\lib\site-packages\pywren_ibm_cloud\wren.py", line 333, in get_result
    signal.signal(signal.SIGALRM, timeout_handler)
AttributeError: module 'signal' has no attribute 'SIGALRM'

======================================================================
ERROR: test_map_reduce (__main__.TestPywren)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "testwren.py", line 132, in test_map_reduce
    result = pw.get_result()
  File "D:\python\pywren-ibm\.venv\lib\site-packages\pywren_ibm_cloud\wren.py", line 333, in get_result
    signal.signal(signal.SIGALRM, timeout_handler)
AttributeError: module 'signal' has no attribute 'SIGALRM'

======================================================================
ERROR: test_multiple_executions (__main__.TestPywren)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "testwren.py", line 141, in test_multiple_executions
    result = pw.get_result()
  File "D:\python\pywren-ibm\.venv\lib\site-packages\pywren_ibm_cloud\wren.py", line 333, in get_result
    signal.signal(signal.SIGALRM, timeout_handler)
AttributeError: module 'signal' has no attribute 'SIGALRM'

======================================================================
ERROR: test_chunks_bucket (__main__.TestPywrenCos)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "testwren.py", line 317, in test_chunks_bucket
    result = pw.get_result()
  File "D:\python\pywren-ibm\.venv\lib\site-packages\pywren_ibm_cloud\wren.py", line 333, in get_result
    signal.signal(signal.SIGALRM, timeout_handler)
AttributeError: module 'signal' has no attribute 'SIGALRM'

======================================================================
ERROR: test_chunks_bucket_one_reducer_per_object (__main__.TestPywrenCos)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "testwren.py", line 325, in test_chunks_bucket_one_reducer_per_object
    result = pw.get_result()
  File "D:\python\pywren-ibm\.venv\lib\site-packages\pywren_ibm_cloud\wren.py", line 333, in get_result
    signal.signal(signal.SIGALRM, timeout_handler)
AttributeError: module 'signal' has no attribute 'SIGALRM'

======================================================================
ERROR: test_map_reduce_cos_bucket (__main__.TestPywrenCos)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "testwren.py", line 270, in test_map_reduce_cos_bucket
    result = pw.get_result()
  File "D:\python\pywren-ibm\.venv\lib\site-packages\pywren_ibm_cloud\wren.py", line 333, in get_result
    signal.signal(signal.SIGALRM, timeout_handler)
AttributeError: module 'signal' has no attribute 'SIGALRM'

======================================================================
ERROR: test_map_reduce_cos_bucket_one_reducer_per_object (__main__.TestPywrenCos)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "testwren.py", line 277, in test_map_reduce_cos_bucket_one_reducer_per_object
    result = pw.get_result()
  File "D:\python\pywren-ibm\.venv\lib\site-packages\pywren_ibm_cloud\wren.py", line 333, in get_result
    signal.signal(signal.SIGALRM, timeout_handler)
AttributeError: module 'signal' has no attribute 'SIGALRM'

======================================================================
ERROR: test_map_reduce_cos_key (__main__.TestPywrenCos)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "testwren.py", line 286, in test_map_reduce_cos_key
    result = pw.get_result()
  File "D:\python\pywren-ibm\.venv\lib\site-packages\pywren_ibm_cloud\wren.py", line 333, in get_result
    signal.signal(signal.SIGALRM, timeout_handler)
AttributeError: module 'signal' has no attribute 'SIGALRM'

======================================================================
ERROR: test_map_reduce_cos_key_one_reducer_per_object (__main__.TestPywrenCos)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "testwren.py", line 295, in test_map_reduce_cos_key_one_reducer_per_object
    result = pw.get_result()
  File "D:\python\pywren-ibm\.venv\lib\site-packages\pywren_ibm_cloud\wren.py", line 333, in get_result
    signal.signal(signal.SIGALRM, timeout_handler)
AttributeError: module 'signal' has no attribute 'SIGALRM'

======================================================================
ERROR: test_map_reduce_url (__main__.TestPywrenCos)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "testwren.py", line 301, in test_map_reduce_url
    result = pw.get_result()
  File "D:\python\pywren-ibm\.venv\lib\site-packages\pywren_ibm_cloud\wren.py", line 333, in get_result
    signal.signal(signal.SIGALRM, timeout_handler)
AttributeError: module 'signal' has no attribute 'SIGALRM'

======================================================================
ERROR: test_storage_handler (__main__.TestPywrenCos)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "testwren.py", line 310, in test_storage_handler
    result = pw.get_result()
  File "D:\python\pywren-ibm\.venv\lib\site-packages\pywren_ibm_cloud\wren.py", line 333, in get_result
    signal.signal(signal.SIGALRM, timeout_handler)
AttributeError: module 'signal' has no attribute 'SIGALRM'

----------------------------------------------------------------------
Ran 12 tests in 18.251s

FAILED (errors=12)
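A portable alternative to SIGALRM (a minimal sketch, not the project's actual fix) is to enforce the timeout from a worker thread, which also works on Windows:

import threading

def run_with_timeout(func, timeout_sec, *args, **kwargs):
    # run func in a daemon thread and give up waiting after timeout_sec
    result, error = [], []

    def target():
        try:
            result.append(func(*args, **kwargs))
        except Exception as e:
            error.append(e)

    t = threading.Thread(target=target, daemon=True)
    t.start()
    t.join(timeout_sec)
    if t.is_alive():
        raise TimeoutError('timed out after {}s'.format(timeout_sec))
    if error:
        raise error[0]
    return result[0]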

Time of getting result increase significantly when result few MB

While executing my code using IBM-PyWren and reviewing the invocations graph, I noticed that I got the result after 145 sec, but in the graph all job results were fetched after ~12 sec.
Therefore I thought my reducer function was the bottleneck, so I decided to make a "dummy" reducer function that does nothing and returns an empty dictionary... Indeed, this reduced the execution time to ~23 sec. The next thing I tried was to let the reduce function perform what it should do but not return the result, so again the function returned an empty dictionary but did some computations. The surprise was that it took around the same time as the dummy reducer, ~23 sec...
The final try was to let the reducer return the object it computes (a 5MB list); this increased the execution time significantly, to ~145 sec!

Conclusion: the bottleneck is not in my code but in returning the object. And downloading a 5MB file shouldn't take 120 sec...

How do you explain this case, and how can I avoid it?

Attached invocation graph:
[plot: dbpedia-model, chunk size 8MB]

Install script to get (optional) version number as an input

Users who use

  curl -fsSL "https://git.io/fhe9X" | sh 

have no control over which version is installed, as this script pulls the latest release. This is fine for many cases; however, we need an option to provide the version number of a specific release as an input to this script. If no version is provided, the latest will be used, as is the case today.

PyWren doesn't get results when there are many

Recently I noticed that when I try to get dozens of results (for example, just using map without reduce), pywren gets stuck and never returns the result.

Note: my plan in IBM Cloud is Lite, which may be related.

clean data from failed job

I created buggy code that runs over 1000 objects from object storage.
PyWren generates 1000 tasks and each task fails. In this case, each task generates a status.json and an output.pickle, resulting in 2000 objects.
As the job failed, this data was not deleted from the object storage.

We need to fix this so that PyWren cleans up the data from all tasks when a job fails, e.g. as sketched below.
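A minimal sketch of such a cleanup, assuming a boto3-style COS client and that all of a job's temporary objects share a common prefix:

def clean_job_data(cos_client, bucket, prefix):
    # list every leftover object under the job's prefix and delete it;
    # list_objects_v2 pages hold at most 1000 keys, which also respects
    # the DeleteObjects batch limit
    paginator = cos_client.get_paginator('list_objects_v2')
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        keys = [{'Key': obj['Key']} for obj in page.get('Contents', [])]
        if keys:
            cos_client.delete_objects(Bucket=bucket, Delete={'Objects': keys})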

create timeline plots after map does not work

Hi @JosepSampe
I remember that I used the method create_timeline_plots right after getting the result, with a path and a job name, and it used to create plots about the invocations on my local filesystem. But it seems that something has gone wrong, because it doesn't work anymore...
The plots are not created; I debugged the method and it seems self.future is an empty list...

Can you please check this? I guess it's not a big problem, because it used to work.

Thanks,
Ido

issues with ./deploy_runtime clone running within Watson Studio

I used a Watson Studio notebook and executed:
!cd pywren-ibm-cloud-master/runtime && ./deploy_runtime clone cactusone/pywren:3.5

This failed running the script, as there is no pywren.config file in local environment.

create_blackbox_runtime(image_name)
  File "./deploy_runtime", line 70, in create_blackbox_runtime
    config = wrenconfig.default()
  File "/opt/conda/envs/DSX-Python35/lib/python3.5/site-packages/pywren_ibm_cloud-1.0.2-py3.5.egg/pywren_ibm_cloud/wrenconfig.py", line 108, in default
    config_data = load(config_filename)
  File "/opt/conda/envs/DSX-Python35/lib/python3.5/site-packages/pywren_ibm_cloud-1.0.2-py3.5.egg/pywren_ibm_cloud/wrenconfig.py", line 36, in load
    with open(config_filename, 'r') as config_file:
FileNotFoundError: [Errno 2] No such file or directory: '/home/dsxuser/.pywren_config'

In Watson Studio we use a Python dictionary and provide the configuration at runtime.
The best solution would be to pass this dictionary as an input to deploy_runtime (making it optional, of course).

chunk size map operation on several keys

When I use the map_reduce() method with a chunk_size on several keys, I get a flat list of results, and I can't easily tell which of them are split chunks and which are the whole data of a key (different keys can point to files of various sizes).

I think it would be better if the chunk results of each key were returned in their own list, so we would get a list of lists, each referring to the relevant key, with each key's list containing its chunks' results.

current result pattern example:

[result_key1, result_key2_chunk1, result_key2_chunk2, result_key3]

suggested result pattern example:

[[result_key1], [result_key2_chunk1, result_key2_chunk2], [result_key3]]

PyWren actions (default, user based) should have a dynamic suffix in their name

There might be issues when multiple users use the same CF account. In this case, users who deploy actions with the same name may overwrite each other's actions without notification.

We should create actions with a suffix containing user identification; this way action names will be unique. See the sketch below.
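A minimal sketch of how such a suffix could be derived (the hashing scheme here is hypothetical):

import hashlib

def unique_action_name(action_name, namespace, api_key):
    # derive a short, stable per-user suffix so that two users on the
    # same CF account never collide on action names
    user_id = hashlib.sha1('{}:{}'.format(namespace, api_key)
                           .encode('utf-8')).hexdigest()[:8]
    return '{}_{}'.format(action_name, user_id)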

issues with gevent version and inconsistency with openwhisk - to monitor

Our Docker image is based on https://github.com/apache/incubator-openwhisk-runtime-docker/releases. The pull request apache/openwhisk-runtime-docker#62 upgraded the gevent and flask versions; however, there is no release yet that contains this pull request.

For now we can't upgrade requirements.txt with recent gevent versions, as it would hit a version incompatibility with OpenWhisk. Commit b33f7d9 moved gevent back to an earlier release.

tests don't work

Getting "can't open data file" when trying to run the tests. They don't seem to work:

python3 test/testpywren.py init
can't open data file
pywren-ibm-cloud me$ ls test/
data		testpywren.py
pywren-ibm-cloud me$ cd test/
test me$ python3 testpywren.py init
can't open config file

@omerb01 Can you check this please?

Which runtime is being used

Not sure there is an issue here; perhaps this is more of a usage question.
I created a runtime with ./deploy_runtime create cactusone/pywren-dev:3.6, then ran our tests with python test/testpywren.py.
So far the tests were using the default runtime from "action_name: pywren-dev_3.6".

But when I ran the tests, it printed IBM Cloud Functions init for Runtime: ibmfunctions/action-python-v3.6 - 256MB (Installing...),
so it doesn't use my runtime, but creates a new one.

@JosepSampe can you explain this?

storage_handler usage

We need support for using a storage handler that can be obtained externally from PyWren.

for example:

import pywren_ibm_cloud as pywren

def my_map_function(bucket, key, data_stream, storage_handler):
    data = data_stream.read()
    obj = storage_handler.get_object(bucket, '<BUCKET_NAME>/<KEY>')
    return data, obj

data_stream = '<BUCKET_NAME>/<PREFIX>'
pw = pywren.ibm_cf_executor()
pw.map(my_map_function, data_stream)
print(pw.get_result())

This throws an error instead of exporting a storage handler:

IBM Cloud Functions init for namespace: pywren_dev_us_east and host: https://us-east.functions.cloud.ibm.com
IBM Cloud Functions init for pywren_3.6
IBM Cloud Functions executor created with ID eedcbd2c-c7c3
Traceback (most recent call last):
  File "/Users/omerbelh/IdeaProjects/WORK/python-pywren-example/main.py", line 12, in <module>
    pw.map(my_map_function, data_stream)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/pywren_ibm_cloud-1.0.3-py3.6.egg/pywren_ibm_cloud/wren.py", line 143, in map
    exclude_modules=exclude_modules)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/pywren_ibm_cloud-1.0.3-py3.6.egg/pywren_ibm_cloud/executor.py", line 216, in multiple_call
    arg_data = wrenutil.verify_args(map_function, data, object_processing=True)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/pywren_ibm_cloud-1.0.3-py3.6.egg/pywren_ibm_cloud/wrenutil.py", line 283, in verify_args
    new_elem = dict(new_func_sig.bind(elem).arguments)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/inspect.py", line 2934, in bind
    return args[0]._bind(args[1:], kwargs)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/inspect.py", line 2849, in _bind
    raise TypeError(msg) from None
TypeError: missing a required argument: 'storage_handler'

Process finished with exit code 1

pandas, seaborn and matplotlib required for plots and not in Pywren requirements

I just installed my project environment from scratch, along with the latest version of IBM-PyWren, and when trying to call the new function for creating invocation graphs, I got an error that the pandas, seaborn and matplotlib packages are required but not installed... (imports of them exist in the plots.py file).

Can you please add those packages to the IBM-PyWren requirements? That would be great; that way the user doesn't need to install them manually...

Deploy Python 3.7 action

It seems that IBM Cloud Functions already supports Python 3.7; you can review it here: IBM Cloud Blog.

The blog claims it is faster than Python 3.6; it would be great if the default runtime were Python 3.7.
