cohesity / cohesity-ansible-role

This repository provides an Ansible role and related modules for Cohesity DataPlatform.

License: Apache License 2.0

Python 96.97% PowerShell 2.79% Jinja 0.24%
ansible api automation devops module rest

cohesity-ansible-role's People

Contributors

anvesh-cohesity, chandrashekar-cohesity, goodrum, naveena-maplelabs, pyashish, sagnihotri-cohesity


cohesity-ansible-role's Issues

Missing parameter start_time in tasks/oracle_job.yml

๐Ÿ› Bug Report


The oracle_job task can take start_time as an input for setting the job's custom start time. The Python module library/cohesity_oracle_job.py expects start_time as part of the input params from the oracle_job task. Because start_time is not specified in tasks/oracle_job.yml, library/cohesity_oracle_job.py always falls back to the default value of null. Can you please fix tasks/oracle_job.yml to include start_time as an argument?

To Reproduce


  1. Create a playbook for an Oracle protection job with start_time as a parameter.
  2. When the playbook is executed, the start_time set in the playbook is not reflected in the job created on Cohesity.
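For context, the fix presumably just needs the task file to forward the argument; a minimal sketch of what that could look like in tasks/oracle_job.yml follows. The surrounding task structure and variable names here are assumptions, not the role's actual code; only the start_time line is the point.

```yaml
# Hypothetical sketch of tasks/oracle_job.yml forwarding start_time.
# The cohesity_* variable names are illustrative assumptions.
- name: "Cohesity oracle job: Manage Oracle protection job"
  cohesity_oracle_job:
    cluster: "{{ cohesity_server }}"
    username: "{{ cohesity_admin }}"
    password: "{{ cohesity_password }}"
    state: "{{ cohesity_protection.state }}"
    name: "{{ cohesity_protection.job_name }}"
    start_time: "{{ cohesity_protection.start_time | default(omit) }}"
```

Using `default(omit)` would keep the argument optional for playbooks that do not set it.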


Documentation issue cohesity_source module

Hi Cohesity,
in the examples section of the source module:

https://cohesity.github.io/cohesity-ansible-role/#/modules/cohesity_source

the parameter "server" is mentioned several times:

```yaml
- cohesity_source:
    server: cohesity.lab
    username: admin
    password: password
    endpoint: mylinux.host.lab
    state: present
```

  • the parameter server is rejected; the right one should be cluster: xxx
  • the parameter environment: is also missing, but it is a required parameter.

Thanks, Frank
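Based on the report above, a corrected version of the documentation example would presumably look like the following sketch (the environment value is an assumption added only to make the example complete):

```yaml
- cohesity_source:
    cluster: cohesity.lab
    username: admin
    password: password
    endpoint: mylinux.host.lab
    environment: Physical  # required parameter; 'Physical' is assumed here
    state: present
```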

Errors unless "excludeFilePaths" is Defined

๐Ÿ› Bug Report

The "excludeFilePaths" variable in the Ansible role is not behaving as expected. The problems are:

  1. When a Cohesity protection job is defined via Ansible WITHOUT "excludeFilePaths" set, the protection job is successfully created. Immediate and subsequent runs of the exact same playbook then fail with an error until "excludeFilePaths" is defined.

  2. When a Cohesity protection job is defined via Ansible WITH "excludeFilePaths" set, the protection job is successfully created. If you undefine "excludeFilePaths" later, the playbook fails with an error unless at least ONE "excludeFilePaths" entry is defined per included path.

To Reproduce

  1. Define a playbook to create a protection job for a single client. Do not define any "excludeFilePaths".
  2. Run the exact same playbook a second time.

Here is the playbook I am using:

```yaml
#=> Create a file based protection job for all linux sources
- hosts: 127.0.0.1
  gather_facts: no
  roles:
    - cohesity-ansible-role-2.1.2
  tasks:
    - name: Protection job
      include_role:
        name: cohesity.cohesity_ansible_role
        tasks_from: job
      vars:
        #cohesity_server: 10.22.149.28
        #cohesity_admin: "{{ username }}"
        #cohesity_password: "{{ password }}"
        #cohesity_validate_certs: False
        cohesity_protection:
          state: present
          job_name: "protect_vm"
          storage_domain: "DefaultStorageDomain"
          sources:
            - endpoint: esaabvm1
              paths:
                - includeFilePath: "/path1/"
                  excludeFilePaths:
                    - "/path1/exclude_path1" # This path should be present under /path1
                    - "/path1/exclude_path2" # This path should be present under /path1
                  skipNestedVolumes: False
                - includeFilePath: "/path2"
                  excludeFilePaths:
                    - "/path2/exclude_path1" # This path should be present under /path2
                  skipNestedVolumes: False
          environment: "PhysicalFiles"
      #with_items: "{{ groups['linux'] }}"
```

Expected behavior

Subsequent runs of the playbook should not error. They do unless "excludeFilePaths" is defined.

Actual Behavior

An exception occurred during task execution. To see the full traceback, use -vvv. The error was: KeyError: 'excludeFilePaths'
fatal: [127.0.0.1]: FAILED! => {"changed": false, "module_stderr": "Traceback (most recent call last):\n File "/home/user/.ansible/tmp/ansible-tmp-1610495163.82-10786-279261066461986/AnsiballZ_cohesity_job.py", line 102, in \n _ansiballz_main()\n File "/home/user/.ansible/tmp/ansible-tmp-1610495163.82-10786-279261066461986/AnsiballZ_cohesity_job.py", line 94, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File "/home/user/.ansible/tmp/ansible-tmp-1610495163.82-10786-279261066461986/AnsiballZ_cohesity_job.py", line 40, in invoke_module\n runpy.run_module(mod_name='ansible.modules.cohesity_job', init_globals=None, run_name='main', alter_sys=True)\n File "/usr/lib64/python2.7/runpy.py", line 176, in run_module\n fname, loader, pkg_name)\n File "/usr/lib64/python2.7/runpy.py", line 82, in _run_module_code\n mod_name, mod_fname, mod_loader, pkg_name)\n File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code\n exec code in run_globals\n File "/tmp/ansible_cohesity_job_payload_pnR1_H/ansible_cohesity_job_payload.zip/ansible/modules/cohesity_job.py", line 1150, in \n File "/tmp/ansible_cohesity_job_payload_pnR1_H/ansible_cohesity_job_payload.zip/ansible/modules/cohesity_job.py", line 951, in main\n File "/tmp/ansible_cohesity_job_payload_pnR1_H/ansible_cohesity_job_payload.zip/ansible/modules/cohesity_job.py", line 820, in update_job_util\nKeyError: 'excludeFilePaths'\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
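Until the KeyError is fixed, one workaround consistent with the report (untested, and assuming the role accepts an empty list) is to declare excludeFilePaths explicitly for every included path, even when there is nothing to exclude:

```yaml
paths:
  - includeFilePath: "/path1/"
    excludeFilePaths: []  # explicit empty list so the key always exists
    skipNestedVolumes: False
```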

Agent installation: Failed to remove an orphaned Cohesity Agent service which is still running

Hi Cohesity!
I'm just beginning to test your Ansible role and am running into problems during the agent installation:
```
...
2019-01-18 09:45:44,848 p=20254 u=kortstie | changed: [xxx]
2019-01-18 09:45:44,918 p=20254 u=kortstie | TASK [Uninstall Cohesity Agent from each Linux Server] ***
2019-01-18 09:45:45,185 p=20254 u=kortstie | TASK [cohesity.ansible : Install Prerequisite Packages for CentOS] ***
2019-01-18 09:45:45,301 p=20254 u=kortstie | skipping: [xxx]
2019-01-18 09:45:45,368 p=20254 u=kortstie | TASK [cohesity.ansible : Install Prerequisite Packages for Ubuntu] ***
2019-01-18 09:45:45,451 p=20254 u=kortstie | skipping: [xxx]
2019-01-18 09:45:45,531 p=20254 u=kortstie | TASK [cohesity.ansible : Cohesity agent: Set Agent to state of absent] ***
2019-01-18 09:45:52,761 p=20254 u=kortstie | changed: [xxx]
2019-01-18 09:45:52,829 p=20254 u=kortstie | TASK [Install new Cohesity Agent on each Linux Server] ***
2019-01-18 09:45:53,035 p=20254 u=kortstie | TASK [cohesity.ansible : Install Prerequisite Packages for CentOS] ***
2019-01-18 09:45:53,122 p=20254 u=kortstie | skipping: [xxx]
2019-01-18 09:45:53,186 p=20254 u=kortstie | TASK [cohesity.ansible : Install Prerequisite Packages for Ubuntu] ***
2019-01-18 09:45:53,266 p=20254 u=kortstie | skipping: [xxx]
2019-01-18 09:45:53,335 p=20254 u=kortstie | TASK [cohesity.ansible : Cohesity agent: Set Agent to state of present] ***
2019-01-18 09:45:53,810 p=20254 u=kortstie | fatal: [xxx]: FAILED! => {"Failed": true, "changed": false, "check_agent": {"stderr": "kill: sending signal to 21526 failed: No such process\n", "stdout": ""}, "msg": "Failed to remove an orphaned Cohesity Agent service which is still running", "process_id": "21526", "state": "present", "version": false}
2019-01-18 09:45:53,858 p=20254 u=kortstie | to retry, use: --limit @/opt/ansible/roles/cohesity/cohesity.retry
```

I checked the system, but no Cohesity processes were running and there was no PID 21526...

What is going wrong here?

Best regards from Germany

Frank

Capacity, snapshots and Restore

โ“ Questions and Requests

Hey Folks,

A couple of questions

  1. Module: cohesity_facts: Storage Domain Capacity
  • The cohesity_facts module doesn't seem to return the Storage Domain's "usagePerfStats", "localUsagePerfStats", "logicalStats" and "dataUsageStats". These are available via the API "/public/viewBoxes/{id}".
  • Is this something you can add to the module, or does it already return these values and I'm not seeing it?
  • What would be good is to get the Storage Domain's physical/logical capacity for the cluster.
  2. Module: cohesity_job: Delete Backup Snapshot for View
  • I couldn't find an option to delete a snapshot for a View. I am able to delete the entire protection job but not any individual snapshots within the protection job.
  3. Module: cohesity_restore_file: Restore files for Views
  • Is there functionality to do this with the current module? Under environment I don't see an option for Views.

Appreciate your help.
Thanks
ec


Nutanix acropolis support

Hello,

Do you know when Nutanix Acropolis will be supported in this module?
My infrastructure is moving to full Nutanix, so this module could be really good for me.

Regards,

Fabien.

How to add a server to existing protection job without specifying all servers

In my scenario I have a protection job named TEST_JOB_JT that has 5 VMs protected. I would like to use this Ansible module to add a 6th server named "my-new-server" to the job, so I tried this:

```yaml
- name: Add server to protection job
  cohesity_job:
    cluster: "{{ cohesity_cluster }}"
    username: "{{ cohesity_api_user }}"
    password: "{{ cohesity_api_pass }}"
    validate_certs: false
    state: present
    name: TEST_JOB_JT
    environment: VMware
    include:
      - my-new-server
  register: cohesity_protection
```

This unfortunately changes the job to only have "my-new-server" in it; the original 5 servers are removed. Is this possible, or do I need to list all servers in the job under the "include:" section? If so, how do you get the server names in a job? The cohesity_facts module lists job sources as "sourceIds" with numeric IDs instead of VM names.
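As the question implies, include: appears to replace the job's source list rather than append to it, so the current workaround would be to enumerate every VM on each run (the names below are hypothetical placeholders):

```yaml
include:
  - existing-vm-1
  - existing-vm-2
  - existing-vm-3
  - existing-vm-4
  - existing-vm-5
  - my-new-server
```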

Thanks,
Josh

Cohesity agent: Set Agent to state of present - Errors 'HTTPMessage' object has no attribute 'dict'

TASK [cohesity.cohesity_ansible_role : Cohesity agent: Set Agent to state of present] *****************************************************************************************************
task path: /home/adminlocal/cohesity-agent.playbook/.roles/cohesity.cohesity_ansible_role/tasks/agent.yml:54
<10.54.2.11> ESTABLISH SSH CONNECTION FOR USER: matthew.williams
<10.54.2.11> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="matthew.williams"' -o ConnectTimeout=10 -o StrictHostKeyChecking=no -o ControlPath=/home/adminlocal/.ansible/cp/9d1db22039 10.54.2.11 '/bin/sh -c '"'"'echo ~matthew.williams && sleep 0'"'"''
<10.54.2.11> (0, b'/home/matthew.williams\n', b'')
<10.54.2.11> ESTABLISH SSH CONNECTION FOR USER: matthew.williams
<10.54.2.11> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="matthew.williams"' -o ConnectTimeout=10 -o StrictHostKeyChecking=no -o ControlPath=/home/adminlocal/.ansible/cp/9d1db22039 10.54.2.11 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo /home/matthew.williams/.ansible/tmp/ansible-tmp-1584050465.9643524-67940419594162 `" && echo ansible-tmp-1584050465.9643524-67940419594162="` echo /home/matthew.williams/.ansible/tmp/ansible-tmp-1584050465.9643524-67940419594162 `" ) && sleep 0'"'"''
<10.54.2.11> (0, b'ansible-tmp-1584050465.9643524-67940419594162=/home/matthew.williams/.ansible/tmp/ansible-tmp-1584050465.9643524-67940419594162\n', b'')
Using module file /home/adminlocal/cohesity-agent.playbook/.roles/cohesity.cohesity_ansible_role/library/cohesity_agent.py
<10.54.2.11> PUT /home/adminlocal/.ansible/tmp/ansible-local-44424bnwzbh71/tmp1aqn1yee TO /home/matthew.williams/.ansible/tmp/ansible-tmp-1584050465.9643524-67940419594162/AnsiballZ_cohesity_agent.py
<10.54.2.11> SSH: EXEC sftp -b - -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="matthew.williams"' -o ConnectTimeout=10 -o StrictHostKeyChecking=no -o ControlPath=/home/adminlocal/.ansible/cp/9d1db22039 '[10.54.2.11]'
<10.54.2.11> (0, b'sftp> put /home/adminlocal/.ansible/tmp/ansible-local-44424bnwzbh71/tmp1aqn1yee /home/matthew.williams/.ansible/tmp/ansible-tmp-1584050465.9643524-67940419594162/AnsiballZ_cohesity_agent.py\n', b'')
<10.54.2.11> ESTABLISH SSH CONNECTION FOR USER: matthew.williams
<10.54.2.11> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="matthew.williams"' -o ConnectTimeout=10 -o StrictHostKeyChecking=no -o ControlPath=/home/adminlocal/.ansible/cp/9d1db22039 10.54.2.11 '/bin/sh -c '"'"'chmod u+x /home/matthew.williams/.ansible/tmp/ansible-tmp-1584050465.9643524-67940419594162/ /home/matthew.williams/.ansible/tmp/ansible-tmp-1584050465.9643524-67940419594162/AnsiballZ_cohesity_agent.py && sleep 0'"'"''
<10.54.2.11> (0, b'', b'')
<10.54.2.11> ESTABLISH SSH CONNECTION FOR USER: matthew.williams
<10.54.2.11> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="matthew.williams"' -o ConnectTimeout=10 -o StrictHostKeyChecking=no -o ControlPath=/home/adminlocal/.ansible/cp/9d1db22039 -tt 10.54.2.11 '/bin/sh -c '"'"'sudo -H -S -n  -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-pqtrkixaiyulmlwoiotousljiadztaou ; /usr/bin/python3 /home/matthew.williams/.ansible/tmp/ansible-tmp-1584050465.9643524-67940419594162/AnsiballZ_cohesity_agent.py'"'"'"'"'"'"'"'"' && sleep 0'"'"''
Escalation succeeded
<10.54.2.11> (1, b'\r\n{"msg": "Unexpected error caused while managing the Cohesity Module.", "error_details": "\'HTTPMessage\' object has no attribute \'dict\'", "error_class": "AttributeError", "failed": true, "exception": "  File \\"/tmp/ansible_cohesity_agent_payload_vl_pidi7/__main__.py\\", line 262, in download_agent\\n    resp_headers = agent.info().dict\\n", "invocation": {"module_args": {"cluster": "dk-bl-cohesity.secmet.co", "username": "cohesitybackups", "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "validate_certs": false, "state": "present", "service_user": "cohesityagent", "service_group": "cohesityagent", "create_user": true, "download_location": "", "native_package": false, "download_uri": "", "operating_system": "Ubuntu", "file_based": false}}}\r\n', b'Shared connection to 10.54.2.11 closed.\r\n')
<10.54.2.11> Failed to connect to the host via ssh: Shared connection to 10.54.2.11 closed.
<10.54.2.11> ESTABLISH SSH CONNECTION FOR USER: matthew.williams
<10.54.2.11> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="matthew.williams"' -o ConnectTimeout=10 -o StrictHostKeyChecking=no -o ControlPath=/home/adminlocal/.ansible/cp/9d1db22039 10.54.2.11 '/bin/sh -c '"'"'rm -f -r /home/matthew.williams/.ansible/tmp/ansible-tmp-1584050465.9643524-67940419594162/ > /dev/null 2>&1 && sleep 0'"'"''
<10.54.2.11> (0, b'', b'')
The full traceback is:
  File "/tmp/ansible_cohesity_agent_payload_vl_pidi7/__main__.py", line 262, in download_agent
    resp_headers = agent.info().dict

fatal: [hq-it-phv-tools-1]: FAILED! => {
    "changed": false,
    "error_class": "AttributeError",
    "error_details": "'HTTPMessage' object has no attribute 'dict'",
    "invocation": {
        "module_args": {
            "cluster": "dk-bl-cohesity.secmet.co",
            "create_user": true,
            "download_location": "",
            "download_uri": "",
            "file_based": false,
            "native_package": false,
            "operating_system": "Ubuntu",
            "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
            "service_group": "cohesityagent",
            "service_user": "cohesityagent",
            "state": "present",
            "username": "cohesitybackups",
            "validate_certs": false
        }
    },
    "msg": "Unexpected error caused while managing the Cohesity Module."
}

Just reporting this, since I am not seeing a resolution or a specific error in my implementation. This came from running the playbook call from the installation instructions found here: https://cohesity.github.io/cohesity-ansible-role/#/how-to-use

I have tested all versions of the role, both numbered and master. Not sure what I am missing.

Include Archive Log Keep Days for the Oracle Protection sources

🚀 Feature Request

Please include a feature to configure whether the archive logs should be deleted from the Oracle source and, if yes, how many days to keep the archive logs on the Oracle server.


endpoint sets itself to null in cohesity_job module

I am attempting to create backup jobs using the cohesity_job module and am getting the error "Unexpected error caused while managing the Cohesity Module.", with an error-details message telling me to specify one or more sources to back up, even though endpoints are included in my playbook.

Running with the -vvv switch, I see that the endpoint is somehow being set to null when the module is invoked, even though it is a simple string in my playbook. I tried adding additional items to the list, and they do appear, but all of the values for the endpoint key are switched to null.

errors

```
fatal: [localhost]: FAILED! => {
    "changed": false,
    "error_class": "bytes",
    "error_details": "b'{\"errorCode\":\"KValidationError\",\"message\":\"Please specify one or more sources to backup.\"}\\n'",
    "invocation": {
        "module_args": {
            "cancel_active": false,
            "cluster": "10.181.224.10",
            "delete_backups": false,
            "description": null,
            "environment": "VMware",
            "name": "test",
            "ondemand_run_type": "Regular",
            "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
            "protection_policy": "Gold",
            "protection_sources": [
                {
                    "endpoint": null
                }
            ],
            "start_time": null,
            "state": "present",
            "storage_domain": "DefaultStorageDomain",
            "time_zone": "America/Los_Angeles",
            "username": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
            "validate_certs": false
        }
    },
    "msg": "Unexpected error caused while managing the Cohesity Module."
}
```

playbook

```yaml
---
- name: Create backup jobs
  hosts: localhost
  gather_facts: no
  become: false

  tasks:
  - name: create new protection job
    cohesity_job:
      cluster: _cluster-vip_
      username: _username_
      password: _password_
      state: present
      name: test
      environment: VMware
      protection_sources:
      - endpoint: vcsa01.skunkworkx.cloud
      protection_policy: Gold
      storage_domain: DefaultStorageDomain
```

[feature-request] - cohesity_restore_vm: module should select the protection job on its own

When I call the cohesity_restore_vm module and leave out the job_name parameter, the module exits with the following error:
```
...
/tmp/ansible-tmp-1549545897.5-200667828612790/AnsiballZ_cohesity_restore_vm.py", line 48, in invoke_module\n imp.load_module('main', mod
, module, MOD_DESC)\n File "/tmp/ansible_cohesity_restore_vm_payload_61RW3z/main.py", line 627, in \n File "/tmp/ansible_cohesity_restore_vm_payload_61RW3z/main.py", line 478, in main\nTypeError: unsupported operand
type(s) for +: 'NoneType' and 'str'\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
```

The module should select the most recent snapshot from any existing protection job if the job_name parameter is not given.
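In other words, the requested behavior is that a call like the following would restore from the newest available snapshot. The parameters other than the omitted job_name are illustrative assumptions, not the module's confirmed interface:

```yaml
- cohesity_restore_vm:
    cluster: "{{ cohesity_server }}"
    username: "{{ cohesity_admin }}"
    password: "{{ cohesity_password }}"
    state: present
    name: "Restore vm1"
    environment: VMware
    vm_names:
      - myvm1.my.domain
    # job_name deliberately omitted: the module should fall back to the
    # most recent snapshot from any job protecting these VMs (requested feature)
```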

[feature request] - cohesity_job: limit the vm's added managed by a job

I suggest adding a parameter "vm_names" to the cohesity_job module to limit the virtual machines managed by a protection job.
Example:

```yaml
- cohesity_job:
    cluster: "{{ var_cohesity_server }}"
    username: "{{ var_cohesity_admin }}"
    password: "{{ var_cohesity_password }}"
    ....
    protection_sources:
      - endpoint: another.v.center
        vm_names:
          - myvm1.my.domain
          - myvm2.my.domain
    ....
```

[feature request] - cohesity_agent: agent installation on isolated hosts / DMZ

We have many web servers which can't reach the Cohesity API directly, so I can't install the agent with the Ansible role.
In the past I wrote my own playbook which copied the agent installer to these systems and then installed / upgraded it locally on the web server...
A function integrated into your role to manage such isolated hosts would be really helpful!

Refresh source for Newly Created VM

Can I add a "refresh source" function to the cohesity_source or cohesity_job module?

Hello,

I'd like to automate creating VM and then creating a protection job for the new VM.
However, a VM newly registered to vCenter isn't visible unless the source is refreshed manually.
So could a function be added to the cohesity_source or cohesity_job module?

Which one is the best?

Example or reference

```
def refresh_source(module, self):
```

Creation Protection Policy with archival

I get an error when I try to create a protection policy with archival.

Here is the error:

```
fatal: [localhost]: FAILED! => {"changed": false, "error_class": "TypeError", "error_details": "must be str, not NoneType", "msg": "Unexpected error caused while managing the Cohesity Module."}
```

Here is my create_policy.yml:

```yaml
# Playbook to create protection policy.
- hosts: localhost
  gather_facts: true
  become: false
  collections:
    - cohesity.dataprotect
  tasks:
    - name: Create protection policy to cohesity server.
      cohesity_policy:
        cluster: "cohesity.xxxxxxxxxxx" #"{{ cohesity_server }}"
        username: "xxxxxxxxxxxxxxxxx" #"{{ cohesity_username }}"
        password: "xxxxxxxxxxxxxxxxx" #"{{ cohesity_password }}"
        validate_certs: false
        name: 'Ansible'
        state: 'present'
        incremental_backup_schedule:
          periodicity: Daily
        archival_copy: [
          target_name: "archive",
          target_type: 'Nas',
          days_to_retain: 1,
          periodicity: 'Month',
          copy_partial: true,
          multiplier: 1
        ]
```

Can anyone help me find the trouble?
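One thing worth checking (a hypothesis, not a confirmed fix): in the flow-style `archival_copy: [ key: value, ... ]` form, YAML parses each `key: value` pair as a separate single-key mapping, so target_name, target_type, etc. end up in six different dictionaries, which could plausibly leave the module reading a missing key as None and raising "must be str, not NoneType". A block-style list containing one mapping keeps all the keys together:

```yaml
# Same values, restructured as a list with a single mapping (untested sketch)
archival_copy:
  - target_name: "archive"
    target_type: 'Nas'
    days_to_retain: 1
    periodicity: 'Month'
    copy_partial: true
    multiplier: 1
```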
