dfhawthorne / ansible-ocm12c
Ansible playbooks and associated files for constructing an Oracle 12c RAC environment
License: MIT License
Ping tests show connectivity across the interconnect, but large packets are dropped:
64 bytes from 192.168.2.141: icmp_seq=96 ttl=64 time=0.225 ms
64 bytes from 192.168.2.141: icmp_seq=97 ttl=64 time=0.204 ms
^C
--- 192.168.2.141 ping statistics ---
97 packets transmitted, 97 received, 0% packet loss, time 96002ms
rtt min/avg/max/mdev = 0.151/0.203/0.435/0.032 ms
[douglas@redfern1 ~]$ ping -s 8192 192.168.2.141
PING 192.168.2.141 (192.168.2.141) 8192(8220) bytes of data.
^C
--- 192.168.2.141 ping statistics ---
10 packets transmitted, 0 received, 100% packet loss, time 8999ms
OUI produces output similar to the following:
Starting Oracle Universal Installer...
Checking Temp space: must be greater than 415 MB. Actual 42743 MB Passed
Checking swap space: must be greater than 150 MB. Actual 5119 MB Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2018-05-08_06-40-23PM. Please wait ...You can find the log of this install session at:
/opt/app/oraInventory/logs/installActions2018-05-08_06-40-23PM.log
The installation of Oracle Grid Infrastructure 12c was successful.
Please check '/opt/app/oraInventory/logs/silentInstall2018-05-08_06-40-23PM.log' for more details.
As a root user, execute the following script(s):
1. /opt/app/oraInventory/orainstRoot.sh
2. /opt/app/12.1.0/grid/root.sh
Successfully Setup Software.
As install user, execute the following script to complete the configuration.
1. /opt/app/12.1.0/grid/cfgtoollogs/configToolAllCommands RESPONSE_FILE=<response_file>
Note:
1. This script must be run on the same host from where installer was run.
2. This script needs a small password properties file for configuration assistants that require passwords (refer to install guide documentation).
The root script files that need to be run are listed in this output.
Instead of hard coding these values, it may be possible to extract them from the OUI output.
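One way (a sketch) is to pull those paths out of the registered installer output with a regular expression; the register variable name oui_install is an assumption:

```yaml
# Sketch: extract the numbered root script paths (e.g. orainstRoot.sh,
# root.sh) from registered OUI output. "oui_install" is a hypothetical
# register variable for the installer task.
- name: Extract root script paths from OUI output
  set_fact:
    root_scripts: "{{ oui_install.stdout | regex_findall('^\\s*\\d+\\.\\s+(\\S+\\.sh)$', multiline=True) }}"
```

Each matched path could then be run in turn with a command task under become: yes.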
On a greenfield installation, the validation of TFA fails with the following messages:
TASK [oracle_gi : Get Current Version of TFA Tool] *****************************
fatal: [redfern1.yaocm.id.au]: FAILED! => {"changed": false, "module_stderr": "Shared connection to redfern1.yaocm.id.au closed.
", "module_stdout": "
Traceback (most recent call last):
File "/tmp/ansible_tyocfm/ansible_module_command.py", line 248, in <module>
main()
File "/tmp/ansible_tyocfm/ansible_module_command.py", line 192, in main
os.chdir(chdir)
OSError: [Errno 2] No such file or directory: '/opt/app/12.1.0/grid/tfa/bin'
", "msg": "MODULE FAILURE", "rc": 1}
sudo -u oracle /opt/share/Software/grid/linuxamd64_12102/grid/runcluvfy.sh stage -pre crsinst -n redfern1
fails with:
[sudo] password for douglas:
Performing pre-checks for cluster services setup
Checking node reachability...
Node reachability check passed from node "redfern1"
Checking user equivalence...
PRVG-2019 : Check for equivalence of user "oracle" from node "redfern1" to node "redfern1" failed
TASK [oracle_gi : Output from CLUVFY Post CRS Installation Check] **************************
ok: [redfern1.yaocm.id.au] => {
"cluvfy_stage_post_crsinst.stdout_lines": [
"",
"PRVG-8003 : Unable to retrieve node list from the Oracle Clusterware.",
"",
"",
"Post-check for cluster services setup was unsuccessful on all the nodes. ",
"",
"CVU operation performed: stage -post crsinst",
"Date: 21/06/2018 7:46:29 PM",
"CVU home: /opt/app/12.1.0/cluvfy/bin/../",
"User: oracle"
]
}
Full run fails with:
RUNNING HANDLER [oracle_gi : restart ntpd] *************************************
fatal: [redfern1.yaocm.id.au]: FAILED! => {"changed": false, "msg": "Unable to restart service ntpd: Failed to restart ntpd.service: Interactive authentication required.\nSee system logs and 'systemctl status ntpd.service' for details.\n"}
to retry, use: --limit @/etc/ansible/ansible-ocm12c/sites.retry
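"Interactive authentication required" from systemd usually means the unit was restarted without privilege escalation; a likely fix (a sketch) is to run the handler with become:

```yaml
# Sketch: restart ntpd with privilege escalation so systemd does not
# require interactive authentication.
- name: restart ntpd
  become: yes
  service:
    name: ntpd
    state: restarted
```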
TASK [oracle_gi : Output of CLUVFY Post HWOS Stage Check] **********************
ok: [redfern1.yaocm.id.au] => {
"cluvfy_stage_post_hwos.stdout_lines": [
"",
"Verifying Node Connectivity ...",
" Verifying Hosts File ...PASSED",
" Verifying Check that maximum (MTU) size packet goes through subnet ...PASSED",
"Verifying Node Connectivity ...PASSED",
"Verifying Multicast check ...PASSED",
"Verifying Users With Same UID: 0 ...PASSED",
"Verifying Time zone consistency ...PASSED",
"Verifying Shared Storage Accessibility:/dev/oracleasm/disks/VOTE ...FAILED (PRVG-11502)",
"Verifying DNS/NIS name service ...PASSED",
"",
"Post-check for hardware and operating system setup was unsuccessful on all the nodes. ",
"",
"",
"Failures were encountered during execution of CVU verification request \"stage -post hwos\".",
"",
"Verifying Shared Storage Accessibility:/dev/oracleasm/disks/VOTE ...FAILED",
"PRVG-11502 : Path \"/dev/oracleasm/disks\" of type \"File System\" is not suitable",
"for usage as RAC database file for release \"12.2\". Supported storage types are",
"\"NFS, ASM Disk Group, OCFS2, VXFS, ACFS\".",
"",
"",
"CVU operation performed: stage -post hwos",
"Date: 20/06/2018 8:42:09 PM",
"CVU home: /opt/app/12.1.0/cluvfy/bin/../",
"User: oracle"
]
}
TASK [oracle_gi : fail] ********************************************************
fatal: [redfern1.yaocm.id.au]: FAILED! => {"changed": false, "msg": "CLUVFY for installation failed"}
ASM Disks had to be mounted manually:
[root@redfern1 ~]# oracleasm listdisks
[root@redfern1 ~]# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
Instantiating disk "FRA"
Instantiating disk "DATA"
Instantiating disk "REDO1"
Instantiating disk "REDO2"
Instantiating disk "VOTE"
[root@redfern1 ~]# oracleasm listdisks
DATA
FRA
REDO1
REDO2
VOTE
Run the Restart NTPD handler after TASK [oracle_gi : Ensure NTPD is running and enabled].
During pre-installation check for CRS, the command
/opt/share/Software/grid/linuxamd64_12102/grid/runcluvfy.sh stage -pre crsinst -n redfern1
fails with:
Starting check for /dev/shm mounted as temporary file system ...
ERROR:
PRVE-0421 : No entry exists in /etc/fstab for mounting /dev/shm
Check for /dev/shm mounted as temporary file system failed
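PRVE-0421 can be addressed by giving /dev/shm an explicit /etc/fstab entry; a minimal sketch (the tmpfs size is an assumption and should be set to suit the SGA):

```yaml
# Sketch: ensure /dev/shm is listed in /etc/fstab. The size option is
# an assumption; size it for the intended SGA.
- name: Mount /dev/shm with an /etc/fstab entry
  become: yes
  mount:
    path: /dev/shm
    src: tmpfs
    fstype: tmpfs
    opts: defaults,size=4g
    state: mounted
```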
When investigating issue #18, I got the following messages:
**************** General Advice ****************
*#* The TFA Version 12.2.1.2.0 in "redfern1.tfa_Mon_Apr_16_20_50_43_AEST_2018.zip" is more than 120 days old.
*#*
*#* Unless you are on ZDLRA, please download and install the latest TFA Support Tools Bundle from Note:1594347.1
************************************************
`ansible-playbook --ask-become-pass sites.yml` fails with:
TASK [oracle_user : Set authorized key from file] ******************************
fatal: [redfern1.yaocm.id.au]: FAILED! => {"changed": false, "msg": "The pexpect python module is required"}
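The expect-based tasks need the pexpect Python module on the managed node; one way to provide it (a sketch) is:

```yaml
# Sketch: install the pexpect module that the Ansible expect machinery
# needs on the managed node.
- name: Install pexpect
  become: yes
  pip:
    name: pexpect
```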
Failed to add redfern2 to the known hosts file on redfern2:
TASK [oracle_user : Check known hosts file for entry related to redfern2.yaocm.id.au] **********************************************************************************************************************
ok: [redfern2.yaocm.id.au] => {"changed": false, "cmd": ["ssh-keygen", "-f", "/home/oracle/.ssh/known_hosts", "-F", "redfern2.yaocm.id.au"], "delta": "0:00:00.019585", "end": "2018-06-16 22:16:49.314079", "failed_when_result": false, "msg": "non-zero return code", "rc": 1, "start": "2018-06-16 22:16:49.294494", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
TASK [oracle_user : Add host name (redfern2.yaocm.id.au) to known hosts file for oracle] *******************************************************************************************************************
fatal: [redfern2.yaocm.id.au]: FAILED! => {"changed": true, "cmd": "/bin/ssh-copy-id \"redfern2.yaocm.id.au\" -f", "delta": "0:00:00.132080", "end": "2018-06-16 22:16:50.052453", "msg": "non-zero return code", "rc": 1, "start": "2018-06-16 22:16:49.920373", "stdout": "\r\n/bin/ssh-copy-id: ERROR: failed to open ID file '/home/oracle/.pub': No such file or directory", "stdout_lines": ["", "/bin/ssh-copy-id: ERROR: failed to open ID file '/home/oracle/.pub': No such file or directory"]}
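The '/home/oracle/.pub' error suggests ssh-copy-id never received an identity file; a likely fix (a sketch; the key path is an assumption) is to name the key explicitly and put the options before the host:

```yaml
# Sketch: pass the public key explicitly with -i and keep options
# before the host name. The key path is an assumption.
- name: Add host name (redfern2.yaocm.id.au) to known hosts file for oracle
  command: /bin/ssh-copy-id -f -i /home/oracle/.ssh/id_rsa.pub redfern2.yaocm.id.au
```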
yum install nscd
fails because YUM mirrors cannot be contacted. See attached log.
nscd_install_oel6.log
Startup of REDFERN2
fails with the following Xen messages:
Parsing config from /OVS/running_pool/REDFERN2/redfern2.cfg
libxl: error: libxl_exec.c:118:libxl_report_child_exitstatus: /etc/xen/scripts/block add [-1] exited with error status 1
libxl: error: libxl_device.c:1141:device_hotplug_child_death_cb: script: File /OVS/shareDisk/REDFERN/FRA_01 is loopback-mounted through /dev/loop3,
which is mounted in a guest domain,
and so cannot be mounted now.
New release of TFA is available.
Full run of playbook (sites.yml) failed with:
TASK [oracle_gi : Run root.sh on first node] ***********************************
fatal: [redfern1.yaocm.id.au]: FAILED! => {"changed": true, "cmd": ["/opt/app/12.1.0/grid/root.sh"], "delta": "0:25:41.584772", "end": "2018-04-14 20:33:20.230517", "msg": "non-zero return code", "rc": 25, "start": "2018-04-14 20:07:38.645745", "stderr": "", "stderr_lines": [], "stdout": "Check /opt/app/12.1.0/grid/install/root_redfern1.yaocm.id.au_2018-04-14_20-07-38.log for the output of root script", "stdout_lines": ["Check /opt/app/12.1.0/grid/install/root_redfern1.yaocm.id.au_2018-04-14_20-07-38.log for the output of root script"]}
to retry, use: --limit @/etc/ansible/ansible-ocm12c/sites.retry
During pre-installation check for CRS, the command
/opt/share/Software/grid/linuxamd64_12102/grid/runcluvfy.sh stage -pre crsinst -n redfern1
fails with:
Checking multicast communication...
Checking subnet "192.168.1.0" for multicast communication with multicast group "224.0.0.251"...
PRVG-11138 : Interface "192.168.1.140" on node "redfern1" is not able to communicate with interface "192.168.1.140" on node "redfern1" over multicast group "224.0.0.251"
Checking subnet "192.168.2.0" for multicast communication with multicast group "224.0.0.251"...
PRVG-11138 : Interface "192.168.2.140" on node "redfern1" is not able to communicate with interface "192.168.2.140" on node "redfern1" over multicast group "224.0.0.251"
Following the resolution of issue #18, the full run of sites.yml shows:
TASK [oracle_user : Creating the Oracle Home and Oracle Base Directory]
***************************************************************************************
changed: [redfern1.yaocm.id.au] => (item=app/12.1.0/grid)
ok: [redfern1.yaocm.id.au] => (item=app/grid)
ok: [redfern1.yaocm.id.au] => (item=app/oracle)
After sites.yml runs, the permissions on this directory are:
drwxr-xr-x. 76 oracle oinstall 4096 Apr 14 20:11 /opt/app/12.1.0/grid
After the root.sh script is run, the permissions on this directory are:
drwxr-xr-x. 76 root oinstall 4096 Apr 14 20:11 /opt/app/12.1.0/grid
For the ORACLE user on the REDFERN cluster,
New system installation fails with:
TASK [oracle_gi : Run Verification Checks for CRS Installation] ****************
ok: [redfern1.yaocm.id.au]
TASK [oracle_gi : debug] *******************************************************
ok: [redfern1.yaocm.id.au] => {
"cluvfy_stage_post_hwos.stdout_lines": [
"",
"Verifying Node Connectivity ...",
" Verifying Hosts File ...PASSED",
" Verifying Check that maximum (MTU) size packet goes through subnet ...PASSED",
"Verifying Node Connectivity ...PASSED",
"Verifying Multicast check ...PASSED",
"Verifying Users With Same UID: 0 ...PASSED",
"Verifying Time zone consistency ...PASSED",
"Verifying Shared Storage Discovery ...FAILED (PRVF-4100)",
"Verifying DNS/NIS name service ...PASSED",
"",
"Post-check for hardware and operating system setup was unsuccessful on all the nodes. ",
"",
"",
"Failures were encountered during execution of CVU verification request \"stage -post hwos\".",
"",
"Verifying Shared Storage Discovery ...FAILED",
"PRVF-4100 : Shared storage check failed on nodes \"redfern1\"",
"",
"",
"CVU operation performed: stage -post hwos",
"Date: 12/05/2018 9:46:02 PM",
"CVU home: /opt/app/12.1.0/cluvfy/bin/../",
"User: oracle"
]
}
TASK [oracle_gi : fail] ********************************************************
fatal: [redfern1.yaocm.id.au]: FAILED! => {"changed": false, "msg": "CLUVFY for installation failed"}
Full run of the playbook fails with:
TASK [hugepages : Validate Transparent Huge Pages is disabled] *****************
fatal: [redfern1.yaocm.id.au]: FAILED! => {"changed": false, "msg": "Transparent Huge Pages is enabled"}
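THP can be turned off for the running kernel as a stop-gap (a sketch; a permanent fix needs the transparent_hugepage=never kernel boot parameter):

```yaml
# Sketch: disable THP at runtime; persist it with the
# transparent_hugepage=never kernel boot parameter.
- name: Disable Transparent Huge Pages at runtime
  become: yes
  shell: echo never > /sys/kernel/mm/transparent_hugepage/enabled
```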
ansible-playbook --ask-become-pass --tags user_equivalency sites.yml
always gives:
SUDO password:
PLAY [oracle_rac] **************************************************************
TASK [Gathering Facts] *********************************************************
ok: [redfern1.yaocm.id.au]

TASK [oracle_user : Add current long host name to known hosts] *****************
changed: [redfern1.yaocm.id.au] => (item=redfern1.yaocm.id.au)

TASK [oracle_user : Add current short host name to known hosts] ****************
changed: [redfern1.yaocm.id.au] => (item=redfern1.yaocm.id.au)

PLAY RECAP *********************************************************************
redfern1.yaocm.id.au : ok=3 changed=2 unreachable=0 failed=0
Apply Oracle Critical Patch Update for April 2018 to REDFERN cluster.
The current method merely adds a group to the ORACLE user if it is not already in the group list.
The proper way is to specify the definitive list of groups, with the installation group as the first one.
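A sketch of declaring the definitive list (the group names here are assumptions):

```yaml
# Sketch: set the primary group to the installation group and declare
# the full secondary group list. Group names are assumptions.
- name: Set definitive group list for the oracle user
  user:
    name: oracle
    group: oinstall
    groups: dba,asmdba,asmadmin
    append: no
```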
roles/oracle_user/tasks/user_equivalency.yml lists the host names explicitly instead of deriving them from Ansible facts.
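A sketch of deriving them from inventory instead (oracle_rac matches the host group used by the play in sites.yml):

```yaml
# Sketch: loop over the inventory group members instead of literal
# host names. "oracle_rac" is the play's host group.
- name: Check known hosts file for each cluster node
  command: ssh-keygen -f /home/oracle/.ssh/known_hosts -F {{ hostvars[item].ansible_fqdn }}
  register: known_hosts_check
  changed_when: false
  failed_when: false
  loop: "{{ groups['oracle_rac'] }}"
```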
Configuration of CRS on REDFERN1 Fails with:
TASK [oracle_gi : Create response file for silent tools configuration] *********
changed: [redfern1.yaocm.id.au]
TASK [oracle_gi : Silently configure tools on first node only] *****************
fatal: [redfern1.yaocm.id.au]: FAILED! => {"changed": false, "cmd": "/opt/app/12.1.0/grid/cfgtoollogs/configToolAllCommands RESPONSE_FILE=/opt/app/grid/cfgrsp.properties", "msg": "[Errno 8] Exec format error", "rc": 8}
Full log is attached as redfern1_2018_05_11.log
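"Exec format error" usually means the script has no shebang line; invoking it through a shell (as a later run of this playbook does) avoids the problem:

```yaml
# Sketch: configToolAllCommands lacks a shebang, so run it via /bin/sh.
- name: Silently configure tools on first node only
  command: /bin/sh /opt/app/12.1.0/grid/cfgtoollogs/configToolAllCommands RESPONSE_FILE=/opt/app/grid/cfgrsp.properties
```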
- name: Creating the Oracle Home and Oracle Base Directory
  file:
    path: /opt/{{ item }}
    owner: oracle
    group: oinstall
    mode: 0755
    state: directory
    recurse: yes
  with_items:
    - app/12.1.0/grid
    - app/grid
    - app/oracle
Gives:
TASK [oracle_user : Creating the Oracle Home and Oracle Base Directory] ********
changed: [redfern1.yaocm.id.au] => (item=app/12.1.0/grid)
changed: [redfern1.yaocm.id.au] => (item=app/grid)
ok: [redfern1.yaocm.id.au] => (item=app/oracle)
This also happens in:
- name: "Creates patching directory"
  file:
    path: "{{ patching_dir }}"
    state: directory
    owner: oracle
    group: "{{ oracle_user.install_group.name }}"
    mode: 0770
Which gives:
TASK [oracle_gi : Creates patching directory] **********************************
changed: [redfern1.yaocm.id.au]
But:
- name: Create Mount Point for Oracle Installation Software
  file:
    path: /opt/share/Software
    state: directory
Works fine:
TASK [oracle_user : Create Mount Point for Oracle Installation Software] *******
ok: [redfern1.yaocm.id.au]
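One plausible cause (an assumption) is that recurse: yes re-applies ownership to files created later by the installer or re-owned by root.sh, so the task reports changed on every run; a sketch that manages only the directories themselves:

```yaml
# Sketch: manage only the directory itself. With recurse: yes the task
# re-touches installer-created files and reports "changed" every run.
- name: Creating the Oracle Home and Oracle Base Directory
  file:
    path: /opt/{{ item }}
    owner: oracle
    group: oinstall
    mode: 0755
    state: directory
  with_items:
    - app/12.1.0/grid
    - app/grid
    - app/oracle
```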
TASK [oracle_gi : Silently configure CRS on first node only] *******************
fatal: [redfern1.yaocm.id.au]: FAILED! => {"changed": true, "cmd": ["/opt/app/12.1.0/grid/crs/config/config.sh", "-silent", "-responseFile", "/opt/app/grid/crs_config.rsp", "-waitforcompletion", "-ignorePrereq"], "delta": "0:00:09.687174", "end": "2018-04-18 19:37:50.585126", "msg": "non-zero return code", "rc": 254, "start": "2018-04-18 19:37:40.897952", "stderr": "", "stderr_lines": [], "stdout": "[FATAL] [INS-40401] The Installer has detected a configured Oracle Clusterware home on the system.\n CAUSE: The Installer has detected the presence of Oracle Clusterware software configured on the node.\n ACTION: You can have only one instance of Oracle Clusterware software configured on a node that is part of an existing cluster.", "stdout_lines": ["[FATAL] [INS-40401] The Installer has detected a configured Oracle Clusterware home on the system.", " CAUSE: The Installer has detected the presence of Oracle Clusterware software configured on the node.", " ACTION: You can have only one instance of Oracle Clusterware software configured on a node that is part of an existing cluster."]}
Putting names on DEBUG module calls makes it easier to understand where the output is coming from.
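For example (a sketch, using a task name from this playbook's output):

```yaml
# Sketch: a named debug call so the task header identifies the output.
- name: Output of CLUVFY Post HWOS Stage Check
  debug:
    var: cluvfy_stage_post_hwos.stdout_lines
```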
New system installation fails with:
TASK [oracle_user : Check known hosts file for entry related to redfern1] ******
fatal: [redfern1.yaocm.id.au]: FAILED! => {"changed": false, "cmd": ["ssh-keygen", "-f", "/home/oracle/.ssh/known_hosts", "-F", "redfern1"], "delta": "0:00:00.006030", "end": "2018-05-12 21:24:08.374698", "failed_when_result": true, "msg": "non-zero return code", "rc": 255, "start": "2018-05-12 21:24:08.368668", "stderr": "do_known_hosts: hostkeys_foreach failed: No such file or directory", "stderr_lines": ["do_known_hosts: hostkeys_foreach failed: No such file or directory"], "stdout": "", "stdout_lines": []}
Apply the latest PSU to the Grid Infrastructure (GI) home.
Use the following error message from CLUVFY to detect whether CRS is installed or not:
PRVP-5201 : Cluster Ready Services configuration is not detected. See usage for detail.
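A sketch of testing for that message (cluvfy_check is a hypothetical register variable):

```yaml
# Sketch: treat PRVP-5201 in the cluvfy output as "CRS not configured".
# "cluvfy_check" is a hypothetical register variable.
- name: Determine whether CRS is configured
  set_fact:
    crs_configured: "{{ 'PRVP-5201' not in cluvfy_check.stdout }}"
```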
After the resolution of issue #29, the following messages appeared in a subsequent run of cluvfy:
PRVG-11750 : File "/var/tmp/.oracle/ora_gipc_GPNPD_redfern1" exists
on node "redfern1".
Verifying Grid Infrastructure home path: /opt/app/12.1.0/grid ...FAILED
Verifying '/opt/app/12.1.0/grid' ...FAILED
redfern1: PRVG-11931 : Path "/opt/app/12.1.0/grid" is not writeable on node
"redfern1".
Full log has been uploaded as cluvfy_2018_05_12A.log
Full run of the playbook on REDFERN1 fails with:
TASK [oracle_gi : Create response file for silent tools configuration] *********
changed: [redfern1.yaocm.id.au]
TASK [oracle_gi : Silently configure tools on first node only] *****************
fatal: [redfern1.yaocm.id.au]: FAILED! => {"changed": true, "cmd": ["/bin/sh", "/opt/app/12.1.0/grid/cfgtoollogs/configToolAllCommands", "RESPONSE_FILE=/opt/app/grid/cfgrsp.properties"], "delta": "0:23:33.326162", "end": "2018-05-13 23:43:34.742814", "msg": "non-zero return code", "rc": 3, "start": "2018-05-13 23:20:01.416652", ... "stdout_lines": ["Setting the invPtrLoc to /opt/app/12.1.0/grid/oraInst.loc", "", "perform - mode is starting for action: configure", "", "", "perform - mode finished for action: configure", "", "You can see the log file: /opt/app/12.1.0/grid/cfgtoollogs/oui/configActions2018-05-13_11-20-02-PM.log"]}
The log, configActions2018-05-13_11-20-02-PM.log, shows:
Starting Clock synchronization checks using Network Time Protocol(NTP)...
PRVG-1019 : The NTP configuration file "/etc/ntp.conf" does not exist on nodes "redfern1"
PRVF-5414 : Check of NTP Config file failed on all nodes. Cannot proceed further for the NTP tests
PRVF-7590 : "ntpd" is not running on node "redfern1"
PRVG-1024 : The NTP Daemon or Service was not running on any of the cluster nodes.
PRVF-5415 : Check to see if NTP daemon or service is running failed
Clock synchronization check using Network Time Protocol(NTP) failed
PRVF-9652 : Cluster Time Synchronization Services check failed
oracleasm_init_disk.yml fails to initialise the disk if the partition does not exist at the start of the task.
In this case, oracleasm status /dev/xvdh1 returns:
Unable to access device "/dev/xvdh1"
Following the resolution of issue #18, the following task failed:
TASK [oracle_gi : Run Verification Checks for CRS Installation] *******************************************************
fatal: [redfern1.yaocm.id.au]: FAILED! => {"changed": true, "cmd": ["/opt/app/12.1.0/grid/bin/cluvfy", "stage", "-pre", "crsinst", "-n", "redfern1", "-r", "12.1"], "delta": "0:00:24.887316", "end": "2018-05-01 20:30:34.993424", "msg": "non-zero return code", "rc": 1,
This task runs /opt/app/12.1.0/grid/bin/cluvfy stage -pre crsinst -n redfern1 -r 12.1
which returns the following error message:
Swap space check failed
The resolution of issue #18 increased the memory allocated to the VM from 4164 MB to 8908 MB without increasing the size of the swap file.
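A guard like the following (a sketch; the swap-at-least-RAM rule is an assumption based on Oracle's usual guideline for this memory range) would catch the mismatch before cluvfy does:

```yaml
# Sketch: assert swap keeps pace with memory before running cluvfy.
# The swap >= RAM rule is an assumption for this memory range.
- name: Validate swap space against memory
  assert:
    that:
      - ansible_swaptotal_mb >= ansible_memtotal_mb
    msg: "Swap ({{ ansible_swaptotal_mb }} MB) is smaller than RAM ({{ ansible_memtotal_mb }} MB)"
```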
networks.yml fails with the following error messages:
TASK [oracle_gi : install needed network manager libs] *************************
changed: [redfern1.yaocm.id.au] => (item=[u'NetworkManager-glib'])
TASK [oracle_gi : Configure Public LAN Interface] ******************************
fatal: [redfern1.yaocm.id.au]: FAILED! => {"changed": false, "msg": "Error: Failed to modify connection 'eth0': No such method 'Update2'\n", "name": "eth0", "rc": 1}
Failures were encountered during execution of CVU verification request "stage -pre crsinst"
Log is attached: cluvfy_2018_05_10.log
The variable ORACLE_HOME is used to refer to the installation location for GI. Suggest using CRS_HOME instead.
sudo oracleasm querydisk /dev/xvdd1 returns:
Device "/dev/xvdd1" is not marked as an ASM disk
This should have been done by the task file oracleasm_init_disk.yml
Role (oracle_gi) gets the following warning:
[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using
`result|succeeded`, use `result is succeeded`. This feature will be
removed in version 2.9. Deprecation warnings can be disabled by setting
deprecation_warnings=False in ansible.cfg.
The tasks affected are:
Use the suffix of the host short name to determine whether the node is the first one in the cluster. For example:
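A minimal sketch (it assumes host short names end in the node's ordinal, e.g. redfern1):

```yaml
# Sketch: treat a trailing "1" in the short host name as "first node".
- name: Determine whether this is the first node in the cluster
  set_fact:
    first_node: "{{ ansible_hostname is match('.*1$') }}"
```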
Since the upgrade to Ansible 2.5.5, the networks.yml task file is not correctly setting the eth1 parameters on REDFERN1.
Setup module shows:
"ansible_eth1": {
"active": true,
"device": "eth1",
"features": {
...
},
"hw_timestamp_filters": [],
"macaddress": "00:16:3e:00:00:0f",
"module": "xen_netfront",
"mtu": 1500,
"pciid": "vif-1",
"promisc": false,
"timestamping": [
"rx_software",
"software"
],
"type": "ether"
},
nmcli shows:
[douglas@redfern1 ~]$ nmcli connection show eth1
connection.id: eth1
...
connection.interface-name: eth1
connection.type: 802-3-ethernet
connection.autoconnect: no
...
802-3-ethernet.mtu: 9000
...
ipv4.addresses: 192.168.2.140/24
...
[douglas@redfern1 ~]$ nmcli connection show
NAME UUID TYPE DEVICE
eth0 bfa6bd50-925b-41d9-acdb-5d2a0cac5aec 802-3-ethernet eth0
eth1 92e8601c-bf7c-435e-8595-fbd008083048 802-3-ethernet --
After resolving issue #36, the cluvfy stage -post hwos command gets the following warning:
WARNING (PRVG-1615) : Virtual environment detected. Skipping shared storage check for disks "/dev/oracleasm/disks/VOTE".
Log is:
TASK [oracle_gi : Run Verification Checks for CRS Installation] ****************
ok: [redfern1.yaocm.id.au]
TASK [oracle_gi : debug] *******************************************************
ok: [redfern1.yaocm.id.au] => {
"cluvfy_stage_post_hwos.stdout_lines": [
"",
"Verifying Node Connectivity ...",
" Verifying Hosts File ...PASSED",
" Verifying Check that maximum (MTU) size packet goes through subnet ...PASSED",
"Verifying Node Connectivity ...PASSED",
"Verifying Multicast check ...PASSED",
"Verifying Users With Same UID: 0 ...PASSED",
"Verifying Time zone consistency ...PASSED",
"Verifying Shared Storage Accessibility:/dev/oracleasm/disks/VOTE ...WARNING (PRVG-1615)",
"Verifying DNS/NIS name service ...PASSED",
"",
"Post-check for hardware and operating system setup was successful. ",
"",
"",
"Warnings were encountered during execution of CVU verification request \"stage -post hwos\".",
"",
"Verifying Shared Storage Accessibility:/dev/oracleasm/disks/VOTE ...WARNING",
"PRVG-1615 : Virtual environment detected. Skipping shared storage check for",
"disks \"/dev/oracleasm/disks/VOTE\".",
"",
"",
"CVU operation performed: stage -post hwos",
"Date: 12/05/2018 10:06:10 PM",
"CVU home: /opt/app/12.1.0/cluvfy/bin/../",
"User: oracle"
]
}
Following the resolution of issue #25, the playbook fails with:
TASK [oracle_gi : Extract Installer for Latest version of TFA] *****************
fatal: [redfern1.yaocm.id.au]: FAILED! => {"msg": "The conditional check 'do_install_tfa' failed. The error was: error while evaluating conditional (do_install_tfa): 'do_install_tfa' is undefined\n\nThe error appears to have been in '/etc/ansible/ansible-ocm12c/roles/oracle_gi/tasks/install_tfa.yml': line 66, column 5, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n - name: \"Extract Installer for Latest version of TFA\"\n ^ here\n"}
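Guarding the conditional with a default avoids the undefined-variable failure on a fresh install (a sketch; the archive paths are assumptions):

```yaml
# Sketch: give do_install_tfa a default so the conditional is always
# defined. The archive paths below are assumptions.
- name: Extract Installer for Latest version of TFA
  unarchive:
    src: /opt/share/Software/TFA/installTFA.zip
    dest: /tmp
    remote_src: yes
  when: do_install_tfa | default(false)
```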
During pre-installation check for CRS, the command
/opt/share/Software/grid/linuxamd64_12102/grid/runcluvfy.sh stage -pre crsinst -n redfern1
fails with:
Total memory check failed
Check failed on nodes:
redfern1
The output from the runcluvfy.sh command needs to be scanned for errors that cannot be ignored.
Errors that can be ignored are:
The command is in roles/oracle_gi/tasks/crs_inst.yml
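A sketch of one way to scan the registered output, failing only on error codes not in an ignore list (PRVG-1615, the virtual-environment warning, is used as an example ignorable code; the full list still needs to be decided):

```yaml
# Sketch: fail only on cluvfy PRV*-coded messages that are not in the
# ignore list. PRVG-1615 here is an example ignorable code.
- name: Run CLUVFY pre-CRS check
  command: /opt/share/Software/grid/linuxamd64_12102/grid/runcluvfy.sh stage -pre crsinst -n redfern1
  register: cluvfy_pre_crsinst
  failed_when: >-
    cluvfy_pre_crsinst.stdout_lines
    | select('search', 'PRV[EFG]-')
    | reject('search', 'PRVG-1615')
    | list | length > 0
```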
Full run of the playbook fails with:
TASK [oracle_gi : Run Verification Checks for CRS Installation] ****************
fatal: [redfern1.yaocm.id.au]: FAILED! => {"changed": false, "cmd": "/opt/app/12.1.0/grid/bin/cluvfy stage -pre crsinst -n redfern1 -r 12.1", "msg": "[Errno 2] No such file or directory", "rc": 2}
GI Installer fails with the following message:
TASK [oracle_gi : Install Oracle GI 12.1.0.2 Software Only in Silent Mode] *****
Starting Oracle Universal Installer...
Checking Temp space: must be greater than 415 MB. Actual 42787 MB Passed
Checking swap space: must be greater than 150 MB. Actual 5119 MB Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2018-05-02_07-57-00PM. Please wait ...[FATAL] [INS-32031] Invalid inventory location.
ACTION: Specify a valid inventory location.
[FATAL] [INS-32033] Central Inventory location is not writable.
ACTION: Ensure that the inventory location is writable.
A log of this session is currently saved as: /tmp/OraInstall2018-05-02_07-57-00PM/installActions2018-05-02_07-57-00PM.log. Oracle recommends that if you want to keep this log, you should move it from the temporary location.