It is faster and easier to pass the Oracle 1Z0-058 exam by using Best Quality Oracle Real Application Clusters 11g Release 2 and Grid Infrastructure questions and answers. Get immediate access to the up-to-date 1Z0-058 exam, find the same core-area 1Z0-058 questions with professionally verified answers, and pass your exam with a high score.

2016 Nov 1Z0-058 practice

Q1. Various clients can access and manipulate ASM files. Which two statements are true? 

A. The DBMS_FILE_TRANSFER.COPY_FILE procedure can move a database file from one ASM to another ASM, but not to an operating system file system. 

B. The ASMCMD cp command can move database files from a file system to ASM, but not from ASM to ASM. 

C. The SQL*Plus command ALTER DISKGROUP orcl MOVE '+DATA/orcl/example01.dbf' to '+OLDDATA/orcl/example01.dbf' can move the example01 data file to a different diskgroup. 

D. The DBMS_FILE_TRANSFER.GET_FILE procedure reads an ASM file from a remote machine and makes a local copy on an ASM or a file system. 

E. The ASMCMD rm command will delete ASM files and directories, but not database files on an operating system file system. 

Answer: D,E 

Explanation: 

DBMS_FILE_TRANSFER 

COPY_FILE Procedure 

This procedure reads a file from a source directory and creates a copy of it in a destination directory. The source and destination directories can both be in a local file system, can both be in an Automatic Storage Management (ASM) disk group, or one can be in a local file system and the other in ASM, with copying in either direction. You can copy any type of file to and from a local file system. However, you can copy only database files (such as datafiles, tempfiles, controlfiles, and so on) to and from an ASM disk group. 

GET_FILE Procedure 

This procedure contacts a remote database to read a remote file and then creates a copy of the file in the local file system or ASM. The file that is copied is the source file, and the new file that results from the copy is the destination file. The destination file is not closed until the procedure completes successfully. 

Examples 

CREATE OR REPLACE DIRECTORY df AS '+datafile';
GRANT WRITE ON DIRECTORY df TO "user";
CREATE DIRECTORY DSK_FILES AS ''^t_work^'';
GRANT WRITE ON DIRECTORY dsk_files TO "user";

-- assumes that the dbs2 link has been created and we are connected to the instance.
-- dbs2 could be a loopback or point to another instance.

BEGIN
  -- ASM file to an OS file:
  -- get an ASM file from dbs1.asm/a1 to dbs2.^t_work^/oa5.dat
  DBMS_FILE_TRANSFER.GET_FILE('df', 'a1', 'dbs1', 'dsk_files', 'oa5.dat');
  -- OS file to an OS file:
  -- get an OS file from dbs1.^t_work^/a2.dat to dbs2.^t_work^/a2back.dat
  DBMS_FILE_TRANSFER.GET_FILE('dsk_files', 'a2.dat', 'dbs1', 'dsk_files', 'a2back.dat');
END;
/

Oracle. Database PL/SQL Packages and Types Reference 11g Release 2 (11.2)

ASMCMD cp

Purpose: Enables you to copy files between Oracle ASM disk groups and between a disk group and the operating system. You can use the cp command to:

. Copy files from a disk group to the operating system
. Copy files from a disk group to a disk group
. Copy files from the operating system to a disk group

ASMCMD rm

Purpose: Deletes the specified Oracle ASM files and directories.

Oracle. Automatic Storage Management Administrator's Guide 11g Release 2 (11.2)
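For illustration, a minimal ASMCMD session exercising cp and rm might look like the following sketch (the disk group paths and file names are hypothetical):

# copy a datafile out of a disk group to the OS (disk group -> OS)
$ asmcmd cp +DATA/orcl/datafile/example01.dbf /backup/example01.dbf
# copy it into another disk group (OS -> disk group)
$ asmcmd cp /backup/example01.dbf +FRA/orcl/datafile/example01.dbf
# remove an obsolete ASM file (rm acts on ASM files, not OS files)
$ asmcmd rm +FRA/orcl/datafile/example01.dbf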


Q2. You have two administrator-defined server pools on your eight-node cluster called OLTP and DSS. 

Hosts RACNODE3, RACNODE4, and RACNODE5 are currently assigned to the DSS Pool. Hosts RACNODE6, RACNODE7, and RACNODE8 are assigned to the OLTP Pool. 

Hosts RACNODE1 and RACNODE2 are assigned to the Generic pool. 

You are patching the Oracle Grid Infrastructure in a rolling fashion for your cluster and you have completed patching nodes RACNODE3, RACNODE4, RACNODE5, and RACNODE6, but you have not patched nodes RACNODE1 and RACNODE2. 

While examining the status of RACNODE2 software, you get this output: 

$ crsctl query crs softwareversion 

Oracle Clusterware version on node [RACNODE2] is [11.2.0.2.0] 

$ crsctl query crs activeversion 

Oracle Clusterware active version on node [RACNODE2] is [11.2.0.1.0] 

Which two statements describe the reasons for the active versions on the nodes of the cluster? 

A. The active version is 11.2.0.2.0 on RACNODE3, RACNODE4, and RACNODE5 because all the nodes in the DSS server pool have the same installed version. 

B. The active version is 11.2.0.1.0 on RACNODE6, RACNODE7, and RACNODE8 because some nodes in the cluster still have version 11.2.0.1.0 installed. 

C. The active version is 11.2.0.1.0 on RACNODE6, RACNODE7, and RACNODE8 because some nodes in the OLTP Pool still have version 11.2.0.1.0 installed. 

D. The active version is 11.2.0.1.0 on RACNODE3, RACNODE4, and RACNODE5 because some nodes in the cluster still have version 11.2.0.1.0 installed. 

Answer: B,D 

Explanation: 

crsctl query crs softwareversion

Use the crsctl query crs softwareversion command to display the latest version of the software that has been successfully started on the specified node.

crsctl query crs activeversion

Use the crsctl query crs activeversion command to display the active version of the Oracle Clusterware software running in the cluster. During a rolling upgrade, however, the active version is not advanced until the upgrade is finished across the cluster; until then, the cluster operates at the pre-upgrade version. 

Oracle. Clusterware Administration and Deployment Guide 11g Release 2 (11.2) 
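To observe this during a rolling upgrade, the two versions can be compared from any node; a minimal sketch using the node names from the scenario:

$ for n in RACNODE1 RACNODE2 RACNODE3 RACNODE4 RACNODE5 RACNODE6
> do
>   crsctl query crs softwareversion $n    # installed software version, per node
> done
$ crsctl query crs activeversion           # active version, the same cluster-wide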


Q3. You notice a very high percentage of wait time for a RAC database that has frequent insert operations. 

Which two recommendations may reduce this problem? 

A. shorter transactions 

B. increasing sequence cache sizes 

C. using reverse key indexes 

D. uniform and large extent sizes 

E. automatic segment space management 

F. smaller extent sizes 

Answer: D,E 

Explanation: Segments have a High Water Mark (HWM) indicating that blocks below the HWM have been formatted. New tables, or tables truncated without the REUSE STORAGE clause, have the HWM set to the segment header block, meaning there are zero blocks below the HWM. As new rows are inserted or existing rows are updated (increasing row length), more blocks are added to the free lists and the HWM is bumped up to reflect these new blocks. HW enqueues are acquired in exclusive mode before updating the HWM, so HW enqueues essentially operate as a serializing mechanism for HWM updates. Allocating an additional extent with the INSTANCE keyword can help in non-ASSM tablespaces.

Serialization of data blocks in the buffer cache can occur due to a lack of free lists, free list groups, transaction slots (INITRANS), or a shortage of rollback segments. This is particularly common in INSERT-heavy applications, in applications that have raised the block size above 8 KB, or in applications with large numbers of active users and few rollback segments. Use automatic segment space management (ASSM) and automatic undo management to solve this problem.

HW enqueue: The HW enqueue is used to serialize the allocation of space beyond the high water mark of a segment. 

. V$SESSION_WAIT.P2 / V$LOCK.ID1 is the tablespace number.
. V$SESSION_WAIT.P3 / V$LOCK.ID2 is the relative dba of the segment header of the object for which space is being allocated.

If this is a point of contention for an object, then manual allocation of extents solves the problem. 
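To confirm HW enqueue contention, sessions currently waiting on it can be listed from SQL*Plus; a sketch:

$ sqlplus -s / as sysdba <<'EOF'
-- P2 is the tablespace number; P3 is the relative dba of the segment header
SELECT inst_id, sid, p2 AS ts_number, p3 AS seg_header_dba
FROM   gv$session_wait
WHERE  event = 'enq: HW - contention';
EOF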


Q4. Assuming a RAC database called ORCL, select three statements that correctly demonstrate management actions for the AP service. 

A. To start the AP service, execute: srvctl start service -d ORCL -s AP 

B. To disable the AP service on the ORCL4 instance, execute: srvctl disable service -d ORCL -s AP -i ORCL4 

C. To stop the AP service, execute: srvctl stop service -s AP 

D. To make ORCL5 a preferred instance for the AP service, execute: srvctl set service -d ORCL -s AP -i ORCL5 -r 

E. To relocate the AP service from the ORCL5 instance to the ORCL4 instance, execute: srvctl relocate service -d ORCL -s AP -i ORCL5 -t ORCL4 

Answer: A,B,E 

Explanation: 

SRVCTL Command Reference 

srvctl start service -d db_unique_name 

[-s "service_name_list" [-n node_name | -i instance_name]] 

[-o start_options] 

srvctl disable service -d db_unique_name 

-s "service_name_list" [-i instance_name | -n node_name] 

srvctl stop service -d db_unique_name [-s "service_name_list" 

[-n node_name | -i instance_name] [-f] 

srvctl relocate service -d db_unique_name -s service_name 

{-c source_node -n target_node | -i old_instance_name -t new_instance_name} 

[-f] 

Oracle. Real Application Clusters Administration and Deployment Guide 

11g Release 2 (11.2) 
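As a usage example, after starting or relocating the AP service you can verify its placement and configuration (a sketch using the scenario's names):

$ srvctl status service -d ORCL -s AP
$ srvctl config service -d ORCL -s AP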


Q5. Examine the following output: 

[oracle@gr5153 ~]$ sudo crsctl config crs
CRS-4622: Oracle High Availability Services autostart is enabled.
[oracle@gr5153 ~]$ srvctl config database -d RACDB -a
Database unique name: RACDB
Database name: RACDB
Oracle home: /u01/app/oracle/product/11.2.0/dbhome_1
Oracle user: oracle
Spfile: +DATA/RACDB/spfileRACDB.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: POOL1
Database instances:
Disk Groups: DATA, FRA
Services:
Database is enabled
Database is policy managed 

Oracle Clusterware is started automatically after the system boot. Which two statements are true regarding the attributes of RACDB? 

A. Oracle Clusterware automatically starts RACDB. 

B. You must manually start RACDB. 

C. Database resource is managed by crsd for high availability and may be automatically restarted in place if it fails. 

D. Database resource is not managed by crsd for high availability and needs to be restarted manually if it fails. 

Answer: A,C 

Explanation: 

Switch Between the Automatic and Manual Policies

By default, Oracle Clusterware is configured to start the VIP, listener, instance, ASM, database services, and other resources during system boot. It is possible to modify some resources to have their profile parameter AUTO_START set to the value 2. This means that after a node reboot, or when Oracle Clusterware is started, resources with AUTO_START=2 need to be started manually via srvctl. This is designed to assist in troubleshooting and system maintenance. When changing resource profiles through srvctl, the command tool automatically modifies the profile attributes of other dependent resources given the current prebuilt dependencies. The command to accomplish this is:

srvctl modify database -d <dbname> -y AUTOMATIC|MANUAL 

D60488GC11 Oracle 11g: RAC and Grid Infrastructure Administration Accelerated 15 – 3 
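A minimal sketch of toggling the policy for RACDB and confirming the result:

$ srvctl modify database -d RACDB -y MANUAL     # clusterware no longer autostarts the database
$ srvctl config database -d RACDB -a            # Management policy should now read MANUAL
$ srvctl modify database -d RACDB -y AUTOMATIC  # restore the default behavior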

3.4.1 Benefits of Using Oracle Clusterware

Oracle Clusterware provides the following benefits:

. Tolerates and quickly recovers from computer and instance failures.

. Simplifies management and support by means of using Oracle Clusterware together with Oracle Database. By using fewer vendors and an all-Oracle stack you gain better integration compared to using third-party clusterware.

. Performs rolling upgrades for system and hardware changes. For example, you can apply Oracle Clusterware upgrades, patch sets, and interim patches in a rolling fashion, as follows:
  - Upgrade Oracle Clusterware from Oracle Database 10g to Oracle Database 11g
  - Upgrade Oracle Clusterware from Oracle Database release 11.1 to release 11.2
  - Patch Oracle Clusterware from Oracle Database 11.1.0.6 to 11.1.0.7
  - Patch Oracle Clusterware from Oracle Database 10.2.0.2 Bundle 1 to Oracle Database 10.2.0.2 Bundle 2

. Automatically restarts failed Oracle processes.

. Automatically manages the virtual IP (VIP) address, so that when a node fails, the node's VIP address fails over to another node on which the VIP address can accept connections.

. Automatically restarts resources from failed nodes on surviving nodes.

. Controls Oracle processes as follows:
  - For Oracle RAC databases, Oracle Clusterware controls all Oracle processes by default.
  - For Oracle single-instance databases, Oracle Clusterware allows you to configure the Oracle processes into a resource group that is under the control of Oracle Clusterware.

. Provides an application programming interface (API) for Oracle and non-Oracle applications that enables you to control other Oracle processes with Oracle Clusterware, such as restarting or reacting to failures and certain rules.

. Manages node membership and prevents split-brain syndrome, in which two or more instances attempt to control the database.

. Provides the ability to perform rolling release upgrades of Oracle Clusterware, with no downtime for applications. 

Oracle. Database High Availability Overview 11g Release 2 (11.2) 


Up to the minute 1Z0-058 test question:

Q6. Which three statements are true about ASM dynamic volume manager (ADVM)? 

A. ADVM provides volume management services and a standard disk device driver interface to file system drivers. 

B. The administrator can use ADVM to create volumes that contain bootable vendor operating systems. 

C. File systems and other disk-based applications issue I/O requests to ADVM volume devices as they would to other storage devices on a vendor operating system. 

D. ADVM extends ASM by providing a disk driver interface to storage backed by an ASM volume. 

E. To use the ADVM driver, the oracleacfs, oracleoks, and oracleadvm drivers must be loaded, but an ASM instance is not required. 

Answer: A,C,D 

Explanation: 

At the operating system (OS) level, the ASM instance provides the disk group, which is a logical container for physical disk space. The disk group can hold ASM database files and ASM dynamic volume files. The ASM Dynamic Volume Manager (ADVM) presents the volume device file to the operating system as a block device. The mkfs utility can be used to create an ASM file system in the volume device file.

Four OS kernel modules loaded in the OS provide the data service. On Linux, they are: oracleasm, the ASM module; oracleadvm, the ASM dynamic volume manager module; oracleoks, the kernel services module; and oracleacfs, the ASM file system module. These modules provide the ASM Cluster File System, ACFS snapshots, the ADVM, and cluster services. The ASM volumes are presented to the OS as a device file at /dev/asm/<volume name>-<number>.

ADVM provides volume management services and a standard disk device driver interface to clients. Clients, such as file systems and other disk-based applications, issue I/O requests to ADVM volume devices as they would to other storage devices on a vendor operating system. ADVM extends ASM by providing a disk driver interface to storage backed by an ASM file. The administrator can use ADVM to create volumes that contain file systems. These file systems can be used to support files beyond Oracle database files, such as executables, report files, trace files, alert logs, and other application data files. With the addition of ADVM and ACFS, ASM becomes a complete storage solution of user data for both database and non-database file needs.

ACFS is intended as a general file system accessible by the standard OS utilities. ACFS can be used in either a single-server or a cluster environment. Note: Oracle ACFS file systems cannot be used for an Oracle base directory or an Oracle grid infrastructure home that contains the software for Oracle Clusterware, ASM, Oracle ACFS, and Oracle ADVM components. Oracle ACFS file systems cannot be used for an OS root directory or boot directory.

ASM volumes serve as containers for storage presented as a block device accessed through ADVM. File systems or user processes can do I/O on this "ASM volume device" just as they would on any other device. To accomplish this, ADVM is configured into the operating system. A volume device is constructed from an ASM file. 

D60488GC11 Oracle 11g: RAC and Grid Infrastructure Administration Accelerated 10 - 3,4,5 
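For context, a sketch of creating an ADVM volume and locating its device file (the DATA disk group and vol1 name are hypothetical):

$ asmcmd volcreate -G DATA -s 10G vol1    # create a 10 GB dynamic volume in DATA
$ asmcmd volinfo -G DATA vol1             # shows the volume device, e.g. /dev/asm/vol1-123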


Q7. The disk groups on the current ASM instance at version 11.2 were configured to support a version 10.2 database instance. The 10.2 instance has the COMPATIBLE parameter defined as 10.2.0. The compatible.asm attribute is set to 11.2 for each disk group. The database has been upgraded to 11.2. Which statement indicates the proper time to change the compatible.rdbms disk group attribute to 11.2? 

A. Change the disk group attribute after the database instance COMPATIBLE parameter is upgraded to 11.2. 

B. Change the disk group attribute after the database instance is started with the 11.2 software. 

C. Change the disk group attribute after the database instance optimizer_features_enabled parameter is set to 11.2. 

D. Change each disk group after the 11.2 features are required for use on the disk group. 

E. Never, upgrading the attribute is not reversible. 

Answer:

Explanation: 

COMPATIBLE.RDBMS

The value for the disk group COMPATIBLE.RDBMS attribute determines the minimum COMPATIBLE database initialization parameter setting for any database instance that is allowed to use the disk group. Before advancing the COMPATIBLE.RDBMS attribute, ensure that the values for the COMPATIBLE initialization parameter for all of the databases that access the disk group are set to at least the value of the new setting for COMPATIBLE.RDBMS. 

Oracle. Automatic Storage Management Administrator's Guide 11g Release 2 (11.2) 
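A sketch of the check-then-advance sequence (the DATA disk group name is hypothetical; note the attribute cannot be lowered afterward):

$ sqlplus -s / as sysdba <<'EOF'
-- on each database that uses the disk group: COMPATIBLE must be >= 11.2
SHOW PARAMETER compatible
EOF
$ sqlplus -s / as sysasm <<'EOF'
-- then, on the ASM instance, advance the disk group attribute
ALTER DISKGROUP DATA SET ATTRIBUTE 'compatible.rdbms' = '11.2';
EOF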


Q8. Examine the following details from the AWR report for your three-instance RAC database: 

Which inferences is correct? 

A. There are a large number of requests for cr blocks or current blocks currently in progress. 

B. Global cache access is optimal without any significant delays. 

C. The log file sync waits are due to cluster interconnect latency. 

D. To determine the frequency of two-way block requests you must examine other events in the report. 

Answer:

Explanation: 

Analyzing Cache Fusion Transfer Impact Using GCS Statistics

This section describes how to monitor GCS performance by identifying objects read and modified frequently and the service times imposed by the remote access. Waiting for blocks to arrive may constitute a significant portion of the response time, in the same way that reading from disk could increase the block access delays, only that cache fusion transfers in most cases are faster than disk access latencies. The following wait events indicate that the remotely cached blocks were shipped to the local instance without having been busy, pinned, or requiring a log flush:

gc current block 2-way
gc current block 3-way
gc cr block 2-way
gc cr block 3-way

The object statistics for gc current blocks received and gc cr blocks received enable quick identification of the indexes and tables which are shared by the active instances. As mentioned earlier, creating an ADDM analysis will, in most cases, point you to the SQL statements and database objects that could be impacted by interinstance contention.

Any increases in the average wait times for the events mentioned in the preceding list could be caused by the following occurrences:

. High load: CPU shortages, long run queues, scheduling delays
. Misconfiguration: using the public instead of the private interconnect for message and block traffic

If the average wait times are acceptable and no interconnect or load issues can be diagnosed, then the accumulated time waited can usually be attributed to a few SQL statements which need to be tuned to minimize the number of blocks accessed.

Oracle. Real Application Clusters Administration and Deployment Guide 11g Release 2 (11.2) 
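The same events can also be checked outside AWR; a sketch querying cumulative instance-wide statistics:

$ sqlplus -s / as sysdba <<'EOF'
-- average wait per event, across all instances
SELECT inst_id, event, total_waits,
       ROUND(time_waited_micro / NULLIF(total_waits, 0)) AS avg_wait_us
FROM   gv$system_event
WHERE  event LIKE 'gc current block%way' OR event LIKE 'gc cr block%way'
ORDER  BY inst_id, event;
EOF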


Q9. Which three actions are required to create a general purpose ASM cluster file system (ACFS) to be automatically mounted by Oracle Clusterware? 

A. Format an ASM volume with an ASM cluster file system. 

B. Create mount points on all cluster nodes where the ASM cluster file system will be mounted. 

C. Manually add an entry to /etc/fstab defining the volume, mount point, and mount options on each node in the cluster. 

D. Register the mount point. 

Answer: A,B,D 

Explanation: 

File systems that are to be mounted persistently (across reboots) can be registered with the Oracle ACFS mount registry. In cluster configurations, registered Oracle ACFS file systems are automatically mounted by the mount registry, similar to a clusterwide mount table. However, in Oracle Restart configurations the automatic mounting of registered Oracle ACFS file systems is not supported. 

By default, an Oracle ACFS file system that is inserted into the cluster mount registry is automatically mounted on all cluster members, including cluster members that are added after the registry addition. However, the cluster mount registry also accommodates single-node and multi-node (subset of cluster nodes) file system registrations. The mount registry actions for each cluster member mount only registered file systems that have been designated for mounting on that member. The Oracle ACFS registry resource actions are designed to automatically mount a file system only one time for each Oracle Grid Infrastructure initialization to avoid potential conflicts with administrative actions to dismount a given file system. 

Oracle Automatic Storage Management Administrator's Guide 
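Mapped to the three correct steps, the commands might look like this sketch (the volume device and mount point are hypothetical; run as root):

A. Format the ASM volume with an ACFS file system:
# /sbin/mkfs -t acfs /dev/asm/vol1-123

B. Create the mount point on all cluster nodes:
# mkdir -p /u01/app/acfsmounts/vol1

D. Register the mount point:
# /sbin/acfsutil registry -a /dev/asm/vol1-123 /u01/app/acfsmounts/vol1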


Q10. Your cluster was originally created with nodes RACNODE1 and RACNODE2 three years ago. Last year, nodes RACNODE3 and RACNODE4 were added. 

These nodes have faster processors and more local storage than the original nodes, making performance management and tuning more difficult. 

Two more nodes with the same processor speed were added to the cluster last week as RACNODE5 and RACNODE6, and you must remove RACNODE1 and RACNODE2 for redeployment. 

The Oracle Grid Infrastructure is using GNS and the databases are all 11g Release 2, all running from the same home. The Grid home is /fs01/home/grid. 

Which three steps must be performed to remove the nodes from the cluster? 

A. Run /fs01/home/grid/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/fs01/home/grid "CLUSTER_NODES={RACNODE3,RACNODE4,RACNODE5,RACNODE6}" as the grid software owner on any remaining node. 

B. Run /fs01/home/grid/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/fs01/home/grid "CLUSTER_NODES={RACNODE1}" as the grid software owner on RACNODE1, and run /fs01/home/grid/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/fs01/home/grid "CLUSTER_NODES={RACNODE2}" as the grid software owner on RACNODE2. 

C. Run /fs01/home/grid/oui/bin/runInstaller -detachHome ORACLE_HOME=/fs01/home/grid as the grid software owner on RACNODE1 and RACNODE2. 

D. Run the /fs01/home/grid/crs/install/rootcrs.pl script as root on each node to be deleted. 

E. Run crsctl delete node -n RACNODE1 and crsctl delete node -n RACNODE2 as root from any node remaining in the cluster. 

Answer: A,D,E 

Explanation: 

Deleting a Cluster Node on Linux and UNIX Systems 

1. Ensure that Grid_home correctly specifies the full directory path for the Oracle Clusterware home on each node, where Grid_home is the location of the installed Oracle Clusterware software. 

2. Run the following command as either root or the user that installed Oracle Clusterware to determine whether the node you want to delete is active and whether it is pinned:

$ olsnodes -s -t

If the node is pinned, then run the crsctl unpin css command. Otherwise, proceed to the next step. 

3. Disable the Oracle Clusterware applications and daemons running on the node. Run the rootcrs.pl script as root from the Grid_home/crs/install directory on the node to be deleted, as follows:

# ./rootcrs.pl -deconfig -deinstall -force

If you are deleting multiple nodes, then run the rootcrs.pl script on each node that you are deleting. If you are deleting all nodes from a cluster, then append the -lastnode option to the preceding command to clear OCR and the voting disks, as follows:

# ./rootcrs.pl -deconfig -deinstall -force -lastnode 

4. From any node that you are not deleting, run the following command from the Grid_home/bin directory as root to delete the node from the cluster:

# crsctl delete node -n node_to_be_deleted 

If you run a dynamic Grid Plug and Play cluster using DHCP and GNS, then skip to step 7. 

5. On the node you want to delete, run the following command as the user that installed Oracle Clusterware from the Grid_home/oui/bin directory, where node_to_be_deleted is the name of the node that you are deleting:

$ ./runInstaller -updateNodeList ORACLE_HOME=Grid_home "CLUSTER_NODES={node_to_be_deleted}" CRS=TRUE -silent -local 

6. On the node that you are deleting, depending on whether you have a shared or local Oracle home, complete one of the following procedures as the user that installed Oracle Clusterware:

If you have a shared home, then run the following command from the Grid_home/oui/bin directory on the node you want to delete:

$ ./runInstaller -detachHome ORACLE_HOME=Grid_home -silent -local

For a local home, deinstall the Oracle Clusterware home from the node that you want to delete by running the following command, where Grid_home is the path defined for the Oracle Clusterware home:

$ Grid_home/deinstall/deinstall -local 

7. On any node other than the node you are deleting, run the following command from the Grid_home/oui/bin directory, where remaining_nodes_list is a comma-delimited list of the nodes that are going to remain part of your cluster:

$ ./runInstaller -updateNodeList ORACLE_HOME=Grid_home "CLUSTER_NODES={remaining_nodes_list}" CRS=TRUE -silent 

8. Run the following CVU command to verify that the specified nodes have been successfully deleted from the cluster:

$ cluvfy stage -post nodedel -n node_list [-verbose] 

Oracle. Clusterware Administration and Deployment Guide 11g Release 2 (11.2)
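Applied to this scenario, the three correct actions might run as follows (a sketch; shown for RACNODE1 and repeated for RACNODE2):

On RACNODE1, as root (option D):
# cd /fs01/home/grid/crs/install
# ./rootcrs.pl -deconfig -deinstall -force

On any remaining node, as root (option E):
# crsctl delete node -n RACNODE1

On any remaining node, as the grid software owner (option A):
$ /fs01/home/grid/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/fs01/home/grid "CLUSTER_NODES={RACNODE3,RACNODE4,RACNODE5,RACNODE6}" CRS=TRUE -silent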