It is faster and easier to pass the Oracle 1Z0-062 exam by using accurate Oracle Database 12c: Installation and Administration questions and answers. Get immediate access to the most up-to-date 1Z0-062 exam material, find the same core-area 1Z0-062 questions with professionally verified answers, and pass your exam with a high score now.

2016 Jul 1z0-062 book:

Q71. Which three statements are true about Oracle Data Pump export and import operations? 

A. You can detach from a data pump export job and reattach later. 

B. Data pump uses parallel execution server processes to implement parallel import. 

C. Data pump import requires the import file to be in a directory owned by the oracle owner. 

D. The master table is the last object to be exported by the data pump. 

E. You can detach from a data pump import job and reattach later. 

Answer: A,B,D 

Explanation: B: Data Pump can employ multiple worker processes, running in parallel, to increase job performance. 

D: For export jobs, the master table records the location of database objects within a dump file set. / Export builds and maintains the master table for the duration of the job. At the end of an export job, the content of the master table is written to a file in the dump file set. / For import jobs, the master table is loaded from the dump file set and is used to control the sequence of operations for locating objects that need to be imported into the target database. 
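
As an illustration of detaching from and reattaching to a running job (the directory object dp_dir and job name hr_exp are placeholder names), a parallel export could be driven like this:

$> expdp hr/hr DIRECTORY=dp_dir DUMPFILE=hr%U.dmp PARALLEL=4 JOB_NAME=hr_exp
   (press Ctrl+C at the client, then type EXIT_CLIENT at the Export> prompt; the job keeps running on the server)
$> expdp hr/hr ATTACH=hr_exp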


Q72. Which three statements are true regarding the use of the Database Migration Assistant for Unicode (DMU)? 

A. A DBA can check specific tables with the DMU 

B. The database to be migrated must be opened read-only. 

C. The release of the database to be converted can be any release since 9.2.0.8. 

D. The DMU can report columns that are too long in the converted characterset. 

E. The DMU can report columns that are not represented in the converted characterset. 

Answer: A,D,E 

Explanation: A: In certain situations, you may want to exclude selected columns or tables from scanning or conversion steps of the migration process. 

D: Exceed column limit 

The cell data will not fit into a column after conversion. 

E: Need conversion 

The cell data needs to be converted, because its binary representation in the target character set is different than the representation in the current character set, but neither length limit issues nor invalid representation issues have been found. 

* Oracle Database Migration Assistant for Unicode (DMU) is a unique next-generation migration tool providing an end-to-end solution for migrating your databases from legacy encodings to Unicode. 

Incorrect: 

Not C: The release of Oracle Database must be 10.2.0.4, 10.2.0.5, 11.1.0.7, 11.2.0.1, or later. 
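
As a quick related check before starting a DMU scan, the current database character set can be confirmed with a read-only dictionary query:

SQL> SELECT value FROM nls_database_parameters WHERE parameter = 'NLS_CHARACTERSET';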


Q73. On your Oracle 12c database, you invoked SQL*Loader to load data into the EMPLOYEES table in the HR schema by issuing the following command: 

$> sqlldr hr/hr@pdb table=employees 

Which two statements are true regarding the command? 

A. It succeeds with default settings if the EMPLOYEES table belonging to HR is already defined in the database. 

B. It fails because no SQL*Loader data file location is specified. 

C. It fails if the HR user does not have the CREATE ANY DIRECTORY privilege. 

D. It fails because no SQL*Loader control file location is specified. 

Answer: A,C 

Explanation: 

Note: 

* SQL*Loader is invoked when you specify the sqlldr command and, optionally, parameters that establish session characteristics. 
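
For reference, an express-mode invocation like the one in the question needs no control file and, by default, reads its data from employees.dat in the current working directory; defaults can be overridden with express-mode parameters (the file name and delimiter below are illustrative):

$> sqlldr hr/hr@pdb TABLE=employees
$> sqlldr hr/hr@pdb TABLE=employees DATA=employees.csv TERMINATED_BY=','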


Q74. Which three statements are true about Flashback Database? 

A. Flashback logs are written sequentially, and are archived. 

B. Flashback Database uses a restored control file to recover a database. 

C. The Oracle database automatically creates, deletes, and resizes flashback logs in the Fast Recovery Area. 

D. Flashback Database can recover a database to the state that it was in before a reset logs operation. 

E. Flashback Database can recover a data file that was dropped during the span of time of the flashback. 

F. Flashback logs are used to restore to the blocks' before images, and then the redo data may be used to roll forward to the desired flashback time. 

Answer: B,C,F 

Explanation: * Flashback Database uses its own logging mechanism, creating flashback logs and storing them in the fast recovery area (C). You can only use Flashback Database if flashback logs are available. To take advantage of this feature, you must set up your database in advance to create flashback logs. 

* To enable Flashback Database, you configure a fast recovery area and set a flashback retention target. This retention target specifies how far back you can rewind a database with Flashback Database. 

From that time onwards, at regular intervals, the database copies images of each altered block in every data file into the flashback logs. These block images can later be reused to reconstruct the data file contents for any moment at which logs were captured. (F) 
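
A minimal sketch of enabling and later using Flashback Database (the destination path, size, and retention values are illustrative):

SQL> ALTER SYSTEM SET db_recovery_file_dest_size = 50G;
SQL> ALTER SYSTEM SET db_recovery_file_dest = '/u01/app/oracle/fra';
SQL> ALTER SYSTEM SET db_flashback_retention_target = 1440;   -- minutes
SQL> ALTER DATABASE FLASHBACK ON;
-- later, to rewind the database, it must be mounted (not open):
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP MOUNT
SQL> FLASHBACK DATABASE TO TIMESTAMP (SYSTIMESTAMP - INTERVAL '1' HOUR);
SQL> ALTER DATABASE OPEN RESETLOGS;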

Incorrect: Not E: You cannot use Flashback Database alone to retrieve a dropped data file. If you flash back a database to a time when a dropped data file existed in the database, only the data file entry is added to the control file. You can only recover the dropped data file by using RMAN to fully restore and recover the data file. 

Reference: Oracle Database Backup and Recovery User's Guide 12c R 


Q75. You are connected using SQL*Plus to a multitenant container database (CDB) with SYSDBA privileges and execute the following sequence of statements: 


What is the result of the last SET CONTAINER statement and why is it so? 

A. It succeeds because the PDB_ADMIN user has the required privileges. 

B. It fails because common users are unable to use the SET CONTAINER statement. 

C. It fails because local users are unable to use the SET CONTAINER statement. 

D. It fails because the SET CONTAINER statement cannot be used with PDB$SEED as the target pluggable database (PDB). 

Answer: C 
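
The exhibit is not reproduced above. As an illustration only (user and PDB names are hypothetical), a locally created user cannot switch containers with SET CONTAINER, whereas a common (C##) user with the SET CONTAINER privilege can:

SQL> ALTER SESSION SET CONTAINER = pdb1;
SQL> CREATE USER pdb_admin IDENTIFIED BY pwd CONTAINER = CURRENT;   -- local user, defined only in PDB1
SQL> GRANT CREATE SESSION TO pdb_admin;
SQL> CONNECT pdb_admin/pwd@pdb1
SQL> ALTER SESSION SET CONTAINER = pdb$seed;   -- fails: PDB_ADMIN is a local user, not a common user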



Up-to-the-minute 1z0-062 exam:

Q76. Your multitenant container database (CDB) contains a pluggable database, HR_PDB. The default permanent tablespace in HR_PDB is USERDATA. The container database (CDB) is open and you connect to it using RMAN. 

You want to issue the following RMAN command: 

RMAN > BACKUP TABLESPACE hr_pdb:userdata; 

Which task should you perform before issuing the command? 

A. Place the root container in ARCHIVELOG mode. 

B. Take the user data tablespace offline. 

C. Place the root container in the NOMOUNT state. 

D. Ensure that HR_PDB is open. 

Answer: A 
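
A sketch of the prerequisite change, issued as SYSDBA against the root container before running the backup:

SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP MOUNT
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ALTER DATABASE OPEN;
RMAN> BACKUP TABLESPACE hr_pdb:userdata;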


Q77. Identify three scenarios in which you would recommend the use of SQL Performance Analyzer to analyze the impact on the performance of SQL statements. 

A. Change in the Oracle Database version 

B. Change in your network infrastructure 

C. Change in the hardware configuration of the database server 

D. Migration of database storage from non-ASM to ASM storage 

E. Database and operating system upgrade 

Answer: A,C,E 

Explanation: Oracle 11g/12c makes further use of SQL tuning sets with the SQL Performance Analyzer, which compares the performance of the statements in a tuning set before and after a database change. The database change can be as major or minor as you like, such as: 

* (E) Database, operating system, or hardware upgrades. 

* (A,C) Database, operating system, or hardware configuration changes. 

* Database initialization parameter changes. 

* Schema changes, such as adding indexes or materialized views. 

* Refreshing optimizer statistics. 

* Creating or changing SQL profiles. 
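
A minimal DBMS_SQLPA sketch of that before/after comparison (the SQL tuning set my_sts must already exist; my_sts and spa_task are placeholder names):

SQL> VARIABLE tname VARCHAR2(64)
SQL> EXEC :tname := DBMS_SQLPA.CREATE_ANALYSIS_TASK(sqlset_name => 'my_sts', task_name => 'spa_task');
SQL> EXEC DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(task_name => 'spa_task', execution_type => 'TEST EXECUTE', execution_name => 'before_change');
-- make the change being assessed (upgrade, configuration change, new storage, and so on)
SQL> EXEC DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(task_name => 'spa_task', execution_type => 'TEST EXECUTE', execution_name => 'after_change');
SQL> EXEC DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(task_name => 'spa_task', execution_type => 'COMPARE PERFORMANCE');
SQL> SELECT DBMS_SQLPA.REPORT_ANALYSIS_TASK('spa_task', 'TEXT') FROM dual;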


Q78. You are administering a database and you receive a requirement to apply the following restrictions: 

1. A connection must be terminated after four unsuccessful login attempts by a user. 

2. A user should not be able to create more than four simultaneous sessions. 

3. A user session must be terminated after 15 minutes of inactivity. 

4. Users must be prompted to change their passwords every 15 days. 

How would you accomplish these requirements? 

A. by granting a secure application role to the users 

B. by creating and assigning a profile to the users and setting the REMOTE_OS_AUTHENT parameter to FALSE 

C. By creating and assigning a profile to the users and setting the SEC_MAX_FAILED_LOGIN_ATTEMPTS parameter to 4 

D. By implementing Fine-Grained Auditing (FGA) and setting the REMOTE_LOGIN_PASSWORDFILE parameter to NONE. 

E. By implementing a Database Resource Manager plan and setting the SEC_MAX_FAILED_LOGIN_ATTEMPTS parameter to 4. 

Answer: A 

Explanation: You can design your applications to automatically grant a role to the user who is trying to log in, provided the user meets criteria that you specify. To do so, you create a secure application role, which is a role that is associated with a PL/SQL procedure (or PL/SQL package that contains multiple procedures). The procedure validates the user: if the user fails the validation, then the user cannot log in. If the user passes the validation, then the procedure grants the user a role so that he or she can use the application. The user has this role only as long as he or she is logged in to the application. When the user logs out, the role is revoked. 
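
A brief sketch of such a secure application role (the schema, role, and procedure names are hypothetical):

CREATE ROLE hr_app_role IDENTIFIED USING sec_mgr.validate_hr_user;

CREATE OR REPLACE PROCEDURE sec_mgr.validate_hr_user
  AUTHID CURRENT_USER
AS
BEGIN
  -- application-specific checks (client IP address, time of day, and so on) go here
  DBMS_SESSION.SET_ROLE('hr_app_role');
END;
/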

Incorrect: 

Not B: REMOTE_OS_AUTHENT specifies whether remote clients will be authenticated with the value of the OS_AUTHENT_PREFIX parameter. 

Not C, not E: SEC_MAX_FAILED_LOGIN_ATTEMPTS specifies the number of authentication attempts that can be made by a client on a connection to the server process. 

After the specified number of failure attempts, the connection will be automatically dropped by the server process. 

Not D: REMOTE_LOGIN_PASSWORDFILE specifies whether Oracle checks for a password file. 

Values: 

shared 

One or more databases can use the password file. The password file can contain SYS as well as non-SYS users. 

exclusive 

The password file can be used by only one database. The password file can contain SYS as well as non-SYS users. 

none 

Oracle ignores any password file. Therefore, privileged users must be authenticated by the operating system. 

Note: 

The REMOTE_OS_AUTHENT parameter is deprecated. It is retained for backward compatibility only. 


Q79. Which two statements are true about the Oracle Direct Network File System (DNFS)? 

A. It utilizes the OS file system cache. 

B. A traditional NFS mount is not required when using Direct NFS. 

C. Oracle Disk Manager can manage NFS on its own, without using the operating system kernel NFS driver. 

D. Direct NFS is available only on UNIX platforms. 

E. Direct NFS can load-balance I/O traffic across multiple network adapters. 

Answer: C,E 

Explanation: E: Performance is improved by load balancing across multiple network interfaces (if available). 

Note: 

* To enable Direct NFS Client, you must replace the standard Oracle Disk Manager (ODM) library with one that supports Direct NFS Client. 

Incorrect: 

Not A: Direct NFS Client is capable of performing concurrent direct I/O, which bypasses any operating system level caches and eliminates any operating system write-ordering locks. 

Not B: 

* To use Direct NFS Client, the NFS file systems must first be mounted and available over regular NFS mounts. 

* Oracle Direct NFS (dNFS) is an optimized NFS (Network File System) client that provides faster and more scalable access to NFS storage located on NAS storage devices (accessible over TCP/IP). 

Not D: Direct NFS is provided as part of the database kernel, and is thus available on all supported database platforms - even those that don't support NFS natively, like Windows. 

Note: 

* Oracle Direct NFS (dNFS) is an optimized NFS (Network File System) client that provides faster and more scalable access to NFS storage located on NAS storage devices (accessible over TCP/IP). Direct NFS is built directly into the database kernel - just like ASM which is mainly used when using DAS or SAN storage. 

* Oracle Direct NFS (dNFS) is an internal I/O layer that provides faster access to large NFS files than traditional NFS clients. 
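
A sketch of enabling Direct NFS Client and a minimal oranfstab entry (the server name, IP paths, and mount points are illustrative; on 11.2 and later the dNFS ODM library can be linked in with the dnfs_on make target):

$> cd $ORACLE_HOME/rdbms/lib
$> make -f ins_rdbms.mk dnfs_on

Example oranfstab (for instance in $ORACLE_HOME/dbs/oranfstab); listing two paths lets dNFS load-balance I/O across them:

server: mynas
path: 192.0.2.10
path: 192.0.2.11
export: /export/oradata mount: /u02/oradata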


Q80. You have altered a non-unique index to be invisible to determine if queries execute within an acceptable response time without using this index. 

Which two are possible if table updates are performed which affect the invisible index columns? 

A. The index remains invisible. 

B. The index is not updated by the DML statements on the indexed table. 

C. The index automatically becomes visible in order to have it updated by DML on the table. 

D. The index becomes unusable but the table is updated by the DML. 

E. The index is updated by the DML on the table. 

Answer: A,E 

Explanation: Unlike unusable indexes, an invisible index is maintained during DML statements. 

Note: 

* Oracle 11g allows indexes to be marked as invisible. Invisible indexes are maintained like any other index, but they are ignored by the optimizer unless the OPTIMIZER_USE_INVISIBLE_INDEXES parameter is set to TRUE at the instance or session level. Indexes can be created as invisible by using the INVISIBLE keyword, and their visibility can be toggled using the ALTER INDEX command.
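
A short sketch (the index name emp_name_ix is hypothetical):

SQL> ALTER INDEX emp_name_ix INVISIBLE;
SQL> UPDATE employees SET last_name = 'Kingsley' WHERE employee_id = 100;   -- DML still maintains the invisible index
SQL> ALTER SESSION SET OPTIMIZER_USE_INVISIBLE_INDEXES = TRUE;              -- optimizer may now consider it
SQL> ALTER INDEX emp_name_ix VISIBLE;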