Your success in Microsoft 70-764 is our sole target, and we develop all of our 70-764 braindumps in a way that facilitates the attainment of this target. Not only is our 70-764 study material the best you can find, it is also the most detailed and the most up to date. 70-764 Practice Exams for Microsoft Data and AI 70-764 are written to the highest standards of technical accuracy.

Free 70-764 Demo Online For Microsoft Certification:

NEW QUESTION 1

You are designing two stored procedures named Procedure1 and Procedure2. You identify the following
requirements:
Procedure1 must take a parameter that ensures that multiple rows of data can pass into the stored procedure.
Procedure2 must use business logic that resides in a Microsoft .NET Framework assembly. You need to identify the appropriate technology for each stored procedure.
Which technologies should you identify? To answer, drag the appropriate technology to the correct stored procedure in the answer area. (Answer choices may be used once, more than once, or not at all.)
70-764 dumps exhibit

  • A. Mastered
  • B. Not Mastered

Answer: A

Explanation:
Procedure1 - A table-valued parameter (TVP); Procedure2 - Common language runtime (CLR)
References:
http://msdn.microsoft.com/en-us/library/ms131102.aspx
http://msdn.microsoft.com/en-us/library/bb522446.aspx
http://msdn.microsoft.com/en-us/library/bb510489.aspx
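For reference, the table-valued parameter approach works roughly as sketched below; the type, table, and procedure names are hypothetical and not part of the question.
-- Sketch: pass multiple rows into a stored procedure through a TVP
CREATE TYPE dbo.OrderLineType AS TABLE
(
    ProductID int NOT NULL,
    Quantity  int NOT NULL
);
GO
CREATE PROCEDURE dbo.usp_InsertOrderLines
    @Lines dbo.OrderLineType READONLY   -- table-valued parameters must be READONLY
AS
BEGIN
    INSERT INTO dbo.OrderLines (ProductID, Quantity)   -- dbo.OrderLines is hypothetical
    SELECT ProductID, Quantity
    FROM @Lines;
END;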

NEW QUESTION 2

You have a table that has grown in the past six months.
A user reports that queries against the table take a long time to complete.
You need to update the statistics for the table in the least amount of time without disabling automatic statistics updates.
Which Transact-SQL statement should you run?

  • A. UPDATE STATISTICS WITH RESAMPLE
  • B. UPDATE STATISTICS WITH FULLSCAN
  • C. UPDATE STATISTICS WITH SAMPLE 10 PERCENT
  • D. UPDATE STATISTICS WITH NORECOMPUTE

Answer: C

Explanation:
SAMPLE number { PERCENT | ROWS } specifies the approximate percentage or number of rows in the table or indexed view for the query optimizer to use when it updates statistics.
References:
https://docs.microsoft.com/en-us/sql/t-sql/statements/update-statistics-transact-sql?view=sql-server-2017
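A minimal sketch of the statement form described by answer C, using a hypothetical table name:
-- Update statistics from a 10 percent sample of the rows
UPDATE STATISTICS dbo.SalesOrderHeader WITH SAMPLE 10 PERCENT;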

NEW QUESTION 3

Overview
Application Overview
Contoso, Ltd., is the developer of an enterprise resource planning (ERP) application.
Contoso is designing a new version of the ERP application. The previous version of the ERP application used SQL Server 2008 R2.
The new version will use SQL Server 2014.
The ERP application relies on an import process to load supplier data. The import process updates thousands of rows simultaneously, requires exclusive access to the database, and runs daily.
You receive several support calls reporting unexpected behavior in the ERP application. After analyzing the calls, you conclude that users made changes directly to the tables in the database.
Tables
The current database schema contains a table named OrderDetails.
The OrderDetails table contains information about the items sold for each purchase order. OrderDetails stores the product ID, quantities, and discounts applied to each product in a purchase order.
The product price is stored in a table named Products. The Products table was defined by using the SQL_Latin1_General_CP1_CI_AS collation.
A column named ProductName was created by using the varchar data type. The database contains a table
named Orders.
Orders contains all of the purchase orders from the last 12 months. Purchase orders that are older than 12 months are stored in a table named OrdersOld.
The previous version of the ERP application relied on table-level security.
Stored Procedures
The current version of the database contains stored procedures that change two tables. The following shows the relevant portions of the two stored procedures:
70-764 dumps exhibit
Customer Problems
Installation Issues
The current version of the ERP application requires that several SQL Server logins be set up to function correctly. Most customers set up the ERP application in multiple locations and must create logins multiple times.
Index Fragmentation Issues
Customers discover that clustered indexes often are fragmented. To resolve this issue, the customers defragment the indexes more frequently. All of the tables affected by fragmentation have the following columns that are used as the clustered index key:
70-764 dumps exhibit
Backup Issues
Customers who have large amounts of historical purchase order data report that backup time is unacceptable.
Search Issues
Users report that when they search product names, the search results exclude product names that contain accents, unless the search string includes the accent.
Missing Data Issues
Customers report that when they make a price change in the Products table, they cannot retrieve the price that the item was sold for in previous orders.
Query Performance Issues
Customers report that query performance degrades very quickly. Additionally, the customers report that users cannot run queries when SQL Server runs maintenance tasks.
Import Issues
During the monthly import process, database administrators receive many support calls from users who report that they cannot access the supplier data. The database administrators want to reduce the amount of time required to import the data.
Design Requirements
File Storage Requirements
The ERP database stores scanned documents that are larger than 2 MB. These files must only be accessed through the ERP application. File access must have the best possible read and write performance.
Data Recovery Requirements
If the import process fails, the database must be returned to its prior state immediately.
Security Requirements
You must provide users with the ability to execute functions within the ERP application, without having direct access to the underlying tables.
Concurrency Requirements
You must reduce the likelihood of deadlocks occurring when Sales.Proc1 and Sales.Proc2 execute.
You need to recommend a solution that addresses the security requirement. What should you recommend?

  • A. Revoke user permissions on the tables.
  • B. Create stored procedures that manipulate data.
  • C. Grant the users the EXECUTE permission on the stored procedures.
  • D. Grant the users the SELECT permission on the tables.
  • E. Create views that retrieve data from the tables. Grant the users the SELECT permission on the views.
  • F. Deny the users SELECT permission on the tables.
  • G. Create views that retrieve data from the tables.
  • H. Grant the users the SELECT permission on the views.
  • I. Deny the users the SELECT permission on the tables.
  • J. Create stored procedures that manipulate data. Grant the users the EXECUTE permission on the stored procedures.

Answer: C

Explanation:
- Security Requirements
You must provide users with the ability to execute functions within the ERP application, without having direct access to the underlying tables.

NEW QUESTION 4

You have a database named DB1 that contains two tables.
You need to encrypt one column in each table by using the Always Encrypted feature. The solution must support groupings on encrypted columns.
Which two actions should you perform? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point.

  • A. Encrypt both columns by using deterministic encryption.
  • B. Provision a symmetric key by using Transact-SQL.
  • C. Encrypt both columns by using randomized encryption.
  • D. Provision column master keys and column encryption keys by using Microsoft SQL Server Management Studio (SSMS).

Answer: AD

Explanation:
A: Use deterministic encryption for columns that will be used as search or grouping parameters, for example a government ID number.
Deterministic encryption always generates the same encrypted value for any given plain text value. Using deterministic encryption allows point lookups, equality joins, grouping and indexing on encrypted columns.
D: Always Encrypted uses two types of keys: column encryption keys and column master keys. A column encryption key is used to encrypt data in an encrypted column. A column master key is a key-protecting key that encrypts one or more column encryption keys.
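For illustration only, a deterministically encrypted column might be declared as sketched below, assuming a column encryption key named CEK1 has already been provisioned (for example through SSMS); the table and column names are hypothetical:
-- Sketch: a column encrypted with Always Encrypted deterministic encryption
CREATE TABLE dbo.Patients
(
    PatientID int IDENTITY PRIMARY KEY,
    GovernmentID nvarchar(11) COLLATE Latin1_General_BIN2
        ENCRYPTED WITH (COLUMN_ENCRYPTION_KEY = CEK1,
                        ENCRYPTION_TYPE = DETERMINISTIC,
                        ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256') NOT NULL
);
Deterministic encryption requires a BIN2 collation on character columns, which is why the collation is specified explicitly.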

NEW QUESTION 5

You plan to create a database.
The database will be used by a Microsoft .NET application for a special event that will last for two days. During the event, data must be highly available.
After the event, the database will be deleted.
You need to recommend a solution to implement the database while minimizing costs. The solution must not affect any existing applications.
What should you recommend? More than one answer choice may achieve the goal. Select the BEST answer.

  • A. SQL Server 2014 Enterprise
  • B. SQL Server 2014 Standard
  • C. SQL Azure
  • D. SQL Server 2014 Express with Advanced Services

Answer: B

Explanation:
Programmability (AMO, ADOMD.Net, OLEDB, XML/A, ASSL) is supported by the Standard and Enterprise editions only. References: Features Supported by the Editions of SQL Server 2014.

NEW QUESTION 6

You administer a Microsoft SQL Server database named Contoso. You create a stored procedure named Sales.ReviewInvoice by running the following Transact-SQL statement:
70-764 dumps exhibit
You need to create a Windows-authenticated login named ContosoSearch and ensure that ContosoSearch can run the Sales.ReviewInvoice stored procedure.
Which three Transact-SQL segments should you use to develop the solution? To answer, move the appropriate Transact-SQL segments from the list of Transact-SQL segments to the answer area and arrange them in the correct order.
70-764 dumps exhibit

  • A. Mastered
  • B. Not Mastered

Answer: A

Explanation:
70-764 dumps exhibit
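The exhibit is not reproduced here, but the usual shape of such a sequence is sketched below; the CONTOSO domain name is an assumption.
-- Sketch: create the Windows login, map it to a database user, grant EXECUTE
CREATE LOGIN [CONTOSO\ContosoSearch] FROM WINDOWS;
GO
USE Contoso;
GO
CREATE USER ContosoSearch FOR LOGIN [CONTOSO\ContosoSearch];
GO
GRANT EXECUTE ON OBJECT::Sales.ReviewInvoice TO ContosoSearch;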

NEW QUESTION 7

Overview
You are a database administrator for a company named Litware, Inc.
Litware is a book publishing house. Litware has a main office and a branch office.
You are designing the database infrastructure to support a new web-based application that is being developed. The web application will be accessed at www.litwareinc.com. Both internal employees and external partners
will use the application.
You have an existing desktop application that uses a SQL Server 2008 database named App1_DB. App1_DB will remain in production.
Requirements
Planned Changes
You plan to deploy a SQL Server 2014 instance that will contain two databases named Database1 and Database2.
All database files will be stored in a highly available SAN. Database1 will contain two tables named Orders and OrderDetails.
Database1 will also contain a stored procedure named usp_UpdateOrderDetails.
The stored procedure is used to update order information. The stored procedure queries the Orders table twice each time the procedure executes.
The rows returned from the first query must be returned on the second query unchanged along with any rows added to the table between the two read operations.
Database1 will contain several queries that access data in the Database2 tables. Database2 will contain a table named Inventory.
Inventory will contain over 100 GB of data.
The Inventory table will have two indexes: a clustered index on the primary key and a nonclustered index.
The column that is used as the primary key will use the identity property.
Database2 will contain a stored procedure named usp_UpdateInventory. usp_UpdateInventory will manipulate a table that contains a self-join that has an unlimited number of hierarchies. All data in Database2 is recreated each day and does not change until the next data creation process. Data from Database2 will be accessed periodically by an external application named Application1. The data from Database2 will be sent to a database named App1_Db1 as soon as changes occur to the data in Database2. Litware plans to use offsite storage for all SQL Server 2014 backups.
Business Requirements
You have the following requirements:
Costs for new licenses must be minimized.
Private information that is accessed by Application must be stored in a secure format.
Development effort must be minimized whenever possible.
The storage requirements for databases must be minimized.
System administrators must be able to run real-time reports on disk usage.
The databases must be available if the SQL Server service fails.
Database administrators must receive a detailed report that contains allocation errors and data corruption.
Application developers must be denied direct access to the database tables. Applications must be denied direct access to the tables.
You must encrypt the backup files to meet regulatory compliance requirements.
The encryption strategy must minimize changes to the databases and to the applications.
You need to recommend an isolation level for usp_UpdateOrderDetails.
Which isolation level should you recommend?

  • A. Read committed
  • B. Repeatable read
  • C. Read uncommitted
  • D. Serializable

Answer: B

Explanation:
- Scenario: Database1 will also contain a stored procedure named usp_UpdateOrderDetails. The stored procedure is used to update order information. The stored procedure queries the Orders table twice each time the procedure executes. The rows returned from the first query must be returned on the second query unchanged along with any rows added to the table between the two read operations.
- REPEATABLE READ Specifies that statements cannot read data that has been modified but not yet committed by other transactions and that no other transactions can modify data that has been read by the current transaction until the current transaction completes.
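A minimal sketch of applying the recommended isolation level inside the procedure; the column list is hypothetical:
-- Sketch: both reads of Orders run under REPEATABLE READ in one transaction
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
BEGIN TRANSACTION;
    SELECT OrderID FROM dbo.Orders;   -- first read
    -- ... update order information ...
    SELECT OrderID FROM dbo.Orders;   -- second read: same rows returned, newly added rows allowed
COMMIT TRANSACTION;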

NEW QUESTION 8

Note: This question is part of a series of questions that use the same or similar answer choices. An answer choice may be correct for more than one question in the series. Each question is independent of the other questions in this series. Information and details provided in a question apply only to that question.
You collect performance metrics on multiple Microsoft SQL Server instances and store the data in a single repository.
You need to examine disk usage, query statistics, and server activity without building custom counters.
What should you use?

  • A. Activity Monitor
  • B. Sp_who3 stored procedure
  • C. Object Explorer in the Microsoft SQL Server Management Studio (SSMS)
  • D. SQL Server Data Collector
  • E. SQL Server Data Tools (SSDT)
  • F. SQL Server Configuration Manager

Answer: D

Explanation:
The data collector is a core component of the data collection platform for SQL Server 2017 and the tools that are provided by SQL Server. The data collector provides one central point for data collection across your database servers and applications. This collection point can obtain data from a variety of sources and is not limited to performance data.

NEW QUESTION 9

Background Corporate Information
Fabrikam, Inc. is a retailer that sells electronics products on the Internet. The company has a headquarters site and one satellite sales office. You have been hired as the database administrator, and the company wants you to change the architecture of the Fabrikam ecommerce site to optimize performance and reduce downtime while keeping capital expenditures to a minimum. To help with the solution, Fabrikam has decided to use cloud resources as well as on-premise servers.
Physical Locations
All of the corporate executives, product managers, and support staff are stationed at the headquarters office. Half of the sales force works at this location. There is also a satellite sales office. The other half of the sales force works at the satellite office in order to have sales people closer to clients in that area. Only sales people work at the satellite location.
Problem Statement
To be successful, Fabrikam needs a website that is fast and has a high degree of system uptime. The current system operates on a single server and the company is not happy with the single point of failure this presents. The current nightly backups have been failing due to insufficient space on the available drives, and manual drive cleanup is often needed to get past the errors. Additional space will not be made available for backups on the HQ or satellite servers. During your investigation, you discover that the sales force reports are causing significant contention.
Configuration Windows Logins
The network administrators have set up Windows groups to make it easier to manage security. Users may belong to more than one group depending on their role. The groups have been set up as shown in the following table:
70-764 dumps exhibit
Server Configuration
The IT department has configured two physical servers with Microsoft Windows Server 2012 R2 and SQL Server 2014 Enterprise Edition and one Windows Azure Server. There are two tiers of storage available for use by database files only: a fast tier and a slower tier. Currently the data and log files are stored on the fast tier of storage only. If a possible use case exists, management would like to utilize the slower tier storage for data files. The servers are configured as shown in the following table:
70-764 dumps exhibit
Database
Currently all information is stored in a single database called ProdDB, created with the following script:
70-764 dumps exhibit
The Product table is in the Production schema owned by the ProductionStaff Windows group. It is the main table in the system so access to information in the Product table should be as fast as possible. The columns in the Product table are defined as shown in the following table:
70-764 dumps exhibit
The SalesOrderDetail table holds the details about each sale. It is in the Sales schema owned by the SalesStaff Windows group. This table is constantly being updated, inserted into, and read. The columns in the SalesOrderDetail table are defined as shown in the following table:
70-764 dumps exhibit
Database Issues
The current database does not perform well. Additionally, a recent disk problem caused the system to go down, resulting in lost sales revenue. In reviewing the current system, you found that there are no automated maintenance procedures. The database is severely fragmented, and everyone has read and write access.
Requirements Database
The database should be configured to maximize uptime and to ensure that very little data is lost in the event of a server failure. To help with performance, the database needs to be modified so that it can support in-memory data, specifically for the Product table, which the CIO has indicated should be a memory-optimized table. The auto-update statistics option is set off on this database. Only product managers are allowed to add products or to make changes to the name, description, price, cost, and supplier. The changes are made in an internal database and pushed to the Product table in ProdDB during system maintenance time. Product managers and others working at the headquarters location also should be able to generate reports that include supplier and cost information.
Customer data access
Customers access the company's website to order products, so they must be able to read product information such as name, description, and price from the Product table. When customers place orders, stored procedures called by the website update product quantity-on-hand values. This means the product table is constantly updated at random times.
Customer support data access
Customer support representatives need to be able to view and not update or change product information. Management does not want the customer support representatives to be able to see the product cost or any supplier information.
Sales force data access
Sales people at both the headquarters office and the satellite office must generate reports that read from the Product and SalesOrderDetail tables. No updates or inserts are ever made by sales people. These reports are run at random times and there can be no reporting downtime to refresh the data set except during the monthly
maintenance window. The reports that run from the satellite office are process intensive queries with large data sets. Regardless of which office runs a sales force report, the SalesOrderDetail table should only return valid, committed order data; any orders not yet committed should be ignored.
Historical Data
The system should keep historical information about customers who access the site so that sales people can see how frequently customers log in and how long they stay on the site.
The information should be stored in a table called Customer Access. Supporting this requirement should have minimal impact on production website performance.
Backups
The recovery strategy for Fabrikam needs to include the ability to do point in time restores and minimize the risk of data loss by performing transaction log backups every 15 minutes.
Database Maintenance
The company has defined a maintenance window every month when the server can be unavailable. Any maintenance functions that require exclusive access should be accomplished during that window.
Project milestones completed
Revoked all existing read and write access to the database, leaving the schema ownership in place.
Configured an Azure storage container secured with the storage account name MyStorageAccount with the primary access key StorageAccountKey on the cloud file server.
SQL Server 2014 has been configured on the satellite server and is ready for use.
On each database server, the fast storage has been assigned to drive letter F:, and the slow storage has been assigned to drive letter D:.
You need to implement a backup strategy to support the requirements.
Which two actions should you perform? Each correct answer presents part of the solution. (Choose two.)

  • A. Create a credential called MyCredential on SQL Server by using a Windows domain account and password.
  • B. Schedule a full backup by using the command BACKUP DATABASE ProdDB TO DISK...
  • C. Create a share on your Windows Azure site by using your Windows Azure storage account information, and grant permission to the SQL Server service login.
  • D. Schedule a full backup by using the command BACKUP DATABASE ProdDB TO URL ... WITH CREDENTIAL=N'MyCredential'
  • E. Create a share on the hot standby site and grant permission to the SQL Server service login.
  • F. Create a credential called MyCredential on SQL Server, using MyStorageAccount for the storage account name and StorageAccountKey for the access key.
  • G. Schedule a full backup by using the command BACKUP DATABASE ProdDB TO SHARE ... WITH CREDENTIAL=N'MyCredential'

Answer: CD

Explanation:
- Scenario: The current nightly backups have been failing due to insufficient space on the available drives, and manual drive cleanup is often needed to get past the errors. Additional space will not be made available for backups on the HQ or satellite servers.
- Need to store files in the cloud.
- Manage your backups to Windows Azure: Using the same methods used to back up to DISK and TAPE, you can now back up to Windows Azure storage by specifying URL as the backup destination.
You can use this feature to manually backup or configure your own backup strategy like you would for a local storage or other off-site options.
This feature is also referred to as SQL Server Backup to URL. See also SQL Server Managed Backup to Windows Azure.
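Using the storage account names from the scenario, a sketch of the credential-plus-URL approach might look like the following; the container and file names are assumptions:
-- Sketch: create a credential from the storage account name and access key,
-- then back up the database to a URL in that storage account
CREATE CREDENTIAL MyCredential
    WITH IDENTITY = 'MyStorageAccount',
    SECRET = 'StorageAccountKey';
GO
BACKUP DATABASE ProdDB
    TO URL = 'https://MyStorageAccount.blob.core.windows.net/backups/ProdDB.bak'
    WITH CREDENTIAL = N'MyCredential', COMPRESSION;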

NEW QUESTION 10

You have a database named DB1. You observe issues with indexes and other consistency issues.
You need to identify and repair all physical database problems while minimizing data loss and database downtime.
Which four Transact SQL statements should you use to develop the solution?
70-764 dumps exhibit

  • A. Mastered
  • B. Not Mastered

Answer: A

Explanation:
70-764 dumps exhibit
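The exhibit's exact statements are not shown here, but a repair sequence of this kind typically follows the pattern sketched below (a sketch only, not the exhibit's answer):
-- Sketch: take the database to single-user mode, repair without data loss, return to multi-user
ALTER DATABASE DB1 SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
GO
DBCC CHECKDB (N'DB1', REPAIR_REBUILD);
GO
ALTER DATABASE DB1 SET MULTI_USER;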

NEW QUESTION 11

You use SQL Server 2014. You create a table within a database by using the following DDL:
70-764 dumps exhibit
The following table illustrates a representative sample of data:
70-764 dumps exhibit
The system is expected to handle 50 million orders a month over the next five years.
You have been instructed by your Team Lead to follow best practices for storage and performance in the utilization of SPARSE columns.
Which columns should you designate as SPARSE? To answer, mark each column as SPARSE or NOT SPARSE in the answer area.
70-764 dumps exhibit

  • A. Mastered
  • B. Not Mastered

Answer: A

Explanation:
Sparse columns are ordinary columns that have an optimized storage for null values. Sparse columns reduce the space requirements for null values at the cost of more overhead to retrieve nonnull values. Consider using sparse columns when the space saved is at least 20 percent to 40 percent.
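For reference, marking a mostly-null column as SPARSE is a simple DDL change; the table and column names below are hypothetical:
-- Sketch: a rarely populated column declared as SPARSE
CREATE TABLE dbo.OrderNotes
(
    OrderID int NOT NULL,
    GiftMessage nvarchar(200) SPARSE NULL   -- mostly NULL, so SPARSE storage saves space
);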

NEW QUESTION 12

Overview
Application Overview
Contoso, Ltd., is the developer of an enterprise resource planning (ERP) application.
Contoso is designing a new version of the ERP application. The previous version of the ERP application used SQL Server 2008 R2.
The new version will use SQL Server 2014.
The ERP application relies on an import process to load supplier data. The import process updates thousands of rows simultaneously, requires exclusive access to the database, and runs daily.
You receive several support calls reporting unexpected behavior in the ERP application. After analyzing the calls, you conclude that users made changes directly to the tables in the database.
Tables
The current database schema contains a table named OrderDetails.
The OrderDetails table contains information about the items sold for each purchase order. OrderDetails stores the product ID, quantities, and discounts applied to each product in a purchase order.
The product price is stored in a table named Products. The Products table was defined by using the SQL_Latin1_General_CP1_CI_AS collation.
A column named ProductName was created by using the varchar data type. The database contains a table named Orders.
Orders contains all of the purchase orders from the last 12 months. Purchase orders that are older than 12 months are stored in a table named OrdersOld.
The previous version of the ERP application relied on table-level security.
Stored Procedures
The current version of the database contains stored procedures that change two tables. The following shows the relevant portions of the two stored procedures:
70-764 dumps exhibit
Customer Problems
Installation Issues
The current version of the ERP application requires that several SQL Server logins be set up to function
correctly. Most customers set up the ERP application in multiple locations and must create logins multiple times.
Index Fragmentation Issues
Customers discover that clustered indexes often are fragmented. To resolve this issue, the customers defragment the indexes more frequently. All of the tables affected by fragmentation have the following columns that are used as the clustered index key:
70-764 dumps exhibit
Backup Issues
Customers who have large amounts of historical purchase order data report that backup time is unacceptable.
Search Issues
Users report that when they search product names, the search results exclude product names that contain accents, unless the search string includes the accent.
Missing Data Issues
Customers report that when they make a price change in the Products table, they cannot retrieve the price that the item was sold for in previous orders.
Query Performance Issues
Customers report that query performance degrades very quickly. Additionally, the customers report that users cannot run queries when SQL Server runs maintenance tasks.
Import Issues
During the monthly import process, database administrators receive many support calls from users who report that they cannot access the supplier data. The database administrators want to reduce the amount of time required to import the data.
Design Requirements
File Storage Requirements
The ERP database stores scanned documents that are larger than 2 MB. These files must only be accessed through the ERP application. File access must have the best possible read and write performance.
Data Recovery Requirements
If the import process fails, the database must be returned to its prior state immediately.
Security Requirements
You must provide users with the ability to execute functions within the ERP application, without having direct access to the underlying tables.
Concurrency Requirements
You must reduce the likelihood of deadlocks occurring when Sales.Proc1 and Sales.Proc2 execute.
You need to recommend a solution that addresses the file storage requirements.
What should you include in the recommendation?

  • A. FileStream
  • B. FileTable
  • C. The varbinary data type
  • D. The image data type

Answer: B

Explanation:
- Scenario: File Storage Requirements The ERP database stores scanned documents that are larger than 2 MB. These files must only be accessed through the ERP application. File access must have the best possible read and write performance.
- FileTables remove a significant barrier to the use of SQL Server for the storage and management of unstructured data that is currently residing as files on file servers.
Enterprises can move this data from file servers into FileTables to take advantage of integrated administration and services provided by SQL Server. At the same time, they can maintain Windows application compatibility for their existing Windows applications that see this data as files in the file system.
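A minimal sketch of setting up a FileTable, assuming FILESTREAM is already enabled at the instance level; the database, filegroup, path, and directory names are hypothetical:
-- Sketch: add a FILESTREAM filegroup, enable non-transacted access, create a FileTable
ALTER DATABASE ERPDB ADD FILEGROUP DocumentFG CONTAINS FILESTREAM;
ALTER DATABASE ERPDB ADD FILE (NAME = N'ERPDocs', FILENAME = N'D:\Data\ERPDocs')
    TO FILEGROUP DocumentFG;
ALTER DATABASE ERPDB SET FILESTREAM (NON_TRANSACTED_ACCESS = FULL,
    DIRECTORY_NAME = N'ERPDocuments');
GO
USE ERPDB;
GO
CREATE TABLE dbo.ScannedDocuments AS FILETABLE
    WITH (FILETABLE_DIRECTORY = 'ScannedDocuments');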

NEW QUESTION 13

You have a SQL Server instance on a server named Server1. You need to recommend a solution to perform the following tasks every week:
Rebuild the indexes by using a new fill factor.
Run a custom T-SQL command.
Back up the databases.
What should you recommend? More than one answer choice may achieve the goal. Select the BEST answer.

  • A. A trigger
  • B. An alert
  • C. A maintenance plan
  • D. Windows PowerShell
  • E. A system policy

Answer: C

Explanation:
Maintenance plans create a workflow of the tasks required to make sure that your database is optimized, regularly backed up, and free of inconsistencies.

NEW QUESTION 14

You have a server named SQL1 that has SQL Server 2012 installed. SQL1 hosts a database named Database1.
Database1 contains a table named Table1. Table1 is partitioned across five filegroups based on the Date field. The schema of Table1 is configured as shown in the following table.
70-764 dumps exhibit
Table1 contains the indexes shown in the following table.
70-764 dumps exhibit
You need to recommend an index strategy to maximize performance for the queries that consume the indexes available to Table1.
Which type of index storage should you recommend? To answer, drag the appropriate index storage type to the correct index in the answer area.
70-764 dumps exhibit

  • A. Mastered
  • B. Not Mastered

Answer: A

Explanation:
Index Storage Type
Designing a partitioned index independently (unaligned) of the base table can be useful in the following cases:
- The base table has not been partitioned.
- The index key is unique and it does not contain the partitioning column of the table.
- You want the base table to participate in collocated joins with more tables using different join columns.

NEW QUESTION 15

You are the database administrator of a Microsoft SQL Server instance. Developers are writing stored procedures to send emails using sp_send_dbmail. Database Mail is enabled.
You need to configure each account’s profile security and meet the following requirements:
Account SMTP1_Account must only be usable by logins that have been given explicit permissions to use the SMTP1_profile.
Account SMTP2_Account must only be usable by logins who are a member of the [DatabaseMailUserRole] role in msdb.
In the table below, identify the profile type that must be used for each account. NOTE: Make only one selection in each column.
70-764 dumps exhibit

  • A. Mastered
  • B. Not Mastered

Answer: A

Explanation:
SMTP1_Account: Private Profile
When no profile_name is specified, sp_send_dbmail uses the default private profile for the current user. If the user does not have a default private profile, sp_send_dbmail uses the default public profile for the msdb database.
SMTP2_Account: Default Profile
Execute permissions for sp_send_dbmail default to all members of the DatabaseMailUserRole database role in the msdb database.
References:
https://docs.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sp-send-dbmail-transact-sql
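As an illustration of the private-profile side of this, the sketch below grants one principal access to SMTP1_profile through msdb; the user name is hypothetical:
-- Sketch: make SMTP1_profile available only to an explicitly granted principal
EXECUTE msdb.dbo.sysmail_add_principalprofile_sp
    @principal_name = N'AppMailUser',    -- hypothetical msdb user
    @profile_name   = N'SMTP1_profile',
    @is_default     = 0;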

NEW QUESTION 16

Note: This question is part of a series of questions that use the same scenario. For your convenience, the scenario is repeated in each question. Each question presents a different goal and answer choices, but the text of the scenario is exactly the same in each question in this series.
Start of repeated scenario.
You have five servers that run Microsoft Windows 2012 R2. Each server hosts a Microsoft SQL Server instance. The topology for the environment is shown in the following diagram.
70-764 dumps exhibit
You have an Always On Availability group named AG1. The details for AG1 are shown in the following table.
70-764 dumps exhibit
Instance1 experiences heavy read-write traffic. The instance hosts a database named OperationsMain that is four terabytes (TB) in size. The database has multiple data files and filegroups. One of the filegroups is read_only and is half of the total database size.
Instance4 and Instance5 are not part of AG1. Instance4 is engaged in heavy read-write I/O.
Instance5 hosts a database named StagedExternal. A nightly BULK INSERT process loads data into an empty table that has a rowstore clustered index and two nonclustered rowstore indexes.
You must minimize the growth of the StagedExternal database log file during the BULK INSERT operations and perform point-in-time recovery after the BULK INSERT transaction. Changes made must not interrupt the log backup chain.
You plan to add a new instance named Instance6 to a datacenter that is geographically distant from Site1 and Site2. You must minimize latency between the nodes in AG1.
All databases use the full recovery model. All backups are written to the network location \\SQLBackup\. A separate process copies backups to an offsite location. You should minimize both the time required to restore the databases and the space required to store backups. The recovery point objective (RPO) for each instance is shown in the following table.
70-764 dumps exhibit
Full backups of OperationsMain take longer than six hours to complete. All SQL Server backups use the keyword COMPRESSION.
You plan to deploy the following solutions to the environment. The solutions will access a database named DB1 that is part of AG1.
Reporting system: This solution accesses data in DB1 with a login that is mapped to a database user that is a member of the db_datareader role. The user has EXECUTE permissions on the database. Queries make no changes to the data. The queries must be load balanced over variable read-only replicas.
Operations system: This solution accesses data in DB1 with a login that is mapped to a database user that is a member of the db_datareader and db_datawriter roles. The user has EXECUTE permissions on the database. Queries from the operations system will perform both DDL and DML operations.
The wait statistics monitoring requirements for the instances are described in the following table.
70-764 dumps exhibit
End of repeated scenario.
You need to reduce the amount of time it takes to back up OperationsMain. What should you do?

  • A. Modify the backup script to use the keyword NO_COMPRESSION in the WITH statement.
  • B. Modify the backup script to use the keywords INIT and SKIP in the WITH statement.
  • C. Run the following Transact-SQL statement for each file in OperationsMain: BACKUP DATABASE OperationsMain FILE […]
  • D. Run the following Transact-SQL statement: BACKUP DATABASE OperationsMain READ_WRITE_FILEGROUPS

Answer: D

Explanation:
READ_WRITE_FILEGROUPS specifies that all read/write filegroups be backed up in the partial backup. If the database is read-only, READ_WRITE_FILEGROUPS includes only the primary filegroup.
Scenario: Full backups of OperationsMain take longer than six hours to complete. All SQL Server backups use the keyword COMPRESSION.
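A sketch of the partial-backup statement the answer refers to, keeping the COMPRESSION keyword the scenario already uses; the destination file name is an assumption:
-- Sketch: back up only the read/write filegroups of OperationsMain
BACKUP DATABASE OperationsMain
    READ_WRITE_FILEGROUPS
    TO DISK = N'\\SQLBackup\OperationsMain_Partial.bak'
    WITH COMPRESSION;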

NEW QUESTION 17

You have a SQL Server 2012 database named DB1.
You plan to import a large number of records from a SQL Azure database to DB1.
You need to recommend a solution to minimize the amount of space used in the transaction log during the import operation.
What should you include in the recommendation?

  • A. a new log file
  • B. a new filegroup
  • C. the full recovery model
  • D. a new partitioned table
  • E. the bulk-logged recovery model

Answer: E

Explanation:
Compared to the full recovery model, which fully logs all transactions, the bulk-logged recovery model minimally logs bulk operations, although fully logging other transactions. The bulk-logged recovery model protects against media failure and, for bulk operations, provides the best performance and least log space usage.
Note:
The bulk-logged recovery model is a special-purpose recovery model that should be used only intermittently to improve the performance of certain large-scale bulk operations, such as bulk imports of large amounts of data.
References: Recovery Models (SQL Server)
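A minimal sketch of switching to the bulk-logged model around the import and then returning to full recovery; the backup path is hypothetical:
-- Sketch: use bulk-logged recovery only for the duration of the large import
ALTER DATABASE DB1 SET RECOVERY BULK_LOGGED;
-- ... run the import ...
ALTER DATABASE DB1 SET RECOVERY FULL;
BACKUP LOG DB1 TO DISK = N'D:\Backups\DB1_AfterImport.trn';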

NEW QUESTION 18

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this sections, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have a data warehouse that stores sales data. One fact table has 100 million rows. You must reduce storage needs for the data warehouse.
You need to implement a solution that uses column-based storage and provides real-time analytics for the operational workload.
Solution: You remove all clustered indexes, sort the transactions in the table, and create a clustered index on the table, so that the table is not a heap.
Does the solution meet the goal?

  • A. Yes
  • B. No

Answer: A

Explanation:
Columnstore indexes are the standard for storing and querying large data warehousing fact tables. It uses column-based data storage and query processing to achieve up to 10x query performance gains in your data warehouse over traditional row-oriented storage, and up to 10x data compression over the uncompressed data size.
In SQL Server, rowstore refers to table where the underlying data storage format is a heap, a clustered index, or a memory-optimized table.
References: https://docs.microsoft.com/en-us/sql/relational-databases/indexes/columnstore-indexes-overview
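For context, converting a fact table to column-based storage is typically done with a clustered columnstore index; the table name below is hypothetical:
-- Sketch: store the fact table in columnstore format
CREATE CLUSTERED COLUMNSTORE INDEX CCI_FactSales ON dbo.FactSales;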

NEW QUESTION 19

Note: This question is part of a series of questions that use the same scenario. For your convenience, the scenario is repeated in each question. Each question presents a different goal and answer choices, but the text of the scenario is exactly the same in each question in this series.
You are a database administrator for a company that has an on-premises Microsoft SQL Server environment and Microsoft Azure SQL Database instances. The environment hosts several customer databases, and each customer uses a dedicated instance. The environments that you manage are shown in the following table.
70-764 dumps exhibit
You need to configure auditing for WDWDB.
In the table below, identify the event type that you must audit for each activity.
70-764 dumps exhibit

  • A. Mastered
  • B. Not Mastered

Answer: A

Explanation:
70-764 dumps exhibit

NEW QUESTION 20

Note: This question is part of a series of questions that use the same scenario. For your convenience, the
scenario is repeated in each question. Each question presents a different goal and answer choices, but the text of the scenario is exactly the same in each question in this series.
You are a database administrator for a company that has an on-premises Microsoft SQL Server environment and Microsoft Azure SQL Database instances. The environment hosts several customer databases, and each customer uses a dedicated instance. The environments that you manage are shown in the following table.
70-764 dumps exhibit
You need to configure monitoring for Tailspin Toys.
In the table below, identify the monitoring tool that you must use for each activity.
NOTE: Make only one selection in each column.
70-764 dumps exhibit

  • A. Mastered
  • B. Not Mastered

Answer: A

Explanation:
Monitoring from application: Transact-SQL
Transact-SQL can be used to monitor a customized application.
Trend analysis: System Monitor
System Monitor can provide trend analysis.
From the question:
70-764 dumps exhibit
Tailspin Toys has a custom application that accesses a hosted database named TSpinDB. The application will monitor TSpinDB and capture information over time about which database objects are accessed and how frequently they are accessed.
References:
https://docs.microsoft.com/en-us/sql/relational-databases/performance/performance-monitoring-and-tuning-tools

NEW QUESTION 21

A Microsoft SQL Server database named DB1 has two filegroups named FG1 and FG2. You implement a backup strategy that creates backups for the filegroups.
DB1 experiences a failure. You must restore FG1 and then FG2.
You need to ensure that the database remains in the RECOVERING state until the restoration of FG2 completes. After the restoration of FG2 completes, the database must be online.
What should you specify when you run the recovery command?

  • A. the WITH NORECOVERY clause for FG1 and the WITH RECOVERY clause for FG2
  • B. the WITH RECOVERY clause for FG1 and the WITH RECOVERY clause for FG2
  • C. the WITH RECOVERY clause for both FG1 and FG2
  • D. the WITH NORECOVERY clause for both FG1 and FG2

Answer: A
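A sketch of the restore sequence answer A describes; the backup file names are hypothetical:
-- Sketch: DB1 stays RECOVERING until the final filegroup is restored WITH RECOVERY
RESTORE DATABASE DB1 FILEGROUP = 'FG1'
    FROM DISK = N'D:\Backups\DB1_FG1.bak' WITH NORECOVERY;
RESTORE DATABASE DB1 FILEGROUP = 'FG2'
    FROM DISK = N'D:\Backups\DB1_FG2.bak' WITH RECOVERY;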

NEW QUESTION 22

You are designing a Windows Azure SQL Database for an order fulfillment system. You create a table named Sales.Orders with the following script.
70-764 dumps exhibit
Each order is tracked by using one of the following statuses:
Fulfilled
Shipped
Ordered
Received
You need to design the database to ensure that you can retrieve the following information:
The current status of an order
The previous status of an order.
The date when the status changed.
The solution must minimize storage.
More than one answer choice may achieve the goal. Select the BEST answer.

  • A. To the Sales.Orders table, add three columns named Status, PreviousStatus and ChangeDate.
  • B. Update rows as the order status changes.
  • C. Create a new table named Sales.OrderStatus that contains three columns named OrderID, StatusDate, and Status.
  • D. Insert new rows into the table as the order status changes.
  • E. Implement change data capture on the Sales.Orders table.
  • F. To the Sales.Orders table, add three columns named FulfilledDate, ShippedDate, and ReceivedDate.Update the value of each column from null to the appropriate date as the order status changes.

Answer: B

NEW QUESTION 23

You need to recommend a backup process for an Online Transaction Processing (OLTP) database. The process must meet the following requirements:
Ensure that if a hardware failure occurs, you can bring the database online with a minimum amount of data loss.
Minimize the amount of administrative effort required to restore any lost data.
What should you include in the recommendation? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
70-764 dumps exhibit

  • A. Mastered
  • B. Not Mastered

Answer: A

Explanation:
70-764 dumps exhibit

NEW QUESTION 24

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have a database named DB1 that contains a table named Table1. You need to audit all updates to Table1.
Solution: You convert Table1 to a system-versioned temporal table.
Does this meet the goal?

  • A. Yes
  • B. No

Answer: A
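A minimal sketch of converting an existing table to a system-versioned temporal table, assuming Table1 has a primary key; the period column, constraint, and history table names are assumptions:
-- Sketch: add period columns and enable system versioning for Table1
ALTER TABLE dbo.Table1 ADD
    ValidFrom datetime2 GENERATED ALWAYS AS ROW START HIDDEN NOT NULL
        CONSTRAINT DF_Table1_ValidFrom DEFAULT SYSUTCDATETIME(),
    ValidTo datetime2 GENERATED ALWAYS AS ROW END HIDDEN NOT NULL
        CONSTRAINT DF_Table1_ValidTo DEFAULT CONVERT(datetime2, '9999-12-31 23:59:59.9999999'),
    PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo);
GO
ALTER TABLE dbo.Table1
    SET (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.Table1_History));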

NEW QUESTION 25

Note: This question is part of a series of questions that use the same scenario. For your convenience, the scenario is repeated in each question. Each question presents a different goal and answer choices, but the text of the scenario is exactly the same in each question in this series.
You have five servers that run Microsoft Windows 2012 R2. Each server hosts a Microsoft SQL Server instance. The topology for the environment is shown in the following diagram.
70-764 dumps exhibit
You have an Always On Availability group named AG1. The details for AG1 are shown in the following table.
70-764 dumps exhibit
Instance1 experiences heavy read-write traffic. The instance hosts a database named OperationsMain that is four terabytes (TB) in size. The database has multiple data files and filegroups. One of the filegroups is read_only and is half of the total database size.
Instance4 and Instance5 are not part of AG1. Instance4 is engaged in heavy read-write I/O.
Instance5 hosts a database named StagedExternal. A nightly BULK INSERT process loads data into an empty table that has a rowstore clustered index and two nonclustered rowstore indexes.
You must minimize the growth of the StagedExternal database log file during the BULK INSERT operations and perform point-in-time recovery after the BULK INSERT transaction. Changes made must not interrupt the log backup chain.
You plan to add a new instance named Instance6 to a datacenter that is geographically distant from Site1 and Site2. You must minimize latency between the nodes in AG1.
All databases use the full recovery model. All backups are written to the network location \\SQLBackup\. A separate process copies backups to an offsite location. You should minimize both the time required to restore
the databases and the space required to store backups. The recovery point objective (RPO) for each instance is shown in the following table.
70-764 dumps exhibit
Full backups of OperationsMain take longer than six hours to complete. All SQL Server backups use the keyword COMPRESSION.
You plan to deploy the following solutions to the environment. The solutions will access a database named DB1 that is part of AG1.
Reporting system: This solution accesses data in DB1 with a login that is mapped to a database user that is a member of the db_datareader role. The user has EXECUTE permissions on the database. Queries make no changes to the data. The queries must be load balanced over variable read-only replicas.
Operations system: This solution accesses data in DB1 with a login that is mapped to a database user that is a member of the db_datareader and db_datawriter roles. The user has EXECUTE permissions on the database. Queries from the operations system will perform both DDL and DML operations.
The wait statistics monitoring requirements for the instances are described in the following table.
70-764 dumps exhibit
You need to reduce the amount of time it takes to back up OperationsMain. What should you do?

  • A. Modify the backup script to use the keyword SKIP in the FILE_SNAPSHOT statement.
  • B. Modify the backup script to use the keyword SKIP in the WITH statement
  • C. Modify the backup script to use the keyword NO_COMPRESSION in the WITH statement.
  • D. Modify the full database backups script to stripe the backup across multiple backup files.

Answer: D

Explanation:
One of the filegroups is read_only, so it only needs to be backed up once. Partial backups are useful whenever you want to exclude read-only filegroups. A partial backup resembles a full database backup, but a partial backup does not contain all the filegroups. Instead, for a read-write database, a partial backup contains the data in the primary filegroup, every read-write filegroup, and, optionally, one or more read-only files. A partial backup of a read-only database contains only the primary filegroup.
From scenario: Instance1 experiences heavy read-write traffic. The instance hosts a database named OperationsMain that is four terabytes (TB) in size. The database has multiple data files and filegroups. One of the filegroups is read_only and is half of the total database size.
References:
https://docs.microsoft.com/en-us/sql/relational-databases/backup-restore/partial-backups-sql-server
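A sketch of striping the full backup across several files so the backup can write in parallel; the file names are hypothetical and COMPRESSION is kept as in the scenario:
-- Sketch: stripe the full backup of OperationsMain across four backup files
BACKUP DATABASE OperationsMain
    TO DISK = N'\\SQLBackup\OperationsMain_1.bak',
       DISK = N'\\SQLBackup\OperationsMain_2.bak',
       DISK = N'\\SQLBackup\OperationsMain_3.bak',
       DISK = N'\\SQLBackup\OperationsMain_4.bak'
    WITH COMPRESSION;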

NEW QUESTION 26

General Overview
You are the Senior Database Administrator (DBA) for a software development company named Leafield Solutions. The company develops software applications custom designed to meet customer requirements.
Requirements
Leafield Solutions has been asked by a customer to develop a web-based Enterprise Resource Planning and Management application. The new application will eventually replace a desktop application that the customer is currently using. The current application will remain in use while the users are trained to use the new web-based application.
You need to design the SQL Server and database infrastructure for the web-based application.
Databases
You plan to implement databases named Customers, Sales, Products, Current_Inventory, and TempReporting. The Sales database contains a table named OrderTotals and a table named SalesInfo.
A stored procedure named SPUpdateSalesInfo reads data in the OrderTotals table and modifies data in the SalesInfo table.
The stored procedure then reads data in the OrderTotals table a second time and makes further changes to the information in the SalesInfo table.
The Current_Inventory database contains a large table named Inv_Current. The Inv_Current table has a clustered index for the primary key and a nonclustered index. The primary key column uses the identity property.
The data in the Inv_Current table is over 120GB in size. The tables in the Current_Inventory database are accessed by multiple queries in the Sales database.
Another table in the Current_Inventory database contains a self-join with an unlimited number of hierarchies. This table is modified by a stored procedure named SPUpdate2.
An external application named ExternalApp1 will periodically query the Current_Inventory database to generate statistical information. The TempReporting database contains a single table named GenInfo.
A stored procedure named SPUPdateGenInfo combines data from multiple databases and generates millions of rows of data in the GenInfo table.
The GenInfo table is used for reports.
When the information in GenInfo is generated, a reporting process reads data from the Inv_Current table and queries information in the GenInfo table based on that data.
The GenInfo table is deleted after the reporting process completes. The Products database contains tables named ProductNames and ProductTypes.
Current System
The current desktop application uses data stored in a SQL Server 2005 database named DesABCopAppDB.
This database will remain online and data from the Current_Inventory database will be copied to it as soon as data is changed in the Current_Inventory database.
SQL Servers
A new SQL Server 2012 instance will be deployed to host the databases for the new system. The databases will be hosted on a Storage Area Network (SAN) that provides highly available storage.
Design Requirements
Your SQL Server infrastructure and database design must meet the following requirements:
Confidential information in the Current_ Inventory database that is accessed by ExternalApp1 must be securely stored.
Direct access to database tables by developers or applications must be denied.
The account used to generate reports must have restrictions on the hours when it is allowed to make a connection.
Deadlocks must be analyzed with the use of Deadlock Graphs.
In the event of a SQL Server failure, the databases must remain available.
Software licensing and database storage costs must be minimized.
Development effort must be minimized.
The Tempdb databases must be monitored for insufficient free space.
Failed authentication requests must be logged.
Every time a new row is added to the ProductTypes table in the Products database, a user defined function that validates the row must be called before the row is added to the table.
When SPUpdateSalesInfo queries data in the OrderTotals table the first time, the same rows must be returned along with any newly added rows when SPUpdateSalesInfo queries data in the OrderTotals table the second time.
You need to enable users to modify data in the database tables using UPDATE operations. You need to implement a solution that meets the design requirements.
What should you configure?

  • A. You should configure a server role.
  • B. You should configure a database role.
  • C. You should configure functions that use the EXECUTE AS statement.
  • D. You should configure stored procedures that use the EXECUTE AS statement.

Answer: D
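A minimal sketch of the kind of stored procedure answer D describes; the schema, column, and role names are hypothetical:
-- Sketch: callers get EXECUTE on the procedure but no direct table access
CREATE PROCEDURE Sales.usp_UpdateSalesInfoRow
    @SalesInfoID int,
    @NewAmount   money
WITH EXECUTE AS OWNER         -- statement runs under the owner's permissions
AS
BEGIN
    UPDATE dbo.SalesInfo
    SET Amount = @NewAmount
    WHERE SalesInfoID = @SalesInfoID;
END;
GO
GRANT EXECUTE ON Sales.usp_UpdateSalesInfoRow TO SalesUsers;   -- hypothetical database role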

NEW QUESTION 27

You administer a Microsoft SQL Server 2016 instance.
After a routine shutdown, the drive that contains tempdb fails. You need to be able to start the SQL Server.
What should you do?

  • A. Modify tempdb location in startup parameters.
  • B. Start SQL Server in minimal configuration mode.
  • C. Start SQL Server in single-user mode.
  • D. Configure SQL Server to bypass Windows application logging.

Answer: B

NEW QUESTION 28
......

P.S. Certshared now are offering 100% pass ensure 70-764 dumps! All 70-764 exam questions have been updated with correct answers: https://www.certshared.com/exam/70-764/ (438 New Questions)