70-473 Design and Implement Cloud Data Platform Solutions
“70-473 Design and Implement Cloud Data Platform Solutions” (beta exam) exam preparation document. Throughout the document, the URLs of the original sources can be found. To be sure you have the latest information, read the online documents. To make studying easier, I tried to pick the information that seems most important or relevant for the exam. Last update: December 29, 2015 15:26 CET. Used sources:
- Microsoft Virtual Academy
  o https://mva.microsoft.com/search/SearchResults.aspx?q=sql%202014
- Azure SQL Database Documentation
  o https://azure.microsoft.com/en-us/documentation/services/sql-database
- TechNet Library SQL Server
  o https://technet.microsoft.com/en-us/library/mt590198(v=sql.1).aspx
- Channel 9, Azure SQL Database
  o https://channel9.msdn.com/Search?term=azure%20sql%20database#ch9Search&lang-en=en&lang-nl=nl&pubDate=year
- Channel 9, SQL Server 2014
  o https://channel9.msdn.com/Search?term=sql%20server%202014#ch9Search&lang-nl=nl&lang-en=en
Contents

1 Design and implement database solutions for Microsoft SQL Server and SQL Database (20–25%)
  1.1 Design a hybrid SQL Server solution
    1.1.1 Design Geo/DR topology
    1.1.2 Design a data storage architecture
    1.1.3 Design a security architecture
    1.1.4 Design a data load strategy
  1.2 Implement SQL Server on Azure Virtual Machines (VMs)
    1.2.1 Provision SQL Server in an Azure VM
    1.2.2 Configure firewall rules
    1.2.3 Configure and optimize storage
    1.2.4 Migrate an on-premises database to Microsoft Azure
    1.2.5 Configure and optimize VM sizes by workload
  1.3 Design a SQL Database solution
    1.3.1 Design a solution architecture
    1.3.2 Design Geo/DR topology
    1.3.3 Design a security architecture
    1.3.4 Design a data load strategy
    1.3.5 Determine the appropriate service tier
  1.4 Implement SQL Database
    1.4.1 Provision SQL Database
    1.4.2 Configure firewall rules
    1.4.3 Configure active geo-replication
    1.4.4 Migrate an on-premises database to SQL Database
    1.4.5 Configure for scale and performance
  1.5 Design and implement data warehousing on Azure
    1.5.1 Design a data warehousing solution on Azure
    1.5.2 Design a data load strategy and topology
    1.5.3 Configure SQL Data Warehouse
    1.5.4 Migrate an on-premises database to SQL Data Warehouse
2 Manage database management systems (DBMS) security (25–30%)
  2.1 Design and implement SQL Server Database security
    2.1.1 Configure firewalls
    2.1.2 Manage logins, users, and roles
    2.1.3 Assign permissions
    2.1.4 Configure auditing
    2.1.5 Configure transparent database encryption
  2.2 Implement Azure SQL Database security
    2.2.1 Configure firewalls
    2.2.2 Manage logins, users, and roles
    2.2.3 Assign permissions
    2.2.4 Configure auditing
    2.2.5 Configure row-level security
    2.2.6 Configure data encryption
    2.2.7 Configure data masking
    2.2.8 Configure Always Encrypted
3 Design for high availability, disaster recovery, and scalability (25–30%)
  3.1 Design and implement high availability solutions
    3.1.1 Design a high availability solution topology
    3.1.2 Implement high availability solutions between on-premises and Azure
    3.1.3 Design cloud-based backup solutions
    3.1.4 Implement backup and recovery strategies
  3.2 Design and implement scalable solutions
    3.2.1 Design a scale-out solution
    3.2.2 Implement multi-master scenarios with database replication
    3.2.3 Implement elastic scale for SQL Database
  3.3 Design and implement SQL Database data recovery
    3.3.1 Design a backup solution for SQL Database
    3.3.2 Implement self-service restore
    3.3.3 Copy and export databases
4 Monitor and manage database implementations on Azure (25–30%)
  4.1 Monitor and troubleshoot SQL Server VMs on Azure
    4.1.1 Monitor database and instance activity
    4.1.2 Monitor using dynamic management views (DMVs) and dynamic management functions (DMFs)
    4.1.3 Monitor performance and scalability
  4.2 Monitor and troubleshoot SQL Database
    4.2.1 Monitor and troubleshoot SQL Database
    4.2.2 Monitor database activity
    4.2.3 Monitor using DMVs and DMFs
    4.2.4 Monitor performance and scalability
  4.3 Automate and manage database implementations on Azure
    4.3.1 Manage SQL Server in Azure VMs with PowerShell
    4.3.2 Manage Azure SQL Database with PowerShell
    4.3.3 Configure Automation and Runbooks
1 Design and implement database solutions for Microsoft SQL Server and SQL Database (20–25%)

1.1 Design a hybrid SQL Server solution

MVA course: Platform for Hybrid Cloud with SQL Server 2014 Jump Start

Extend on-premises AlwaysOn Availability Groups to Azure:
https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-sql-server-extend-on-premises-alwayson-availability-groups

Selecting a SQL Server option in Azure: Azure SQL Database (PaaS) or SQL Server on Azure VMs (IaaS):
https://azure.microsoft.com/en-us/documentation/articles/data-management-azure-sql-database-and-sql-server-iaas

Learn how each option fits into the Microsoft data platform and get help matching the right option to your business requirements. Whether you prioritize cost savings or minimal administration ahead of everything else, this article can help you decide which approach delivers against the business requirements you care about most.
When designing an application, four basic options are available for hosting the SQL Server part of the application:
- SQL Server on non-virtualized physical machines
- SQL Server in on-premises virtualized machines (private cloud)
- SQL Server in an Azure Virtual Machine (public cloud)
- Azure SQL Database (public cloud)

The following comparison summarizes the main characteristics of SQL Database and SQL Server on Azure VMs:

Best for
- SQL Database: New cloud-designed applications that have time constraints in development and marketing; applications that need built-in high availability, disaster recovery, and upgrade mechanisms; teams that do not want to manage the underlying operating system and configuration settings; databases of up to 1 TB in size; applications using scale-out patterns; building Software-as-a-Service (SaaS) applications.
- SQL Server in Azure VM: Existing applications that require fast migration to the cloud with minimal changes; SQL Server applications that require access to on-premises resources (such as Active Directory) from Azure via a secure tunnel; scenarios where you need a customized IT environment with full administrative rights; rapid development and test scenarios when you do not want to buy on-premises non-production SQL Server hardware; disaster recovery for on-premises SQL Server applications using backup to Azure Storage (http://msdn.microsoft.com/library/jj919148.aspx) or AlwaysOn replicas with Azure VMs; large databases that are bigger than 1 TB in size.

Resources
- SQL Database: You do not want to employ IT resources for support and maintenance of the underlying infrastructure; you want to focus on the application layer.
- SQL Server in Azure VM: You have IT resources for support and maintenance.

Total cost of ownership
- SQL Database: Eliminates hardware costs and reduces administrative costs.
- SQL Server in Azure VM: Eliminates hardware costs.

Business continuity
- SQL Database: In addition to built-in fault tolerance infrastructure capabilities, Azure SQL Database provides features such as Point in Time Restore, Geo-Restore, and Geo-Replication to increase business continuity. For more information, see the SQL Database business continuity overview.
- SQL Server in Azure VM: SQL Server on Azure VMs lets you set up a high availability and disaster recovery solution for your database's specific needs, so you can have a system that is highly optimized for your application. You can test and run failovers yourself when needed. For more information, see High Availability and Disaster Recovery for SQL Server on Azure Virtual Machines.

Hybrid cloud
- SQL Database: Your on-premises application can access data in Azure SQL Database.
- SQL Server in Azure VM: With SQL Server on Azure VMs, you can have applications that run partly in the cloud and partly on-premises. For example, you can extend your on-premises network and Active Directory domain to the cloud via Azure Virtual Network. In addition, you can store on-premises data files in Azure Storage using SQL Server Data Files in Azure (http://msdn.microsoft.com/library/dn385720.aspx). For more information, see Introduction to SQL Server 2014 Hybrid Cloud (http://msdn.microsoft.com/library/dn606154.aspx). Also supports disaster recovery for on-premises SQL Server applications using SQL Server Backup and Restore with Azure Blob Storage (http://msdn.microsoft.com/library/jj919148.aspx) or AlwaysOn replicas in Azure VMs.
Getting Started with Azure SQL Data Sync:
https://azure.microsoft.com/en-us/documentation/articles/sql-database-get-started-sql-data-sync
In this tutorial, you create a hybrid (SQL Server and SQL Database instances) sync group, fully configured and synchronizing on the schedule you set.
1.1.1 Design Geo/DR topology

Designing cloud applications for business continuity using Geo-Replication:
https://azure.microsoft.com/en-us/documentation/articles/sql-database-designing-cloud-solutions-for-disaster-recovery

Overview

The Active Geo-Replication feature implements a mechanism to provide database redundancy within the same Microsoft Azure region or in different regions (geo-redundancy). Active Geo-Replication asynchronously replicates committed transactions from a database to up to four copies of the database on different servers. The original database becomes the primary database of the continuous copy. Each continuous copy is referred to as an online secondary database. The primary database asynchronously replicates committed transactions to each of the online secondary databases. While at any given point the online secondary data might be slightly behind the primary database, the online secondary data is guaranteed to always be transactionally consistent with changes committed to the primary database. Active Geo-Replication supports up to four online secondaries, or up to three online secondaries and one offline secondary.

One of the primary benefits of Active Geo-Replication is that it provides a database-level disaster recovery solution. Using Active Geo-Replication, you can configure a user database in the Premium service tier to replicate transactions to databases on different Microsoft Azure SQL Database servers within the same or different regions. Cross-region redundancy enables applications to recover from a
permanent loss of a datacenter caused by natural disasters, catastrophic human errors, or malicious acts. Another key benefit is that the online secondary databases are readable. Therefore, an online secondary can act as a load balancer for read workloads such as reporting. While you can create an online secondary in a different region for disaster recovery, you could also have an online secondary in the same region on a different server. Both online secondary databases can be used to balance read only workloads serving clients distributed across several regions. Other scenarios where Active Geo-Replication can be used include:
- Database migration: You can use Active Geo-Replication to migrate a database from one server to another online with minimum downtime.
- Application upgrades: You can use the online secondary as a fail-back option.

To achieve real business continuity, adding redundancy between datacenters to relational storage is only part of the solution. Recovering an application (service) end-to-end after a disastrous failure requires recovery of all components that constitute the service and any dependent services. Examples of these components include the client software (for example, a browser with custom JavaScript), web front ends, storage, and DNS. It is critical that all components are resilient to the same failures and become available within the recovery time objective (RTO) of your application. Therefore, you need to identify all dependent services and understand the guarantees and capabilities they provide. Then, you must take adequate steps to ensure that your service functions during the failover of the services on which it depends. For more information about designing solutions for disaster recovery, see Designing Cloud Solutions for Disaster Recovery Using Active Geo-Replication.

Active Geo-Replication capabilities

The Active Geo-Replication feature provides the following essential capabilities:
- Automatic asynchronous replication: After an online secondary database has been seeded, updates to the primary database are automatically copied to the online secondary database asynchronously. This means that transactions are committed on the primary database before they are copied to the online secondary database. However, after seeding, the online secondary database is transactionally consistent at any given point in time. NOTE: Asynchronous replication accommodates the latency that typifies the wide-area networks by which remote datacenters are connected.
- Multiple online secondary databases: Two or more online secondary databases increase redundancy and protection for the primary database and application. If multiple online secondary databases exist, the application remains protected even if one of the online secondary databases fails. If there is only one online secondary database, and it fails, the application is exposed to higher risk until a new online secondary database is created.
- Readable online secondary databases: An application can access an online secondary database for read-only operations using the same security principals used for accessing the primary database. Continuous copy operations on the online secondary database take precedence over application access. Also, if queries on the online secondary database cause prolonged table locking, transactions could eventually fail on the primary database.
- User-controlled termination for failover: Before you can fail over an application to an online secondary database, the continuous copy relationship with the primary database must be terminated. Termination of the continuous copy relationship requires an explicit action by the application or an administrative script, or can be done manually via the portal. After termination, the online secondary database becomes a stand-alone database. It becomes a read-write database unless the primary database was a read-only database. Two forms of termination of a continuous copy relationship are described later in this topic.

NOTE: Active Geo-Replication is only supported for databases in the Premium service tier. This applies to both the primary and the online secondary databases. The online secondary must be configured to have the same or a larger performance level than the primary. Changes to the performance level of the primary database are not automatically replicated to the secondaries. Any upgrades should be done on the secondary databases first, and finally on the primary. For more information about changing performance levels, see Changing Performance Levels. There are two main reasons the online secondary should be at least the same size as the primary: the secondary must have enough capacity to process the replicated transactions at the same speed as the primary, otherwise it could lag behind and eventually impact the availability of the primary; and if the secondary does not have the same capacity as the primary, a failover may degrade the application's performance and availability.

Continuous copy relationship concepts

Local data redundancy and operational recovery are standard features for Azure SQL Database. Each database possesses one primary and two local replica databases that reside in the same datacenter, providing high availability within that datacenter. This means that the Active Geo-Replication databases also have redundant replicas. Both the primary and online secondary databases have two secondary replicas. However, the primary replica for the secondary database is directly updated by the continuous copy mechanism and cannot accept any application-initiated updates. The figure in the source article illustrates how Active Geo-Replication extends database redundancy across two Azure regions. The region that hosts the primary database is known as the primary region. The region that hosts the online secondary database is known as the secondary region. In that figure, North Europe is the primary region and West Europe is the secondary region.
If the primary database becomes unavailable, terminating the continuous copy relationship for a given online secondary database makes the online secondary database a stand-alone database. The online secondary database inherits the read-only/read-write mode of the primary database, which is unchanged by the termination. For example, if the primary database is a read-only database, after termination the online secondary database becomes a read-only database. At this point, the application can fail over and continue using the online secondary database. To provide resiliency in the event of a catastrophic failure of the datacenter or a prolonged outage in the primary region, at least one online secondary database needs to reside in a different region.

Creating a continuous copy

You can only create a continuous copy of an existing database. Creating a continuous copy of an existing database is useful for adding geo-redundancy. A continuous copy can also be created to copy an existing database to a different Azure SQL Database server. Once created, the secondary database is populated with the data copied from the primary database. This process is known as seeding. After seeding is complete, each new transaction is replicated after it commits on the primary. For information about how to create a continuous copy of an existing database, see How to enable Geo-Replication.

Preventing the loss of critical data

Due to the high latency of wide area networks, continuous copy uses an asynchronous replication mechanism. This makes some data loss unavoidable if a failure occurs. However, some applications may require no data loss. To protect these critical updates, an application developer can call the sp_wait_for_database_copy_sync system procedure immediately after committing the transaction. Calling sp_wait_for_database_copy_sync blocks the calling thread until the last committed transaction has been replicated to the online secondary database. The procedure waits until all queued transactions have been acknowledged by the online secondary database. sp_wait_for_database_copy_sync is scoped to a specific continuous copy link. Any user with connection rights to the primary database can call this procedure. NOTE: The delay caused by an sp_wait_for_database_copy_sync procedure call might be significant. The delay depends on the length of the queue and on the available bandwidth. Avoid calling this procedure unless absolutely necessary.

Termination of a continuous copy relationship

The continuous copy relationship can be terminated at any time. Terminating a continuous copy relationship does not remove the secondary database. There are two methods of terminating a continuous copy relationship:
- Planned termination is useful for planned operations where data loss is unacceptable. A planned termination can only be performed on the primary database, after the online secondary database has been seeded. In a planned termination, all transactions committed on the primary database are replicated to the online secondary database first, and then the continuous copy relationship is terminated. This prevents loss of data on the secondary database.
- Unplanned (forced) termination is intended for responding to the loss of either the primary database or one of its online secondary databases. A forced termination can be performed on either the primary database or the secondary database. Every forced termination results in the irreversible loss of the replication relationship between the primary database and the associated online secondary database. A forced termination terminates the continuous copy relationship immediately; in-flight transactions are not replicated to the online secondary database. Therefore, a forced termination can result in an irreversible loss of any transactions that have not been replicated from the primary database.

NOTE: If the primary database has only one continuous copy relationship, after termination, updates to the primary database will no longer be protected. For more information about how to terminate a continuous copy relationship, see Recover an Azure SQL Database from an outage.
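To make this lifecycle concrete, here is a minimal T-SQL sketch (server and database names are hypothetical; the statements assume Premium-tier databases and follow the documented Azure SQL Database v12 geo-replication syntax):

-- On the primary server (in master): create a readable online secondary.
-- Seeding starts automatically.
ALTER DATABASE [MyDB]
    ADD SECONDARY ON SERVER [secondaryserver]
    WITH (ALLOW_CONNECTIONS = ALL);

-- In the primary user database, right after committing a critical transaction:
-- block until that transaction has been replicated to the secondary.
EXEC sys.sp_wait_for_database_copy_sync
    @target_server = N'secondaryserver',
    @target_database = N'MyDB';

-- Planned termination, on the primary server (in master):
-- remaining transactions are replicated first, then the link is dropped.
ALTER DATABASE [MyDB] REMOVE SECONDARY ON SERVER [secondaryserver];

-- Forced termination during an outage, run on the secondary server (in master):
-- breaks the link immediately; unreplicated transactions are lost.
ALTER DATABASE [MyDB] FORCE_FAILOVER_ALLOW_DATA_LOSS;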
1.1.2 Design a data storage architecture

SQL Server Database Files and Filegroups: https://msdn.microsoft.com/en-us/library/ms189563.aspx

Database files:
- Primary: The primary data file contains the startup information for the database and points to the other files in the database. User data and objects can be stored in this file or in secondary data files. Every database has one primary data file. The recommended file name extension for primary data files is .mdf.
- Secondary: Secondary data files are optional, are user-defined, and store user data. Secondary files can be used to spread data across multiple disks by putting each file on a different disk drive. Additionally, if a database exceeds the maximum size for a single Windows file, you can use secondary data files so the database can continue to grow. The recommended file name extension for secondary data files is .ndf.
- Transaction log: The transaction log files hold the log information that is used to recover the database. There must be at least one log file for each database. The recommended file name extension for transaction logs is .ldf.

Filegroups:
- Primary: The filegroup that contains the primary file. All system tables are allocated to the primary filegroup.
- User-defined: Any filegroup that is specifically created by the user when the user first creates or later modifies the database.
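As a short illustration of secondary files and user-defined filegroups (the database name, filegroup name, and path are hypothetical), this kind of layout is created with plain T-SQL:

-- Add a user-defined filegroup and a secondary (.ndf) data file on another disk.
ALTER DATABASE Sales ADD FILEGROUP HistoryFG;

ALTER DATABASE Sales
ADD FILE (
    NAME = SalesHistory1,
    FILENAME = 'F:\Data\SalesHistory1.ndf',  -- a different disk than the .mdf/.ldf
    SIZE = 10GB,
    FILEGROWTH = 1GB
) TO FILEGROUP HistoryFG;

-- New tables can then be placed on that filegroup explicitly:
CREATE TABLE dbo.SalesHistory (
    OrderId int NOT NULL,
    OrderDate date NOT NULL
) ON HistoryFG;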
SQL Server Data Files in Microsoft Azure
https://msdn.microsoft.com/en-US/library/dn385720.aspx

SQL Server Data Files in Microsoft Azure enables native support for SQL Server database files stored as Microsoft Azure blobs. It allows you to create a database in SQL Server, running on-premises or in a virtual machine in Microsoft Azure, with a dedicated storage location for your data in Microsoft Azure Blob Storage. This enhancement especially simplifies moving databases between machines by using detach and attach operations. In addition, it provides an alternative storage location for your database backup files by allowing you to restore from or to Microsoft Azure Storage. Therefore, it enables several hybrid solutions by providing benefits for data virtualization, data movement, security and availability, and easy, low-cost maintenance for high availability and elastic scaling. This topic introduces concepts and considerations that are central to storing SQL Server data files in the Microsoft Azure Storage service. For a practical hands-on experience with this feature, see Tutorial: Using the Microsoft Azure Blob storage service with SQL Server 2016 databases. The diagram in the source article demonstrates that this enhancement enables you to store SQL Server database files as Microsoft Azure blobs in Microsoft Azure Storage regardless of where your server resides.
Benefits of using SQL Server Data Files in Microsoft Azure:

- Easy and fast migration benefits: This feature simplifies the migration process by moving one database at a time between machines on-premises, as well as between on-premises and cloud environments, without any application changes. Therefore, it supports an incremental migration while keeping your existing on-premises infrastructure in place. In addition, having access to centralized data storage simplifies the application logic when an application needs to run in multiple locations in an on-premises environment. In some cases, you may need to rapidly set up computer centers in geographically dispersed locations that gather data from many different sources. By using this enhancement, instead of moving data from one location to another, you can store many databases as Microsoft Azure blobs, and then run Transact-SQL scripts to create databases on the local machines or virtual machines.
- Cost and limitless storage benefits: This feature enables you to have limitless off-site storage in Microsoft Azure while leveraging on-premises compute resources. When you use Microsoft Azure as a storage location, you can focus on the application logic without the overhead of hardware management. If you lose a computation node on-premises, you can set up a new one without any data movement.
- High availability and disaster recovery benefits: Using the SQL Server Data Files in Microsoft Azure feature can simplify high availability and disaster recovery solutions. For example, if a virtual machine in Microsoft Azure or an instance of SQL Server crashes, you can re-create your databases on a new machine by just re-establishing links to the Microsoft Azure blobs.
- Security benefits: This enhancement allows you to separate a compute instance from a storage instance. You can have a fully encrypted database, with decryption occurring only on the compute instance, not in the storage instance. In other words, you can encrypt all data in the public cloud using Transparent Data Encryption (TDE) certificates, which are physically separated from the data. The TDE keys can be stored in the master database, which is stored locally on your physically secure on-premises machine and backed up locally. You can use these local keys to encrypt the data that resides in Microsoft Azure Storage. If your cloud storage account credentials are stolen, your data still stays secure, as the TDE certificates always reside on-premises.
- Snapshot backup: This feature enables you to use Azure snapshots to provide nearly instantaneous backups and quicker restores for database files stored using the Azure Blob storage service. This capability enables you to simplify your backup and restore policies. For more information, see File-Snapshot Backups for Database Files in Azure.
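A minimal sketch of the setup, following the pattern from the linked article; the storage account, container, and SAS token are placeholders:

-- 1. Store a Shared Access Signature for the blob container as a credential.
--    The credential name must match the container URL.
CREATE CREDENTIAL [https://<storageaccount>.blob.core.windows.net/<container>]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
     SECRET = '<SAS token, without the leading question mark>';

-- 2. Create a database whose data and log files live as blobs in Azure Storage.
CREATE DATABASE testdb
ON (NAME = testdb_dat,
    FILENAME = 'https://<storageaccount>.blob.core.windows.net/<container>/TestData.mdf')
LOG ON (NAME = testdb_log,
    FILENAME = 'https://<storageaccount>.blob.core.windows.net/<container>/TestLog.ldf');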
More info in the article SQL Server Data Files in Microsoft Azure:
https://msdn.microsoft.com/en-US/library/dn385720.aspx
1.1.3 Design a security architecture

Channel 9, Ignite 2015 video: Overview and Roadmap for Microsoft SQL Server Security

- Encryption
  o Always Encrypted
  o TDE for SQL DB, TDE performance (Intel NIS HW acceleration)
  o Enhancements to crypto
  o CLE for SQL DB (Cell Level Encryption)
- Auditing
  o Enhancements to SQL Audit
  o Reporting and analysis (also with Power BI)
  o Audit outcome of transactions
- Secure app development
  o Row-level security
  o Dynamic Data Masking
Always Encrypted
https://msdn.microsoft.com/en-us/library/mt163865.aspx
https://channel9.msdn.com/Shows/Data-Exposed/SQL-Server-2016-Always-Encrypted

- Allows customers to securely store sensitive data outside of their trust boundary.
- Data remains protected from high-privileged, unauthorized users.
- Client driven: client-side encryption and decryption.

Always Encrypted typical scenarios:

Client and data on-premises
A customer has a client application and SQL Server both running on-premises, at their business location. The customer wants to hire an external vendor to administer SQL Server. In order to protect sensitive data stored in SQL Server, the customer uses Always Encrypted to ensure the separation of duties between database administrators and application administrators. The customer stores plaintext values of Always Encrypted keys in a trusted key store which the client application can access. SQL Server administrators have no access to the keys and, therefore, are unable to decrypt sensitive data stored in SQL Server.

Client on-premises with data in Azure
A customer has an on-premises client application at their business location. The application operates on sensitive data stored in a database hosted in Azure (SQL Database or SQL Server running in a virtual machine on Microsoft Azure). The customer uses Always Encrypted and stores the Always Encrypted keys in a trusted key store hosted on-premises, to ensure Microsoft cloud administrators have no access to sensitive data.

Client and data in Azure
A customer has a client application, hosted in Microsoft Azure (e.g. in a worker role or a web role), which operates on sensitive data also stored in Microsoft Azure. The customer uses Always Encrypted to reduce the security attack surface area (the data is always encrypted in the database and on the machine hosting the database).

Always Encrypted supports two types of encryption: randomized encryption and deterministic encryption.
- Deterministic encryption uses a method which always generates the same encrypted value for any given plain text value. Using deterministic encryption allows grouping, filtering by equality, and joining tables based on encrypted values, but it can also allow unauthorized users to guess information about encrypted values by examining patterns in the encrypted column. This weakness is increased when there is a small set of possible encrypted values, such as True/False, or North/South/East/West region. Deterministic encryption must use a column collation with a binary2 sort order for character columns.
- Randomized encryption uses a method that encrypts data in a less predictable manner. Randomized encryption is more secure, but prevents equality searches, grouping, indexing, and joining on encrypted columns.
Use deterministic encryption for columns that will be used as search or grouping parameters, for example a government ID number. Use randomized encryption for data such as confidential investigation comments, which are not grouped with other records and are not used to join tables.
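For illustration, a hedged T-SQL sketch of a table mixing both encryption types; it assumes a column encryption key (here named CEK_Auto1) has already been created, and the table and column names are hypothetical:

CREATE TABLE dbo.Patients (
    PatientId int IDENTITY(1,1) PRIMARY KEY,
    -- Deterministic: usable for equality filters, joins, and grouping;
    -- character columns require a BIN2 collation.
    SSN char(11) COLLATE Latin1_General_BIN2
        ENCRYPTED WITH (COLUMN_ENCRYPTION_KEY = CEK_Auto1,
                        ENCRYPTION_TYPE = DETERMINISTIC,
                        ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256') NOT NULL,
    -- Randomized: more secure, but no server-side equality operations.
    DiagnosisNotes nvarchar(400)
        ENCRYPTED WITH (COLUMN_ENCRYPTION_KEY = CEK_Auto1,
                        ENCRYPTION_TYPE = RANDOMIZED,
                        ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256') NULL
);

Encryption and decryption happen in the client driver, so the application connects with Column Encryption Setting=Enabled in its connection string.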
Row-Level Security (RLS)
https://msdn.microsoft.com/en-us/library/dn765131.aspx

Store data intended for many customers in a single database/table while at the same time restricting row-level read and write access based on users' execution context.

RLS concepts:
RLS supports two types of security predicates.
- Filter predicates silently filter the rows available to read operations (SELECT, UPDATE, and DELETE).
- Block predicates explicitly block write operations (AFTER INSERT, AFTER UPDATE, BEFORE UPDATE, BEFORE DELETE) that violate the predicate.
Example use cases:
- A hospital can create a security policy that allows nurses to view data rows for their own patients only.
- A bank can create a policy to restrict access to rows of financial data based on the employee's business division, or based on the employee's role within the company.
- A multi-tenant application can create a policy to enforce a logical separation of each tenant's data rows from every other tenant's rows. Efficiencies are achieved by storing data for many tenants in a single table. Of course, each tenant can see only its own data rows.
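A minimal RLS sketch along the lines of the linked article; the table, function, and user names are illustrative:

-- Predicate function: a row is visible when its SalesRep column matches the
-- current database user (a 'Manager' user sees everything).
CREATE FUNCTION dbo.fn_securitypredicate(@SalesRep sysname)
    RETURNS TABLE
    WITH SCHEMABINDING
AS
    RETURN SELECT 1 AS fn_securitypredicate_result
    WHERE @SalesRep = USER_NAME() OR USER_NAME() = N'Manager';
GO

-- Bind the predicate to the table: filter reads, and block violating inserts.
CREATE SECURITY POLICY dbo.SalesFilter
    ADD FILTER PREDICATE dbo.fn_securitypredicate(SalesRep) ON dbo.Sales,
    ADD BLOCK PREDICATE dbo.fn_securitypredicate(SalesRep) ON dbo.Sales AFTER INSERT
    WITH (STATE = ON);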
Transparent Data Encryption (new in Azure SQL Database v12)
TDE adds roughly 2–40% performance overhead depending on the load type (simple OLTP versus complex, heavy-duty queries/analysis); most of the time the overhead is at the lower end of that range.

Dynamic Data Masking
https://msdn.microsoft.com/en-us/library/mt130841.aspx
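A hedged T-SQL sketch of both features above (TDE and Dynamic Data Masking); the database, table, column, and user names are hypothetical:

-- Enable Transparent Data Encryption on an Azure SQL Database
-- (connect to the master database of the logical server).
ALTER DATABASE [MyDB] SET ENCRYPTION ON;

-- Check the encryption state (3 = encrypted).
SELECT database_id, encryption_state
FROM sys.dm_database_encryption_keys;

-- Dynamic Data Masking: mask column values for non-privileged users.
ALTER TABLE dbo.Customers
    ALTER COLUMN Email ADD MASKED WITH (FUNCTION = 'email()');
ALTER TABLE dbo.Customers
    ALTER COLUMN CreditCardNo ADD MASKED WITH (FUNCTION = 'partial(0,"XXXX-XXXX-XXXX-",4)');

-- Privileged users can be allowed to see the real values.
GRANT UNMASK TO ReportingUser;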
1.1.4 Design a data load strategy

SQL Server Customer Advisory Team: Loading data to SQL Azure the fast way
1.2 Implement SQL Server on Azure Virtual Machines (VMs)

Know the different types of VMs in order to translate on-premises server specs to Azure VM specs, including VMs with Premium Storage.
1.2.1 Provision SQL Server in an Azure VM

https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-sql-server-infrastructure-services
1.2.2 Configure firewall rules

For SQL Server, TCP port 1433 must be opened in the Windows Firewall.
1.2.3 Configure and optimize storage

White paper: Performance Guidance for SQL Server in Windows Azure Virtual Machines
https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-sql-server-performance-best-practices

Azure virtual machine disks and cache settings

Azure Virtual Machines provide three types of disks: operating system (OS) disk, temporary disk, and data disks. For a description of each disk type, see the section "Azure Infrastructure services fundamentals" in that article.

Operating system disk vs. data disk

When placing your data and log files, you should consider disk cache settings in addition to size limits. For a description of cache settings, see the section "Azure Infrastructure services fundamentals" in that article.
While “Read Write” cache (the default setting) for the operating system disk helps improve overall operating system performance, boot times, and read latency for the IO patterns the OS usually generates, we recommend that you do not use the OS disk for hosting system and user database files. Instead, use data disks. When the workload demands a high rate of random I/Os (such as a SQL Server OLTP workload) and throughput is important to you, the general guideline is to keep the cache set to the default value of “None” (disabled). Because Azure storage is capable of more IOPS than a direct attached storage disk, this setting causes the physical host's local disks to be bypassed, therefore providing the highest I/O rate.

Temporary disk

Unlike Azure disks (operating system and data disks), which are essentially VHDs stored as page blobs in Azure Storage, the temporary disk (labeled D:) is not persistent and is not implemented using Azure Storage. It is reserved by the operating system for the page file, and its performance is not guaranteed to be predictable. Any data stored on it may be lost after your virtual machine is restarted or resized. Hence, we do not recommend the D: drive for storing any user or system database files, including tempdb.

Data disks performance options and considerations

This section discusses the best practices and recommendations on data disk performance options, based on testing done by Microsoft. You should be familiar with how SQL Server I/O operations work in order to interpret the test results reported in this section. For more information, see Pages and Extents Architecture.

It is important to note that these results were achieved without SQL Server high availability and disaster recovery solutions enabled (such as AlwaysOn Availability Groups, database mirroring, or log shipping). We recommend that you deploy one of these features to maintain multiple redundant copies of your databases across at least two virtual machines in an availability set, in order to be covered by the Azure Cloud Services, Virtual Machines, and Virtual Network Service Level Agreement. Enabling any of these features affects performance, so you should consider incorporating one of them in your own performance testing to get more accurate results.

As a general rule, we recommend that you attach the maximum number of disks allowed by the VM size (such as 16 data disks for an A7 VM) for throughput-sensitive applications. While latency may not necessarily improve by adding more data disks when your workload is within the maximum IOPS limit, the additional IOPS and bandwidth from the additional attached disks can help you avoid reaching the single-disk 500 IOPS limit; reaching that limit triggers throttling events that can increase disk response times and disk latency.

Single data disk configuration

In our performance tests, we executed several SQL Server I/O measurements to understand data disk response characteristics with respect to the typical I/O patterns generated by SQL Server under different kinds of workloads. The results for a single disk configuration on an A7 VM instance are summarized here:

            Random I/O (8 KB pages)        Sequential I/O (64 KB extents)
            Reads        Writes            Reads        Writes
IOPS        500          500               500          300
Bandwidth   4 MB/s       4 MB/s            30 MB/s      20 MB/s
Note: Because Azure Infrastructure Services is a multi-tenant environment, performance results may vary. You should consider these results as an indication of what you can achieve, but not a guarantee. We suggest you repeat these tests and measurements based on your specific workload.

Multiple data disk configuration

If your workload exceeds or is close to the I/O performance numbers mentioned in the previous section, we recommend that you add multiple disks (depending on your virtual machine size) and stripe them into volumes. This configuration gives you the ability to create volumes with specific throughput and bandwidth, based on your data and log performance needs, by combining multiple data disks.

Adding multiple data disks to an Azure virtual machine

After you create a virtual machine in Azure, you can attach a data disk to it using either the Azure Management Portal or the Add-AzureDataDisk Azure PowerShell cmdlet. Both techniques allow you to select an existing data disk from a storage account, or create a new blank data disk. If you choose to create a new blank data disk in the Management Portal, you can choose the storage account that your virtual machine was created in, but not a different storage account. To place your existing data disk (.vhd file) into a specific storage account, you need to use the Azure PowerShell cmdlets. The following example demonstrates how to update a virtual machine using the Get-AzureVM and the Add-AzureDataDisk cmdlets. The Get-AzureVM cmdlet retrieves information on a specific virtual machine. The Add-AzureDataDisk cmdlet creates a new data disk with a specified size and label in a previously created storage account:

Get-AzureVM "CloudServiceName" -Name "VMName" |
    Add-AzureDataDisk -CreateNew -DiskSizeInGB 100 `
        -MediaLocation "https://<storageaccountname>.blob.core.windows.net/vmdisk/Disk1.vhd" `
        -DiskLabel "disk1" -LUN 1 |
    Update-AzureVM
To create a new storage account, use the New-AzureStorageAccount cmdlet as follows:

New-AzureStorageAccount -StorageAccountName "StorageAccountX" -Label "StorageAccountX" -Location "North Central US"
For more information about Azure PowerShell cmdlets, see Azure PowerShell on MSDN and Azure command line tools.

Disk striping options for Azure Virtual Machines

For Azure VMs running on Windows Server 2008 R2 and previous releases, the only striping technology available is striped volumes for dynamic disks. You can use this option to stripe multiple data disks into volumes that provide more throughput and bandwidth than a single disk can provide. Starting with Windows Server 2012, Storage Pools were introduced, and operating system software RAID capabilities are deprecated. Storage Pools enable you to virtualize storage by grouping industry-standard disks into “pools”, and then create virtual disks, called Storage Spaces, from the available capacity in the storage pools. You can then configure these virtual disks to provide striping capabilities across all disks in the pool, combining good performance characteristics. In addition, they enable you to add and remove disk space based on your needs.
During our tests, after adding a number of data disks (4, 8, and 16) as shown in the previous section, we created a new storage pool by using the following Windows PowerShell command:

New-StoragePool -FriendlyName StoragePool1 -StorageSubsystemFriendlyName "Storage Spaces*" -PhysicalDisks (Get-PhysicalDisk -CanPool $True)
Next, we created a virtual disk on top of the new storage pool and specified the resiliency setting and virtual disk size:

$disks = Get-StoragePool -FriendlyName StoragePool1 -IsPrimordial $false | Get-PhysicalDisk
New-VirtualDisk -StoragePoolFriendlyName StoragePool1 -FriendlyName VirtualDisk1 -ResiliencySettingName Simple -NumberOfColumns $disks.Count -UseMaximumSize -Interleave 256KB
Important note: For performance, it is very important that the -NumberOfColumns parameter is set to the number of disks utilized to create the underlying storage pool. Otherwise, IO requests cannot be evenly distributed across all data disks in the pool and you will get suboptimal performance. The -Interleave parameter enables you to specify the number of bytes written to each underlying data disk in a virtual disk. We recommend that you use 256 KB for all workloads.

Lastly, we created and formatted the volume to make it usable to the operating system and applications by using the following Windows PowerShell commands:

Get-VirtualDisk -FriendlyName VirtualDisk1 | Get-Disk |
    Initialize-Disk -Passthru |
    New-Partition -AssignDriveLetter -UseMaximumSize |
    Format-Volume -AllocationUnitSize 64KB
Once the volume is created, it is possible to dynamically increase the disk capacity by attaching new data disks. To achieve optimal capacity utilization, consider the number of columns your storage spaces have, and add disks in multiples of that number. See Windows Server Storage Spaces Frequently Asked Questions for more information.

Using Storage Pools instead of traditional Windows operating system striping in dynamic disks brings several advantages in terms of performance and manageability. We recommend that you use Storage Pools for disk striping in Azure Virtual Machines. During our internal testing, we implemented the following scenarios with different numbers of disks as well as different disk volume configurations. We tested the following scenarios with configurations of 4, 8, and 16 data disks respectively, and we observed increased IOPS for each data disk added, as expected:

- We arranged multiple data disks as simple volumes and leveraged the Database Files and Filegroups feature of SQL Server to stripe database files across multiple volumes.
- We used Windows Server Storage Pools to create larger volumes, each containing multiple data disks, and we placed database and log files inside these volumes.

It's important to note that using multiple data disks provides performance benefits but creates more management overhead. In addition, partial unavailability of one of the striped disks can result in unavailability of a database. Therefore, for such configurations, we recommend that you consider enhancing the availability of your databases using the high availability and disaster recovery capabilities of SQL Server, as described in High Availability and Disaster Recovery for SQL Server in Azure Virtual Machines.
The following tables summarize the results of tests that we performed using multiple data disk configurations at Microsoft.

Aggregated throughput and bandwidth across 4 data disks:

            Random I/O (8 KB pages)        Sequential I/O (64 KB extents)
            Reads        Writes            Reads        Writes
IOPS        2000         2000              1600         1200
Bandwidth   16 MB/s      16 MB/s           100 MB/s     75 MB/s
Aggregated throughput and bandwidth across 8 data disks:

            Random I/O (8 KB pages)        Sequential I/O (64 KB extents)
            Reads        Writes            Reads        Writes
IOPS        4000         4000              2400         2400
Bandwidth   30 MB/s      30 MB/s           150 MB/s     150 MB/s
Aggregated throughput and bandwidth across 16 data disks:

            Random I/O (8 KB pages)        Sequential I/O (64 KB extents)
            Reads        Writes            Reads        Writes
IOPS        8000         8000              2400         4000
Bandwidth   60 MB/s      60 MB/s           150 MB/s     250 MB/s
Note: Because Azure Infrastructure Services is a shared, multi-tenant environment, performance results may vary. You should consider these results as an indication of what you can achieve, but not a guarantee. We recommend that you repeat these tests and measurements based on your specific workload.

By using the newly introduced Intel-based A8 and A9 VM sizes, we repeated our IO performance tests and noticed a significant increase in bandwidth and throughput for larger sequential IO requests. If you use Intel-based A8 and A9 VM sizes, you can get a performance increase for 64 KB (and bigger) read and write operations. If your workload is IO intensive, these new VM sizes (A8 and A9) can help in achieving more linear scalability compared to smaller VM sizes, but always within the 500 IOPS per-disk boundaries. For more information, see About the A8 and A9 Compute Intensive Instances.

Based on our tests, we have made the following observations about the Azure Virtual Machine environment:
- Spreading your I/O workload across a number of data disks benefits smaller random operations (more common in OLTP scenarios), where IOPS and bandwidth scale in a nearly linear fashion.
- As the I/O block size increases, adding more data disks does not result in higher IOPS or bandwidth for read operations. This means that if your workload is read-intensive, with more analytical queries, adding more disks will not necessarily help.
- For write-intensive workloads, adding more data disks can increase performance in a nearly linear fashion. This means that you can benefit from placing the transaction log of each database on a separate data disk.
- For large sequential I/O block sizes (such as 64 KB or greater), writes generally perform better than reads.
- A8 and A9 VM sizes provide increased throughput for IO-sensitive workloads.
For SQL Server workloads, D- and DS-series VMs can also be very interesting, especially the DS series, where Premium Storage (SSD) is available for the data disks.

Placement of database files

Depending on how you configure your storage, you should place the data and log files for user and system databases accordingly to achieve your performance goals. This section provides guidance on how to place database files when using SQL Server in Azure Virtual Machines:
- Option 1: You can create a single striped volume using Windows Server Storage Spaces leveraging multiple data disks, and place all database and log files in this volume. In this scenario, all your database workloads share the aggregated I/O throughput and bandwidth provided by these multiple disks, and you simplify the placement of database files. Individual database workloads are load balanced across all available disks, and you do not need to worry about single-database spikes or workload distribution. (See the source white paper for a graphical representation of this configuration.)
- Option 2: You can create multiple striped volumes, each composed of the number of data disks required to achieve specific I/O performance, and carefully place user and system database files on these volumes accordingly. You may have one important production database with a high-priority, write-intensive workload, and you may want to maximize the database and log file throughput by segregating them on two separate 4-disk volumes (each volume providing around 2000 IOPS and 100 MB/sec). For example, use:
  o a 4-disk volume for hosting TempDB data and log files;
  o a 4-disk volume for hosting other minor databases.
  This option can give you precise file placement by optimizing available IO performance. (See the source white paper for a graphical representation of this configuration.)
You can still create single-disk volumes and leverage SQL Server files and filegroups placement for your databases. While this can still offer some benefits in terms of flexible storage layout organization, it introduces additional complexity and also limits single file (data or log) IO performance to what a single Azure data disk can provide, such as 500 IOPS and 60 MB/sec. Although Azure data disks behave differently than traditional rotating spindles (in which competing random and sequential operations on the same disks can impact performance), we still recommend that you keep data and log files in different paths to achieve dedicated IOPS and bandwidth for them.

To help understand your IO requirements and performance while running your SQL Server workloads on Azure Virtual Machines, you need to analyze the following three tools and combine the results carefully:
- SQL Server IO statistics: they reflect the database management system's view of the IO subsystem.
- Windows Server logical disk performance counters: they show how the operating system performs on IOs.
- Azure Storage Analytics: Azure hosts data disks' VHD files in Azure Storage. You can turn on logging and metrics for the storage account that hosts your data disks, and get useful information such as the number of successful and failed requests, timeouts, throttling, network, authorization, and other errors. You can configure and get data from these metrics on the Azure Portal, or via PowerShell, REST APIs, and the .NET Storage Client library.

By leveraging all this information, you can understand:
- whether IO-related stalls or wait types in SQL Server (manifesting as increased disk response times in OS performance counters) are related to throttling events happening in Azure Storage; and
- whether rebalancing your data and log files across different volumes (and underlying disks) can help maintain throughput and bandwidth within storage performance limits.
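For the first of those three tools, a starting-point query (a sketch, not taken from the white paper) that surfaces per-file latency from SQL Server's own IO statistics:

-- Per-database-file IO statistics, worst cumulative stalls first.
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.num_of_reads,
       vfs.num_of_writes,
       vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_latency_ms,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_latency_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
    ON mf.database_id = vfs.database_id
   AND mf.file_id = vfs.file_id
ORDER BY vfs.io_stall DESC;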
TempDB

As mentioned in the section on Azure virtual machine disks and cache settings, we recommend that you place tempDB on data disks instead of the temporary disk (D:). The following are the three primary reasons for this recommendation, based on our internal testing with SQL Server test workloads:

- Performance variance: In our testing, we noticed that you can get the same level of performance you get on D:, if not more IOPS, from a single data disk. However, the performance of the D: drive is not guaranteed to be as predictable as that of the operating system or data disks. This is because the size of the D: drive, and the performance you get from it, depend on the size of the virtual machine you use and on the underlying physical disks shared between all VMs hosted by the same server.
- Configuration upon VM downtime: If the virtual machine gets shut down (due to planned or unplanned reasons), then, in order for SQL Server to recreate tempDB on the D: drive, the service account under which the SQL Server service is started needs to have local administrator privileges. In addition, the common practice with on-premises SQL deployments is to keep database and log files (including tempDB) in a separate folder, in which case the folder needs to be created before SQL Server starts. For most customers, this extra re-configuration overhead is not worth the return.
- Performance bottleneck: If you place tempDB on the D: drive and your application workloads use tempDB heavily, this can cause a performance bottleneck, because the D: drive can introduce constraints in terms of IOPS throughput. Instead, place tempDB on data disks to gain more flexibility. For more information on configuration best practices for optimizing tempDB, see Compilation of SQL Server TempDB IO Best Practices.
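Relocating tempDB onto a data-disk volume is a one-time T-SQL change plus a service restart; the drive letter below is hypothetical:

-- Point the tempDB files at a data-disk volume. The change takes effect at the
-- next SQL Server service restart, which recreates the files at the new location.
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'F:\TempDB\tempdb.mdf');
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'F:\TempDB\templog.ldf');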
We strongly recommend that you perform your own workload testing before implementing a desired SQL Server file layout strategy.

Effects of warm-up on data disks
With Azure disks, we have observed a "warm-up effect" that can result in a reduced rate of throughput and bandwidth for a short period of time. In situations where a data disk is not accessed for a period of time (approximately 20 minutes), adaptive partitioning and load balancing mechanisms kick in. If the disk is accessed while these algorithms are active, you may notice some degradation in throughput and bandwidth for a short period of time (approximately 10 minutes), after which they return to their normal levels. This warm-up effect happens because of the adaptive partitioning and load balancing mechanism of Azure, which dynamically adjusts to workload changes in a multi-tenant storage environment. You may observe similar effects in other widely known cloud storage systems as well. For more information, see Azure Storage: A Highly Available Cloud Storage Service with Strong Consistency. This warm-up effect is unlikely to be noticed for systems that are in continuous use, but we recommend you consider it during performance testing or when accessing systems that have been inactive for a while.

Single vs. multiple storage accounts for data disks attached to a single VM
To simplify management and reduce potential consistency risks in case of failures, we recommend that you keep all the data disks attached to a single virtual machine in the same storage account. Storage accounts are implemented as a recovery unit in case of failures, so keeping all the disks in the same account makes recovery operations simple. There is no performance improvement from storing data disks attached to a single VM in multiple storage accounts. If you have multiple VMs, we recommend that you consider the storage account limits for throughput and bandwidth during capacity planning, and distribute VMs and their data disks across multiple storage accounts if the aggregated throughput or bandwidth is higher than what a single storage account can provide. For information on storage account limits, see Azure Storage Scalability and Performance Targets. For information on max IOPS per disk, see Virtual Machine and Cloud Service Sizes for Azure.

NTFS allocation unit size
NTFS volumes use a default cluster size of 4 KB. Based on our performance tests, we recommend changing the default cluster size to 64 KB during volume creation, for both single-disk and multiple-disk (storage spaces) volumes.

Data compression for I/O bound workloads
Some I/O intensive workloads can gain performance benefits through data compression. Compressed tables and indexes mean more data stored in fewer pages, and hence fewer pages read from disk, which in turn can improve the performance of workloads that are I/O intensive. For a data warehouse workload running on SQL Server in an Azure VM, we found significant improvement in query performance by using page compression on tables and indexes, as shown in Figure 1 below. A short example of enabling page compression follows.
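A minimal hedged sketch of enabling page compression (the schema and table name are hypothetical; sp_estimate_data_compression_savings can be run first to estimate the benefit):

-- Estimate savings before compressing (object names hypothetical).
EXEC sp_estimate_data_compression_savings
     @schema_name = 'dbo', @object_name = 'FactSales',
     @index_id = NULL, @partition_number = NULL, @data_compression = 'PAGE';
-- Rebuild the table and all of its indexes with page compression.
ALTER TABLE dbo.FactSales REBUILD WITH (DATA_COMPRESSION = PAGE);
ALTER INDEX ALL ON dbo.FactSales REBUILD WITH (DATA_COMPRESSION = PAGE);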
(Chart: query performance with data compression; Time (ms) for CPU Time and Elapsed Time, and Reads for Logical Reads and Physical Reads (+RA), comparing NONE vs. PAGE compression.)
Figure 1: Query Performance with Data Compression

Figure 1 compares the performance of one query with no compression (NONE) and with page compression (PAGE). As illustrated, the logical and physical reads are significantly reduced with page compression, and so is the elapsed time. As expected, CPU time of the query does go up with page compression, because SQL Server needs to decompress the data while returning results to the query. Your results will vary, depending upon your workload. For an OLTP workload, we observed significant improvements in throughput (as measured by business transactions per second) by using page compression on selected tables and indexes that were involved in the I/O intensive workload. Figure 2 compares the throughput and CPU usage for the OLTP workload with and without page compression.
(Chart: OLTP throughput (business transactions/sec) and CPU time (%), comparing NONE vs. PAGE compression.)
Figure 2: OLTP Throughput and CPU Usage with Data Compression

Note that you may see different results when you test your workloads in an Azure Virtual Machine environment, but we recommend that you test data compression techniques for I/O intensive workloads and then decide which tables and indexes to compress. For more information, see Data Compression: Strategy, Capacity Planning and Best Practices.

Restore performance – instant file initialization
For databases of any significant size, enabling instant file initialization can improve the performance of operations involving database files, such as creating or restoring a database, adding files to a database, extending the size of an existing file, autogrow, and so on. For more information, see How and Why to Enable Instant File Initialization. To take advantage of instant file initialization, you grant the SQL Server (MSSQLSERVER) service account the SE_MANAGE_VOLUME_NAME privilege by adding it to the Perform Volume Maintenance Tasks security policy. If you are using a SQL Server platform image for Azure, the default service account (NT Service\MSSQLSERVER) is not added to the Perform Volume Maintenance Tasks security policy; in other words, instant file initialization is not enabled in a SQL Server Azure platform image. After adding the SQL Server service account to the Perform Volume Maintenance Tasks security policy, restart the SQL Server service. The following figure illustrates observed test results for creating and restoring a 100 GB database with and without instant file initialization.
(Chart: impact of instant file initialization; time in minutes to create and to restore a 100 GB database, with and without instant file initialization.)
Figure 3: Performance Impact of Instant File Initialization

For more information, see Database File Initialization.

Other existing best practices
Many of the best practices for running SQL Server on-premises are still relevant in Azure Virtual Machines, including:
- Limit or disable autogrow on the database: Autogrow is considered to be merely a contingency for unexpected growth. Do not manage your data and log growth on a day-to-day basis with autogrow. If autogrow is used, pre-grow the file using the SIZE switch.
- Disable autoshrink on the database: Make sure autoshrink is disabled to avoid unnecessary overhead that can negatively affect performance. For more information about autogrow and autoshrink, see Considerations for the "autogrow" and "autoshrink" settings in SQL Server.
- Establish locked pages to reduce IO and paging activity: Lock pages in memory is a Windows policy that determines which accounts can use a process to keep memory allocations pinned in physical memory. It prevents the system from paging the data to virtual memory on disk. When the SQL Server service account is granted this user right, buffer pool memory cannot be paged out by Windows. For more information about enabling the Lock pages in memory user right, see How to: Enable the Lock Pages in Memory Option (Windows).
1.2.4 Migrate an on-premises database to Microsoft Azure
1.2.4.1 Migrating a SQL Server database to Azure SQL Database
https://azure.microsoft.com/en-us/documentation/articles/sql-database-cloud-migrate
Moving your on-premises database to Azure SQL Database varies in complexity based on your database and application design, and your tolerance for downtime. For compatible databases, migration to Azure SQL Database is a straightforward schema and data movement operation requiring few, if any, changes to the schema and little or no re-engineering of applications. Azure SQL Database V12 brings near-complete engine compatibility with SQL Server 2014 and SQL Server 2016. Most SQL Server 2016 Transact-SQL statements are fully supported in Microsoft Azure SQL Database. This includes the SQL Server data types, operators, and the string, arithmetic, logical, and cursor functions, and the other Transact-SQL elements that most applications depend upon. Partially supported or unsupported functions are usually related to differences in how SQL Database manages the database (such as file, high availability, and security features) or to special-purpose features such as Service Broker. Because SQL Database isolates many features from dependency on the master database, many server-level activities are inappropriate and unsupported. Features deprecated in SQL Server are generally not supported in SQL Database. Databases and applications that rely on partially supported or unsupported functions will need some re-engineering before they can be migrated. The workflow for migrating a SQL Server database to Azure SQL Database is:
1. Determine if your database is compatible
2. If not compatible, fix database compatibility issues
3. Migrate a compatible database
1.2.5 Configure and optimize VM sizes by workload
White Paper: Performance Guidance for SQL Server in Windows Azure Virtual Machines
https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-sql-server-performance-best-practices
1.3 Design a SQL Database solution https://azure.microsoft.com/en-us/services/sql-database https://azure.microsoft.com/en-us/documentation/services/sql-database
1.3.1 Design a solution architecture
1.3.2 Design Geo/DR topology
https://azure.microsoft.com/en-us/updates/general-availability-azure-sql-database-geo-replication-enhancements
Azure SQL Database geo-replication enhancements (General availability Nov 10, 2015) Azure SQL Database geo-replication includes a set of new features that improve programming and management capabilities for business continuity and disaster recovery scenarios. These enhancements are available for V12 databases, and they include:
- T-SQL syntax for geo-replication
- Failover and failback
- Ability to synchronize security credentials and firewall rules
- Full support of geo-replication for databases in elastic pools
- Configurable performance levels of the secondary database
- Azure Resource Manager API and support of role-based security
- Synchronous PowerShell cmdlets
For more details, please refer to Spotlight on new capabilities of SQL Database geo-replication.
1.3.3 Design a security architecture Webinar December, 30 2015 09:30:00 GMT (UTC): https://azure.microsoft.com/en-us/community/events/azure-sql-db-security Azure SQL Database security guidelines and limitations Connecting to SQL Database By Using Azure Active Directory Authentication
1.3.4 Design a data load strategy
Migrating a SQL Server database to Azure SQL Database
To test for SQL Database compatibility issues before you start the migration process, use one of the following methods:
Use SqlPackage: SqlPackage is a command-prompt utility that will test for compatibility issues and, if any are found, generate a report of the detected issues.
Use SQL Server Management Studio: The Export Data-tier Application wizard in SQL Server Management Studio will display detected errors on the screen. If compatibility issues are detected, you must fix them before proceeding with the migration.
Use SQL Azure Migration Wizard
Use SQL Server Data Tools for Visual Studio
Use SQL Server Management Studio
To migrate a compatible SQL Server database, Microsoft provides several migration methods for various scenarios. The method you choose depends upon your tolerance for downtime, the size and complexity of your SQL Server database, and your connectivity to the Microsoft Azure cloud.
- SSMS Migration Wizard
- Export to BACPAC File
- Import from BACPAC File
- Transactional Replication
To choose your migration method, the first question to ask is whether you can afford to take the database out of production during the migration. Migrating a database while active transactions are occurring can result in database inconsistencies and possible database corruption. There are many methods to quiesce a database, from disabling client connectivity to creating a database snapshot. To migrate with minimal downtime, use SQL Server transactional replication if your database meets the requirements for transactional replication. If you can afford some downtime, or you are performing a test migration of a production database for later migration, consider one of the following three methods:
SSMS Migration Wizard: For small to medium databases, migrating a compatible SQL Server 2005 or later database is as simple as running the Deploy Database to Microsoft Azure Database Wizard in SQL Server Management Studio.
Export to BACPAC File and then Import from BACPAC File: If you have connectivity challenges (no connectivity, low bandwidth, or timeout issues) and for medium to large databases, use a BACPAC file. With this method, you export the SQL Server schema and data to a BACPAC file and then import the BACPAC file into SQL Database using the Export Data Tier Application Wizard in SQL Server Management Studio or the SqlPackage command-prompt utility.
Use BACPAC and BCP together: Use a BACPAC file and BCP for much larger databases to achieve greater parallelization for increased performance, albeit with greater complexity. With this method, migrate the schema and the data separately:
Export the schema only to a BACPAC file.
Import the schema only from the BACPAC File into SQL Database.
Use BCP to extract the data into flat files and then parallel load these files into Azure SQL Database.
1.3.5 Determine the appropriate service tier
https://azure.microsoft.com/en-us/documentation/articles/sql-database-service-tiers
Understand DTUs and know the values in the tables for single databases and elastic database pools:
Azure SQL Database provides multiple service tiers to handle different types of workloads. You can create a single database with defined characteristics and pricing. Or you can manage multiple databases by creating an elastic database pool. In both cases, the tiers include Basic, Standard, and Premium. But the database options in these tiers vary based on whether you are creating an individual database or a database within an elastic database pool. This article provides an overview of service tiers in both contexts.

Service tiers and database options
Basic, Standard, and Premium service tiers all have an uptime SLA of 99.99% and offer predictable performance, flexible business continuity options, security features, and hourly billing. The following list gives examples of the tiers best suited for different application workloads.
- Basic: Best suited for a small size database, supporting typically one single active operation at a given time. Examples include databases used for development or testing, or small-scale infrequently used applications.
- Standard: The go-to option for most cloud applications, supporting multiple concurrent queries. Examples include workgroup or web applications.
- Premium: Designed for high transactional volume, supporting a large number of concurrent users and requiring the highest level of business continuity capabilities. Examples are databases supporting mission-critical applications.
NOTE: Web and Business editions are being retired. Find out how to upgrade Web and Business editions. Please read the Sunset FAQ if you plan to continue using Web and Business editions.

Single database service tiers and performance levels
For single databases, there are multiple performance levels within each service tier, so you have the flexibility to choose the level that best meets your workload's demands. If you need to scale up or down, you can easily change the tier of your database in the Azure Classic Portal, with zero downtime for your application. See Changing Database Service Tiers and Performance Levels for details. Performance characteristics listed here apply to databases created using SQL Database V12. In situations where the underlying hardware in Azure hosts multiple SQL databases, your database still gets a guaranteed set of resources, and the expected performance characteristics of your individual database are not affected.
For a better understanding of DTUs, see the DTU section in this topic.
NOTE: For a detailed explanation of all other rows in this service tiers table, see Service tier capabilities and limits.

Elastic database pool service tiers and performance in eDTUs
In addition to creating and scaling a single database, you also have the option of managing multiple databases within an elastic database pool. All of the databases in an elastic database pool share a common set of resources. The performance characteristics are measured by elastic Database Transaction Units (eDTUs). As with single databases, elastic database pools come in three service tiers: Basic, Standard, and Premium. For elastic databases these three service tiers still define the overall performance limits and several features. Elastic database pools allow these databases to share and consume DTU resources without needing to assign a specific performance level to each database in the pool. For example, a single database in a Standard pool can go from using 0 eDTUs to the maximum database eDTU (either 100 eDTUs defined by the service tier or a custom number that you configure). This allows multiple databases with varying workloads to efficiently use the eDTU resources available to the entire pool. The following table describes the characteristics of the elastic database pool service tiers.
Each database within a pool also adheres to the single-database characteristics for that tier. For example, the Basic pool has a limit for max sessions per pool of 2400 to 28800, but an individual database within that pool has a database limit of 300 sessions (the limit for a single Basic database as specified in the previous section).

Understanding DTUs
The Database Transaction Unit (DTU) is the unit of measure in SQL Database that represents the relative power of databases based on a real-world measure: the database transaction. We took a set of operations that are typical for an online transaction processing (OLTP) request, and then measured how many transactions could be completed per second under fully loaded conditions (that's the short version; you can read the gory details in the Benchmark overview). A Basic database has 5 DTUs, which means it can complete 5 transactions per second, while a Premium P11 database has 1750 DTUs.
DTU vs. eDTU
The DTU for single databases translates directly to the eDTU for elastic databases. For example, a database in a Basic elastic database pool offers up to 5 eDTUs. That's the same performance as a
single Basic database. The difference is that the elastic database won’t consume any eDTUs from the pool until it has to.
A simple example helps. Take a Basic elastic database pool with 1000 DTUs and drop 800 databases in it. As long as only 200 of the 800 databases are being used at any point in time (5 DTU x 200 = 1000), you won't hit the capacity of the pool, and database performance won't degrade. This example is simplified for clarity. The real math is a bit more involved. The portal does the math for you, and makes a recommendation based on historical database usage. See Price and performance considerations for an elastic database pool to learn how the recommendations work, or to do the math yourself.

Monitoring database performance
Monitoring the performance of a SQL database starts with monitoring the resource utilization relative to the level of database performance you choose. This relevant data is exposed in the following ways:
1. The Microsoft Azure Classic Portal.
2. Dynamic Management Views in the user database, and in the master database of the server that contains the user database.
In the Azure Portal, you can monitor a single database's utilization by selecting your database and clicking the Monitoring chart. This brings up a Metric window that you can change by clicking the Edit chart button. Add the following metrics:
CPU Percentage
DTU Percentage
Data IO Percentage
Storage Percentage

Once you've added these metrics, you can continue to view them in the Monitoring chart with more details on the Metric window. All four metrics show the average utilization percentage relative to the DTU of your database.
You can also configure alerts on the performance metrics. Click the Add alert button in the Metric window and follow the wizard to configure your alert. You have the option to alert if a metric exceeds a certain threshold or if it falls below a certain threshold. For example, if you expect the workload on your database to grow, you can choose to configure an email alert whenever your database reaches 80% on any of the performance metrics. You can use this as an early warning to figure out when you might have to switch to the next higher performance level.

The performance metrics can also help you determine if you are able to downgrade to a lower performance level. Assume you are using a Standard S2 database and all performance metrics show that the database on average does not use more than 10% at any given time. It is likely that the database will work well in Standard S1. However, be aware of workloads that spike or fluctuate before making the decision to move to a lower performance level.

The same metrics that are exposed in the portal are also available through system views: sys.resource_stats in the logical master database of your server, and sys.dm_db_resource_stats in the user database (sys.dm_db_resource_stats is created in each Basic, Standard, and Premium user database; Web and Business edition databases return an empty result set). Use sys.resource_stats if you need to monitor less granular data across a longer period of time. Use sys.dm_db_resource_stats if you need to monitor more granular data within a smaller time frame. For more information, see Azure SQL Database Performance Guidance. For elastic database pools, you can monitor individual databases in the pool with the techniques described in this section, but you can also monitor the pool as a whole. For information, see Monitor and manage an elastic database pool.
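As a hedged illustration of the DMV route, the query below reads the most recent 15-second utilization snapshots from sys.dm_db_resource_stats in the user database (the TOP value is arbitrary):

-- Recent resource utilization, one row per 15-second interval (roughly the last hour is retained).
SELECT TOP (20) end_time,
       avg_cpu_percent,
       avg_data_io_percent,
       avg_log_write_percent,
       avg_memory_usage_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;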
https://msdn.microsoft.com/en-us/library/azure/dn741340.aspx
Use Microsoft Azure SQL Database service tiers (editions) to dial in cloud database performance and capabilities to suit your application.

Understand the Capabilities of Service Tiers (Editions)
Basic, Standard, and Premium service tiers offer predictable performance, flexible business continuity options, and streamlined billing. In addition, with multiple performance levels, you have the flexibility to choose the level that best meets your workload demands. Should your workload increase or decrease, you can easily change the performance characteristics of a database in the Microsoft Azure Management Portal: select your database, click Scale, and then choose a new service tier. Moving up a service tier is also a quick way to fix performance problems. For more information, see Changing Database Service Tiers and Performance Levels. The features available with each service tier fall into the following categories:
Performance and Scalability: Basic, Standard, and Premium service tiers have one or more performance levels that offer predictable performance. Performance levels are expressed in database throughput units (DTUs), which provide a quick way to compare the relative performance of a database. For more detailed information about performance levels and DTUs, see Azure SQL Database Service Tiers and Performance Levels. In addition to the performance level, for all database service tiers, you also pick a maximum database size supported by the service tier. For more information on the supported database sizes, see CREATE DATABASE.
Business Continuity: These features help you recover your database from human and application errors, or datacenter failures. Many built-in features, such as Geo-Restore, are available with Basic, Standard, and Premium service tiers. For more information, see Azure SQL Database Business Continuity.
Auditing: With Basic, Standard, and Premium service tiers, you can track logs and events that occur in a database. For more information, see Azure SQL Database Performance Guidance.
Service Tier / Common App Pattern / Transactional Perf. Objective:
- Basic: Small databases with a single operation at a given point in time. Transactional perf. objective: reliability per hour.
- Standard: Workgroup and cloud applications with multiple concurrent transactions. Transactional perf. objective: reliability per minute.
- Premium: Mission-critical, high transactional volume with many concurrent users. Transactional perf. objective: reliability per second.
- Web: Web apps, workgroup, dept. apps, and other lightweight database workloads. Transactional perf. objective: N/A.
- Business: Lightweight database workloads that require larger sizes than supported with Web. Transactional perf. objective: N/A.
Web and Business service tiers (editions) have been retired since September 2015.
Service Tier / Database Transaction Units (DTU):
- Basic: 5
- S0: 10
- S1: 20
- S2: 50
- S3: 100
- P1: 125
- P2: 250
- P4: 500
- P3/P6: 1000 (P6 replaced the earlier P3 level)
- P11: 1750
Adjust performance and scale without downtime. SQL Database is available in Basic, Standard, and Premium service tiers. Each service tier offers different levels of performance and capabilities to support lightweight to heavyweight database workloads. You can build your first app on a small database for a few bucks a month, then change the service tier manually or programmatically at any time as your app goes viral worldwide, without downtime to your app or your customers.

For many businesses and apps, being able to create databases and dial single database performance up or down on demand is enough, especially if usage patterns are relatively predictable. But if you have unpredictable usage patterns, it can be hard to manage costs and your business model. Elastic database pools in SQL Database solve this problem. The concept is simple: you allocate performance to a pool, and pay for the collective performance of the pool rather than for single database performance. You don't need to dial database performance up or down. The databases in the pool, called elastic databases, automatically scale up and down to meet demand. Elastic databases consume but don't exceed the limits of the pool, so your cost remains predictable even if database usage isn't. What's more, you can add and remove databases in the pool, scaling your app from a handful of databases to thousands, all within a budget that you control.

Either way you go, single or elastic, you're not locked in. You can blend single databases with elastic database pools, and change the service tiers of single databases and pools to create innovative designs. Moreover, with the power and reach of Azure, you can mix and match Azure services with SQL Database to meet your unique modern app design needs, drive cost and resource efficiencies, and unlock new business opportunities. But how can you compare the relative performance of databases and database pools? How do you know the right click-stop when you dial up and down? The answer is the database transaction unit (DTU) for single databases and the elastic DTU (eDTU) for elastic databases and database pools.

https://channel9.msdn.com/Series/Windows-Azure-Storage-SQL-Database-Tutorials/Scott-Klein-Video-02
In this episode, Scott is joined by Tobias Ternstrom, Principal Program Manager Lead for performance in Azure SQL Database, as he breaks down what the new Database Throughput Unit is and how you can use it to understand what kind of horsepower you can expect from the new service tiers. The DTU is a critical part of providing a more predictable performance experience for you. A DTU represents the power of the database engine as a blended measure of CPU, memory, and read and write rates. This measure helps you assess the relative power of the six SQL Database performance levels (Basic, S1, S2, P1, P2, and P3).
1.4 Implement SQL Database
1.4.1 Provision SQL Database
Create a SQL Database with PowerShell:
New-AzureRmSqlDatabase -ResourceGroupName "resourcegroup01" -ServerName "server01" -DatabaseName "database01"
Create an elastic SQL Database with PowerShell:
New-AzureRmSqlDatabase -ResourceGroupName "resourcegroup01" -ServerName "server01" -DatabaseName "database01" -ElasticPoolName "elasticpool01"
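Provisioning can also be done in T-SQL; a minimal hedged sketch (run in the master database of the logical server; the database name and service objective are hypothetical):

-- Create a Standard S0 database with a 250 GB size cap.
CREATE DATABASE database02
( EDITION = 'Standard', SERVICE_OBJECTIVE = 'S0', MAXSIZE = 250 GB );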
1.4.2 Configure firewall rules How to configure an Azure SQL database firewall
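Firewall rules can also be managed in T-SQL; a hedged sketch (the rule names and IP ranges are illustrative), with the server-level rule created in master and the database-level rule in the user database:

-- Server-level rule (run in master): allow a range of client addresses.
EXEC sp_set_firewall_rule @name = N'OfficeNetwork',
     @start_ip_address = '203.0.113.1', @end_ip_address = '203.0.113.50';
-- Database-level rule (run in the user database): allow a single address.
EXEC sp_set_database_firewall_rule @name = N'AppServer',
     @start_ip_address = '203.0.113.10', @end_ip_address = '203.0.113.10';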
1.4.3 Configure active geo-replication Active Geo-Replication for Azure SQL Database
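The geo-replication enhancements include T-SQL control of secondaries; a minimal hedged sketch (database and server names are hypothetical):

-- In master on the primary server: create a readable secondary on a partner server.
ALTER DATABASE database01
    ADD SECONDARY ON SERVER server02
    WITH (ALLOW_CONNECTIONS = ALL);
-- In master on the secondary server: initiate a planned failover.
ALTER DATABASE database01 FAILOVER;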
1.4.4 Migrate an on-premises database to SQL Database Migrating a SQL Server database to Azure SQL Database
1.4.5 Configure for scale and performance Channel 9 video Azure SQL Database - In-Memory Technologies, “Increase performance without increasing your service tier”. Learn about in-memory technologies for Azure SQL Database. Improve OLTP and analytics performance without changing your database service tier. For more information, see http://aka.ms/sqldbinmem.
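On the scale side, changing a database's performance level is itself a one-line T-SQL operation; a hedged sketch (names hypothetical):

-- Move the database to a different performance level without downtime.
ALTER DATABASE database01 MODIFY (EDITION = 'Standard', SERVICE_OBJECTIVE = 'S2');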
1.5 Design and implement data warehousing on Azure https://azure.microsoft.com/en-us/documentation/articles/sql-data-warehouse-overview-what-is https://azure.microsoft.com/en-us/services/sql-data-warehouse https://azure.microsoft.com/en-us/documentation/services/sql-data-warehouse Channel 9 Ignite 2015 video: Microsoft Azure SQL Data Warehouse Overview Channel 9 Ignite 2015 video: Azure SQL Data Warehouse: Deep Dive
1.5.1 Design a data warehousing solution on Azure SQL Data Warehouse documentation: https://azure.microsoft.com/en-us/documentation/services/sql-data-warehouse
1.5.2 Design a data load strategy and topology
Start reading the "SQL Data Warehouse, Load" documentation at "Load data into SQL Data Warehouse". Below is a short selection from the article to get an idea of the contents, plus some links to videos. SQL Data Warehouse presents numerous options for loading data, including:
- PolyBase
- Azure Data Factory (Loading Azure SQL Data Warehouse with Azure Data Factory, 3m40s)
- BCP command-line utility (Channel 9 video: Loading data into Azure SQL Data Warehouse with BCP, 3 minutes)
- SQL Server Integration Services (SSIS)
- 3rd party data loading tools
While all of the above methods can be used with SQL Data Warehouse, PolyBase's ability to transparently parallelize loads from Azure Blob Storage will make it the fastest tool for loading data. Check out the article “Load data into SQL Data Warehouse” to learn more about how to load with PolyBase and get some guidance on initial data loading.
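To make the PolyBase flow concrete, here is a hedged sketch of an external-table load from blob storage (all object names, the container URL, the placeholder secret, and the column list are hypothetical; see the linked article for the authoritative steps):

-- One-time setup: master key, credential, data source, and file format.
CREATE MASTER KEY;
CREATE DATABASE SCOPED CREDENTIAL BlobCred
WITH IDENTITY = 'user', SECRET = '<storage-account-key>';
CREATE EXTERNAL DATA SOURCE AzureBlob
WITH (TYPE = HADOOP,
      LOCATION = 'wasbs://container@account.blob.core.windows.net',
      CREDENTIAL = BlobCred);
CREATE EXTERNAL FILE FORMAT CsvFormat
WITH (FORMAT_TYPE = DELIMITEDTEXT, FORMAT_OPTIONS (FIELD_TERMINATOR = ','));
-- External table over the CSV files, then a parallel CTAS load into the warehouse.
CREATE EXTERNAL TABLE ext_Sales (SaleId INT, Amount MONEY, SaleDate DATE)
WITH (LOCATION = '/sales/', DATA_SOURCE = AzureBlob, FILE_FORMAT = CsvFormat);
CREATE TABLE dbo.Sales
WITH (DISTRIBUTION = HASH(SaleId), CLUSTERED COLUMNSTORE INDEX)
AS SELECT * FROM ext_Sales;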
1.5.3 Configure SQL Data Warehouse
https://azure.microsoft.com/en-us/documentation/articles/sql-data-warehouse-get-started-provision
New-AzureRMSqlDatabase -RequestedServiceObjectiveName "" -DatabaseName "" -ServerName "" -ResourceGroupName "" -Edition "DataWarehouse"
1.5.4 Migrate an on-premises database to SQL Data Warehouse Read the documentation “Migrate your solution to SQL Data Warehouse”.
2 Manage database management systems (DBMS) security (25–30%)
2.1 Design and implement SQL Server Database security
2.1.1 Configure firewalls
2.1.2 Manage logins, users, and roles
Managing databases and logins in Azure SQL Database
2.1.3 Assign permissions
GRANT (Transact-SQL)
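Hedged examples of assigning permissions with GRANT (the principal, schema, and object names are hypothetical):

-- Object-level and schema-level grants to a user and a role.
GRANT SELECT, INSERT ON OBJECT::dbo.Orders TO app_user;
GRANT EXECUTE ON SCHEMA::Sales TO app_role;
-- Remove a previously granted permission.
REVOKE INSERT ON OBJECT::dbo.Orders FROM app_user;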
2.1.4 Configure auditing
SQL Server Audit (Database Engine)
https://msdn.microsoft.com/en-us/library/cc280386.aspx
Audits can have the following categories of actions:
- Server-level. These actions include server operations, such as management changes and logon and logoff operations.
- Database-level. These actions encompass data manipulation language (DML) and data definition language (DDL) operations.
- Audit-level. These actions include actions in the auditing process.
Server-Level Audit Action Groups:
The following table describes the server-level audit action groups and provides the equivalent SQL Server Event Class where applicable.
Action group name
Description
APPLICATION_ROLE_CHANGE_PASSWORD_GROUP
This event is raised whenever a password is changed for an application role. Equivalent to the Audit App Role Change Password Event Class.
AUDIT_CHANGE_GROUP
This event is raised whenever any audit is created, modified or deleted. This event is raised whenever any audit specification is created, modified, or deleted. Any change to an audit is audited in that audit. Equivalent to the Audit Change Audit Event Class.
BACKUP_RESTORE_GROUP
This event is raised whenever a backup or restore command is issued. Equivalent to the Audit Backup/Restore Event Class.
BROKER_LOGIN_GROUP
This event is raised to report audit messages related to Service Broker transport security. Equivalent to the Audit Broker Login Event Class.
DATABASE_CHANGE_GROUP
This event is raised when a database is created, altered, or dropped. This event is raised whenever any database is created, altered or dropped. Equivalent to the Audit Database Management Event Class.
DATABASE_LOGOUT_GROUP
This event is raised when a contained database user logs out of a database. Equivalent to the Audit Database Logout Event Class.
DATABASE_MIRRORING_LOGIN_GROUP
This event is raised to report audit messages related to database mirroring transport security. Equivalent to the Audit Database Mirroring Login Event Class.
DATABASE_OBJECT_ACCESS_GROUP
This event is raised whenever database objects such as message type, assembly, contract are accessed. This event is raised for any access to any database. Note This could potentially lead to large audit records. Equivalent to the Audit Database Object Access Event Class.
DATABASE_OBJECT_CHANGE_GROUP
This event is raised when a CREATE, ALTER, or DROP statement is executed on database objects, such as schemas. This event is raised whenever any database object is created, altered or dropped. Note This could lead to very large quantities of audit records. Equivalent to the Audit Database Object Management Event Class.
DATABASE_OBJECT_OWNERSHIP_CHANGE_GROUP
This event is raised when there is a change of owner for objects within the database scope. This event is raised for any object ownership change in any database on the server. Equivalent to the Audit Database Object Take Ownership Event Class.
DATABASE_OBJECT_PERMISSION_CHANGE_GROUP
This event is raised when a GRANT, REVOKE, or DENY has been issued for database objects, such as assemblies and schemas. This event is raised for any object permission change for any database on the server. Equivalent to the Audit Database Object GDR Event Class.
DATABASE_OPERATION_GROUP
This event is raised when operations in a database, such as checkpoint or subscribe query notification, occur. This event is raised on any database operation on any database. Equivalent to the Audit Database Operation Event Class.
DATABASE_OWNERSHIP_CHANGE_GROUP
This event is raised when you use the ALTER AUTHORIZATION statement to change the owner of a database, and the permissions that are required to do that are checked. This event is raised for any database ownership change on any database on the server. Equivalent to the Audit Change Database Owner Event Class.
DATABASE_PERMISSION_CHANGE_GROUP
This event is raised whenever a GRANT, REVOKE, or DENY is issued for a statement permission by any principal in SQL Server (This applies to database-only events, such as granting permissions on a database). This event is raised for any database permission change (GDR) for any database in the server. Equivalent to the Audit Database Scope GDR Event Class.
DATABASE_PRINCIPAL_CHANGE_GROUP
This event is raised when principals, such as users, are created, altered, or dropped from a database. Equivalent to the Audit Database Principal Management Event Class. (Also equivalent to the Audit Add DB User Event Class, which occurs on the deprecated sp_grantdbaccess, sp_revokedbaccess, sp_adduser, and sp_dropuser stored procedures.) This event is raised whenever a database role is added or removed by using the sp_addrole and sp_droprole stored procedures. This event is raised whenever any database principals are created, altered, or dropped from any database. Equivalent to the Audit Add Role Event Class.
DATABASE_PRINCIPAL_IMPERSONATION_GROUP
This event is raised when there is an impersonation operation in the database scope, such as EXECUTE AS or SETUSER. This event is raised for impersonations done in any database. Equivalent to the Audit Database Principal Impersonation Event Class.
DATABASE_ROLE_MEMBER_CHANGE_GROUP
This event is raised whenever a login is added to or removed from a database role. This event class is raised for the sp_addrolemember, sp_changegroup, and sp_droprolemember stored procedures. This event is raised on any Database role member change in any database. Equivalent to the Audit Add Member to DB Role Event Class.
DBCC_GROUP
This event is raised whenever a principal issues any DBCC command. Equivalent to the Audit DBCC Event Class.
FAILED_DATABASE_AUTHENTICATION_GROUP
Indicates that a principal tried to log on to a contained database and failed. Events in this class are raised by new connections or by connections that are reused from a connection pool. Equivalent to the Audit Login Failed Event Class.
FAILED_LOGIN_GROUP
Indicates that a principal tried to log on to SQL Server and failed. Events in this class are raised by new connections or by connections that are reused from a connection pool. Equivalent to the Audit Login Failed Event Class.
FULLTEXT_GROUP
Indicates that a fulltext event occurred. Equivalent to the Audit Fulltext Event Class.
LOGIN_CHANGE_PASSWORD_GROUP
This event is raised whenever a login password is changed by way of ALTER LOGIN statement or sp_password stored procedure. Equivalent to the Audit Login Change Password Event Class.
LOGOUT_GROUP
Indicates that a principal has logged out of SQL Server. Events in this class are raised by new connections or by connections that are reused from a connection pool. Equivalent to the Audit Logout Event Class.
SCHEMA_OBJECT_ACCESS_GROUP
This event is raised whenever an object permission has been used in the schema. Equivalent to the Audit Schema Object Access Event Class.
SCHEMA_OBJECT_CHANGE_GROUP
This event is raised when a CREATE, ALTER, or DROP operation is performed on a schema. Equivalent to the Audit Schema Object Management Event Class. This event is raised on schema objects. Equivalent to the Audit Object Derived Permission Event Class. This event is raised whenever any schema of any database changes. Equivalent to the Audit Statement Permission Event Class.
SCHEMA_OBJECT_OWNERSHIP_CHANGE_GROUP
This event is raised when the permission to change the owner of a schema object (such as a table, procedure, or function) is checked. This occurs when the ALTER AUTHORIZATION statement is used to assign an owner to an object. This event is raised for any schema ownership change for any database on the server. Equivalent to the Audit Schema Object Take Ownership Event Class.
SCHEMA_OBJECT_PERMISSION_CHANGE_GROUP
This event is raised whenever a grant, deny, revoke is performed against a schema object. Equivalent to the Audit Schema Object GDR Event Class.
SERVER_OBJECT_CHANGE_GROUP
This event is raised for CREATE, ALTER, or DROP operations on server objects. Equivalent to the Audit Server Object Management Event Class.
SERVER_OBJECT_OWNERSHIP_CHANGE_GROUP
This event is raised when the owner is changed for objects in server scope. Equivalent to the Audit Server Object Take Ownership Event Class.
SERVER_OBJECT_PERMISSION_CHANGE_GROUP
This event is raised whenever a GRANT, REVOKE, or DENY is issued for a server object permission by any principal in SQL Server. Equivalent to the Audit Server Object GDR Event Class.
SERVER_OPERATION_GROUP
This event is raised when Security Audit operations such as altering settings, resources, external access, or authorization are used. Equivalent to the Audit Server Operation Event Class.
SERVER_PERMISSION_CHANGE_GROUP
This event is raised when a GRANT, REVOKE, or DENY is issued for permissions in the server scope, such as creating a login. Equivalent to the Audit Server Scope GDR Event Class.
SERVER_PRINCIPAL_CHANGE_GROUP
This event is raised when server principals are created, altered, or dropped. Equivalent to the Audit Server Principal Management Event Class. This event is raised when a principal issues the sp_defaultdb or sp_defaultlanguage stored procedures or ALTER LOGIN statements. Equivalent to the Audit Addlogin Event Class. This event is raised on the sp_addlogin and sp_droplogin stored procedures. Also equivalent to the Audit Login Change Property Event Class. This event is raised for the sp_grantlogin or sp_revokelogin stored procedures. Equivalent to the Audit Login GDR Event Class.
SERVER_PRINCIPAL_IMPERSONATION_GROUP
This event is raised when there is an impersonation within server scope, such as EXECUTE AS <login>. Equivalent to the Audit Server Principal Impersonation Event Class.
SERVER_ROLE_MEMBER_CHANGE_GROUP
This event is raised whenever a login is added or removed from a fixed server role. This event is raised for the sp_addsrvrolemember and sp_dropsrvrolemember stored procedures. Equivalent to the Audit Add Login to Server Role Event Class.
SERVER_STATE_CHANGE_GROUP
This event is raised when the SQL Server service state is modified. Equivalent to the Audit Server Starts and Stops Event Class.
SUCCESSFUL_DATABASE_AUTHENTICATION_GROUP
Indicates that a principal successfully logged in to a contained database. Equivalent to the Audit Successful Database Authentication Event Class.
SUCCESSFUL_LOGIN_GROUP
Indicates that a principal has successfully logged in to SQL Server. Events in this class are raised by new connections or by connections that are reused from a connection pool. Equivalent to the Audit Login Event Class.
TRACE_CHANGE_GROUP
This event is raised for all statements that check for the ALTER TRACE permission. Equivalent to the Audit Server Alter Trace Event Class.
USER_CHANGE_PASSWORD_GROUP
This event is raised whenever the password of a contained database user is changed by using the ALTER USER statement.
USER_DEFINED_AUDIT_GROUP
This group monitors events raised by using sp_audit_write (TransactSQL). Typically triggers or stored procedures include calls to sp_audit_write to enable auditing of important events.
Database-Level Audit Action Groups
Database-level audit action groups are actions similar to SQL Server Security Audit Event classes. For more information about event classes, see SQL Server Event Class Reference. The following table describes the database-level audit action groups and provides their equivalent SQL Server Event Class where applicable.
Action group name
Description
APPLICATION_ROLE_CHANGE_PASSWORD_GROUP
This event is raised whenever a password is changed for an application role. Equivalent to the Audit App Role Change Password Event Class.
AUDIT_CHANGE_GROUP
This event is raised whenever any audit is created, modified or deleted. This event is raised whenever any audit specification is created, modified, or deleted. Any change to an audit is audited in that audit. Equivalent to the Audit Change Audit Event Class.
BACKUP_RESTORE_GROUP
This event is raised whenever a backup or restore command is issued. Equivalent to the Audit Backup/Restore Event Class.
DATABASE_CHANGE_GROUP
This event is raised when a database is created, altered, or dropped. Equivalent to the Audit Database Management Event Class.
DATABASE_LOGOUT_GROUP
This event is raised when a contained database user logs out of a database. Equivalent to the Audit Database Logout Event Class.
DATABASE_OBJECT_ACCESS_GROUP
This event is raised whenever database objects such as certificates and asymmetric keys are accessed. Equivalent to the Audit Database Object Access Event Class.
DATABASE_OBJECT_CHANGE_GROUP
This event is raised when a CREATE, ALTER, or DROP statement is executed on database objects, such as schemas. Equivalent to the Audit Database Object Management Event Class.
DATABASE_OBJECT_OWNERSHIP_CHANGE_GROUP
This event is raised when a change of owner for objects within database scope occurs. Equivalent to the Audit Database Object Take Ownership Event Class.
DATABASE_OBJECT_PERMISSION_CHANGE_GROUP
This event is raised when a GRANT, REVOKE, or DENY has been issued for database objects, such as assemblies and schemas. Equivalent to the Audit Database Object GDR Event Class.
DATABASE_OPERATION_GROUP
This event is raised when operations in a database, such as checkpoint or subscribe query notification, occur. Equivalent to the Audit Database Operation Event Class.
DATABASE_OWNERSHIP_CHANGE_GROUP
This event is raised when you use the ALTER AUTHORIZATION statement to change the owner of a database, and the permissions that are required to do that are checked. Equivalent to the Audit Change Database Owner Event Class.
DATABASE_PERMISSION_CHANGE_GROUP
This event is raised whenever a GRANT, REVOKE, or DENY is issued for a statement permission by any user in SQL Server for database-only events such as granting permissions on a database. Equivalent to the Audit Database Scope GDR Event Class.
DATABASE_PRINCIPAL_CHANGE_GROUP
This event is raised when principals, such as users, are created, altered, or dropped from a database. Equivalent to the Audit Database Principal Management Event Class. Also equivalent to the Audit Add DB User Event Class, which occurs on the deprecated sp_grantdbaccess, sp_revokedbaccess, sp_adduser, and sp_dropuser stored procedures. This event is also raised whenever a database role is added or removed using the deprecated sp_addrole and sp_droprole stored procedures. Equivalent to the Audit Add Role Event Class.
DATABASE_PRINCIPAL_IMPERSONATION_GROUP
This event is raised when there is an impersonation within database scope, such as EXECUTE AS or SETUSER. Equivalent to the Audit Database Principal Impersonation Event Class.
DATABASE_ROLE_MEMBER_CHANGE_GROUP
This event is raised whenever a login is added to or removed from a database role. This event class is used with the sp_addrolemember, sp_changegroup, and sp_droprolemember stored procedures. Equivalent to the Audit Add Member to DB Role Event Class.
DBCC_GROUP
This event is raised whenever a principal issues any DBCC command. Equivalent to the Audit DBCC Event Class.
FAILED_DATABASE_AUTHENTICATION_GROUP
Indicates that a principal tried to log on to a contained database and failed. Events in this class are raised by new connections or by connections that are reused from a connection pool.
SCHEMA_OBJECT_ACCESS_GROUP
This event is raised whenever an object permission has been used in the schema. Equivalent to the Audit Schema Object Access Event Class.
SCHEMA_OBJECT_CHANGE_GROUP
This event is raised when a CREATE, ALTER, or DROP operation is performed on a schema. Equivalent to the Audit Schema Object Management Event Class. This event is raised on schema objects. Equivalent to the Audit Object Derived Permission Event Class. Also equivalent to the Audit Statement Permission Event Class.
SCHEMA_OBJECT_OWNERSHIP_CHANGE_GROUP
This event is raised when the permission to change the owner of a schema object, such as a table, procedure, or function, is checked. This occurs when the ALTER AUTHORIZATION statement is used to assign an owner to an object. Equivalent to the Audit Schema Object Take Ownership Event Class.
SCHEMA_OBJECT_PERMISSION_CHANGE_GROUP
This event is raised whenever a grant, deny, or revoke is issued for a schema object. Equivalent to the Audit Schema Object GDR Event Class.
SUCCESSFUL_DATABASE_AUTHENTICATION_GROUP
Indicates that a principal successfully logged in to a contained database. Equivalent to the Audit Successful Database Authentication Event Class.
USER_CHANGE_PASSWORD_GROUP
This event is raised whenever the password of a contained database user is changed by using the ALTER USER statement.
USER_DEFINED_AUDIT_GROUP
This group monitors events raised by using sp_audit_write (Transact-SQL).
Database-Level Audit Actions
Database-level actions support the auditing of specific actions directly on database schema and schema objects, such as tables, views, stored procedures, functions, extended stored procedures, queues, and synonyms. Types, XML Schema Collections, Database, and Schema are not audited. The audit of schema objects may be configured on Schema and Database, which means that events on all schema objects contained by the specified schema or database will be audited. The following list describes database-level audit actions.
- SELECT: This event is raised whenever a SELECT is issued.
- UPDATE: This event is raised whenever an UPDATE is issued.
- INSERT: This event is raised whenever an INSERT is issued.
- DELETE: This event is raised whenever a DELETE is issued.
- EXECUTE: This event is raised whenever an EXECUTE is issued.
- RECEIVE: This event is raised whenever a RECEIVE is issued.
- REFERENCES: This event is raised whenever a REFERENCES permission is checked.
Audit-Level Audit Action Groups
You can also audit the actions in the auditing process. This can be in the server scope or the database scope. In the database scope, it only occurs for database audit specifications. The following describes the audit-level audit action group.
AUDIT_CHANGE_GROUP
This event is raised whenever one of the following commands is issued:
- CREATE SERVER AUDIT
- ALTER SERVER AUDIT
- DROP SERVER AUDIT
- CREATE SERVER AUDIT SPECIFICATION
- ALTER SERVER AUDIT SPECIFICATION
- DROP SERVER AUDIT SPECIFICATION
- CREATE DATABASE AUDIT SPECIFICATION
- ALTER DATABASE AUDIT SPECIFICATION
- DROP DATABASE AUDIT SPECIFICATION
Create a Server Audit and Database Audit Specification
https://msdn.microsoft.com/en-us/library/cc280424.aspx
Using SQL Server Management Studio
To create a server audit:
1. In Object Explorer, expand the Security folder.
2. Right-click the Audits folder and select New Audit…. For more information, see Create a Server Audit and Server Audit Specification.
3. When you are finished selecting options, click OK.
To create a database-level audit specification:
1. In Object Explorer, expand the database where you want to create an audit specification.
2. Expand the Security folder.
3. Right-click the Database Audit Specifications folder and select New Database Audit Specification…. The following options are available in the Create Database Audit Specification dialog box:
- Name: The name of the database audit specification. This is generated automatically when you create a new database audit specification, but it is editable.
- Audit: The name of an existing database audit. Either type in the name of the audit or select it from the list.
- Audit Action Type: Specifies the database-level audit action groups and audit actions to capture. For the list of database-level audit action groups and audit actions, and a description of the events they contain, see SQL Server Audit Action Groups and Actions.
- Object Schema: Displays the schema for the specified Object Name.
- Object Name: The name of the object to audit. This is only available for audit actions; it does not apply to audit groups.
- Ellipsis (…): Opens the Select Objects dialog to browse for and select an available object, based on the specified Audit Action Type.
- Principal Name: The account to filter the audit by for the object being audited.
- Ellipsis (…): Opens the Select Objects dialog to browse for and select an available object, based on the specified Object Name.
4. When you are finished selecting options, click OK.
Using Transact-SQL
To create a server audit:
1. In Object Explorer, connect to an instance of Database Engine.
2. On the Standard bar, click New Query.
3. Copy and paste the following example into the query window and click Execute.

USE master ;
GO
-- Create the server audit.
CREATE SERVER AUDIT Payrole_Security_Audit
    TO FILE ( FILEPATH = 'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA' ) ;
GO
-- Enable the server audit.
ALTER SERVER AUDIT Payrole_Security_Audit
    WITH (STATE = ON) ;
To create a database-level audit specification
1. In Object Explorer, connect to an instance of Database Engine.
2. On the Standard bar, click New Query.
3. Copy and paste the following example into the query window and click Execute. The example creates a database audit specification called Audit_Pay_Tables that audits SELECT and INSERT statements by the dbo user, for the HumanResources.EmployeePayHistory table, based on the server audit defined above.

USE AdventureWorks2012 ;
GO
-- Create the database audit specification.
CREATE DATABASE AUDIT SPECIFICATION Audit_Pay_Tables
FOR SERVER AUDIT Payrole_Security_Audit
ADD (SELECT , INSERT
     ON HumanResources.EmployeePayHistory BY dbo )
WITH (STATE = ON) ;
GO
For more information, see CREATE SERVER AUDIT (Transact-SQL) and CREATE DATABASE AUDIT SPECIFICATION (Transact-SQL).

View a SQL Server Audit Log
https://msdn.microsoft.com/en-us/library/cc280728.aspx
Requires the CONTROL SERVER permission.
Using SQL Server Management Studio
To view a SQL Server audit log:
1. In Object Explorer, expand the Security folder.
2. Expand the Audits folder.
3. Right-click the audit log that you want to view and select View Audit Logs. This opens the Log File Viewer – server_name dialog box. For more information, see Log File Viewer F1 Help.
4. When finished, click Close.
Microsoft recommends viewing the audit log by using the Log File Viewer. However, if you are creating an automated monitoring system, the information in the audit file can be read directly by using the sys.fn_get_audit_file (Transact-SQL) function. Reading the file directly returns data in a slightly different (unprocessed) format. See sys.fn_get_audit_file for more information.
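A hedged sketch of reading the audit file directly (the path must match the FILEPATH used when the audit was created; the columns shown are a small subset of the function's output):

-- Read all audit files written by the server audit created above.
SELECT event_time, action_id, succeeded,
       session_server_principal_name, statement
FROM sys.fn_get_audit_file(
     'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA\*.sqlaudit',
     DEFAULT, DEFAULT);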
2.1.5 Configure transparent database encryption Transparent Data Encryption with Azure SQL Database
2.2 Implement Azure SQL Database security https://azure.microsoft.com/en-us/documentation/articles/sql-database-security Security and Azure SQL Database technical white paper (PDF)
2.2.1 Configure firewalls
https://azure.microsoft.com/en-us/documentation/articles/sql-database-firewall-configure
Microsoft Azure SQL Database provides a relational database service for Azure and other Internet-based applications. To help protect your data, the SQL Database firewall prevents all access to your SQL Database server until you specify which computers have permission. The database firewall grants access based on the originating IP address of each request. To configure your database firewall, you create firewall rules that specify ranges of acceptable IP addresses. You can create firewall rules at the server and database levels.
- Server-level firewall rules: These rules enable clients to access your entire Azure SQL Database server, that is, all the databases within the same logical server. These rules are stored in the master database.
- Database-level firewall rules: These rules enable clients to access individual databases within your Azure SQL Database server. These rules are created per database and are stored in the individual databases (including master). They can be helpful in restricting access to certain (secure) databases within the same logical server.
Recommendation: Microsoft recommends using database-level firewall rules whenever possible to make your database more portable. Use server-level firewall rules when you have many databases
that have the same access requirements, and you don't want to spend time configuring each database individually.
2.2.2 Manage logins, users, and roles
Managing databases and logins in Azure SQL Database
Security administration in SQL Database is similar to security administration for an on-premises instance of SQL Server. Managing security at the database level is almost identical, with differences only in the parameters available. Because SQL Databases can scale to one or more physical computers, Azure SQL Database uses a different strategy for server-level administration. The following table summarizes how security administration for an on-premises SQL Server differs from that in Azure SQL Database.

Point of difference | On-premises SQL Server | Azure SQL Database
Where you manage server-level security | The Security folder in SQL Server Management Studio's Object Explorer | The master database and the Azure portal
Authentication | Windows Authentication (Active Directory identities) | Azure Active Directory identities
Server-level security role for creating logins | securityadmin fixed server role | loginmanager database role in the master database
Commands for managing logins | CREATE LOGIN, ALTER LOGIN, DROP LOGIN | CREATE LOGIN, ALTER LOGIN, DROP LOGIN (there are some parameter limitations and you must be connected to the master database)
View that shows all logins | sys.server_principals | sys.sql_logins (you must be connected to the master database)
Server-level role for creating databases | dbcreator fixed server role | dbmanager database role in the master database
Command for creating a database | CREATE DATABASE | CREATE DATABASE (there are some parameter limitations and you must be connected to the master database)
View that lists all databases | sys.databases | sys.databases (you must be connected to the master database)
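A minimal sketch of the commands above for Azure SQL Database; the login, user, and password names are hypothetical. The login is created while connected to master, the user while connected to the target database:

-- In the master database.
CREATE LOGIN app_login WITH PASSWORD = '<StrongPassword1!>';  -- placeholder password

-- In the user database.
CREATE USER app_user FROM LOGIN app_login;
ALTER ROLE db_datareader ADD MEMBER app_user;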
2.2.3 Assign permissions GRANT (Transact-SQL)
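A minimal sketch of assigning permissions with GRANT; the object, schema, and user names are hypothetical:

-- Grant SELECT on a single table to a database user.
GRANT SELECT ON OBJECT::dbo.Orders TO app_user;

-- Grant EXECUTE on every object in a schema, allowing the user to regrant it.
GRANT EXECUTE ON SCHEMA::Sales TO app_user WITH GRANT OPTION;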
2.2.4 Configure auditing Auditing is available for all Basic, Standard and Premium databases, and configurable via the new Azure Portal or via standard APIs. Get started with SQL database auditing Channel 9 video: Auditing in Azure SQL Database 20 minutes, 30 seconds. By implementing the auditing feature in SQL Database, you can retain your audit trail over time, as well as analyze reports showing database activity of success or failure conditions for the following predefined events:
Plain SQL
Parameterized SQL
Stored procedure
Logins
Transaction management
2.2.5 Configure row-level security Applies To: Azure SQL Database, SQL Server 2016 Preview Row-Level Security enables customers to control access to rows in a database table based on the characteristics of the user executing a query (e.g., group membership or execution context). Row-Level Security (RLS) simplifies the design and coding of security in your application. RLS enables you to implement restrictions on data row access. For example, you can ensure that workers can access only those data rows that are pertinent to their department, or restrict a customer's data access to only the data relevant to their company. The access restriction logic is located in the database tier rather than away from the data in another application tier. The database system applies the access restrictions every time that data access is attempted from any tier. This makes your security system more reliable and robust by reducing the surface area of your security system. Implement RLS by using the CREATE SECURITY POLICY Transact-SQL statement, and predicates created as inline table-valued functions.
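A minimal sketch of the pattern described above, with hypothetical schema, table, and column names: an inline table-valued function returns a row when access is allowed, and a security policy binds it to the table as a filter predicate.

CREATE SCHEMA Security;
GO
-- Allow a sales representative to see only their own rows; a manager sees all.
CREATE FUNCTION Security.fn_securitypredicate(@SalesRep AS sysname)
    RETURNS TABLE
    WITH SCHEMABINDING
AS
    RETURN SELECT 1 AS fn_securitypredicate_result
    WHERE @SalesRep = USER_NAME() OR USER_NAME() = 'Manager';
GO
CREATE SECURITY POLICY SalesFilter
    ADD FILTER PREDICATE Security.fn_securitypredicate(SalesRep)
    ON dbo.Sales
    WITH (STATE = ON);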
2.2.6 Configure data encryption
Transparent Data Encryption (TDE) has been an on-premises SQL Server option since SQL Server 2008, available exclusively for data at rest. That is, your data files and backups are encrypted, while data tables are not directly encrypted: if a user has been given permissions to a database with TDE enabled, that user can see all data. TDE protects the physical data files and transaction log files; if these are moved to another server, they cannot be opened and viewed on that server. Prior to the introduction of standalone database backup encryption in SQL Server 2014, TDE was the only option for natively encrypting database backups. SQL Database TDE works similarly, but the configuration in SQL Database is much simpler than it is in on-premises SQL Server. To enable TDE, click the Data Encryption On button in the Database Settings menu in the Azure Management Portal.
Encrypting data in transit. SQL Database connections are encrypted using TLS/SSL for the Tabular Data Stream (TDS) transfer of data. In fact, v12 now supports the strongest version of Transport Layer Security (TLS) 1.2 when connecting with the latest versions of ADO.NET (4.6), JDBC (4.2), or ODBC [??]. Support for ODBC on Linux, PHP, and Node.js is coming soon. For Azure SQL Database Microsoft provides a valid certificate for the TLS connection. For increased security and to eliminate the possibility of “man-in-the-middle” attacks, do the following for each of the different drivers:
Setting Encrypt=True will assure the client is using a connection that is encrypted.
Setting TrustServerCertificate=False ensures that the client will verify the certificate before accepting the connection.
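As an illustration, an ADO.NET connection string with both settings applied might look like the following; the server, database, user, and password values are hypothetical placeholders:

Server=tcp:myserver.database.windows.net,1433;Database=mydb;User ID=app_login@myserver;Password={your_password};Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;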
2.2.7 Configure data masking Dynamic Data Masking (DDM) is a feature that allows you to limit access to your sensitive data without making client or application changes, while also enabling visibility of a portion of the data. The underlying data in the database remains intact (data is obfuscated dynamically), and masking is applied based on user privilege. DDM requires the following components:
Privileged SQL users: These SQL users always have access to unmasked data.
Masking function: This set of methods controls access to data for different scenarios.
Masking rules: This set of rules defines the fields to mask and the masking function.
Important: Dynamic Data Masking does not protect against brute force attacks of the data from a malicious administrator. To implement DDM, you need to open the Dynamic Data Masking settings for your database in the Azure Management Portal, as shown in Figure 5. Here you can add masking rules to apply to your data. For example, you can select an existing masking field format for credit card numbers, Social Security Numbers, or email, among others, or you can create a custom format. You can make use of Masking Recommendations to easily discover potentially sensitive fields in your database that you would like to mask. Adding masking rules from this list of recommendations is as easy as clicking on ‘add’ for each relevant mask and saving the DDM settings.
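In SQL Server 2016 and current SQL Database versions, masking rules can also be defined in T-SQL; a minimal sketch with hypothetical table and column names:

-- Partial mask: keep the first character, pad the middle, keep the last four.
ALTER TABLE dbo.Customers
    ALTER COLUMN CreditCardNumber ADD MASKED WITH (FUNCTION = 'partial(1,"XXXX-XXXX-XXXX-",4)');

-- Built-in mask for email addresses.
ALTER TABLE dbo.Customers
    ALTER COLUMN Email ADD MASKED WITH (FUNCTION = 'email()');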
2.2.8 Configure Always Encrypted
Always Encrypted introduces a set of client libraries that allow operations on encrypted data transparently inside an application. With the introduction of Always Encrypted, Microsoft simplifies the process of encrypting your data, as the data is transparently encrypted at the client and stays encrypted throughout the rest of the application stack. Since this security is performed by an ADO.NET client library, minimal changes are needed for an existing application to use Always Encrypted. This allows encryption to be easily configured at the application layer and data to be encrypted at all layers of the application. Always Encrypted has the following characteristics:
The key is always under control of the client and application, and is never on the server.
Neither server nor database administrators can recover data in plain text.
Encrypted columns of data are never sent to the server as plain text.
Limited query operations on encrypted data are possible.
With Always Encrypted, data stays encrypted whether at rest or in motion. The encryption key remains inside the application in a trusted environment, thereby reducing the surface area for attack and simplifying implementation. To learn more and get a first-hand introduction to how to protect sensitive data with Always Encrypted, refer to the Always Encrypted blog.
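A minimal sketch of a table with an Always Encrypted column; it assumes a column encryption key named CEK1 has already been created, and the table and column names are hypothetical:

CREATE TABLE dbo.Patients (
    PatientId INT IDENTITY(1,1) PRIMARY KEY,
    -- Deterministic encryption permits equality comparisons and joins;
    -- it requires a BIN2 collation on character columns.
    SSN CHAR(11) COLLATE Latin1_General_BIN2
        ENCRYPTED WITH (
            COLUMN_ENCRYPTION_KEY = CEK1,
            ENCRYPTION_TYPE = DETERMINISTIC,
            ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256'
        ) NOT NULL
);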
3 Design for high availability, disaster recovery, and scalability (25–30%)
3.1 Design and implement high availability solutions
3.1.1 Design a high availability solution topology
High Availability Solutions (SQL Server): https://msdn.microsoft.com/en-us/library/ms190202.aspx
AlwaysOn Failover Cluster Instances
As part of the SQL Server AlwaysOn offering, AlwaysOn Failover Cluster Instances leverages Windows Server Failover Clustering (WSFC) functionality to provide local high availability through redundancy at the server-instance level: a failover cluster instance (FCI). An FCI is a single instance of SQL Server that is installed across WSFC nodes and, possibly, across multiple subnets. On the network, an FCI appears to be an instance of SQL Server running on a single computer, but the FCI provides failover from one WSFC node to another if the current node becomes unavailable. For more information, see AlwaysOn Failover Cluster Instances (SQL Server).
AlwaysOn Availability Groups
AlwaysOn Availability Groups is an enterprise-level high-availability and disaster recovery solution introduced in SQL Server 2012 to enable you to maximize availability for one or more user databases. AlwaysOn Availability Groups requires that the SQL Server instances reside on WSFC nodes. For more information, see AlwaysOn Availability Groups (SQL Server).
Database mirroring
This feature will be removed in a future version of Microsoft SQL Server. Avoid using this feature in new development work, and plan to modify applications that currently use this feature. We recommend that you use AlwaysOn Availability Groups instead. Database mirroring is a solution to increase database availability by supporting almost instantaneous failover. Database mirroring can be used to maintain a single standby database, or mirror database, for a corresponding production database that is referred to as the principal database. For more information, see Database Mirroring (SQL Server).
Log shipping
Like AlwaysOn Availability Groups and database mirroring, log shipping operates at the database level. You can use log shipping to maintain one or more warm standby databases (referred to as secondary databases) for a single production database that is referred to as the primary database. For more information about log shipping, see About Log Shipping (SQL Server).
3.1.2 Implement high availability solutions between on-premises and Azure https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-sql-server-high-availability-and-disaster-recovery-solutions
AlwaysOn Availability Groups: Some availability replicas running in Azure VMs and other replicas running on-premises for cross-site disaster recovery. The production site can be either on-premises or in an Azure datacenter. Because all availability replicas must be in the same WSFC cluster, the WSFC cluster must span both networks (a multi-subnet WSFC cluster). This configuration requires a VPN connection between Azure and the on-premises network. For successful disaster recovery of your databases, you should also install a replica domain controller at the disaster recovery site. It is possible to use the Add Replica Wizard in SSMS to add an Azure replica to an existing AlwaysOn Availability Group. For more information, see Tutorial: Extend your AlwaysOn Availability Group to Azure.
Database Mirroring: One partner running in an Azure VM and the other running on-premises for cross-site disaster recovery using server certificates. Partners do not need to be in the same Active Directory domain, and no VPN connection is required. Another database mirroring scenario involves one partner running in an Azure VM and the other running on-premises in the same Active Directory domain for cross-site disaster recovery. A VPN connection between the Azure virtual network and the on-premises network is required. For successful disaster recovery of your databases, you should also install a replica domain controller at the disaster recovery site.
Log Shipping: One server running in an Azure VM and the other running on-premises for cross-site disaster recovery. Log shipping depends on Windows file sharing, so a VPN connection between the Azure virtual network and the on-premises network is required. For successful disaster recovery of your databases, you should also install a replica domain controller at the disaster recovery site.
Backup and Restore with Azure Blob Storage Service: On-premises production databases backed up directly to Azure blob storage for disaster recovery. For more information, see Backup and Restore for SQL Server in Azure Virtual Machines.
3.1.3 Design cloud-based backup solutions
https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-sql-server-backup-and-restore/
The sections below include information specific to the different versions of SQL Server supported in an Azure virtual machine.
Backup Considerations When Database Files are Stored in the Microsoft Azure Blob service
The reasons for performing database backups, and the underlying backup technology itself, change when your database files are stored in Microsoft Azure Blob storage. For more information on storing database files in Azure blob storage, see SQL Server Data Files in Azure.
You no longer need to perform database backups to provide protection against hardware or media failure because Microsoft Azure provides this protection as part of the Microsoft Azure service.
You still need to perform database backups to provide protection against user errors, or for archival purposes, regulatory reasons, or administrative purposes.
You can perform nearly instantaneous backups and rapid restores using the SQL Server File-Snapshot Backup feature in Microsoft SQL Server 2016 Community Technology Preview 3 (CTP3). For more information, see File-Snapshot Backups for Database Files in Azure. Backup and Restore in Microsoft SQL Server 2016 Community Technology Preview 3 (CTP3) Microsoft SQL Server 2016 Community Technology Preview 3 (CTP3) supports the backup and restore with Azure blobs features found in SQL Server 2014 and described below. But it also includes the following enhancements:
Striping: When backing up to Microsoft Azure blob storage, SQL Server 2016 supports backing up to multiple blobs to enable backing up large databases, up to a maximum of 12.8 TB.
Snapshot Backup: Through the use of Azure snapshots, SQL Server File-Snapshot Backup provides nearly instantaneous backups and rapid restores for database files stored using the Azure Blob storage service. This capability enables you to simplify your backup and restore policies. File-snapshot backup also supports point-in-time restore. For more information, see Snapshot Backups for Database Files in Azure.
Managed Backup Scheduling: SQL Server Managed Backup to Azure now supports custom schedules. For more information, see SQL Server Managed Backup to Microsoft Azure.
NOTE: For a tutorial of the capabilities of SQL Server 2016 when using Azure Blob storage, see Tutorial: Using the Microsoft Azure Blob storage service with SQL Server 2016 databases.
Backup and Restore in SQL Server 2014
SQL Server 2014 includes the following enhancement, Backup and Restore to Azure:
SQL Server Backup to URL now has support in SQL Server Management Studio. The option to backup to Azure is now available when using Backup or Restore task, or maintenance plan wizard in SQL Server Management Studio. For more information, see SQL Server Backup to URL.
SQL Server Managed Backup to Azure has new functionality that enables automated backup management. This is especially useful for automating backup management for SQL Server 2014 instances running in an Azure virtual machine. For more information, see SQL Server Managed Backup to Microsoft Azure.
Automated Backup provides additional automation to automatically enable SQL Server Managed Backup to Azure on all existing and new databases for a SQL Server VM in Azure. For more information, see Automated Backup for SQL Server in Azure Virtual Machines.
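A minimal sketch of the SQL Server 2014 Backup to URL syntax; the storage account, container, and credential names are hypothetical placeholders:

-- Credential holding the storage account name and access key.
CREATE CREDENTIAL AzureBackupCred
    WITH IDENTITY = 'mystorageaccount',
    SECRET = '<storage_account_access_key>';  -- placeholder

-- Back up the database directly to Azure blob storage.
BACKUP DATABASE MyDatabase
    TO URL = 'https://mystorageaccount.blob.core.windows.net/backups/MyDatabase.bak'
    WITH CREDENTIAL = 'AzureBackupCred', COMPRESSION;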
For an overview of all the options for SQL Server 2014 Backup to Azure, see SQL Server Backup and Restore with Microsoft Azure Blob Storage Service.
Encryption: SQL Server 2014 supports encrypting data when creating a backup. It supports several encryption algorithms and the use of a certificate or asymmetric key. For more information, see Backup Encryption.
Backup and Restore in SQL Server 2012
For detailed information on SQL Server Backup and Restore in SQL Server 2012, see Backup and Restore of SQL Server Databases (SQL Server 2012). Starting in SQL Server 2012 SP1 Cumulative Update 2, you can back up to and restore from the Azure Blob Storage service. This enhancement can be used to back up SQL Server databases on a SQL Server running in an Azure Virtual Machine or on an on-premises instance. For more information, see SQL Server Backup and Restore with Azure Blob Storage Service. Some of the benefits of using the Azure Blob storage service include the ability to bypass the 16-disk limit for attached disks, ease of management, and the direct availability of the backup file to another instance of SQL Server running on an Azure virtual machine, or to an on-premises instance, for migration or disaster recovery purposes. For a full list of benefits of using the Azure Blob storage service for SQL Server backups, see the Benefits section in SQL Server Backup and Restore with Azure Blob Storage Service. For best practice recommendations and troubleshooting information, see Backup and Restore Best Practices (Azure Blob Storage Service).
Backup and Restore in other versions of SQL Server supported in an Azure Virtual Machine
For SQL Server Backup and Restore in SQL Server 2008 R2, see Backing up and Restoring Databases in SQL Server (SQL Server 2008 R2). For SQL Server Backup and Restore in SQL Server 2008, see Backing up and Restoring Databases in SQL Server (SQL Server 2008).
Next Steps
If you are still planning your deployment of SQL Server in an Azure VM, you can find provisioning guidance in the following tutorial: Provisioning a SQL Server Virtual Machine on Azure.
Although backup and restore can be used to migrate your data, there are potentially easier data migration paths to SQL Server on an Azure VM. For a full discussion of migration options and recommendations, see Migrating a Database to SQL Server on an Azure VM. Review other resources for running SQL Server in Azure Virtual Machines.
3.1.4 Implement backup and recovery strategies
Azure SQL Database Backup and Restore Strategy
Read through Windows Azure SQL Database Backup and Restore Strategy. For the latest information on SQL Database backup and restore strategy, read SQL Database: Business Continuity Overview. Below is some information from the SQL DB: Business Continuity Overview.
To discuss the business continuity solutions, there are several concepts you need to be familiar with:
Disaster recovery (DR): a process of restoring the normal business function of the application.
Estimated Recovery Time (ERT): the estimated duration for the database to be fully available after a restore or failover request.
Recovery time objective (RTO): the maximum acceptable time before the application fully recovers after the disruptive event. RTO measures the maximum loss of availability during failures.
Recovery point objective (RPO): the maximum amount of last updates (time interval) the application can lose by the moment it fully recovers after the disruptive event. RPO measures the maximum loss of data during failures.
The following table shows the differences in the business continuity features across the service tiers:
Capability | Basic tier | Standard tier | Premium tier
Point In Time Restore | Any restore point within 7 days | Any restore point within 14 days | Any restore point within 35 days
Geo-Restore | ERT < 12h, RPO < 1h | ERT < 12h, RPO < 1h | ERT < 12h, RPO < 1h
Standard Geo-Replication | not included | ERT < 30s, RPO < 5s | ERT < 30s, RPO < 5s
Active Geo-Replication | not included | not included | ERT < 30s, RPO < 5s
Back Up and Restore of SQL Server Databases (SQL Server 2014)
Impact of the Recovery Model on Backup and Restore
Backup and restore operations occur within the context of a recovery model. A recovery model is a database property that controls how the transaction log is managed. Also, the recovery model of a database determines what types of backups and what restore scenarios are supported for the database. Typically, a database uses either the simple recovery model or the full recovery model. The full recovery model can be supplemented by switching to the bulk-logged recovery model before bulk operations. For an introduction to these recovery models and how they affect transaction log management, see The Transaction Log (SQL Server). The best choice of recovery model for the database depends on your business requirements. To avoid transaction log management and simplify backup and restore, use the simple recovery model.
To minimize work-loss exposure, at the cost of administrative overhead, use the full recovery model. For information about the effect of recovery models on backup and restore, see Backup Overview (SQL Server).
Design the Backup Strategy
After you have selected a recovery model that meets your business requirements for a specific database, you have to plan and implement a corresponding backup strategy. The optimal backup strategy depends on a variety of factors, of which the following are especially significant:
How many hours a day do applications have to access the database? If there is a predictable off-peak period, we recommend that you schedule full database backups for that period.
How frequently are changes and updates likely to occur? If changes are frequent, consider the following:
o Under the simple recovery model, consider scheduling differential backups between full database backups. A differential backup captures only the changes since the last full database backup.
o Under the full recovery model, you should schedule frequent log backups. Scheduling differential backups between full backups can reduce restore time by reducing the number of log backups you have to restore after restoring the data.
Are changes likely to occur in only a small part of the database or in a large part of the database? For a large database in which changes are concentrated in a part of the files or filegroups, partial backups and/or file backups can be useful. For more information, see Partial Backups (SQL Server) and Full File Backups (SQL Server).
How much disk space will a full database backup require? For more information, see Estimate the Size of a Full Database Backup, later in this section.
Estimate the Size of a Full Database Backup
Before you implement a backup and restore strategy, you should estimate how much disk space a full database backup will use. The backup operation copies the data in the database to the backup file. The backup contains only the actual data in the database and not any unused space. Therefore, the backup is usually smaller than the database itself. You can estimate the size of a full database backup by using the sp_spaceused system stored procedure. For more information, see sp_spaceused (Transact-SQL).
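For example, a minimal sketch (the database name is a hypothetical placeholder); the database size minus the unallocated space approximates the size of a full backup:

USE MyDatabase;
-- Reports database_size and unallocated space for the current database.
EXEC sp_spaceused;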
Schedule Backups
Performing a backup operation has minimal effect on transactions that are running; therefore, backup operations can be run during regular operations. You can perform a SQL Server backup with minimal effect on production workloads.
Note: For information about concurrency restrictions during backup, see Backup Overview (SQL Server).
After you decide what types of backups you require and how frequently you have to perform each type, we recommend that you schedule regular backups as part of a database maintenance plan for the database. For information about maintenance plans and how to create them for database backups and log backups, see Use the Maintenance Plan Wizard.
Test Your Backups
You do not have a restore strategy until you have tested your backups. It is very important to thoroughly test your backup strategy for each of your databases by restoring a copy of the database onto a test system. You must test restoring every type of backup that you intend to use. We recommend that you maintain an operations manual for each database. This operations manual should document the location of the backups, backup device names (if any), and the amount of time that is required to restore the test backups.
3.2 Design and implement scalable solutions 3.2.1 Design a scale-out solution Scaling Out SQL Server https://msdn.microsoft.com/en-us/library/aa479364.aspx Configure a Native Mode Report Server Scale-Out Deployment (SSRS Configuration Manager) https://msdn.microsoft.com/en-us/library/ms159114.aspx
3.2.2 Implement multi-master scenarios with database replication
WIKI Multi master replication
https://en.wikipedia.org/wiki/Multi-master_replication#Microsoft_SQL
Microsoft SQL Server provides multi-master replication through peer-to-peer replication. It provides a scale-out and high-availability solution by maintaining copies of data across multiple nodes. Built on the foundation of transactional replication, peer-to-peer replication propagates transactionally consistent changes in near real-time.
Peer-to-Peer Transactional Replication: https://msdn.microsoft.com/en-us/library/ms151196.aspx
3.2.3 Implement elastic scale for SQL Database Elastic Scale with Azure SQL Database - Getting Started: https://channel9.msdn.com/Blogs/Windows-Azure/Elastic-Scale-with-Azure-SQL-Database-GettingStarted Azure SQL Database Elastic Scale: https://channel9.msdn.com/Shows/Data-Exposed/Azure-SQL-Database-Elastic-Scale
3.3 Design and implement SQL Database data recovery 3.3.1 Design a backup solution for SQL Database https://azure.microsoft.com/en-us/documentation/articles/sql-database-bcdr-faq/
https://azure.microsoft.com/en-us/documentation/articles/sql-database-business-continuity/ Business continuity features The following table shows the differences in the business continuity features across the service tiers:
Capability | Basic tier | Standard tier | Premium tier
Point In Time Restore | Any restore point within 7 days | Any restore point within 14 days | Any restore point within 35 days
Geo-Restore | ERT < 12h, RPO < 1h | ERT < 12h, RPO < 1h | ERT < 12h, RPO < 1h
Standard Geo-Replication | not included | ERT < 30s, RPO < 5s | ERT < 30s, RPO < 5s
Active Geo-Replication | not included | not included | ERT < 30s, RPO < 5s
https://azure.microsoft.com/en-us/documentation/articles/sql-database-business-continuity-design
Designing your application for business continuity requires you to answer the following questions:
1. Which business continuity feature is appropriate for protecting my application from outages?
2. What level of redundancy and replication topology do I use?
When to use Geo-Restore
SQL Database provides built-in basic protection of every database by default. It is done by storing the database backups in geo-redundant Azure storage (GRS). If you choose this method, no special configuration or additional resource allocation is necessary. With these backups, you can recover your database in any region using the Geo-Restore command. Use the Recover from an outage section for the details of using geo-restore to recover your application. You should use the built-in protection if your application meets the following criteria:
1. It is not considered mission critical. It doesn't have a binding SLA, therefore downtime of 24 hours or longer will not result in financial liability.
2. The rate of data change is low (e.g. transactions per hour). The RPO of 1 hour will not result in massive data loss.
3. The application is cost sensitive and cannot justify the additional cost of Geo-Replication.
NOTE: Geo-Restore does not pre-allocate the compute capacity in any particular region to restore active databases from the backup during the outage. The service will manage the workload
associated with the geo-restore requests in a manner that minimizes the impact on the existing databases in that region, and their capacity demands will have priority. Therefore, the recovery time of your database will depend on how many other databases will be recovering in the same region at the same time.
When to use Geo-Replication
Geo-Replication creates a replica database (secondary) in a different region from your primary. It guarantees that your database will have the necessary data and compute resources to support the application's workload after the recovery. Refer to the Recover from an outage section for using failover to recover your application. You should use Geo-Replication if your application meets the following criteria:
1. It is mission critical. It has a binding SLA with aggressive RPO and RTO. Loss of data and availability will result in financial liability.
2. The rate of data change is high (e.g. transactions per minute or second). The RPO of 1 hour associated with the default protection will likely result in unacceptable data loss.
3. The cost associated with using Geo-Replication is significantly lower than the potential financial liability and associated loss of business.
NOTE: If your application uses Basic tier database(s), Geo-Replication is not supported.
When to choose Standard vs. Active Geo-Replication
Standard tier databases do not have the option of using Active Geo-Replication, so if your application uses standard databases and meets the above criteria it should enable Standard Geo-Replication. Premium databases, on the other hand, can choose either option. Standard Geo-Replication has been designed as a simpler and less expensive disaster recovery solution, particularly suited to applications that use it only to protect from unplanned events such as outages. With Standard Geo-Replication you can only use the DR paired region for the recovery and can create only one secondary for each primary. An additional secondary may be necessary for the application upgrade scenario. So if this scenario is critical for your application you should enable Active Geo-Replication instead. Please refer to Upgrade application without downtime for additional details.
NOTE: Active Geo-Replication also supports read-only access to the secondary database, thus providing additional capacity for read-only workloads.
How to enable Geo-Replication
You can enable Geo-Replication using the Azure Classic Portal or by calling the REST API or a PowerShell command.
Azure Classic Portal
1. Log in to the Azure Classic Portal.
2. On the left side of the screen select BROWSE and then select SQL Databases.
3. Navigate to your database blade, select the Geo Replication map, and click Configure Geo-Replication.
4. Navigate to the Geo-Replication blade. Select the target region.
5. Navigate to the Create Secondary blade. Select an existing server in the target region or create a new one.
6. Select the secondary type (Readable or Non-readable).
7. Click Create to complete the configuration.
NOTE: The DR paired region on the Geo-Replication blade will be marked as recommended. If you use a Premium tier database you can choose a different region. If you are using a Standard database you cannot change it. The Premium database will have a choice of the secondary type (Readable or Non-readable). A Standard database can only select a Non-readable secondary.
PowerShell
Use the New-AzureRmSqlDatabaseSecondary PowerShell cmdlet to create a Geo-Replication configuration. This command is synchronous and returns when the primary and secondary databases are in sync. To configure Geo-Replication with a non-readable secondary for a Premium or Standard database:
# 'rg1' and 'srv1' are placeholders for the primary database's resource group and server.
$database = Get-AzureRmSqlDatabase -DatabaseName "mydb" -ResourceGroupName "rg1" -ServerName "srv1"
$secondaryLink = $database | New-AzureRmSqlDatabaseSecondary -PartnerResourceGroupName "rg2" -PartnerServerName "srv2" -AllowConnections "None"
To create Geo-Replication with a readable secondary for a Premium database:
$database = Get-AzureRmSqlDatabase -DatabaseName "mydb" -ResourceGroupName "rg1" -ServerName "srv1"
$secondaryLink = $database | New-AzureRmSqlDatabaseSecondary -PartnerResourceGroupName "rg2" -PartnerServerName "srv2" -AllowConnections "All"
REST API
Use the Create Database API with createMode set to NonReadableSecondary or Secondary to programmatically create a Geo-Replication secondary database. This API is asynchronous. After it returns, use the Get Replication Link API to check the status of this operation. The replicationState field of the response body will have the value CATCHUP when the operation is completed.
How to choose the failover configuration
When designing your application for business continuity you should consider several configuration options. The choice will depend on the application deployment topology and what parts of your application are most vulnerable to an outage. Please refer to Designing Cloud Solutions for Disaster Recovery Using Geo-Replication for guidance.
3.3.2 Implement self-service restore
From http://blogs.technet.com/b/dataplatforminsider/archive/2014/05/05/azure-sql-database-service-tiers-amp-performance-q-amp-a.aspx :
Self-service Restore: SQL Database Premium allows you to restore your database to any point in time within the last 35 days, in the case of a human or programmatic data deletion scenario. Replace import/export workarounds with self-service control over database restore. For more on using Self-service Restore, see Restore Service documentation.
3.3.3 Copy and export databases
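In Azure SQL Database, a transactionally consistent copy can be created with T-SQL run against the master database of the destination server; the server and database names below are hypothetical. Export to a BACPAC file (for example through the portal's Export function) covers the export scenario.

-- Copy a database on the same logical server.
CREATE DATABASE TestDB12_copy1 AS COPY OF TestDB12;

-- Copy a database from another logical server.
CREATE DATABASE TestDB12_copy2 AS COPY OF server12.TestDB12;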
4 Monitor and manage database implementations on Azure (25–30%)
4.1 Monitor and troubleshoot SQL Server VMs on Azure
4.1.1 Monitor database and instance activity
Server Performance and Activity Monitoring
https://msdn.microsoft.com/en-us/library/ms191511(v=sql.120).aspx
To perform monitoring tasks with Windows tools
Start System Monitor (Windows)
View the Windows Application Log (Windows)
To create SQL Server database alerts with Windows tools
Set Up a SQL Server Database Alert (Windows)
To perform monitoring tasks with SQL Server Management Studio
View the SQL Server Error Log (SQL Server Management Studio)
Open Activity Monitor (SQL Server Management Studio)
To perform monitoring tasks with SQL Trace by using Transact-SQL stored procedures
Create a Trace (Transact-SQL)
Set a Trace Filter (Transact-SQL)
Modify an Existing Trace (Transact-SQL)
View a Saved Trace (Transact-SQL)
View Filter Information (Transact-SQL)
Delete a Trace (Transact-SQL)
To create and modify traces by using SQL Server Profiler
Create a Trace (SQL Server Profiler)
Set Global Trace Options (SQL Server Profiler)
Specify Events and Data Columns for a Trace File (SQL Server Profiler)
Create a Transact-SQL Script for Running a Trace (SQL Server Profiler)
Save Trace Results to a File (SQL Server Profiler)
Set a Maximum File Size for a Trace File (SQL Server Profiler)
Save Trace Results to a Table (SQL Server Profiler)
Set a Maximum Table Size for a Trace Table (SQL Server Profiler)
Filter Events in a Trace (SQL Server Profiler)
View Filter Information (SQL Server Profiler)
Modify a Filter (SQL Server Profiler)
Filter Events Based on the Event Start Time (SQL Server Profiler)
Filter Events Based on the Event End Time (SQL Server Profiler)
Filter Server Process IDs (SPIDs) in a Trace (SQL Server Profiler)
Organize Columns Displayed in a Trace (SQL Server Profiler)
To start, pause, and stop traces by using SQL Server Profiler
Start a Trace Automatically after Connecting to a Server (SQL Server Profiler)
Pause a Trace (SQL Server Profiler)
Stop a Trace (SQL Server Profiler)
Run a Trace After It Has Been Paused or Stopped (SQL Server Profiler)
To open traces and configure how traces are displayed by using SQL Server Profiler
Open a Trace File (SQL Server Profiler)
Open a Trace Table (SQL Server Profiler)
Clear a Trace Window (SQL Server Profiler)
Close a Trace Window (SQL Server Profiler)
Set Trace Definition Defaults (SQL Server Profiler)
Set Trace Display Defaults (SQL Server Profiler)
To replay traces by using SQL Server Profiler
Replay a Trace File (SQL Server Profiler)
Replay a Trace Table (SQL Server Profiler)
Replay a Single Event at a Time (SQL Server Profiler)
Replay to a Breakpoint (SQL Server Profiler)
Replay to a Cursor (SQL Server Profiler)
Replay a Transact-SQL Script (SQL Server Profiler)
To create, modify, and use trace templates by using SQL Server Profiler
Create a Trace Template (SQL Server Profiler)
Modify a Trace Template (SQL Server Profiler)
Derive a Template from a Running Trace (SQL Server Profiler)
Derive a Template from a Trace File or Trace Table (SQL Server Profiler)
Export a Trace Template (SQL Server Profiler)
Import a Trace Template (SQL Server Profiler)
To use SQL Server Profiler traces to collect and monitor server performance
Find a Value or Data Column While Tracing (SQL Server Profiler)
Save Deadlock Graphs (SQL Server Profiler)
Save Showplan XML Events Separately (SQL Server Profiler)
Save Showplan XML Statistics Profile Events Separately (SQL Server Profiler)
Extract a Script from a Trace (SQL Server Profiler)
Correlate a Trace with Windows Performance Log Data (SQL Server Profiler)
4.1.2 Monitor using dynamic management views (DMVs) and dynamic management functions (DMFs) Dynamic management views and functions return server state information that can be used to monitor the health of a server instance, diagnose problems, and tune performance. There are two types of dynamic management views and functions:
Server-scoped dynamic management views and functions. These require VIEW SERVER STATE permission on the server.
Database-scoped dynamic management views and functions. These require VIEW DATABASE STATE permission on the database.
The article Dynamic Management Views and Functions (Transact-SQL) explains DMVs and DMFs and gives several examples.
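For example, a minimal sketch of a server-scoped DMV query returning the top statements by total CPU time (requires VIEW SERVER STATE):

-- Top 5 cached statements by total worker (CPU) time.
SELECT TOP 5
    qs.total_worker_time,
    qs.execution_count,
    st.text AS query_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;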
4.1.3 Monitor performance and scalability
4.2 Monitor and troubleshoot SQL Database 4.2.1 Monitor and troubleshoot SQL Database Windows Azure SQL Database Management Pack for System Center 2012 https://www.microsoft.com/en-us/download/details.aspx?id=38829 The Microsoft Windows Azure SQL Database Management Pack enables you to monitor the availability and performance of applications that are running on Windows Azure SQL Database.
Feature Summary After configuration, the Microsoft Windows Azure SQL Database Monitoring Management Pack offers the following functionalities:
User-friendly wizard to discover Windows Azure SQL Database servers.
Provides availability status of Windows Azure SQL Database server.
Collects and monitors health of Windows Azure SQL Database databases.
o Space monitoring: used space, free space, total allocated quota.
o Track the total number of databases per server.
o Collects and monitors performance information: average memory per session, total memory per session, total CPU time per session, total I/O per session, number of database sessions, maximum transaction execution time, maximum transaction lock count, maximum transaction log space used, network egress/ingress bandwidth.
Ability to define custom thresholds for each monitor to configure the warning and critical alerts.
Run-as profile to securely connect to Windows Azure SQL Database.
Detailed knowledge to guide the IT operator with troubleshooting the problem.
Custom tasks to redirect the user to the Windows Azure SQL Database online portal.
Custom query support to enable application-specific availability and performance monitoring.
https://azure.microsoft.com/en-in/documentation/articles/sql-database-troubleshoot-performance https://azure.microsoft.com/en-in/documentation/articles/sql-database-monitoring-with-dmvs/ Microsoft Azure SQL Database enables a subset of dynamic management views to diagnose performance problems, which might be caused by blocked or long-running queries, resource bottlenecks, poor query plans, and so on. This topic provides information on how to detect common performance problems by using dynamic management views. SQL Database partially supports three categories of dynamic management views:
Database-related dynamic management views.
Execution-related dynamic management views.
Transaction-related dynamic management views. For detailed information on dynamic management views, see Dynamic Management Views and Functions (Transact-SQL) in SQL Server Books Online.
4.2.2 Monitor database activity
4.2.3 Monitor using DMVs and DMFs Microsoft Azure SQL Database enables a subset of dynamic management views to diagnose performance problems, which might be caused by blocked or long-running queries, resource bottlenecks, poor query plans, and so on. This topic provides information on how to detect common performance problems by using dynamic management views. SQL Database partially supports three categories of dynamic management views:
Database-related dynamic management views. Execution-related dynamic management views. Transaction-related dynamic management views.
For detailed information on dynamic management views, see Dynamic Management Views and Functions (Transact-SQL) in SQL Server Books Online. (This is the same article as mentioned in 4.1.2.)
In SQL Database, querying a dynamic management view requires VIEW DATABASE STATE permissions. The VIEW DATABASE STATE permission returns information about all objects within the current database. To grant the VIEW DATABASE STATE permission to a specific database user, run the following query:
GRANT VIEW DATABASE STATE TO database_user
In an instance of on-premises SQL Server, dynamic management views return server state information. In SQL Database, they return information regarding your current logical database only.
The information above is from the article SQL Database Monitoring with Dynamic Management Views. Read this article and also go through the examples mentioned in this article:
Calculating database sizes (see the sketch below)
Monitoring connections
Monitoring query performance
o Finding top N queries
o Monitoring blocked queries
o Monitoring query plans
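A minimal sketch of the first of these examples, calculating the size of the current database in MB from sys.dm_db_partition_stats:

-- Reserved pages are 8 KB each; sum across all partitions.
SELECT SUM(reserved_page_count) * 8.0 / 1024 AS database_size_mb
FROM sys.dm_db_partition_stats;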
4.2.4 Monitor performance and scalability.
4.3 Automate and manage database implementations on Azure Overview: management tools for SQL Database
Azure Classic Portal
SQL Server Management Studio (SSMS) and SQL Server Data Tools (SSDT) in Visual Studio
PowerShell
4.3.1 Manage SQL Server in Azure VMs with PowerShell
SQL Server PowerShell

Task Description | Topic
Describes the preferred mechanism for running the SQL Server PowerShell components; to open a PowerShell session and load the sqlps module. The sqlps module loads in the SQL Server PowerShell provider and cmdlets, and the SQL Server Management Object (SMO) assemblies used by the provider and cmdlets. | Import the SQLPS Module
Describes how to load only the SMO assemblies without the provider or cmdlets. | Load the SMO Assemblies in Windows PowerShell
Describes how to run a Windows PowerShell session by right-clicking a node in Object Explorer. Management Studio launches a Windows PowerShell session, loads the sqlps module, and sets the SQL Server provider path to the object selected. | Run Windows PowerShell from SQL Server Management Studio
Describes how to create SQL Server Agent job steps that run a Windows PowerShell script. The jobs can then be scheduled to run at specific times or in response to events. | Run Windows PowerShell Steps in SQL Server Agent
Describes how to use the SQL Server provider to navigate a hierarchy of SQL Server objects. | SQL Server PowerShell Provider
Describes how to use the SQL Server cmdlets that specify Database Engine actions such as running a Transact-SQL script. | Use the Database Engine cmdlets
Describes how to specify SQL Server delimited identifiers that contain characters not supported by Windows PowerShell. | SQL Server Identifiers in PowerShell
Describes how to make SQL Server Authentication connections. By default, the SQL Server PowerShell components use Windows Authentication connections using the Windows credentials of the process running Windows PowerShell. | Manage Authentication in Database Engine PowerShell
Describes how to use variables implemented by the SQL Server PowerShell provider to control how many objects are listed when using Windows PowerShell tab completion. This is particularly useful when working on databases that contain large numbers of objects. | Manage Tab Completion (SQL Server PowerShell)
Describes how to use Get-Help to get information about the SQL Server components in the Windows PowerShell environment. | Get Help SQL Server PowerShell
4.3.2 Manage Azure SQL Database with PowerShell
Manage Azure SQL Database with PowerShell

Add-AzureRmAccount
Select-AzureRmSubscription -SubscriptionId 4cac86b0-1e56-bbbb-aaaa-000000000000
$AzureSQLLocations = (Get-AzureRmResourceProvider -ListAvailable | Where-Object {$_.ProviderNamespace -eq 'Microsoft.Sql'}).Locations
New-AzureRmResourceGroup -Name "resourcegroupJapanWest" -Location "Japan West"
New-AzureRmSqlServer -ResourceGroupName "resourcegroupJapanWest" -ServerName "server12" -Location "Japan West" -ServerVersion "12.0"
New-AzureRmSqlServerFirewallRule -ResourceGroupName "resourcegroupJapanWest" -ServerName "server12" -FirewallRuleName "clientFirewallRule1" -StartIpAddress "192.168.0.198" -EndIpAddress "192.168.0.199"
New-AzureRmSqlDatabase -ResourceGroupName "resourcegroupJapanWest" -ServerName "server12" -DatabaseName "TestDB12" -Edition Standard -RequestedServiceObjectiveName "S1"
Set-AzureRmSqlDatabase -ResourceGroupName "resourcegroupJapanWest" -ServerName "server12" -DatabaseName "TestDB12" -Edition Standard -RequestedServiceObjectiveName "S3"
Remove-AzureRmSqlDatabase -ResourceGroupName "resourcegroupJapanWest" -ServerName "server12" -DatabaseName "TestDB12"
Remove-AzureRmSqlServer -ResourceGroupName "resourcegroupJapanWest" -ServerName "server12"

Get-Command *-AzureRmSql* lists all Azure SQL cmdlets. Azure SQL cmdlets
4.3.3 Configure Automation and Runbooks https://azure.microsoft.com/en-us/documentation/articles/automation-manage-sql-database/ https://azure.microsoft.com/en-us/blog/azure-automation-your-sql-agent-in-the-cloud/ http://davidjrh.intelequia.com/2015/10/rebuilding-sql-database-indexes-using.html How to perform index maintenance on Azure SQL Database