- time rsync -avh -E --progress --rsync-path="sudo rsync" user@remote_hist:/mnt/database-backups/BAK
In summary, you need to add the article to the publication and then generate a new snapshot.
The link below contains the details:
A common issue with running Oracle on AWS involves the listener and the lack of IP/name consistency across AWS instance restarts. The link below contains some good info, along with a script that deals with the IP/name consistency problem.
BINARY_CHECKSUM is a nifty SQL Server function that allows for a hash of the provided columns. This can come in handy when trying to look for differences in table column values. The function can receive a variable number of columns (or * for all columns) of a table. A great use of this would be in detecting change between values over time for the Dimensions in a Data Warehouse data model.
select BINARY_CHECKSUM(*) from table
select BINARY_CHECKSUM(column1, column2) from table
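As an illustrative sketch of the Data Warehouse use case (table and column names here are made up), a dimension load could compare a stored checksum against the incoming rows to find changed records:

```sql
-- Hypothetical example: dim_customer holds a persisted row_checksum column
-- populated at load time with BINARY_CHECKSUM of the tracked attributes.
select s.customer_id
from staging_customer s
join dim_customer d on d.customer_id = s.customer_id
where BINARY_CHECKSUM(s.first_name, s.last_name, s.city)
   <> d.row_checksum  -- mismatch => one of the tracked columns changed
```

Keep in mind that BINARY_CHECKSUM can produce the same value for different inputs (collisions), so it flags candidate changes rather than proving equality.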
This morning I was searching for a tool for basic diagramming to draw up an environment that includes several servers and a replication topology between them. I found a nice little tool at https://www.draw.io. No nonsense, as soon as you hit the link you are in the diagramming editor. Diagrams can be saved locally and exported to several formats. They even have a plug-in for Confluence.
If you are using Availability Groups in SQL Server 2012 there are times that you may want to hinge some logic on the current primary of the Availability Group. For example, you may want to run jobs against the primary of the Availability Group. This primary can vary over time so it would be to your advantage to be able to determine this in a dynamic way. Because of this I created a function to assist.
create function [AG].[is_ag_primary]() returns smallint
as
begin
    declare @l_is_primary smallint
    select @l_is_primary = max(primary_status) from (
        select case when r.replica_server_name = @@servername and rs.role_desc = 'PRIMARY' then 1 else 0 end primary_status
        from sys.availability_replicas r
        left join sys.dm_hadr_availability_replica_states rs
            on rs.replica_id = r.replica_id and rs.role_desc = 'PRIMARY'
    ) t
    return isnull(@l_is_primary, 0)
end
Recently I was trying to move a decent sized mysql database to a new server. The database was about 40GB and it was taking several hours to restore the database using the mysql client. Searching for faster options I came across an option to mysqldump that produces a fileset per table instead of one big file. Having multiple files allows usage of mysqlimport. mysqlimport has a --use-threads option, so if you are working on a host with multiple CPUs you can leverage them for a much faster import. In my case I had 80 cores at my disposal, which reduced a 7 hour import down to 37 minutes.
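A rough sketch of the workflow (paths, thread count, and database name are placeholders; --tab writes files on the server host and requires the FILE privilege):

```shell
# dump each table to its own .sql (schema) and .txt (data) file
mysqldump --tab=/tmp/dumpdir mydb

# create the tables on the new server from the .sql files first,
# then load the data files in parallel
mysqlimport --use-threads=16 --local mydb /tmp/dumpdir/*.txt
```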
The Postgres COPY command can be run within psql to extract csv files for exchange to an external system. You supply
a file location, delimiter, and format.
Here's an example:
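A minimal sketch (table name and path are placeholders; COPY to a server-side file needs superuser rights, otherwise use psql's \copy equivalent):

```sql
COPY my_table TO '/tmp/my_table.csv' WITH (FORMAT csv, DELIMITER ',', HEADER);
```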
In order to take advantage of SQL Server Availability Groups Read Only functionality you need to set up a routing list. The query below displays the current state of Read Only routing in an Availability Group.
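One way to see the routing lists is via the AG catalog views (a sketch; column aliases are my own):

```sql
select ag.name                 as ag_name,
       pr.replica_server_name  as when_primary_is,
       rr.replica_server_name  as route_to,
       l.routing_priority
from sys.availability_read_only_routing_lists l
join sys.availability_replicas pr on l.replica_id = pr.replica_id
join sys.availability_replicas rr on l.read_only_replica_id = rr.replica_id
join sys.availability_groups   ag on pr.group_id = ag.group_id
order by ag.name, pr.replica_server_name, l.routing_priority
```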
Once in a while it may become necessary to rename an instance of SQL Server. Because this isn't something that you do frequently it's easy to forget the commands. It's just a couple of commands (see below) followed by a restart of SQL Server.
exec master.dbo.sp_dropserver ''
exec master.dbo.sp_addserver '', 'local'
Then restart SQL Server.
Below is a great introduction to the Change Data Capture functionality in SQL Server. CDC is a great way to capture data changes to your database without having to code your own triggers and logic. CDC reads the transaction log and so is asynchronous and minimizes contention within SQL Server. CDC can be configured as widespread as the entire database or as specific as one column on one table in your database.
Check out the link below for an understanding on how to implement CDC.
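As a minimal sketch of enabling CDC (schema, table, and column names are placeholders; the capture job requires SQL Server Agent to be running):

```sql
use my_db;

-- enable CDC at the database level
exec sys.sp_cdc_enable_db;

-- enable capture for a single table, limited to one column
exec sys.sp_cdc_enable_table
     @source_schema        = N'dbo',
     @source_name          = N'my_table',
     @role_name            = NULL,
     @captured_column_list = N'my_column';
```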
If you have a reason to change the port that the EM web server is running on, try the command below (in this case changing to port 1158):
emca -reconfig ports -DBCONTROL_HTTP_PORT 1158
Postgres by default does not allow all network clients access to the databases by simply providing a username and password. The clients must be "registered" via the pg_hba.conf file. See http://www.postgresql.org/docs/8.4/static/auth-pg-hba-conf.html for details. After changes are made to pg_hba.conf the server must be signaled to reload the configuration files using pg_ctl reload
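For example, a pg_hba.conf entry granting md5 password access to a subnet might look like this (values are illustrative):

```
# TYPE  DATABASE  USER  CIDR-ADDRESS      METHOD
host    all       all   192.168.1.0/24    md5
```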
If you add new logins to your environment and want the associated database permissions to flow through to replicas you must create the logins with common sid identifiers. To do so query for the sid for the login from the syslogins table. After you have the sid you can create the login on the replicas as follows:
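For example (login name is a placeholder):

```sql
-- on the current primary: capture the login's sid
select name, sid
from sys.syslogins
where name = 'my_login';
```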
create login <name> with password='', sid=<sid>
After you do this, database permissions will flow through from the primaries to the replicas, and in the event of failover users will have their appropriate access.
Snapshot Isolation allows for readers to access data without blocking writers. This is more like how Oracle works and can improve overall throughput of your system. Keep in mind that snapshot isolation will place greater demands on tempdb as before images of data will be stored there in order to support this functionality.
With that said, this is the process to enable snapshot isolation on a database:
ALTER DATABASE <my_db> SET allow_snapshot_isolation ON
ALTER DATABASE <my_db> SET SINGLE_USER WITH ROLLBACK IMMEDIATE
ALTER DATABASE <my_db> SET read_committed_snapshot ON
ALTER DATABASE <my_db> SET MULTI_USER
SELECT is_read_committed_snapshot_on, snapshot_isolation_state_desc,snapshot_isolation_state FROM sys.databases WHERE name='<my_db>'
Here is a query to monitor index rebuild progress:
;WITH cte AS
(
    SELECT object_id, index_id, partition_number, rows,
           ROW_NUMBER() OVER(PARTITION BY object_id, index_id, partition_number ORDER BY partition_id) as rn
    FROM sys.partitions  -- assumed source; it was missing from the original snippet
)
SELECT object_name(cur.object_id) as TableName,
       cur.index_id,
       cur.partition_number,
       CASE WHEN pre.rows = 0 THEN 0
            ELSE ((cur.rows * 100.0) / pre.rows)
       END as PercentComplete,
       pre.rows - cur.rows as MissingRows
FROM cte as cur
INNER JOIN cte as pre on (cur.object_id = pre.object_id) AND (cur.index_id = pre.index_id) AND (cur.partition_number = pre.partition_number) AND (cur.rn = pre.rn + 1)
ORDER BY 4
If you have a need to change the passwords for the sysman or dbsnmp users you will also need to edit configuration files used by Enterprise Manager. These files are $ORACLE_HOME/<hostname>.<instance_name>/sysman/config/emoms.properties and $ORACLE_HOME/<hostname>.<instance_name>/sysman/emd/targets.xml
The passwords may appear encrypted. You can replace them with clear text if you change the encrypted directives to FALSE.
AWR snapshots are a useful way to gain insight into what was going on in the database in recent history. Viewing snapshot reports can supply very detailed metrics for a specific interval (default is hourly). One limitation here is when you would like to analyze a specific metric over multiple snapshots. A way to accomplish this type of analysis is to query the AWR tables directly. In my case I wanted to review session logical reads over all snapshots to get a feel for overall workload of the database over time. One thing to be aware of is that some AWR values are cumulative and so must be compared to the previous snapshot in order to see what accumulated over the interval. Using analytic functions here allows you to do this easily. See the example below, along with a link for more details on the AWR tables.
select s.end_interval_time,
       value - lag(value) over (partition by stat_name order by s.begin_interval_time) session_logical_reads
from dba_hist_snapshot s
join dba_hist_sysstat ss on s.snap_id = ss.snap_id
where ss.stat_name like '%session logical reads%'
order by s.begin_interval_time
On occasion I have had to move RMAN backup pieces around due to disk space shortages, etc. In order to avoid complexities if a restore is required, you should re-"register" these backup pieces with RMAN so that it knows where they are. The CATALOG BACKUPPIECE command serves this purpose.
CATALOG BACKUPPIECE '/path_to_backup_piece/filename1', '/path_to_backup_piece/filename2', ...
After you are done you should follow up with a crosscheck backup command.
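That follow-up, run from the RMAN prompt, is simply:

```
RMAN> CROSSCHECK BACKUP;
```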