Thursday, March 29, 2012
Diskperf on Windows Server 2003 Standard Edition.
Hello,
I'm using Windows Server 2003 Standard Edition, which has multiple instances
of SQL Server 2005 Standard Edition. I'm running a few tests that require
switching the physical & logical disk counters on and off.
Diskperf on Windows Server 2003 Standard Edition does not allow me to do this.
Either both (logical & physical) are enabled or both are disabled.
How can I turn them on/off individually?
Cheers!
Sqlcatz

The implementation of this changed on Windows 2003. Both
logical and physical are automatically enabled "on demand"
under Windows 2003.
-Sue
On Mon, 9 Oct 2006 03:51:02 -0700, SQLCatz
<SQLCatz@.discussions.microsoft.com> wrote:
>Hello,
>I'm using Windows Server 2003 Standard Edition which has multiple instances
>of SQL Server 2005 standard edition. Am running a few tests that require the
>switching on/off of the physical & logical disk counters.
>Diskperf on Windows Server 2003 Standard Edition does not allow me to do this.
>Either both (Logical & Physical) are enabled or both are disabled.
>How can I turn them on/off individually?
>Cheers!
>Sqlcatz

Hello Sue,
Thank you for the reply.
Yes. They are automatically enabled - I came across that from an article.
But I'm not able to find anything that will allow me to disable them
(individually).
I'm executing certain tests for an application one of which requires that
these counters be disabled.
Is there any specific reason for Windows Server 2003 Standard Edition to
prevent the enabling/disabling of these counters? Can you point me to some
literature regarding this? I can accept the fact that this kind of behaviour
is not allowed, but I need to know why, so that I can provide a
technical/logical reason.
Cheers!
sqlcatz

They are enabled when an application calls them - it's
automatic and built into the OS.
Previously, there were problems when you would have a
production performance issue and needed to monitor disks. If
the performance counters weren't enabled, you had to run
diskperf and reboot - not good for a production server.
You can run diskperf /help from the command line and read
the comments. That's one area of "documentation" on it - for
whatever that's worth. But the output is along the lines of:
NOTE: Disk performance counters are permanently enabled on
systems beyond Windows 2000.
-Sue
On Tue, 10 Oct 2006 22:13:01 -0700, SQLCatz
<SQLCatz@.discussions.microsoft.com> wrote:
>Hello Sue,
>Thank you for the reply.
>Yes. They are automatically enabled - I came across that from an article.
>But I'm not able to find anything that will allow me to disable them
>(individually).
>I'm executing certain tests for an application one of which requires that
>these counters be disabled.
>Is there any specific reason for Windows Server 2003 Standard Edition to
>prevent the enabling/disabling of these counters? Can you point me to some
>literature regarding this. I can accept the fact that this kind of behaviour
>is not allowed - but need to know why - so that I provide a technical/logical
>reason.
>Cheers!
>sqlcatz

Thank you Sue!
sqlcatz
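Sue's "on demand" behaviour can be pictured as lazy initialization: collection starts the first time any consumer reads a counter, and there is no public switch to flip one set independently. A toy Python sketch of the idea (an illustration only, not the actual Windows implementation; the class and counter path are invented for the example):

```python
class DiskCounters:
    """Toy model of Windows Server 2003's on-demand disk counters:
    reading a counter activates collection automatically, and there is
    no separate enable/disable switch exposed to the caller."""

    def __init__(self) -> None:
        self._enabled = False  # nothing collected until the first read

    @property
    def enabled(self) -> bool:
        return self._enabled

    def read(self, counter_name: str) -> int:
        # The first read itself turns collection on ("on demand").
        self._enabled = True
        # Placeholder value; a real counter would return measured data.
        return 0


perfmon = DiskCounters()
print(perfmon.enabled)  # False - no application has asked yet
perfmon.read("PhysicalDisk\\Avg. Disk Queue Length")
print(perfmon.enabled)  # True - enabled by the read, not by diskperf
```

This is why diskperf has nothing left to toggle on Windows 2003: the act of monitoring is itself the switch.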
Tuesday, March 27, 2012
Disk Queue Length counters not meaningful in SQL 2000?
Yeah, that's what I said... what are they talking about? I am in the
process of doing some research on multiple files vs. multiple
filegroups. I have seen it discussed a bunch on these groups, but I need
to perform some tests to back it up. During my research I came across
the following in the "SQL Server 2000 Operations Guide":
"Note on Disk Queue Length
This counter no longer yields meaningful results in interpreting disk
I/O problems with SQL Server 2000. This is because the SQL Server
engines dynamically manage disk I/O, which is why the Max Async I/O
setting no longer exists. A simplified explanation is that SQL Server
issues an order for disk I/O, but does not wait for that process to
complete before issuing another request. When the I/O request has
completed its work, the database engine is notified and processes the
results. SQL Server monitors the throughput on these requests, and will
manage the amount of I/O that the disks can effectively handle."
You can see it here
http://www.microsoft.com/technet/pro...n/sqlops6.mspx
Can anyone make any sense of this?
"SQL Server monitors the throughput on these requests, and will manage
the amount of I/O that the disks can effectively handle."
OK, what does this mean? It will manage the amount of IO... OK, how?
Has anyone seen this, or has an explanation?
Thanks
Bert
This means SQL Server 2000 tries not to issue 200 IOs per second if your IO
system can only handle 100 IOs per second. So normally you would not see an
average disk queue length of more than 1. However, sometimes the disk queue
length can be high (more than 1) on some heavily loaded systems. If that
happens, then you know you have an IO bottleneck. The % disk busy counter is
also a good indication, and it should be used in conjunction with the average
disk queue length.
--
Wei Xiao [MSFT]
SQL Server Storage Engine Development
http://weblogs.asp.net/weix
This posting is provided "AS IS" with no warranties, and confers no rights.
<bertcord@.gmail.com> wrote in message
news:1106861683.038716.139100@.c13g2000cwb.googlegroups.com...
> Yeah that's what I said...what are they talking about. I am in the
> process of doing some research on multiple files vs. multiple
> filgroups. I have seen it discussed a bunch on these groups but I need
> to perform some tests to back it up. During my research I came across
> the following from the "SQL Server 2000 Operations Guide"
> "Note on Disk Queue Length
> This counter no longer yields meaningful results in interpreting disk
> I/O problems with SQL Server 2000. This is because the SQL Server
> engines dynamically manage disk I/O, which is why the Max Async I/O
> setting no longer exists. A simplified explanation is that SQL Server
> issues an order for disk I/O, but does not wait for that process to
> complete before issuing another request. When the I/O request has
> completed its work, the database engine is notified and processes the
> results. SQL Server monitors the throughput on these requests, and will
> manage the amount of I/O that the disks can effectively handle."
> You can see it here
> http://www.microsoft.com/technet/pro...n/sqlops6.mspx
> can anyone make any sense of this.
> "SQL Server monitors the throughput on these requests, and will manage
> the amount of I/O that the disks can effectively handle."
> ok what does this mean? it will mange the amount of IO...ok how.
> Has anyone seen this, or has an explanation?
> Thanks
> Bert
>
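Wei's throttling argument is essentially Little's Law: the average queue length equals the IO arrival rate multiplied by the average time a request spends queued and serviced. A small sketch of that arithmetic (the 100 IO/s capacity and the service times are made-up numbers for illustration, not measurements):

```python
# Little's Law: average queue length L = arrival rate * average time a
# request spends in the system (queued + serviced).

def avg_queue_length(io_per_second: float, avg_time_ms: float) -> float:
    """Average number of IO requests in the disk queue (Little's Law)."""
    return io_per_second * avg_time_ms / 1000.0

# A disk that completes a request in 10 ms can absorb 100 IO/s; if SQL
# Server throttles its issue rate to match, the queue stays around 1.
print(avg_queue_length(io_per_second=100, avg_time_ms=10))   # 1.0

# Offer 200 IO/s to the same disk and requests start waiting; if each
# one now spends 40 ms queued + serviced, the queue averages 8 - the
# sustained queue > 1 that Wei describes as an IO bottleneck.
print(avg_queue_length(io_per_second=200, avg_time_ms=40))   # 8.0
```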

Disk queue length isn't particularly interesting because SQL Server throws a
lot of IOs at the disk and then goes to do other work. For instance,
read-ahead is a good example: if the server has a reasonable idea that it
will need several pages from disk in order to complete a scan, it will issue
the IOs for those pages before it actually needs them, with the expectation
that when we actually do need the pages, the IOs will have completed and we
won't have to wait. Here's the executive summary, I guess: by design, SQL
Server loads up the disk subsystem with IO requests so that we don't have to
wait for the pages when we need them. That causes IO queue lengths to
increase since there are typically always outstanding IOs posted by the
server. In some applications this might indicate a bottleneck, but SQL
Server is smart about it and uses asynchronous IO to its benefit.
Perhaps the following article (just recently posted) might illuminate the
subject a bit:
http://www.microsoft.com/technet/pro...IObasics.mspx.
Ryan Stonecipher
Microsoft Sql Server Storage Engine, DBCC
This posting is provided "AS IS" with no warranties, and confers no rights.
<bertcord@.gmail.com> wrote in message
news:1106861683.038716.139100@.c13g2000cwb.googlegroups.com...
> Yeah that's what I said...what are they talking about. I am in the
> process of doing some research on multiple files vs. multiple
> filgroups. I have seen it discussed a bunch on these groups but I need
> to perform some tests to back it up. During my research I came across
> the following from the "SQL Server 2000 Operations Guide"
> "Note on Disk Queue Length
> This counter no longer yields meaningful results in interpreting disk
> I/O problems with SQL Server 2000. This is because the SQL Server
> engines dynamically manage disk I/O, which is why the Max Async I/O
> setting no longer exists. A simplified explanation is that SQL Server
> issues an order for disk I/O, but does not wait for that process to
> complete before issuing another request. When the I/O request has
> completed its work, the database engine is notified and processes the
> results. SQL Server monitors the throughput on these requests, and will
> manage the amount of I/O that the disks can effectively handle."
> You can see it here
> http://www.microsoft.com/technet/pro...n/sqlops6.mspx
> can anyone make any sense of this.
> "SQL Server monitors the throughput on these requests, and will manage
> the amount of I/O that the disks can effectively handle."
> ok what does this mean? it will mange the amount of IO...ok how.
> Has anyone seen this, or has an explanation?
> Thanks
> Bert
>
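Ryan's read-ahead description - post the IOs before you need the pages, so many are outstanding at once - is the standard latency-overlap trick of asynchronous IO. A toy Python model (asyncio.sleep stands in for disk latency; the 8 pages and 50 ms figure are invented for the sketch, not SQL Server's actual scheduler):

```python
import asyncio
import time

PAGES = 8        # pages the scan will need
LATENCY = 0.05   # pretend each page read takes 50 ms of disk latency

async def read_page(n: int) -> None:
    await asyncio.sleep(LATENCY)  # stand-in for one disk IO

async def serial_scan() -> float:
    # Wait for each IO before issuing the next: queue depth stays at 1,
    # but the scan pays the full latency for every page.
    start = time.perf_counter()
    for n in range(PAGES):
        await read_page(n)
    return time.perf_counter() - start

async def read_ahead_scan() -> float:
    # Post every IO up front: a queue-length counter would sample a
    # deep queue (8 outstanding requests), yet the total wait is only
    # about one latency because the IOs overlap.
    start = time.perf_counter()
    await asyncio.gather(*(read_page(n) for n in range(PAGES)))
    return time.perf_counter() - start

serial = asyncio.run(serial_scan())
ahead = asyncio.run(read_ahead_scan())
print(f"serial: {serial:.2f}s, read-ahead: {ahead:.2f}s")
```

The deep queue in the second case is a symptom of the design, not of requests stalling - which is why the counter alone no longer distinguishes healthy read-ahead from a genuinely saturated disk.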
process of doing some research on multiple files vs. multiple
filgroups. I have seen it discussed a bunch on these groups but I need
to perform some tests to back it up. During my research I came across
the following from the "SQL Server 2000 Operations Guide"
"Note on Disk Queue Length
This counter no longer yields meaningful results in interpreting disk
I/O problems with SQL Server 2000. This is because the SQL Server
engines dynamically manage disk I/O, which is why the Max Async I/O
setting no longer exists. A simplified explanation is that SQL Server
issues an order for disk I/O, but does not wait for that process to
complete before issuing another request. When the I/O request has
completed its work, the database engine is notified and processes the
results. SQL Server monitors the throughput on these requests, and will
manage the amount of I/O that the disks can effectively handle."
You can see it here
http://www.microsoft.com/technet/pro...n/sqlops6.mspx
can anyone make any sense of this.
"SQL Server monitors the throughput on these requests, and will manage
the amount of I/O that the disks can effectively handle."
ok what does this mean? it will mange the amount of IO...ok how.
Has anyone seen this, or has an explanation?
Thanks
Bert
This means SQL Server 2000 tries not to issue 200 IO per second if your IO
system can only handle 100 IO per second. So normally you would not see
average disk queue length to be more than 1. However sometimes disk queue
length can be high (more than 1) in some heavily loaded systems. If that
happens then you know you have an IO bottleneck. The % disk busy is also a
good indication and it should be used in conjunction with average disk queue
length.
--
Wei Xiao [MSFT]
SQL Server Storage Engine Development
http://weblogs.asp.net/weix
This posting is provided "AS IS" with no warranties, and confers no rights.
<bertcord@.gmail.com> wrote in message
news:1106861683.038716.139100@.c13g2000cwb.googlegr oups.com...
> Yeah that's what I said...what are they talking about. I am in the
> process of doing some research on multiple files vs. multiple
> filgroups. I have seen it discussed a bunch on these groups but I need
> to perform some tests to back it up. During my research I came across
> the following from the "SQL Server 2000 Operations Guide"
> "Note on Disk Queue Length
> This counter no longer yields meaningful results in interpreting disk
> I/O problems with SQL Server 2000. This is because the SQL Server
> engines dynamically manage disk I/O, which is why the Max Async I/O
> setting no longer exists. A simplified explanation is that SQL Server
> issues an order for disk I/O, but does not wait for that process to
> complete before issuing another request. When the I/O request has
> completed its work, the database engine is notified and processes the
> results. SQL Server monitors the throughput on these requests, and will
> manage the amount of I/O that the disks can effectively handle."
> You can see it here
> http://www.microsoft.com/technet/pro...n/sqlops6.mspx
> can anyone make any sense of this.
> "SQL Server monitors the throughput on these requests, and will manage
> the amount of I/O that the disks can effectively handle."
> ok what does this mean? it will mange the amount of IO...ok how.
> Has anyone seen this, or has an explanation?
> Thanks
> Bert
>
|||Disk queue length isn't particularly interesting because SQL Server throws a
lot of IOs at the disk and then goes to do other work. For instance,
read-ahead is a good example: if the server has a reasonable idea that it
will need several pages from disk in order to complete a scan, it will issue
the IOs for those pages before it actually needs them, with the expectation
that when we actually do need the pages, the IOs will have completed and we
won't have to wait. Here's the executive summary, I guess: by design, SQL
Server loads up the disk subsystem with IO requests so that we don't have to
wait for the pages when we need them. That causes IO queue lengths to
increase since there are typically always outstanding IOs posted by the
server. In some applications this might indicate a bottleneck, but SQL
Server is smart about it and uses asynchronous IO to its benefit.
Perhaps the following article (just recently posted) might illuminate the
subject a bit:
http://www.microsoft.com/technet/pro...IObasics.mspx.
Ryan Stonecipher
Microsoft Sql Server Storage Engine, DBCC
This posting is provided "AS IS" with no warranties, and confers no rights.
<bertcord@.gmail.com> wrote in message
news:1106861683.038716.139100@.c13g2000cwb.googlegr oups.com...
> Yeah that's what I said...what are they talking about. I am in the
> process of doing some research on multiple files vs. multiple
> filgroups. I have seen it discussed a bunch on these groups but I need
> to perform some tests to back it up. During my research I came across
> the following from the "SQL Server 2000 Operations Guide"
> "Note on Disk Queue Length
> This counter no longer yields meaningful results in interpreting disk
> I/O problems with SQL Server 2000. This is because the SQL Server
> engines dynamically manage disk I/O, which is why the Max Async I/O
> setting no longer exists. A simplified explanation is that SQL Server
> issues an order for disk I/O, but does not wait for that process to
> complete before issuing another request. When the I/O request has
> completed its work, the database engine is notified and processes the
> results. SQL Server monitors the throughput on these requests, and will
> manage the amount of I/O that the disks can effectively handle."
> You can see it here
> http://www.microsoft.com/technet/pro...n/sqlops6.mspx
> can anyone make any sense of this.
> "SQL Server monitors the throughput on these requests, and will manage
> the amount of I/O that the disks can effectively handle."
> ok what does this mean? it will mange the amount of IO...ok how.
> Has anyone seen this, or has an explanation?
> Thanks
> Bert
>
Disk Queue Length counters not meaningful in SQL 2000?
Yeah that's what I said...what are they talking about. I am in the
process of doing some research on multiple files vs. multiple
filgroups. I have seen it discussed a bunch on these groups but I need
to perform some tests to back it up. During my research I came across
the following from the "SQL Server 2000 Operations Guide"
"Note on Disk Queue Length
This counter no longer yields meaningful results in interpreting disk
I/O problems with SQL Server 2000. This is because the SQL Server
engines dynamically manage disk I/O, which is why the Max Async I/O
setting no longer exists. A simplified explanation is that SQL Server
issues an order for disk I/O, but does not wait for that process to
complete before issuing another request. When the I/O request has
completed its work, the database engine is notified and processes the
results. SQL Server monitors the throughput on these requests, and will
manage the amount of I/O that the disks can effectively handle."
You can see it here
http://www.microsoft.com/technet/pr...in/sqlops6.mspx
can anyone make any sense of this.
"SQL Server monitors the throughput on these requests, and will manage
the amount of I/O that the disks can effectively handle."
ok what does this mean? it will mange the amount of IO...ok how.
Has anyone seen this, or has an explanation?
Thanks
BertThis means SQL Server 2000 tries not to issue 200 IO per second if your IO
system can only handle 100 IO per second. So normally you would not see
average disk queue length to be more than 1. However sometimes disk queue
length can be high (more than 1) in some heavily loaded systems. If that
happens then you know you have an IO bottleneck. The % disk busy is also a
good indication and it should be used in conjunction with average disk queue
length.
--
Wei Xiao [MSFT]
SQL Server Storage Engine Development
http://weblogs.asp.net/weix
This posting is provided "AS IS" with no warranties, and confers no rights.
<bertcord@.gmail.com> wrote in message
news:1106861683.038716.139100@.c13g2000cwb.googlegroups.com...
> Yeah that's what I said...what are they talking about. I am in the
> process of doing some research on multiple files vs. multiple
> filgroups. I have seen it discussed a bunch on these groups but I need
> to perform some tests to back it up. During my research I came across
> the following from the "SQL Server 2000 Operations Guide"
> "Note on Disk Queue Length
> This counter no longer yields meaningful results in interpreting disk
> I/O problems with SQL Server 2000. This is because the SQL Server
> engines dynamically manage disk I/O, which is why the Max Async I/O
> setting no longer exists. A simplified explanation is that SQL Server
> issues an order for disk I/O, but does not wait for that process to
> complete before issuing another request. When the I/O request has
> completed its work, the database engine is notified and processes the
> results. SQL Server monitors the throughput on these requests, and will
> manage the amount of I/O that the disks can effectively handle."
> You can see it here
> [url]http://www.microsoft.com/technet/prodtechnol/sql/2000/maintain/sqlops6.mspx[/url
]
> can anyone make any sense of this.
> "SQL Server monitors the throughput on these requests, and will manage
> the amount of I/O that the disks can effectively handle."
> ok what does this mean? it will mange the amount of IO...ok how.
> Has anyone seen this, or has an explanation?
> Thanks
> Bert
>|||Disk queue length isn't particularly interesting because SQL Server throws a
lot of IOs at the disk and then goes to do other work. For instance,
read-ahead is a good example: if the server has a reasonable idea that it
will need several pages from disk in order to complete a scan, it will issue
the IOs for those pages before it actually needs them, with the expectation
that when we actually do need the pages, the IOs will have completed and we
won't have to wait. Here's the executive summary, I guess: by design, SQL
Server loads up the disk subsystem with IO requests so that we don't have to
wait for the pages when we need them. That causes IO queue lengths to
increase since there are typically always outstanding IOs posted by the
server. In some applications this might indicate a bottleneck, but SQL
Server is smart about it and uses asynchronous IO to its benefit.
Perhaps the following article (just recently posted) might illuminate the
subject a bit:
http://www.microsoft.com/technet/prodtechnol/sql/2000/maintain/sqlIObasics.mspx
Ryan Stonecipher
Microsoft Sql Server Storage Engine, DBCC
This posting is provided "AS IS" with no warranties, and confers no rights.
<bertcord@.gmail.com> wrote in message
news:1106861683.038716.139100@.c13g2000cwb.googlegroups.com...
> Yeah that's what I said...what are they talking about. I am in the
> process of doing some research on multiple files vs. multiple
> filegroups. I have seen it discussed a bunch on these groups but I need
> to perform some tests to back it up. During my research I came across
> the following from the "SQL Server 2000 Operations Guide"
> "Note on Disk Queue Length
> This counter no longer yields meaningful results in interpreting disk
> I/O problems with SQL Server 2000. This is because the SQL Server
> engines dynamically manage disk I/O, which is why the Max Async I/O
> setting no longer exists. A simplified explanation is that SQL Server
> issues an order for disk I/O, but does not wait for that process to
> complete before issuing another request. When the I/O request has
> completed its work, the database engine is notified and processes the
> results. SQL Server monitors the throughput on these requests, and will
> manage the amount of I/O that the disks can effectively handle."
> You can see it here
> http://www.microsoft.com/technet/prodtechnol/sql/2000/maintain/sqlops6.mspx
> can anyone make any sense of this.
> "SQL Server monitors the throughput on these requests, and will manage
> the amount of I/O that the disks can effectively handle."
> ok what does this mean? it will manage the amount of IO...ok how.
> Has anyone seen this, or has an explanation?
> Thanks
> Bert
>|||Hello,
I am also interested in the disk queue lengths reported by System Monitor
when running SQL Server 2000.
I have tempdb on a dedicated (mirrored) disk. I am seeing average disk queue
lengths of up to 8000 for a 5-minute sampling period on this disk. This seems
to contradict Wei's point that SQL Server 2000 throttles disk accesses, while
corroborating Ryan's point that it loads the system up with requests.
However, as I am seeing response times of over ten seconds on this disk,
I still think I have a disk bottleneck. As I already have a fast disk system,
I believe the application needs rewriting to reduce its use of tempdb.
I would be grateful for any further comment or insight on my particular problem.
Thanks,
Paul
|||This means SQL Server 2000 tries not to issue 200 IO per second if your IO
system can only handle 100 IO per second. So normally you would not see
average disk queue length to be more than 1. However sometimes disk queue
length can be high (more than 1) in some heavily loaded systems. If that
happens then you know you have an IO bottleneck. The % disk busy is also a
good indication and it should be used in conjunction with average disk queue
length.
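Wei's figures can be tied together with Little's law (average queue length equals arrival rate times average response time). This is a back-of-the-envelope model for interpreting the counter, not how System Monitor computes it:

```python
def avg_queue_length(io_per_sec: float, avg_response_sec: float) -> float:
    # Little's law: average number in the system = arrival rate * latency.
    return io_per_sec * avg_response_sec

# A disk absorbing 100 IO/s at 5 ms each sits near Wei's "not more than 1".
healthy = avg_queue_length(100, 0.005)

# Push 200 IO/s at a disk whose latency balloons to 50 ms under load and
# the average queue grows well past 1: an IO bottleneck.
overloaded = avg_queue_length(200, 0.050)
```

The hypothetical latencies are my own illustrative numbers; only the 100 vs. 200 IO/s figures come from the post.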
--
Wei Xiao [MSFT]
SQL Server Storage Engine Development
http://weblogs.asp.net/weix
This posting is provided "AS IS" with no warranties, and confers no rights.
<bertcord@.gmail.com> wrote in message
news:1106861683.038716.139100@.c13g2000cwb.googlegroups.com...
> Yeah that's what I said...what are they talking about. I am in the
> process of doing some research on multiple files vs. multiple
> filegroups. I have seen it discussed a bunch on these groups but I need
> to perform some tests to back it up. During my research I came across
> the following from the "SQL Server 2000 Operations Guide"
> "Note on Disk Queue Length
> This counter no longer yields meaningful results in interpreting disk
> I/O problems with SQL Server 2000. This is because the SQL Server
> engines dynamically manage disk I/O, which is why the Max Async I/O
> setting no longer exists. A simplified explanation is that SQL Server
> issues an order for disk I/O, but does not wait for that process to
> complete before issuing another request. When the I/O request has
> completed its work, the database engine is notified and processes the
> results. SQL Server monitors the throughput on these requests, and will
> manage the amount of I/O that the disks can effectively handle."
> You can see it here
> http://www.microsoft.com/technet/prodtechnol/sql/2000/maintain/sqlops6.mspx
> can anyone make any sense of this.
> "SQL Server monitors the throughput on these requests, and will manage
> the amount of I/O that the disks can effectively handle."
> ok what does this mean? it will manage the amount of IO...ok how.
> Has anyone seen this, or has an explanation?
> Thanks
> Bert
Disk Queue Length counters not meaningful in SQL 2000?
>|||Thanks for responding. I read the article today and it was very good.
I am however interested in how SQL manages IO when working with
multiple files in a single filegroup.
For example, how flushing pages to disk is affected when a database has
multiple files. My assumption is that it is the same except that the
process performing the write (lazy write, Checkpoint or Eager write)
would pass off the write requests to the thread that is working on the
specific file. Based on my test results writing to multiple files was
always a tiny bit slower if the disk subsystem was not under heavy
load.
I am aware that there is some confusion about how SQL Server issues
threads for reads. Some people feel that having more files is the only
way SQL will spawn more threads. Andrew Kelly pointed out in the
following Google threads this is not correct for SQL2000.
http://linktrim.com/x2
http://linktrim.com/x3
For writes I confirmed that SQL does write to all files in parallel. I
used fn_virtualfilestats to test: I had a Query Analyzer window open
and held down the F5 key so it was constantly refreshing, and I could see
that the NumberReads column for all the files was incrementing on
each refresh. I admit my method was not 100% scientific, but it
was the only way I could think of to confirm this.
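The F5-refresh check above can be made slightly more systematic by diffing two snapshots of the per-file counters. The helper and the sample values here are hypothetical stand-ins for fn_virtualfilestats output:

```python
def io_deltas(before: dict, after: dict) -> dict:
    # Difference of two per-file counter snapshots (e.g. NumberReads or
    # NumberWrites sampled from fn_virtualfilestats at two points in time).
    return {name: after[name] - before[name] for name in before}

# Made-up snapshots for a filegroup with three data files.
snap1 = {"data1.mdf": 1000, "data2.ndf": 980, "data3.ndf": 990}
snap2 = {"data1.mdf": 1450, "data2.ndf": 1395, "data3.ndf": 1410}

deltas = io_deltas(snap1, snap2)
# If every file's counter advanced between snapshots, IO hit all the
# files in the filegroup during the interval.
all_files_active = all(delta > 0 for delta in deltas.values())
```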
The specific things I want to test are.
1. Do multiple files on the same drive perform better than a single
file
2. Do multiple files spread on separate drives perform better than a
single file.
My test server configuration is as follows
Dell 6650, 4 Xeon 1595, 5.2 GB of Ram
Disk configuration as follows
ArrayGroup0 - Channel1 E: RAID5 - 3 Drives
ArrayGroup1 - Channel0 F: RAID5 - 7 Drives
ArrayGroup1 - Channel1 J: RAID5 - 7 Drives
ArrayGroup2 - Channel0 G: RAID10 - 12 Drives
Drives E,F, and J only have data files for the test databases. Drive G
contains the tempDB log files for the test databases and all BULK
INSERT files.
I created 3 separate databases all having a single fileGroup, the file
configuration is as follows.
1. BulkInOut1 - One data File on the E drive.
2. BulkInOut2 - Three datafiles, one each on E, F and J
3. BulkInOut3 - Three datafiles all on the E drive.
Test 1 - Bulk inserting 11,000,000 rows into an empty table with no indexes.
I ran each test 5 times and here are the results averaged in seconds.
BulkInOut1 - 165 (One File)
BulkInOut2 - 173 (3 files sep drives)
BulkInOut3 - 172 (3 files same drive)
Raw Results
BulkInOut1 - 164,162,169,163,165
BulkInOut2 - 169,175,184,166,169
BulkInOut3 - 172,170,168,180,171
Test 2 - Create indexes on the table from Test 1: one clustered and two
non-clustered.
BulkInOut1 - 127 (One File)
BulkInOut2 - 98 (3 files sep drives)
BulkInOut3 - 132 (3 files same drive)
Raw Results
BulkInOut1 - 126,129,126
BulkInOut2 - 101,98,96
BulkInOut3 - 135,132,130
Based on these two tests I make the following conclusions:
1. Multiple files on the same drive are always slower; do not do it.
2. Having multiple files spread across multiple drives can perform
better. What happened in my test cases is that Test 1 hardly
stressed the disk subsystem, so the multiple disks performed
slower; I assume this is due to the overhead of context switching. In
Test 2 the IO demands increased and disk queuing occurred on BulkInOut3
(3 files, same drive), which caused it to be the slowest.
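As a cross-check, the averages reported above for Test 1 follow directly from the raw runs:

```python
from statistics import mean

# Raw Test 1 timings (seconds) copied from the results above.
raw_seconds = {
    "BulkInOut1 (one file)":            [164, 162, 169, 163, 165],
    "BulkInOut2 (3 files, sep drives)": [169, 175, 184, 166, 169],
    "BulkInOut3 (3 files, same drive)": [172, 170, 168, 180, 171],
}

# Round each mean to the nearest whole second, as the post does.
averages = {db: round(mean(runs)) for db, runs in raw_seconds.items()}
# Matches the reported 165 / 173 / 172 seconds.
```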
Next week I am going to perform some more tests with a better simulated
load. I am going to write an app that opens multiple connections and
inserts and selects data.
Any comments on my results or methodology would be appreciated. If
anyone else has performance data from file IO tests, I would love to see it.
Thanks for your time
Bert Corderman|||I have completed a few more tests and here are the results. As a recap
my drive config is as follows
ArrayGroup0 - Channel1 E: RAID5 - 3 Drives
ArrayGroup1 - Channel0 F: RAID5 - 7 Drives
ArrayGroup1 - Channel1 J: RAID5 - 7 Drives
ArrayGroup2 - Channel0 G: RAID10 - 12 Drives
Test DB's
1. BulkInOut1 - One data File on the E drive.
2. BulkInOut2 - Three data files, one each on E, F and J
3. BulkInOut3 - Three data files all on the E drive.
Creation of Clustered Indexes only
BulkInOut1 - 73 (One File)
BulkInOut2 - 48 (3 files sep drives)
BulkInOut3 - 78 (3 files same drive)
As another test I wanted to see the performance of creating a separate
Filegroup for the non-clustered indexes. I tested a few configurations.
For these tests I used the database with one data file on E
(BulkInOut1)
The first test was a baseline
1. Non-Clustered Indexes Created on Primary filegroup - 66
The following tests added the Index filegroup with different file
placement.
2. 1 File on J = 62
3. 2 Files on J = 62
4. 1 File on J, one on F = 60
5. 5 Files on J = 65
Again my tests show that having multiple files in the same FileGroup
does not perform any better than having only one file, and in some
cases will even perform slower.
Any suggestions or input would be appreciated
Thanks
Bert|||I don't know, Bert. You seem to have done the most methodical and
exhaustive testing that I've seen reported in this forum.
If you've seen better, or at least equal performance, using single file
filegroups, Occam's Razor would say to go that route... KISS principle,
right?
Also, did you try reformatting the drives using 64 KB clusters? We have
seen quite a bit of throughput increase using this cluster size.
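One plausible reason the 64 KB allocation unit pays off (my inference; the post above does not give the mechanism): SQL Server allocates space in 64 KB extents of eight 8 KB pages, so an extent-sized read or write lines up with exactly one NTFS cluster. A quick arithmetic check:

```python
PAGE_BYTES = 8 * 1024        # SQL Server page size
PAGES_PER_EXTENT = 8         # SQL Server allocates space extent by extent
EXTENT_BYTES = PAGE_BYTES * PAGES_PER_EXTENT

CLUSTER_BYTES = 64 * 1024    # the proposed NTFS allocation unit size

# One extent maps onto exactly one cluster, so an extent IO never
# straddles a cluster boundary.
extent_aligned = EXTENT_BYTES == CLUSTER_BYTES
```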
If you can afford it, use RAID 10 for everything. However, you should be
able to get by with RAID 5 for data files, RAID 10 for log files, and RAID
1 for the tempdb, which should be on a dedicated set of disks.
Also, having more active spindles per database file is more important than
the number of files themselves. If you can keep each processor running I/O
against each spindle, then whenever I/O requests come in, they will be
handled more efficiently. Second, having as many channels as possible per
parallel request will also increase your throughput.
Check out this link for some case studies on this topic. Granted, they cover
a broad range of installation issues, but all of them have disk and file
layout strategies. I found the EMC CLARiiON whitepaper extremely
interesting. They introduce a term called "Plaiding," which is a
combination of RAID striping at the hardware level and multiple SQL Server
file placements at the software level, similar to a plaid pattern. This way
you maximize spindle usage per request.
There is A LOT of information on this site. Take your time to check out
each and every one. I have and it has provided quite a bit of information
to consider.
http://www.microsoft.com/sql/techinfo/administration/2000/scalability.asp
See what you think.
Sincerely,
Anthony Thomas
"bert" <bertcord@.gmail.com> wrote in message
news:1107304742.349333.272980@.z14g2000cwz.googlegroups.com...
|||Thanks for the input Anthony...
Yes KISS is what I want to do...but again if the difference is that
much greater sometimes it just isn't simple any more.
>>Also, did you try reformatting the drives using 64 KB clusters?
No I have not tried this yet...this is actually the first I have heard
of this but will definitely look into it. They actually talk of using
64kb clusters in the Rosetta Genomics 10-Terabyte Human Genome Database
white paper. (From the other link you mentioned) I had actually
thought of this but I did not know it was possible to format a drive
like this. For example sitting on my XP machine at home when I go to
format a drive the only options I have are 512, 1024, 2048 and 4096
bytes... I will have to look at this some more.
>>Also, having more active spindles per database file is more important
than files themselves.
I agree 110%. I actually performed a bunch more tests than what I
posted here. Having multiple files spread to different arrays only
increased performance if there was enough IO to saturate the IO of one
drive\controller. If the IO wasn't enough, the performance was
actually slower. The KISS philosophy leads me to recommend only
creating multiple files per file group if the IO generated can saturate
a single drive. Before adding more files I would recommend first
placing heavily accessed tables onto a new file group onto a separate
drive. But again I would like to do more research in this area. For
example let's say you have a big database with only 2 tables (simple
example) and you have 2 drives. Would you a) create one file group
with a file on each drive or b) two file groups with a file on each
drive?
>>They introduce a term called "Plaiding," which is a combination of
RAID striping at the hardware level and multiple SQL Server file
placements at the >>software level, similar to a plaid pattern.
Hmm, this sounds very interesting. Unfortunately the link from the
Microsoft website no longer works. I tried searching the EMC site for
the article by searching for the article title and for the term plaid
and Plaiding... no dice... you don't happen to have the article
saved anywhere, do you? In most cases I would prefer to let the hardware
perform the striping...but I would love to read the article
Thanks
Bert|||For the 64 KB allocation units (disk clusters), as well as many other useful
operational tips, check out the Microsoft SQL Server 2000 Operations Guide.
Be aware, however, that once you go above 4 KB clusters, disk compression
and defragmentation are no longer available through the native Windows
tools.
As far as your question concerning the two big tables, the answer depends on
the usage, as always. If you were to join the two tables frequently, I
would place each table within a separate, single-file filegroup, one placed
on each drive in order to get simultaneous read-ahead reads for the JOIN
operation. However, an option not provided, with only two drives, I'd want
to maximize spindle usage; so, I'd stripe the two drives together, at the
hardware level, of course, and then one file, one filegroup against the
stripe set.
As far as the "plaiding" article is concerned, yes, I noticed EMC moved the
site. I have the .pdf file and will forward it to you when I get a chance.
When I "Googled" the net, I did come across an article between Dell and
Oracle that explained similar features.
Also, know that if you have DAS, it would make no sense to "plaid" because
you would have total control over the layout of the disks and could
manipulate partitions for logs and data files and control the striping;
however, in a SAN situation, you oftentimes do not have this luxury. In
this scenario, "plaiding" can not only potentially increase the spindle
usage, but also give back to the DBA some measure of I/O performance
control, especially given the DBA is in the prime position to determine the
"changing" environmental characteristics of the DBMS that no Storage
Administrator could possibly envision.
Sincerely,
Anthony Thomas
"bert" <bertcord@.gmail.com> wrote in message
news:1109015102.338558.43420@.l41g2000cwc.googlegroups.com...
|||I am still digging over all the white papers on the MS site...that is a
great link thanks
>>However, an option not provided, with only two drives, I'd want to
maximize spindle usage; so, I'd stripe the two >>drives together, at
the hardware level, of course, and then one file, one filegroup against
the stripe set.
I guess my question about the two tables and two drives was a bit too
simplistic. Maybe I can ask the question in a different way, as I am
curious as to what your conclusions would be.
How do you maximize spindle usage when you have created arrays as large
as the controller can support? In my simple example above lets say the
two drives are actually two arrays configured as large as possible (15
drives in RAID5 or RAID10), would you still want one filegroup with the
2 tables on one array and a second filegroup with the non-clustered
indexes on the second array? At first glance this is what I would
think also, but...let's say the non-clustered indexes are very large: it
is possible that when inserting data, or when joining, more IO is
generated on the non-clustered index array. You could theoretically end up
in a situation where the non-clustered index array becomes a bottleneck
but the table array is fine.
Wouldn't it make more sense to just create one filegroup with a single
file on each array and place all tables and indexes on this single
filegroup, and let the hardware handle the striping?
It becomes even more complicated when you add multiple tables to
the file groups.
The best reason I can think of for using multiple user defined
filegroups, is if you are creating the FileGroup for a single object,
and you know that the data is going to be accessed sequentially.
>>I have the .pdf file and will forward it to you when I get a chance.
Thanks that would be great
>>Also, know that if you have DAS,...
I am not sure what you are referring to when you say DAS, what does it
stand for?
Bert|||I would keep it simple. Put your system and user databases' data files on
one RAID 5 or RAID 10 volume. Put your system and user databases' log files
on one RAID 10 volume. Put your TempDB data file on one RAID 1 volume and
the TempDB log file on another RAID 1 volume.
You could then go with, for all of user databases only, for the data file(s)
only, leave the system tables and objects in the PRIMARY FILEGROUP with one
file on one disk. Put the clustered indexes in one file in a SECONDARY or
TABLE FILEGROUP on one disk. Then, finally, put all the non-clustered
indexes in one file in a TIRTIARY or INDEX FILEGROUP.
Whenever you perform CRUD operations, ALL INDEXES WILL BE UPDATED, both
clustered and non-clustered indexes. Only the UPDATE operations could
possibly be directed at non-clustered index(es) only. If the clustered
index is augmented or modified, so will the non-clustered indexes. Also, if
you have a mix of large and small Read operations, large queries would force
scans, which would use the clustered index, small queries would be directed
at the non-clustered indexes or an index seek of the clustered index.
It only makes sense to use "plaiding," multiple files per filegroup, when
you have limited control over the SAN. Another potential use would be in a
consolidated environment where a single SS instance could be hosting
numerous application databases. We have one where we host nearly 100 vended
applicaiton databases. "Plaiding" helps us further reduce potential SAN
hotspots from the pseudo-random drive usage.
Also keep in mind, that for vended databases, you will have little control
over the use of FILEGROUPs and object placement on those file groups. The
vendors tend to through everything into a single file PRIMARY FILEGROUP and
you would have to modify their script to get the objects created elsewhere.
"Plaiding" isolates the file layouts from the database object definitions.
I also sent you that .pdf whitepaper by private communication.
Sincerely,
Anthony Thomas
"bert" <bertcord@.gmail.com> wrote in message
news:1109115612.962063.112300@.c13g2000cwb.googlegroups.com...
I am still digging over all the white papers on the MS site...that is a
great link thanks
>>However, an option not provided, with only two drives, I'd want to
maximize spindle usage; so, I'd stripe the two >>drives together, at
the hardware level, of course, and then one file, one filegroup against
the stripe set.
I guess my question about the 1 tables and two drives was a bit too
simplistic. Maybe I can ask the question in a different way as I am
curios as to what your conclusions would be.
How do you maximize spindle usage when you have created arrays as large
as the controller can support? In my simple example above lets say the
two drives are actually two arrays configured as large as possible (15
drives in RAID5 or RAID10), would you still want one filegroup with the
2 tables on one array and a second filegroup with the NonClustered
Indexes on the second filegroup? At first glance this is what I would
think also but...lets say the NonClustered indexes are very large it
is possible that when inserting data, or when joining more IO is
generated on The NonClustedIndex array. You could theoretically end up
in a situation where the NONClustered array is becoming a bottleneck
but the TableArray is fine
Wouldn't it make more sense to just create one filegroup with a single
file on each array and place all tables and indexes on this single
filegroup. Let that hardware handle the striping?
It becomes even much more complicated when you add multiple tables to
the file groups.
The best reason I can think of for using multiple user defined
filegroups, is if you are creating the FileGroup for a single object,
and you know that the data is going to accessed sequentially.
>>I have the .pdf file and will forward it to you when I get a chance.
Thanks that would be great
>>Also, know that if you have DAS,...
I am not sure what you are referring to when you say DAS, what does it
stand for?
Bertsql
Yeah that's what I said...what are they talking about? I am in the
process of doing some research on multiple files vs. multiple
filegroups. I have seen it discussed a bunch on these groups but I need
to perform some tests to back it up. During my research I came across
the following from the "SQL Server 2000 Operations Guide"
"Note on Disk Queue Length
This counter no longer yields meaningful results in interpreting disk
I/O problems with SQL Server 2000. This is because the SQL Server
engines dynamically manage disk I/O, which is why the Max Async I/O
setting no longer exists. A simplified explanation is that SQL Server
issues an order for disk I/O, but does not wait for that process to
complete before issuing another request. When the I/O request has
completed its work, the database engine is notified and processes the
results. SQL Server monitors the throughput on these requests, and will
manage the amount of I/O that the disks can effectively handle."
You can see it here
http://www.microsoft.com/technet/prodtechnol/sql/2000/maintain/sqlops6.mspx
Can anyone make any sense of this?
"SQL Server monitors the throughput on these requests, and will manage
the amount of I/O that the disks can effectively handle."
OK, what does this mean? It will manage the amount of IO... OK, how?
Has anyone seen this, or has an explanation?
Thanks
Bert|||Disk queue length isn't particularly interesting because SQL Server throws a
lot of IOs at the disk and then goes to do other work. For instance,
read-ahead is a good example: if the server has a reasonable idea that it
will need several pages from disk in order to complete a scan, it will issue
the IOs for those pages before it actually needs them, with the expectation
that when we actually do need the pages, the IOs will have completed and we
won't have to wait. Here's the executive summary, I guess: by design, SQL
Server loads up the disk subsystem with IO requests so that we don't have to
wait for the pages when we need them. That causes IO queue lengths to
increase since there are typically always outstanding IOs posted by the
server. In some applications this might indicate a bottleneck, but SQL
Server is smart about it and uses asynchronous IO to its benefit.
Perhaps the following article (just recently posted) might illuminate the
subject a bit:
http://www.microsoft.com/technet/prodtechnol/sql/2000/maintain/sqlIObasics.mspx.
--
Ryan Stonecipher
Microsoft Sql Server Storage Engine, DBCC
This posting is provided "AS IS" with no warranties, and confers no rights.
<bertcord@.gmail.com> wrote in message
news:1106861683.038716.139100@.c13g2000cwb.googlegroups.com...
> Yeah that's what I said...what are they talking about. I am in the
> process of doing some research on multiple files vs. multiple
> filgroups. I have seen it discussed a bunch on these groups but I need
> to perform some tests to back it up. During my research I came across
> the following from the "SQL Server 2000 Operations Guide"
> "Note on Disk Queue Length
> This counter no longer yields meaningful results in interpreting disk
> I/O problems with SQL Server 2000. This is because the SQL Server
> engines dynamically manage disk I/O, which is why the Max Async I/O
> setting no longer exists. A simplified explanation is that SQL Server
> issues an order for disk I/O, but does not wait for that process to
> complete before issuing another request. When the I/O request has
> completed its work, the database engine is notified and processes the
> results. SQL Server monitors the throughput on these requests, and will
> manage the amount of I/O that the disks can effectively handle."
> You can see it here
> http://www.microsoft.com/technet/prodtechnol/sql/2000/maintain/sqlops6.mspx
> can anyone make any sense of this.
> "SQL Server monitors the throughput on these requests, and will manage
> the amount of I/O that the disks can effectively handle."
> ok what does this mean? it will mange the amount of IO...ok how.
> Has anyone seen this, or has an explanation?
> Thanks
> Bert
>|||This means SQL Server 2000 tries not to issue 200 IO per second if your IO
system can only handle 100 IO per second, so normally you would not see an
average disk queue length of more than 1. However, sometimes the disk queue
length can be high (more than 1) on some heavily loaded systems. If that
happens then you know you have an IO bottleneck. The % disk busy is also a
good indication and it should be used in conjunction with average disk queue
length.
--
--
Wei Xiao [MSFT]
SQL Server Storage Engine Development
http://weblogs.asp.net/weix
This posting is provided "AS IS" with no warranties, and confers no rights.
<bertcord@.gmail.com> wrote in message
news:1106861683.038716.139100@.c13g2000cwb.googlegroups.com...
> Yeah that's what I said...what are they talking about. I am in the
> process of doing some research on multiple files vs. multiple
> filgroups. I have seen it discussed a bunch on these groups but I need
> to perform some tests to back it up. During my research I came across
> the following from the "SQL Server 2000 Operations Guide"
> "Note on Disk Queue Length
> This counter no longer yields meaningful results in interpreting disk
> I/O problems with SQL Server 2000. This is because the SQL Server
> engines dynamically manage disk I/O, which is why the Max Async I/O
> setting no longer exists. A simplified explanation is that SQL Server
> issues an order for disk I/O, but does not wait for that process to
> complete before issuing another request. When the I/O request has
> completed its work, the database engine is notified and processes the
> results. SQL Server monitors the throughput on these requests, and will
> manage the amount of I/O that the disks can effectively handle."
> You can see it here
> http://www.microsoft.com/technet/prodtechnol/sql/2000/maintain/sqlops6.mspx
> can anyone make any sense of this.
> "SQL Server monitors the throughput on these requests, and will manage
> the amount of I/O that the disks can effectively handle."
> ok what does this mean? it will mange the amount of IO...ok how.
> Has anyone seen this, or has an explanation?
> Thanks
> Bert
>|||Thanks for responding. I read the article today and it was very good.
I am however interested in how SQL manages IO when working with
multiple files in a single filegroup.
For example, how flushing pages to disk is affected when a database has
multiple files. My assumption is that it is the same, except that the
process performing the write (lazy writer, checkpoint, or eager write)
would pass off the write requests to the thread that is working on the
specific file. Based on my test results, writing to multiple files was
always a tiny bit slower if the disk subsystem was not under heavy
load.
I am aware that there is some confusion about how SQL Server issues
threads for reads. Some people feel that having more files is the only
way SQL will spawn more threads. Andrew Kelly pointed out in the
following Google threads that this is not correct for SQL 2000.
http://linktrim.com/x2
http://linktrim.com/x3
For writes I confirmed that SQL does write to all files in parallel. I
used fn_virtualfilestats to test. I had a Query Analyzer window open
and I held down the F5 key so it was constantly refreshing; I could see
that the NumberWrites column for all the files was incrementing on
each refresh. I do admit that my method was not 100% scientific, but it
was the only way I could think of to confirm this.
The specific things I want to test are:
1. Do multiple files on the same drive perform better than a single
file?
2. Do multiple files spread on separate drives perform better than a
single file?
My test server configuration is as follows
Dell 6650, 4 Xeon 1595, 5.2 GB of Ram
Disk configuration as follows
ArrayGroup0 - Channel1 E: RAID5 - 3 Drives
ArrayGroup1 - Channel0 F: RAID5 - 7 Drives
ArrayGroup1 - Channel1 J: RAID5 - 7 Drives
ArrayGroup2 - Channel0 G: RAID10 - 12 Drives
Drives E,F, and J only have data files for the test databases. Drive G
contains the tempDB log files for the test databases and all BULK
INSERT files.
I created 3 separate databases all having a single fileGroup, the file
configuration is as follows.
1. BulkInOut1 - One data File on the E drive.
2. BulkInOut2 - Three datafiles, one each on E, F and J
3. BulkInOut3 - Three datafiles all on the E drive.
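For what it's worth, the multi-file layout (BulkInOut2) would look roughly like this; file names, sizes, and paths are illustrative guesses, not the actual test scripts:

```sql
-- BulkInOut2: one filegroup, three data files on three arrays (E, F, J).
-- SQL Server proportionally fills across the files in a filegroup,
-- so this effectively stripes the data at the software level.
CREATE DATABASE BulkInOut2
ON PRIMARY
    (NAME = BulkInOut2_1, FILENAME = 'E:\Data\BulkInOut2_1.mdf', SIZE = 2GB),
    (NAME = BulkInOut2_2, FILENAME = 'F:\Data\BulkInOut2_2.ndf', SIZE = 2GB),
    (NAME = BulkInOut2_3, FILENAME = 'J:\Data\BulkInOut2_3.ndf', SIZE = 2GB)
LOG ON
    (NAME = BulkInOut2_log, FILENAME = 'G:\Log\BulkInOut2_log.ldf', SIZE = 1GB)
```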
Test1 - Bulk inserting 11,000,000 rows into an empty table with no indexes.
I ran each test 5 times and here are the results averaged in seconds.
BulkInOut1 - 165 (One File)
BulkInOut2 - 173 (3 files sep drives)
BulkInOut3 - 172 (3 files same drive)
Raw Results
BulkInOut1 - 164,162,169,163,165
BulkInOut2 - 169,175,184,166,169
BulkInOut3 - 172,170,168,180,171
Test2 - Create indexes on table from test 1. Create Clustered and two
non-clustered.
BulkInOut1 - 127 (One File)
BulkInOut2 - 98 (3 files sep drives)
BulkInOut3 - 132 (3 files same drive)
Raw Results
BulkInOut1 - 126,129,126
BulkInOut2 - 101,98,96
BulkInOut3 - 135,132,130
Based on these two tests I make the following conclusions:
1. Multiple files on the same drive are always slower; do not do it.
2. Having multiple files spread onto multiple drives can perform
better. What happened in my test cases is that test one hardly
stressed the disk subsystem, so the multiple-disk configuration performed
slower; this I assume is due to the overhead of context switching. In
test two, IO demands increased and disk queuing occurred on BulkInOut3
(3 files same drive), which caused it to be the slowest.
Next week I am going to perform some more tests with a better simulated
load. I am going to write an app that opens multiple connections and
inserts and selects data.
Any comments on my results or methodology would be appreciated. If
anyone else has performance data on file IO tests I would love to see them.
Thanks for your time
Bert Corderman|||I have completed a few more tests and here are the results. As a recap
my drive config is as follows
ArrayGroup0 - Channel1 E: RAID5 - 3 Drives
ArrayGroup1 - Channel0 F: RAID5 - 7 Drives
ArrayGroup1 - Channel1 J: RAID5 - 7 Drives
ArrayGroup2 - Channel0 G: RAID10 - 12 Drives
Test DB's
1. BulkInOut1 - One data File on the E drive.
2. BulkInOut2 - Three data files, one each on E, F and J
3. BulkInOut3 - Three data files all on the E drive.
Creation of Clustered Indexes only
BulkInOut1 - 73 (One File)
BulkInOut2 - 48 (3 files sep drives)
BulkInOut3 - 78 (3 files same drive)
As another test I wanted to see the performance of creating a separate
Filegroup for the non-clustered indexes. I tested a few configurations.
For these tests I used the database with one data file on E
(BulkInOut1)
The first test was a baseline
1. Non-Clustered Indexes Created on Primary filegroup - 66
The following tests added the Index filegroup with different file
placement.
2. 1 File on J = 62
3. 2 Files on J = 62
4. 1 File on J, one on F = 60
5. 5 Files on J = 65
Again my tests show that having multiple files in the same FileGroup
does not perform any better than having only one file, and in some
cases will even perform slower.
Any suggestions or input would be appreciated
Thanks
Bert|||I don't know, Bert. You seem to have done the most methodical and
exhaustive test that I've seen reported in this forum.
If you've seen better, or at least equal performance, using single file
filegroups, Occam's Razor would say to go that route... KISS principle,
right?
Also, did you try reformatting the drives using 64 KB clusters? We have
experienced quite a bit of throughput increase using this cluster allocation.
If you can afford it, use RAID 10 for everything. However, you should be
able to get by with RAID 5 for data files, RAID 10 for log files, and RAID
1 for the tempdb, which should be on a dedicated set of disks.
Also, having more active spindles per database file is more important than
the files themselves. If you can keep each processor driving I/O against each
spindle, then incoming I/O requests will be serviced more efficiently.
Second, having as many channels per parallel request as possible will also
increase your throughput.
Check out this link for some case studies on this topic. Granted, they
cover a broad range of installation issues, but all of them have disk and file
layout strategies. I found the EMC CLARiiON whitepaper extremely
interesting. They introduce a term called "Plaiding," which is a
combination of RAID striping at the hardware level and multiple SQL Server
file placements at the software level, similar to a plaid pattern. This way
you maximize spindle usage per request.
There is A LOT of information on this site. Take your time to check out
each and every one. I have and it has provided quite a bit of information
to consider.
http://www.microsoft.com/sql/techinfo/administration/2000/scalability.asp
See what you think.
Sincerely,
Anthony Thomas
"bert" <bertcord@.gmail.com> wrote in message
news:1107304742.349333.272980@.z14g2000cwz.googlegroups.com...
I have completed a few more tests and here are the results. As a recap
my drive config is as follows
ArrayGroup0 - Channel1 E: RAID5 - 3 Drives
ArrayGroup1 - Channel0 F: RAID5 - 7 Drives
ArrayGroup1 - Channel1 J: RAID5 - 7 Drives
ArrayGroup2 - Channel0 G: RAID10 - 12 Drives
Test DB's
1. BulkInOut1 - One data File on the E drive.
2. BulkInOut2 - Three data files, one each on E, F and J
3. BulkInOut3 - Three data files all on the E drive.
Creation of Clustered Indexes only
BulkInOut1 - 73 (One File)
BulkInOut2 - 48 (3 files sep drives)
BulkInOut3 - 78 (3 files same drive)
As another test I wanted to see the performance of creating a separate
Filegroup for the non-clustered indexes. I tested a few configuration.
For these tests I used the database with one data file on E
(BulkInOut1)
The first test was a baseline
1. Non-Clustered Indexes Created on Primary filegroup - 66
The following tests added the Index filegroup with different file
placement.
2. 1 File on J = 62
3. 2 Files on J = 62
4. 1 File on J, one on F = 60
5. 5 Files on J = 65
Again my tests show that having multiple files in the same FileGroup
does not perform any better than having only one file, and in some
cases will even perform slower.
Any suggestions or input would be appreciated
Thanks
Bert|||Thanks for the input Anthony...
Yes, KISS is what I want to do... but again, if the difference is that
much greater, sometimes it just isn't simple any more.
>>Also, did you try reformatting the drives using 64 KB clusters?
No I have not tried this yet...this is actually the first I have heard
of this but will definitely look into it. They actually talk of using
64kb clusters in the Rosetta Genomics 10-Terabyte Human Genome Database
white paper. (From the other link you mentioned) I had actually
thought of this but I did not know it was possible to format a drive
like this. For example sitting on my XP machine at home when I go to
format a drive the only options I have are 512, 1024, 2048 and 4096
bytes... I will have to look at this some more.
>>Also, having more active spindles per database file is more important
than files themselves.
I agree 110%. I actually performed a bunch more tests than what I
posted here. Having multiple files spread to different arrays only
increased performance if there was enough IO to saturate the IO of one
drive\controller. If the IO wasn't enough, the performance was
actually slower. The KISS philosophy leads me to recommend only
creating multiple files per file group if the IO generated can saturate
a single drive. Before adding more files I would recommend first
placing heavily accessed tables onto a new file group onto a separate
drive. But again I would like to do more research in this area. For
example, let's say you have a big database with only 2 tables (a simple
example) and you have 2 drives. Would you a) create one filegroup
with a file on each drive, or b) two filegroups with a file on each
drive?
>>They introduce a term called "Plaiding," which is a combination of
RAID striping at the hardware level and multiple SQL Server file
placements at the >>software level, similar to a plaid pattern.
Hmm, this sounds very interesting. Unfortunately the link from the
Microsoft website no longer works. I tried searching the EMC site for
the article, searching for the article title and for the terms plaid
and Plaiding... no dice... you don't happen to have the article
saved anywhere, do you? In most cases I would prefer to let the hardware
perform the striping...but I would love to read the article
Bert|||For the 64 KB allocation units (disk clusters), as well as many other useful
operational tips, check out the Microsoft SQL Server 2000 Operations Manual.
Be aware, however, that once you go above 4 KB clusters, disk compression
and defragmentation are no longer available through the native Windows
tools.
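For reference, the allocation unit (cluster) size is chosen at format time; from a command prompt it would look like the following (the drive letter is only an example, and formatting destroys the volume's contents):

```shell
format F: /FS:NTFS /A:64K
```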
As far as your question concerning the two big tables, the answer depends on
the usage, as always. If you were to join the two tables frequently, I
would place each table within a separate, single-file filegroup, one placed
on each drive, in order to get simultaneous read-ahead reads for the JOIN
operation. However, an option not provided: with only two drives, I'd want
to maximize spindle usage; so, I'd stripe the two drives together, at the
hardware level, of course, and then put one file, one filegroup against the
stripe set.
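That first option (each table in its own single-file filegroup, one per drive) might be sketched as follows; the database, file, and table names are illustrative assumptions:

```sql
-- Two single-file filegroups, one per physical drive, one big table on each,
-- so a frequent join can drive read-ahead against both drives at once.
ALTER DATABASE BigDB ADD FILEGROUP FG_Orders
ALTER DATABASE BigDB ADD FILE
    (NAME = Orders1, FILENAME = 'E:\Data\Orders1.ndf', SIZE = 4GB)
    TO FILEGROUP FG_Orders
ALTER DATABASE BigDB ADD FILEGROUP FG_Customers
ALTER DATABASE BigDB ADD FILE
    (NAME = Customers1, FILENAME = 'F:\Data\Customers1.ndf', SIZE = 4GB)
    TO FILEGROUP FG_Customers

CREATE TABLE dbo.Orders
    (OrderID int NOT NULL PRIMARY KEY, CustomerID int NOT NULL)
    ON FG_Orders
CREATE TABLE dbo.Customers
    (CustomerID int NOT NULL PRIMARY KEY, Name varchar(100))
    ON FG_Customers
```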
As far as the "plaiding" article is concerned, yes, I noticed EMC moved the
site. I have the .pdf file and will forward it to you when I get a chance.
When I "Googled" the net, I did come across an article between Dell and
Oracle that explained similar features.
Also, know that if you have DAS, it would make no sense to "plaid" because
you would have total control over the layout of the disks and could
manipulate partitions for logs and data files and control the striping;
however, in a SAN situation, you oftentimes do not have this luxury. In
this scenario, "plaiding" can not only potentially increase spindle
usage, but also give back to the DBA some measure of I/O performance
control, especially given that the DBA is in the prime position to determine
the "changing" environmental characteristics of the DBMS that no Storage
Administrator could possibly envision.
Sincerely,
Anthony Thomas
"bert" <bertcord@.gmail.com> wrote in message
news:1109015102.338558.43420@.l41g2000cwc.googlegroups.com...
Thanks for the input Anthony...
Yes KISS is what I want to do...but again if the difference is that
much greater sometimes it just isn't simple any more.
>>Also, did you try reformatting the drives using 64 KB clusters?
No I have not tried this yet...this is actually the first I have heard
of this but will definitely look into it. They actually talk of using
64kb clusters in the Rosetta Genomics 10-Terabyte Human Genome Database
white paper. (From the other link you mentioned) I had actually
thought of this but I did not know it was possible to format a drive
like this. For example sitting on my XP machine at home when I go to
format a drive the only options I have are 512, 1024, 2048 and 4096
bytes... I will have to look at this some more.
>>Also, having more active spindles per database file is more important
than files themselves.
I agree 110%. I actually performed a bunch more tests than what I
posted here. Having multiple files spread to different arrays only
increased performance if there was enough IO to saturate the IO of one
drive\controller. IF the IO wasn't enough the performance was
actually slower. The KISS philosophy leads me to recommend only
creating multiple files per file group if the IO generated can saturate
a single drive. Before adding more files I would recommend first
placing heavily accessed tables onto a new file group onto a separate
drive. But again I would like to do more research in this area. For
example lets say you have a big database with only 2 tables (simple
example) and you have 2 drives. Would you a) create one file group
with a file on each drive or b) two file groups with a file on each
drive?
>>They introduce a term called "Plaiding," which is a combination of
RAID striping at the hardware level and multiple SQL Server file
placements at the >>software level, similar to a plaid pattern.
Hum this sound very interesting. Unfortunately the link from the
Microsoft website no longer works. I tried searching the EMC site for
the article by searching for the article title and for the term plaid
and Plaiding... no dice... you don't happen to have the article
saved anywhere do you? I most cases I would prefer to let the hardware
perform the striping...but I would love to read the article
Thanks
Bert|||I am still digging over all the white papers on the MS site...that is a
great link thanks
>>However, an option not provided, with only two drives, I'd want to
maximize spindle usage; so, I'd stripe the two >>drives together, at
the hardware level, of course, and then one file, one filegroup against
the stripe set.
I guess my question about the 2 tables and two drives was a bit too
simplistic. Maybe I can ask the question in a different way, as I am
curious as to what your conclusions would be.
How do you maximize spindle usage when you have created arrays as large
as the controller can support? In my simple example above, let's say the
two drives are actually two arrays configured as large as possible (15
drives in RAID5 or RAID10). Would you still want one filegroup with the
2 tables on one array and a second filegroup with the NonClustered
indexes on the second array? At first glance this is what I would
think also, but... let's say the NonClustered indexes are very large; it
is possible that when inserting data, or when joining, more IO is
generated on the NonClustered index array. You could theoretically end up
in a situation where the NonClustered array is becoming a bottleneck
but the table array is fine.
Wouldn't it make more sense to just create one filegroup with a single
file on each array, place all tables and indexes on this single
filegroup, and let the hardware handle the striping?
It becomes even much more complicated when you add multiple tables to
the file groups.
The best reason I can think of for using multiple user-defined
filegroups is if you are creating the filegroup for a single object,
and you know that the data is going to be accessed sequentially.
>>I have the .pdf file and will forward it to you when I get a chance.
Thanks that would be great
>>Also, know that if you have DAS,...
I am not sure what you are referring to when you say DAS, what does it
stand for?
Bert|||I would keep it simple. Put your system and user databases' data files on
one RAID 5 or RAID 10 volume. Put your system and user databases' log files
on one RAID 10 volume. Put your TempDB data file on one RAID 1 volume and
the TempDB log file on another RAID 1 volume.
You could then go with, for all user databases only, and for the data file(s)
only: leave the system tables and objects in the PRIMARY FILEGROUP with one
file on one disk. Put the clustered indexes in one file in a SECONDARY or
TABLE FILEGROUP on one disk. Then, finally, put all the non-clustered
indexes in one file in a TERTIARY or INDEX FILEGROUP.
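A rough sketch of that layout (the database, filegroup, file, and index names are made up for illustration):

```sql
-- Clustered index (i.e. the table data) on one filegroup/disk,
-- non-clustered indexes on another.
ALTER DATABASE AppDB ADD FILEGROUP TABLE_FG
ALTER DATABASE AppDB ADD FILE
    (NAME = AppData1, FILENAME = 'F:\Data\AppData1.ndf', SIZE = 4GB)
    TO FILEGROUP TABLE_FG
ALTER DATABASE AppDB ADD FILEGROUP INDEX_FG
ALTER DATABASE AppDB ADD FILE
    (NAME = AppIdx1, FILENAME = 'J:\Data\AppIdx1.ndf', SIZE = 2GB)
    TO FILEGROUP INDEX_FG

CREATE CLUSTERED INDEX CI_Orders
    ON dbo.Orders (OrderID) ON TABLE_FG
CREATE NONCLUSTERED INDEX IX_Orders_Customer
    ON dbo.Orders (CustomerID) ON INDEX_FG
```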
Whenever you perform CRUD operations, ALL INDEXES WILL BE UPDATED, both
clustered and non-clustered indexes. Only the UPDATE operations could
possibly be directed at non-clustered index(es) only. If the clustered
index is augmented or modified, so will the non-clustered indexes. Also, if
you have a mix of large and small Read operations, large queries would force
scans, which would use the clustered index, small queries would be directed
at the non-clustered indexes or an index seek of the clustered index.
It only makes sense to use "plaiding," multiple files per filegroup, when
you have limited control over the SAN. Another potential use would be in a
consolidated environment where a single SQL Server instance could be hosting
numerous application databases. We have one where we host nearly 100 vended
application databases. "Plaiding" helps us further reduce potential SAN
hotspots from the pseudo-random drive usage.
Also keep in mind that, for vended databases, you will have little control
over the use of FILEGROUPs and object placement on those filegroups. The
vendors tend to throw everything into a single-file PRIMARY FILEGROUP, and
you would have to modify their scripts to get the objects created elsewhere.
"Plaiding" isolates the file layouts from the database object definitions.
I also sent you that .pdf whitepaper by private communication.
Sincerely,
Anthony Thomas
"bert" <bertcord@.gmail.com> wrote in message
news:1109115612.962063.112300@.c13g2000cwb.googlegroups.com...
I am still digging over all the white papers on the MS site...that is a
great link thanks
>>However, an option not provided, with only two drives, I'd want to
maximize spindle usage; so, I'd stripe the two >>drives together, at
the hardware level, of course, and then one file, one filegroup against
the stripe set.
I guess my question about the 1 tables and two drives was a bit too
simplistic. Maybe I can ask the question in a different way as I am
curios as to what your conclusions would be.
How do you maximize spindle usage when you have created arrays as large
as the controller can support? In my simple example above lets say the
two drives are actually two arrays configured as large as possible (15
drives in RAID5 or RAID10), would you still want one filegroup with the
2 tables on one array and a second filegroup with the NonClustered
Indexes on the second filegroup? At first glance this is what I would
think also but...lets say the NonClustered indexes are very large it
is possible that when inserting data, or when joining more IO is
generated on The NonClustedIndex array. You could theoretically end up
in a situation where the NONClustered array is becoming a bottleneck
but the TableArray is fine
Wouldn't it make more sense to just create one filegroup with a single
file on each array and place all tables and indexes on this single
filegroup. Let that hardware handle the striping?
It becomes even much more complicated when you add multiple tables to
the file groups.
The best reason I can think of for using multiple user defined
filegroups, is if you are creating the FileGroup for a single object,
and you know that the data is going to accessed sequentially.
>>I have the .pdf file and will forward it to you when I get a chance.
Thanks that would be great
>>Also, know that if you have DAS,...
I am not sure what you are referring to when you say DAS, what does it
stand for?
Bert
Sunday, March 11, 2012
Disaster recovery in multiple publishers environment
Hi,
I configured a multiple-publishers, one-subscriber architecture on SQL 2005
by having one publisher control the schema and indexes on the subscriber, and
the other publishers only append their data to the subscriber. I'm trying to
mimic a disaster recovery scenario where one of the publishers fails (one
scenario is for the main publisher to fail, and one is for one of
the other publishers to fail), but I'm having issues with restoring any of the
publishers (either type) and re-initializing the replication with the one
subscriber in a way that doesn't generate data conflicts and allows the
replication to continue.
Every time I try this, the subscriber reports data conflicts
against data that the same publisher has already replicated (this is
because the records have a uniqueidentifier primary key) and now tries to
replicate it again, and therefore replication doesn't keep pushing new
data to the subscriber.
How can I approach either scenario without having to delete the failed
publisher's data from the subscriber, and without allowing duplicate data
(if I remove the primary key from the subscriber) before I re-initialize the
replication?
Thanks,
Gil
Can you outline exactly how the disaster recovery test is being done - do you
restore the publisher's publication database to the same or another server?
Are you synchronising the distribution database backups? Are you using the
same distribution database, or does each publisher have its own?
Thanks,
Paul Ibison
|||Have a look at the "Continue on data consistency errors" agent profile. This
will allow you to skip the PK violations.
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
Looking for a FAQ on Indexing Services/SQL FTS
http://www.indexserverfaq.com
"Gil Shabat" <GilShabat@.discussions.microsoft.com> wrote in message
news:96ABD1A2-F355-4A77-B44C-2B60EA410CDD@.microsoft.com...
> Hi,
> I configured multiple publishers and one subscriber architecture on sql
> 2005
> by having one publisher control the schema and indexes on the subscriber,
> and
> the other publishers only append their data to the subscriber. I'm trying
> to
> mimic a disaster recovery issue where one of the publishers fails (one
> scenario is for the main publisher to fail, and one scenario is for one of
> the other publishers to fail), but I'm having issue with restoring any of
> the
> publishers (either type) and re-initializing the replication with the one
> subscriber in a way that doesn't generate data conflicts and allow the
> replication to continue.
> Every time I try to do this, the subscriber reports of data conflicts
> against the data that the same publisher has replicated already (this is
> because the records have a primary uniqueidentifier key), and now tries to
> replicate again, and therefore replication doesn't seem to keep pushing
> new
> data to the subscriber.
> How can approach either scenario without having to delete the failed
> publisher data from the subscriber, and without allowing for duplicate
> data
> (if I remove the primary key from the subscriber) before I re-initialize
> the
> replication?
> Thanks,
> Gil
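As far as I know, the 'Continue on data consistency errors' profile mentioned above is just a stock agent profile that presets the Distribution Agent's -SkipErrors parameter; roughly (this is an approximation, not an exact dump of the profile):

```sql
-- Sketch only: the stock profile is assumed to be equivalent to running the
-- Distribution Agent with
--     -SkipErrors 2601:2627:20598
-- i.e. duplicate key (2601), primary key violation (2627) and row not
-- found (20598) are logged and skipped instead of stopping the agent.
```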
|||Thanks for your replies.
For now I'm using one server to back up and then restore the database. I
haven't got to restoring the distribution database, which also exists on the
same server. After some experimenting, I found that if I delete the
subscription + publisher, then restore the database, run the following
command: exec sp_removedbreplication @dbname = 'dbname', @type = 'both' to remove
records of the publication,
and re-create the publication + subscription, but configure the subscriber
not to initialize, I get to a point that the new data is replicated again
without too many issues in both scenarios. However, I'm not sure whether this
is the right approach to take. Can you please let me know your thoughts?
Also, on a related issue, I tested using the SQL 2005 Developer edition
changing the schema of the subscriber, by changing the main publisher, and
this seems to work just fine. The problem is that when I run the same scripts
against a publisher that has the 'Keep existing object unchanged' setting, it
still tries to replicate the DDL (I tried to add a column to an existing
replicated table), and therefore returns an error (something about the same
column name not being allowed to exist twice in the same table). In addition, if I update
one of the other publishers first I can clearly see that despite the 'Keep
existing object unchanged' setting it still replicates the changes.
I tried to follow the process above and this allowed me to get around this
issue (i.e. delete all publications other than main publication, make schema
changes to all including the main publication, so subscriber schema would
also be modified, then re-create the publications, and disable the
initialize), but this seems completely wrong to me. Any thoughts on that?
Thanks,
Gil
"Gil Shabat" wrote:
> Hi,
> I configured multiple publishers and one subscriber architecture on sql 2005
> by having one publisher control the schema and indexes on the subscriber, and
> the other publishers only append their data to the subscriber. I'm trying to
> mimic a disaster recovery issue where one of the publishers fails (one
> scenario is for the main publisher to fail, and one scenario is for one of
> the other publishers to fail), but I'm having issue with restoring any of the
> publishers (either type) and re-initializing the replication with the one
> subscriber in a way that doesn't generate data conflicts and allow the
> replication to continue.
> Every time I try to do this, the subscriber reports of data conflicts
> against the data that the same publisher has replicated already (this is
> because the records have a primary uniqueidentifier key), and now tries to
> replicate again, and therefore replication doesn't seem to keep pushing new
> data to the subscriber.
> How can approach either scenario without having to delete the failed
> publisher data from the subscriber, and without allowing for duplicate data
> (if I remove the primary key from the subscriber) before I re-initialize the
> replication?
> Thanks,
> Gil
|||What you initially describe is known as a nosync initialization and this is
quite standard.
I'm not too sure why the replication of DDL changes would cause a duplicate
column - can you elaborate? E.g. is the schema of the table on the subscriber
already different from the publisher before the change?
As a workaround which avoids a complete reinitialization, you could drop the
article, drop the subscription to the article, make the change, then re-add
(see http://www.replicationanswers.com/AddColumn.asp) but this might not be
necessary depending on more info from the question above.
HTH,
Paul Ibison
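For reference, the "nosync" initialization Paul describes above is done through sp_addsubscription; a hedged sketch (all names below are hypothetical, not taken from the thread):

```sql
-- Sketch of re-creating the subscription without re-initializing, so the
-- data already on the subscriber is left in place and no snapshot is applied.
EXEC sp_addsubscription
    @publication       = N'MainPublication',           -- assumed name
    @subscriber        = N'SUBSCRIBERSERVER',          -- assumed name
    @destination_db    = N'SubscriberDB',              -- assumed name
    @subscription_type = N'push',
    @sync_type         = N'replication support only';  -- no snapshot applied
```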
|||Thanks for the info.
I basically tested by having two schema-identical databases (publishers),
and an identical schema database subscriber. Any change to the schema (ALTER
TABLE ADD...) on either publisher caused an immediate replication of the DDL
to the subscriber regardless whether or not the 'Keep existing object
unchanged' setting was used or not (the main publisher used the 'Drop
existing object and create a new one' and the secondary publisher used the
'Keep existing object unchanged' setting). In our production environment I
envision running the same DDL code against all publishers, and having only
the main publisher replicate those schema changes to the subscriber.
Unfortunately, when I tried to run a simple add-column statement, the first
publisher to run its DDL statement also replicated its change to the
subscriber, while the 2nd one tried to do the same but returned an error that
the same column name cannot exist multiple times. The following is the exact
message: "Column name 'additional_Sync' in table 'dbo.Primary1' is specified
more than once"
To create the environment, I created two separate databases, and I ran
against them the same DDL to create one table. Then I created the main
publisher and subscription, which created the environment on the 3rd database
(subscriber), and finally, I created a new publication for the secondary
publisher, but used the 'Keep existing object unchanged' setting, so schema
wouldn't be re-created. This resulted in having data from the 2 publishers on
the subscriber. At this point I tried to test with adding a new column to the
table I created on the publishers.
I would appreciate any thoughts you might have.
Thanks,
Gil
|||Please can you test something for me with your setup. I believe that in SQL
2000 I set this up differently. One publisher/subscriber was set up as per
usual. The other one was set up with the "keep the existing table unchanged"
option. Data was sent from the second publisher but there was no DROP TABLE
statement there. If the data is partitioned this is fine. The thing is that
after setting it up this way, using sp_repladdcolumn, the column was only
propagated from the first publisher and not the second. This is where I
believe the difference might lie - you are using ALTER TABLE instead, and you
have the "keep existing..." setting on each publisher's articles. Please try
setting it up my way (not yet tried in SQL 2005) and see if this works for
you...
HTH,
Paul Ibison
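Paul's sp_repladdcolumn suggestion, sketched against the table named earlier in the thread (the column's data type is assumed, since it isn't stated):

```sql
-- Run on the FIRST publisher only; sp_repladdcolumn both alters the
-- published table and propagates the new column to the subscriber.
EXEC sp_repladdcolumn
    @source_object = N'dbo.Primary1',      -- table from the thread
    @column        = N'additional_Sync',   -- column from the thread
    @typetext      = N'int NULL';          -- assumed data type
```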
|||I believe that in SQL 2005 MS changed the option from "keep the existing
table unchanged" to 'Keep existing object unchanged', but both do exactly the
same thing.
I already tried yesterday to disable the replication of schema updates by
executing: exec sp_changepublication @publication = 'Publication
Name', @property = 'replicate_ddl', @value = 0
and then executed the ALTER TABLE command, which wasn't included in the
replication. Then I added the column to replication by executing
sp_repladdcolumn on both publishers, and the column was added, and again,
like before, both DDL commands were sent to the subscriber, and I ended up with
the same error.
Gil
Friday, February 17, 2012
Disable Job Steps
Hi
I have a job setup in SQL Server Agent (SQL 2005) that runs multiple
packages. Each package is run as a separate step in the job. I am busy
testing the job and want to skip some of the longer running packages for now.
Is there a way to disable a job step and then enable it again when I am
ready. Deleting the step and recreating it again is one option, but there
must be a better way.
Thanks
Take a look at
ms-help://MS.SQLCC.v9/MS.SQLSVR.v9.en/tsqlref9/html/e158802c-c347-4a5d-bf75-c03e5ae56e6b.htm
"Trevor Howe" <TrevorHowe@.discussions.microsoft.com> wrote in message
news:13A6603B-003D-4BD8-805F-F37C8AC23A6F@.microsoft.com...
> Hi
> I have a job setup in SQL Server Agent (SQL 2005) that runs multiple
> packages. Each package is run as a separate step in the job. I am busy
> testing the job and want to skip some of the longer running packages for
> now.
> Is there a way to disable a job step and then enable it again when I am
> ready. Deleting the step and recreating it again is one option, but there
> must be a better way
> Thanks
|||Trevor Howe wrote:
> Hi
> I have a job setup in SQL Server Agent (SQL 2005) that runs multiple
> packages. Each package is run as a separate step in the job. I am busy
> testing the job and want to skip some of the longer running packages for now.
> Is there a way to disable a job step and then enable it again when I am
> ready. Deleting the step and recreating it again is one option, but there
> must be a better way
> Thanks
On each job step you have the option to decide what to do on "Success"
and on "Failure". Here you can choose to go to a certain step in the job
on, e.g., "Success".
--
Regards
Steen Schlüter Persson
Database Administrator / System Administrator
|||Thanks guys. That was fast!! I also found the solution about five minutes
after I sent out the original post. Wouldn't it be nice if all responses were
as successful and quick as this one. Now for my next mind bender: What is the
first name of the founder of Microsoft - :0) - Jokes
""Steen Schlüter Persson (DK)"" wrote:
> Trevor Howe wrote:
> > Hi
> >
> > I have a job setup in SQL Server Agent (SQL 2005) that runs multiple
> > packages. Each package is run as a separate step in the job. I am busy
> > testing the job and want to skip some of the longer running packages for now.
> > Is there a way to disable a job step and then enable it again when I am
> > ready. Deleting the step and recreating it again is one option, but there
> > must be a better way
> >
> > Thanks
> On each job step you have to option to decide what to do on "Success"
> and on "Failure". Here you can choose to go to certain step in the job
> on e.g. "Success".
> --
> Regards
> Steen Schlüter Persson
> Database Administrator / System Administrator
>
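The Books Online topic linked in the first reply is presumably sp_update_jobstep; a sketch of using it to hop over the long-running steps (the job name and step numbers below are made up):

```sql
-- Make step 1 jump straight to step 4 on success, skipping steps 2 and 3.
-- on_success_action = 4 means 'go to the step given by on_success_step_id';
-- repoint it back to 3 ('go to next step') when you are done testing.
EXEC msdb.dbo.sp_update_jobstep
    @job_name           = N'LoadPackagesJob',  -- hypothetical job name
    @step_id            = 1,
    @on_success_action  = 4,
    @on_success_step_id = 4;
```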