Showing posts with label "query". Show all posts

Tuesday, March 27, 2012

DISK SPACE QUERY

Hello, is there a way to make a SQL statement that checks disk space and
returns a value if it is below x%?

Hi,
See the query below; the free space for each drive is returned in megabytes.
create table space_check (drive_letter char(1), space_free int)
insert into space_check exec xp_fixeddrives
select * from space_check where space_free < 20
Thanks
Hari
SQL Server MVP
"Gervaise" <Gervaise@.discussions.microsoft.com> wrote in message
news:86C35096-4F7D-4CBB-8A1A-9BDBF028F0E7@.microsoft.com...
> Hello, is there a way to make a sql statement that checks disk space and
> returns a value if it is below x%?
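The xp_fixeddrives approach above reports only free megabytes, not a percentage, since the procedure does not return total drive size. As a hedged sketch, on later SQL Server versions (2008 R2 and up) the sys.dm_os_volume_stats DMV could compute percent free per volume directly; the 20 here stands in for the asker's x% threshold:

```sql
-- Sketch, assuming SQL Server 2008 R2+ (sys.dm_os_volume_stats):
-- percent free on each volume hosting a database file,
-- filtered to volumes below the x% threshold (20% here).
SELECT DISTINCT
       vs.volume_mount_point,
       CAST(vs.available_bytes * 100.0 / vs.total_bytes AS decimal(5, 2)) AS pct_free
FROM   sys.master_files AS f
CROSS APPLY sys.dm_os_volume_stats(f.database_id, f.file_id) AS vs
WHERE  vs.available_bytes * 100.0 / vs.total_bytes < 20;
```

On the SQL Server 2000/2005-era versions discussed in this thread, the xp_fixeddrives answer above is the practical option.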

Thursday, March 22, 2012

Disk Defragmentation, Query Performance?

Ok. How does disk defragmentation and query performance relate to each other?
How often should we defragment SQL boxes in DEV and PROD? Do we see any data
retrieval performance benefit if we regularly defragment the disk? Any
pointers on this topic are appreciated. Assume the box is a dedicated SQL
Server with no additional applications.

Have a look at this:
http://www.microsoft.com/technet/prodtechnol/sql/2000/maintain/ss2kidbp.mspx
Andrew J. Kelly SQL MVP
"S" <S@.discussions.microsoft.com> wrote in message
news:8CC6A253-6B54-4806-BCF9-E5E2889241BB@.microsoft.com...
> Ok. How does Disk defragmentation and Query Performance relate to each
> other?
> How often should we defragment SQL boxes in DEV and PROD? Do we see any
> data
> retrieval performance benefit if we regularly defragment the disk? Any
> pointers on this topic is appreciated. Assuming the box is dedicated SQL
> server with no additional applications.

Andrew,
Thanks for your response, but my question is different here. I am asking
about disk defragmentation, not index defragmentation.
"Andrew J. Kelly" wrote:

> Have a look at this:
>
> http://www.microsoft.com/technet/prodtechnol/sql/2000/maintain/ss2kidbp.mspx
>
> --
> Andrew J. Kelly SQL MVP
>
> "S" <S@.discussions.microsoft.com> wrote in message
> news:8CC6A253-6B54-4806-BCF9-E5E2889241BB@.microsoft.com...
>
>|||On Mon, 28 Nov 2005 14:49:04 -0800, "S" <S@.discussions.microsoft.com>
wrote:
>Thanks for your response but my question is different here. I am asking
>about Disk defragmenation not Index defragmentation.
The logical defrags determine how many pages have to be fetched.
The physical defrags determine what it costs to fetch a physical page.
Obviously, they interact.
But (to generalize), most queries use mostly memory-cached pages, so
the physical fragmentation tends to be hidden behind the logical -
that is the logical generally goes bad first. I'd say logical defrags
are done more often on most systems than physical (though rebuilding
the clustered index does at least some of both). OTOH, if you have a
mostly batch system, with lots of scans and/or with huge tables, you
might have a system where the physical predominates.
Yes, it can help a lot, depending on your local circumstances, but if
you do regular rebuilds of the clustered indexes it tends to go bad
rather slowly, I guess would be the bottom line.
J.
ps - also probably matters less coming off RAID-5 than it would off
isolated spindles.

Sorry about that. If you create your database files at or near the size
that you need for many months to come, you should not need to worry about
continuing fragmentation at the file level. This usually comes from having
Auto Shrink or a job that regularly shrinks the files, which is a bad
practice in general. So if it is an existing system then you should only
need to defrag the disk once. Dev boxes may require it done more often if
you create and drop dbs a lot.
Andrew J. Kelly SQL MVP
"S" <S@.discussions.microsoft.com> wrote in message
news:454C3E18-23CA-47A8-8B8A-485CFA1D8DA1@.microsoft.com...
> Andrew,
> Thanks for your response but my question is different here. I am asking
> about Disk defragmenation not Index defragmentation.
> "Andrew J. Kelly" wrote:
>

Assuming we have transactional databases on the server, when databases are
created we will have to create a large file size so that they need not grow
often. But when we have many databases on the server, over the years we
will definitely see growth in file sizes, ultimately leading to disk
fragmentation causing more disk I/O than necessary.
I have read that if the mdf and ldf files are in use, then the disk defragment
will not be done and might be skipped? If it's a 24/7 shop, then the chance of
defragmenting the disk is very low unless you put up a maintenance
page not even allowing users to log in. I wonder how often disk
defragmentation is done in the industry on SQL boxes. Do we see a
significant performance improvement if at all we defragment the disk?
As J said, I guess this would be at the bottom of the list, with priority
given to:
1) Tune SQL queries
2) REINDEX/INDEXDEFRAG
3) Adding more RAM, etc.
And finally, what tools are used in the industry apart from Windows Disk
Defragmenter and Diskeeper for this purpose?
"Andrew J. Kelly" wrote:

> Sorry about that. If you create your database files at or near the size
> that you need for a many months to come you should not need to worry about
> continuing fragmentation at the file level. This usually comes from having
> Auto Shrink or a job that regularly shrinks the files which is a bad
> practice in general. So if it is an existing system then you should only
> need to defrag the disk once. Dev boxes may require it done more often if
> you create and drop dbs a lot.
> --
> Andrew J. Kelly SQL MVP
>
> "S" <S@.discussions.microsoft.com> wrote in message
> news:454C3E18-23CA-47A8-8B8A-485CFA1D8DA1@.microsoft.com...
>
>

A little bit of file fragmentation is not a concern. It has to be fairly
fragmented to make a dent in performance. But unfortunately there is no hard
guideline as to how much is too much. There are tools such as Diskeeper
that will defragment online. You should always have a good backup first,
and there may be a performance hit while defragging, but it can be done
online. I think you will find that if you defrag all the DBs once, it will
be quite a while before fragmentation at the file level is an issue if
you don't auto-shrink.
Andrew J. Kelly SQL MVP
"S" <S@.discussions.microsoft.com> wrote in message
news:1122FF26-03BD-4C5D-8E2F-33DD9E8A85BA@.microsoft.com...
> Assuming we have Transactional Databases on the server, when databases are
> created we will have to create a large file size such that it need not
> grow
> often. But when we have many databases on the server and over the years we
> will defintely see growth in file sizes and ultimately leading to disk
> fragmentation causing more disk I/O than necessary.
> I have read that if the mdf, ldf files are in use then disk defragment
> will
> not be done and might be skipped? If its a 24/7 shop then the possibility
> of
> defragmenting the disk tends to be very less unless you put up a
> maintenance
> page not even allowing users to log in. I wonder how often disk
> defragmentation is done in the industry on the sql boxes. Do we see
> significant performance improvment if at all we defragment disk?
> As J said, I guess this would be at the bottom of the list, making
> priority
> for
> 1) Tune Sql Queries
> 2) REINDEX/INDEXDEFRAG
> 3) Adding more RAM etc.,
> And finally what tools are used in the industry apart from windows DISK
> Defragmenter and Diskeeper for this purpose?
> "Andrew J. Kelly" wrote:
>

Thanks for getting back to me. I have downloaded a trial version of
Diskeeper, and it has some nice features like an exclusion list, scheduling, etc.,
and moreover, as you said, it's an online operation.
Thanks for your time!
"Andrew J. Kelly" wrote:

> A little bit of file fragmentation is not of concern. It has to be fairly
> fragmented to make a dent in performance. But unfortunately there is no hard
> guideline as to how much is too much. There are tools such as DiskKeeper
> that will defragment on-line. You always should have a good backup first
> and there may be a performance hit while defragging but they can be done
> online. I think you will find that if you defrag all the db's once it will
> be quite a while before fragmentation at the file level will be an issue if
> you don't autoshrink.
> --
> Andrew J. Kelly SQL MVP
>
> "S" <S@.discussions.microsoft.com> wrote in message
> news:1122FF26-03BD-4C5D-8E2F-33DD9E8A85BA@.microsoft.com...
>
>
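Since J's reply distinguishes logical (index) fragmentation from physical (file) fragmentation, and the thread mentions REINDEX/INDEXDEFRAG, a sketch for measuring the logical side may be useful. This assumes SQL Server 2005 or later, where the sys.dm_db_index_physical_stats DMV is available; the 30% cutoff is a commonly cited rebuild threshold, not a rule from this thread:

```sql
-- Sketch, assuming SQL Server 2005+: indexes in the current database
-- whose logical fragmentation exceeds 30%, a common rebuild threshold.
-- 'LIMITED' mode scans only the upper index levels, so it is cheap.
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       ips.index_id,
       ips.avg_fragmentation_in_percent
FROM   sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
WHERE  ips.avg_fragmentation_in_percent > 30;
```

Physical (NTFS-level) file fragmentation, by contrast, is not visible from inside SQL Server and needs an OS-level tool such as the ones discussed above.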


Tuesday, March 20, 2012

Distinct selections

Hi, I am trying to figure out how to use DISTINCT in a SELECT query for one particular column, but output more than the distinct column.

for example:

table 1

Alan Andrews 1 main st 07465
John Andrews 1 main st 07465
Erick Andrews 1 main st 07465

I want to select by distinct last name, but in my results I want to see all the other fields as well, and not just the last name. In this case, the first name, address, and zip code.

So is there a way of doing this in SQL?

It does not have to be with DISTINCT, but I need to net down to one row per last name in a select query.

Thanks in advance!

Alan

is there a way to tell which one of the records from the above example should be displayed ... or should it be a random one?

It really doesn't matter which one of the records gets selected. I just need to end up with one.

Is there an "nth" function in SQL perhaps?

Alan

no, there is no "nth" function

i should also like to take this opportunity to point out that DISTINCT is not a function either
select lastname
, min(firstname) as lowest_firstname_for_this_lastname
, max(address) as highest_address_for_this_lastname
, avg(zipcode) as average_zip_assuming_its_numeric
from daTable
group
by lastname|||no, there is no "nth" function

i should also like to take this opportunity to point out that DISTINCT is not a function either
select lastname
, min(firstname) as lowest_firstname_for_this_lastname
, max(address) as highest_address_for_this_lastname
, avg(zipcode) as average_zip_assuming_its_numeric
from daTable
group
by lastname

Ok, this could work but will it return my actual record. For my example I've excluded other fields that I need to select from my table such as Individual number, family number, gender and much more.

I think my original example was not the best

If your query uses the MIN aggregate function, won't I end up with the lowest-value first name, lowest-value gender, and lowest-value individual number, etc.?

So Alan Andrews might end up with gender code of F and the wrong customer number, etc.?|||So Alan Andrews might end up with gender code of F and the wrong customer number, etc.?

that is quite correct

it indicates that what you are doing (combining multiple rows for the same last name) is probably the wrong approach, as it will likely mash up different people

if the same lastname occurs on multiple rows, do you have any way of differentiating the various rows? any rule for which one you want? and please don't say again "oh, any one"

what is the primary key of your table?|||that is quite correct

it indicates that what you are doing (combining multiple rows for the same last name) is probably the wrong approach, as it will likely mash up different people

if the same lastname occurs on multiple rows, do you have any way of differentiating the various rows? any rule for which one you want? and please don't say again "oh, any one"

what is the primary key of your table?

I have a Unique Site number that is unique to each family. I will probably be doing the group by this number instead of the last name.

How about the 1st record in a group?

I tried using FIRST, but it doesn't appear to be an aggregated function in SQL server 2005.|||there is no such concept as "first" because rows don't have a position

and using Unique Site number merely shifts the problem from lastname, it does not make the problem go away

you will still need to somehow specify which row you want from the group of rows which all have the same Unique Site number

the best way to do this is to designate which row based on its primary key, since primary keys are unique

what is the primary key of your table?|||Arbitrary data is garbage, so why use it at all...

Can you explain to us what you are trying to do?|||there is no such concept as "first" because rows don't have a position

and using Unique Site number merely shifts the problem from lastname, it does not make the problem go away

you will still need to somehow specify which row you want from the group of rows which all have the same Unique Site number

the best way to do this is to designate which row based on its primary key, since primary keys are unique

what is the primary key of your table?

I currently don't have a primary key, but I can add one. Once I create this ID field, how would I designate it on my query?

Thanks for all the help!|||I know cursors are forbidden or banished to the WASTELAND:shocked: , and I am trying to find a way to do this without a cursor; surely it's possible!

:cool: Till than here is the cursor solution:
(tweeked the employee table a bit)

1> run the next 11 lines as a batch
declare @tempname varchar(20)
declare rpt cursor for select distinct lastname from employees
open rpt
fetch next from rpt into @tempname
while @@fetch_status = 0
begin
select top 1 * from employees where lastname = @tempname
fetch next from rpt into @tempname
end
close rpt
deallocate rpt

Take r937's suggestion; using a primary key is better any day, besides two people with the same surname can reside at two totally different locations.

:angel: Hope it works, have fun|||Once I create this ID field, how would I designate it on my query?

with a correlated subquery, which has the same effect as grouping

select lastname
     , firstname
     , address
     , zipcode
     , newPKcolumn
  from daTable as T
 where newPKcolumn
     = ( select min(newPKcolumn)
           from daTable
          where lastname = T.lastname )

the effect of the correlated subquery is to choose the (single) row which has the lowest newPKcolumn value from amongst all the rows with the same lastname|||Arbitrary data is garbage, so why use it at all...

Can you explain to us what you are trying to do?

Bret, I am trying to net my results down to one per site number, using a select query.

Thanks|||r937 you just gave me an idea:beer: .
This is really weird, the following sql command works:

select * from employees as A
where firstname=(select min(firstname)
from employees as B
where A.lastname = B.lastname)

Even though i have three persons with the same firstname and it returns unique records. does it work for anybody else!!! (i tweaked the employee table)|||with a correlated subquery, which has the same effect as grouping

select lastname
     , firstname
     , address
     , zipcode
     , newPKcolumn
  from daTable as T
 where newPKcolumn
     = ( select min(newPKcolumn)
           from daTable
          where lastname = T.lastname )

the effect of the correlated subquery is to choose the (single) row which has the lowest newPKcolumn value from amongst all the rows with the same lastname

This worked great. Thanks for your help!|||Not bad eh, for a guy who isn't a dba|||sql skills are not restricted to DBAs, man

that's like saying "wow, you can speak english -- not bad for a guy who isn't a DBA"

being a DBA means you do stuff like replication, installation, permissions, tuning, administration, etc.

you don't have to know any of that cr@p to be really good at sql :)|||all the stuff I hate to do...especially on DB2 OS/390
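The correlated-subquery trick settled on in this thread is plain ANSI SQL, so it can be sanity-checked outside SQL Server. Below is a minimal sketch using Python's sqlite3 module; the table, column names and rows are invented to mirror the thread's example, not taken from any real schema:

```python
import sqlite3

# Throwaway in-memory database; names and data are purely illustrative.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE daTable (pk INTEGER PRIMARY KEY, lastname TEXT, firstname TEXT)")
con.executemany(
    "INSERT INTO daTable (pk, lastname, firstname) VALUES (?, ?, ?)",
    [(1, "Andrews", "Alan"), (2, "Andrews", "John"),
     (3, "Andrews", "Erick"), (4, "Smith", "Jane")],
)

# Correlated subquery: for each lastname, keep only the row whose pk
# equals the minimum pk among the rows sharing that lastname.
rows = con.execute("""
    SELECT lastname, firstname, pk
      FROM daTable AS T
     WHERE pk = (SELECT MIN(pk) FROM daTable WHERE lastname = T.lastname)
""").fetchall()

print(sorted(rows))  # exactly one intact row per lastname
```

Unlike the GROUP BY with MIN/MAX aggregates discussed earlier in the thread, this never mixes column values from different people: the whole row with the lowest pk survives.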

Distinct Query Help

Can anyone tell me how to write this query correctly... Right now it is returning all of the results instead of just the results with unique emails...

SELECT ID, firstname, address1, lastname, address2, city, province, postalCode, phoneNumber, email, referral, cpauseInfo, yukonInfo, dateAdded
FROM Results
WHERE email IN (SELECT DISTINCT email FROM Results)

Please help.

Your query is doing exactly what it was written to do. Post some sample data from the table and the expected output so someone here can help fix your query.

|||

This table has entries that represent contest submissions. The form wasn't built very well and it allowed users to submit results numerous times. I want to write a query that will filter the data based on email address and phone number. So any records that have the same email address should be deleted. Same goes for phone number.

Should I be creating a temporary table or something?

|||

I am going to guess that you want to get a resultset where the COUNT(*) of each email is 1. Here is how this would look:

SELECT r.ID, r.firstname, r.address1, r.lastname, r.address2, r.city, r.province, r.postalCode, r.phoneNumber, r.email, r.referral, r.cpauseInfo, r.yukonInfo, r.dateAdded
FROM Results r
WHERE EXISTS (SELECT 1
              FROM Results rx
              WHERE rx.email = r.email
              GROUP BY rx.email
              HAVING COUNT(*) = 1)
|||

sheldonj:

This table has entries we represent contest submissions. The form wasn't built very well and it allowed users to submit results numerous times. I want to write a query that will filter the data based on email address and phone numbers. So any records that have the same email address should be deleted. Same goes for phone number.

Should I be creating a temporary table or something?

You don't need a temp table. You are attempting to clean duplicated data. Let me refer you to an article I wrote several years agohere. Here is a sample that could remove duplicates by email address.

/* 1. Query heading */
DELETE
FROM /* 2. Table with duplicates */
    Results
WHERE EXISTS (
    SELECT 1
    FROM /* 3. Table with duplicates, with an alias */
        Results b
    WHERE /* 4. Join each field with *itself*. These are fields that could be Primary Keys */
        b.Email = Results.Email
    GROUP BY /* 5. I must GROUP BY these fields because of the HAVING clause and because these are the possible PK */
        b.Email
    HAVING /* 6. This is the determining factor. We can control our output from here. In this case, we want to pick records where dateAdded is greater than the MIN dateAdded */
        Results.[dateAdded] > MIN(b.[dateAdded]) )
|||

Thank you David that was exactly what I was looking for

|||

I think that simplest way to do this is:

SELECT ID, firstname, address1, lastname, address2, city, province, postalCode, phoneNumber, email, referral, cpauseInfo, yukonInfo, dateAdded
FROM Results
WHERE ID IN (SELECT max(ID)
FROM Results
group by email)

it will return only one entry per email from your table.
If you would like to keep only a single record per unique email in your table, run

DELETE FROM Results
WHERE not ID IN (SELECT max(ID)
FROM Results
group by email)
Good luck
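The keep-one-per-email DELETE above is standard SQL, so its behaviour can be checked with a quick throwaway script. A sketch using Python's sqlite3, with a cut-down, invented version of the Results table (just ID and email):

```python
import sqlite3

# Cut-down, invented Results table; only the columns needed for the demo.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Results (ID INTEGER PRIMARY KEY, email TEXT)")
con.executemany(
    "INSERT INTO Results (ID, email) VALUES (?, ?)",
    [(1, "a@example.com"), (2, "a@example.com"),
     (3, "b@example.com"), (4, "b@example.com"), (5, "c@example.com")],
)

# Keep only the highest ID per email and delete every other duplicate,
# mirroring the thread's "DELETE ... WHERE not ID IN (SELECT max(ID) ...)".
con.execute("""
    DELETE FROM Results
     WHERE ID NOT IN (SELECT MAX(ID) FROM Results GROUP BY email)
""")

survivors = con.execute("SELECT ID, email FROM Results ORDER BY ID").fetchall()
print(survivors)  # one row per email remains
```

Note this keeps an arbitrary (highest-ID) submission per email; if "most recent" matters, group on dateAdded instead, as in the earlier reply.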

discover all permissions for a user

Hi folks,

Is there any easy way of finding out, using a query, all permissions that a user has on any securable? Also, what sort of permission (e.g. ALTER, SELECT, INSERT etc..)

I'm going to hunt around on Google but thought I'd post here in case anyone can tell me before I find it.

thanks in advance.

-Jamie

ignore this.

select *

from sys.database_permissions

does me right!

Doh!

-Jamie

Disconnection when 'xp_cmd' is in a comment

If I run the following in Query Analyzer against Microsoft SQL Server
2000 - 8.00.818 while logged in as 'sa':
ALTER PROCEDURE __Test AS
BEGIN
/*
-- x p_cmd
*/
RETURN 0
END
GO
then all is well, but if I remove the space between the 'x' and the
'p' in the comment line then I get the following error:
[Microsoft][ODBC SQL Server Driver][TCP/IP
Sockets]ConnectionCheckForData (CheckforData()).
Server: Msg 11, Level 16, State 1, Line 0
General network error. Check your network documentation.
ODBC: Msg 0, Level 16, State 1
Communication link failure
ODBC: Msg 0, Level 16, State 1
Communication link failure
Connection Broken
I am connecting remotely. Another user who is on the same domain as
the server does not have this problem. I assume some hardware or
software on the connection between the server and myself and the
server is causing the problem.
Does anyone have any idea what this could be?
Thanks
Sylvester

If it's hardware or software connectivity between you and
the server then you'd likely find something in the windows
event logs so you may want to start checking those.
-Sue
On 10 Oct 2004 22:01:39 -0700, junk@.riddell.co.nz
(Sylvester) wrote:

>If I run the following in Query Analyzer against Microsoft SQL Server
>2000 - 8.00.818 while logged in as 'sa':
>
> ALTER PROCEDURE __Test AS
> BEGIN
> /*
> -- x p_cmd
> */
> RETURN 0
> END
> GO
>then all is well, but if I remove the space between the 'x' and the
>'p' in the comment line then I get the following error:
> [Microsoft][ODBC SQL Server Driver][TCP/IP
>Sockets]ConnectionCheckForData (CheckforData()).
> Server: Msg 11, Level 16, State 1, Line 0
> General network error. Check your network documentation.
> ODBC: Msg 0, Level 16, State 1
> Communication link failure
> ODBC: Msg 0, Level 16, State 1
> Communication link failure
> Connection Broken
>
>I am connecting remotely. Another user who is on the same domain as
>the server does not have this problem. I assume some hardware or
>software on the connection between the server and myself and the
>server is causing the problem.
>Does anyone have any idea what this could be?
>Thanks
>Sylvester


2012年3月19日星期一

Discarding an empty result set in a stored proc

I have a stored proc with code similar to this:
<SQL Query 1 Here>
If @@RowCount < 1
Begin
<SQL Query 2 Here>
End
Basically, I want to execute SQL Query 1 and if I don't get any rows,
then execute Query 2. What happens here is that I get two result sets
returned if the first one is empty. I changed my query to this:
Declare @RecCount int
Select @RecCount = <SQL Query 1 Here>
If @RecCount > 0
Begin
<SQL Query 1 Here>
End
Else
Begin
<SQL Query 2 Here>
End
This way, I only get one result set, but I am wondering about
duplicating Query 1 and if that incurs a performance penalty. If so,
how can I get rid of it.
The queries themselves are simple queries that query one table with no
joins or anything else in them.
Any thoughts on this approach? Is there a 'more elegant' way of
accomplishing what I want?
Thanks,
Chris

I would do it like this
If exists (select * from table1 where ...)
Begin
select * from table1 where ...
End
Else
Begin
select * from table2 where ...
End
http://sqlservercode.blogspot.com/|||> What happens here is that I get two result sets
> returned if the first one is empty.
So? Isn't the client smart enough to see that resultset 1 is empty, and
move to the next one? In ADO, you would check for recordset.eof.
Another idea would be to do a UNION, if the resultsets are similar. If the
resultsets are not similar, then the client is going to have to perform some
logic based on which query was successful, no?
A

2012年3月11日星期日

Disaster recovery through log file alone?

A colleague (really not me ;) has accidentally emptied a table in one of his
databases by making a simple typing error in a query, only to notice after
it happened that he had forgotten to create a backup schedule for that
database.
He has full recovery enabled and the log has never been shrunk, so all his data
should still be present there.
Is there any way to use that log file to get his data back up to the point
where he emptied the table?
Automatic recovery would restore everything including the deletion, so what
we need is something like the 'STOPAT' option from RESTORE LOG, except that
there's no backup so RESTORE LOG is out of the question.
Or isn't it?
hi,
"Lucvdv" <replace_name@.null.net> ha scritto nel messaggio
news:v8ads0pkece7kjeelaalribkgt6dtec1b2@.4ax.com
> A collegue (really not me ;) has accidentally emptied a table in one
> of his databases by making a simple typing error in a query, only to
> notice after it happened that he had forgotten to create a backup
> schedule for that database.
> He has full recovery enabled the log has never been shrunk, so all
> his data should still be present there.
> Is there any way to use that log file to get his data back up to the
> point where he emptied the table?
> Automatic recovery would restore everything including the deletion,
> so what we need is something like the 'STOPAT' option from RESTORE
> LOG, except that there's no backup so RESTORE LOG is out of the
> question.
> Or isn't it?
as you do not have a backup to restore from, you are out of luck with
standard SQL Server tools... but...
but Lumigent has a tool, Log Explorer,
http://www.lumigent.com/products/le_sql.html, which can be of interest..
Andrea Montanari (Microsoft MVP - SQL Server)
http://www.asql.biz/DbaMgr.shtmhttp://italy.mvps.org
DbaMgr2k ver 0.9.1 - DbaMgr ver 0.55.1
(my vb6+sql-dmo little try to provide MS MSDE 1.0 and MSDE 2000 a visual
interface)
-- remove DMO to reply
|||On Mon, 20 Dec 2004 13:23:50 +0100, "Andrea Montanari"
<andrea.sqlDMO@.virgilio.it> wrote:

> as you do not have a backup to restore from, you are aout of luck with
> standard SQL Server tools... but..
> but Lumigent has a tool, Log Explorer,
> http://www.lumigent.com/products/le_sql.html, which can be of interest..
Thanks.
I passed it on, but I wonder if his data are going to be worth the $1850
price tag of the product ;)
|||"Lucvdv" <replace_name@.null.net> ha scritto nel messaggio
news:bsnds0to23e1rm8p2lndi1nepcq97idvej@.4ax.com
> On Mon, 20 Dec 2004 13:23:50 +0100, "Andrea Montanari"
> <andrea.sqlDMO@.virgilio.it> wrote:
>
> Thanks.
> I passed it on, but I wonder if his data are going to be worth the
> $1850 price tag of the product ;)
:D
Andrea Montanari (Microsoft MVP - SQL Server)
http://www.asql.biz/DbaMgr.shtmhttp://italy.mvps.org
DbaMgr2k ver 0.9.1 - DbaMgr ver 0.55.1
(my vb6+sql-dmo little try to provide MS MSDE 1.0 and MSDE 2000 a visual
interface)
-- remove DMO to reply

2012年3月8日星期四

Disappearing sp??

Does anyone know why an sp would disappear after logging out and back in to Enterprise Manager in SQL Server 2000? When I execute the sp from Query Analyzer it seems to loop, but it is still there after I cancel it. If I run just the code in Query Analyzer, it runs fine. Because it involves tables in 2 sql servers, to create the sp I run:

SET ANSI_NULLS ON
SET ANSI_WARNINGS ON
SET ANSI_DEFAULTS ON
set XACT_ABORT on
GO
Create procedure xyz as ...

The error reported in the log is :
Error: 15457, Severity: 0, State: 1

It's weird how after I close EM and go back again, the sp is not in the sp list anymore.

Any ideas? PLS HELP!The SPs displayed in EM are cached. If the SP is dropped or created e.g. from a script then the current state will not be displayed in EM. If you refresh the list by right clicking on Stored Procedures then you will get the correct state at that moment - this has the same effect as logging out and logging in.
If an SP is dropped or changes ID then EM will give an error if you try to display it.

Try select name from sysobjects where xtype = 'P' to get the list.

2012年2月24日星期五

Disable update to a table

I want to be able to disable updates to a specific table from the SQL prompt
i.e the Query Analyzer. This table can only be update from the front end
screens. How do I accomplish this?
VJ

Don't give the users who have QA permissions to update that table.
Andrew J. Kelly
SQL Server MVP
"VJ" <vijaybalki@.yahoo.com> wrote in message
news:OpI6o0b8DHA.1636@.TK2MSFTNGP12.phx.gbl...
> I want to be able to disable updates to a specific table from the SQL
prompt
> i.e the Query Analyzer. This table can only be update from the front end
> screens. How do I accomplish this?
> VJ
>|||>> I want to be able to disable updates to a specific table
> Don't give the users who have QA permissions to update that table.
If updates = data changes, you may also try APP_NAME() +
UPDATE/INSERT/DELETE triggers, allowing you to trap these operations and
react to them.
If updates = table schema changes, in Yukon you may also try APP_NAME() +
Data Definition Language (DDL) triggers, allowing you to trap DDL operations
and react to them, optionally rolling back the activity.
sincerely,
--
Sebastian K. Zaklada
Skilled Software
http://www.skilledsoftware.com
This posting is provided "AS IS" with no warranties, and confers no rights.|||Hi ,
You can try application roles, which will allow user access specific to
your application. You can restrict Query Analyzer.
Regards
Ajay
"Sebastian K. Zaklada" <szaklada-dont-like-spam@.skilledsoftware.com> wrote
in message news:OHXqzYc8DHA.488@.TK2MSFTNGP12.phx.gbl...
> If updates = data changes, you may also try APP_NAME() +
> UPDATE/INSERT/DELETE triggers, allowing you to trap these operations and
> react to them.
> If updates = table schema changes, in Yukon you may also try APP_NAME() +
> Data Definition Language (DDL) triggers, allowing you to trap DDL
operations
> and react to them, optionally rolling back the activity.
> sincerely,
> --
> Sebastian K. Zaklada
> Skilled Software
> http://www.skilledsoftware.com
> This posting is provided "AS IS" with no warranties, and confers no
rights.
>|||Yet another method is to create an instead of trigger for update on the
table and do nothing in that trigger.
Tibor Karaszi, SQL Server MVP
Archive at:
http://groups.google.com/groups?oi=djq&as_ugroup=microsoft.public.sqlserver
"Aju" <ajuonline@.yahoo.com> wrote in message
news:ORhLk1f8DHA.2028@.TK2MSFTNGP10.phx.gbl...
> Hi ,
> You can try application roles . which will allow user access specific to
> your application . You can restrict query analyzer .
> Regards
> Ajay
> "Sebastian K. Zaklada" <szaklada-dont-like-spam@.skilledsoftware.com> wrote
> in message news:OHXqzYc8DHA.488@.TK2MSFTNGP12.phx.gbl...
+
> operations
> rights.
>
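The do-nothing trigger idea at the end of this thread can be sketched quickly outside SQL Server. sqlite has no INSTEAD OF triggers on plain tables, so this illustration (via Python's sqlite3) uses the closest analogue, a BEFORE UPDATE trigger that raises; in T-SQL the INSTEAD OF UPDATE trigger body would simply do nothing, silently discarding the update:

```python
import sqlite3

# Invented table name; the trigger blocks all UPDATEs against it.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE guarded (id INTEGER PRIMARY KEY, val TEXT)")
con.execute("INSERT INTO guarded VALUES (1, 'original')")
con.execute("""
    CREATE TRIGGER block_updates BEFORE UPDATE ON guarded
    BEGIN
        SELECT RAISE(ABORT, 'updates to this table are disabled');
    END
""")

try:
    con.execute("UPDATE guarded SET val = 'hacked' WHERE id = 1")
except sqlite3.DatabaseError as e:   # RAISE(ABORT, ...) surfaces here
    print("update rejected:", e)

print(con.execute("SELECT val FROM guarded").fetchone())  # row unchanged
```

Note this blocks updates from every client, including the front end; the permissions and application-role answers above are the way to block only ad-hoc tools.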


2012年2月19日星期日

Disable Save Changes prompt

Is there a way in SQL Mgt Studio to disable the "Save changes to the following items?" prompt that comes up every time I close a query window?

No, currently there is no way to disable the prompt to save unsaved changes in the T-SQL script editor.

You can submit suggestions for Management Studio functionality at http://connect.microsoft.com/sqlserver. We use customer feedback like this when we are prioritizing future work.

Thanks,

Steve

disable recompile

Sometimes our stored procedures recompile and they get a poor query plan.
How can one stop the stored procs from recompiling ?
Possibly investigate the KEEPFIXED PLAN optimizer hint. But read
http://msdn.microsoft.com/library/en-us/dnsql2k/html/sql_queryrecompilation.asp and
http://www.microsoft.com/technet/prodtechnol/sql/2005/recomp.mspx first.
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
"Just Me" <justme@.hotmail.com> wrote in message news:e3PdX787FHA.720@.TK2MSFTNGP10.phx.gbl...
> Sometimes our stored procedures recompile and they get a poor query plan.
> How can one stop the stored procs from recompiling ?
>
