
Sunday, March 25, 2012

disk drive layout for best database performance

In creating a database, which option will provide the best performance in an
OLTP environment? Our server has 2 fiber channel HBAs that are load balanced
and attached to an EMC SAN. Should the database be created in a single file on
one 200 GB drive spread across 24 disk drives, or four 50 GB drives spread
across 6 disk drives each?
--
Danne|||Hi,
You can get more out of 4 files, but you'll have to do some table
movement between files to find the optimal places for tables, so work
can be distributed equally across all 4 files.
Danijel Novak
"Danne" <Danne@.discussions.microsoft.com> wrote in message
news:DF3F5578-1E5D-4DBC-B7B1-8FBDA952DCB3@.microsoft.com...
> In creating a database, which option will provide the best performance in an
> OLTP environment? Our server has 2 fiber channel HBAs that are load
> balanced
> and attached to an EMC SAN. Should the database be created in a single file
> on
> one 200 GB drive spread across 24 disk drives, or four 50 GB drives spread
> across 6 disk drives each?
> --
> Danne|||Not sure of your setup, but with our servers running off a SAN it is hard to
tweak performance by spreading across disks, as the Shark has a crap load of
cache, which makes the point moot. The only thing to check is whether other
apps will contend for the same disks on your storage. If disks are dedicated to
that server then I would bet that cache would even out almost any setup you
make. But again, I don't know your setup.
"Danijel Novak" <danijel.novak@.triera.net> wrote in message
news:Okdge5lAGHA.516@.TK2MSFTNGP15.phx.gbl...
> Hi,
> You can get more out of 4 files, but you'll have to do some table
> movement between files to find the optimal places for tables, so
> work can be distributed equally across all 4 files.
> --
> Danijel Novak
>
> "Danne" <Danne@.discussions.microsoft.com> wrote in message
> news:DF3F5578-1E5D-4DBC-B7B1-8FBDA952DCB3@.microsoft.com...
>|||We have 16 GB of cache on the Symm. The physical drives are 146 GB with 9 GB
logical slices. There are other applications using the same physical drives.
We have allocated one 9 GB drive for the transaction log and a second 9 GB drive
for system databases. We are allocating four 9 GB devices for tempdb per a
recommendation from the software vendor. The 200 GB drive or four 50 GB drives
are for the data file(s). I am not sure if we will see any difference
between the two choices, but I wanted to ask to see what best practices are.
Thanks for your comments.
--
Danne
"David J. Cartwright" wrote:

> Not sure of your setup, but with our servers running off a SAN it is hard to
> tweak performance by spreading across disks, as the Shark has a crap load of
> cache, which makes the point moot. The only thing to check is whether other
> apps will contend for the same disks on your storage. If disks are dedicated to
> that server then I would bet that cache would even out almost any setup you
> make. But again, I don't know your setup.
> "Danijel Novak" <danijel.novak@.triera.net> wrote in message
> news:Okdge5lAGHA.516@.TK2MSFTNGP15.phx.gbl...
>
>|||Danne,
The general rule of thumb for RAID is the more head/disks you have, the
better your performance. Of course it also has to do with the total
cache available on the SAN controllers. This is from the hardware/RAID
level. All the RAID drives created for SQL Server, or any database
should be RAID 0+1 or 10. DO NOT USE RAID 5!
Shahryar
Danne wrote:

>In creating a database, which option will provide the best performance in an
>OLTP environment? Our server has 2 fiber channel HBAs that are load balanced
>attached to an EMC SAN. Should the database be created in a single file on
>one 200 GB drive spread across 24 disk drives, or four 50 GB drives spread
>across 6 disk drives each?
>
Shahryar G. Hashemi | Sr. DBA Consultant
InfoSpace, Inc.
601 108th Ave NE | Suite 1200 | Bellevue, WA 98004 USA
Mobile +1 206.459.6203 | Office +1 425.201.8853 | Fax +1 425.201.6150
shashem@.infospace.com | www.infospaceinc.com
|||Best practice would accomplish as many of the following as possible:
- avoid having to read and write to the same disk at the same time
- avoid contention with other applications
- keep log and data files on separate drives
- keep tempdb on a separate drive from the log and data files
With that said, having four 50 GB drives for data would give you some flexibility
with filegroups and putting certain tables on certain groups (i.e. drives),
but a SAN blurs the strategy: even though the drives may be separated
from each other, there may be other apps needing to use that same drive when
you need it, or the "different" drives could actually be the same drives
anyway. If you can get drives allocated from different physical drives on
the SAN, and have these drives exclusively, that would be the best choice.
If you can do this, then having four 50 GB drives spread across the storage
unit would allow you to put tables that are joined etc. on different
drives; then when a query joins or updates or whatever, it would
read/write from different locations: the data would come from one
drive, tempdb processing would occur on another, and transactions would be
logged to yet another. But I would guess that unless the storage unit is
heavily used, this would all happen in cache on the storage unit and
would appear to be light speed to your server; your server would continue
on its way and the actual data would be read or written from cache at the
unit's leisure.
Locating the data, log and tempdb on separate drives gives you flexibility
in the future also.
But... on one of our systems we found the best performance when all were
on the same drive.
Location on the storage solution, cache, and contention with other apps are
what you need to tweak.
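A minimal sketch of the four-file layout (hypothetical drive letters, sizes, and database name; one filegroup, so SQL Server's proportional fill spreads allocations across all four files):
CREATE DATABASE Sales
ON PRIMARY
    (NAME = Sales_Data1, FILENAME = 'E:\Data\Sales_Data1.mdf', SIZE = 51200MB),
    (NAME = Sales_Data2, FILENAME = 'F:\Data\Sales_Data2.ndf', SIZE = 51200MB),
    (NAME = Sales_Data3, FILENAME = 'G:\Data\Sales_Data3.ndf', SIZE = 51200MB),
    (NAME = Sales_Data4, FILENAME = 'H:\Data\Sales_Data4.ndf', SIZE = 51200MB)
LOG ON
    (NAME = Sales_Log, FILENAME = 'L:\Log\Sales_Log.ldf', SIZE = 9216MB)
-- To pin specific tables to specific drives instead, put each file in its own
-- filegroup and use CREATE TABLE ... ON <filegroup>.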
"Danne" <Danne@.discussions.microsoft.com> wrote in message
news:8A8E8377-198A-439A-9D4B-F32D74EC455C@.microsoft.com...[vbcol=seagreen]
> We have 16 GB of cache on the Symm. The physical drives are 146 GB with 9
> GB
> logical slices. There are other applications using the same physical
> drives.
> We have allocated one 9 GB drive for the transaction log and a second 9 GB drive
> for system databases. We are allocating four 9 GB devices for tempdb per a
> recommendation from the software vendor. The 200 GB drive or four 50 GB
> drives
> are for the data file(s). I am not sure if we will see any difference
> between the two choices, but I wanted to ask to see what best practices
> are.
> Thanks for your comments.
> --
> Danne
>
> "David J. Cartwright" wrote:
>|||What's wrong with RAID 5? We have been using it for a long time. It
has been an industry standard for a while, right?
Shahryar G. Hashemi wrote:

> Danne,
> The general rule of thumb for RAID is the more head/disks you have, the
> better your performance. Of course it also has to do with the total
> cache available on the SAN controllers. This is from the hardware/RAID
> level. All the RAID drives created for SQL Server, or any database
> should be RAID 0+1 or 10. DO NOT USE RAID 5!
> Shahryar
> Danne wrote:
>
>|||In article <e2GoRhnAGHA.3804@.TK2MSFTNGP14.phx.gbl>, kkogan@.haiint.com
says...
> What's wrong with RAID 5? We have been using it for a long time. It
> has been an industry standard for a while, right?
The problems he lists (in another post) as reasons NOT to use R5
have not been experienced on any of the hundreds of servers we maintain.
Not once, never, nada.
We do Mirrors for Log files and R5 (normally 5xdrives + hot spare) for
the database data files.
All RAID Arrays on their own controller, on a single channel, with RAM
on the card (and battery).
spam999free@.rrohio.com
remove 999 in order to email me|||"Shahryar G. Hashemi" wrote:
> All the RAID drives created for SQL Server, or any database
> should be RAID 0+1 or 10. DO NOT USE RAID 5!
Wow, all of my production servers run RAID 5.
I'll go immediately change them to RAID 10.


Thursday, March 22, 2012

Discussion forum - data structure

I'm creating a discussion forum for my website, using SQL Server 2000. I need to display 'Number of threads', 'Number of posts', and 'Last post by' (username/id and date) for each forum, and 'Number of replies' and 'Last post by' (username/id and date) for each thread.
Here are a couple of ideas I have come up with to solve this problem:
1) Poll the database (using a stored procedure that returns the number I'm looking for) for each forum when I loop over all the forums.
- I suspect this approach isn't optimal, since it creates more traffic to the database.
2) Have fields in my Forum and ForumPost tables for 'Number of threads' and so on, and create triggers that update these fields every time a post is made (see the trigger sketch below).
- I guess this would be much more efficient than the first approach, since everything is done on the database server directly.
Are there any other ways that are better? Please advise!
Thanks a bunch for any help!
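A minimal sketch of the trigger idea from option 2 (hypothetical table and column names; assumes Forum carries a cached PostCount column):
CREATE TRIGGER trg_ForumPost_Insert ON ForumPost
AFTER INSERT
AS
BEGIN
    -- Bump the cached post count for every forum that received new posts
    UPDATE f
    SET PostCount = f.PostCount + i.NewPosts
    FROM Forum f
    INNER JOIN (SELECT ForumId, COUNT(*) AS NewPosts
                FROM inserted
                GROUP BY ForumId) i
        ON i.ForumId = f.ForumId
END
A matching DELETE trigger would decrement the counts; 'Last post by' can be kept the same way or derived with MAX() at query time.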
You need to look into using aggregation functions like COUNT and SUM; you might want to look into MAX for your last post date/time.
select f.ForumName, tc.PostCount
from Forum f,
( Select Count(*) AS PostCount, ForumId
From ForumPost
Group By ForumId
) tc
where tc.ForumId = f.ForumId
This will give you a count of all the posts for a forum, for example.
Another idea is to download the code for Community Server Forum and see how they do it, as they seem to have it working OK: http://www.telligentsystems.com/Solutions/Forums/|||Yup, I know how to write the SQL to retrieve the values I need. What I was looking for was more which way is more efficient: retrieving the values from the database for each forum I have (doing it while I loop the forums and draw out my page), or whether it would be wiser to create some triggers on the tables that update some fields every time a new post is saved.
I will take a look at the community server forum to see how they did it. It might give me a good idea.
Thanks!|||Sorry, I misunderstood.
I can only speculate, which I will do. I think your trigger option would be the fastest for retrieval, as more people tend to look at threads and posts than actually post. But I am not sure why you would need to iterate through each forum; your SQL should be able to return all this information in one recordset.
Good luck.|||

It's all good!

I think you have a good point there. I guess I could return the number of posts etc. when I query for forums and wherever I need to sum up values like that. I don't really need to store that stuff anywhere.

Thanks a lot for that input!

|||

I'm struggling a little bit with my SQL statement. I need it to return all records from the Forum table regardless of data in the other tables (ForumMessage and Member). This statement doesn't do so. Any ideas?
Here's the statement:
SELECT F.ForumID, F.ForumCategoryID, F.Name, F.Description, F.FromDate, F.ToDate,
TC.Threads, TC.ForumId AS LastPostId, TC.MemberId AS LastPostByMemberId,
MC.Posts, MN.LastPostUserName

FROM
Forum F,
(
SELECT COUNT(*) AS Threads, ForumId, MemberId FROM ForumMessage
WHERE ParentForumMessageId = 0
GROUP BY ForumId, MemberId
) TC,
(
SELECT COUNT(*) AS Posts FROM ForumMessage
) MC,
(
SELECT MemberId, DisplayName AS LastPostUserName FROM Member
) MN

WHERE TC.ForumId = F.ForumId
AND MN.MemberId = TC.MemberId

ORDER BY F.Name

|||When you need to return data from one table regardless of the existence of data in a related table, this is the case for an OUTER JOIN. What you have will always use an INNER JOIN and will therefore limit your resultset to rows where there is related data.
Try it like this instead:
DECLARE @.PostCount INT
SELECT @.PostCount = COUNT(*) FROM ForumMessage
SELECT F.ForumID, F.ForumCategoryID, F.Name, F.Description, F.FromDate, F.ToDate,
TC.Threads, TC.ForumId AS LastPostId, TC.MemberId AS LastPostByMemberId,
@.PostCount, MN.LastPostUserName
FROM
Forum F
LEFT OUTER JOIN
(
SELECT COUNT(*) AS Threads, ForumId, MemberId FROM ForumMessage
WHERE ParentForumMessageId = 0
GROUP BY ForumId, MemberId
) TC ON F.ForumID = TC.ForumID
LEFT OUTER JOIN
(
SELECT MemberId, DisplayName AS LastPostUserName FROM Member
) MN ON TC.MemberID = MN.MemberID
ORDER BY F.Name


|||Excellent! Thank you!

Discussion about how to create a fact drillthrough dimension the best way..

I have been running into some troubles creating a fact drillthrough dimension. The scenario is as follows:

You have a fact table (in this case with around 50 million rows) holding accounting data. Within that fact table you have a column that holds some information in a free-text field (let's say varchar(150), named "Account_information") that is useful for the business user when they want to do a drillthrough on a cell in the cube.

Now how would you design your cube to be able to drillthrough to that information ?

The solution, the way I see it, is to create a fact dimension containing those 50 million rows....

The fact table (Fact_Finance) looks like this:

Unique_ID(int), Account_information (varchar(150)), Amount, Column_x....Column_y

In the DSV I then add a view that is a "Select Unique_ID, Account_information from Fact_Finance".

With that view I then create a dimension with

Attribute keycolumn = Unique_ID and

Attribute namecolumn = Account_information

Setting the dimension to process mode ROLAP often results in a timeout in the 3rd-party front-end tool, so in my case the only option is to use MOLAP for the dimension.

To be able to add the attribute in the Action, I need to add the dimension in the dimension structure, binding the Unique_ID from the dimension to the Fact_Finance table. To me that seems to dramatically slow down browsing the cube - comments?

The MOLAP processing of the drillthrough dimension becomes pretty heavy, with a SELECT DISTINCT on the varchar column, even with an index created through the Data Tuning Wizard.

I hope that someone can comment on their experience with creating a fact drillthrough dimension, since this approach has a big impact on both the processing and browsing of the cube.
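One thing that may take the edge off the SELECT DISTINCT during MOLAP processing (a hedged sketch, using the column names from the post) is a covering nonclustered index on exactly the two columns the dimension view selects:
CREATE NONCLUSTERED INDEX IX_FactFinance_Drill
ON dbo.Fact_Finance (Unique_ID)
INCLUDE (Account_information)
-- Processing the key/name attributes can then scan this narrow index
-- instead of the whole 50-million-row fact table.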

"To be able to add the Attribute in the Action i need to add the dimension in the dimension structure, binding the Unique_ID from the dimension to the Fact_Finance table. " - are you saying that the fact dimension is configured with a regular relation (not a fact relation) to the measure group? This shouldn't be necessary - for example, the "Reseller Details" action in Adventure Works returns attributes from the fact dimension.

Another option worth trying (if you have Enterprise Edition) is to create a (text) measure with AggregateFunction of "None" on the Account_information field (rather than configuring it as the name of the fact key attribute). This measure can be included in the drillthrough action return list.

|||

If I try that I'll get the following error:

Error 1 Errors in the metadata manager. The data type of the 'Account_Description' measure is not valid because the data type of the measure is a string type.

AggregationType set to None

Datatype is WChar

I'm trying to run in the Developer edition.

|||

Looks like you're right - when I set up a test scenario, I got the same error as well. This approach was suggested in an earlier thread here:

the attributes of Fact Table could not be displayed when drill through?

>>

...

Sure you have got a huge dimension.
But you shouldn't create dimension, you can make text measure.

...

You can also create a non aggregated measure and then use that when drilling through.
>>

So scratch that - here's another thread on a similar subject, with an approach from (ahem) a more authoritative source:

String measures in Analysis services cube

>>

...

The usual workaround in both AS2000 and AS2005 for such a requirement is to create a hidden dimension TextDescriptions with an integer key and string name, i.e. you will have a dimension table like:

Key Name

1 Optimal

2 Needs Improvement

3 Failed

...

You will also have hidden measure DescriptionKey in the cube.

Then you use integers as a measure, and create a calculation for it, like the following:

CREATE MEMBER CURRENTCUBE.Measures.Description AS

' IIF(DescriptionKey = 0, "",

Filter(TextDescriptions.MEMBERS, Val(TextDescriptions.CurrentMember.Properties("Key"))=Measures.DescriptionKey).Item(0).Item(0).Name)


Mosha - http://www.mosha.com/msolap
>>

Another approach which I've used recently is to add a text attribute to the fact table, with its attribute hierarchy disabled. But in this case, since the client tool is Excel 2003, users don't access this text via drillthrough - instead, the fact dimension is directly added (after appropriate slices have been set, of course) on the rows of a pivot table. The text is then dropped in the data area, using a calculated measure which references the Member Property for the text. Excel 2003 allows a Member Property to be added to a pivot table as well, but I found this to be slower and less obvious than a calculated text measure.

The Analysis Services 2005 Performance Guide discusses when to disable an attribute hierarchy:

...

Reducing attribute overhead

...

Turn off the attribute hierarchy and use member properties

As an alternative to attribute hierarchies, member properties provide a different mechanism to expose dimension information. For a given attribute, member properties are automatically created for every attribute relationship. For the primary key attribute, this means that every attribute that is directly related to the primary key is available as a member property of the primary key attribute.

If you only want to access an attribute as member property, once you verify that the correct relationship is in place, you can disable the attribute’s hierarchy by setting the AttributeHierarchyEnabled property to False. From a processing perspective, disabling the attribute hierarchy can improve performance and decrease cube size because the attribute will no longer be indexed or aggregated. This can be especially useful for high cardinality attributes that have a one-to-one relationship with the primary key. High cardinality attributes such as phone numbers and addresses typically do not require slice-and-dice analysis. By disabling the hierarchies for these attributes and accessing them via member properties, you can save processing time and reduce cube size.

Deciding whether to disable the attribute’s hierarchy requires that you consider both the querying and processing impacts of using member properties. Member properties cannot be placed on a query axis in the same manner as attribute hierarchies and user hierarchies. To query a member property, you must query the properties of the attribute that contains the member property. For example, if you require the work phone number for a customer, you must query the properties of customer. As a convenience, most front-end tools easily display member properties in their user interfaces.

...


Discussion about dynamic creation of partitions in AS2005

I have searched around and found very little about dynamically creating (and processing) partitions in AS2005.

In AS2000 it is actually quite easy to manage partitions dynamically using a metadata table and DSO, but I had hoped that they had come a little further in AS2005. Basically it's all about dynamically creating a partition for each month of a year, or just partitioning by year. Does anyone have some good input about how to do it in AS2005?

Furthermore, does someone have a comment on which of the following options is preferable (or maybe a completely different way) concerning partitions?

Option 1:

Partition "FACT_SALES_2005" points to a view called "v_FACT_SALES_2005" that is a "Select col1,col2 from FACT_SALES where period_year = 2005". The FACT_SALES table has a clustered index on period_year.

This way it's easy to manage the views using a stored procedure (create a new view when we enter a new month, etc.) and all the data resides in one single table, which eases the ETL (in my opinion); see the sketch after these options.

Option 2:

Partition "FACT_SALES_2005" points to a table called "FACT_SALES_2005".

This way it connects directly to the table, which is somewhat OK when it's partitioned by year, but what if you want to partition by month or week (52 weeks a year for 5 years of data)? Then it becomes hard to manage.

Option 3:

Use partitioned tables and somehow point to a specific partition in the partitioned table from AS2005.

This I haven't tried. But how will it perform? If you have a partitioned table that is partitioned by month, how do you refer to, let's say, April 2006 in that partitioned table (FACT_SALES) from AS2005? And how about setting the Slice value of that partition...
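A minimal sketch of the option 1 view management as a stored procedure (hypothetical names; yearly partitions, and a monthly version would simply add the month to the view name and the WHERE clause):
CREATE PROCEDURE dbo.usp_CreatePartitionView
    @year int
AS
BEGIN
    DECLARE @sql nvarchar(2000)
    -- Build v_FACT_SALES_<year>, selecting only that year's rows
    SET @sql = N'CREATE VIEW dbo.v_FACT_SALES_' + CAST(@year AS nvarchar(4))
             + N' AS SELECT col1, col2 FROM dbo.FACT_SALES WHERE period_year = '
             + CAST(@year AS nvarchar(4))
    EXEC sp_executesql @sql
END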

Hi there,

I've had the same problem and written an assembly in .NET. This assembly does all the things you can do from the GUI.

So I can process, create, ... partitions from a stored procedure in SQL. It works very nicely. If you have detailed questions, ask me.

Kind Regards

Andy Löwen

|||

You might take a look at:
http://www.codeplex.com/ASStoredProcedures/Wiki/View.aspx?title=Partition

That function isn't in the latest release, so you'll have to download the source code and compile it yourself from:
http://www.codeplex.com/ASStoredProcedures/SourceControl/ListDownloadableCommits.aspx

|||

It is definitely worth having a look at the code that furmangg points to. You should be able to use this "out of the box" or if not it will give you a good starting point.

There is one variation that is probably the easiest to manage which is similar to option 1. And that is to have all your data in a single table and then use a query as the data source for the partition to only select the data for a specific period. (essentially storing the view definition in the partition rather than creating a separate view object in the relational database)

And just a quick note on option 3. When you have a partitioned table it appears to all the other applications as if it is a single table, the data is partitioned based on specific data values, so selecting data from a particular partition is simply a matter of using a where clause that matches the partitioning scheme.

|||

Nice, I'll have a look at it.

I just think it's funny that Microsoft hasn't put more attention into partitioning setup/handling, since it's the most obvious thing to use when working with large cubes, which again is what SQL Server 2005 is positioning itself to handle very well.


Saturday, February 25, 2012

Disabling One Input Parameter...?

Hi All,
I am creating a report which calls for four input parameters A, B, C, and D.
How can I disable D if A, B, and C are selected or disable C if A, B, and D
are selected?
Thx, Chuck|||This is not supported. The only way to achieve this right now is to
implement your own front end application which handles the parameter UI and
then e.g. uses SOAP or URL-access to pass the "validated" parameters to the
report server.
--
This posting is provided "AS IS" with no warranties, and confers no rights.
"Chuck" <Chuck@.discussions.microsoft.com> wrote in message
news:32F07069-7C11-4608-9E86-68796ED25A99@.microsoft.com...
> Hi All,
> I am creating a report which calls for four input parameters A, B, C, and
> D.
> How can I disable D if A, B, and C are selected or disable C if A, B, and
> D
> are selected?
> Thx, Chuck|||Set up a table in your database called "NILL" with a column called "NILL"
as well. Use the 'From query' option in the report parameters and create a
query which points to this table. Set your Non-queried Default value to
NILL.
Create a dataset for each report parameter. Some duplication, but it is quick
and it works. I have done this for database table and column names, where one
or more can be selected and the column info is passed back as a result
alias and is returned independently of the other report parameters.
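A minimal sketch of the NILL table (assuming a plain varchar column is enough):
CREATE TABLE dbo.NILL (NILL varchar(10))
INSERT INTO dbo.NILL (NILL) VALUES ('NILL')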
Eg. DBTableNames
Select Name, Id
From sysobjects Where xtype = 'U' And ID <> 197555541
Order by Name
DBTablesColNames relies on a table name being selected before the report will
allow the selection of a column:
Select Name
From syscolumns
Where id = (Select Id
From sysobjects
Where xtype = 'U' AND Name = @DBName)
** @DBName = Parameters!TableParameter.Value **
I had to implement custom code in order for this to be of benefit.
Eg. Dataset CustomSQL
= Code.GetSQL(Parameters!TableParameter.Value, Parameters!ColumnParameter.Value,
Parameters!EndDate.Value, Parameters!Interval.Value, Parameters!NoOfIntervals.Value)
As you can see from the dataset CustomSQL, this is one of my so-called
"Adhoc Reports", which allows the user to choose a database table and one of
its columns with a given date, an interval (i.e. Daily, Monthly, etc.), and the
No. of intervals. I have three tables on the report page that each call this
dataset, passing in their own report parameters.
Note you have to manually set the name and database field under "Fields" to
Results to get any data back.

Disabling drill down functionality for a column

Hi,

I am creating an ad hoc report using Report Builder in RS2005. The created report has a few columns, and some of them have drill-down functionality, but is it possible to have a particular column that cannot be drilled into?

prashant

I am sorry... it was easy. I found it out myself; it's in VS.

Sunday, February 19, 2012

disable publishing and distribution error

OK, which database are you creating sysmergepublications and
sysmergesubscriptions in?
I've created sysmergesubscriptions in distribution, master, msdb...
I still receive an error when I run
use master
exec sp_dropdistributor @no_checks = 1
go
I receive :
Msg 208, Level 16, State 1, Procedure sp_MSmergepublishdb, Line 103
Invalid object name 'dbo.sysmergesubscriptions'.
"Hilary Cotter" <hilary.cotter@.gmail.com> wrote in message
news:%233PUslE$GHA.896@.TK2MSFTNGP03.phx.gbl...
> create table sysmergepublications
> (
> publisher sysname,
> publisher_db sysname,
> name sysname,
> description nvarchar(510),
> retention int,
> publication_type tinyint,
> pubid uniqueidentifier,
> designmasterid uniqueidentifier,
> parentid uniqueidentifier,
> sync_mode tinyint,
> allow_push int,
> allow_pull int,
> allow_anonymous int,
> centralized_conflicts int,
> status tinyint,
> snapshot_ready tinyint,
> enabled_for_internet bit,
> dynamic_filters bit,
> snapshot_in_defaultfolder bit,
> alt_snapshot_folder nvarchar(510),
> pre_snapshot_script nvarchar(510),
> post_snapshot_script nvarchar(510),
> compress_snapshot bit,
> ftp_address sysname,
> ftp_port int,
> ftp_subdirectory nvarchar(510),
> ftp_login sysname,
> ftp_password nvarchar(1048),
> conflict_retention int,
> keep_before_values int,
> allow_subscription_copy bit,
> allow_synctoalternate bit,
> validate_subscriber_info nvarchar(1000),
> ad_guidname sysname,
> backward_comp_level int,
> max_concurrent_merge int,
> max_concurrent_dynamic_snapshots int,
> use_partition_groups smallint,
> dynamic_filters_function_list nvarchar(1000),
> partition_id_eval_proc sysname,
> publication_number smallint,
> replicate_ddl int,
> allow_subscriber_initiated_snapshot bit,
> distributor sysname,
> snapshot_jobid binary(16),
> allow_web_synchronization bit,
> web_synchronization_url nvarchar(1000),
> allow_partition_realignment bit,
> retention_period_unit tinyint,
> decentralized_conflicts int,
> generation_leveling_threshold int,
> automatic_reinitialization_policy bit
> )
> --
> Hilary Cotter
> Director of Text Mining and Database Strategy
> RelevantNOISE.Com - Dedicated to mining blogs for business intelligence.
> This posting is my own and doesn't necessarily represent RelevantNoise's
> positions, strategies or opinions.
> Looking for a SQL Server replication book?
> http://www.nwsu.com/0974973602.html
> Looking for a FAQ on Indexing Services/SQL FTS
> http://www.indexserverfaq.com
>
> "John A Grandy" <johnagrandy-at-yahoo-dot-com> wrote in message
> news:%23exvteE$GHA.4196@.TK2MSFTNGP03.phx.gbl...
>
Publication database. Check which databases are published for merge
replication and put it there.
Hilary Cotter
Director of Text Mining and Database Strategy
RelevantNOISE.Com - Dedicated to mining blogs for business intelligence.
This posting is my own and doesn't necessarily represent RelevantNoise's
positions, strategies or opinions.
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
Looking for a FAQ on Indexing Services/SQL FTS
http://www.indexserverfaq.com
"John A Grandy" <johnagrandy-at-yahoo-dot-com> wrote in message
news:uyEq9AQBHHA.4672@.TK2MSFTNGP02.phx.gbl...
> Ok, which database are you creating sysmergepublications and
> sysmergesubscriptions in ?
> I've created sysmergesubscriptions in distribution , master , msdb ...
> I still receive an error when I run
> use master
> exec sp_dropdistributor @no_checks = 1
> go
> I receive :
> Msg 208, Level 16, State 1, Procedure sp_MSmergepublishdb, Line 103
> Invalid object name 'dbo.sysmergesubscriptions'.
>
> "Hilary Cotter" <hilary.cotter@.gmail.com> wrote in message
> news:%233PUslE$GHA.896@.TK2MSFTNGP03.phx.gbl...
>
|||My database is no longer being published for replication because I used
exec sp_replicationdboption 'databasename','merge publish',false
Nonetheless, I added a dbo.sysmergesubscriptions table to this database and
ran
use master
exec sp_dropdistributor @no_checks = 1
go
but I still receive the error :
Msg 208, Level 16, State 1, Procedure sp_MSmergepublishdb, Line 103
Invalid object name 'dbo.sysmergesubscriptions'.
Could it be because I need to add dbo.sysmergesubscriptions as a system
object? How do I do this?
"Hilary Cotter" <hilary.cotter@.gmail.com> wrote in message
news:eZifFJQBHHA.2304@.TK2MSFTNGP02.phx.gbl...
> Publication database. Check which databases are published for merge
> replication and put it there.
> --
> Hilary Cotter
> Director of Text Mining and Database Strategy
> RelevantNOISE.Com - Dedicated to mining blogs for business intelligence.
> This posting is my own and doesn't necessarily represent RelevantNoise's
> positions, strategies or opinions.
> Looking for a SQL Server replication book?
> http://www.nwsu.com/0974973602.html
> Looking for a FAQ on Indexing Services/SQL FTS
> http://www.indexserverfaq.com
>
> "John A Grandy" <johnagrandy-at-yahoo-dot-com> wrote in message
> news:uyEq9AQBHHA.4672@.TK2MSFTNGP02.phx.gbl...
>
|||These are the lines around line 103:
if not exists (select * from dbo.sysmergesubscriptions
               where UPPER(subscriber_server) = UPPER(publishingservername())
                 and db_name = db_name() and subid <> pubid)
begin
    select @ignore_merge_metadata = 1
end
It is complaining about the database you are running the command in. Is the
table there? Does it have the owner dbo?
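A quick way to check both (a hedged sketch, assuming SQL Server 2005 catalog views):
-- Which databases are still flagged as published for merge?
SELECT name FROM sys.databases WHERE is_merge_published = 1
-- And does the table exist, owned by dbo, in the current database?
SELECT OBJECT_ID('dbo.sysmergesubscriptions')  -- NULL means it is missing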
Hilary Cotter
Director of Text Mining and Database Strategy
RelevantNOISE.Com - Dedicated to mining blogs for business intelligence.
This posting is my own and doesn't necessarily represent RelevantNoise's
positions, strategies or opinions.
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
Looking for a FAQ on Indexing Services/SQL FTS
http://www.indexserverfaq.com
"John A Grandy" <johnagrandy-at-yahoo-dot-com> wrote in message
news:ePkGbNQBHHA.1220@.TK2MSFTNGP04.phx.gbl...
> My database is no longer being published for replication because I used
> exec sp_replicationdboption 'databasename','merge publish',false
> Nonetheless, I added a dbo.sysmergesubscriptions table to this database
> and ran
> use master
> exec sp_dropdistributor @no_checks = 1
> go
> but I still receive the error :
> Msg 208, Level 16, State 1, Procedure sp_MSmergepublishdb, Line 103
> Invalid object name 'dbo.sysmergesubscriptions'.
> Could it be because I need to add dbo.sysmergesubscriptions as a system
> object ? How do I do this ?
>
> "Hilary Cotter" <hilary.cotter@.gmail.com> wrote in message
> news:eZifFJQBHHA.2304@.TK2MSFTNGP02.phx.gbl...
>

Tuesday, February 14, 2012

Disable dependency checks when creating a view

Is it possible to disable checks for the existence of referenced tables (views,
user functions) when creating a new view?
Thanks in advance.|||The only way would be through CREATE SCHEMA:
CREATE SCHEMA AUTHORIZATION ross
create view v as select * from t
create table t(c1 int)
More information in Books Online.
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
Blog: http://solidqualitylearning.com/blogs/tibor/
"Amid" <Amid@.discussions.microsoft.com> wrote in message
news:DEFF0F05-66DD-4ABF-A483-9D934E8A529C@.microsoft.com...
> Is it possible to disable checks for the existence of referenced tables (views,
> user functions) when creating a new view?
> Thanks in advance.|||Thanks for the quick reply.
What if I'm unable to create table [t] within the same statement where [v]
is created?
I need to create it much later.
Any help will be appreciated.
"Tibor Karaszi" wrote:

> Only way would be through CREATE SCHEMA:
> CREATE SCHEMA AUTHORIZATION ross
> create view v as select * from t
> create table t(c1 int)
> More information in Books Online.
> --
> Tibor Karaszi, SQL Server MVP
> http://www.karaszi.com/sqlserver/default.asp
> http://www.solidqualitylearning.com/
> Blog: http://solidqualitylearning.com/blogs/tibor/
>
> "Amid" <Amid@.discussions.microsoft.com> wrote in message
> news:DEFF0F05-66DD-4ABF-A483-9D934E8A529C@.microsoft.com...
>|||You can't do that. When a view is created, SQL Server needs to populate the system tables
(syscolumns) with the columns that the view has (the SELECT in the view definition). SQL Server
cannot do that if the table(s) that the view uses don't exist.
Why do you have this need? Sounds like a strange requirement to me...
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
Blog: http://solidqualitylearning.com/blogs/tibor/
"Amid" <Amid@.discussions.microsoft.com> wrote in message
news:2C520E2E-FE0C-41D7-95C8-DA19CB87A78F@.microsoft.com...
> Thanks for the quick reply.
> What if i'm unable to create table [t] within the same statement where [v]
> is created?
> I need to create it much later.
> Any help will be appreciated.
> "Tibor Karaszi" wrote:
>|||I need to make a copy of the database structure (tables, views, SPs, UDDTs, UDFs,
constraints, etc.) with several modifications. Unfortunately, the original database
may be inconsistent (views and user functions in it may refer to other
databases' tables or views). I cannot gain access to those tables, but the
structure has to be copied. Thus I need to create views and user functions
(SPs) without actually having the needed tables or views.
Thanks.
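One partial workaround worth noting (a hedged sketch, not from this thread): unlike views, stored procedures use deferred name resolution, so they can be created before the tables they reference exist:
-- Succeeds even though dbo.t does not exist yet:
CREATE PROCEDURE dbo.usp_v AS SELECT * FROM dbo.t
GO
-- The equivalent CREATE VIEW fails with Msg 208 (invalid object name)
-- until dbo.t has been created.
Views and user-defined functions, however, are checked at creation time, which is what forces the CREATE SCHEMA trick above.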
"Tibor Karaszi" wrote:

> You can't do that. When a view is created, SQL Server needs to populate the system tables
> (syscolumns) with the columns that the view has (the SELECT in the view definition). SQL Server
> cannot do that if the table(s) that the view uses don't exist.
> Why do you have this need? Sounds like a strange requirement to me...
> --
> Tibor Karaszi, SQL Server MVP
> http://www.karaszi.com/sqlserver/default.asp
> http://www.solidqualitylearning.com/
> Blog: http://solidqualitylearning.com/blogs/tibor/
>
> "Amid" <Amid@.discussions.microsoft.com> wrote in message
> news:2C520E2E-FE0C-41D7-95C8-DA19CB87A78F@.microsoft.com...
>