Friday, March 30, 2012
How do I move a web assistant job?
I'm bringing up a replacement production SQL 7 server and I've got a few
Web Assistant jobs that create web pages on the old server. I want to
script them out and automatically move them, but they keep failing on the
create web page step. Is there any way to transfer these jobs without
going through the wizards?
Thanks,
Ace
IIRC, they should be stored as a job.
Look for the sp_makewebtask job steps.
Rick Sawtell
MCT, MCSD, MCDBA
"Ace McDugan" <alohnsql@.hotmail.com> wrote in message
news:eoSLRkIsEHA.1816@.TK2MSFTNGP09.phx.gbl...
> Hi-
> I'm bringing up a replacement production SQL 7 server and I've got a
few
> Web Assistant jobs that create web pages on the old server. I want to
> script them out and automatically move them, but they keep failing on the
> create web page step. Is there any way to transfer these jobs with out
> going through the wizards?
> Thanks,
> Ace
>
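For reference, a Web Assistant job step is normally just a call to sp_makewebtask, so once you find the step you can copy its command text into a job step on the new server. A minimal sketch of what such a call looks like (the output path and query here are made up):
-- Generates the web page on demand; point @outputfile at the new server's web root
EXEC sp_makewebtask
    @outputfile = 'C:\InetPub\wwwroot\inventory.htm',
    @query = 'SELECT ItemID, ItemName FROM dbo.Inventory'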
How do I move a project between machines?
Hi,
I'm an SQL Server novice.
After developing some websites (VS 2005 VWD), my machine started to crumble (e.g., SQL Server won't uninstall, and lots of other software problems).
So I bought a new machine and set up a clean installation of SQL Server 2005 Standard and Visual Studio 2005 standard.
I copied my old "projects" folder to the new machine but I cannot use it.
1: When I try to import I get a "cannot import external files" error.
2: When I click on a project, it launches VS 2005 and I can edit etc.
However, when I try to run the website in VS 2005, I get the following error.
An error has occurred while establishing a connection to the server. When connecting to SQL Server 2005, this failure may be caused by the fact that under the default settings SQL Server does not allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified)
Please can someone let me know how I can get my "old" projects running in my new environment?
Thanks
Ken
Has your SQL Server name or instance changed? If so, you will have to change your dataset to represent the new server.|||
Hi Ray,
Thanks for the reply.
Yes - I'm using a completely new setup. New server name and new instance name.
But I'm still struggling to get to grips with the management console. I bought several books on SQL Server 2005 and spent lots of time ploughing through the zillions of help files, but I have not yet found a "How to" that meets my level of need to understand what I'm doing with SQL Server Management Studio.
So, when you say "change the dataset" I don't know what steps to take to do that. If you have a minute, I would very much appreciate some basic guide that shows me how to "change the dataset to represent the new server".
It's hard being a novice!
Ken
|||In your Visual Studio project you should have set up a connection to the database; this is a dataset. It is the setup for a connection to the database from Visual Studio. You should look in your project; it should be somewhere in your solution, depending on which type you are creating.|||Hi Ray,
Thanks for your comment.
One of the problems I'm experiencing is that most of the well-intended advice is couched in metaphorical terms rather than in "press this, then click that" terms.
Let's use the sample SQL Express personal website starter kit as an example.
Which DB?: In the App_Data folder there are two .mdf files: ASPNETDB.MDF and Personal.mdf. I know that the application needs both of these DBs. If I change the connection string, do I have to do it for both of these DBs?
Connection String?: I have been floundering about in the help files without finding anything helpful. Where do I find the "connection string" to these DBs? What does a connection string look like? How do I change it? What effect does it have on the rest of the project? Do I have to do anything else to make the app run under SQL Server Standard?
If I can get answers to these questions it will help me to get moving forward.
Thanks for your help.
Ken
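On the connection-string questions above: in an ASP.NET 2.0 site the strings live in web.config under the <connectionStrings> section, and each .mdf the site uses (ASPNETDB.MDF and Personal.mdf) has its own entry, so both would need changing. Roughly, the change looks like this (the server and instance names below are made up; adjust to your setup):
Old (SQL Express user instance attaching the .mdf from App_Data):
    Data Source=.\SQLEXPRESS;AttachDbFilename=|DataDirectory|\Personal.mdf;Integrated Security=True;User Instance=True
New (database attached or restored to a SQL Server 2005 Standard instance):
    Data Source=MYNEWSERVER\MYINSTANCE;Initial Catalog=Personal;Integrated Security=True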
How do I move a database I have developed on my local server (2005) to my client?
Hi all. Probably a simple thing to do but I need to know how to do this. I
have to move the database I created in 2005 Enterprise to 2005 Express on my
client site. Do I just do a back up and burn to a CD and after installing
2005 Express restore the backup? Don't I need the .mdf and the .ldf files
or does the backup procedure in the Studio backup both? I set it to backup
the Database.
Regards,
Robert
You can A) drop the database, copy the mdf and ldf files to CD or
somewhere that you can move them and at the client site attach them or
b) do a backup of the database and restore at the client site. Did you
use any Enterprise features that Express may not support?
Robert Johnson wrote:
> Hi all. Probably a simple thing to do but I need to know how to do this. I
> have to move the database I created in 2005 Enterprise to 2005 Express on my
> client site. Do I just do a back up and burn to a CD and after installing
> 2005 Express restore the backup? Don't I need the .mdf and the .ldf files
> or does the backup procedure in the Studio backup both? I set it to backup
> the Database.
> Regards,
> Robert
>|||I don't believe so. Just added tables and one view. No triggers, functions
or stuff like that. Relationships between tables and constraints are in
place. The client has fewer than 5 concurrent users, so Express should work
fine for him. It's a very small client with small database needs.
Interesting idea about detaching and copying the mdf and ldf to CD. Didn't
think that would work. I'll give that a shot.
Regards and Thanks
Robert
"Brett I. Holcomb" <brettholcomb@.bellsouth.net> wrote in message
news:unSxUqUBIHA.3780@.TK2MSFTNGP05.phx.gbl...
> You can A) drop the database, copy the mdf and ldf files to CD or
> somewhere that you can move them and at the client site attach them or b)
> do a backup of the database and restore at the client site. Did you use
> any Enterprise features that Express may not support?
>
> Robert Johnson wrote:
>> Hi all. Probably a simple thing to do but I need to know how to do this.
>> I have to move the database I created in 2005 Enterprise to 2005 Express
>> on my client site. Do I just do a back up and burn to a CD and after
>> installing 2005 Express restore the backup? Don't I need the .mdf and
>> the .ldf files or does the backup procedure in the Studio backup both? I
>> set it to backup the Database.
>> Regards,
>> Robert|||I had to ask <G>. I use the attach/detach all the time. After I've
copied the ldf and mdf I reattach to the original db. That's how I move
from our test to production server.
Robert Johnson wrote:
> I don't believe so.. Just added tables and one view. No triggers, functions
> or stuff like that. Relationships between tables and constraints are in
> place. The client has less than 5 concurent users so Express should work
> fine for him. It's a very small client and small database needs.
> Interesting idea about detaching and copy the mdf and ldf to CD. Didn't
> think that would work. I'll give that a shot.
> Regards and Thanks
> Robert
> "Brett I. Holcomb" <brettholcomb@.bellsouth.net> wrote in message
> news:unSxUqUBIHA.3780@.TK2MSFTNGP05.phx.gbl...
>> You can A) drop the database, copy the mdf and ldf files to CD or
>> somewhere that you can move them and at the client site attach them or b)
>> do a backup of the database and restore at the client site. Did you use
>> any Enterprise features that Express may not support?
>>
>> Robert Johnson wrote:
>> Hi all. Probably a simple thing to do but I need to know how to do this.
>> I have to move the database I created in 2005 Enterprise to 2005 Express
>> on my client site. Do I just do a back up and burn to a CD and after
>> installing 2005 Express restore the backup? Don't I need the .mdf and
>> the .ldf files or does the backup procedure in the Studio backup both? I
>> set it to backup the Database.
>> Regards,
>> Robert
>|||Thanks again Brett. I'll test it out tomorrow on site. If your ears are
burning...lol.
Robert
"Brett I. Holcomb" <brettholcomb@.bellsouth.net> wrote in message
news:eKtpB4UBIHA.4836@.TK2MSFTNGP06.phx.gbl...
>I had to ask <G>. I use the attach/detach all the time. After I've copied
>the ldf and mdf I reattach to the original db. That's how I move from our
>test to production server.
> Robert Johnson wrote:
>> I don't believe so.. Just added tables and one view. No triggers,
>> functions or stuff like that. Relationships between tables and
>> constraints are in place. The client has less than 5 concurent users so
>> Express should work fine for him. It's a very small client and small
>> database needs. Interesting idea about detaching and copy the mdf and ldf
>> to CD. Didn't think that would work. I'll give that a shot.
>> Regards and Thanks
>> Robert
>> "Brett I. Holcomb" <brettholcomb@.bellsouth.net> wrote in message
>> news:unSxUqUBIHA.3780@.TK2MSFTNGP05.phx.gbl...
>> You can A) drop the database, copy the mdf and ldf files to CD or
>> somewhere that you can move them and at the client site attach them or
>> b) do a backup of the database and restore at the client site. Did you
>> use any Enterprise features that Express may not support?
>>
>> Robert Johnson wrote:
>> Hi all. Probably a simple thing to do but I need to know how to do
>> this. I have to move the database I created in 2005 Enterprise to 2005
>> Express on my client site. Do I just do a back up and burn to a CD and
>> after installing 2005 Express restore the backup? Don't I need the
>> .mdf and the .ldf files or does the backup procedure in the Studio
>> backup both? I set it to backup the Database.
>> Regards,
>> Robert
>>|||Hello Robert,
When you take a backup of a database, you do not need the mdf or ldf files of
your database. The *.BAK file (which is the backup file) contains all of those
files.
This is a common way to copy databases, or, as Brett mentioned, you can
attach\detach your database files.
For additional info: attaching and detaching a database is the fastest
method to copy or move a database. However, it takes your database offline,
and this would not be good for production environments. Backing up a
database does not cause a problem like this.
Ekrem Önsoy
"Robert Johnson" <johnson_r@.sbcglobal.net> wrote in message
news:%23FPMBlUBIHA.4836@.TK2MSFTNGP06.phx.gbl...
> Hi all. Probably a simple thing to do but I need to know how to do this. I
> have to move the database I created in 2005 Enterprise to 2005 Express on
> my client site. Do I just do a back up and burn to a CD and after
> installing 2005 Express restore the backup? Don't I need the .mdf and the
> .ldf files or does the backup procedure in the Studio backup both? I set
> it to backup the Database.
> Regards,
> Robert
>|||Thanks Ekrem. I wasn't sure if it had all that I needed.
Regards,
Robert
"Ekrem Önsoy" <ekrem@.btegitim.com> wrote in message
news:F98D9090-F8AE-4DF2-AEFC-30898DE47197@.microsoft.com...
> Hello Robert,
>
> When you take a backup of a database, you do not need mdf or ldf files of
> your database. *.BAK files (which is the backup file) contains all that
> files.
> This is a comman way used to copy databases or as Brett mentioned, you can
> attach\detach your database files.
> For additional info: Attaching and Detaching a database is the fastest
> method to copy or move a database. However, it takes your database offline
> and this would not be good for production environments. So, backing up a
> database does not cause a problem like this.
>
> --
> Ekrem Önsoy
>
> "Robert Johnson" <johnson_r@.sbcglobal.net> wrote in message
> news:%23FPMBlUBIHA.4836@.TK2MSFTNGP06.phx.gbl...
>> Hi all. Probably a simple thing to do but I need to know how to do this.
>> I have to move the database I created in 2005 Enterprise to 2005 Express
>> on my client site. Do I just do a back up and burn to a CD and after
>> installing 2005 Express restore the backup? Don't I need the .mdf and
>> the .ldf files or does the backup procedure in the Studio backup both? I
>> set it to backup the Database.
>> Regards,
>> Robert
>>
>
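For reference, both routes discussed above can be done in T-SQL; a minimal sketch (paths and logical file names are illustrative; RESTORE FILELISTONLY shows the real logical names):
-- Back up on the source server, then restore on the client's Express instance
BACKUP DATABASE MyClientDb TO DISK = 'C:\Backups\MyClientDb.bak' WITH INIT
RESTORE FILELISTONLY FROM DISK = 'D:\MyClientDb.bak'
RESTORE DATABASE MyClientDb FROM DISK = 'D:\MyClientDb.bak'
WITH MOVE 'MyClientDb'     TO 'C:\Data\MyClientDb.mdf',
     MOVE 'MyClientDb_log' TO 'C:\Data\MyClientDb_log.ldf'
-- Or the detach/attach route: detach, copy the .mdf/.ldf, then attach at the client
EXEC sp_detach_db 'MyClientDb'
EXEC sp_attach_db 'MyClientDb', 'C:\Data\MyClientDb.mdf', 'C:\Data\MyClientDb_log.ldf'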
How do I monitor my SQL JOB?
I have a SQL Agent job that sometimes hangs. 1) How do I monitor this? I would like to have another SQL job monitor the first one hourly, and send me an email if it is hanging. Isn't there a system table that tells me a job's status?
1a) Once I find the job, how can I "Stop it" automatically?
2) Is there a way to have the job stop itself if it runs for more than 20 minutes?
Thanks.
~Le
No one Knows...?
~Le
"No one knows? No one Knows if the Hot Tea is Hot or Cold...?"
"Is it Iced Tea?"
"NO!"
"Well then, I have no idea."|||Well, I have a suspision that most dba's would probably want to spend their time investigating the cause of a 'hanging' job rather than developing other jobs to manage such a situation. I think it's more correct to say that a job step stays 'in-progress'. The only time I have seen this is in relation to replication jobs that seem to have a tendency to report being continiously 'in-progress'. Apart from that 'feature', I can't remember a situation where a job would routinely misbehave without a good reason.
Have you looked at the job history to see which step the job is failing on? I know it's not much use if it's a one step job but I thought I would mention it anyway. Either way, look at the code in the step that is staying 'in-progress' and see if you can isolate it further. Perhaps add a few audit steps to write out to a table (like print statements if you were running code via QA). The objective should be to keep drilling down until you've found the cause.
Perhaps post more detail about the code that is staying in-progress and more useful feedback will come your way.
Clive|||
LOL. You are right, the better thing to do would be to stop it from hanging in the first place! The SQL Agent fires off a VB App that I wrote. The VB App is hanging on the custom PGP command. But it does not happen often. The PGP 8 Service itself seems to be hanging about every 13-14 days, which in turn causes my VB App to hang, which causes the SQL Agent to hang.
Since there is a code freeze, I cannot edit the VB app to trap the error at the moment. So I just want to monitor the SQL Agent jobs, and if possible stop it if it is hanging.
`Le|||You can find which jobs are running by looking in the sysprocesses table:
select *
from master..sysprocesses
where program_name like 'SQLAgent%Job%'
The problem is that the job_id you get: 0xC778DE2759DD354B9C219B301886EA43
equates to the binary version of the job_id stored in the sysjobs table.
Still if you know you shouldn't have any jobs running at all except the one you are checking on to see if it hung, you should be able to use this select to get the SPID of the hanging job and kill it if need be until you can figure out what's causing it to fail.
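To stop a hung job automatically, msdb.dbo.sp_stop_job can be called from a second monitoring job. A rough sketch (the job name is hypothetical, and the agent session's login_time is only an approximation of how long the step has been running):
-- If an agent job session has been connected for more than 20 minutes, stop the named job
IF EXISTS (SELECT * FROM master..sysprocesses
           WHERE program_name LIKE 'SQLAgent%Job%'
             AND DATEDIFF(minute, login_time, GETDATE()) > 20)
BEGIN
    EXEC msdb.dbo.sp_stop_job @job_name = N'PGP transfer job'
END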
How do I monitor MSDE performance
How can I monitor MSDE performance? I read about it
on the MSDN site but was not clear what tool to use.
Can u help?
Thanks,
FP
hi FP,
FP wrote:
> Hi there,
> How can I monitor the MSDE performance. I read about it
> on the MSDN site but was not clear what tool to use.
> Can u help?
depending on what you want to monitor, you can even use the System Monitor,
adding your preferred counters, like
http://www.windowsitpro.com/Articles...layTab=Article
http://www.windowsitpro.com/Articles...layTab=Article
Andrea Montanari (Microsoft MVP - SQL Server)
http://www.asql.biz/DbaMgr.shtm
http://italy.mvps.org
DbaMgr2k ver 0.10.0 - DbaMgr ver 0.56.0
(my vb6+sql-dmo little try to provide MS MSDE 1.0 and MSDE 2000 a visual
interface)
-- remove DMO to reply
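If you would rather read the same counters from T-SQL than from System Monitor, MSDE/SQL Server 2000 also exposes them through master.dbo.sysperfinfo; a minimal sketch:
-- A few common counters; add or remove counter names as needed
SELECT object_name, counter_name, instance_name, cntr_value
FROM master.dbo.sysperfinfo
WHERE counter_name IN ('Buffer cache hit ratio', 'User Connections', 'Batch Requests/sec')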
How do I monitor mirroring?
SQL2K5 SP1.
I set up mirroring and the target DB now says (Mirror, Synchronized /
Restoring), but it would seem as though there should be a way to monitor it as
well? To make sure it's staying in sync, doesn't fail, etc.?
TIA, ChrisR
Chris,
There is an entire section named 'Monitoring Database Mirroring' on BOL
April 2006. Take a look at it.
Ben Nevarez, MCDBA, OCP
Database Administrator
"ChrisR" wrote:
> SQL2K5 SP1.
> I set up mirroring and the target DB now says (Mirror, Synchronized /
> Restoring) but it would seem as though there should be a way for monitor as
> well? To make sure its staying in synch, doesnt fail, etc?
> TIA, ChrisR
>
>
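Besides the BOL section Ben mentions, the basic mirroring state can also be checked from T-SQL; a minimal sketch:
-- Role and state of every mirrored database on this instance (SQL Server 2005)
SELECT DB_NAME(database_id) AS database_name,
       mirroring_role_desc,
       mirroring_state_desc,
       mirroring_witness_state_desc
FROM sys.database_mirroring
WHERE mirroring_guid IS NOT NULL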
How do I Modify a Stored Procedure?
In Management Studio, when I right-click a stored procedure I have a modify
option. I use this, make my changes, and hit save. But it always saves a separate .sql file and never
changes the original stored procedure. What am I doing wrong? How can
I modify the stored procedure in my database? Thank you for any help.
You have to execute the modified script, NOT save it; saving always just
writes the script to a file.
HTH; Jens Suessmeyer.
http://www.sqlserver2005.de
|||Jens wrote:
> You have to execute the changing script, NOT saving it, which always
> saves the script to a file.
> HTH; Jens Suessmeyer.
> --
> http://www.sqlserver2005.de
> --
So you are saying that when I hit modify, change the code, and then hit
Execute (!) and that automatically saves it?
|||needin4mation@.gmail.com wrote:
> Jens wrote:
> So you are saying that when I hit modify, change the code, and then hit
> Execute (!) and that automatically saves it?
>
It's not correct wording to say it's being saved. As Jens says, you'll
have to execute the new stored procedure to get it added to the
database, but that's not the same as saving it...:-). When you save it,
it's being saved as a regular sql file but that will not do anything to
the actual SP in the database.
Regards
Steen
|||right.
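In other words, the Modify option just generates an ALTER PROCEDURE script; executing that script (not saving it) is what changes the definition stored in the database. A minimal sketch, with a hypothetical procedure:
-- Running this batch (F5 in Management Studio) updates the procedure in the database
ALTER PROCEDURE dbo.usp_GetCustomer
    @CustomerID int
AS
BEGIN
    SELECT CustomerID, CustomerName
    FROM dbo.Customers
    WHERE CustomerID = @CustomerID
END
GO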
How do I model Slowly Changing Hierarchy?
I need to model a hierarchy that slowly changes. With that, we are only allowed to have "1" hierarchy. It is
basically an organization hierarchy where people get promoted, demoted
and change ranks over time. The business requirement is to be able to
see that 1 appropriate hierarchy over time and the hierarchy changes.
I know all about slowly changing dimensions and know how to manage
those. But what about hierarchies? We would like to use a Type 2 but
really need some pointers in how to model this.
Thanks,
Rico
Slowly Changing Dimensions Type 2
Hello Rico,
Type 2 is relatively easy to implement, but it can be a pain for users
to understand and use.
Implementing Type 2
Generally I use a set of standard flags to track the changes in the
dimension. Each flag is used to determine the state of the record. I am
assuming that you are using Surrogate keys in your dimensions and Fact
tables.
Date From - The date the record arrived in the dimension
Date To - The date the record is deemed to be changed
Current Flag - The state of the record, Y or N
With these flags you can track changes of the dimension at the lowest
level such as employee changes.
e.g.
Employee Dimension
EmployeeKey  EmployeeID  EmployeeName  EmployeeJobTitle  Manager        DateFrom    DateTo      CurrentFlag
1001         SD00301     Billy Bob     Sales Rep         Kate Hawkins   01/01/2004  12/05/2005  N
1200         SD00301     Billy Bob     Sales Rep         John Simon     12/05/2005  12/08/2005  N
1250         SD00301     Billy Bob     Sales Manager     John Simon     12/08/2005  12/08/2005  Y
You will have to come up with a change capture process. If you have
type 1 deployed already, it would be the same logic for identifying the
changes, with the exception of updating old records and creating new
records in the dimension. Remember you will have to update your
surrogate key lookup in your fact table build to load with the current
dimension record, i.e. "WHERE CurrentFlag = 'Y'".
Implementing Type 2 without Flags
To be honest I have not tried this method, but it is worth considering if
your business users have problems reporting using the Type 2 flags.
Rather than having one Employee Dimension, create a series of mini
dimensions against the fact table. This would allow you to track
changes against the fact record instead of the dimension.
E.G.
Employee Dim is broken into three new dimensions
Dim Manager
Dim Employee
Dim Job Title
There is a HUGE drawback to this: you end up with a large number of
dimensions and a really wide fact table if you have to include a lot of
them.
Hope this gets you started.
Myles Matheson
Data Warehouse Architect
http://bi-on-sql-server.blogspot.com/
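A minimal sketch of the expire-and-insert step Myles describes, using hypothetical staging and dimension table names (and ignoring NULL handling in the change test):
-- Expire the current row for employees whose tracked attributes changed
UPDATE d
SET    d.DateTo = GETDATE(), d.CurrentFlag = 'N'
FROM   dbo.DimEmployee AS d
JOIN   staging.Employee AS s ON s.EmployeeID = d.EmployeeID
WHERE  d.CurrentFlag = 'Y'
  AND (s.EmployeeJobTitle <> d.EmployeeJobTitle OR s.Manager <> d.Manager)
-- Insert a new current row for any employee that no longer has one
INSERT INTO dbo.DimEmployee
    (EmployeeID, EmployeeName, EmployeeJobTitle, Manager, DateFrom, DateTo, CurrentFlag)
SELECT s.EmployeeID, s.EmployeeName, s.EmployeeJobTitle, s.Manager, GETDATE(), NULL, 'Y'
FROM   staging.Employee AS s
LEFT JOIN dbo.DimEmployee AS d
       ON d.EmployeeID = s.EmployeeID AND d.CurrentFlag = 'Y'
WHERE  d.EmployeeID IS NULL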
|||Hello Rico,
The table in this post has come out a bit crap. I have reposted it on my blog.
see:
http://bi-on-sql-server.blogspot.com/2005/07/news-group-post-slowly-changing.html
hope this helps
Myles
|||Hi Rico,
the particular question is one of the most important yet poorly
understood questions in all of BI. I have written extensively on this
topic on the DWLIST (www.datawarehousing.com).
The 'publicly available answer' is build a type 2 dimension that can be
linked to facts where you need it. This shows you the organisation
hierarchy at the point in time.
However, this is useless when answering the 'BIG QUESTION' from the
CEO/CFO/Director of marketing.
The 'BIG QUESTION' is.....now we have re-organised how are we doing
this year vs last year?
And these guys do not like the answer..."Well, our DW only shows us
the organisation hierarchy at the current point in time so we cannot
really compare this year vs last year."
So, to be able to do this you need to maintain an archive of the
organisation structure so that you can build what are called 'hot
swappable dimensions' to be able to compare last years results against
this years structure and this years results against last years
structure etc.
The 'Sybase wants you to pay' answer is that in the Sybase models
(sorry MSFT folk) there is a technique that allows you to compare the
value of anything across any point in time...so you could compare
transactions against the organisational hierarchy across any point in
time with the exception that people who were not there at that point in
time might show up as 'unknown'. (I implement Sybase IWS for a
living...as far as I am aware MSFT does not have a similar data model
offering today.)
So, the best you can go with from public information is type 2 hot
swappable dimensions...you can search Ralph Kimball's web page and tips
for details on hot swappable dimensions and I think they are still in
his data modeling book...
Best Regards
Peter Nolan
www.peternolan.com
How do I mimic autoNumber for non-identity columns?
Assume I have an inventory system used by several customers. Each customer
wants each item in their inventory to have a number, and they don't want any
gaps.
So, there's an Item table that has information about each item, as well as
the customer the item belongs to. When an item is inserted, I cannot use an
Identity column to autonumber the ItemId because if a customer inserts an
item, then a second customer inserts a hundred items, then the original
customer inserts another item, there is a gap of one hundred items from the
perspective of the original customer. This isn't desired behavior.
I need a way to do a per-customer autonumber, but I don't know how to do
this without running into concurrency problems.
Greg, if I understood you properly you need:
DECLARE @max_item int
BEGIN TRAN
SELECT @max_item = COALESCE(MAX(item), 0) FROM Table WITH (UPDLOCK, HOLDLOCK)
WHERE custid = .....
INSERT INTO AnotherTable VALUES (@max_item)
COMMIT TRAN
"Greg Smalter" <GregSmalter@.discussions.microsoft.com> wrote in message
news:5BA8A4B0-26E8-4C20-8198-F63D10B36AE4@.microsoft.com...
> Assume I have an inventory system used by several customers. Each
> customer
> wants each item in their inventory to have a number, and they don't want
> any
> gaps.
> So, there's an Item table that has information about each item, as well as
> the customer the item belongs to. When an item is inserted, I cannot use
> an
> Identity column to autonumber the ItemId because if a customer inserts an
> item, then a second customer inserts a hundred items, then the original
> customer inserts another item, there is a gap of one hundred items from
> the
> perspective of the original customer. This isn't desired behavior.
> I need a way to do a per-customer autonumber, but I don't know how to do
> this without running into concurrency problems.|||If Table can be the same as AnotherTable, I think this could work. So,
assuming ItemNumber is the column I want to mimic autonumber on, we'd have:
DECLARE @max_item int
BEGIN TRAN
SELECT @max_item = COALESCE(MAX(ItemNumber), 0) FROM Inventory WITH
(UPDLOCK, HOLDLOCK)
WHERE custid = 4
INSERT INTO Inventory VALUES (@max_item + 1)
COMMIT TRAN
Would that work? Are UPDLOCK and HOLDLOCK merely hints? What if the hints
get ignored?
Thanks.
"Uri Dimant" wrote:
> Greg, If I understood you properly you need
> DECLARE @.max_item
> BEGIN TRAN
> SELECT @.max_item=COALESCE(MAX(item),0) FROM Table WITH (UPDLOCK,HOLDLOCK)
> WHERE custid=.....
> INSERT INTO AnothetTable VALUES (@.max_item)
> COMMIT TRAN
>
>
> "Greg Smalter" <GregSmalter@.discussions.microsoft.com> wrote in message
> news:5BA8A4B0-26E8-4C20-8198-F63D10B36AE4@.microsoft.com...
>
>
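Uri's MAX()+1 approach works as long as every insert goes through the UPDLOCK/HOLDLOCK path. An alternative sketch is a per-customer counter table incremented atomically inside the same transaction as the insert; because the counter update is part of the transaction, a rollback does not leave a gap, at the cost of serializing inserts per customer (table and column names are hypothetical):
CREATE TABLE dbo.CustomerItemCounter (
    CustomerID     int NOT NULL PRIMARY KEY,
    LastItemNumber int NOT NULL DEFAULT 0
)
DECLARE @next int
BEGIN TRAN
-- The UPDATE takes an exclusive lock on the customer's row, serializing concurrent inserts
UPDATE dbo.CustomerItemCounter
SET    @next = LastItemNumber = LastItemNumber + 1
WHERE  CustomerID = 4
INSERT INTO dbo.Item (CustomerID, ItemNumber)  -- plus the other item columns
VALUES (4, @next)
COMMIT TRAN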
How do I migrate reports to new server
I just installed reporting services on a new server and need to migrate
existing reports from an old server to the new one. Being new to Reporting
Services, I am not sure of how to do this. Any help you gurus can provide
will be greatly appreciated.
Thanks,
--
Lynn
You might want to use Reporting Services Scripter
http://www.sqldbatips.com/showarticle.asp?ID=62
Any questions just drop me a mail using the link in the readme file
--
HTH
Jasper Smith (SQL Server MVP)
http://www.sqldbatips.com
I support PASS - the definitive, global
community for SQL Server professionals -
http://www.sqlpass.org
"Lynn" <Lynn@.discussions.microsoft.com> wrote in message
news:7BEBF34B-420F-42D0-BEB0-33BF5E01AD46@.microsoft.com...
> Hi all,
> I just installed reporting services on a new server and need to migrate
> existing reports from an old server to the new one. Being new to
> Reporting
> Services, I am not sure of how to do this. Any help you gurus can provide
> will be greatly appreciated.
> Thanks,
> --
> Lynn|||> I just installed reporting services on a new server and need to migrate
> existing reports from an old server to the new one. Being new to
> Reporting
> Services, I am not sure of how to do this. Any help you gurus can provide
> will be greatly appreciated.
Hi Lynn,
I think this tool [1] can help you.
Look in the generated script to find out how it is working.
[1] http://www.sqldbatips.com/showarticle.asp?ID=62
Best regards,
--
Martin Kulov
http://www.codeattest.com/blogs/martin
MCAD Charter Member
MCSD.NET Early Achiever
MCSD
How do I Merge two databases?
both databases to the same server with no problems. But how do I merge
them?
This is what I came up with so far but how do I do the same process for
multiple rows and tables?
IF NOT EXISTS (SELECT tmp_DB1.dbo.tb1.key1 FROM tmpDB1.dbo.tb1 WHERE
tmp_DB1.dbo.tb1.key1 = tmp_DB2.dbo.tb1.key1)
INSERT INTO tmp_DB1.dbo.tb1 (<all fields>)
VALUES (<all fields>)
Thanks in advance.
You probably want to do something like this (insert into a new table only
the rows that don't already exist):
INSERT INTO DB1.dbo.TargetTable (key_col, col1, col2, ...)
SELECT S.key_col, S.col1, S.col2, ...
FROM DB2.dbo.SourceTable AS S
LEFT JOIN DB1.dbo.TargetTable AS T
ON S.key_col = T.key_col
WHERE T.key_col IS NULL
Obviously this won't be useful if Key_col is an IDENTITY column. Use the
natural key of your table(s).
--
David Portas
SQL Server MVP
--|||Thanks, it works great. Now, I just have to do this for about 20 more
tables. :)
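The same pattern just repeats per table; for a table with a composite natural key, a NOT EXISTS test reads much the same (table and column names here are hypothetical):
INSERT INTO DB1.dbo.OrderDetail (OrderNo, LineNumber, ProductCode, Qty)
SELECT S.OrderNo, S.LineNumber, S.ProductCode, S.Qty
FROM DB2.dbo.OrderDetail AS S
WHERE NOT EXISTS (SELECT *
                  FROM DB1.dbo.OrderDetail AS T
                  WHERE T.OrderNo = S.OrderNo
                    AND T.LineNumber = S.LineNumber)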
How do I merge NDF files into the MDF ?
I have a database with a whole lot of secondary data files
(*.ndf's). What do I run to merge all of those into the
Primary data file (*.mdf)?
Thanks
H
Hi,
To remove an NDF file, you must first move the data off the NDF file onto
the other members in the data set. To do this, use the EMPTYFILE
parameter in the DBCC SHRINKFILE command.
This will empty the file and mark it as unavailable. From there, you should
be able to use the REMOVE FILE parameter in ALTER DATABASE command.
Steps:-
1. Do a Full database backup
2. Execute below to move the data of the NDF file to other files
DBCC SHRINKFILE('logical_ndf_name',EMPTYFILE)
3. Now you execute the command to remove the NDF file
ALTER database <dbname> REMOVE FILE 'logical_ndf_name'
Do the same steps for all available NDF files in the database.
Thanks
Hari
MCDBA
"H" <anonymous@.discussions.microsoft.com> wrote in message
news:2329c01c45e5e$176c5950$a301280a@.phx.gbl...
> Hi
> I have a database with a whole lot of secondary data files
> (*.ndf's). What do I run to merge all of those into the
> Primary data file (*.mdf)?
> Thanks
> H
|||Thanks Hari
I'll give it a go!
Regards
H
|||Hari
I see that each *.ndf is also in its own filegroup, so it
won't let me run a DBCC SHRINKFILE...it keeps saying that
the filegroup is full (it isn't physically full and can expand
dynamically).
Thanks
H
|||Then you need to get the tables and indexes in that file group onto some other filegroup. Recreate
the indexes. If you have a table without a clustered index, move that by create a clustered index
(on another filegroup) and then possibly dropping that clustered index.
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
"H" <anonymous@.discussions.microsoft.com> wrote in message
news:22e0101c45e6a$13448b20$a601280a@.phx.gbl...
> Hari
> I see that each *.ndf is also in it's own filegroup, so it
> won't let me run a dbcc shrinkfile...it keeps saying that
> the filegroup is full (it isn't physically and expand
> dynamically).
> Thanks
> H
|||Look, H
CREATE DATABASE mywind
GO
ALTER DATABASE mywind ADD FILEGROUP new_customers
GO
ALTER DATABASE mywind ADD FILE
(NAME='mywind_data_1',
FILENAME='d:\mw.dat1')
TO FILEGROUP new_customers
GO
CREATE TABLE mywind..t1 (id int) ON new_customers
GO
INSERT INTO mywind..t1 (id ) VALUES (1)
GO
ALTER DATABASE mywind REMOVE FILE mywind_data_1
--Server: Msg 5042, Level 16, State 1, Line 1
--The file 'mywind_data_1' cannot be removed because it is not empty.
USE mywind
DBCC SHRINKFILE (mywind_data_1, EMPTYFILE)
-- I went to EM and changed the filegroup for t1 to the PRIMARY filegroup
GO
ALTER DATABASE mywind REMOVE FILE mywind_data_1
ALTER DATABASE mywind REMOVE FILEGROUP new_customers
GO
DROP DATABASE mywind
"H" <anonymous@.discussions.microsoft.com> wrote in message
news:22e0101c45e6a$13448b20$a601280a@.phx.gbl...
> Hari
> I see that each *.ndf is also in it's own filegroup, so it
> won't let me run a dbcc shrinkfile...it keeps saying that
> the filegroup is full (it isn't physically and expand
> dynamically).
> Thanks
> H
|||I used the method to remove a .ndf file from the primary group. But after I ran the DBCC, the .ndf is still not empty and there is 0.6 MB of data in that file, which I couldn't clear. And I couldn't remove the file from the group as well, since the file is not empty. Any suggestion?
Posted using Wimdows.net NntpNews Component -
Post Made from http://www.SqlJunkies.com/newsgroups Our newsgroup engine supports Post Alerts, Ratings, and Searching.
|||Try SHRINKFILE with EMPTYFILE option again. I've heard of cases where you need to do it twice before it is
completely empty...
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
"SqlJunkies User" <User@.-NOSPAM-SqlJunkies.com> wrote in message
news:%231Ab8wDjEHA.3524@.TK2MSFTNGP10.phx.gbl...
> I use the method to remove a .ndf file from the primary group. But after I ran the DBCC, the .ndf is still
not empty and there are 0.6 MB for data in that file, which I couldn't clear. and I couldn't remove the file
from the group as well since the file is not empty. and suggestion?
> --
> Posted using Wimdows.net NntpNews Component -
> Post Made from http://www.SqlJunkies.com/newsgroups Our newsgroup engine supports Post Alerts, Ratings, and
Searching.
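A minimal sketch of what Tibor describes, moving a table's data to the PRIMARY filegroup by way of its clustered index (the index and second table names here are made up):
USE mywind
-- t1 from the example above has no clustered index: create one on the target
-- filegroup to move the data, then drop it again if you don't want to keep it
CREATE CLUSTERED INDEX IX_t1_move ON t1 (id) ON [PRIMARY]
DROP INDEX t1.IX_t1_move
-- For a table that already has a clustered index, rebuild it on the target filegroup in one step
CREATE CLUSTERED INDEX IX_SomeTable ON SomeTable (SomeKey)
WITH DROP_EXISTING
ON [PRIMARY]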
How do I measure the cost of compiling an execution plan?
My company is using a lot of ad hoc queries and I am trying to convince
them to move to stored procedures. However certain managers don't
believe that the compilation of ad hoc queries and the fact that they
usually aren't reused is really anything to worry about. They think
that the cost of doing this is minimal and instead want to focus on
tuning bad queries.
I of course agree that badly written queries should be tuned but it's
also a basic tenet for database users that stored procedures should
always be used where possible to take advantage of precompilation and
plan caching. However I don't know how to actually prove that with
data.
How can I measure what the overhead is for compiling and building an
execution plan for an ad hoc query vs. putting the query in a
stored procedure? I've looked at the graphical execution plan for an ad
hoc query and a stored procedure and they look the same.
Thanks
SET STATISTICS TIME ON
The first set of data before the query is executed will be the time for
parse and compile. The stats after the data is returned will be the time to
execute.
If the first set of data shows 0, it means the plan is being reused, and
note that SQL Server 2000 can reuse plans for queries other than stored
procedures.
HTH
--
Kalen Delaney
SQL Server MVP
www.SolidQualityLearning.com
<pshroads@.gmail.com> wrote in message
news:1116376117.157139.259730@.g44g2000cwa.googlegroups.com...
> My company is using a lot of ad hoc queries and I am trying to convince
> them to move to stored procedures. However certain managers don't
> believe that the compilation of ad hoc queries and the fact that they
> usually aren't reused is really anything to worry about. They think
> that the cost of doing this is minimal and instead want to focus on
> tuning bad queries.
> I of course agree that badly written queries should be tuned but it's
> also a basic tenet of databases users that stored procedures should
> always be used where possible to take advantage of precompilation and
> plan caching. However I don't know how to actually prove that with
> data.
> How can I measure what the overhead is for compiling and building an
> execution plan for an ad hoc query vs. the putting the query in a
> stored procedure? I've looked at the graphical execution plan for an ad
> hoc query and a stored procedure and they look the same.
> Thanks
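As a rough sketch of that measurement (the table, column, and procedure names below are made up purely for illustration), the same statement can be run once as an ad hoc batch and once from a procedure while SET STATISTICS TIME ON is in effect, and the "SQL Server parse and compile time" lines compared:
SET STATISTICS TIME ON
GO
-- Ad hoc batch: the first "parse and compile time" block shows the compile cost
SELECT OrderID, OrderDate FROM dbo.Orders WHERE CustomerID = 'ALFKI'
GO
-- Same statement wrapped in a procedure (hypothetical name)
CREATE PROCEDURE dbo.usp_OrdersByCustomer @CustomerID nchar(5)
AS
SELECT OrderID, OrderDate FROM dbo.Orders WHERE CustomerID = @CustomerID
GO
-- Second and later executions should report 0 ms parse and compile time,
-- because the cached plan is reused
EXEC dbo.usp_OrdersByCustomer @CustomerID = 'ALFKI'
EXEC dbo.usp_OrdersByCustomer @CustomerID = 'ALFKI'
GO
SET STATISTICS TIME OFF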
>|||In addition to that you need to consider how much memory all those adhoc
plans are taking up in the procedure cache. I have no idea what yours is
like but I have seen systems with all adhoc queries use well over 1GB for
procedure cache on 32 bit systems and larger on 64 bit. That memory can
better be utilized for data and index caching instead of useless plans.
Andrew J. Kelly SQL MVP
<pshroads@.gmail.com> wrote in message
news:1116376117.157139.259730@.g44g2000cwa.googlegroups.com...
> My company is using a lot of ad hoc queries and I am trying to convince
> them to move to stored procedures. However certain managers don't
> believe that the compilation of ad hoc queries and the fact that they
> usually aren't reused is really anything to worry about. They think
> that the cost of doing this is minimal and instead want to focus on
> tuning bad queries.
> I of course agree that badly written queries should be tuned but it's
> also a basic tenet of databases users that stored procedures should
> always be used where possible to take advantage of precompilation and
> plan caching. However I don't know how to actually prove that with
> data.
> How can I measure what the overhead is for compiling and building an
> execution plan for an ad hoc query vs. the putting the query in a
> stored procedure? I've looked at the graphical execution plan for an ad
> hoc query and a stored procedure and they look the same.
> Thanks
>|||Thanks for your reply. Would I look at the size of syscacheobjects with
sp_spaceused to determine the amount of memory that the procedure cache
is using?|||Use DBCC MEMORYSTATUS for that:
http://support.microsoft.com/?id=271624
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
<pshroads@.gmail.com> wrote in message news:1116388073.318788.301230@.f14g2000cwb.googlegroups
.com...
> Thanks for your reply. Would I look at the size of syscacheobjects with
> sp_spaceused to determine the amount of memory that the procedure cache
> is using?
>|||No, syscacheobjects is a pseudo-table and takes no space on disk, so
sp_spaceused reports 0. The size of the plans is in the pagesused column,
so you can just retrieve the SUM of pagesused.
HTH
--
Kalen Delaney
SQL Server MVP
www.SolidQualityLearning.com
<pshroads@.gmail.com> wrote in message
news:1116388073.318788.301230@.f14g2000cwb.googlegroups.com...
> Thanks for your reply. Would I look at the size of syscacheobjects with
> sp_spaceused to determine the amount of memory that the procedure cache
> is using?
>
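For reference, a rough sketch of that measurement on SQL Server 2000: DBCC MEMORYSTATUS gives the overall breakdown, and summing pagesused in master.dbo.syscacheobjects (pages are 8 KB) approximates how much of the cache the ad hoc plans are holding:
-- Overall memory breakdown, including the procedure cache
DBCC MEMORYSTATUS
GO
-- Approximate size of cached plans, split by type of cached object
SELECT cacheobjtype, objtype,
       SUM(pagesused)            AS pages_used,
       SUM(pagesused) * 8 / 1024 AS approx_mb
FROM master.dbo.syscacheobjects
GROUP BY cacheobjtype, objtype
ORDER BY pages_used DESC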
How do I measure bandwidth usage for merge replication SQL 2005 to SQL Server Mobile 2005?
Hi,
Is there any way to measure bandwidth usage during merge replication between SQL Server 2005 and SQL Server Mobile 2005 running on a cradled WM5 mobile device?
Attaching Windows Performance Monitor to the network connection established over USB would work, although I was wondering if there is something specific for this case integrated into SQL Server 2005 / SQL Server Mobile 2005 / SQL Server Management Studio / third-party tools that I could use?
thnx,
pdns.
Not for mobile, but if you had a SQL Server subscriber (Dev, Ent, Std, Express) you could add the (unsupported) merge agent parameter "-T 101", which will give command-line output statistics for that particular sync; there might be some network statistics I/O output, which is unfortunately not always accurate.
However, we can take this into consideration for the next release of SQL Server, although I cannot make any guarantees.
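If you do have a SQL Server (rather than Mobile) subscriber, one way to find the exact merge agent command line, and the job step you would append a parameter such as -T 101 to, is to look at the agent's job step in msdb on the server where the agent runs; a rough sketch:
-- List merge agent job steps and their full command lines
SELECT j.name AS job_name, s.step_id, s.command
FROM msdb.dbo.sysjobs AS j
JOIN msdb.dbo.sysjobsteps AS s
    ON s.job_id = j.job_id
WHERE s.subsystem = 'Merge'
ORDER BY j.name, s.step_id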
|||Can you tell me the full command syntax, i.e. how we would use it on the command line?
Thanks in advance
Ahmad Drshen
|||Search Books Online for the topic "Replication Merge Agent"; you should see the command line utility topic. You can also drill down into the agent job steps to see the exact parameters and values that your job is using to execute a push/pull merge agent.
How do I manage this mess? Thanks!
Say for example I have the following 2 database tables, the first one contains the old employee data, and has the fields shown below:
oldEmployeeID
FirstName
LastName
DateOfBirth
HiringDate
TerminationDate
and another one containing the new employee data with similar fields but instead of oldEmployeeID, it is showing the newEmployeeID.
During the conversion process, something was messed up and instead of putting the original hiring dates of the workers into the new employee database, the conversion date was put in, which, depending on the mood of the HR ladies, could be any date. At the same time, of course, new employees joined the company, and we assume their hiring dates were entered correctly. On top of that, there are some employees who were terminated before the conversion took place, but we still need to keep a record of them.
And I created a third table, say, employeeAll, with fields similar to the employee data tables.
So here is what I need to do: if the FirstName, LastName and DateOfBirth in the old employee data table and the new employee data table match, I would assume they are the same employee, so I would put the information for that employee from the new employee data table into the employeeAll table, with the hiring date changed to the hiring date from the old employee data table (and not copy the record from the old employee table, to prevent duplicates); otherwise, I would simply copy the data in the new and old employee tables into my employeeAll table.
I know this is really confusing, but...well...hope you know what I am saying...
Is it possible to have a SQL statement for all these? If so, how should the statement looks like?
Thanks a lot!
Regards,
Anyi
Hi,
If I understand this correctly (I think I do... perhaps): if you have data in two tables (and I know you have three; the last would be the destination table employeeAll) and you wish to collect the difference between tables 1 and 2 (employeeOld, employeeNew), then the left outer join is your answer...
--If you wish to collect differential rows from old table
select employeeOld.*
from employeeOld
left outer join employeeNew on employeeOld.FirstName = employeeNew.FirstName and employeeOld.LastName = employeeNew.LastName and employeeOld.DateOfBirth = employeeNew.DateOfBirth
where employeeNew.DateOfBirth is null
Similarly you can mix 'n' match the above to collect differential data from the other two tables... note that the 'select' is on the table where the additional data lies and the WHERE clause (IS NULL) tests a column from the comparison table.
Hope it helps
|||I think that something like this is what you want:
Code Snippet
-- This collects the data
INSERT INTO EmployeeAll
SELECT
n.EmployeeID,
n.LastName,
n.FirstName,
n.DateOfBirth,
coalesce( o.HiringDate, n.HiringDate ),
coalesce( o.TerminationDate, n.TerminationDate ),
n.{RemainingColumns}
FROM NewEmployees n
LEFT JOIN OldEmployees o
ON ( n.LastName = o.LastName
AND n.FirstName = o.FirstName
AND n.DateOfBirth = o.DateOfBirth
)
But would this actually copy the rest of the record into EmployeeAll, i.e., the records that only showed up in the old employee table (the employees terminated before the conversion took place) or the records that only showed up on the new employee table (i.e., the employees hired after the conversion was completed)?
Thanks!
Regards,
Anyi
|||
Arnie Rowland wrote:
I thinks that something like this is what you want:
Code Snippet
-- This collects the data
INSERT INTO EmployeeAll
SELECT
n.EmployeeID,
n.LastName,
n.FirstName,
n.DateOfBirth,
coalesce( o.HiringDate, n.HiringDate ),
coalesce( o.TerminationDate, n.TerminationDate ),
n.{RemainingColumns}
FROM NewEmployees n
LEFT JOIN OldEmployees o
ON ( n.LastName = o.LastName
AND n.FirstName = o.FirstName
AND n.DateOfBirth = o.DateOfBirth
)
If you want the records in the old table that don't exist in the new table, the records in the new table that don't exist in the old table, and the records that are in both (with the corrected HiringDate), then change the JOIN from a LEFT JOIN to a FULL JOIN. (This works in SQL 2005, NOT SQL 2000.)
If you are using SQL 2000, you will need three queries to accomplish the same task.
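For what it is worth, a sketch of that FULL JOIN version for SQL 2005. The table and column names follow the earlier snippet (adjust for oldEmployeeID/newEmployeeID as appropriate) and the remaining columns are omitted here:
INSERT INTO EmployeeAll
SELECT
    coalesce( n.EmployeeID, o.EmployeeID ),
    coalesce( n.LastName, o.LastName ),
    coalesce( n.FirstName, o.FirstName ),
    coalesce( n.DateOfBirth, o.DateOfBirth ),
    coalesce( o.HiringDate, n.HiringDate ),           -- keep the old (correct) hiring date when both exist
    coalesce( o.TerminationDate, n.TerminationDate )
FROM NewEmployees n
FULL JOIN OldEmployees o
    ON ( n.LastName = o.LastName
         AND n.FirstName = o.FirstName
         AND n.DateOfBirth = o.DateOfBirth )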
How Do I Manage a Report Header
In a post titled "Report Header Only on First Page", Charles Kangai replied
that:
"It sounds like you have used a page header rather than a report header in
your report. A report header automatically does what you are wanting to do.
To create a report header, just place a textbox above your data region and
type in the text for your header. It will automatically appear only on the
first page."
I'll be danged if I can find the Report Header option in SRS. I have a Page
Header and I also need a Report Header. Can anyone help, please
Thank you,
msflinx|||Report Header and Footer are hidden by default in Designer. Go to "Report" on
the Menu and You can See "Report header" and "Report Footer" options. Click
on the options to enable it.
--
THANKS & PLEASE RATE THE POSTING.
--RAVI--
"msflinx" wrote:
> In a post titled "Report Header Only on First Page", Charles Kangai replied
> that:
> "It sounds like you have used a page header rather than a report header in
> your report. A report header automatically does what you are wanting to do.
> To create a report header, just place a textbox above your data region and
> type in the text for your header. It will automatically appear only on the
> first page."
> I'll be danged if I can find the Report Header option in SRS. I have a Page
> Header and I also need a Report Header. Can anyone help, please
> Thank you,
> msflinx|||I see "Page Header" & "Page Footer". I do not see "Report Header" & "Report
Footer". I'm using SRS2000. Is this a SRS2005 only option?
Thanks,
msflinx
"Ravi" wrote:
> Report Header and Footer is hidden by Default in Designer. Go to "Report" on
> the Menu and You can See "Report header" and "Report Footer" options. Click
> on the options to enable it.
> --
> THANKS & PLEASE RATE THE POSTING.
> --RAVI--
>
> "msflinx" wrote:
> > In a post titled "Report Header Only on First Page", Charles Kangai replied
> > that:
> >
> > "It sounds like you have used a page header rather than a report header in
> > your report. A report header automatically does what you are wanting to do.
> > To create a report header, just place a textbox above your data region and
> > type in the text for your header. It will automatically appear only on the
> > first page."
> >
> > I'll be danged if I can find the Report Header option in SRS. I have a Page
> > Header and I also need a Report Header. Can anyone help, please
> >
> > Thank you,
> >
> > msflinx|||Actually in RS2005, when you click on Report, you see "Page Header" and
"Page Footer", not the report header and footer.
"Ravi" <ravishankart@.hotmail.com> wrote in message
news:CB7C7C32-2911-4977-8D94-5B9EFD3EEDBB@.microsoft.com...
> Report Header and Footer is hidden by Default in Designer. Go to "Report"
> on
> the Menu and You can See "Report header" and "Report Footer" options.
> Click
> on the options to enable it.
> --
> THANKS & PLEASE RATE THE POSTING.
> --RAVI--
>
> "msflinx" wrote:
>> In a post titled "Report Header Only on First Page", Charles Kangai
>> replied
>> that:
>> "It sounds like you have used a page header rather than a report header
>> in
>> your report. A report header automatically does what you are wanting to
>> do.
>> To create a report header, just place a textbox above your data region
>> and
>> type in the text for your header. It will automatically appear only on
>> the
>> first page."
>> I'll be danged if I can find the Report Header option in SRS. I have a
>> Page
>> Header and I also need a Report Header. Can anyone help, please
>> Thank you,
>> msflinx|||Are you using a table?
--
THANKS & PLEASE RATE THE POSTING.
--RAVI--
"msflinx" wrote:
> I see "Page Header" & "Page Footer". I do not see "Report Header" & "Report
> Footer". I'm using SRS2000. Is this a SRS2005 only option?
> Thanks,
> msflinx
> "Ravi" wrote:
> > Report Header and Footer is hidden by Default in Designer. Go to "Report" on
> > the Menu and You can See "Report header" and "Report Footer" options. Click
> > on the options to enable it.
> >
> > --
> > THANKS & PLEASE RATE THE POSTING.
> > --RAVI--
> >
> >
> > "msflinx" wrote:
> >
> > > In a post titled "Report Header Only on First Page", Charles Kangai replied
> > > that:
> > >
> > > "It sounds like you have used a page header rather than a report header in
> > > your report. A report header automatically does what you are wanting to do.
> > > To create a report header, just place a textbox above your data region and
> > > type in the text for your header. It will automatically appear only on the
> > > first page."
> > >
> > > I'll be danged if I can find the Report Header option in SRS. I have a Page
> > > Header and I also need a Report Header. Can anyone help, please
> > >
> > > Thank you,
> > >
> > > msflinx|||Sorry for the delay in answering. Had a "child emergency" and had to leave
the office.
Yes, I am using a table.
"Ravi" wrote:
> Are you using a table?
> --
> THANKS & PLEASE RATE THE POSTING.
> --RAVI--
>
> "msflinx" wrote:
> > I see "Page Header" & "Page Footer". I do not see "Report Header" & "Report
> > Footer". I'm using SRS2000. Is this a SRS2005 only option?
> >
> > Thanks,
> > msflinx
> >
> > "Ravi" wrote:
> >
> > > Report Header and Footer is hidden by Default in Designer. Go to "Report" on
> > > the Menu and You can See "Report header" and "Report Footer" options. Click
> > > on the options to enable it.
> > >
> > > --
> > > THANKS & PLEASE RATE THE POSTING.
> > > --RAVI--
> > >
> > >
> > > "msflinx" wrote:
> > >
> > > > In a post titled "Report Header Only on First Page", Charles Kangai replied
> > > > that:
> > > >
> > > > "It sounds like you have used a page header rather than a report header in
> > > > your report. A report header automatically does what you are wanting to do.
> > > > To create a report header, just place a textbox above your data region and
> > > > type in the text for your header. It will automatically appear only on the
> > > > first page."
> > > >
> > > > I'll be danged if I can find the Report Header option in SRS. I have a Page
> > > > Header and I also need a Report Header. Can anyone help, please
> > > >
> > > > Thank you,
> > > >
> > > > msflinx
How do I make xp_cmdshell transactions run on the client instead of the server
Post title says it all. Any ideas? I asked this earlier but it seems to have gotten lost in the shuffle.
Randall Arnold|||xp_cmdshell runs OS level commands on the server, not the client. Using
xp_cmdshell to run executables on the client is not, in my experience,
something that you even want to attempt. (I'm not saying it can't be
done -just that it is too risky and very troublesome.)
However, if you are trying to have xp_cmdshell read the client computer's
file system, you could take the following steps.
Put xp_cmdshell into a Stored Procedure, use host_name to determine the
client computer and build the unc filepath.
You will be able to read the directory, and read files and write files to
the client computer. (Assuming you have appropriate permissions on the
client computer. Of course, you would NEVER be executing xp_cmdshell with
network admin privileges, would you?)
While this can be done, the better question is why would you want to do it,
and what are the security ramifications?
Arnie Rowland, YACE*
"To be successful, your heart must accompany your knowledge."
*Yet Another certification Exam
"Randall Arnold" <randall.nospam.arnold@.nospamnokia.com.> wrote in message
news:GGaog.32766$Nb2.601914@.news1.nokia.com...
> Post title says it all. Any ideas? I asked this earlier but it seems to
> have gotten lost in the shuffle.
> Randall Arnold
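A minimal sketch of the approach described above, with the caveat that the share name here is an assumption and that xp_cmdshell runs under the SQL Server service account (or its proxy), so that account needs rights on the client share:
-- Build a UNC path from the calling client's host name and list that share's contents
CREATE PROCEDURE dbo.usp_ListClientShare
AS
BEGIN
    DECLARE @unc varchar(300)
    DECLARE @cmd varchar(400)
    SET @unc = '\\' + HOST_NAME() + '\SomeShare'      -- hypothetical share name on the client
    SET @cmd = 'dir "' + @unc + '"'
    EXEC master.dbo.xp_cmdshell @cmd
END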
>|||I want to be able to periodically launch a VB script on the client PC.
Security is not an issue at all in this environment.
Randall
"Arnie Rowland" <arnie@.1568.com> wrote in message
news:Ohd5kvfmGHA.4700@.TK2MSFTNGP02.phx.gbl...
> xp_cmdshell runs OS level commands on the server -not the client. Using
> xp_cmdshell to run executables on the client is not, in my experience,
> something that you even want to attempt. (I'm not saying it can't be
> done -just that it is too risky and very troublesome.)
> However, if you are trying to have xp_cmdshell read the client computer's
> file system, you could take the following steps.
> Put xp_cmdshell into a Stored Procedure, use host_name to determine the
> client computer and build the unc filepath.
> You will be able to read the directory, and read files and write files to
> the client computer. (Assuming you have appropriate permissions on the
> client computer. Of course, you would NEVER be executing xp_cmdshell with
> network admin priviledges, would you?)
> While this can be done, the better question is why would you want to do
> it, and what are the security ramifications?
> --
> Arnie Rowland, YACE*
> "To be successful, your heart must accompany your knowledge."
> *Yet Another certification Exam
>
> "Randall Arnold" <randall.nospam.arnold@.nospamnokia.com.> wrote in message
> news:GGaog.32766$Nb2.601914@.news1.nokia.com...
>|||Security is always an issue :) My question would be do you absolutely have
to do this from within SQL Server? If not, I would create a Windows Service
to launch your script on a timer, or something to that effect.
"Randall Arnold" <randall.nospam.arnold@.nospamnokia.com.> wrote in message
news:fObog.32769$Nb2.601990@.news1.nokia.com...
>I want to be able to periodically launch a VB script on the client PC.
>Security is not an issue at all in this environment.
> Randall
> "Arnie Rowland" <arnie@.1568.com> wrote in message
> news:Ohd5kvfmGHA.4700@.TK2MSFTNGP02.phx.gbl...
>|||I'm looking at any and all reasonable options for solving this need, working
several threads in parallel. One idea is similar to this one, but it requires
the sysadmin role to have write/execute privileges on another server in a
certain folder, but the IT guys here can't figure out how to give sysadmin
that ability.
*sigh*
Randall
"Mike C#" <xyz@.xyz.com> wrote in message
news:OzgeLjgmGHA.3600@.TK2MSFTNGP02.phx.gbl...
> Security is always an issue :) My question would be do you absolutely
> have to do this from within SQL Server? If not, I would create a Windows
> Service to launch your script on a timer, or something to that effect.
> "Randall Arnold" <randall.nospam.arnold@.nospamnokia.com.> wrote in message
> news:fObog.32769$Nb2.601990@.news1.nokia.com...
>|||You are using SQL 2000 right? Honestly this task doesn't belong on SQL
Server. You can probably force the issue, but you'd be better off overall
if you made this a separate application that operated independently of SQL
Server. Is there some particular reason you feel you need to have it kick
off from inside SQL Server?
"Randall Arnold" <randall.nospam.arnold@.nospamnokia.com.> wrote in message
news:MsAog.33256$_k2.585855@.news2.nokia.com...
> I'm looking at any and all reasonable options to solving this need,
> working several threads in parallel. One idea similar to this one but it
> requires the sysadmin role to have write/execute privileges on another
> server in a certain folder but the IT guys here can't figure out how to
> give sysadmin that ability.
> *sigh*
>|||Well there's always VBScript, .BAT files and the dos command shell "at"
command :) I think someone else already mentioned sharing a directory on
the client and mapping a drive to it from the server. Another possibility
(note that I haven't actually tried this...) might be to install MSDE on the
client and run an SP via linked server that runs xp_cmdshell on the client.
Note again that I'm not even sure this would work, as I haven't tried it,
but it might be worth a try... I'd still recommend using a scripting
language of some sort to do the job, but if xp_cmdshell is what you want,
then you might give this a try.
"Randall Arnold" <randall.nospam.arnold@.nospamnokia.com.> wrote in message
news:ldDog.33266$_k2.585753@.news2.nokia.com...
> I'm trying to do as much as possible via SQL server because that's what we
> have and I lack the tools to "do it right" otherwise. I'd much rather be
> doing this in ASP.NET and deploying everything on the intranet the way it
> SHOULD be done. This facility will be closed by this time next year so
> I'm not exactly seeing people jump all over my resource requests. ; )
> But in the meantime, I still have this demand to deal with...
> Randall|||I've tried scripting, but can't get our IT guys to figure out how to give
sysadmin write/execute privileges on a protected share (where the work needs
to take place and results stored).
Randall
"Mike C#" <xxx@.yyy.com> wrote in message news:jNEog.63$Ur7.47@.fe09.lga...
> Well there's always VBScript, .BAT files and the dos command shell "at"
> command :) I think someone else already mentioned sharing a directory on
> the client and mapping a drive to it from the server. Another possibility
> (note that I haven't actually tried this...) might be to install MSDE on
> the client and run an SP via linked server that runs xp_cmdshell on the
> client. Note again that I'm not even sure this would work, as I haven't
> tried it, but it might be worth a try... I'd still recommend using a
> scripting language of some sort to do the job, but if xp_cmdshell is what
> you want, then you might give this a try.
> "Randall Arnold" <randall.nospam.arnold@.nospamnokia.com.> wrote in message
> news:ldDog.33266$_k2.585753@.news2.nokia.com...
>|||If you don't mind, can I ask specifically what you're trying to accomplish?
Someone might be able to give you a better solution if we knew exactly what
you were trying to do. So far it sounds like you might be trying to read
some file(s) in, do some processing on them and write them back out? Or are
you trying to import or export data from SQL Server? Or maybe some
combination?
As for periodically executing a VBScript on a timer, the "at" command could
probably do the trick for you. As for setting share permissions, it all
depends on your network -- you might want to try one of the .networking, .vb
or .vbscript newsgroups (if you haven't already)
"Randall Arnold" <randall.nospam.arnold@.nospamnokia.com.> wrote in message
news:nyUog.32924$Nb2.605940@.news1.nokia.com...
> I've tried scripting, but can't get our IT guys to figure out how to give
> sysadmin write/execute privileges on a protected share (where the work
> needs to take place and results stored).
> Randall|||I'm trying to take daily SQL queries and automatically build Powerpoint
presentations containing charts and tables representing data from those
queries (production performance/yield metrics). It's too much work for
people do be constantly burdened with, but I lack the proper tools and have
had trouble getting expenditures approved. This facility will be shuttered
by this time next year... but meanwhile work has to get out and we're told
we have to solve everything for the folks in Mexico who will be taking our
jobs.
Anyway, one poster here showed me how to get SQL Server 2000 Reporting
Services for free, so I've ordered that. I'll try to put management off
until it comes in.
As for the folder access, I have found that simply granting the
Administrators group on server A write/execute privileges on a folder on
server B solves my scripting problem. Just having a hard time getting IT
guys to do it.
Thanks for your interest.
Randall
"Mike C#" <xyz@.xyz.com> wrote in message
news:utHt9nEnGHA.3656@.TK2MSFTNGP03.phx.gbl...
> If you don't mind, can I ask specifically what you're trying to
> accomplish? Someone might be able to give you a better solution if we knew
> exactly what you were trying to do. So far it sounds like you might be
> trying to read some file(s) in, do some processing on them and write them
> back out? Or are you trying to import or export data from SQL Server? Or
> maybe some combination?
> As for periodically executing a VBScript on a timer, the "at" command
> could probably do the trick for you. As for setting share permissions, it
> all depends on your network -- you might want to try one of the
> .networking, .vb or .vbscript newsgroups (if you haven't already)
> "Randall Arnold" <randall.nospam.arnold@.nospamnokia.com.> wrote in message
> news:nyUog.32924$Nb2.605940@.news1.nokia.com...
>