
Tuesday, March 27, 2012

executing multiple queries at one time

hi,

i am making an application which, among other things, deletes rows from several tables in the sql database.

for that i have written the following code:

SqlCommand cmd = new SqlCommand();

cmd.CommandText = "delete from " + dbConstt.DBSchema + ".PicassoSelectivityLog where QTID=" + qtid;

cmd.ExecuteNonQuery();

cmd.CommandText = "delete from " + dbConstt.DBSchema + ".PicassoSelectivityLog where QTID=" + qtid;

cmd.ExecuteNonQuery();

in this way, many more tables are to be deleted.

is there any need to create a new SqlCommand object again and again for each and every query? can it be done as shown above, or is there some better method?

thanks in advance..

divya

What about creating a static method, passing in the connection and the parts of the query that can change, composing the query within the method, and executing it asynchronously?

HTH, Jens K. Suessmeyer.

http://www.sqlserver2005.de
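To illustrate the reuse pattern discussed in this thread, here is a minimal, untested sketch. It assumes an already-open SqlConnection named `conn`, plus the `dbConstt.DBSchema` and `qtid` values from the question; the second table name is purely illustrative, since the original post repeats the same table twice:

```csharp
// Reuse a single SqlCommand for several DELETE statements.
// Assumes an open SqlConnection "conn" is in scope.
using (SqlCommand cmd = conn.CreateCommand())
{
    // Parameterize the value to avoid SQL injection; table names
    // cannot be parameters, so keep them hard-coded and trusted.
    cmd.Parameters.Add("@qtid", SqlDbType.Int).Value = qtid;

    // "PicassoAnotherLog" is a hypothetical placeholder table name.
    string[] tables = { "PicassoSelectivityLog", "PicassoAnotherLog" };
    foreach (string table in tables)
    {
        cmd.CommandText = "DELETE FROM " + dbConstt.DBSchema + "." + table
                          + " WHERE QTID = @qtid";
        cmd.ExecuteNonQuery();
    }
}
```

Only one SqlCommand object is created; its CommandText is swapped for each table, which matches the pattern the question asks about.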

Monday, March 12, 2012

execute SSIS package stored in remote SQL server from a client machine.

I have written a VB component to execute an SSIS package stored in SQL Server.

I am using Microsoft.SqlServer.Dts.DtsClient (DtsConnection/DtsCommand) and ExecuteReader, and everything works perfectly on the local machine. This is described on an MSDN site.

But my customer has a remote SQL Server and wants no extra BI clients/SQL Server clients etc. on the client machine, only an ordinary Windows client and a network connection to the SQL Server.

My question is: can you still use DtsClient, or do you have to do it some other way?

rose-marie

It depends on where your VB component is going to be running. You will need SSIS installed on the machine it runs on.

-Jamie

|||

The whole idea was to run the VB component on a client machine without any extras, i.e. without SSIS.

I understand that this is not possible. Is it possible to achieve this in another way?

rose-marie

|||

Can you not run the component somewhere else?

|||

No, in this case there is a demand for a stand-alone Windows application

without any dependencies other than OLE DB connections to SQL Server.

Otherwise I would have liked a web service on the server machine or something like that, but...

rose-marie

Sunday, February 19, 2012

Execute Package Task with SQL Server Location

I have written an SSIS package which in turn runs a couple of child packages via the Execute Package control flow task. All is fine if I use a Location of File System for the task. However, when I deploy to production I'd like the main package and its child packages to be stored in SQL Server.

I can set the Execute Package task location to SQL Server after copying my packages onto a development SQL box, and again it works. However, as it's now executing the package from SQL rather than within BIDS, it doesn't give me the debugging/flow information for the child package when I run the parent package. It's also a bit of a pain because, even though all the packages are part of the same solution, if I make a change to a child package I've got to remember to re-save a copy of it to the SQL database.

I have looked into making the Location field for the Execute Package task configurable, so that I could run it as a File System package within BIDS but as a SQL Server package in production. However, the Location field doesn't seem to be exposed as a property of the Execute Package task, so I can't set it from an expression or a configuration.

Does anyone have any advice on using the Execute Package task for child packages which are stored in SQL Server?

What I do, for smaller packages, is to create a boolean variable -- SQLServerExecution -- and then create two flows on the control flow. Each flow does exactly the same thing, except on one flow, there's a precedence constraint that checks for SQLServerExecution to be true (and if so, execute that branch) and the other checks for it to be false.

Set up the false branch's Execute Package tasks to use the file system, and set up the true branch's Execute Package tasks to use SQL Server.
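The two precedence constraints might be configured roughly like this (a sketch following the variable name used above, with each constraint's Evaluation operation set to "Expression"):

```
@[User::SQLServerExecution] == TRUE    -- branch whose Execute Package tasks point at SQL Server
@[User::SQLServerExecution] == FALSE   -- branch whose Execute Package tasks point at the file system
```

Only the branch whose expression evaluates true will run, so the same control flow logic executes against whichever package store the variable selects.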

Or, create two master packages -- one for SQL Server packages, the other for file system packages.

|||

My only advice is to use the same store type in all the environments. Otherwise support and deployment could become a nightmare as you won't be able to reproduce specific issues.

Sorry, I know that this does not answer your question.

|||

Thank you Phil that makes perfect sense. I'll have a think how easy it is to apply the solution to my parent package.

Think in future I'll also need to lay my C#/OO head to one side when doing SSIS and think a lot more carefully about when to split work off into separate packages rather than having them as separate Data Flows/Control Flow areas etc. within the same package.

|||

It wasn't really a specific question more an "any advice on this" type plea so your advice is most welcome Rafael, thank you.

Execute package results different to step by step execution - uses raw file

I have a package that has multiple data flow tasks. At the end of a task, key data is written into a raw file (file name stored in a variable) that is used as a data source for the next task. Each task requires a success from the preceding task.

Here's the rub:

If I execute the entire package, the results (record counts for certain tasks) differ significantly from when I execute each step in the package in turn (many more records, e.g. 5 vs. 350).

I get the feeling that the Raw file is read into memory before it is flushed by the previous task, or that the next task begins preparation tasks too early.

Any help is greatly appreciated.

I am running on Server 2003 x64 (although the same thing happens when deployed on a 32-bit Server 2003 machine).

Thanks

B.

Hi Brian,

Interesting.

A workaround question/suggestion: Would staging the data between tasks in a database staging table work for you?

Just curious,
Andy

|||

Hi Andy,

I was hoping someone has encountered this and has a quick fix - that said....

Short answer - yes - it would probably work, but I would prefer not to (I don't really like using temporary tables if I can avoid them).

The data being stored in the files is new GUIDs and their relationship to old composite IDs, generated during the processing of the records. I guess that storing that information in a staging database is a possibility, but I would hate to have to rework that entire block of code.

Ideally, if it can't be stored in raw files, then connection-specific temporary tables would have been my next option - but those are not really available either.
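For reference, the staging-table workaround Andy suggests might look something like the sketch below. All table and column names here are invented for illustration, not taken from the actual package:

```sql
-- Hypothetical staging table for the new-GUID / old-composite-ID
-- mapping produced during processing (names are illustrative).
CREATE TABLE dbo.Staging_IdMap (
    NewGuid        uniqueidentifier NOT NULL,
    OldCompositeId varchar(50)      NOT NULL,
    CONSTRAINT PK_Staging_IdMap PRIMARY KEY (NewGuid)
);

-- Clear it between runs so each execution starts clean.
TRUNCATE TABLE dbo.Staging_IdMap;
```

Each data flow task would then write its mappings to this table and the next task would read from it, replacing the raw-file handoff and avoiding any read-before-flush timing question.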

Thanks,

B.