Saturday, February 25, 2012

Question on db maintenance scripts

Relatively new to SQL Server, running both 2000 and 2005.
I have to set up database maintenance scripts, db and log backups,
optimizations etc in batch files so that they can be run by an external
scheduler.
When I produce a script from the SQL Server database maintenance
wizards and save it off in a bat file, the script will not work when
run manually, probably due to the difference in syntax.
I have some examples of backup scripts using sqlmaint.exe, but can't
find anything on how to script optimization/reorg jobs so that they can
be run by an external scheduler.
Can anybody point me to any docs or guidance on this?
Thanks in advance.
Gerry
I suggest you skip the scripting part and read up on sqlmaint.exe. So instead of calling
xp_sqlmaint, which in turn calls sqlmaint.exe, you call sqlmaint.exe directly. This gives you much
better control and options.
2005 is another story, where a maint plan is an SSIS job, but you can just let the Wizard and
designer create the package for you and schedule a job using DTEXEC.EXE to execute that package.
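For example, a minimal sketch of both in a batch file (server, database, and path names are placeholders):
rem SQL Server 2000: run a maintenance task directly with sqlmaint.exe
sqlmaint -S MyServer -D MyDb -RebldIdx 10 -Rpt C:\Logs\reindex.rpt
rem SQL Server 2005: execute the maintenance plan package saved as a file
dtexec /F "C:\Packages\MaintPlan.dtsx"
The external scheduler then just calls the batch file.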
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
"DataPro" <datapro01@.yahoo.com> wrote in message
news:1164723926.064627.125300@.l12g2000cwl.googlegroups.com...
> Relatively new to SQL Server, running both 2000 and 2005.
> I have to set up database maintenance scripts, db and log backups,
> optimizations etc in batch files so that they can be run by an external
> scheduler.
> When I produce a script from the SQL Server database maintenance
> wizards and save it off in a bat file, the script will not work when
> run manually, probably due to the difference in syntax.
> I have some examples of backup scripts using sqlmaint.exe, but can't
> find anything on how to script optimization/reorg jobs so that they can
> be run by an external scheduler.
> Can anybody point me to any docs or guidance on this?
> Thanks in advance.
> Gerry
>|||Tibor:
Thanks for the advice. I will do that.
On 2005 we have a scheduler (Control M) running business processing
jobs outside of SQL Server and I will have to schedule jobs that are
dependent on these jobs finishing.
How would you approach a db maintenance job that was job dependent?
Tibor Karaszi wrote:
> I suggest you skip the scripting part and read up on sqlmaint.exe. So instead of calling
> xp_sqlmaint, which in turn calls sqlmaint.exe, you call sqlmaint.exe directly. This gives you much
> better control and options.
> 2005 is another story, where a maint plan is an SSIS job, but you can just let the Wizard and
> designer create the package for you and schedule a job using DTEXEC.EXE to execute that package.
>|||> How would you approach a db maintenance job that was job dependent?
Not sure I understand your question, as Control M apparently already has that feature. Can you
elaborate?
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
"DataPro" <datapro01@.yahoo.com> wrote in message
news:1164735316.116658.270460@.j44g2000cwa.googlegroups.com...
> Tibor:
> Thanks for the advice. I will do that.
> On 2005 we have a scheduler (Control M) running business processing
> jobs outside of SQL Server and I will have to schedule jobs that are
> dependent on these jobs finishing.
> How would you approach a db maintenance job that was job dependent?
>
> Tibor Karaszi wrote:
>> I suggest you skip the scripting part and read up on sqlmaint.exe. So instead of calling
>> xp_sqlmaint, which in turn calls sqlmaint.exe, you call sqlmaint.exe directly. This gives you much
>> better control and options.
>> 2005 is another story, where a maint plan is an SSIS job, but you can just let the Wizard and
>> designer create the package for you and schedule a job using DTEXEC.EXE to execute that package.
>>
>|||Tibor:
I have to empty certain tables every evening, and that emptying task is
dependent on another job finishing.
I'm thinking an SQL statement... but with something like this, is
sqlmaint.exe used, or should I be looking at the osql utility or something
else?
Thanks in advance
Gerry
Tibor Karaszi wrote:
> > How would you approach a db maintenance job that was job dependent?
>
> Not sure I understand your question, as Control M apparently already has that feature. Can you
> elaborate?
> --
> Tibor Karaszi, SQL Server MVP
> http://www.karaszi.com/sqlserver/default.asp
> http://www.solidqualitylearning.com/
>
> "DataPro" <datapro01@.yahoo.com> wrote in message
> news:1164735316.116658.270460@.j44g2000cwa.googlegroups.com...
> > Tibor:
> >
> > Thanks for the advice. I will do that.
> >
> > On 2005 we have a scheduler (Control M) running business processing
> > jobs outside of SQL Server and I will have to schedule jobs that are
> > dependent on these jobs finishing.
> >
> > How would you approach a db maintenance job that was job dependent?
> >
> >
> > Tibor Karaszi wrote:
> >> I suggest you skip the scripting part and read up on sqlmaint.exe. So instead of calling
> >> xp_sqlmaint, which in turn calls sqlmaint.exe, you call sqlmaint.exe directly. This gives you much
> >> better control and options.
> >>
> >> 2005 is another story, where a maint plan is an SSIS job, but you can just let the Wizard and
> >> designer create the package for you and schedule a job using DTEXEC.EXE to execute that package.
> >>
> >>
> >|||Yes, you can use OSQL.EXE with either an input file or specify the TRUNCATE TABLE or DELETE command
using the /Q switch for OSQL.
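For example, a batch file line along these lines (server and table names are placeholders):
osql /S MyServer /E /Q "TRUNCATE TABLE dbo.StagingOrders"
/E uses a trusted connection; use /U and /P instead for a SQL Server login.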
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
"DataPro" <datapro01@.yahoo.com> wrote in message
news:1164801015.266092.235400@.l12g2000cwl.googlegroups.com...
> Tibor:
>
> I have to empty certain tables every evening, and that emptying task is
> dependent on another job finishing.
> I'm thinking an SQL statement... but with something like this, is
> sqlmaint.exe used, or should I be looking at the osql utility or something
> else?
> Thanks in advance
> Gerry
>
> Tibor Karaszi wrote:
>> > How would you approach a db maintenance job that was job dependent?
>>
>> Not sure I understand your question, as Control M apparently already has that feature. Can you
>> elaborate?
>> --
>> Tibor Karaszi, SQL Server MVP
>> http://www.karaszi.com/sqlserver/default.asp
>> http://www.solidqualitylearning.com/
>>
>> "DataPro" <datapro01@.yahoo.com> wrote in message
>> news:1164735316.116658.270460@.j44g2000cwa.googlegroups.com...
>> > Tibor:
>> >
>> > Thanks for the advice. I will do that.
>> >
>> > On 2005 we have a scheduler (Control M) running business processing
>> > jobs outside of SQL Server and I will have to schedule jobs that are
>> > dependent on these jobs finishing.
>> >
>> > How would you approach a db maintenance job that was job dependent?
>> >
>> >
>> > Tibor Karaszi wrote:
>> >> I suggest you skip the scripting part and read up on sqlmaint.exe. So instead of calling
>> >> xp_sqlmaint, which in turn calls sqlmaint.exe, you call sqlmaint.exe directly. This gives you
>> >> much
>> >> better control and options.
>> >>
>> >> 2005 is another story, where a maint plan is an SSIS job, but you can just let the Wizard and
>> >> designer create the package for you and schedule a job using DTEXEC.EXE to execute that package.
>> >>
>> >>
>> >
>|||Tibor Karaszi wrote:
> Yes, you can use OSQL.EXE with either an input file or specify the TRUNCATE TABLE or DELETE command
> using the /Q switch for OSQL.
> --
> Tibor Karaszi, SQL Server MVP
> http://www.karaszi.com/sqlserver/default.asp
> http://www.solidqualitylearning.com/
>
Thanks much

question on date (or string after using convert function) comparison

select case when '02/12/2005' >= '01/02/2006' then 'T' else 'F' end
the result returned is 'T' - why? (2005 should be < 2006)
On Mon, 13 Feb 2006 19:05:26 -0800, kei wrote:

>select case when '02/12/2005' >= '01/02/2006' then 'T' else 'F' end
>the result returned is 'T' - why? (2005 should be < 2006)
Hi kei,
You are comparing two string constants. They may look like dates to you
and me (though probably not the same dates - I'm from the part of the
world that uses dd/mm/yyyy), but SQL Server doesn't try to interpret
what you write - it takes you literally. As strings they are compared
character by character: the first characters tie ('0' = '0'), then at the
second position '2' sorts after '1', so '02/12/2005' is the 'larger' string.
You could try
SELECT CASE WHEN CAST('02/12/2005' AS datetime) >=
CAST('01/02/2006' AS datetime) THEN 'T' ELSE 'F' END
and pray that SQL Server interprets the ambiguous date format the same
way you do.
Or you could switch to a non-ambiguous date format:
SELECT CASE WHEN CAST('20051202' AS datetime) >=
CAST('20060201' AS datetime) THEN 'T' ELSE 'F' END
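If the strings really are dd/mm/yyyy and cannot be changed, CONVERT with an
explicit style number also removes the ambiguity (style 103 is dd/mm/yyyy;
101 would be mm/dd/yyyy):
SELECT CASE WHEN CONVERT(datetime, '02/12/2005', 103) >=
CONVERT(datetime, '01/02/2006', 103) THEN 'T' ELSE 'F' END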
For more information, check Tibor Karaszi's article on SQL Server date
and time handling: http://www.karaszi.com/SQLServer/info_datetime.asp
Hugo Kornelis, SQL Server MVP

Question on database/table privileges with sql server

Hi

I'll probably have to use SQL Server soon, but prior to that I have a
question concerning privileges and security.
Is it possible for someone to do like in Access, i.e. creating a
db/table that is locked with a password? My guess is that it will be
yes, but just in case... Now is it possible for someone to make a db/table
read only rather than to lock it totally?
Also, can the guy who has administrator privilege on the server
easily determine what the password for a db is, even if he's not the
guy who created it?
thanksOn 30 Aug 2004 17:31:13 -0700, J.Beaulieu wrote:

>Hi
> I'll probably have to use SQL Server soon, but prior to that I have a
>question concerning privileges and security.
>Is it possible for someone to do like in Access, i.e. creating a
>db/table that is locked with a password? My guess is that it will be
>yes, but just in case... Now is it possible for someone to make a db/table
>read only rather than to lock it totally?
>Also, can the guy who has administrator privilege on the server
>easily determine what the password for a db is, even if he's not the
>guy who created it?
>thanks

Hi J.,

The answer to all these questions is "no".

Access is a great tool - but comparing SQL Server to it is dangerous and
often misleading. Access control for different users in SQL Server is
completely different (and lots more professional) than anything Access has
to offer. I'll try to give you a brief overview.

A SQL Server database is a collection of tables, views, stored procedures,
triggers, etc. It can roughly be compared to an Access .mdb file. One SQL
Server server can host many SQL Server databases.

To gain access to a SQL Server database, you must first log in to the
server. There are two variations:
1. Trusted connection: SQL Server communicates with the Windows operating
system to find out what domain you are logged in to and what user name you
have. If you are user foo on domain bar, SQL Server will check if access
to the server is allowed for user bar\foo.
2. SQL Server login: You provide a name and a password. SQL Server checks
if the name supplied has access to the server and if the password matches.
The password is stored in encrypted form and I have never heard of a case
where the password was decrypted.
A new SQL Server installation will only allow access for administrators;
all other users can only gain access if someone has given them access to
the server.

Once you are logged in to the server, SQL Server will check which of the
databases on the server you may access. It is possible (though not
obvious) to find out which databases exist on a server, but it's
impossible to access databases unless someone has allowed you access.

Further access control within the database is also possible. For each
table or view, permission to insert, delete, update or select (view) rows
can be granted to (or revoked from) individual users. For update and
select permissions, this can even be drilled down to the column level (eg.
allow the managers to see rows in the Personnel table, but not the column
holding the Salary). For stored procedures, permission to execute can be
given to or taken from users. Other notable permissions are the permission
to create new objects in the database or the permission to grant
permissions to other users.
If you have many users, managing permissions is easier if you set up
roles. An example: you set up a role "Auditor". Now, you can grant select
permission and deny insert, update and delete permissions on all tables and
all views to this role. Whenever a new auditor is hired, the administrator
merely has to add the Auditor role to the userid of the new auditor and
(s)he has read access to all data.
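In T-SQL terms, a minimal sketch of that setup (the table and user names are
placeholders; the GRANT/DENY pair would be repeated for each table or view):
EXEC sp_addrole 'Auditor'
GRANT SELECT ON dbo.Personnel TO Auditor
DENY INSERT, UPDATE, DELETE ON dbo.Personnel TO Auditor
EXEC sp_addrolemember 'Auditor', 'NewAuditorUser'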

There's lots more to SQL Server security than this, but I'll stop here for
now. If you want to find out more, read about it in Books Online. It's on
your computer if you have installed SQL Server. If it isn't, you can also
find it online:

http://msdn.microsoft.com/library/e...asp?frame=true

Best, Hugo
--

(Remove _NO_ and _SPAM_ to get my e-mail address)|||jfbeaulieu2003@.yahoo.com (J.Beaulieu) wrote in message news:<eb1b8a74.0408301631.5a911870@.posting.google.com>...
> Hi
> I'll probably have to use SQL Server soon, but prior to that I have a
> question concerning privileges and security.
> Is it possible for someone to do like in Access, i.e. creating a
> db/table that is locked with a password? My guess is that it will be
> yes, but just in case... Now is it possible for someone to make a db/table
> read only rather than to lock it totally?
> Also, can the guy who has administrator privilege on the server
> easily determine what the password for a db is, even if he's not the
> guy who created it?
> thanks

In MSSQL, you control access to databases and tables with permissions.
First, you grant someone access to the server, then to databases, and
finally to individual objects (not just tables). If you want a
read-only table, for example, you only GRANT SELECT, you don't GRANT
UPDATE.

A member of the sysadmin role can do anything in any database, but
there are other roles which provide more restricted permissions.
Someone can be in the db_owner role for database A, and be able to do
anything in that database, but without access to database B he can't
do anything there.

Check out the "Managing Security" section of Books Online, and
especially "Managing Permissions" to get more information. Make sure
you look at the section called "Using Ownership Chains", as it's a
very important part of MSSQL security.

Simon

question on database notification...

hi to all,

my question is: how can the database notify the user if there is new inserted data in the database? Like, for example, I insert new data, then when I go to the display reports page I would see a link saying New Report Received, and this will take me to the details of the new inserted data or the report sent by another user. Is this doable? And by the way, I'm using ASP.NET 2.0 and a SQL database... thanks...

Maybe you can use Query Notifications which is a new feature available in SQL Server 2005.

There are 3 ways to implement it:

- Low-level implementation (provided by SqlNotificationRequest class)

- High-level implementation (provided by SqlDependency class)

- SqlCacheDependency helper classes, in case you want to detect data changes to refresh caches.

More information: http://msdn2.microsoft.com/en-us/library/t9x04ed2.aspx
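One server-side prerequisite worth knowing: Query Notifications depend on Service Broker, which must be enabled on the database. A sketch (the database name is a placeholder):
ALTER DATABASE MyAppDb SET ENABLE_BROKER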

Question on Data/structure restore

New to SQL Server 2000. We have an obsolete database that we need to
save off for x number of years. DB2 has utilities (DB2Look/Export)
that allow for the export of the data along with a schema and script
that enables the future recreation of the structure of the databases
and tables to include RI etc. You can save off the architecture and
relationships of the tables as well as the data.

Does SQL Server have anything similar?

Failing that, our plan is to backup the data and logs then image the entire disk.

Thanks in advance.

Gerry
Not exactly sure what you want.

SQL Server 2000 of course has backup and restore capability, and the Enterprise Manager utility has the ability to script database objects and relationships.
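For the backup part, it is a one-liner in T-SQL; a sketch with placeholder names and paths:
BACKUP DATABASE ObsoleteDb
TO DISK = 'D:\Backups\ObsoleteDb.bak'
WITH INIT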

A lot depends upon why you are archiving the data and what its intended use is.|||OK. If you had to save off a database...both the data and the 'structure' of the tables, relationships between tables etc....for possible recreation years down the road...how would you do it?|||sql server has a backup wizard, backup the DB to a .bak file and put it wherever, you can restore it just as easily with the wizard - table structures and all the data

also, the design of the database should be documented in the technical specs in a word doc. so you could recreate the whole thing from documentation if necessary|||Look up BACKUP in Books online

Do you have the SQL Server Client tools installed?|||Also look here

http://weblogs.sqlteam.com/tarad/archive/2004/08/04/1876.aspx|||also, the design of the database should be documented in the technical specs in a word doc. so you could recreate the whole thing from documentation if necessary
In a Word document? Just script the database ddl to a text file.|||I believe the scripting of the ddl of the objects in addition to a backup AND a data export is a desired redundancy. I believe the scripting capabilities of SS should do the job. Thanks for all the interesting info.

Gerry

Question on data source and data source view of MOLAP cube

Hi, all experts here,

Thank you very much for your kind attention.

I am having a question on the data source and data source view of a MOLAP cube. As we know, a MOLAP cube stores all its data in MOLAP format, so it does not have anything to do with its source database. As a result, I am wondering if we could actually remove the data source and data source view of the MOLAP cube? (In my case, no updates need to be propagated from the underlying data source to the MOLAP cube.)

Hope it is clear for your help.

I am looking forward to hearing from you for your advices and help.

With best regards,

Yours sincerely,

Hello again Helen.

You will always need a data source and a data source view for the cube.

What you can do, in order to create an off-line cube, is to use local cubes, if this is the scenario you are thinking about.

With local cubes you create a local cube file that you can put on a laptop and analyze data without being connected to SSAS2005.

Both ProClarity and Excel2007 have this option.

HTH

Thomas Ivarsson

|||

Hi, Thomas,

Thank you for your very kind advices.

But I found that a change to the data source view does not really get reflected in the data source view of the cube? I mean, after I created the cube, I edited the data source view, but when I clicked on the cube, its data source view remained as the one before I made the change?

Also, I am a bit confused: as all the data for the cube is already stored in the SSAS2005 database (MOLAP), why do we still need to care about the data source and data source view if we don't need any updates from the underlying data source?

Thank you for your kind attention and I am looking forward to hearing from you shortly for your further advices.

With best regards,

Yours sincerely,

|||

Hi, Helen.

I am not sure about the scenario here? Do you mean that you will create the cube once and never process it again?

If new dimension records and new fact records arrive from the data source you will always need to process the cube. If you need to process, you will need to have the data source view.

During a day's work new transactions will be registered in the source system. You will have to do a new process of the cube in order to get them from the source system into the cube.

Are you making cubes for laptop users that they will use off-line? Or are you building cubes using real time OLAP?

Regards

Thomas Ivarsson

|||

Hi, Thomas,

Thank you for your reply. In my case, well, I do deploy the whole project containing the cube after I change the data source and data source view, but the cube remains the same as before I made any change. Therefore, I thought the data source and data source view do not have anything to do with the MOLAP cube after its first deployment to the Analysis Services server.

Does it work in this way? Or did I miss anything? Thank you, and your advices are most welcome.

With best regards,

Yours sincerely,

|||

Hello Helen. If you make a change like adding an attribute in the data source (source table/view) or the data source view (= named calculation), this change will not be a part of the cube until you add the attribute in the dimension editor.

In order to make a change in a cube you must map new columns to the measure group or to the dimension in the cube/dimension editor.

The data source view does not automatically update the cube or the dimensions. If you do a refresh in the data source view (right click), all changes in the source database will be exported to the data source view.

HTH

Thomas Ivarsson

|||

Hi, Thomas,

Thanks a lot for your kind advices.

It cleared my confusion over the above question.

Thanks again.

With best regards,

Yours sincerely,

Question on Data out of sync

Greetings,
Here is the scenario: Server A is Prod, Server B is reporting or
history.
Server A does Transactional replication to Server B. Server A gets
records purged at the given point of retention. So as you can tell
Server B will have much more data than Server A. Today, I get the
dreaded call of that data in server A does not seem to be at Server B.
We may have had some issues with replication a couple of weeks ago but
it appeared that they were corrected and replication has been working
fine since.
So, to my questions:
Is there any way of pushing over all data on Server A to B to make
sure they are in sync without losing the older data as well?
Any good way of comparing the data in Server A to Server B without
having to go row by row through each table?
I am afraid that we may be out of sync and would like to rectify it
without having to drop all replication, bcp out and then bcp back in
to get them back in sync. As you can imagine that is a major headache
and would be my last resort. Any and all ideas are welcome.
Thanks,
MM
You could also have a look at Innovartis DB Ghost at
http://www.innovartis.co.uk/ which can compare & synchronize your
database schema AND data (plus build, verify & version control). I have
found you can use SQL Enterprise Manager to generate scripts to quickly
drop & re-create replication and run DB Ghost to ship your missing rows.
John McGrath MCSE
*** Sent via Developersdex http://www.codecomments.com ***
Don't just participate in USENET...get rewarded for it!

Question on Data out of sync

I'm not too sure if it is relevant, but to ensure that
deletes on the publisher don't go to the subscriber, the
synctype is set to NONE.
To check that the data is in sync, you can use validation
(right-click on the publication and select validate
subscriptions). One option here is to use a
binary_checksum, and this is the same function you can
use to compare the publisher and subscriber tables using
linked servers in order to manually synchronize the
tables. Having said that, I'd say that most people just
reinitialize to sort out the issue.
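If you prefer to script the validation rather than use the UI, a sketch,
run in the publication database (the publication name is a placeholder):
EXEC sp_publication_validation @publication = N'MyPublication'
The results then show up in the distribution agent history.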
Rgds,
Paul Ibison SQL Server MVP, www.replicationanswers.com
(recommended sql server 2000 replication book:
http://www.nwsu.com/0974973602p.html)
Paul,
Could you please expand on this. I am not completely sure I understand
how to do what you are talking about. I tried looking up references to
the checksum function but could not tell how to use it to make it fit
this situation.
Thanks for the info!
MM
*** Sent via Developersdex http://www.codecomments.com ***
Don't just participate in USENET...get rewarded for it!
|||Mike,
you could use
Data Compare from Red Gate. Cheap, easy and works well.
http://www.red-gate.com/SQL_Data_Compare.htm
Checksum can be used but is not 100% guaranteed to catch every difference.
select A.empId
from (select empId, binary_checksum(*) as cks from newEmp) A
inner join (select empId, binary_checksum(*) as cks from oldEmp) B
on A.empId = B.empId
where A.cks <> B.cks
Rgds,
Paul Ibison

Question on data optimizations

The database maintenance plan wizards present the ability to do either
data/index page reorgs or update statistics.
Is there any reason you can't do both?
Is it because an update of the statistics is automatically done on a
reorg?
Thanks in advance.
Gerry
Hi

> Is it because an update of the statistics is automatically done on a
> reorg?
When SQL Server rebuilds indexes, statistics are automatically
updated
"DataPro" <datapro01@.yahoo.com> wrote in message
news:1164806696.954151.173330@.l39g2000cwd.googlegroups.com...
> The database maintenance plan wizards present the ability to do either
> data/index page reorgs or update statistics.
> Is there any reason you can't do both?
> Is it because an update of the statistics is automatically done on a
> reorg?
> Thanks in advance.
> Gerry
>
|||DataPro wrote:
> The database maintenance plan wizards present the ability to do either
> data/index page reorgs or update statistics.
> Is there any reason you can't do both?
> Is it because an update of the statistics is automatically done on a
> reorg?
> Thanks in advance.
> Gerry
>
Reindexing will update the statistics...
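If you want to run it by hand, a minimal sketch with a hypothetical table name:
DBCC DBREINDEX ('dbo.Orders')
-- rebuilds every index on dbo.Orders; the index statistics are refreshed as part of the rebuild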
Tracy McKibben
MCDBA
http://www.realsqlguy.com
|||Hello,
You are correct; Statistics will be updated as part of reindex.
Thanks
Hari
"DataPro" <datapro01@.yahoo.com> wrote in message
news:1164806696.954151.173330@.l39g2000cwd.googlegroups.com...
> The database maintenance plan wizards present the ability to do either
> data/index page reorgs or update statistics.
> Is there any reason you can't do both?
> Is it because an update of the statistics is automatically done on a
> reorg?
> Thanks in advance.
> Gerry
>
|||Thanks much to all.

Question on data mining report performance optimization

Hi, all experts here,

Would any of you give me any ideas on how we could optimize reports on data mining models? (As we know, for a data mining report, we have to select the mining model and the case table.)

Hope it is clear for your advices and help.

Thanks a lot in advance and I am looking forward to hearing from you shortly.

With best regards,

Yours sincerely,

What kind of report is this? An accuracy report, such as lift chart or scatter plot?

Also, what kind of algorithm is involved?

|||

Yes, accuracy report and lift chart report. All SQL Server 2005 built-in algorithms there.

Any suggestions?

With best regards,

Yours sincerely,

|||

Service Pack 2 added some performance improvements in generating accuracy reports for the Naive Bayes algorithm.

Besides this, there is not much you can do to improve the performance of these reports.

You could try to use the tip here: http://sqlserverdatamining.com/DMCommunity/TipsNTricks/4566.aspx and implement your own accuracy chart using reporting services

What kind of problems are you running into? Could you please let us know how many test cases are you using and how much time does it take to populate the report?

|||The performance of the lift chart/etc. is directly related to the number of cases you are testing. I.e. if you cut the number of cases in half, you will cut the time in half.|||

Hi, all,

Thanks a lot for your very kind advices and help.

With best regards,

Yours sincerely,

Question on Data Mining add-in for Office Visio 2007

Hi, all experts here,

Thank you for your kind attention.

Data mining add-in for Office Visio 2007 is great. But what I am concerned about is: how could we deliver the data mining model results to end users (non-technical users) in a better way (e.g. integrated with other possible clients) to make them dynamic and easily understood?

Hope my question is clear for your help.

With best regards,

Yours sincerely,

Not sure what you are looking for. We do have the thin client viewers for tree models and NB/Clustering which are interactive. Using Visio you can save as a web page which provides some interactivity|||

Hi, Jamie,

Thanks a lot for your help.

With best regards,

Yours sincerely,

Question on Custom code assembly

Hi,
I have to build a report whose content is very complex and comes from
multiple tables. I think the best possible way is to write custom
code (assembly). Is it possible to return a data set using a custom code
assembly and assign it to a report?
Please help.
-- Thanks
RK
On Oct 19, 11:19 am, "S V Ramakrishna"
<ramakrishna.seeth...@.translogicsys.com> wrote:
> Hi,
> I have to build a report whose content is very complex and comes from
> multiple tables. I think the best possible way is to write custom
> code (assembly). Is it possible to return a data set using a custom code
> assembly and assign it to a report?
> Please help.
> -- Thanks
> RK
Hi,
What you could do is write a stored proc for your dataset that handles
all (or most of) your complex table handling.
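A sketch of that idea, using hypothetical Northwind-style tables:
CREATE PROCEDURE dbo.rpt_OrderTotals
AS
SET NOCOUNT ON
SELECT o.OrderID, c.CompanyName,
SUM(d.Quantity * d.UnitPrice) AS OrderTotal
FROM Orders o
JOIN Customers c ON c.CustomerID = o.CustomerID
JOIN [Order Details] d ON d.OrderID = o.OrderID
GROUP BY o.OrderID, c.CompanyName
The report's dataset then simply calls the procedure.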
V.|||Hi,
Thanks for the reply. Stored Procedure is a very good idea. I have a small
doubt. Is it possible at all to have a return type other than primitive
type (for example, an array of integers) for a method in a custom code
assembly in SSRS?
--
RK
"Vinnie" <vsempoux@.gmail.com> wrote in message
news:1192786497.261274.131220@.e34g2000pro.googlegroups.com...
> On Oct 19, 11:19 am, "S V Ramakrishna"
> <ramakrishna.seeth...@.translogicsys.com> wrote:
>> Hi,
>> I have to build a report whose content is very complex and comes from
>> multiple tables. I think the best possible way is to write custom
>> code (assembly). Is it possible to return a data set using a custom code
>> assembly and assign it to a report?
>> Please help.
>> -- Thanks
>> RK
> Hi,
> What you could do is write a storeproc for your dataset that handles
> all (or most of)
> your complex table handling.
> V.
>

Question on cube write back

Hi, all experts here,

I encountered a problem with write back on a cube which contains measures with count aggregation rather than sum: the error said write back cannot be allowed on measure groups with aggregations other than sum. Is it saying we can only enable write back on cubes with measure groups all based on sum aggregations?

But in my case, the measures need to be counted rather than summed. What can I try in order to enable write back on this cube?

I am looking forward to hearing from you for your advices and thank you very much in advance.

With best regards,

Yours sincerely,

I'm having trouble thinking of a situation where you would need to writeback to a count measure, but that is beside the point.

One work around might be to create a new column in your fact table for this measure and pre-populate it with a value of 1 during ETL. Then you would be able to use a sum aggregation and get the same value as you would with a count measure. It would then also be possible to write back to this measure.
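A sketch of that work-around (the fact table and column names are hypothetical):
ALTER TABLE dbo.FactEvents
ADD RowCnt int NOT NULL DEFAULT (1)
-- existing rows are filled in with 1 via the default; the ETL can rely on it for new rows
A SUM over RowCnt then returns the same value a count of the fact rows would.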

|||

Hi,

In my data, the facts are bit data, with only the values 1 and 0 (meaning different things respectively). Therefore I need to count these different situations happening in the fact table across different levels. There are no numeric data types at the moment which would be suitable for sum.

Thanks for your advices and I am looking forward to hearing from any of you here more ideas in this situation. Maybe you would give me some better ideas on how to deal with it.

With best regards,

Yours sincerely,

|||

Helen,

Explain the goal to me better, I didn't understand... Explain it to me as if I were a dummy!

:-)

Regards!

|||

HI, PedroCGD,

The facts in the fact tables have 0 or 1 values only (0 and 1 therefore represent different meanings), so I don't see any point in summing these facts in any cube. Instead, count aggregation makes more sense, as it can count the occurrences of the fact being 0 or 1.

Hope my explanation is clear for your help.

With best regards,

Yours sincerely,

|||It sounds like you have some flags which should be modelled as separate attributes, not as measures. You would have one count measure and set up either a single dimension or a combined dimension (also known as a 'junk' dimension). In this way you should be able to get the count by any combination of attributes.|||

Hi, Darren,

Thanks for your kind advices.

As you kindly advised, I will then need to update the underlying relational database (as the underlying data warehouse is designed by others)? Since at the moment, the data residing in the data warehouse does not have any junk dimensions at all. All those flag attributes are residing in the fact tables.

Would I need to create a single dimension for each flag attribute, or combine some of the flag attributes together into one single dimension? But only one count measure for all these dimension attributes?

Would it be any good to complete these in the data source view of the cube? Thanks again. And I hope you can give me further advices on it.

With best regards,

Yours sincerely,

Question on Cube storage location

Hi, all experts here,

Thank you for your kind attention.

Why, when I select the storage location for any partition of a cube, is only the local system drive available as an option? Is there any way to choose a storage location other than the local system drive for the cube?

Thanks a lot in advance for your advices and I am looking forward to hearing from you shortly.

With best regards,

Yours sincerely,

Hi,

The server property AllowedBrowsingFolders specifies the list of folders (separated by the '|' character) available to the browsing dialog used to enter the storage location of a partition (you can edit that property in the server's properties dialog from SQL Management Studio - you need to check the 'Show Advanced Properties' checkbox).

You can only put local folders in the AllowedBrowsingFolders property (so no shares, even from the same machine).
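As an illustration only (the paths are placeholders; the instance folder varies per installation), the property value is a pipe-separated list along these lines:
C:\Program Files\Microsoft SQL Server\MSSQL.2\OLAP\Data|D:\CubeStorage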

Adrian Dumitrascu

|||

Hi, Adrian,

Thank you very much for your kind advices.

With best regards,

Yours sincerely,

Question on cube processing and deployment

Hi, all experts here,

Thank you very much for your kind attention.

I encountered a very strange problem processing and deploying the cube to the Analysis Services server. The fact table the measure group comes from only has several hundred rows, while at the same time I have 6 other dimensions with many attributes. My system drive currently still has nearly 1 GB of space left, but when I deployed the cube, it took ages to run and ended up with a processing failure because it ran out of space on my system drive.

This problem never happened before and it is really frustrating and I don't know what is going on.

Would any experts here please give me any advice? (I have tried for ages, but still nothing really changes.)

I am looking forward to hearing from you and thanks a lot in advance.

With best regards,

Yours sincerely,

Do any experts know what the cause of that problem is?

Thanks.

With best regards,

Yours sincerely,

question on cube hierarchies

Hi, all experts here,

Thank you for your kind attention.

I am having a question on cube hierarchy creation. Is it necessary to create two different hierarchies for the hierarchies below?

Year-Quarter-Month-Day

Year-Month-Day

Should we create two different user-defined hierarchies for the above hierarchies? One of the hierarchies is actually included in the other, therefore I am wondering if we need to create two different ones for them or just create the one with more levels.

Hope my question is clear for your help and I am looking forward to hearing from you shortly.

With best regards,

Yours sincerely,

Hello Helen. I recommend to create the one with all levels.

Do not forget the attribute relations between the levels in the user hierarchy.

HTH

Thomas Ivarsson

|||

Hi, Thomas,

Thanks a lot for your kind advices.

With best regards,

Yours sincerely,

Question on cube granularity again

Hi, all,

Again, I wonder whether it is a good idea to design the granularity of the cube to be the same as in its underlying relational database, e.g. the most detailed level as it is in its fact table and its dimension tables.

I am looking forward to hearing from you for any advices.

Thanks a lot in advance.

With best regards,

Yours sincerely,

It really depends on your requirements. I would say you probably should make the granularities the same unless it's going to cause serious performance problems with the processing and/or querying of your cube - it just means you have all the data your users might want available. Changing the granularity of your cube once you've gone into production would also be a bit messy.

Chris

|||

Hi, Chris,

Thanks a lot for your advices. Very good point.

With best regards,

Yours sincerely,

Question on cube and dimension deployment in production

Hi, all experts here,

Would any of you here shed any light on how we could deploy cubes and dimensions to a server other than the one where they were created? And also, what permissions at a minimum need to be set up for them to be accessed in a production environment?

Hope it is clear for your help.

Thanks a lot and I am looking forward to hearing from you shortly.

With best regards,

Yours sincerely,

Hi Helen,

Take a look at the functionality of the Deployment Wizard: http://msdn2.microsoft.com/en-us/library/ms176121.aspx

Regarding permissions, do you mean administrative access or the ability to query data? In either case BOL is again the best place to start: http://msdn2.microsoft.com/en-us/library/ms174517.aspx. Or do you mean the ability just to connect to the cube at all? In which case this new paper on connectivity problems might help - it discusses everything that might stop you from connecting:

http://www.microsoft.com/technet/prodtechnol/sql/bestpractice/CISQL2005ASCS.mspx

Regards,

Chris

|||

Hi, Chris,

Thanks a lot for your very kind help. Definitely they've been very helpful to clear up my confusions and questions on that.

Yes, I am concerned about being able to access the cubes and dimensions after they are deployed to another production environment.

Thanks a lot again.

With best regards,

Yours sincerely,

Question on Cube aggregation percentage?

Hi, all experts here,

As I am a bit confused about the aggregation percentage of a cube: when we partition the cube, we need to set the aggregation percentage. But how does Analysis Services really work here with the pre-defined cube aggregation percentage? Doesn't the Analysis Services engine only aggregate data on cube hierarchies and user-defined aggregations? (Assume we have defined both the natural and user-defined hierarchies and user-defined aggregations.) How can we determine what percentage of aggregations that amounts to?

Hope it is clear for your advices and help and I am looking forward to hearing from you shortly and thanks a lot in advance.

With best regards,

Yours sincerely,

The aggregation percent number is only an approximate representation of the number of aggregations and estimated performance benefit of aggregations designed by the aggregation wizard. Note that it's a bit misleading in that you don't want 100% but something more like 30%. I recommend reading http://download.microsoft.com/download/8/5/e/85eea4fa-b3bb-4426-97d0-7f7151b2011c/SSAS2005PerfGuide.doc for more information on this subject.|||

Hi, Matt,

Thanks a lot for your kind advices and the very helpful imformation.

With best regards,

Yours sincerely,

Question on Cube actions

Hi, all experts here,

I am having a question on cube actions. What is the easiest way for us to see all action properties (including the objects that trigger the actions, etc.)? Hope it is clear for your help and I am looking forward to hearing from you shortly.

Thanks a lot in advance.

With best regards,

Yours sincerely,

Hello Helen. Are you missing any information in the actions tab of the cube editor?

HTH

Thomas Ivarsson

|||

Hi, Thomas,

Thank you very much for your kind attention.

Well, I asked the question from an end-user point of view: when they browse their data from end clients like Excel 2007, rather than clicking on each object displayed in Excel to see the available cube actions, is there any way that they can have an overview of all available cube actions on the cube they are browsing in Excel? So that they are able to have a whole idea of what actions are available on the cube.

Thank you very much again for your kind advices and I am looking forward to hearing from you shortly.

With best regards,

Yours sincerely,

|||

Hi Helen. I have not seen this option in Excel2007. I think it is only possible by right clicking a member.

Regards

Thomas Ivarsson

|||

Hi, Thomas,

Thank you very much for your very kind atttention and advices and help.

With best regards,

Yours sincerely,

|||I suppose you could shade the back color of all cells that have actions on them using a Back_Color(this) = assignment in the calc script. I personally think that would clutter a spreadsheet, but it's an option if you're desperate.|||

Hi, Furmangg, thank you very much for that. It's been very helpful advices.

With best regards,

Yours sincerely,

question on crystal reports

Hi,
there is a requirement of making the Login Name field just 4 characters. This is a Group Name type field which shows login names. We have to decrease the length of login names to save space. How should I proceed?
Thanks
Sorry, not clear on the question, are you trying to reduce the length in the report, or in the database ?

Question on correct syntax for stored procedure

I have an ASP.NET datagrid in which I am passing the following:
PartNumber (which is actually a varchar)
PartType (varchar)
The PartType is just the description. In the PartType table I have two fields:
ptID (what I actually need to access)
Description (what is being passed to the stored procedure)
What I want to do is insert into the Parts table the PartNumber (no problem)
and the ptID.
When I call existing records I use the following SQL statement:
SELECT p.PartNumber, p.ID as ptID, pt.description FROM parts p INNER JOIN
PartTypes pt on pt.ID = p.PartTypeID WHERE p.PartNumber = @PartID;
What I am trying to figure out is where in the stored procedure do I fit the
inner join? This is what I have thus far:
@PartNumber varchar (15),
@PartType varchar (100)
AS
if NOT EXISTS (Select * from Parts where PartNumber = @PartNumber)
BEGIN
insert into Parts
(PartNumber, PartTypeID)
VALUES
(
@PartNumber,
@PartType <--but I don't actually want the PartType, I want the PartTypeID
)
END
How do I do this? Let me know if you need more info. Thanks!
Leckey,
You can run the lookup query just before the insert statement
in your stored procedure. The query would be something like
SELECT @PART_ID = pt.ID
FROM PartTypes pt
WHERE pt.Description = @PartType
@PART_ID needs to be declared within the SP as an integer type variable. Now
you can use this
@PART_ID in your insert statement as the second param.
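Alternatively, the lookup and the insert can be combined so that no variable
is needed; a sketch using the poster's table names:
IF NOT EXISTS (SELECT * FROM Parts WHERE PartNumber = @PartNumber)
BEGIN
INSERT INTO Parts (PartNumber, PartTypeID)
SELECT @PartNumber, pt.ID
FROM PartTypes pt
WHERE pt.Description = @PartType
END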
Hope this helps.
Regards,
Joe.
"leckey" <u23706@.uwe> wrote in message news:62b346eec9d0a@.uwe...
> I have an ASP.NET datagrid in which I am passing the following:
> PartNumber (which is actually a varchar)
> PartType (varchar)
> The PartType is just the description. In the PartType table I have two
fields:
> ptID (what I actually need to access)
> Description (what is being passed to the stored procedure)
> What I want to do is insert into the Parts table the PartNumber (no
problem)
> and the ptID.
> When I call existing records I use the following SQL statement:
> SELECT p.PartNumber, p.ID as ptID, pt.description FROM parts p INNER JOIN
> PartTypes pt on pt.ID =p.PartTypeID WHERE p.PartNumber = @.PartID;
> What I am trying to figure out is where in the stored procedure do I fit
the
> inner join? This is what I have thus far:
> @.PartNumber varchar (15),
> @.PartType varchar (100),
> AS
> if NOT EXISTS (Select * from Parts where PartNumber = @.PartNumber)
> BEGIN
> insert into Parts
> (PartNumber, PartTypeID)
>
> VALUES
> (
> @.PartNumber
> @.PartType <--but I don't actually want the PartType, I want the PartTypeID
> )
> END
> How do I do this? Let me know if you need more info. Thanks!

Question on copying over overlapping data from one database to another...

... and preserving the relationships.
(Note, this is more of a SQL question than a SQL-related ASP.NET question...)
Say I have two databases, D1 and D2, with the same three tables:
Companies
Departments
Employees
With the standard one-to-many relationships down the line, with each
table having a PK, IDENTITY field like CompanyID, DepartmentID, and
EmployeeID.

I have a smattering of data in each of D1 and D2 for these tables with
overlaps in the ID field values. What I want to be able to do is copy
over D1's data to D2, preserving the relationships in D1 even though
there are ID overlaps in D2. So the tool I'd use would have to be
smart enough to check for ID dups in D2 and appropriate change the ID
values in D1's tables, maintaining the relationships.
Is there some built-in SQL tool to do this or do I have to do this myself?
Thanks!

There may be 3rd party tools that do this, but there's no way I'm aware of to do it in Enterprise Manager or SQL Server Management Studio Express. I do have an idea, though, of a way you could at least merge the tables using standard SQL:

For instance, if we take the Northwind database and imagine you have two Products tables in it, both with the exact same columns and both populated with data where the ProductID (PK) might be the same in both tables. We shall call these two tables Products and Products2. Now, if you wanted to merge these two tables into one new table with a new, unique primary key you would do this:

Create a new table called (say) Products_Temp with the same structure as Products and make sure that ProductID is set as an Identity column. The easiest way is to create a script from the Products table, which would look like:

CREATE TABLE [dbo].[Products](
	[ProductID] [int] IDENTITY(1,1) NOT NULL,
	[ProductName] [nvarchar](40) COLLATE Latin1_General_CI_AS NOT NULL,
	[SupplierID] [int] NULL,
	[CategoryID] [int] NULL,
	[QuantityPerUnit] [nvarchar](20) COLLATE Latin1_General_CI_AS NULL,
	[UnitPrice] [money] NULL CONSTRAINT [DF_Products_UnitPrice] DEFAULT ((0)),
	[UnitsInStock] [smallint] NULL CONSTRAINT [DF_Products_UnitsInStock] DEFAULT ((0)),
	[UnitsOnOrder] [smallint] NULL CONSTRAINT [DF_Products_UnitsOnOrder] DEFAULT ((0)),
	[ReorderLevel] [smallint] NULL CONSTRAINT [DF_Products_ReorderLevel] DEFAULT ((0)),
	[Discontinued] [bit] NOT NULL CONSTRAINT [DF_Products_Discontinued] DEFAULT ((0)),
	CONSTRAINT [PK_Products] PRIMARY KEY CLUSTERED
	(
		[ProductID] ASC
	) WITH (IGNORE_DUP_KEY = OFF) ON [PRIMARY]
) ON [PRIMARY]

Then to merge the two tables you would UNION them together on all columns except the primary key column (ProductID) and insert the results into the Products_Temp table you just created. Since that table already exists and carries the IDENTITY column, use INSERT INTO ... SELECT rather than SELECT ... INTO. Eg.

INSERT INTO Products_Temp
(ProductName, SupplierID, CategoryID, QuantityPerUnit, UnitPrice, UnitsInStock, UnitsOnOrder, ReorderLevel, Discontinued)
SELECT ProductName, SupplierID, CategoryID, QuantityPerUnit, UnitPrice, UnitsInStock, UnitsOnOrder, ReorderLevel, Discontinued
FROM
(SELECT ProductName, SupplierID, CategoryID, QuantityPerUnit, UnitPrice, UnitsInStock, UnitsOnOrder, ReorderLevel, Discontinued
FROM Products
UNION
SELECT ProductName, SupplierID, CategoryID, QuantityPerUnit, UnitPrice, UnitsInStock, UnitsOnOrder, ReorderLevel, Discontinued
FROM Products2)
AS Temp;

Now Products_Temp will contain the merged data from both tables (with any exact duplicate rows removed by the UNION) and each row will have a new, unique primary key (because ProductID is an IDENTITY column). I know this isn't that much help because it doesn't preserve the PK > FK relationships between multiple tables, but it might give you some ideas or inspiration... Good luck :)

|||Scott,
If you have SQL Enterprise you can use their Management Studio and do this with ease, even if the database is local or remote. You can execute scripts on the databases, copy, backup, merge... basically everything. I hope this helps.
|||Correct, I was not aware of any built-in functionality in SQL Server 2000, and figured it would have to be scripted.
Does anyone know of a third-party tool that accomplishes this?

Blake05 wrote:

Scott,
If you have SQL Enterprise you can use their Management Studio and do this with ease, even if the database is local or remote. You can execute scripts on the databases, copy, backup, merge... basically everything. I hope this helps.


Blake05, how can I use EM to copy data from one DB to another when there are overlapping ID values between the two databases' tables?
Thanks
|||Yes you can
|||You don't need a third-party tool: create a unique index with the IGNORE_DUP_KEY option and do a standard INSERT INTO ... SELECT. In SQL Server 2000 and below that option will not affect UPDATE statements. What happens is SQL Server will not insert the duplicate values; it will insert only the unique values. I am assuming this is what you want; run a search for IGNORE_DUP_KEY in SQL Server BOL (Books Online). Hope this helps.
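A minimal sketch of that approach (SQL Server 2000 syntax; the table and column names are invented for illustration):

CREATE UNIQUE INDEX IX_Target_Key ON dbo.TargetTable (KeyColumn) WITH IGNORE_DUP_KEY
INSERT INTO dbo.TargetTable (KeyColumn, OtherColumn)
SELECT KeyColumn, OtherColumn FROM dbo.SourceTable
-- rows whose KeyColumn already exists in TargetTable are skipped with a
-- "Duplicate key was ignored" warning instead of failing the whole INSERT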

Question on constraints and partitioned views

Can you use a combination of columns for the partitioning column in a
partition view? This is what I want to do but I cannot get it to work.
I get an error that UNION ALL view 'vw_My_Data' is not updatable
because a partitioning column was not found.
Here's my DDL
--Group1 July 2003
CREATE TABLE [dbo].[Group1_07_2003] (
[Sample_ID] [uniqueidentifier] NOT NULL ,
[Group_Constraint] [int] CHECK (Group_Constraint = 1) NOT NULL ,
[Month_Constraint] [int] CHECK (Month_Constraint = 7) NOT NULL ,
[Year_Constraint] [int] CHECK (Year_Constraint = 2003) NOT NULL ,
[Timestamp] [datetime] NOT NULL ,
[msec] [int] NOT NULL ,
[Device_ID] [bigint] NOT NULL ,
[Topic_ID] [bigint] NOT NULL ,
[Sample_Type_ID] [bigint] NOT NULL ,
[Calculated_Value] [float] NOT NULL ,
[Original_Value] [float] NOT NULL
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[Group1_07_2003] ADD
CONSTRAINT [PK_Group1_07_2003] PRIMARY KEY NONCLUSTERED
(
[Sample_ID],
[Group_Constraint],
[Month_Constraint],
[Year_Constraint]
) ON [PRIMARY]
GO
I would like to create other tables for Group_Constraint=2, and
Month_Constraint = 7, and Year_Constraint = 2003 and so on. But when
I add in the other group, I start getting the errors when I try to
insert into the view. Can I only use one column?
Thank you for your recommendation but it did not fix the problem.
Apparently you can only have one column as your partitioning column.
However, I am ready to ditch the partitioned view design. We have
tried every other way we can to actually apply what we want to do and
either we cannot insert into the view or when we query the data it
goes to all of the tables instead of one table based on the
constraint. Even one of the examples in BOL will not work.
Here's the code from BOL:
CREATE TABLE May1998sales
(OrderID INT,
CustomerID INT NOT NULL,
OrderDate DATETIME NULL
CHECK (DATEPART(yy, OrderDate) = 1998),
OrderMonth INT
CHECK (OrderMonth = 5),
DeliveryDate DATETIME NULL
CHECK(DATEPART(mm, DeliveryDate) = 5),
CONSTRAINT OrderIDMonth PRIMARY KEY(OrderID, OrderMonth)
)
GO
CREATE VIEW Year1998Sales
AS
SELECT * FROM Jan1998Sales
UNION ALL
SELECT * FROM Feb1998Sales
UNION ALL
SELECT * FROM Mar1998Sales
UNION ALL
SELECT * FROM Apr1998Sales
UNION ALL
SELECT * FROM May1998Sales
UNION ALL
SELECT * FROM Jun1998Sales
UNION ALL
SELECT * FROM Jul1998Sales
UNION ALL
SELECT * FROM Aug1998Sales
UNION ALL
SELECT * FROM Sep1998Sales
UNION ALL
SELECT * FROM Oct1998Sales
UNION ALL
SELECT * FROM Nov1998Sales
UNION ALL
SELECT * FROM Dec1998Sales
GO
SELECT *
FROM Year1998Sales
WHERE OrderMonth IN (5,6) AND CustomerID = 64892
--
I created just two tables and modified the select statement
SELECT *
FROM Year1998Sales
WHERE OrderMonth =5
I didn't put any data in the tables; I just ran the above select
statement. If you look at the execution plan, it does not go directly
to May1998sales.
I have only gotten one example to actually work and that is using the
customer example
-- On Server1:
CREATE TABLE Customers_33
(CustomerID INTEGER PRIMARY KEY
CHECK (CustomerID BETWEEN 1 AND 32999),
... -- Additional column definitions)
-- On Server2:
CREATE TABLE Customers_66
(CustomerID INTEGER PRIMARY KEY
CHECK (CustomerID BETWEEN 33000 AND 65999),
... -- Additional column definitions)
-- On Server3:
CREATE TABLE Customers_99
(CustomerID INTEGER PRIMARY KEY
CHECK (CustomerID BETWEEN 66000 AND 99999),
... -- Additional column definitions)
However, for how we want to partition the data it does not seem to
work. It has been a nightmare. Also, don't try to make your
partitioning column a datetime. You can insert into the view, but look
at your execution plan: if you use a datetime variable in your where
clause to specify a date, it will not work. I can elaborate more if
anyone is interested. If anyone else is looking at using this design,
BEWARE! It is not documented well in BOL. I think I have said enough.
If anyone has actually gotten this to work, feel free to comment.
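For anyone hitting the same wall, one shape that does satisfy the single-partitioning-column rule is to collapse the group/month/year values into one encoded int key. A minimal sketch; the names and the yyyymm-plus-group encoding are invented for illustration, not the poster's actual schema:

CREATE TABLE dbo.Samples_1_200307 (
    Sample_ID uniqueidentifier NOT NULL,
    Partition_Key int NOT NULL CHECK (Partition_Key = 2003071), -- year 2003, month 07, group 1
    Calculated_Value float NOT NULL,
    CONSTRAINT PK_Samples_1_200307 PRIMARY KEY (Sample_ID, Partition_Key)
)
CREATE TABLE dbo.Samples_2_200307 (
    Sample_ID uniqueidentifier NOT NULL,
    Partition_Key int NOT NULL CHECK (Partition_Key = 2003072), -- year 2003, month 07, group 2
    Calculated_Value float NOT NULL,
    CONSTRAINT PK_Samples_2_200307 PRIMARY KEY (Sample_ID, Partition_Key)
)
GO
CREATE VIEW dbo.vw_My_Data AS
SELECT * FROM dbo.Samples_1_200307
UNION ALL
SELECT * FROM dbo.Samples_2_200307
GO
-- a literal (not a variable) in the WHERE clause lets the optimizer
-- eliminate the other member tables from the plan
SELECT * FROM dbo.vw_My_Data WHERE Partition_Key = 2003071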
"Quentin Ran" <ab@.who.com> wrote in message news:<#of9d4sVDHA.2328@.TK2MSFTNGP12.phx.gbl>...
> Loretta,
> try by putting the PK and constraint in your create table statement. Avoid
> alter table if the table participates in a partitioned view. I do not have
> the reason, but alter table tends to spoil the partitioned view / tables.
> hth
> Quentin
>|||We have a similar problem. We have a large table from which we
always fetch data for a given date.
So we broke the table up on a per-month basis and created a
partitioned view. The date column (datetime datatype) in
each member table has a check constraint.
When I query the view with the date condition, SQL
Server accesses all member tables and it takes a very long time
to fetch.
You mentioned that a datetime column wouldn't work.
But that must be one of the most typical scenarios for
wanting a partitioned view, isn't it?
Is there any workaround from Microsoft's side?
I am looking forward to hearing from you!!!
|||I agree, it does seem like partitioning by date makes the most sense,
but we don't know of any workaround. What actually got our team
looking at partitioned views was an article called "Add Scalability
with Data Partitioning" by Jon Rauschenberger. I only have a printout;
sorry I don't have a link. Anyway, this article seems too good
to be true, and with the design we have in mind it just might be too
complex. We are still considering dividing our data into smaller
tables, but we do not plan on using a view for inserting and querying
the data. We are considering writing our own custom apps to do this,
but we are still working on other areas and that has its own
drawbacks. Not sure what the workaround is for now. Sorry, no help.
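For what it's worth, the pattern usually reported to work with datetime is a range-style check constraint per member table, queried with literals (or parameters) rather than local variables. A sketch with invented names, on top of a UNION ALL view (dbo.vw_Sales here) built the same way as the examples above:

CREATE TABLE dbo.Sales_199805 (
    OrderID int NOT NULL,
    OrderDate datetime NOT NULL
        CHECK (OrderDate >= '19980501' AND OrderDate < '19980601'),
    CONSTRAINT PK_Sales_199805 PRIMARY KEY (OrderID, OrderDate)
)
-- with a datetime literal the optimizer can prune to one member table;
-- a local variable often defeats the pruning on SQL Server 2000
SELECT * FROM dbo.vw_Sales WHERE OrderDate = '19980515'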
"didi" <carlsdottar@.hotmail.com> wrote in message news:<1a1e01c360c9$c23f09b0$3501280a@.phx.gbl>...
> We have similar problem. We have a large table which we
> always fetch data by giving certain date.
> So, we broke the table into per month bases and created a
> partitioned view. The date column (datetime datatype)in
> each member table has a check constraint.
> When I quire to the view by giving the data condition SQL
> server access all member table and it takes very long time
> for fetching.
> You mentioned that datetime column wouldn't work.
> But it must be the once of the most typical case one want
> to have a partition view scenario, isn't it?
> Is there any workaround from Microsoft side?
> I am looking forward to hearing from you!!!
>
> >--Original Message--

Question on Connectivity tools for SQL 2000 and SQL 2005

Question on just connectivity tools for SQL 2000 and SQL 2005
We are in the process of doing an in-place upgrade of our SQL Server 2000
to SQL Server 2005. Currently all of the client computers have only the
connectivity tools from SQL 2000 installed in them. Is it necessary to
upgrade all the client machines with new SQL 2005 connectivity tools?
Any advice in this regard will be greatly appreciated.
Question on Client tools for SQL 2000 and SQL 2005
The other question I have is I have a unique client that has SQL 2000
Client tools installed, I am planning to also install the SQL 2005
Client tools, in this scenario when the Client connects to the Server
which client setting does it take? So for instance, if in the SQL
Server 2000 Client Network Utility I have only TCP/IP enabled as
the network protocol, and in SQL 2005 I have only Named Pipes enabled,
how does SQL Server figure out which setting it should look at? Do I
need to keep them consistent for both?
Thanks
Hi
Without upgrading the clients you will not have access to the facilities
available to the Native Client, although this may not be a problem immediately
after upgrading.
Depending on what you require the client tools for, you may not need to
upgrade them, but you will not have any of the SQL 2005-specific features
available. The protocol used by your application will depend on what you have
specified in the connection string; installing SQL 2005 may give you more
updated versions of the components you use.
If you are not using SQL 2000, I would look at removing the SQL 2000 tools.
HTH
John
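As an aside, you can take the guesswork out of protocol selection by forcing it per connection in the connection string. A sketch (the server name is invented; the tcp:/np: prefixes are understood by the SQL Native Client, the Network Library keyword by the older MDAC stack):

Server=tcp:MYSERVER,1433;Database=master;Integrated Security=SSPI
Server=np:\\MYSERVER\pipe\sql\query;Database=master;Integrated Security=SSPI
Provider=SQLOLEDB;Data Source=MYSERVER;Network Library=dbmssocn;Integrated Security=SSPI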
"shub" wrote:

> Question on just connectivity tools for SQL 2000 and SQL 2005
> We are in the process of doing a inplace-upgrade of our SQL Server 2000
> to SQL Server 2005. Currently all of the client computers have only the
> connectivity tools from SQL 2000 installed in them. Is it necessary to
> upgrade all the client machines with new SQL 2005 connectivity tools?
> Any advice in this regard will be greatly appreciated.
> Question on Client tools for SQL 2000 and SQL 2005
> The other question I have is I have a unique client that has SQL 2000
> Client tools installed, I am planning to also install the SQL 2005
> Client tools, in this scenario when the Client connects to the Server
> which client setting does it take. So for instance if in the SQL
> Serever 2000 Network Client utility if I have only tcp_ip enabled as
> the network protocol and on SQL 2005 if I have only Namedpipes enabled,
> how does SQL Server figure out which setting should it look at? Do I
> need to keep them consistent for both?
> Thanks
>

Question on connection to SQL Server using windows authentication

I have a domain user account through which I can log on to a machine. The machine allows me to use this domain user account to create databases in SQL Server using Windows authentication. How can I define a connection string to connect to SQL Server? My connection string is like:
string str = "server=(local);uid='"+userName+"';password='"+password+"';database='master';connection timeout=15";

This one works on another machine where I use the SA account, but it does not work with Windows authentication. I tried to use domainname\myname as the user name, but failed.
Could anyone give me a suggestion?
I solved the problem.|||Hi bagofbones.

I have the same problem.

Please post your solution.

Thanks.

DS
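For anyone else landing here: the usual fix is to drop uid/password entirely and request integrated security, so the connection authenticates as the Windows account running the process. This is a likely solution, not the poster's confirmed one:

server=(local);database=master;Integrated Security=SSPI;connection timeout=15

The DOMAIN\user account then needs a login on the SQL Server; you do not pass the domain name and password yourself.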

Question on connecting to Oracle

Hello all,

I'm using an OLE DB Connection Manager to Oracle (I've installed the client) with the Oracle Native OLE DB Provider.

The problem is that the Oracle view returns a column that uses a function on a numeric field in the SELECT part of the statement, and because of this the precision is not displayed when I look at the data types of the fields in the view.

The precision is shown as 0 in the external column part of the component.

I have read the previous posts concerning the problem connecting to Oracle using the Oracle provider, and solved it by using the Microsoft provider, but I still have a question about the Oracle provider:

I changed the precision on the External column (from 0 to 15) and also on the Output column, but now I'm getting an error about the error output not matching:

Error 6 Validation error. Data Flow Task: OLE DB Source 1 [8970]: The output column "UPDATE_TIME_NUM" (9333) on the non-error output has no corresponding output column on the error output. Dim_registered_user.dtsx 0 0

I cannot change the error output datatype, cannot delete it or do anything at all.

Is there a way to fix the problem?

This may or may not help you, but I've switched my Oracle connections over to the ADO.NET provider for Oracle. The only side effect is that the resulting data flow is Unicode by default... so all your string data is DT_WSTR instead of DT_STR. You can change that, but it's a manual operation to go through all the External Columns and change their type.

I had some weird data issues like you describe above, and after poking around the internet came across this article by Donald Farmer (http://www.sqljunkies.com/WebLog/donald_farmer/archive/2005/03/13/8819.aspx). That article explains a lot about SSIS Oracle issues.

|||The problem with the ADO.NET provider is that it doesn't allow the use of variables as a source SQL statement.
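A workaround often used for that limitation (assuming the SSIS 2005 DataReader source; the exact property and variable names below are assumptions, and they vary by version): build the statement in a package variable and bind it to the source with a property expression on the Data Flow task, e.g.

Property:   [DataReader Source].[SqlCommand]
Expression: @[User::SourceQuery]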

question on commit

Hi
I have a question about what the COMMIT command actually does.
According to my understanding, a commit is just like writing a marker, so it shouldn't take time to finish.
Checkpoint is actually the process which writes committed transactions to the disk, so that should be the one taking time.
If this is true, why does BCP in take a long time at the end when it's committing all rows copied from the file?
Is there anywhere I can go to read about what commit actually does, what checkpoint does, etc.?
Sanjay
Sanjay
BOL says:
COMMIT TRANSACTION
Marks the end of a successful implicit or user-defined transaction. If
@.@.TRANCOUNT is 1, COMMIT TRANSACTION makes all data modifications performed
since the start of the transaction a permanent part of the database, frees
the resources held by the connection, and decrements @.@.TRANCOUNT to 0. If
@.@.TRANCOUNT is greater than 1, COMMIT TRANSACTION decrements @.@.TRANCOUNT
only by 1.
"Sanjay" <sanjayg@.hotmail.com> wrote in message
news:A1B8DE74-8DF1-4F7C-9341-3161D1F05158@.microsoft.com...
> Hi
> i have a question about what the COMMIT command actually does
> According to my understanding i thought a commit is just like including a
marker and it shouldnt take time to finish.
> Checkpoint is actuallly the process which writes committed transaction to
the disk so that should be the one taking time.
> If this is true why does BCP in take a long time at the end when its
commiting all rows copied from the file
> Anywhere i can go to read about what commit actually does and what
checkpoint does etc
> Sanjay
>|||Hi Sanjay,
By default checkpoints happen automatically and very regularly on SQL
Server. How often a checkpoint happens is determined by the Recovery Interval
setting. If SQL Server determines that the estimated time to recover the
database by rolling forward the transactions in the transaction log is
longer than the recovery interval, it will issue a checkpoint so all the
changes are flushed to the data file and don't have to be rolled forward the
next time the database starts up.
What I think happens is that BCP makes so many changes that it automatically
triggers a checkpoint on commit.
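As a sketch of the knobs involved (the 5-minute value is illustrative), you can raise the recovery interval or force the flush yourself:

EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'recovery interval', 5 -- minutes of estimated recovery time
RECONFIGURE
CHECKPOINT -- flush dirty pages for the current database now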
--
Jacco Schalkwijk
SQL Server MVP
"Sanjay" <sanjayg@.hotmail.com> wrote in message
news:A1B8DE74-8DF1-4F7C-9341-3161D1F05158@.microsoft.com...
> Hi
> i have a question about what the COMMIT command actually does
> According to my understanding i thought a commit is just like including a
marker and it shouldnt take time to finish.
> Checkpoint is actuallly the process which writes committed transaction to
the disk so that should be the one taking time.
> If this is true why does BCP in take a long time at the end when its
commiting all rows copied from the file
> Anywhere i can go to read about what commit actually does and what
checkpoint does etc
> Sanjay
|||If I am not mistaken, BCP can also defer all index fixups until the end...
I think TABLOCK might be required for this, but I am not sure...
--
Wayne Snyder, MCDBA, SQL Server MVP
Computer Education Services Corporation (CESC), Charlotte, NC
www.computeredservices.com
(Please respond only to the newsgroups.)
I support the Professional Association of SQL Server (PASS) and its
community of SQL Server professionals.
www.sqlpass.org
"Sanjay" <sanjayg@.hotmail.com> wrote in message
news:A1B8DE74-8DF1-4F7C-9341-3161D1F05158@.microsoft.com...
> Hi
> i have a question about what the COMMIT command actually does
> According to my understanding i thought a commit is just like including a
marker and it shouldnt take time to finish.
> Checkpoint is actuallly the process which writes committed transaction to
the disk so that should be the one taking time.
> If this is true why does BCP in take a long time at the end when its
commiting all rows copied from the file
> Anywhere i can go to read about what commit actually does and what
checkpoint does etc
> Sanjay
>|||I think what you are looking for is the batch size. If
you do not specify a batch size by using the -b
parameter, the entire set of rows being inserted is
treated as one transaction and all row inserts are
committed and checkpointed at the same time.
This option is documented in the BOL.
I hope that this helps.
Matthew Bando
BandoM@.CSCTechnologies(remove).com
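A sketch of the -b switch (server, database, and file names are invented; -T uses a trusted connection, -c character format):

bcp MyDb.dbo.MyTable in C:\data\MyTable.dat -S MYSERVER -T -c -b 10000

With -b 10000 each 10,000-row batch is committed as its own transaction instead of the whole file committing at once.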
>--Original Message--
>Hi
>i have a question about what the COMMIT command actually
does
>According to my understanding i thought a commit is just
like including a marker and it shouldnt take time to
finish.
>Checkpoint is actuallly the process which writes
committed transaction to the disk so that should be the
one taking time.
>If this is true why does BCP in take a long time at the
end when its commiting all rows copied from the file
>Anywhere i can go to read about what commit actually
does and what checkpoint does etc
>Sanjay
>.
>

Question on column mappings between mining structure and case table for lift chart

Hi, all experts here,

I am a bit confused about model evaluation (the lift chart): should we map all the columns between the mining structure and the case table? I mean, for predictive models we have a predict column; shouldn't we be able to skip the mapping of the predictive column between the mining structure and the case table? But it seems we are not allowed to omit that mapping.

Why is that? Could any experts here give me some explanation on that?

Hope my question is clear, and thanks for your help.

Thanks a lot and I am looking forward to hearing from you shortly.

With best regards,

Yours sincerely,

The mapping of the predicted column is required to compare the prediction with the actual test data. It is not used in the actual prediction.
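As a rough DMX illustration of what the lift chart does under the covers (the model, column, and data source names are invented): the predicted value comes from the model, the actual value from the mapped case table column, and the two are compared row by row.

SELECT t.[Bike Buyer] AS Actual, Predict([MyModel].[Bike Buyer]) AS Predicted
FROM [MyModel]
PREDICTION JOIN
OPENQUERY([MyDS], 'SELECT * FROM TestCases') AS t
ON [MyModel].[Age] = t.[Age] AND [MyModel].[Gender] = t.[Gender]
|||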

Hi, Bogdan,

Thanks a lot for your advices.

With best regards,

Yours sincerely,