
Friday, March 30, 2012

Question With SqlDataSource and User.Identity.Name

Is there a way that I can use the SqlDataSource with a FormView, where my SqlDataSource select statement is like this:

SELECT * FROM tblUsers WHERE vcUserName = @vcUserName

<SelectParameters>
    <asp:Parameter Name="user.identity.name" Type="String" />
</SelectParameters>

Hi,

The way you are trying will not work.

Here is a link from Peter with a solution to handle this case by using ExpressionBuilder:

http://peterkellner.net/2006/09/18/expressionbuilderidentity/

Another way: I think you can assign the user to the select parameter programmatically in the SqlDataSource's Selecting event. Alternatively, you can assign the logged-in user to a session variable and access it through a SessionParameter.
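For instance, the session-variable route might be sketched like this (the session key "UserName" and the connection-string name "MyDb" are assumptions; you would populate Session["UserName"] from User.Identity.Name yourself, e.g. at login):

```aspx
<asp:SqlDataSource ID="SqlDataSource1" runat="server"
    ConnectionString="<%$ ConnectionStrings:MyDb %>"
    SelectCommand="SELECT * FROM tblUsers WHERE vcUserName = @vcUserName">
    <SelectParameters>
        <%-- Pulls the parameter value from Session["UserName"] --%>
        <asp:SessionParameter Name="vcUserName" SessionField="UserName" Type="String" />
    </SelectParameters>
</asp:SqlDataSource>
```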

Wednesday, March 28, 2012

Question regarding Stored Procedures, Views and ASP

Please could someone explain to me the differences between a stored
procedure and a view.

The reason for this question is I have two almost identical ASP pages.
Both get the same results but one uses a stored procedure and one uses
a view. If the query returns no results the 'view page' generates
errors and therefore I have to check for BOF and EOF, whereas the
'stored procedure page' does not generate errors and instead would
appear to return a recordset with 0 entries.

Thanks in advance

Neil.

Neil (neil.lloyd@fdtsolutions.com) writes:
> Please could someone explain to me the differences between a stored
> procedure and a view.
> The reason for this question is I have two almost identical ASP pages.
> Both get the same results but one uses a stored procedure and one uses
> a view. If the query returns no results the 'view page' generates
> errors and therefore I have to check for BOF and EOF, whereas the
> 'stored procedure page' does not generate errors and instead would
> appear to return a recordset with 0 entries.

It may have more to do with how you call them. Could you post the
code, including how you set up the cursor type and so on? The SP code could
also be useful.

--
Erland Sommarskog, SQL Server MVP, sommar@algonet.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp
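For reference, the two kinds of object Neil is comparing might look like this (the table and column names are made up for illustration):

```sql
-- A view is a named SELECT: it is queried like a table and always
-- yields a (possibly empty) result set.
CREATE VIEW dbo.vwOrders AS
    SELECT OrderID, CustomerID FROM dbo.Orders
GO
-- A stored procedure is a stored batch: it is executed rather than
-- queried, and can take parameters and contain control flow.
CREATE PROCEDURE dbo.uspOrders AS
    SELECT OrderID, CustomerID FROM dbo.Orders
GO
SELECT * FROM dbo.vwOrders  -- used inside a query
EXEC dbo.uspOrders          -- invoked as a call
```

Any error-handling difference between the two pages is therefore more likely to come from how the ADO recordset is opened than from the objects themselves, which is why the cursor-type question above matters.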

question regarding referenced views and replication order

You can update the view definition and hope that this updates the
dependencies correctly. If it doesn't, you can use sp_addscriptexec to add
all the views in the correct order. Note that in SQL Server 2005 you can
specify the article order (@processing_order) to get round this type of
issue.
Cheers,
Paul Ibison SQL Server MVP, www.replicationanswers.com
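Paul's sp_addscriptexec suggestion might be sketched like this (the publication name and script path are assumptions); the script itself can recreate the views in dependency order:

```sql
-- Run a script at all subscribers of the publication.
EXEC sp_addscriptexec
    @publication = N'MyPublication',
    @scriptfile  = N'\\Distributor\ReplData\recreate_views.sql'
```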
Stefan,
replication of views and procs doesn't follow the usual paradigm in
replication. Changes to these programming objects aren't picked up by the
log reader, so we have to resort to some sort of manual solution to pick up
changes to replicated objects and apply them, whichever way we go about it.
The script to generate the view creation scripts could be based on the view
definition - in which case it'll always be up to date if you ever need to run
the whole thing again. Ad-hoc changes can meanwhile be made on individual
views.
Cheers,
Paul Ibison SQL Server MVP, www.replicationanswers.com

Monday, March 26, 2012

Question regarding a View that is taking long time to process

Good afternoon everyone. I have written a view that pulls customer demographic information as well as data from multiple scalar-valued functions. I am using this view to pull and send data from one database to another on the same SQL server. The problem is that I am running this import as a scheduled job in Windows, and the job is taking almost 24 hours to complete. The total number of records being pulled is around 21,000+. I have tried removing the functions from the view, and it then takes only 20 seconds to pull the demographic information from the same 21,000+ records, but when I add the function calls back, the time to complete goes through the roof. Has anyone encountered this before? If so, what would you suggest doing? Any help would be appreciated.

Here is the syntax for my view:

SELECT TOP 100 PERCENT
    CUS_EMAIL AS Email,
    CUS_CUSTNUM AS MemberID,
    CUS_PREFIX AS Prefix,
    CUS_FNAME AS FirstName,
    CUS_LNAME AS LastName,
    CUS_SUFFIX AS Suffix,
    CUS_TITLE AS Title,
    CUS_STATE AS State,
    CUS_COUNTRY AS Country,
    CUS_ZIP AS ZipCode,
    CUS_SEX AS Gender,
    CAST(CUS_DEMCODEA AS nvarchar(20)) + ',' +
    CAST(CUS_DEMCODEB AS nvarchar(20)) + ',' +
    CAST(CUS_DEMCODEC AS nvarchar(20)) + ',' +
    CAST(CUS_DEMCODED AS nvarchar(20)) AS DemoCodes,
    dbo.GetSubScribedDateMLA(CUS_CUSTNUM, CUS_EMAIL) AS MLASubscribedDate,
    dbo.GetSubScribedDateMLP(CUS_CUSTNUM, CUS_EMAIL) AS MLPSubscribedDate,
    dbo.GetSubScribedDateLDC(CUS_CUSTNUM, CUS_EMAIL) AS LDCSubscribedDate,
    dbo.GetMLAExpiration(CUS_CUSTNUM, CUS_EMAIL) AS MLASubExpireDate,
    dbo.GetMLPExpiration(CUS_CUSTNUM, CUS_EMAIL) AS MLPSubExpireDate,
    dbo.GetLDCExpiration(CUS_CUSTNUM, CUS_EMAIL) AS LDCSubExpireDate,
    dbo.IsProspect(CUS_CUSTNUM, CUS_EMAIL) AS AGMProspect,
    dbo.IsCurrentCustomer(CUS_CUSTNUM, CUS_EMAIL) AS AGMCurrentCustomer,
    dbo.IsMLAMember(CUS_CUSTNUM, CUS_EMAIL) AS MLAMember,
    dbo.IsMLPMember(CUS_CUSTNUM, CUS_EMAIL) AS MLPMember,
    dbo.IsLDCMember(CUS_CUSTNUM, CUS_EMAIL) AS LDCMember,
    dbo.CalculateTotalRevenue(CUS_CUSTNUM, CUS_EMAIL) AS AGMTotalRevenue,
    dbo.GetPubCodes(CUS_CUSTNUM, CUS_EMAIL) AS ProductsPurchased,
    dbo.GetEmailType(CUS_CUSTNUM, CUS_EMAIL, CUS_RENT_EMAIL) AS EmailType,
    CUS_COMPANY AS Company,
    CUS_CITY AS City
FROM dbo.CUS
WHERE (CUS_EMAIL IS NOT NULL)
  AND (CUS_EMAIL <> '')
  AND (CUS_EMAIL_VALID = 'Y')
  AND (CUS_EMAIL LIKE '%@%.%')
  AND (CUS_RENT_EMAIL = 'Y' OR CUS_RENT_EMAIL = 'R' OR CUS_RENT_EMAIL = 'I')
  AND (CHARINDEX(' ', CUS_EMAIL) = 0)
  AND (CUS_EMAIL NOT LIKE '@%')

Thanks in advance

Michael Reyeros
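The usual culprit here is that scalar UDFs run once per row, so roughly twenty function calls across 21,000+ rows means hundreds of thousands of hidden queries. The common fix is to replace each function with a set-based join against the data the function reads. A rough sketch for one of the calls (the Subscriptions table and its columns are assumptions, since the function bodies are not shown):

```sql
SELECT c.CUS_EMAIL      AS Email,
       c.CUS_CUSTNUM    AS MemberID,
       s.SubscribedDate AS MLASubscribedDate  -- replaces dbo.GetSubScribedDateMLA(...)
FROM dbo.CUS AS c
LEFT JOIN (SELECT CustNum, Email, MIN(SubDate) AS SubscribedDate
           FROM dbo.Subscriptions             -- hypothetical table the UDF reads
           WHERE PubCode = 'MLA'
           GROUP BY CustNum, Email) AS s
       ON  s.CustNum = c.CUS_CUSTNUM
       AND s.Email   = c.CUS_EMAIL
WHERE c.CUS_EMAIL IS NOT NULL
```

Done this way, the optimizer can satisfy all the lookups with a handful of scans instead of one call per row per function.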

Windows-based functions are not meant for moving data between databases; you use SQL Server Agent jobs for that. Run a search for "SQL Server Agent job" in SQL Server BOL (Books Online). Hope this helps.

|||Well, actually what I have done is create a dtsx package in SQL 2005. This package calls the above view to pull all the records from one database table, and then for each of the records returned I send it to a stored procedure in another database. I then built a Windows application that simply calls the dtsx package and runs it. I have set this exe to run in the Windows scheduler as a scheduled task.|||

Look for an extended stored procedure called xp_cmdshell. It is disabled by default in 2005; enable it, use the instructions in the thread below, and post again if it is not working.

http://forums.asp.net/thread/1358665.aspx
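For reference, on SQL Server 2005 xp_cmdshell is normally enabled with sp_configure (requires sysadmin); a minimal sketch:

```sql
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'xp_cmdshell', 1
RECONFIGURE
```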

|||

But I am not sure exactly what this is supposed to do or how I should implement this in my situation?

|||That is what is used to move data between databases on all platforms with DTS, not a Windows service; it can move gigs in one day for you, and nothing works better that I know of. Hope this helps.|||If I need to schedule this to run on a nightly basis, how would I run this, and from where?|||

The first two links show you how to use the Agent to schedule jobs for most admin tasks in SQL Server. The last link shows the system stored procedures you can use to create schedules and jobs. I worked for a bank that used it to move deposits four hours a day, five days a week; your data is very small. Post again if you still need help. Get it working and you can automate most operations in your application. Hope this helps.

http://msdn2.microsoft.com/en-us/library/ms139805.aspx

http://msdn2.microsoft.com/en-us/library/ms141701.aspx

http://msdn.microsoft.com/library/default.asp?url=/library/en-us/tsqlref/ts_sp_00_519s.asp
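Putting the links above into practice, a nightly Agent job that runs a package might be sketched like this (the job name, schedule, and package path are assumptions):

```sql
USE msdb
GO
-- Create the job, a CmdExec step that runs the package, a daily
-- schedule, and register the job on the local server.
EXEC sp_add_job @job_name = N'Nightly customer export'
EXEC sp_add_jobstep
    @job_name  = N'Nightly customer export',
    @step_name = N'Run package',
    @subsystem = N'CmdExec',
    @command   = N'dtexec /F "C:\Packages\CustomerExport.dtsx"'
EXEC sp_add_jobschedule
    @job_name  = N'Nightly customer export',
    @name      = N'Nightly at 1 AM',
    @freq_type = 4,              -- daily
    @freq_interval = 1,
    @active_start_time = 010000  -- 01:00:00
EXEC sp_add_jobserver @job_name = N'Nightly customer export'
```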

|||OK, the only problem that I have is that two of the packages that I am running rely on an ODBC connection to a very antiquated system, QuickFill, that is being used here in the office. When I try to run the job in SQL Server as an Agent job, it fails because the ODBC connection relies on two mapped drives, and these do not seem to be available to SQL Server.|||

Try using a UNC path instead of a mapped drive. You can create a proxy account with your admin context for the Agent to run it; the permissions are covered in the links in the thread I gave you. Try the link below and read up on how to get the correct mapping with ODBC. Sorry, I forgot to add the link. Hope this helps.

http://www.sqlteam.com/item.asp?ItemID=125


question on views

Hello,
A little background...
I have a cube that automatically refreshes every 10 minutes and pulls its
data from a view ('ViewA') in SQL Server 2000.
The cube takes around a minute and a half to completely refresh. During
this time, in sql server, one of my other views ('ViewC') starts to return
inconsistent results in the form of varying numbers of records in its result
set. The result set is somewhere in the neighborhood of 1,000 records, and
varies by anywhere from 40 to 50 records each time it is run during the cube
refresh.
A bit more info...
The cube pulls its data from ViewA with these characteristics (simplified
for relevance):
ViewA selects * from ViewB (nolock) which, in turn, selects * from
MyTable (nolock)
As mentioned earlier, while the cube is refreshing, ViewC returns an
inconsistent number of records. ViewC has these characteristics (simplified
for relevance):
ViewC selects * from MyTable (nolock)
It seems to me that the problem may have to do with record locking, but as
shown above, I am using (nolock) to prevent the issuing of any shared locks
while ViewA is running. So what is it about this cube refresh that prevents
one of my other views from gaining access to some of the records, and causes
it to skip over them completely?
One hundred million points for this one ;)
Thanks
NOLOCK doesn't guarantee consistent data, so you can expect
inconsistent results; it's not unusual. You can retrieve
before and after images of rows being updated, and can read
uncommitted transactions.
It doesn't look like record locking itself is the cause of
the issue, but rather the use of a hint that ignores other
exclusive locks and allows dirty reads. You really have to
weigh using the NOLOCK hint against the inconsistent results
that you can end up with.
-Sue
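To make Sue's point concrete, the setup in the question boils down to something like this (table schema assumed); removing the hint from ViewC trades the skipped or double-read rows for ordinary shared-lock behavior:

```sql
-- As described in the question: every read path uses NOLOCK, i.e. dirty reads.
CREATE VIEW dbo.ViewB AS SELECT * FROM dbo.MyTable (NOLOCK)
GO
CREATE VIEW dbo.ViewA AS SELECT * FROM dbo.ViewB
GO
CREATE VIEW dbo.ViewC AS SELECT * FROM dbo.MyTable (NOLOCK)
GO
-- The consistent alternative: read at the default isolation level.
ALTER VIEW dbo.ViewC AS SELECT * FROM dbo.MyTable
```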
On Mon, 17 Oct 2005 15:18:30 -0400, "Jesse Aufiero"
<jaufiero@moaboil.com> wrote the original question above.


Question on View Report Click

Hi,
In my report, I get the data from a stored procedure. The parameters that
I pass to the stored proc through the report already have default values.
When a user clicks on the report name, it gets the data and displays it.
I do not want the report to post and get the data automatically;
I want the user to click the View Report button in order to display the report.
Since I have default values already filled out, for some reason the report
executes as soon as you click on the report name.
Any help
thanks

This behavior cannot be modified. If you supply a default value for all
parameters the report will execute automatically.
The easiest workaround is to not supply a default value for one parameter. A
request to change this behavior is already on the feature wishlist.
--
Bruce Johnson [MSFT]
Microsoft SQL Server Reporting Services
This posting is provided "AS IS" with no warranties, and confers no rights.
"Mayur Patel" <mayurworld@yahoo.com> wrote in message
news:e2IsflVrEHA.2184@.TK2MSFTNGP10.phx.gbl... (question quoted above)

|||I posted a similar question to this earlier today, and found this post.
This information does not appear to be correct. For example, I have a
report with the following parameters and settings:
Parameter  Type    Has Default  Default Value  Null Prompt  User Prompt String
P1         String  Y            NONE           N            P1:
P2         String  Y            Query Based    Y            P2:
P3         String  N                           Y            P3:
P4         String  Y                           Y            P4:
P5         String  Y                           N            P5:
P6         String  Y                           N            P6:
These are the settings on report server, they aren't necessarily consistent
with the saved report (another annoyance :)), however, when I select Parm 3,
which is a dropdown list, and is the only parm that doesn't have a default,
the report DOES NOT automatically run. This is not consistent with the
behavior I've seen in other places, where when I select the company, the
report does run. So, something is preventing this report from running after
Parm 3 is selected. It won't run until I hit the 'View Report' button.
That's the behavior that I want and expect, the question is how to duplicate
that behavior. I don't want to change any of these settings at this point,
for fear I'll lose the desired behavior, because I don't know what's causing
it.
"Bruce Johnson [MSFT]" wrote:
> This behavior cannot be modified. If you supply a default value for all
> parameters the report will execute automatically.
> The easiest workaround is to not supply a default value for one parameter. A
> request to change this behavior is already on the feature wishlist.
> --
> Bruce Johnson [MSFT]
> Microsoft SQL Server Reporting Services
> This posting is provided "AS IS" with no warranties, and confers no rights.

Question on using CASE

Hello All,

I am pretty new to SQL Server and I am trying to create a view to gather the necessary data. I am not sure if CASE is what I should be using, or if I can even do what I need, but I am trying to capture the following information (I know the data looks a bit odd, but I cannot post the real data, so this is just an example).

Example data:
ID - DESC - STARTDATE - ENDDATE
1A - Pool - 9/21/06 - 9/23/06
1A - Pool - 9/21/05 - 9/23/05
1B - Garden - 9/2/06 - 9/4/06

I want to return the following data:
ID - DESC - STARTDATE - ENDDATE
1A - Pool - 9/21/05 - 9/23/05
1B - Garden - 9/4/06 - 9/4/06

Basically in my mind I am thinking along the lines of:
IF DESC = "Pool" THEN STARTDATE = "minimum STARTDATE"
ELSE STARTDATE = "ENDDATE"

I am having an issue trying to figure out how to create the syntax for this CASE statement.

Any help is appreciated.

Thanks

Well, you don't need a CASE statement for this. A simple aggregate query will handle the problem you describe (DESC is a reserved word, so it is bracketed here):

select ID,
       [DESC],
       min(STARTDATE) as STARTDATE,
       max(ENDDATE) as ENDDATE
from YourTable
group by ID,
         [DESC]

...but I bet you will find that your problem is more complex than you describe, and that you are going to have to deal with gaps between date ranges that a simple MIN and MAX will overlook.|||Thanks for the reply.

I had tried that type of query but the real issue is I need the min STARTDATE if the DESC = "Pool". If the DESC is anything else I want the STARTDATE to take the ENDDATE.

Probably not possible?

|||select ID,
       [DESC],
       case when [DESC] = 'Pool' then STARTDATE else ENDDATE end as STARTDATE,
       ENDDATE
from (select ID,
             [DESC],
             min(STARTDATE) as STARTDATE,
             max(ENDDATE) as ENDDATE
      from YourTable
      group by ID,
               [DESC]) Subquery

You could probably do this without the subquery as well, but it is a little odd mixing aggregate and non-aggregate values in a single column.
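Putting the thread together, a self-contained version with the sample data from the question might look like this (a sketch; DESC is a reserved word, so it is bracketed):

```sql
DECLARE @t TABLE (ID varchar(2), [DESC] varchar(20), STARTDATE datetime, ENDDATE datetime)
INSERT @t VALUES ('1A', 'Pool',   '20060921', '20060923')
INSERT @t VALUES ('1A', 'Pool',   '20050921', '20050923')
INSERT @t VALUES ('1B', 'Garden', '20060902', '20060904')

SELECT ID,
       [DESC],
       CASE WHEN [DESC] = 'Pool' THEN STARTDATE ELSE ENDDATE END AS STARTDATE,
       ENDDATE
FROM (SELECT ID, [DESC],
             MIN(STARTDATE) AS STARTDATE,
             MAX(ENDDATE)   AS ENDDATE
      FROM @t
      GROUP BY ID, [DESC]) AS Subquery
-- 1A  Pool    2005-09-21  2006-09-23
-- 1B  Garden  2006-09-04  2006-09-04
```

Note that the 1A end date is the overall MAX, not the 9/23/05 shown in the desired output; if the end date must come from the same row as the minimum start date, a correlated lookup would be needed instead.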

Friday, March 9, 2012

Question on KPI

When I view the KPI in the browser view, what is the time for which the KPI is applied? Is it today's date for which the KPI displays value and goals, or some time in the past? How can I see the value of the KPI for a time in the past, e.g. June 2003?

Thanks,

Vijay

If you have not pointed to a time member in a calculated member (that you use for your KPI) or in the definition of the KPI, it is always the time dimension's current member.

More precisely, it then depends on the time members on the axes, or on whether you have filtered or sliced by the time dimension.

HTH

Thomas Ivarsson

|||

In the browser view of the KPI tab, I am not slicing by any time dimension. I am viewing what is shown by default after creating the KPI. The KPI value seems to be a cumulative value over all time. For example, if the sales amount is the value and the sales quota is the goal, then the values displayed are the sums over all the years. This does not make sense to me.

If I filter by time dimension like Calendar Year, then I would expect the sales amount and sales quota for that calendar year.

|||

The current member for your time dimension is "All". I agree that this does not make sense.

You should be aware that the browser in the KPI tab is not always correct.

My advice is to look at the KPI with Excel 2007 or ProClarity Professional 6.3

In the Adventure Works cube project, part of the samples installation, you have examples of KPIs that you can have a look at.

Here is a link to a good article about KPIs:

http://www.databasejournal.com/features/mssql/article.php/3604206

HTH

Thomas Ivarsson

Question on Job history

Please could someone enlighten me.
In Enterprise Manager -> Management -> Jobs -> View History
I could normally see the historic runs of the jobs.
I have a server that, for some reason, only holds one historic
record of each job. Each time a job is run, the history is
refreshed and deleted, with only the latest run remaining.
Where can I set in Enterprise Manager so I could adjust
the amount of history held for the jobs?
Thanks
Alex

Alex Au wrote:

> Please could someone enlight me.
> In enterprise manager -> management-> jobs -> view history
> I could normally see the historic runs of the jobs.
> I have a server, for some reason, only hold one historic
> record of each job. Each time a job is run the history is
> then refreshed and delete with only the latest job remain.
See How to resize the job history log (Enterprise Manager)
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/howtosql/ht_automaem_8xpj.asp
Why not create Your Own SQL Server Job Management System?
http://www.sql-server-performance.com/rd_jobs.asp
sincerely,
--
Sebastian K. Zaklada
Skilled Software
http://www.skilledsoftware.com
This posting is provided "AS IS" with no warranties, and confers no rights.

|||Sebastian
Thanks for this. I have checked and found that the Job
Agent setting is no different from other servers. I think the
real issue seems to be the length of time the job history is
kept. All the jobs that have only one history entry run only
once a month, and I have jobs that run daily - they
have all the job history up to a month.
Alex
|||The trouble is that if you have a job that runs very often, e.g. once every
minute, it can quickly eat up the 1000 rows that is the default job history
log size. This setting applies to the server so even if the job history rows
per job is 100, it will not honour this if the max history log size >1000.
Thus it might be worth lowering the rows per job and increasing the max log
size to find the balance that works for you.
HTH
Jasper Smith (SQL Server MVP)
I support PASS - the definitive, global
community for SQL Server professionals -
http://www.sqlpass.org
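The same limits can also be set from T-SQL via msdb (sp_set_sqlagent_properties is undocumented, so treat this as a sketch and verify it on your build before relying on it):

```sql
-- Raise the server-wide cap and the per-job cap on Agent job history.
EXEC msdb.dbo.sp_set_sqlagent_properties
    @jobhistory_max_rows = 10000,
    @jobhistory_max_rows_per_job = 200
```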
|||Thanks Jasper. This is exactly the problem, and your
suggested solution is spot on.
Alex

Question on Job history

Please could someone enlight me.
In enterprise manager -> management-> jobs -> view history
I could normally see the historic runs of the jobs.
I have a server, for some reason, only hold one historic
record of each job. Each time a job is run the history is
then refreshed and delete with only the latest job remain.
Where can I set in Enterprise Manager so I could adjust
the amount of the history held for the jobs?
Thanks
AlexAlex Au wrote:
> Please could someone enlight me.
> In enterprise manager -> management-> jobs -> view history
> I could normally see the historic runs of the jobs.
> I have a server, for some reason, only hold one historic
> record of each job. Each time a job is run the history is
> then refreshed and delete with only the latest job remain.
See How to resize the job history log (Enterprise Manager)
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/howtosql/ht_automaem_8xpj.asp
Why not create Your Own SQL Server Job Management System?
http://www.sql-server-performance.com/rd_jobs.asp
sincerely,
--
Sebastian K. Zaklada
Skilled Software
http://www.skilledsoftware.com
This posting is provided "AS IS" with no warranties, and confers no rights.|||Sebastian
Thanks for this. I have checked and found that the Job
agent setting no different from other servers. I think the
real issue seem to be in the time the job history are
kept.. All the jobs that have only one history are only
run once a month, and I have jobs that run daily - they
have all the job history up to a month.
Alex
>--Original Message--
>Alex Au wrote:
>> Please could someone enlight me.
>> In enterprise manager -> management-> jobs -> view
history
>> I could normally see the historic runs of the jobs.
>> I have a server, for some reason, only hold one historic
>> record of each job. Each time a job is run the history
is
>> then refreshed and delete with only the latest job
remain.
>
>See How to resize the job history log (Enterprise Manager)
>http://msdn.microsoft.com/library/default.asp?
url=/library/en-us/howtosql/ht_automaem_8xpj.asp
>Why not create Your Own SQL Server Job Management System?
>http://www.sql-server-performance.com/rd_jobs.asp
>sincerely,
>--
>Sebastian K. Zaklada
>Skilled Software
>http://www.skilledsoftware.com
>This posting is provided "AS IS" with no warranties, and
confers no rights.
>
>.
>|||The trouble is that if you have a job that runs very often e.g once every
minute, it can quickly eat up the 1000 rows that is the default job history
log size. This setting applies to the server so even if the job history rows
per job is 100, it will not honour this if the max history log size >1000.
Thus it might be worth lowering the rows per job and increasing the max log
size to find the balance that works for you.
--
HTH
Jasper Smith (SQL Server MVP)
I support PASS - the definitive, global
community for SQL Server professionals -
http://www.sqlpass.org
<anonymous@.discussions.microsoft.com> wrote in message
news:2d9401c40075$21391820$a601280a@.phx.gbl...
> Sebastian
> Thanks for this. I have checked and found that the Job
> agent setting no different from other servers. I think the
> real issue seem to be in the time the job history are
> kept.. All the jobs that have only one history are only
> run once a month, and I have jobs that run daily - they
> have all the job history up to a month.
> Alex
> >--Original Message--
> >Alex Au wrote:
> >
> >> Please could someone enlight me.
> >>
> >> In enterprise manager -> management-> jobs -> view
> history
> >> I could normally see the historic runs of the jobs.
> >>
> >> I have a server, for some reason, only hold one historic
> >> record of each job. Each time a job is run the history is then
> >> refreshed and deleted, with only the latest job remaining.
> >
> >See How to resize the job history log (Enterprise Manager)
> >http://msdn.microsoft.com/library/default.asp?url=/library/en-us/howtosql/ht_automaem_8xpj.asp
> >
> >Why not create Your Own SQL Server Job Management System?
> >http://www.sql-server-performance.com/rd_jobs.asp
> >
> >sincerely,
> >--
> >Sebastian K. Zaklada
> >Skilled Software
> >http://www.skilledsoftware.com
> >This posting is provided "AS IS" with no warranties, and confers no rights.

|||Thanks Jasper. This is exactly the problem, and your suggested solution is spot on.
Alex

>--Original Message--
>The trouble is that if you have a job that runs very often, e.g. once every
>minute, it can quickly eat up the 1000 rows that is the default job history
>log size. This setting applies to the server, so even if the job history rows
>per job is 100, it will not honour this if the max history log size is over 1000.
>Thus it might be worth lowering the rows per job and increasing the max log
>size to find the balance that works for you.
>--
>HTH
>Jasper Smith (SQL Server MVP)
>I support PASS - the definitive, global
>community for SQL Server professionals -
>http://www.sqlpass.org
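As a footnote to this thread: the same job history caps can be adjusted without opening Enterprise Manager. This is only a sketch; `sp_set_sqlagent_properties` is the msdb procedure Enterprise Manager itself calls, and the parameter names and values below are assumptions to verify against your own server before use.

```sql
-- Raise the server-wide job history cap and the per-job cap so that a
-- job running every minute cannot flush out the history of monthly jobs.
EXEC msdb.dbo.sp_set_sqlagent_properties
    @jobhistory_max_rows = 10000,
    @jobhistory_max_rows_per_job = 200
```

As Jasper notes, the balance between the two values is what matters: the per-job limit is only honoured while the server-wide limit has room.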

Saturday, February 25, 2012

Question on data source and data source view of MOLAP cube

Hi, all experts here,

Thank you very much for your kind attention.

I am having a question on the data source and data source view of a MOLAP cube. As we know, a MOLAP cube stores all data in MOLAP format, so it does not depend on its source database at query time. As a result, I am wondering if we could actually remove the data source and data source view of the MOLAP cube? (In my case, no updates need to be propagated from the underlying data source to the MOLAP cube.)

Hope it is clear for your help.

I am looking forward to hearing from you for your advice and help.

With best regards,

Yours sincerely,

Hello again Helen.

You will always need a data source and a data source view for the cube.

What you can do in order to create an off-line cube is to use local cubes, if that is the scenario you are thinking about.

With local cubes you create a local cube file that you can put on a laptop and analyze data without being connected to SSAS2005.

Both ProClarity and Excel2007 have this option.

HTH

Thomas Ivarsson

|||

Hi, Thomas,

Thank you for your very kind advice.

But I found that a change to the data source view is not really reflected in the cube's data source view. I mean, after I created the cube I edited the data source view, but when I clicked on the cube, its data source view remained the one from before I made the change.

Also, I am a bit confused: as all the data for the cube is already stored in the SSAS2005 database (MOLAP), why do we still need to care about the data source and data source view if we don't need any updates from the underlying data source?

Thank you for your kind attention, and I am looking forward to hearing from you shortly for your further advice.

With best regards,

Yours sincerely,

|||

Hi, Helen.

I am not sure about the scenario here. Do you mean that you will create the cube once and never process it again?

If new dimension records and new fact records arrive from the data source, you will always need to process the cube. If you need to process, you will need to have the data source view.

During a day's work, new transactions will be registered in the source system. You will have to do a new process of the cube in order to get them from the source system into the cube.

Are you making cubes for laptop users that they will use off-line? Or are you building cubes using real time OLAP?

Regards

Thomas Ivarsson

|||

Hi, Thomas,

Thank you for your reply. In my case, I did deploy the whole project containing the cube after I changed the data source and data source view, but the cube remains the same as before I made any change. Therefore, I thought the data source and data source view do not have anything to do with the MOLAP cube after its first deployment to the Analysis Services server.

Does it work this way? Or am I missing something? Thank you, and your advice is most welcome.

With best regards,

Yours sincerely,

|||

Hello Helen. If you make a change like adding an attribute in the data source (source table/view) or the data source view (= named calculation), this change will not be a part of the cube until you add the attribute in the dimension editor.

In order to make a change in a cube you must map new columns to the measure group or to the dimension in the cube/dimension editor.

The data source view does not automatically update the cube or the dimensions. If you do a refresh in the data source view (right-click), all changes in the source database will be exported to the data source view.

HTH

Thomas Ivarsson

|||

Hi, Thomas,

Thanks a lot for your kind advice.

It cleared my confusion over the above question.

Thanks again.

With best regards,

Yours sincerely,


Question on constraints and partitioned views

Can you use a combination of columns as the partitioning column in a
partitioned view? This is what I want to do but I cannot get it to work.
I get an error that UNION ALL view 'vw_My_Data' is not updatable
because a partitioning column was not found.
Here's my DDL
--Group1 July 2003
CREATE TABLE [dbo].[Group1_07_2003] (
[Sample_ID] [uniqueidentifier] NOT NULL ,
[Group_Constraint] [int] Check(Group_Constraint = 1) NOT NULL ,
[Month_Constraint] [int] Check (Month_Constraint = 7)NOT NULL ,
[Year_Constraint] [int] Check (Year_Constraint = 2003)NOT NULL ,
[Timestamp] [datetime] NOT NULL ,
[msec] [int] NOT NULL ,
[Device_ID] [bigint] NOT NULL ,
[Topic_ID] [bigint] NOT NULL ,
[Sample_Type_ID] [bigint] NOT NULL ,
[Calculated_Value] [float] NOT NULL ,
[Original_Value] [float] NOT NULL
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[Group1_07_2003] ADD
CONSTRAINT [PK_Group1_07_2003] PRIMARY KEY NONCLUSTERED
(
[Sample_ID],
[Group_Constraint],
[Month_Constraint],
[Year_Constraint]
) ON [PRIMARY]
GO
I would like to create other tables for Group_Constraint = 2, and
Month_Constraint = 7, and Year_Constraint = 2003 and so on. But when
I add in the other group, I start getting the errors when I try to
insert into the view. Can I only use one column?

|||Thank you for your recommendation but it did not fix the problem.
Apparently you can only have one column as your partitioning column.
However, I am ready to ditch the partitioned view design. We have
tried every other way we can to actually apply what we want to do, and
either we cannot insert into the view, or when we query the data it
goes to all of the tables instead of one table based on the
constraint. Even one of the examples in BOL will not work.
Here's the code from BOL:
CREATE TABLE May1998sales
 (OrderID INT,
 CustomerID INT NOT NULL,
 OrderDate DATETIME NULL
 CHECK (DATEPART(yy, OrderDate) = 1998),
 OrderMonth INT
 CHECK (OrderMonth = 5),
 DeliveryDate DATETIME NULL
 CHECK (DATEPART(mm, DeliveryDate) = 5),
 CONSTRAINT OrderIDMonth PRIMARY KEY (OrderID, OrderMonth))
GO
CREATE VIEW Year1998Sales
AS
SELECT * FROM Jan1998Sales
UNION ALL
SELECT * FROM Feb1998Sales
UNION ALL
SELECT * FROM Mar1998Sales
UNION ALL
SELECT * FROM Apr1998Sales
UNION ALL
SELECT * FROM May1998Sales
UNION ALL
SELECT * FROM Jun1998Sales
UNION ALL
SELECT * FROM Jul1998Sales
UNION ALL
SELECT * FROM Aug1998Sales
UNION ALL
SELECT * FROM Sep1998Sales
UNION ALL
SELECT * FROM Oct1998Sales
UNION ALL
SELECT * FROM Nov1998Sales
UNION ALL
SELECT * FROM Dec1998Sales
GO
SELECT *
FROM Year1998Sales
WHERE OrderMonth IN (5,6) AND CustomerID = 64892
--
I created just two tables and modified the select statement
SELECT *
FROM Year1998Sales
WHERE OrderMonth =5
I didn't put any data in the tables, I just ran the above select
statement. If you look at the execution plan, it does not go directly
to May1998sales.
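An editorial aside on why the BOL example behaves this way: the optimizer eliminates member tables only when it can match the query predicate against the CHECK constraints, and constraints expressed through DATEPART are opaque to it. The usual rework (my own sketch, not from the post) constrains the raw datetime column with a non-overlapping half-open range and makes it the single partitioning column:

```sql
-- One member table per month. The half-open range on OrderDate is a
-- constraint the optimizer can compare against literal date predicates,
-- and OrderDate doubles as the partitioning column inside the PK.
CREATE TABLE May1998sales
 (OrderID INT NOT NULL,
 CustomerID INT NOT NULL,
 OrderDate DATETIME NOT NULL
 CHECK (OrderDate >= '19980501' AND OrderDate < '19980601'),
 CONSTRAINT PK_May1998sales PRIMARY KEY (OrderID, OrderDate))
```

With one such table per month, a predicate like `WHERE OrderDate = '19980515'` against the UNION ALL view should touch only May1998sales in the plan.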
I have only gotten one example to actually work and that is using the
customer example
-- On Server1:
CREATE TABLE Customers_33
(CustomerID INTEGER PRIMARY KEY
CHECK (CustomerID BETWEEN 1 AND 32999),
... -- Additional column definitions)
-- On Server2:
CREATE TABLE Customers_66
(CustomerID INTEGER PRIMARY KEY
CHECK (CustomerID BETWEEN 33000 AND 65999),
... -- Additional column definitions)
-- On Server3:
CREATE TABLE Customers_99
(CustomerID INTEGER PRIMARY KEY
CHECK (CustomerID BETWEEN 66000 AND 99999),
... -- Additional column definitions)
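For completeness, the view tying the three member tables together would look like the following. This is a sketch: the database name CompanyDB, and the assumption that Server1 through Server3 are configured as linked servers, are mine rather than the example's.

```sql
-- Distributed partitioned view over the three CustomerID ranges.
-- Created on each of the three servers so clients can query any of them.
CREATE VIEW Customers
AS
SELECT * FROM Server1.CompanyDB.dbo.Customers_33
UNION ALL
SELECT * FROM Server2.CompanyDB.dbo.Customers_66
UNION ALL
SELECT * FROM Server3.CompanyDB.dbo.Customers_99
```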
However for how we want to partition the data it does not seem to
work. It has been a nightmare. Also don't try to make your
partitioning column on datetime. You can insert into the view but look
at your execution plan. If you use a datetime variable in your where
clause to specify a date, it will not work. I can elaborate more if
anyone is interested. If anyone else is looking at using this design,
BEWARE! It is not documented well in BOL. I think I have said enough.
If anyone has actually gotten this to work, feel free to comment.
"Quentin Ran" <ab@.who.com> wrote in message news:<#of9d4sVDHA.2328@.TK2MSFTNGP12.phx.gbl>...
> Loretta,
> try by putting the PK and constraint in your create table statement. Avoid
> alter table if the table participates in a partitioned view. I do not have
> the reason, but alter table tends to spoil the partitioned view / tables.
> hth
> Quentin
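Since only one column can act as the partitioning column, one workaround for the three-part key in the original DDL is to collapse group, year, and month into a single encoded key. A sketch under that assumption (the Period_Key encoding is my own invention, not part of the thread):

```sql
-- Group 1, July 2003 encoded as one partitioning column:
-- Period_Key = Group * 1000000 + Year * 100 + Month = 1200307.
CREATE TABLE dbo.Group1_07_2003 (
    Sample_ID        uniqueidentifier NOT NULL,
    Period_Key       int NOT NULL CHECK (Period_Key = 1200307),
    [Timestamp]      datetime NOT NULL,
    msec             int NOT NULL,
    Device_ID        bigint NOT NULL,
    Topic_ID         bigint NOT NULL,
    Sample_Type_ID   bigint NOT NULL,
    Calculated_Value float NOT NULL,
    Original_Value   float NOT NULL,
    CONSTRAINT PK_Group1_07_2003 PRIMARY KEY (Sample_ID, Period_Key)
)
```

Each member table then carries its own distinct Period_Key value, the CHECK constraints no longer overlap, and the UNION ALL view becomes updatable because a single partitioning column exists.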
|||We have a similar problem. We have a large table from which we
always fetch data by giving a certain date.
So, we broke the table up on a per-month basis and created a
partitioned view. The date column (datetime datatype) in
each member table has a check constraint.
When I query the view giving the date condition, SQL
Server accesses all member tables and it takes a very long time
to fetch.
You mentioned that a datetime column wouldn't work.
But it must be one of the most typical cases where one wants
a partitioned view scenario, isn't it?
Is there any workaround from the Microsoft side?
I am looking forward to hearing from you!!!
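On this question: whether SQL Server 2000 prunes member tables for a parameterized predicate depends on the plan containing startup filters; when it does not, the blunt workaround is to inline the value so the optimizer sees a literal at compile time. A sketch (view and column names assumed from the BOL example earlier in the thread):

```sql
-- A literal predicate can be matched against the member tables'
-- CHECK constraints when the plan is compiled:
SELECT * FROM Year1998Sales WHERE OrderMonth = 5

-- A variable predicate often cannot, so one blunt workaround is to
-- build the statement dynamically, turning the variable into a literal:
DECLARE @m varchar(2)
SET @m = '5'
EXEC ('SELECT * FROM Year1998Sales WHERE OrderMonth = ' + @m)
```

Dynamic SQL trades plan reuse for partition elimination, so it is only worth doing when the member tables are large enough for the scan of every partition to dominate.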
|||I agree, it does seem like partitioning by date makes the most sense,
but we don't know of any workaround. What actually got our team
looking at partitioned views was an article called "Add Scalability
with Data Partitioning" by Jon Rauschenberger. I only have a printout,
sorry I don't have a link. Anyway, this article seems too good
to be true, but with the design we have in mind it just might be too
complex. We are still considering dividing our data into smaller
tables, but we do not plan on using a view for inserting and querying
the data. We are considering writing our own custom apps to do this,
but we are still working on other areas and that has its own
drawbacks. Not sure what the workaround is for now. Sorry, no help.
"didi" <carlsdottar@.hotmail.com> wrote in message news:<1a1e01c360c9$c23f09b0$3501280a@.phx.gbl>...