Oracle 11g training, Day #2 – Savepoint!

Ok, second day Oracle 11g training – what have we done today?

Basically, two topics were addressed:

SECUREFILES and TABLE COMPRESSION

SECUREFILES

SecureFiles is (no, not the new black, but) the new LOB implementation for Oracle 11g.
The old LOB implementation had some drawbacks (although these usually didn’t affect SAP installations, given all recommendations had been implemented correctly) so Oracle re-wrote the whole LOB stuff from scratch.

With this come some nice new features like COMPRESSION, ENCRYPTION and DEDUPLICATION.

The compression works pretty nicely and uses standard compression libraries like bzip2, zlib or lzo.

Important to keep in mind when checking out the compression effects: just like with the old LOB implementation, LOB values smaller than approx. 4000 bytes are stored in the table segment (inline) instead of in the LOB segment.

As I just mentioned segments: the deferred segment creation feature discussed yesterday kept popping up again and again, as certain checks and error messages won’t happen before the segment is created.
One prerequisite for using securefiles is that the table is stored in an ASSM tablespace, but with deferred segment creation Oracle lets you create the table and throws a nasty error message in your face as soon as you insert some data.

Since Oracle now has this super-fancy new LOB implementation, what about the existing tables? What about all the existing applications (like SAP) that use LOBs and don’t know about securefiles?
Well, Oracle provides the DB_SECUREFILE parameter to control what the old LOB syntax used to create tables actually produces.
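
To get a feeling for the syntax, a minimal sketch (table and column names are made up; COMPRESS and DEDUPLICATE require the Advanced Compression option):

-- Control what old-style LOB DDL produces (PERMITTED is the 11g default)
ALTER SYSTEM SET db_securefile = 'PERMITTED' SCOPE = BOTH;

-- Explicitly request a SecureFile LOB with the new goodies switched on
CREATE TABLE doc_store (
    id      NUMBER PRIMARY KEY,
    content BLOB
)
LOB (content) STORE AS SECUREFILE (
    COMPRESS MEDIUM   -- uses the standard compression libraries mentioned above
    DEDUPLICATE       -- store identical LOB values only once
);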

Be aware that for SAP systems only PERMITTED is, well, permitted…
Don’t miss SAP note
SAP Note 1426979 Oracle 11g: SecureFiles – The new way to store LOB data
to get all details about securefiles relevant for SAP.

TABLE COMPRESSION

The second big topic today was TABLE COMPRESSION.
As so often, to see what’s better about the new, you have to know about the old and the alternatives.
Thus we spent some time discussing possible (and existing) approaches to table compression.

I’m not going to repeat it here, but do yourself the favour and update your knowledge about how compression works in certain situations (‘who’ compresses, ‘what’, ‘when’ and ‘where’?).
However, for Oracle 11g table compression FOR OLTP (yes, the compression type is actually named that way), one key point to how nicely compression works for updates and inserts is that compression is NOT DONE IMMEDIATELY!

Like in other areas Oracle decided to go for the “we’ll do it just when we cannot avoid it any longer” way and it turns out that this is a quite clever idea.

Whenever a block reaches a certain (seemingly not completely deterministic) threshold, the block compression (it’s B-L-O-C-K compression, not TABLE or ROW compression!) kicks in and tries to make some space in the block.
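
For reference, a minimal sketch of how you’d switch OLTP compression on (11gR2 syntax; the table name is made up, and the feature requires the Advanced Compression option):

-- Create a table with OLTP compression enabled from the start
CREATE TABLE orders_c (
    order_id   NUMBER,
    order_date DATE,
    status     VARCHAR2(10)
) COMPRESS FOR OLTP;

-- Or enable it on an existing table; only blocks written from now on
-- get compressed, existing data stays as it is
ALTER TABLE orders_c COMPRESS FOR OLTP;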

Actually, this compression is not as sophisticated as the LOB compression but simply consists of replacing values that occur multiple times with a lookup.
Interestingly, this lookup is technically implemented by reusing a very old block storage feature: the ability to put multiple tables into one block.
Does that ring a bell? No?

In that case, go back and check what Oracle cluster storage is in the documentation 😉

So, how well is it working, this compression feature?
Is SAP going to compress each and every table now?
Surely not!

The effect of compression totally depends on what data you’ve got in your tables.

If you somehow can arrange for a large amount of redundancy within (!) the blocks (think of inserting rows with today’s date today, with the next date tomorrow, and so on), then compression is likely to work very nicely for you.

There’s again an SAP note about this feature prepared by the Oracle development:
SAP Note 1436352 Oracle 11g Advanced Compression for SAP Systems

These two topics may not look like much, but they definitely filled the whole day 🙂

For yesterday’s Savepoint click here:

Oracle 11g training, Day #1 – Savepoint!

 

+++UPDATE+++

I totally forgot to mention two other things we’ve had a look at:

1. Invisible Indexes

Ever wanted to check whether a new index would help a certain query without the risk of immediately affecting all execution plans for the table?
With the invisible indexes you can do exactly that.
The visibility is a new index attribute that basically tells the optimizer whether or not to consider the index for any plan evaluation.
Unless the new parameter OPTIMIZER_USE_INVISIBLE_INDEXES is set to TRUE, the CBO will simply not use the index.

Thus, while the parameter is set to FALSE on instance level, you can easily change it for your session (e.g. in ST05 – Explain Plan) or via the OPT_PARAM hint for your statement and review the effects of the new index on the execution plan.
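
A minimal sketch of that workflow (index, table and column names are made up):

-- Create the candidate index invisible to the optimizer
CREATE INDEX test_idx ON big_table (some_col) INVISIBLE;

-- Let only this session see invisible indexes and test the plans
ALTER SESSION SET optimizer_use_invisible_indexes = TRUE;

-- Alternatively, per statement via the OPT_PARAM hint
SELECT /*+ OPT_PARAM('optimizer_use_invisible_indexes' 'true') */ *
  FROM big_table
 WHERE some_col = :x;

-- Happy with the result? Publish the index to everyone
ALTER INDEX test_idx VISIBLE;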

Be careful, however, since you actually do create a fully functional index, which gets maintained just as usual. This is nothing like the virtual indexes that briefly saw the light of day with the Oracle 9i Tuning Pack, for those of you who remember…

2. DDL_LOCK_TIMEOUT

New parameter; the value is the number of seconds to wait…

Easy to explain: assume you try some DDL, say altering your table, that requires an exclusive table lock. On a production system and a central table, you’ll likely see ORA-00054 quite often for this.

Usually, you’d try again a bit later and check whether you now get the table lock.

With the new parameter, you now can tell Oracle to wait the specified number of seconds before giving up.
It’s like a get-the-enqueue-NOWAIT-option with a wait time…
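
A minimal sketch (table and column names made up):

-- Wait up to 60 seconds for the exclusive lock instead of failing
-- immediately with ORA-00054
ALTER SESSION SET ddl_lock_timeout = 60;
ALTER TABLE busy_table ADD (new_flag VARCHAR2(1));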

Luckily, if you explicitly run a LOCK TABLE … IN EXCLUSIVE MODE NOWAIT then this NOWAIT will still be adhered to and no waiting takes place.

Nice feature, if you ask me, but better choose the wait time wisely on your production machine…

+++

Oracle 11g training, Day #1 – Savepoint!

Ok, management wisely decided to afford the Oracle 11g training for me and my primary support colleagues (thanks, thanks, thanks!), so this week we find ourselves sitting in a meeting room with Juergen Kirschner (yes, THE J. Kirschner) both as the victims and witnesses of his very own special way to make sure every attendee understands how Oracle works.

Of course, it’s not meant to be a pure entertaining treat for the database support team. Instead, it is expected from us that we do in fact remember a thing or two after the course so that we can do some actual support on Oracle 11g databases.
Rumors are that customers are starting to use it… 🙂

Thus, I decided to do as Napoleon did – writing down the stuff that you want to remember. And while I’m at it I can even just post it.

And here we go:

Parameters

The first thing I remember from today was that Oracle again managed to increase the number of database parameters.
Don’t ask me for the exact numbers but including the undocumented parameters, 11g easily reaches >2500 parameters.
Combined with an approach to release bug fixes with an on-off-switch defaulting to off (that is, you have to set the parameter e.g. _fix_control-something to activate the fix) this is a near guarantee to make the list of required/recommended parameters even longer.
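
Just to illustrate the on/off switch idea – the fix number below is a placeholder, not a real one; the actual fix numbers can be looked up in V$SYSTEM_FIX_CONTROL:

-- Activate a single bug fix that ships disabled by default
-- (123456789 is a placeholder, not a real fix number)
ALTER SYSTEM SET "_fix_control" = '123456789:ON';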

A nice change in that area is the work they did on the “event” parameter – now you don’t have to specify the miraculous ‘trace name context forever, level xyz’ anymore. Instead, you just write the event number and – if required – the level.
event='10053' would be totally sufficient to start the CBO trace. Nice, isn’t it?

Using SPFILES also got a bit handier.
When resetting a parameter you can now omit the “SID='*'” part that was required with earlier versions.
And it is even possible to change the order of the keywords in the ALTER SYSTEM statement.
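
For example (parameter picked arbitrarily):

-- 11g: the short form is enough...
ALTER SYSTEM RESET undo_retention;

-- ...while earlier versions insisted on the full incantation
ALTER SYSTEM RESET undo_retention SCOPE = SPFILE SID = '*';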

As usual with new database versions, some parameters get outdated and deprecated.

Now, what does Oracle do when you specify such parameters in your instance profile and start up the database?

  1. It starts up the instance, but not without
  2. printing out a nasty “ORA-32004: obsolete or deprecated parameter(s) specified for string instance” message and changing the return code of SQLPLUS to a non-zero value.

If that does not make you nervous (well, I was quite relaxed up to this point…), you surely don’t expect to find parameters like REMOTE_OS_AUTHENT on the list of deprecated parameters.
SAP NetWeaver heavily relies on the OPS$-logon feature, and the standard scripts’n’tools like “startsap” and “brconnect” tend to interpret non-zero return codes from sqlplus as errors … that’s going to guarantee some fun!
You’ll already find SAP notes on this one and a proper solution for this is currently being worked on – so hopefully, this won’t be an issue for too long.

Ah – while we’re at it: the new parameter recommendation note is
1431798 Oracle 11.2.0: Database Parameter Settings
Currently searching for ‘11g + parameter’ doesn’t find it, but I guess this can be easily fixed.

Memory Management

While the Oracle 10g feature presentation slides explained the advantages of automatic SGA tuning (sga_target, sga_max_size), the Oracle 11g slides now advertise the automatic tuning of the whole Oracle memory, SGA AND PGA (memory_target, memory_max_target).

However, to be on the safe side in your production system, you would still need to set reasonable minimum sizes for SGA and PGA to prevent your database from going down due to some extremely memory-consuming statement.
And as soon as you do that, all the nice automatic management advantages go out of the window as you have to decide on reasonable memory sizing yet again.
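
In practice that might look like this (the sizes are examples only):

-- Let Oracle shift memory between SGA and PGA automatically...
ALTER SYSTEM SET memory_target = 8G SCOPE = SPFILE;
-- ...but pin reasonable floors, so a single memory-hungry statement
-- cannot shrink the SGA below what the system needs
ALTER SYSTEM SET sga_target = 5G SCOPE = SPFILE;
ALTER SYSTEM SET pga_aggregate_target = 2G SCOPE = SPFILE;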

Datafile Space Management

Quite the same is true for some datafile space management features.
There’s a new background process SMCO (Space Management Coordinator Process) that extrapolates the space requirements in your database and automatically extends the datafiles for you.
That way, if you’re using autoextend data files, your data loading transactions won’t have to wait until the file extension has been made.
Good or evil feature?
Well, if you hand the allocation of filesystem space over to some automatic feature – remember, this filesystem space has to be paid for (twice or more… for your QA and test system as well!) – then you might get into trouble sooner or later.
Besides, there’s no automatic “drop me just that datafile” feature yet – so reclaiming the filesystem space might become a bit more work.

Unrelated side note: the space management bitmap for all newly created datafiles has now been enlarged to the size used for bigfile datafiles. This will lead to a bit more ‘wastage’ in your data files; so be informed, you bean-counters out there!

Also unrelated: it’s now supported to choose a larger blocksize for the online redo log files. While this might improve write I/O performance for your storage solution, it might be worth a second thought, since the redo log block size is the minimal container for writing out LOG data.
You change a row, generate 12 bytes of LOG data and commit?
DANG!
You just wrote out 4K to the online redo logs (assuming you set the maximum block size).
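
If you want to try it anyway, the sketch looks like this (11gR2 syntax; path and size are made up – and if memory serves, on disks reporting classic 512-byte sectors Oracle refuses this unless the sector size check is overridden via _disk_sector_size_override):

-- Add a redo log group with 4K blocks instead of the classic 512 bytes
ALTER DATABASE ADD LOGFILE GROUP 5
    ('/oracle/SID/origlogA/log_g5_m1.dbf') SIZE 512M BLOCKSIZE 4096;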

Another new background process is the “Virtual Keeper of Time Process” (VKTM).
As the demonstration via strace easily showed, this process does two things:

  1. sleeping and
  2. issuing gettimeofday() system calls.

Now, what’s this for, you might ask – as I did as well.
The idea behind it goes something like this:
gettimeofday() is a rather expensive system call, and in some cases its precision is not required.
Sometimes the current second would do.
In such cases, the other Oracle processes can now just read the result of VKTM’s last system call and use this instead of calling the system API themselves.
Where and when this is used?
No bloody idea… (a.k.a. confidential!).

Back to datafile space management: there are some minor changes that only partly make sense.
You can now provide a maximum segment size on tablespace level.
So if your DBA goes nuts and decides that segments larger than 20 MB are evil and thus not allowed, then you cannot create such segments.
Not even resumable space allocation gets you out of this, because… yeah, well, because it just doesn’t (the real reason is even more Homer-Simpson-DUH!-like, so don’t ask!).

The next feature, deferred segment creation, is important to know about, since it’s enabled by default.
The effect is that when you create a table, no segment is created for it as long as you don’t insert any data. The same is true for indexes on this table.

Given that you typically find thousands of empty tables in SAP systems this feature alone might save you quite some space as until now even empty tables allocated at least 8 blocks plus another 8 blocks for each index, lobindex, lob column etc.
Currently, this feature does not work for partitioned tables (that should change ‘soon’) and will crap up the output of your monitoring scripts if you just join DBA_TABLES and DBA_SEGMENTS without an OUTER JOIN.
In the worst case, you’ll look at your script output and find thousands of tables missing… In that case, breathe in, relax and check the DBA_TABLES column SEGMENT_CREATED to figure out whether there should be a segment at all.
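
A quick sketch of the behaviour (table name made up):

-- deferred_segment_creation defaults to TRUE in 11gR2
CREATE TABLE empty_tab (id NUMBER);

SELECT segment_created FROM user_tables WHERE table_name = 'EMPTY_TAB';
-- -> 'NO': no segment allocated yet

INSERT INTO empty_tab VALUES (1);
-- now the table segment (and any index/LOB segments) gets created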

The CBO, however, will treat these tables as if they had their segments.
So it just keeps relying on the CBO stats and does not play clever tricks with the knowledge of having really empty tables. 🙁

While digging into this feature it came up that the famous Oracle dictionary scripts that helped us Oracle hackers so much to learn about the inner structures in the past have changed as well.
SQL.BSQ is now split up into multiple sub-*.BSQ files – but these are still worth the dive from time to time!

Encryption

Before I get too tired to remember anything more, let’s move on to tablespace encryption.
Just like for TDE, which was available with Oracle 10g, it’s required to create an encryption key in the form of a ‘wallet’.
This can either be manually opened after instance startup or an automatic opening can be configured.
As the encryption setting is now done on tablespace level, the handling got a lot easier than before.
Still, the major security impact is on the backup situation.
Of course, the production database will have the wallet open all the time and automatically – so little additional security on your production instance. The backups, however, even the ones done via filesystem snapshots, are now secure and useless to everybody who does not have the correct wallet.
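
For reference, the basic handling as a sketch (the wallet location comes from sqlnet.ora; password, tablespace and file names are made up):

-- Create the master key (first time) - this also creates/opens the wallet
ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "wallet_pwd";

-- After a restart: open the wallet manually (unless auto-open is configured)
ALTER SYSTEM SET ENCRYPTION WALLET OPEN IDENTIFIED BY "wallet_pwd";

-- Everything stored in this tablespace gets encrypted transparently
CREATE TABLESPACE secure_ts
    DATAFILE '/oracle/SID/sapdata1/secure_ts.data1' SIZE 100M
    ENCRYPTION USING 'AES128'
    DEFAULT STORAGE (ENCRYPT);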

Ok, that’s all that I can wring out from my short-term memory today.
(note to self: maybe I should consider taking notes…?) 😉

See you tomorrow, so long

Lars

SAP Support Case “Why is Oracle writing out so many archivelogs in my BI recently?”

Sometimes support messages have to go a rather long way to finally get a solution.

The following problem came up in a support message recently and since neither the analysis steps nor the solution were obvious, I decided to make a blog out of it.

Ok, so the problem statement was something like this:

Hello support,
we’re using SAP BI and recently re-initialized some of our data sources.
From then on, our Oracle database writes out Archivelogs at a rate 4 times higher than before.
What’s causing this and how can we stop it?

Hmm… this does sound a bit odd!
Why should performing an initial load to the BI lead to a permanently raised level of DB change-logging?

So the first step for this message was of course to verify the observation of the customer.

Nifty tools at hand

A very nice way to do that is to use the scripts from the RSORASTT support script framework (see note #1299493 “Loadable SAP Support Monitors in DBA Cockpit” for details).

With one of those scripts the following matrix was produced:

----------------------------------------------------------------------------------------------------------------------------------
  DAY|WD |00  |01  |02  |03  |04  |05  |06  |07  |08  |09  |10  |11  |12  |13  |14  |15  |16  |17  |18  |19  |20  |21  |22  |23  |
----------------------------------------------------------------------------------------------------------------------------------
06-08|TUE|   2|  12|  44|    |   1|    |    |  45|    |    |  45|    |    |    |    |    |    |    |    |    |    |    |    |    |
06-07|MON|  90|    |    |    |    |    |    |  43|    |    |  44|    |   1|  44|    |    |  43|    |   1|  44|    |    |   5|    |
06-06|SUN|    |    |    |    |    |    |    |    |    |  48| 116| 132|  92|  50|  42|  42|  53|  24|    |  60| 135| 130| 132| 124|
06-05|SAT|   6|   7|  45|  53| 144|  90|    |    |    |    |    |  68| 134| 123|    |    |  47| 128|  69|    |    |    |    |    |
06-04|FRI|   2|  14|    |  43|    |    |    |  40|   3|    |    |  54|    |  44|    |    |  44|    |    |  45|    |    |   1|    |
06-03|THU|   6|  11|  44|    |   1|    |    |    |    |  45|  47|    |    |  46|    |    |  44|    |    |  43|    |    |   1|    |
06-02|WED|   3|   8|  77|   8|   1|    |    |  45|    |    |  44|    |    |  43|    |   1|  45|    |   1|  45|    |    |   4|    |
06-01|TUE|   3|   6|  22|  24|   1|    |  18|  74|   7|    |  45|   1|    |  44|    |    |  44|    |    |  45|    |   1|   5|    |
05-31|MON|    |    |    |    |    |    |    |  45|    |    |  44|    |    |  43|    |    |  43|    |    |  44|    |    |   6|    |
05-30|SUN|    |    |    |    |    |    |    |    |  63|    |    |    |    |    |    |    |    |    |    |    |    |    |   1|    |

05-29|SAT|   3|   5|  28|  64| 141| 102|    |    |    |    |    |    |  97| 124| 133|  98|    |    |    |    |    |    |    |    |
05-28|FRI|   6|   6|  45|    |    |    |    |  45|    |    |  45|    |    |  44|    |    |  47|    |    |  46|    |   1|   1|    |
05-27|THU|   6|   6|  47|    |    |    |    |  47|    |    |  46|    |    |  44|    |    |  46|    |    |  46|    |    |   5|    |
05-26|WED|   2|  11|  43|   7|    |    |    |  44|    |    |  44|    |    |  44|    |   1|  45|    |    |  46|    |    |   5|    |
05-25|TUE|    |    |    |  49|  24|    |    |  44|    |    |  45|    |    |  45|    |    |  45|    |    |  45|    |    |   3|    |
05-24|MON|    |    |    |  49| 140| 137| 150|  53|    |    |    |    |    |    |    |    |    |    |    |    |    |    |   5|    |
05-23|SUN|    |    |    |  98| 146|   1|    |    |    |    |    |    |    |    |    |    |    |    |    |    |    |    |   5|    |

05-22|SAT|   5|  11|    |  43|    |    |    |    |    |    |    |    |    |    |    |    |    |    |    |    |    |    |    |    |
05-21|FRI|   4|   6|    |    |    |    |    |    |    |   4|   3|  42|  50|  43|  46|  44|  45|    |    |  46|    |    |   4|    |
05-20|THU| 158| 207| 212| 194| 156|  89|    |    |    |    |    |    |    |    |    |    |    |    |    |    |    |    |   5|    |
05-19|WED|    |    |    | 202| 190| 152|    | 142| 199| 207| 218| 209| 172| 222| 208| 210| 225| 224| 219| 217| 218| 222| 216| 219|
05-18|TUE| 152| 208|   2|    |    |    |    |  53|    |    |    |  13|    |    |    |    |    |    |    |    |    |    |   5|    |
05-17|MON|    |    |    |  73| 106|   1|   2|   1|    |   9|   8|  61|  95| 160|  78| 207| 207| 231| 197| 208| 210| 216| 210| 193|
05-16|SUN|    |    |    |  10|    |    |    |    |    |    |    |    |    |    |    |    |    |    |    |    |    |    |   6|    |

05-15|SAT|   3|   5|    |  64| 134|    |    |    |  32| 105|    |  21|    |  17|  16|    |   7|    |    |    |    |    |   4|    |
05-14|FRI|    |    |    |  66| 108|  14|   8|   7|   8|   7|   6|   6|   7|   7|   9|   8|   9|   8|   7|   7|   7|   9|   5|    |
05-13|THU|   4|   7|    |  72| 113|    |    |    |    |    |    |    |    |    |    |    |    |    |    |    |    |    |   5|    |
05-12|WED|   4|   9|   5|    |   2|    |   6|   7|   6|   6|   6|   6|   7|   8|   7|   7|   7|   7|   7|   7|   8|   8|   4|    |
05-11|TUE|   4|   6|   8|  10|    |    |  10|   8|   6|  14|   7|   8|   7|   6|   7|   8|   8|   8|   7|   7|   7|   7|   6|    |
05-10|MON|    |    |    |  67| 107|  16|   6|   7|   6|   6|   7|   8|   7|   8|   7|   9|   6|   6|   7|   9|   6|   9|   4|   1|
05-09|SUN|    |    |    |  66| 120|    |    |    |    |    |    |    |    |    |    |    |    |    |    |    |    |    |   6|    |
05-08|SAT|    |    |    |    |  48|    |    |    |    |    |    |    |    |    |    |    |    |    |    |    |    |    |   5|    |
----------------------------------------------------------------------------------------------------------------------------------

What we see here is for each day 24 columns with the number of archive logs that had been written out in that period of time.
The initial data load started on Monday (May 17) and finished (with some interruptions) on Thursday (May 20).
Before this load, we see an archive log creation rate of 6 – 9 archive logs per hour.
After the load, we see something around 44 – 50 every three hours.

Not only had the total amount of redo log data increased, but the pattern in which the archive logs are written out had changed as well.
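
By the way: if the support scripts are not at hand, a similar day-by-hour histogram can be pulled straight from V$LOG_HISTORY – a minimal sketch:

-- Count of redo log switches per day and hour
SELECT TO_CHAR(first_time, 'MM-DD') day,
       TO_CHAR(first_time, 'DY')    wd,
       TO_CHAR(first_time, 'HH24')  hour,
       COUNT(*)                     logs
  FROM v$log_history
 GROUP BY TO_CHAR(first_time, 'MM-DD'),
          TO_CHAR(first_time, 'DY'),
          TO_CHAR(first_time, 'HH24')
 ORDER BY 1 DESC, 3;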

Now, after we proved that the customer’s observation was totally correct, the next obvious question was:
“What causes so much database change logging?”

In fact, this is one of those hard-to-answer questions, since no database keeps detailed logs about the objects, statements or users that caused the logging – except in the logs themselves. Actually, that’s what the redo log files are there for.
Unfortunately, it’s not quite practical to try to read what’s in Oracle archive log files.

So one main approach is to assume that in a BI system most data change comes from loading data, a.k.a. inserting data into tables.
This would make the tables and the associated indexes larger.
So let’s look out for objects that showed a considerable increase in size in the last weeks.

By using the standard tools from DBACockpit I found the following segments to be the ones growing fastest:

Months   (Last analysis: 01.06.2010 07:05:30)
-----------------------------------------------------------------------
Name            |Partition            |Type      |Si(MB)|Chg.Size/month
-----------------------------------------------------------------------
/BIC/B0000356000|B00003560000000000086|TABLE PART| 3.162|    2.449,355
/BIC/B0000356000|B00003560000000000084|TABLE PART| 3.255|    1.241,311
RSBERRORLOG     |                     |TABLE     |29.339|      994,767
/BIC/B0000356000|B00003560000000000082|TABLE PART| 3.142|      818,478
/BIC/B0000356000|B00003560000000000080|TABLE PART| 3.110|      625,500
/BIC/B0000356000|B00003560000000000078|TABLE PART| 3.204|      483,974
RSBERRORLOG~0   |                     |INDEX     |11.203|      462,247
/BIC/B0000356000|B00003560000000000076|TABLE PART| 3.179|      399,890
RSBERRORLOG~001 |                     |INDEX     | 8.237|      339,616
/BIC/B0000356000|B00003560000000000074|TABLE PART| 3.126|      338,208
/BIC/B0000356000|B00003560000000000072|TABLE PART| 3.157|      292,963
/BIC/B0000356000|B00003560000000000070|TABLE PART| 3.127|      261,868
/BIC/B0000267000|B00002670000000000002|TABLE PART| 8.195|      260,137
/BIC/B0000356000|B00003560000000000068|TABLE PART| 3.008|      224,211
/BIC/B0000356000|B00003560000000000066|TABLE PART| 3.058|      206,328
/BIC/B0000356000|B00003560000000000064|TABLE PART| 3.008|      186,740
/BIC/B0000356000|B00003560000000000062|TABLE PART| 3.072|      185,425
/BIC/B0000356000|B00003560000000000060|TABLE PART| 2.304|      181,479
/BIC/B0000356000|B00003560000000000057|TABLE PART| 2.240|      176,219
[...]

This boiled down to the RSBERRORLOG table and a table called /BIC/B0000356000.
For those of you familiar with the table naming convention of SAP BW, it’s obvious that this /BIC/B0* table is a PSA table.
For us mere mortals transaction SE11 provides this information ;-D

/BIC/B0000356000
PSA for 2LIS_41_S920 P11-100 2LIS_41_S920

The “2LIS_41_S920 P11-100 2LIS_41_S920” is a BI data source name.

A hot lead – or not?

This sounded like a hot lead.
Unfortunately, we still have no direct connection between the table growth and the redo log writing.

So I decided to take another approach and to review the Oracle segment statistics.

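-- Top 10 segments by 'db block changes' and 'physical writes'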
SELECT * FROM
 ( SELECT
     SUBSTR(OBJECT_NAME, 1, 30) SEGMENT,
     SUBSTR(STATISTIC_NAME, 1, 20) STATISTIC_NAME,
     VALUE
   FROM V$SEGMENT_STATISTICS
   WHERE STATISTIC_NAME IN ('db block changes', 'physical writes')
   ORDER BY VALUE DESC )
 WHERE ROWNUM <=10; 
Result of the SELECT statement
-------------------------------------------------------------------
|SEGMENT                       |STATISTIC_NAME  |            VALUE|
-------------------------------------------------------------------
|/BIC/LZLMI02~0                |db block changes|   1.651.973.280 | <<<
|/BIC/LZLMI02                  |db block changes|   1.095.154.352 | <<<
|TESTDATRNRPART0               |db block changes|     107.267.472 |
|/BIC/LZLMI02~0                |physical writes |      45.216.909 |
|TESTDATRNRPART1               |db block changes|      15.905.616 |
|/BIC/LZLMI02                  |physical writes |      14.176.353 |
|/BI0/TMATERIAL                |db block changes|      13.590.528 |
|TESTDATRNRPART0~0             |db block changes|       5.284.080 |
|SEG$                          |db block changes|       3.218.256 |
|RSBKDATA~0                    |db block changes|       2.265.696 |
|/BIC/B0000356000              |db block changes|       2.115.920 | <<<
[...]

So indeed, the table we identified earlier belongs to the Top 10 segments on which block changes and therefore logging occurs.
But table /BIC/LZLMI02 and its index /BIC/LZLMI02~0 are way up in a totally different class!
These two segments show a whole order of magnitude more block changes than the next-largest objects.

In fact, these two segments caused more block changes in that database than all other tables/indexes combined!

I thought this would be a good reason to learn something more about this table.

The first unusual thing (at least for a BI newbie like me) was the naming of the table.
I never came across a /BIC/L* table before.

NC IC? S.O.S.!

Thus I consulted with BW colleagues and got some great inter-component-intra-primary-support consulting.
This /BIC/L* table belongs to a special kind of InfoCube: a “non-cumulative” or NC Infocube (SAP BI documentation “Non-Cumulatives”).

As far as I got it, the special feature of this InfoCube is the ability to track information properly that cannot be summed up (accumulated) over time.

Like inventory in your stock.
It simply does not make sense to add up the stock of Monday with that of Tuesday – thus the standard summation model of InfoCubes cannot be applied for this.
Instead, the NC InfoCube denotes the amount of stock leaving and entering the inventory and calculates the current stock at query runtime.

Now, it’s possible to make very specific restrictions on the validity of the data.
In short – the more attributes you use to restrict the validity, the more combinations of restrictive attributes need to be checked and updated every time data is loaded into the NC InfoCube.
Therefore it’s highly recommended to use as few restrictions on the validity as possible (SAP BI documentation “Validity Area”).

For this specific NC InfoCube four attributes had been chosen:

InfoObject                 Description
----------                 ----------------
0CALDAY                    Calendar Day
0PLANT                     Plant
0STOR_LOC                  Storage Location
0BATCH                     Batchnumber

Four attributes may not look like much, but it’s the number of distinct values for each of them that makes for the huge number of possible combinations.

To get a feeling for how many combinations are in the table right now, and which of those attributes contributes most to that, we used the old but seldom used transaction DB05.

Arcane DB05 techniques in use…

The following is the output for a DB05 analysis for table /BIC/LZLMI02 over the mentioned fields.
Attention: this transaction does read ALL data in the analysed table and aggregates it.
So better be careful what combination of fields you put into it!

-----------------------------------------------------------------------
|Date/time of analysis:                  10.06.2010  13:18:18         |
|Analyzed table:                         /BIC/LZLMI02                 |
|Total number of rows:                      7.667.420 rows            |
|Requested bytes per row:                          16 bytes per row   |
|Requested bytes to be buffered 100%:     122.678.720 bytes           |
|Current buffering mode:                           no buffering       |
-----------------------------------------------------------------------
------------------------------------------------------------------------
|     Rows per    | Distinct   |          1           11          101
|   generic key   | values     |       - 10        - 100      - 1.000
------------------------------------------------------------------------
|      Key fields |            |
|  1   SID_0REQUID|          2 |          0            0            0
|  2   SID_0PLANT |         14 |          2            0            0
|  3   SID_0STOR_L|        816 |         92          108          278
|  4   SID_0BATCH |  7.667.420 |  7.667.420
------------------------------------------------------------------------

------------------------------- ----------------------------------------
|     Rows per    | Distinct  |     1.001    10.001    100.001 more than
|   generic key   | values    |  - 10.000 - 100.000 -1.000.000 1.000.000
------------------------------- ----------------------------------------
|      Key fields |           |
|  1   SID_0REQUID|          2|         0         0          0         2
|  2   SID_0PLANT |         14|         0         2          8         2
|  3   SID_0STOR_L|        816|       274        50         14
|  4   SID_0BATCH |  7.667.420|
------------------------------- ----------------------------------------

The output needs to be explained, however.

Usually, you would only see one matrix with fields and distinct values, but for readability reasons I reformatted the output, so we have two matrices here.

To understand this matrix let’s start in line 1.
For field SID_0REQUID there are 2 distinct values in the table, and both of them occur in more than 1 million rows.
So far so obvious.
Line 2 now is NOT just about SID_0PLANT, but instead about the combination of SID_0REQUID and SID_0PLANT!
For this combination, we get a total of 14 distinct combinations.
Two of them are rather rare, with just 1-10 rows each, while the other 12 combinations all have >= 10.000 rows.

And so on.

Interesting, of course, is the last line.
SID_0BATCH makes the combination of all columns unique!
So, for every one of the 7.667.420 combinations of the four fields, we just have one row in the table.

Therefore this “batch number” is a near-unique attribute, and the /BIC/LZLMI02 table needed to keep track of many, extremely many, validity range definitions.

And remember: this is not the data that gets reported on, but just the validity information.
These 7.6 million rows needed to be updated whenever new data was loaded into the ZLMI02 InfoCube!

If that does not explain the high archive log creation rate, I don’t know what else would 🙂

Happy as we were to have found the culprit, one last question remained:
What could we – or rather the customer – do about this?

As many of you know, the most efficient way to do something is to avoid doing it.
Thus, we checked whether /BIC/LZLMI02 was actually used by any of the queries stored in this BI.

One easy way to approach this is to check table RSRREPDIR.
It contains the directory of all queries that are built on InfoCubes:

INFOCUBE   GENUNIID
ZLMI02     ZLMI02/Z_ZLMI02_0001
ZLMI02     ZLMI02/E_ZLMI02_0002
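
For the SQL-minded, the same lookup could be done directly on the database – a hypothetical sketch, assuming the SAP schema is SAPSR3:

-- Which queries are built on InfoCube ZLMI02?
-- (schema name SAPSR3 is just an example)
SELECT infocube, genuniid
  FROM sapsr3.rsrrepdir
 WHERE infocube = 'ZLMI02';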

Just two queries for this InfoCube.
Via transaction RSRT (Query Monitor) it was then easy to look up whether either of those two queries used non-cumulative values.
To display this information, all you have to do is select the query and then click the pushbutton “Performance Info”.
You’ll then get a list with a section called “Technical Information” and a paragraph labelled “Non-Cumulative Indicators” (SAP BI documentation “Technical Information”).

It turned out that neither of the two queries actually used the non-cumulative values, let alone the validity restrictions.

Get this! I mean: get THIS!

Get this: the vast amount of database change activity was all for nothing!

The recommended solution was to remove the validity restriction attributes with the help of BI report RSDG_CUBE_VALT_MODIFY as it is documented:

SAP BI documentation “Validity Area”

There’s also an overview note available on the non-cumulative data handling in SAP BI (#586163 “Composite Note on SAP R/3 Inventory Management in SAP BW”).

Finally, the problem was solved by changing the validity areas.