Doing the same thing over and over again for a long time usually gives you the opportunity to recognize patterns.
One pattern I found in solving customer messages is the following:
The less friendly and the more aggressive the customer acts toward the message processor, the worse the actual outcome of the message.
What does ‘outcome’ mean here? To me a positive message outcome includes:
1) The problem is technically solved.
2) The solution was available quickly enough that it is still relevant to the customer.
3) The customer feels better after the issue is solved – it was a positive change for them.
So, in order to find out how to facilitate the three points above, let’s look at what one can do to actually prevent them. Let’s start looking for anti-patterns… these are the WTMTW (ways-to-make-things-worse) I found:
WTMTW #1: Don’t cooperate at all!
Just tell the message processor that there is a problem in some area of his so-called standard software and leave him alone with that.
If the message processor asks for log or trace files, ignore the request or just send some arbitrary file.
If he asks you for a system connection, deny this and insist that the problem must be a bug that should be analyzed in-house at SAP.
WTMTW #2: Pump up the pressure!
Open each and every message with priority ‘Very High’ or ‘High’, regardless of how important it actually is to your company’s business processes. If you don’t get replies to message entries you made five minutes before office hours end in your time zone – call the CIC and complain about it.
Ask to be connected to the message processor immediately and threaten him/her with the escalation of this message to the highest management level there is.
WTMTW #3: Ignore the basic rules of human communication.
As the support people are trained to handle sensitive customers, they get along without any problem when you don’t put any formal greetings or regards in your messages. Don’t even tell them your name – it’s none of their business, is it?
WTMTW #4: “Punish” the message processor by giving him/her bad ratings in the customer satisfaction survey after the message has been closed. He/she should know that they cannot mess with you.
WTMTW #5: Fortify your position! You’re not ‘just another customer’ but the most important one. Let the message processor know exactly how important you and your company are. This surely provides the motivation to “give out the solution” more quickly. Don’t react to anything the message processor writes or asks before he has acknowledged your importance.
Interestingly, these anti-patterns (!) are used quite a lot.
Even more interestingly, it seems to me that this group of users is heavily biased toward being male.
I can hardly remember ever reading a message from a woman that was not at least civilized. And in my memory I can only find phone calls with men who swore and shouted the whole time while talking to the message processor.
To me ‘message processing’ (similar to most other services) can be seen either as a conflict or a strategic game.
In both models one overall strategy is most often the most successful: focus on the goal – not on positions. For all involved parties the outcome of a message is best when the problem is solved. Assigning guilt or blame to anyone makes nobody better off. And making the other party angry is never a good idea if you want to get things done (ever argued with your mechanic?).
But what if you got a ‘really bad’ message processor? Well, tell him/her directly what you don’t like and what you expect. Ask to have the message handled by a different processor if it does not get better.
If all else fails: contact your SAP representative and discuss what can be done to improve the outcome of messages for your company (and yourself).
More and more customers decide to use MaxDB and with that, the number of customers asking for a comprehensive list of differences or a comparison increases as well.
In my mini-series “Mind the gap” I will try to shed some light on where the little or big differences between MaxDB and Oracle databases are and what to keep in mind when working with them.
Today I take a look at how Oracle and MaxDB store the data that is put into the databases and what the consequences are. It’s quite a long text to read for a blog, but I promise you’ll be wiser when you finish it 😉
How Oracle does it
As most people have already learned a thing or two about Oracle database administration, I’ll start with it today.
In Oracle databases with SAP systems data is usually stored in tables, indexes or the partitions of tables and indexes.
These are just four types of what is called a segment in an Oracle database. There are many more segment types (e.g. LOBs, clusters, temporary segments, undo segments etc.), but let us first take a general look.
Basically, a segment is anything that allocates storage for data (so a view is not a segment, although you can read data from it).
These segments are stored in chunks of database blocks, called extents. So an extent is a group of blocks. Extents are stored in so-called tablespaces, which simply consist of the storage of one or more data files.
Thus we have this chain of storage abstraction:
SEGMENT --> EXTENTS --> DB BLOCKS/PAGES --> TABLESPACES --> DATAFILES
Whenever a new block is needed to store data for the segment, the next free block in its current extent is used.
When the extent has no free blocks left, a new extent is allocated from the free space in the tablespace.
This approach is both simple and effective.
Due to the extent-wise allocation, the data of one segment is stored clustered together in the data files.
This makes reading all of this data (e.g. when performing a full table scan) rather quick, as the disk latency only applies to the first block read – the following blocks can be read in nearly no time.
Another consequence of this approach is that, due to the clustering of data into extents, a problem called fragmentation can occur.
This problem occurs when, for example, many small pieces of free space exist in a tablespace, but a new extent cannot be allocated because it is too large to fit into any of these small free areas.
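To make the fragmentation problem concrete, here is a toy model in Python. It is purely illustrative – the function and variable names are invented, and this is of course not how Oracle is actually implemented:

```python
# Toy model of extent allocation in a tablespace, illustrating
# fragmentation. Purely illustrative -- not Oracle's real algorithm.

def allocate_extent(free_extents, requested_blocks):
    """Return the index of the first free extent large enough for the
    request, or None if no single hole fits -- even when the total
    free space would be sufficient."""
    for i, size in enumerate(free_extents):
        if size >= requested_blocks:
            return i
    return None

# Five small holes of 8 blocks each: 40 free blocks in total ...
free_extents = [8, 8, 8, 8, 8]

print(allocate_extent(free_extents, 8))   # 0 -- a small request fits
print(allocate_extent(free_extents, 16))  # None -- no single hole is
                                          # big enough: fragmentation!
```

So although 40 blocks are free in total, a 16-block extent cannot be placed anywhere – exactly the situation a DBA then has to resolve manually.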
The second problem with this approach is that it is heavily optimized for data insertion.
As soon as data is deleted, the regained space is not available to other segments right away. That’s right – once an extent is allocated, it will stay allocated until the DBA takes care of it.
The problem is, of course, that even after much data has been deleted, some data may still be present in every extent – so none of them can be freed.
Therefore – to regain storage space – a major task for the DBA is to reorganize the database.
Also, one important characteristic is that data – once it’s written to a block on the disk – will stay in that position.
As long as no reorganization takes place, a specific row of a table will always be available at the very same offset in the very same block where it was created. This characteristic is exploited especially by indexes – Oracle indexes use the row locations (ROWIDs) to reference the data from the index leaf entries. This is the reason why all indexes of a table need to be rebuilt after a table reorganization.
On the one hand, it’s pretty nice to access data via this ROWID because it is the fastest way there is. The ROWID points exactly to the right data file, the exact block and the position within that block – no index can ever be more effective.
On the other hand, all references to a specific ROWID have to be updated whenever the data has to be moved to another location.
And this is usually the case when a reorganization is done…
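A little Python sketch may illustrate this trade-off. The ROWID is simplified here to a (file, block, slot) tuple and all names are invented – this is not Oracle’s real format:

```python
# Toy illustration of ROWID-based access and why indexes must be
# rebuilt after a reorganization. A 'ROWID' is simplified to a
# (file, block, slot) tuple; not Oracle's actual on-disk format.

table = {("f1", 10, 0): "SMITH", ("f1", 10, 1): "JONES"}

# An index leaf entry maps a key directly to the row's address:
index = {"SMITH": ("f1", 10, 0), "JONES": ("f1", 10, 1)}

# Access via ROWID is a single direct lookup -- nothing is faster:
print(table[index["JONES"]])        # JONES

# A reorganization moves the rows to new blocks ...
table = {("f2", 3, 0): "SMITH", ("f2", 3, 1): "JONES"}

# ... and every index entry now points into the void:
print(index["JONES"] in table)      # False -- the index is stale and
                                    # must be rebuilt
```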
There are more advantages to the Oracle approach: due to the many levels of abstraction, there are many options to customize and adapt the storage management to the special needs of your system.
One nice option, for example, is to take tablespaces or data files offline or set them to read-only mode to protect parts of the data from access.
It’s even possible to recover parts of the database – simply because the mechanisms Oracle employs for recovery are bound to the way the data is stored.
How MaxDB does it
MaxDB stores data in a very different manner.
First of all there are fewer levels of abstraction. All data is stored in one big facility, called the data area.
The data area consists of one or more files, called data volumes. Within the data area, all data is written to pages of 8K – similar to Oracle.
But in contrast to Oracle, MaxDB automatically stripes the data evenly over all data volumes.
For MaxDB the storage abstraction looks like this:
DB FILES (tables/indexes) --> PAGES/BLOCKS --> DATA AREA --> DATA VOLUMES
The following picture should clarify this:
As you can see, the pages of all files (that’s what MaxDB calls ‘segments’) are distributed evenly over the whole data area.
All files always allocate just one page at a time, here and there – so when data is deleted, the regained free space is exactly the size needed to store new data. No fragmentation whatsoever!
Reorganization-free, automatic data distribution
Basically, whenever a page is written out to disk, MaxDB chooses the data volume that is filled the least.
Another important feature is the shadow-storage functionality. With each write of a page to disk, the page is stored at a new location; that way, the old version of the page stays in place. To find the current versions of pages in the data volumes, MaxDB uses an internal facility called ‘the converter’. As soon as the write to disk has completed successfully (savepoint), the converter knows that the old page versions can be overwritten.
Thus free space is immediately reclaimed after a savepoint. There’s no need to manually reorganize the database.
Building on that concept, MaxDB also supports freezing a specific DB state – that feature is called a ‘snapshot’. When pages are used for a snapshot, a copy of the old converter is kept. You can continue to work with the database as usual, but have the option to jump back to the state of the snapshot in basically no time. This feature is quite useful for training or test systems, to avoid the otherwise regular recoveries.
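The converter mechanism can be sketched in a few lines of Python. The names and data structures are invented here; the real implementation is, of course, far more involved:

```python
# Shadow-paging sketch: the 'converter' maps logical page numbers to
# physical blocks. Invented names; a gross simplification of MaxDB.

converter = {1: 100, 2: 101}   # logical page no. -> physical block
free_blocks = [102, 103]       # blocks currently unused

def write_page(page_no):
    """Write a changed page to a NEW block; the old version survives
    on disk until the next successful savepoint."""
    new_block = free_blocks.pop(0)
    old_block = converter[page_no]
    converter[page_no] = new_block
    return old_block

old_blocks = [write_page(1)]    # page 1 now lives at block 102
snapshot = dict(converter)      # a snapshot simply keeps a copy of
                                # the converter at this point in time

# After the savepoint succeeded, the old versions may be overwritten:
free_blocks.extend(old_blocks)

print(converter[1])             # 102 -- current version via converter
print(sorted(free_blocks))      # [100, 103] -- old block reclaimed
```

Note how the old block becomes free space the moment the savepoint completes – which is exactly why no reorganization is ever needed.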
One common misconception
Up to SAP DB 7.3 the data distribution was managed differently.
There had been a feature that actively shifted data between the data volumes when the data was unevenly distributed. The problem with that approach: it puts load on the I/O subsystem without any benefit for business actions, that is, SQL queries. So it was perfectly possible that your I/O subsystem was running at its limit while users were not doing anything on the database. That is not desirable when you think of today’s storage systems, which host many database instances.
Therefore this behaviour had been changed to the “passive” redistribution scheme described above.
The one big consequence of this is that you should avoid adding just one single data volume to the database when you want to extend it.
Instead, add multiple data volumes. Even if these data volumes are smaller in size, this will avoid an I/O hotspot, since the new volumes would be the preferred locations for changed pages (see above).
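A tiny simulation shows the hotspot effect. This is toy code assuming the "least-filled volume wins" policy described above – not MaxDB’s actual allocation code:

```python
# Why adding a single empty data volume creates an I/O hotspot under
# a "write changed pages to the least-filled volume" policy.
# Toy simulation only.

def target_volume(fill_levels):
    """Index of the volume that receives the next changed page."""
    return fill_levels.index(min(fill_levels))

# Three well-filled volumes plus one freshly added, empty volume:
volumes = [900, 910, 905, 0]    # used pages per volume

writes_to_new_volume = 0
for _ in range(500):
    target = target_volume(volumes)
    if target == 3:
        writes_to_new_volume += 1
    volumes[target] += 1

print(writes_to_new_volume)     # 500 -- every single write lands on
                                # the new volume
```

With several new volumes instead of one, those writes would at least be spread across all of them.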
So, today there is no active rebalancing of data in MaxDB instances. Even if you attended the SAP DB administration course some years ago and were told there is – there is not. Not anymore 😉
My view on it
Storing data efficiently and getting it back quickly and reliably is one of the major problems to solve when creating a DBMS. And the specific requirements for what the data storage should actually do differ widely. Just think of the many different storage engines available for MySQL databases.
Oracle follows the all-in-one approach. The storage technologies available for Oracle (I only mentioned the very basic, standard options, which are used most with SAP installations – there are _so_many_more_!) are immense and very flexible.
In turn, they are all quite complex.
Even for a standard table and its indexes, it’s necessary to provide a target tablespace. And for the target tablespace, you have to define storage parameters etc. That is something many users don’t need and don’t want. Just observe how SAP moved away from the multi-tablespace default setups and finally arrived at just a handful of tablespaces that are mostly configured in one fashion (LMTS + ASSM = all automatic). But that’s just what MaxDB does anyhow: it does everything automatically.
Once you’re in the position that you really gain from fine-tuning your storage, then Oracle is really the tool of choice as it is very flexible.
On the other hand, if you just want your database to hold your data – then MaxDB does the better job. You delete data – you get space back in the database. Just as you expect it, without any reorganization.
So once more it’s ease of use for MaxDB and flexibility for extreme cases with Oracle.
The cheat sheet ------------- 8< ------------ cut here ------------ 8< ------------
Oracle: Tablespaces separate data storage within the database.
MaxDB: All data is stored in a single location: the data area.
Oracle: Table rows are referenced by ROWIDs and have a fixed location.
MaxDB: Table rows are located via their primary key, so it doesn’t matter where the data is actually stored.
Oracle: Space is allocated in chunks (extents), but not deallocated automatically. A reorganization is necessary to regain free space and to ‘clean up’.
MaxDB: Space is allocated page-wise. Whenever a page is no longer used, its space is immediately available again to any other object in the database.
Oracle: Scan operations that read lots of data sequentially are supported very well by storage systems.
MaxDB: Scans need to read pages one by one, so the ‘large read’ support of storage systems usually does not help much. (Exception: clustered tables in the BW feature pack.)
Database performance problems are one of the more complex tasks that brighten the day of a database software supporter.
Every single day since I started doing support for Oracle and MaxDB, there have been messages in which customers complained about bad database performance.
Statements ‘suddenly’ took ages where they needed only seconds before – although the DBA “did not change anything”.
Parameters need to be maintained
In fact, not changing anything might have been the cause of the problem. With new versions of the database software – whether it’s MaxDB or Oracle – new recommendations for database parameters come up and these should be implemented.
Unfortunately, there are not just three or four parameters but several hundred of them. Which parameters should you set? Which is the important one that determines whether the database is slow or fast?
Fortunately, it’s not necessary that you actually know all the parameters and their meanings.
All you have to do is follow the parameter recommendations that are available in these notes (take the one for your database software version):
#814704 – MaxDB Version 7.6 parameter settings for OLTP/BW
#767635 – MaxDB Version 7.5 parameter settings for OLTP/BW
#1004886 – MaxDB Version 7.7 parameter recommendations
#901377 – MaxDB 7.5/7.6 Parameter recommendations for BW systems
Now how to check the parameters?
OK, looking into each of these notes might be a bit scary, as each of them includes dozens of parameters, most with a reference to yet another note and explanations about what the parameter is for or against.
This is “too much information” for a quick check.
So what are you going to do? Compare the parameters one by one without understanding them? That would be a kind of monkey job.
Luckily there are options to automate this.
Usually, such checks would have been built into the NetWeaver Basis (BW actually had this…), but as the parameter recommendations are likely to change rather often, these built-in checks would nearly always be outdated themselves.
Thus there are semi-automatic check options for MaxDB and Oracle that are maintained whenever the parameter recommendations are modified.
For Oracle, there is a SQL-Script available, which you find as an attachment to note
#0830576 – Parameter recommendations for Oracle 10g
Although the note is for Oracle 10g systems, the attached scripts do work as well on 9i systems.
You can use report “RSORADJV” to execute it or the SQL-Query-functionality in the DBACOCKPIT.
Of course, you can also use it within SQL*Plus – but then you’ll need to set some formatting parameters (see below).
The output shows you how the parameters are currently set, how they should be set, and references the relevant SAP notes. Quite handy!
For MaxDB, there is a different option available, described in note
#1111426 – Parameter check for liveCache/MaxDB instances
Basically, it’s a special parameter file for the DBAnalyzer.
Just start the DBAnalyzer with this parameter file and you get all incorrectly set parameters plus some additional ‘health checks’.
It’s really easy once you know…
As you can see, it’s no black magic to make sure the database parameters are set correctly. You won’t believe how many performance-related support messages are actually solved by this – it’s a lot!
If you want to run the parameter check script in SQL*Plus, you need to set up some formatting options.
The following worked for me (just copy & paste the lines below before running the check script):
set linesize 300
set pagesize 1000
COLUMN name FORMAT a40
COLUMN set FORMAT a8
COLUMN remark FORMAT a60
COLUMN recommendation FORMAT a70
COLUMN is_value FORMAT a50
COLUMN should_be_value FORMAT a50