Category Archives: SCN Blog

HANA backtrace 36/2017

What to read? Whom to follow?

One of the important things to pick up in any community is finding out who the people are that usually have good, funny or interesting content. Many community platforms make it easy to find out who the “leading contributors”, the “heroes”, the “aces” etc. are. Currently, the platform SAP Community is run on struggles with this, big time.
It is actually easy to miss new blog posts, sometimes even if one has subscribed to a specific author.

So, here is a short list of people that I noticed putting out really good content in the SAP HANA space.
This list is really just what my very biased eye has seen and my colander-like memory has not lost.

If you know more authors or think that your own material should be seen by more HANA-heads then please, do not hold back and add it in the comments section.

Laura Nevin

An SAP colleague up North, close to Niagara Falls, working as a Development Architect in the HANA documentation team. Her blog “atrium” does not just contain your vanilla product updates but gives the reader a whole tour around what’s new and noteworthy with the documentation.

I find her blogs particularly well written and produced and silently wish more people (myself included) would publish such nice blogs.

Lucia Subatin

Yes, also an SAP colleague, but from the other end of the world: Argentina.
Lucia has a lot to say and write about the SAP HANA Express edition.
Actually, that’s her job, as she is an SAP HANA Evangelist and apparently a very good one.

Jan Zwickel

Jan is an SAP HANA product manager who has written a number of in-depth explanations of the otherwise mysterious “flags” of the graphical calculation views.
Working from SAP’s center of gravity in Walldorf, Germany, he is as close to the source as it gets.

Sharyu Sarode

There’s not much that I know about Sharyu, except that she seems rather experienced in the administration corner of SAP on HANA and that she has a knack for writing about it.

She is not an SAP employee, which makes it even better that she shares her experiences with us. Judging from her profile page, she works in Australia, just like myself.
Her blogs are well prepared and definitely worth the read.

 

As mentioned, these pointers are just the content I’ve seen and liked here on the SAP Community platform.

I am very happy to see that the reports of the death of the platform and the community seem to have been exaggerated and that there are interesting, read-worthy contributions from both SAP and non-SAP authors.

Thanks for that folks!

Other stuff I like to recommend

As there is so much other stuff going on outside the SAP Community, let’s have a quick look at that, too.

Weapons of Math Destruction

If you work with HANA, then Big Data, Data Science, and Predictive Analytics are your daily bread. Make sure to get an understanding of the social impact this can have.

Both the book and the talk by the author are highly recommendable.

“Badass: Making Users Awesome”

from the famous Kathy “Head First” Sierra is a particularly enjoyable and easy read. A well-designed book with good ideas on how to create successful products and not just a bunch of features.

There you go, now you know!
Lars

HANA gets stripped

This is going to be one of those weird blog posts that basically convey just a single piece of information, but since the new SAP Community platform doesn’t really care for WIKI/document-style knowledge storage, I had to come up with a click-bait title to make it a blog post.

I’ve written before on installing HANA cockpit and yesterday I fired it up once again.
Snooping around a bit in the trace files (running two HANA 2 instances + XSA stacks on my one 32GB NUC always leaves room for questions like “how much actual processing memory do we have left now?“) I found this jewel of an “error curtain”:

[26144]{-1}[-1/-1] 2017-09-12 13:30:34.208039 w Basis ELF.cpp(00770) :
[26144]{-1}[-1/-1] 2017-09-12 13:30:34.208039 w Basis ELF.cpp(00771) :
Using only dynamic symbol table for symbols of ELF file, no string section
/usr/sap/H4C/HDB96/exe/libhdbpythonbase.so
Using only dynamic symbol table for symbols of ELF file, no string section
/usr/sap/H4C/HDB96/exe/libxsengine.so
Using only dynamic symbol table for symbols of ELF file, no string section
/usr/sap/H4C/HDB96/exe/libstatisticsserver2.so
Using only dynamic symbol table for symbols of ELF file, no string section
/usr/sap/H4C/HDB96/exe/libstatisticscommon.so
Using only dynamic symbol table for symbols of ELF file, no string section
/usr/sap/H4C/HDB96/exe/libhdblivecache.so
Using only dynamic symbol table for symbols of ELF file, no string section
/usr/sap/H4C/HDB96/exe/libhdbpackman.so
Using only dynamic symbol table for symbols of ELF file, no string section
/usr/sap/H4C/HDB96/exe/libhdbpackmansqldriver.so
Using only dynamic symbol table for symbols of ELF file, no string section
/usr/sap/H4C/HDB96/exe/libhdbxsdbcserver.so

[... Buzz-Lightyear continuation mode ... ]
[... TO INFINITY ... AND BEYOND ....]

Strip it!

To the average advanced computer user like myself, this looks like error messages.

Yes, these messages are marked as ‘w’ like wwwwwarning and the whole system still seems to be doing its thing, but personally, I don’t really like it when I see tons of such messages and find precisely zero documents, SAP notes, KBAs or other information sources that can tell me what they mean.
I have the same issue with looking into the CONSOLE app of my MacBook by the way… #who_thinks_of_the_poor_end_user_nerd?

After some HANA source code reading (that really helps every now and then, especially when the trace file message neatly points to C++ file ELF.cpp, lines 770 and 771) I figured out two things:

  • this is really just an info-warning written out when shared libraries are loaded by a HANA process. That usually happens once, at process startup, e.g. when the nameserver starts up.
    and
  • this specific code has been around for several years now. Yet, I have not seen these messages before, which means that something else (aka not this piece of code) must have changed.

The next best thing from here was to look into the trace files of my other HANA 2 installation.
Guess what! No such messages in any of the trace files.

Ok, I thought, so maybe there is something different with the HANA installation that comes with HANA cockpit.

Fighting with ELFes

(sadly, she has nothing to do with this…) (image http://lotr.wikia.com/wiki/Arwen)

Let’s have a look into that anyway.

The warning message listed the shared object (.so) files that it seems to have issues with; one of them was libhdbpythonbase.so.
Checking this in my HANA Cockpit installation with the Linux tool file shows:

file /usr/sap/H4C/HDB96/exe/libhdbpythonbase.so
/usr/sap/H4C/HDB96/exe/libhdbpythonbase.so:
ELF 64-bit LSB shared object, x86-64, version 1 (SYSV)
, dynamically linked
, BuildID[sha1]=cafedde1f0db3527e5a8f3f69fc75019287f4196
, stripped <<< !

Compare this with my other HANA 2 installation:

file /usr/sap/S20/HDB20/exe/libhdbpythonbase.so
/usr/sap/S20/HDB20/exe/libhdbpythonbase.so:
ELF 64-bit LSB shared object, x86-64, version 1 (SYSV)
, dynamically linked
, BuildID[sha1]=1089da0d45a4fc59a631f0877b0f2786a5282937
, not stripped <<< !

Aha! So, the files actually have been “stripped”.

What that means is that human-readable information about the compiled program, the so-called symbols, has been removed from the .so files.
This information is not required to run the program but is extremely useful for debugging and problem analysis.
Without this information, a crash dump would only contain memory addresses for functions, and since dynamic loading is used, these addresses change every time the program is run.
With symbols in the files, on the other hand, a stack trace or a crash dump literally prints out the names of the involved functions and methods.
This is what makes the problem analysis actually possible in most cases.

Huh? I mean why?

“Peachy,” you think, “why on earth would somebody then strip this important info from the files?”
Great question!

The reason is the file size for the .so files.
Symbols can take up quite some space in them. This alone might not be a problem since disk space is cheap, but the symbols also get loaded/mapped into memory when the shared object file is used.

And if there is a single precious resource on HANA systems it is the main memory.

How much can we save with this stripping?
First the unstripped version:

ls -lh /usr/sap/S20/HDB20/exe/libhdbpythonbase.so
-r-xr-xr-x 1 s20adm sapsys 8.0M Jul 25 05:11 /usr/sap/S20/HDB20/exe/libhdbpythonbase.so

Now the stripped version:

ls -lh /usr/sap/H4C/HDB96/exe/libhdbpythonbase.so
-r-xr-xr-x 1 h4cadm sapsys 2.2M Jun 26 19:42 /usr/sap/H4C/HDB96/exe/libhdbpythonbase.so

Boom!

Down from 8MB to 2.2MB!
And that is just one of 418 shared objects
(counted on my HANA 2 system via “ls -l *.so | wc -l“).

In total that amounts to quite a difference between the binary sizes for the normal HANA installation and the HANA Cockpit one:

“Normal HANA 2”
ls -lh /usr/sap/S20/HDB20/exe/ | grep total
total 5.2G
“HANA Cockpit”
ls -lh /usr/sap/H4C/HDB96/exe/ | grep total
total 1.7G

Obviously, it has been a goal for the HANA cockpit team to make the additional overhead of having its own HANA instance as small as possible.
There is actually another case where memory usage of HANA needed to be minimized: the HANA Express edition.

After quickly firing up my HXE virtual machine (gee, I really love my NUC system for making this so easy) I checked whether it also uses symbol stripping and lo and behold, it does:

“HANA Express edition”
ls -lh /usr/sap/HXE/HDB90/exe/libhdbpythonbase.so
-r-xr-xr-x 1 hxeadm sapsys 2.2M Jul 24 19:16 /usr/sap/HXE/HDB90/exe/libhdbpythonbase.so

file /usr/sap/HXE/HDB90/exe/libhdbpythonbase.so
/usr/sap/HXE/HDB90/exe/libhdbpythonbase.so:
ELF 64-bit LSB shared object, x86-64, version 1 (SYSV)
, dynamically linked
, BuildID[sha1]=1089da0d45a4fc59a631f0877b0f2786a5282937
, stripped <<< !

For those interested to learn more about symbols and shared libraries here are some pointers:

https://www.akkadia.org/drepper/dsohowto.pdf
https://unix.stackexchange.com/questions/116327/loading-of-shared-libraries-and-ram-usage
https://stackoverflow.com/questions/7683630/library-symbols-and-user-symbols-in-elf

Finally, to go really down into the deep end of this rabbit hole, can we actually see that HANA uses less memory now?

Yes, we can!

The first option, on OS level:

“HANA Express”
hxeadm@hxehost:/usr/sap/HXE/HDB90> HDB info
USER PID PPID %CPU VSZ RSS COMMAND
[...]
hxeadm 1707 1 0.0 21628 2792 sapstart pf=/usr/sap/HXE/SYS/profile/HXE_HDB90_hxehost
hxeadm 1716 1707 0.0 217104 60060 \_ /usr/sap/HXE/HDB90/hxehost/trace/hdb.sapHXE_HDB90 ...
hxeadm 1739 1716 9.5 6880612 5089404 \_ hdbnameserver
[...]

We take the process ID of the nameserver and use the pmap tool with it.
As the man page explains, pmap displays information about process memory mappings.

hxeadm@hxehost:/usr/sap/HXE/HDB90> pmap -x 1739

1739: hdbnameserver
START            SIZE       RSS      PSS    DIRTY SWAP SWAPPSS PERM MAPPING
000055b587d67000 1932K    1928K    1928K      12K   0K      0K r-xp .../hdbnameserver
000055b587f4b000  248K     248K     248K     248K   0K      0K r--p .../hdbnameserver
000055b587f89000    4K       4K       4K       4K   0K      0K rw-p .../hdbnameserver
[...]
00007f0be3758000  712K     520K     288K       8K   0K      0K r-xp .../libhdbaflsdk.so
00007f0be380a000   16K      16K      16K      16K   0K      0K r--p .../libhdbaflsdk.so
00007f0be380e000    4K       4K       4K       4K   0K      0K rw-p .../libhdbaflsdk.so
[...]
00007f0c54af6000    4K       4K       4K       4K   0K      0K rw-p [anon]
00007ffccc580000  136K      68K      68K      68K   0K      0K rw-p [stack]
00007ffccc5bc000   12K       0K       0K       0K   0K      0K r--p [vvar]
00007ffccc5bf000    8K       8K       0K       0K   0K      0K r-xp [vdso]
ffffffffff600000    4K       0K       0K       0K   0K      0K r-xp [vsyscall]
Total:        6880616K 5092916K 4825683K 4633372K   0K      0K

5870672K writable-private, 992656K readonly-private, 17288K shared, and 4621300K referenced

(full path to objects /hana/shared/HXE/exe/linuxx86_64/HDB_2.00.020.00.1500920972_7f63b0aa11dca2ea54d450aa302319302c2eeac)

The output from pmap is usually very long and wide, so I cut out a lot here and shortened the file path.
The important bit is 17288K (~16 MB) shared in the last line.

Now the same check for my “normal” HANA 2 installation:

s20adm@skullbox:/usr/sap/S20/SYS/exe/hdb> HDB info
USER PID PPID %CPU VSZ RSS COMMAND
[...]
s20adm 20398 1 0.0 21600 228 sapstart pf=/usr/sap/S20/SYS/profile/S20_HDB20_skullbox.lab.cat5
s20adm 20407 20398 0.0 322428 580 \_ /usr/sap/S20/HDB20/skullbox.lab.cat5/trace/hdb.sapS20_HDB20 ...
s20adm 20429 20407 11.8 9131476 2749264 \_ hdbnameserver
[...]

s20adm@skullbox:/usr/sap/S20/SYS/exe/hdb> pmap -x 20429
20429: hdbnameserver
START            SIZE  RSS  PSS  DIRTY SWAP SWAPPSS PERM MAPPING
000000679a5a6000 1932K 240K 240K    8K   4K      4K r-xp .../hdbnameserver
000000679a78a000 248K   24K  24K   24K 224K    224K r--p .../hdbnameserver
000000679a7c8000 4K      4K   4K    4K   0K      0K rw-p .../hdbnameserver
[...]
00007fd36a355000 76K     0K   0K    0K   0K      0K r--s .../libhdbaflsdk.so
00007fd36a368000 32K     0K   0K    0K   0K      0K r--s .../libhdbaflsdk.so
00007fd36a370000 648K    0K   0K    0K   0K      0K r--s .../libhdbaflsdk.so
00007fd36a412000 1368K   0K   0K    0K   0K      0K r--s .../libhdbaflsdk.so
00007fd36a568000 116K    0K   0K    0K   0K      0K r--s .../libhdbaflsdk.so
00007fd36a585000 156K    0K   0K    0K   0K      0K r--s .../libhdbaflsdk.so
00007fd36a5ac000 80K     0K   0K    0K   0K      0K r--s .../libhdbaflsdk.so
[...]
00007fd46f0d7000 4K      4K   4K    4K   0K      0K rw-p [anon]
00007ffdd3c51000 136K   28K  28K   28K  40K     40K rw-p [stack]
00007ffdd3dd6000 12K     0K   0K    0K   0K      0K r--p [vvar]
00007ffdd3dd9000 8K      8K   0K    0K   0K      0K r-xp [vdso]
ffffffffff600000 4K      0K   0K    0K   0K      0K r-xp [vsyscall]
Total:          8925232K 2753308K 2717475K 2491696K 3128688K 3023716K

5720584K writable-private, 994368K readonly-private, 2210280K shared, and 2569084K referenced

(full path to objects /hana/shared/S20/exe/linuxx86_64/HDB_2.00.020.00.1500920972_7f63b0aa11dca2ea54d450aa302319302c2eeac)

Same story here, only this time we have 2210280K (2.1 GB) of shared data.

The HANA way

Another approach to compare the memory usage is of course to query the HANA monitoring views:

select
name,
round (value/1024/1024, 2) as value_MB
from "PUBLIC"."M_MEMORY";
                               HANA 2    HANA Express
NAME                          VALUE_MB       VALUE_MB
SYSTEM_MEMORY_SIZE           32063.550      23546.860
SYSTEM_MEMORY_FREE_SIZE        272.140        246.680
PROCESS_MEMORY_SIZE           8652.050       6717.480
PROCESS_RESIDENT_SIZE         2110.720       4960.260 <<< ~ Linux RSS
PROCESS_CODE_SIZE             2017.950          3.580 <<< ~ Linux shared
PROCESS_STACK_SIZE             146.50          99.690
PROCESS_ALLOCATION_LIMIT     28857.190      21192.170
GLOBAL_ALLOCATION_LIMIT      28857.190      21192.170
[...]

More about HANA memory can be found in SAP Note 1999997 – FAQ: SAP HANA Memory.

If you see warnings about “using only dynamic symbol table” now, you can put them into perspective.

There you go, now you know.

Cheers,

Lars

A new feature! YAY!?

A recent discussion in the SAP internal JAM community for HANA made me aware of a feature added to HANA SQL starting with HANA 1 SPS 12 (don’t ask the revision, just check the latest), respectively HANA 2 SPS 02 (we don’t speak about SPS 00 and SPS 01, do we?): “calculated columns”.

Generated <> Calculated

Like many other functions, this feature seems to have been implemented to

  • tick the box for SQL standard compliance/competition feature equality
    and
  • to provide Core Data Services (CDS) with the SQL implementation for a functionality (all HANA CDS functions are mapped to SQL features in some way, which means that it is not possible to express anything more in CDS than in SQL)

Initially, I thought “calculated columns? We have had that for some time!” and associated it with the “GENERATED ALWAYS AS” (GC) columns, but this is not what “calculated columns” (CC from here on) are. CCs do not store any values; instead, the values get computed at query time.
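
To make the difference tangible, here is a minimal sketch contrasting the two in one table definition (table and column names are made up; the CC syntax matches the one used in the p.s. at the end of this post, the GC syntax is the GENERATED ALWAYS AS form – check the CREATE TABLE documentation for the exact details of your revision):

create column table cc_vs_gc (
    user_part   nvarchar(100),
    domain_part nvarchar(100),
    -- GC: computed on INSERT/UPDATE and stored like any other column
    email_gc    nvarchar(201) generated always as user_part || '@' || domain_part,
    -- CC: nothing gets stored, the expression is evaluated at query time
    email_cc    nvarchar(201) as user_part || '@' || domain_part);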

Look, others have it too!

They are the corresponding feature to Oracle’s “virtual columns” and MS SQL Server’s “computed columns”.
Or, to be more precise, both HANA features, CC and GC combined, map to the respective features of the other DBMSs, since those provide the option to persist the computed values as a command option (makes sense to me). As mentioned above, the SQL standard (2003) contains computed columns as an optional feature, which is probably why this obscure functionality is present in all those DBMS products.

What is it for?

While I understand the usage scenarios for GC, where the result of the computation gets stored – pre-computing something is about performance and reuse later on – I am not too sure about the use case for CCs.
The SAP colleague in the mentioned JAM discussion was quite fond of them, as they allow specifying computed columns without using views. That way, common data transformations that can be built with just the columns of one table can be defined very close to the table itself.
This sounds like a good thing.

Now, I have a different perspective on that.

No surprises are good surprises

Let’s say I run a plain old

SELECT * FROM user_info;

my expectation is that HANA provides me with the currently valid contents of the visible table columns.
I definitely do not expect to find any computation in the explain plan. But with expression-based columns this is exactly what I get:

Of course, this computation adds runtime and memory usage to the statement processing and will change the query plan in more complicated scenarios.
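
If you want to check this on your own system, a quick look at the plan is enough; here is a minimal sketch, assuming the user_info table from the p.s. at the end of this post (the statement name is arbitrary):

explain plan set statement_name = 'cc_demo' for
select * from user_info;

select operator_name, operator_details
from explain_plan_table
where statement_name = 'cc_demo';

The calculated column should show up as an expression in the plan details rather than as a plain column access.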

The same surprise will show up when I try to index the column:

create index i_address on user_info (email_address);
Could not execute 'create index i_address on user_info (email_address)'
SAP DBTech JDBC: [7]: feature not supported: 
cannot support create index on calculated field: EMAIL_ADDRESS: line 1 col 38 (at pos 37)

or when I try to partition based on this column

alter table user_info partition by HASH (email_address) PARTITIONS 4;
Could not execute 'alter table user_info partition by HASH (email_address) PARTITIONS 4' in 39 ms 39 µs . 
SAP DBTech JDBC: [2048]: column store error: 
fail to alter partition: [2051] partition specification not valid;
The attribute EMAIL_ADDRESS is not allowed in a partition specification.

Both indexing and partitioning are the main use cases for the CC feature in Oracle and MS SQL Server.
HANA does not support these use cases, so what’s left really is not having to create a view as the main benefit.

Beginning of problems

Obviously one could argue: well, the expectation that HANA should not be computing expressions when I query a base table is simply wrong.
As of SQL 2003 (14 years ago!), it has not been correct anymore.
So, go change your expectation!

Fair call, but most users don’t have that expectation.
The notion of the “dumb” base table is so strong that things like CCs and triggers are easily overlooked.
And that is where the problem starts: when the expectation about the behavior is wrong, this usually means mistakes, wrong decisions and wasted time and effort.

The poor person who is confronted with a table using CCs later on (it might even be the same developer who simply forgot that this table contains CCs) cannot tell from “looking” at the table which columns are real “base” columns and which ones are computed.
It does not show in the table definition UI and it does not show in SELECT TOP 5 * FROM <table>.
Only when this person digs deeper and checks the CDS source code or the HANA system views (TABLE_COLUMNS) and really looks for this setting can this piece of information be seen.

select
      column_name, data_type_name
    , index_type
    , generated_always_as, generation_type
from 
     "TABLE_COLUMNS" 
where 
     table_name='USER_INFO' 
 and schema_name=current_schema
order by position asc ;

I do not like this kind of surprise and would rather have a base table plus a consumption view that contains the necessary expressions and transformations.
Not only can I create more complex expressions in such a consumption view, but I also have a better chance of understanding where data is stored and where data is computed.

Pulling the rug?

The only other use case for CCs that came to my mind is changing data models on DB level while the application(s) remain unchanged.
Such changes would only work for applications that read from the table; as soon as somebody tries to change the column, this leads to an error.
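
Sticking with the user_info example from the p.s. below, that scenario would look roughly like this (a sketch only, the row ID is made up):

select id, email_address from user_info where id = 42;   -- keeps working, value computed on the fly

update user_info set email_address = 'somebody@example.com' where id = 42;
-- -> fails, since a calculated column cannot be written to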

So, you see, I really struggle with finding a good reason for using this feature.
What about you?
Have you come across a scenario where this was/would have been the absolute life saver? When and where would you use it and why?

Please share your experience with CCs or virtual/computed columns in the comments section.

There you go, now you know!

Cheers,
Lars

 

p.s. I created my user_info table based on a table with made up email addresses:

create column table user_info as (
 SELECT id,
        substr_before (emailaddress, '@') user_part,
        substr_after (emailaddress, '@') domain_part
 FROM email_adr);

Adding the computed column is straightforward:

alter table user_info 
      add (email_address NVARCHAR(320) as 
           user_part ||'@'|| domain_part);

Note: only HANA expressions are allowed in the computation part; no function calls or complex SQL statements.

What HANA needs now is some…

… logging!

Yes, you just got click-baited into reading this advertisement for a product enhancement idea that I posted: Standard logging facility for SQLScript

The idea here is simple: when you write application logic in SQLScript then you very likely want to have the option to log what your program does.

This is not the logging I am talking about

http://www.imdb.com/character/ch0009715/mediaviewer/rm2931612928
Photo by ABC Photo Archives – © 2011 American Broadcasting Companies, Inc. – Image courtesy gettyimages.com

What I am talking about is a language feature in SQLScript that would allow writing out arbitrary messages to some logging target during the execution of a SQLScript read/write procedure. A target could really be anything that can receive the logging information, e.g. a database table, the OS syslog, a logging server, a CSV file, an OS pipe, a Kafka topic…

HANA SQLScript should provide the API for such a logging facility so that developers can focus on their business logic instead.

Especially with more and more application logic spread out over different micro-services, JS apps and SQLScript, an open and flexible logging feature could help to make sense of what an application does from one end to the other.

Another big technical benefit of a built-in logging facility would be that it could avoid the use of dynamic SQL and all the issues around it (security, performance, …).
Instead of pasting together an INSERT statement, a simple function call could not only invoke a potentially buffered log write but also automatically include common technical information like the current code location, timestamp, session context, user name, etc.
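
To illustrate what I mean: today, the do-it-yourself version of this usually looks something like the following sketch (table, procedure and parameter names are made up for illustration), and every project re-invents a slightly different flavour of it:

create column table app_log (
    log_ts    timestamp,
    log_user  nvarchar(256),
    location  nvarchar(256),
    message   nvarchar(5000));

create procedure write_log (in location nvarchar(256),
                            in message  nvarchar(5000))
language sqlscript as
begin
    -- plain insert into the log table; buffering, log targets and
    -- transaction handling are exactly the things a built-in facility should solve
    insert into app_log
        values (current_timestamp, current_user, :location, :message);
end;

-- and inside the business logic procedures:
-- call write_log ('load_customers: step 3', 'upserted ' || :row_cnt || ' rows');

A built-in facility could replace all of this with a single call, take care of buffering and log targets, and add the technical context automatically.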

If you agree that such a feature is just what is missing in SAP HANA SQLScript, then go and vote for the idea Standard logging facility for SQLScript.

Note that you have to register for the Idea Place first (https://ideas.sap.com/SAPHANAPlatform) and that you have to be logged on to vote.

SAP Community? Why?

If you’re reading this, you are likely aware of the ongoing debates, complaints, failures and the general situation around the SAP Community platform and the state of this very community in general, which is why I am not going to point out the many technical and organisational deficiencies. Others have done that well and often.
Very unfortunately, this sometimes tedious feedback did not lead to the expected changes but instead to the perception that “SAP is not hearing it” (why-oh-why.html, what-weve-got-here…).

Brian Ellefritz, the new Head of SAP Community, promised betterment and received sceptical encouragement and “good lucks”.
Many, including myself, believe that he means what he writes and that he will try to improve the situation.

Do I trust him with this, though? No, I do not and cannot.
And that is not because of Brian or what he has done, not at all. Frankly, I don’t personally know him so the trust that I could feel would be for his role and this role does not mean a thing to me.
I believe it does not mean much to the vast majority of community members either.

In my case, the distrust reaches further and covers the whole area of people, processes and decision making behind the organisation of SAP Community (let me refer to it as SAP-Community-ORG for this piece).
I simply don’t understand, on a very basic level, what the motivation of SAP-Community-ORG for providing and operating a platform for the community is.
What drives the actions and plans to win back active and engaged members?
Why does SAP-Community-ORG want to host the community platform?

I don’t know or feel their goals at all.

Don’t rush for answers

Now, before this question gets addressed with answers, let me point out that at this point I am not looking for quick and easy answers; there simply does not seem to be any explicit statement that explains to me and everyone else who might be interested in this community what SAP-Community-ORG stands for.
“About SAP Community” does not provide this but is rather the digital version of a product leaflet (“look, feature A, B, C…“, “this is what we have, this is how we do it“).

I believe that the lack of this core “Why Community” is pretty much at the heart of all current problems.
The reason I believe this is how I pick the communities I am active in and which ones I choose not to engage with.
I very much value knowledge sharing and exchange.
I have profited a lot from other people’s knowledge sharing in my life so far and I want to “give back” and contribute myself.
I have learned and improved on public writing, learned about the different kinds of people you find in forums, and learned to try and see what someone wants to achieve first instead of handing out a copy-from-the-manual answer to a question.
I like the recognition I receive and feel very excited when I find out about things that I did not know and that I find interesting. These questions take me away from my normal business-as-usual problems and give me the chance to learn a lot more about the technology that I spend most of my waking hours with and on.
I also like to see how other members get similar benefits and sometimes develop from “mostly receiving/reading” to “active contributors”. That is probably one of the most motivating aspects for me: helping others to have positive experiences with communication in a forum.

When I find that these are the things that I can get from a community, that gets me interested.

Much noise, little signal

For several years now I have been active in different forums (SCN, stackoverflow, the SAP-internal JAM site around HANA) and became dormant in others when I felt that those forums did not follow the same goals and did not share the same values as I do.
In the case of SCN, I have been considering for the last two years what I get out of it and how many of my own values I still find matched.
Coming up with an answer to this question turned out to be difficult. There was so much noise due to the technical platform change, the bugs and the aftermath of all of this, that the usual signals for those values were and are very hard to see.
The few “signals” I do find (a big shout out to the active moderators and the few remaining non-corporate contributors in the HANA-tag space!) are washed over by a wave of noise.
That frustrates and disappoints me and is the major reason for me to reduce my efforts on SCN and rather spend time elsewhere.

I cannot know whether this is the same or similar for most members, but I believe many will have values and reasons to actually be active in the community beyond “what is the answer to my question?“. The latter can be solved by googling long enough, but no amount of search engine usage can provide any of the outcomes I described before.

Lost values

It is pretty clear that the technical failures annoyed and keep on annoying the users of the platform. But I don’t think that this is the reason for anyone who was interested in active participation to turn around and leave for good.
There are many comments where the author mentioned having returned after a hiatus of 1, 3 or 6 months to see if the “teething pains” were gone.
What I think made them turn away again was the impression that the feedback had not led to enough improvement and that they could not find their own values in this community anymore. Maybe those values are just hidden behind too much noise, but the effect is the same as if they were not there in the first place.

It looks, tastes and feels like these values are gone.

This lack of values, the “Why” of the community, is what prevents people from engaging and from feeling that this is a place where their own beliefs and values are shared.

Come with your vision and your values

What I see so far in terms of “community recovery” is very much focussed on organisational matters and on reducing the technical noise.

But what is needed, I believe, is that SAP-Community-ORG finds its own values and makes those unmistakably clear.
These values and the motivation behind SAP-Community-ORG need to guide the decisions made for the community and its platform.

Don’t hand out merchandise and gimmicks (everyone will take them, sure) and expect that this can buy engaged members.
Come with the vision and the values for what this community should be in your eyes instead.
That would be something one can relate to – or not. But without it, there is just nothing behind a lot of noise.

HANA 2 and two riddle errors

HANA 2 has been out for a while now – even SP01 of it is already available – so I thought it is about time to get more familiar with it.
Using my trusty “skullbox” (HANA in a pocket… ) I installed HANA 2 and realised that I will now also have to have a HANA Cockpit 2.0 installation to be able to fully use all of the glorious new features.

Downloading the HANA Cockpit 2.0 package is a breeze as long as you do it via a proper broadband connection.
The installation archive for the database administration tool takes up about 2.5 GB (sic!).

> ll -h
-rw-r--r-- 1 lars sdba 2.5G May 12 19:58 SAPHANACOCKPIT02P_1-70002299.SAR

Compare this with, say, the current OpenSuse Leap Linux distribution (4.7 GB) and one cannot help having certain expectations around usability, the range of features and general awesomeness.
BTW: OpenSuse can be downloaded via the torrent network – given the size of the installation binaries for typical SAP applications, that would be nice to have for the SAP Service Marketplace downloads, too.
Instead, in 2017, we still have to use the “Download Manager” tool, which sequentially downloads files from one server at a time.

Once the archive is on my skullbox, the interesting part begins.
Using the graphical installer “hdblcmgui” I kicked off the installation.
Being a “consumer” type of computer user as long as the situation allows, I simply accepted all the default values concerning file paths, instance numbers and names.

Click next, click next

And off the installation went, pushing a full-blown HANA 2 instance onto my machine.
“WHAT?” I hear you say… to which I have to respond: “That’s right, HANA Cockpit 2.0 comes with its very own HANA 2 installation.”
That approach probably makes sense if one is managing many, many instances across an organisation in a centralised fashion (yep, that’s the old business user – IT – DBA/admin way of thinking), but in times where “DevOps” is THE thing and development teams look after their “own” DBs themselves, this looks like bloat to me.
If anything, this sure will tick up the counter for productive HANA instances…

Surprisingly, the installation of HANA 2 went rather quickly. I suspect that having an SSD in your system really pays off for this sort of workload.
When I was just on the home stretch to the successful-installation celebration, a dialogue box announced:
“Installation failed – Error occurred while executing SAP HANA Cockpit Components.”

Fortunately, the hdblcmgui tool makes it really easy to look at the log files in order to find out what caused the error.
Kudos to the development colleagues for that!

This is what I found in the error log file:
[...]
18:02:19.875 - INFO: Output line 89: [26] --- Installation step: Set the HANA broker default DB ---
18:02:19.875 - INFO: Output line 90: [26] Set the default mapping to : '594863bc-d5f5-0b77-e100-00007f000002'(SYSTEMDB)
18:02:19.933 - INFO: Output line 91: [26] Map org: space: to DB: 594863bc-d5f5-0b77-e100-00007f000002
18:02:19.985 - INFO: Output line 92: [27] --- Installation step: Audit-Log Service (DB) ---
18:02:19.994 - INFO: Output line 93: [27] Creating application auditlog-db
18:02:19.994 - INFO: Output line 94: [27] Creating service auditlog-db-container
18:02:23.161 - INFO: Output line 95: [27] Uploading files for application auditlog-db
18:02:23.417 - INFO: Output line 96: [27] Staging application auditlog-db
18:02:24.454 - INFO: Output line 97: [27] 6/20/17 6:02:23.601 PM [STG/1] ERR /bin/sh: mc: line 1: syntax error: unexpected end of file
18:02:24.454 - INFO: Output line 98: [27] 6/20/17 6:02:23.601 PM [STG/1] ERR /bin/sh: error importing function definition for `mc'
18:02:24.454 - INFO: Output line 99: [27] 6/20/17 6:02:23.604 PM [STG/1] ERR bash: mc: line 1: syntax error: unexpected end of file
18:02:24.454 - INFO: Output line 100: [27] 6/20/17 6:02:23.604 PM [STG/1] ERR bash: error importing function definition for `mc'
18:02:24.454 - INFO: Output line 101: [27] 6/20/17 6:02:24.160 PM [STG/1] ERR bash: mc: line 1: syntax error: unexpected end of file
18:02:24.455 - INFO: Output line 102: [27] 6/20/17 6:02:24.161 PM [STG/1] ERR bash: error importing function definition for `mc'
[...]

First off: why is the error message (indicated by ERR) marked as a non-error INFO output line?
That is unnecessarily confusing.

Then looking at the error messages really didn’t help all that much:

/bin/sh: mc: line 1: syntax error: unexpected end of file
/bin/sh: error importing function definition for `mc'

bash: mc: line 1: syntax error: unexpected end of file
bash: error importing function definition for `mc'

“What is function ‘mc’?” you might ask, and so did I.
Turns out, many moons ago, when I first installed my NUC system, I opted for some convenience in the dire straits of the Linux terminal console and installed Midnight Commander (a fantastic tool, everyone should have it!).
Midnight Commander’s program file happens to be called ‘mc‘… so this seemed to be a hot lead.

Unfortunately, I’m not a super-duper Linux expert, which is why I wasn’t aware of the way Linux shells technically implement their startup.
Why would I need to know this? Well, after some lengthy research, I found that the error message was caused by a shell initialisation script stored in the system folder

/etc/profile.d

Without going into too many details here (look here instead), there were two symbolic links (mc.sh and mc.csh) in this folder, pointing to files from the Midnight Commander installation.

After deleting those links (you can always recreate them via `ln /usr/share/mc/mc.sh /etc/profile.d/mc.sh` and `ln /usr/share/mc/mc.csh /etc/profile.d/mc.csh`), I thought the problem was fixed and I could just continue the installation.
For some reason, though, this does not seem to be possible. Instead, I uninstalled the whole half-installed HANA Cockpit setup and started from scratch.

This time around, no script failed due to the “syntax error” and the “unexpected end of file”.

Awesome, until…

Which would have been awesome, had the installation not run into a different error right after that:

[...]
18:22:34.688 - INFO: Output line 156: [31] [DEPLOY_SERVICE] Getting multi-target apps in org "HANACockpit" / space "SAP" as COCKPIT_ADMIN...
18:22:34.689 - INFO: Output line 157: [31] [DEPLOY_SERVICE] No multi-target apps found
18:22:34.689 - INFO: Output line 158: [31] [DEPLOY_SERVICE] 
18:22:34.697 - INFO: Output line 159: [31] Checking for existing service lcm-view-grantor
18:22:34.713 - INFO: Output line 160: [31] Creating service lcm-view-grantor
18:22:35.947 - INFO: Output line 161: [31] Checking for existing service lm-service-credentials
18:22:35.960 - INFO: Output line 162: [31] Creating service lm-service-credentials
18:22:36.024 - INFO: Output line 163: [31] Deploying application Product Installer
18:22:36.281 - INFO: Output line 164: [31] [DEPLOY_SERVICE] 
18:22:36.707 - INFO: Output line 165: [31] [DEPLOY_SERVICE] Uploading 1 files:
18:22:36.707 - INFO: Output line 166: [31] [DEPLOY_SERVICE] /hana/shared/H4C/xs/installdata/apps/product-installer/product-installer.mtar
18:22:38.243 - INFO: Output line 167: ERR Failed to upload files
18:22:38.243 - INFO: Output line 168: ERR Error occured during communication with HTTP server
18:22:38.244 - INFO: Output line 169: ERR Broken pipe (Write failed) (local port 63255 to address 127.0.0.1 (localhost), remote host unknown)
18:22:38.255 - INFO: Output line 170: [31] Removing service lcm-view-grantor
18:22:38.258 - INFO: Output line 171: [31] Service binding lcm-view-grantor not found
18:22:38.312 - INFO: Output line 172: [31] ERROR: 
18:22:38.312 - INFO: Output line 173: [31] com.sap.xs2rt.installation.util.InstallationException: 
Installing the Product Installer application failed: 
com.sap.cloud.lm.sl.slp.client.communication.CommunicationException: 
Error occured during communication with HTTP server
18:22:38.312 - INFO: Output line 174: [31] at 
[...]

Once again, error messages are labelled as non-error INFO lines, but good to know each timestamp to the millisecond…

This time around, the DEPLOY_SERVICE complains about a “Broken pipe” when writing to a port on localhost, claiming the “remote host is unknown“.
“How odd!” I thought, because “localhost” is not “remote” at all and not “unknown” either.

A quick ping localhost proved that “localhost” was indeed known and reachable, so what could be the problem?

The key hint – which I was not able to pick up myself, but instead found as a side comment in one of the few open customer incidents – is that this is not about communication via UDP or TCP, but via HTTP.
HTTP uses cookies for a variety of purposes.
Once again, I am not an expert on HTTP standards, but intensive googling led me to these pages:

Count your dots!

The insight from the above web pages was that proper cookie domains have to contain at least 2 (two!) dots/periods.
That’s right, a fully-qualified hostname like `skullbox.cat5` is not suitable for proper cookies.

> hostname -f
skullbox.cat5

Alright, so I changed the domain name of my system (which has been running HANA and other software in different versions for over a year now and never ever had an issue with the domain name, but hey…) to `skullbox.lab.cat5`.
See?


That’s two dots in that name now:

> hostname -f
skullbox.lab.cat5

To be frank, this error is the one that really frustrated me.

Nowhere in the current or past HANA documentation does it say:
“Make sure to have two dots in your fully-qualified-hostname or we’ll throw random cryptic error messages at you.”

Even with the knowledge of what went wrong, it was a task to find any SAP documentation in relation to this.
The closest I found is this:

“In accordance with the Netscape cookie specification, cookies can be set for one domain only, and a domain must contain two or three dots (.) due to security restrictions.
Each of the seven top level domains (.COM,.EDU,.NET,.ORG,.GOV,.MIL,.INT) must contain at least one further domain component (usually the name of the company or organization), amounting to two dots.
Each domain with a different ending (this includes the top level domains for countries, such as UK, DE, FR, and so on) must consist of two further domain components, that is, these domains must contain at least three dots. For more information see the cookie specification.

Tip
Examples of valid domains:

<host>.sap.com Top level domain -> two domain components
<host>.portal.sap.de No top level domain -> three domain components”

Rounding off

After another round of rebooting and uninstalling the broken HANA Cockpit setup, hdblcmgui finally went through all the steps, finishing with “green lights”.

So, here we are. 157 lines into the installation of a database monitoring and administration tool that apparently is required to use HANA 2.

Given the information available (before I wrote this blog post), it took me several weeks to get it up and running, and nearly two days straight once I finally decided to get to the bottom of the errors and end up with a proper HANA 2 setup.
I cannot say that I enjoyed the experience or felt that it was time well spent.
Hopefully, having these two nasty problems documented and searchable saves others some time.
There you go, now you know!

Eco-friendly slide recycling with HANA

Hey there!

I recently presented a short session on my experiences with developing and debugging non-classic-SAP HANA solutions at the Leading Insights conference in Melbourne (27./28.3.2017).

And since the slides are prepared in a way that they can be used without my talking as well, I thought I might just post them. So, there they are on slideshare:

If you feel this is a shameless content plug, you’re probably right, but I got the feeling that this might still be interesting to some readers of the “SAP HANA” tag.

Content-wise, this is what you can expect (just in case you don’t want to skip to page 2, which shows the table of contents):

  • an exciting example for a pure HANA development without any NetWeaver/ERP background
  • some practical tips for finding performance issues with SQL statements
  • and tidbits of performance gone bad and how to fix it.

There you go; now you know.

Cheers,
Lars

Make query hints great again…

In case you ever got frustrated by how 80’s-like the SAP HANA hints infrastructure looks, go and support my improvement idea on SAP Idea Place: https://ideas.sap.com/D40094

Go and upvote! Now! 🙂

[Update 23.1.2017]: After some of the folks I really wanted to review my idea reported that they don’t have access to it, I decided to put a copy of the idea text in here as well.
So here it is:


Currently (HANA 1 SPS12/HANA 2 SP0) SAP HANA supports the usage of query hints via information view parameters or – most commonly used – the WITH HINT query parameter.

Most of the time, hints are used to avoid a specific bug in SAP HANA and are only applicable to a specific version/revision or application state.

For all practical matters, the hints don’t serve a functional purpose for any application but are necessary band-aids to enable the application to run successfully on SAP HANA.

As many bugs do get fixed with later SPSs/revisions, the hints could and should be reviewed in order to allow the application to leverage the formerly disabled feature again.

Right now, this means that a developer/system operator has to manually document and keep track of all hints used in the system: the reason for the usage, the HANA revision the hint applies to, maybe an incident number where the hint usage had been discussed and probably a target HANA version from which the hint shouldn’t be required anymore.

This task obviously does not deliver a lot of value for any developer, so in reality a hint, once introduced, will likely stay in place forever, as nobody remembers what it was used for and whether it is safe to remove it now.

The proposal is as follows:

Instead of a simple WITH HINT syntax, allow for a more comprehensive hint management that makes it possible to identify a specific statement (e.g. via a custom statement identifier in a statement annotation or comment) and provide a HANA function to specify a set of hints for any statement ID, including the already mentioned metadata.

Upon SQL query compilation SAP HANA would automatically match the statement ID against the maintained statement hints (this could be a single table structure) and include the hint into the statement execution.

The fact that this happened (and against which statement id hint setting this match had been performed) should be visible in the usual query execution analysis tools like EXPLAIN PLAN or PLANVIZ.
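
To make the “single table structure” mentioned above a bit more concrete, it could look roughly like this – a purely hypothetical sketch, nothing of this exists in HANA today and all names are made up:

create column table statement_hint_map (
    statement_id    nvarchar(128) primary key, -- matches the annotation/comment in the SQL text
    hint_text       nvarchar(512),             -- e.g. 'NO_USE_OLAP_PLAN'
    reason          nvarchar(2000),            -- why the hint was introduced
    incident_number nvarchar(64),              -- related support incident, if any
    min_revision    nvarchar(32),              -- revisions for which the hint applies
    max_revision    nvarchar(32),
    added_by        nvarchar(256),
    added_at        timestamp);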

The proposed solution has the following benefits:

  • Application code does not need to be modified to add hints, as long as the statements can be uniquely identified.
  • The hint application can be controlled centrally, without the involvement of the original developer – essentially the hint can be applied by operations/support staff.
  • The query compiler in HANA can automatically check for the valid HANA versions and ignore the hint for non-matching versions.
  • The hints for a given installation can be transported like a configuration data set across multiple servers.
  • The central list of statements with hint settings allows an efficient follow-up of HANA issues and – given an option to ignore the hint list during plan compilation – also allows for very efficient testing of whether a hint is still required or not.

I haven’t found an obvious way to contact support for the SAP Ideas Place, so my best guess is that in order to view any idea a registration is required.

Lars

Merging right. Some tests with data loading in HANA from Down Under.

Based on SAP HANA SPS 12 (HCP trial)

When Jody Hesch recently sent me an email with a question about HANA, he received what everyone receives when sending email questions about HANA. The reply is a version of

“Usually I don’t do email Q&A as this simply doesn’t help with knowledge sharing.

Instead, I advise everyone to post the question in one of the HANA related forums
(SAP Community https://answers.sap.com/questions/metadata/23925/sap-hana.html ,
JAM https://jam4.sapjam.com/groups/about_page/6UHzR2Fxra4quFAbACtxFD
or even stackoverflow http://stackoverflow.com/questions/tagged/hana ) so that the question and its answers are search- and findable.

That way everyone can benefit from this and you even might get faster and/or better answers than from just writing to me.”

I have not given up on my personal fight against GEIGOKAI and this is what it looks like in daily real life.

RUDE!

You might be inclined to call this reply rude, but it turns out that most people accept it very easily and manage to copy & paste their question into one of the forums – and changing culture is always a bit painful … drip drip drip

And just as many others, Jody managed to post his question for all to see, read and discuss here: Manually managing Delta Merge

While I was doing the due diligence of checking my own ideas against an HCP trial HANA instance (SPS 12.04), the whole explanation and the example got larger than what would be nice for a simple answer, which is why you are now reading this in blog post format.

On loading data and merging

Ok, let’s get to it, then.
Jody’s question was “how can manually handling delta merge make a data load faster?”
To answer this I set up a small test case:

drop table merge_test;
CREATE COLUMN TABLE  "MERGE_TEST" ("ID" INTEGER  ,
     "AAA" NVARCHAR(20),
     "BBB" INTEGER ,
     PRIMARY KEY ("ID"));

do
begin
declare ts_start timestamp = current_timestamp;
declare ts_end timestamp;
declare loops integer = 100;
declare j integer;
declare i integer;
declare with_commit integer = 1;

    truncate table merge_test;
    
    for j in 0 .. :loops do
        for i in 0 .. :loops*50 do
            
            upsert merge_test 
                values (:i, :i || ' - ' || :j, :i*:j)
                with primary key;

        end for;
        
        if with_commit = 1 then
            commit;
        end if;
        
    end for;

    ts_end = current_timestamp;    
 
    -- now show some infos
    select seconds_between(:ts_start, :ts_end) as duration from dummy;
    
    select count(*) as num_rows, min(id), max(id) from merge_test;
    
    select column_name, memory_size_in_main, memory_size_in_delta, count, distinct_count
    from m_cs_all_columns 
    where table_name ='MERGE_TEST'
    and schema_name =current_user;
    
    select merge_count, read_count, write_count, last_merge_time
    from m_cs_tables
    where table_name ='MERGE_TEST'
    and schema_name =current_user;
    
    select start_time, motivation, execution_time, memory_merge, merged_delta_records, success
    from m_delta_merge_statistics
    where table_name='MERGE_TEST'
    and start_time >= :ts_start
    and schema_name = current_user;
    
end;
 

What we have here is a simple table and a data loading simulation. The data gets inserted or updated via the UPSERT command and, if the parameter with_commit = 1, a commit is done after each iteration of the outer loop.
After both loops have finished, the code prints the runtime in seconds and selects some statistics from system views.

Next, I ran three experiments.

The first two used the SAP HANA AUTOMERGE feature, where HANA decides when to perform a delta merge.
The third one followed the approach SAP BW uses for data loading: the AUTOMERGE is disabled and the target tables get merged after all data has been loaded.

No commit, AUTOMERGE ON

 

/*
DURATION
386     

NUM_ROWS    MIN(ID) MAX(ID)
5001        0       5000   

COLUMN_NAME MEMORY_SIZE_IN_MAIN MEMORY_SIZE_IN_DELTA    COUNT   DISTINCT_COUNT
ID          1064                3733044                 505101  5001          
AAA         0                   17607386                505101  -1            
BBB         0                   5571068                 505101  -1            
$trex_udiv$ 232                 8889016                 0       0             
$rowid$     768                 15676428                505101  505101        

MERGE_COUNT READ_COUNT  WRITE_COUNT LAST_MERGE_TIME        
0           7           505101      2016-12-20 03:15:41.986

START_TIME  MOTIVATION  EXECUTION_TIME  MEMORY_MERGE    MERGED_DELTA_RECORDS    SUCCESS
*/

Easy to see: the whole thing ran for 386 seconds and managed to create a total of 505.101 record versions in the delta store.
Note that during this load NO delta merge happened. This is because we never committed the transaction at all and the delta merge won’t “relocate” open update transactions to a new delta store.

Commit, AUTOMERGE ON

/*
DURATION
349     

NUM_ROWS    MIN(ID) MAX(ID)
5001        0       5000   

COLUMN_NAME MEMORY_SIZE_IN_MAIN MEMORY_SIZE_IN_DELTA    COUNT   DISTINCT_COUNT
ID          723716              9132                    5001    5001          
AAA         741076              10052                   5001    5001          
BBB         711828              9132                    5001    5001          
$trex_udiv$ 273432              1912                    0       0             
$rowid$     1504                9132                    5001    5001          

MERGE_COUNT READ_COUNT  WRITE_COUNT LAST_MERGE_TIME       
5           30          505111      2016-12-20 03:40:22.85

START_TIME              MOTIVATION  EXECUTION_TIME  MEMORY_MERGE    MERGED_DELTA_RECORDS    SUCCESS
2016-12-20 03:36:22.166 AUTO        54              FALSE           80016                   TRUE   
2016-12-20 03:37:22.24  AUTO        88              FALSE           90018                   TRUE   
2016-12-20 03:38:22.349 AUTO        119             FALSE           85017                   TRUE   
2016-12-20 03:39:22.49  AUTO        157             FALSE           85017                   TRUE   
2016-12-20 03:40:22.67  AUTO        186             FALSE           85017                   TRUE   
*/   

Here we see that with commits in between, the whole thing only took 349 seconds.
As we’ve committed the transaction a couple of times in between, the automerge had a chance to kick in – and did so roughly every minute (5 times).

Now, why is this variant a bit faster overall? The effort to find the currently active record versions grows with the number of versions in the delta store.
With the automerge in between, this number stayed consistently below 100.000 versions, while the first run had to face an increasing number of versions to check, somewhere between 100.000 and 505.101.

Commit, AUTOMERGE OFF

Finally the version of data loading that is similar to SAP BW’s approach.
I disabled the automerge via

alter table merge_test disable automerge;

and ran the test again:

/*
DURATION
325     

NUM_ROWS    MIN(ID) MAX(ID)
5001        0       5000   

COLUMN_NAME MEMORY_SIZE_IN_MAIN MEMORY_SIZE_IN_DELTA    COUNT   DISTINCT_COUNT
ID          1064                3733044                 505101  5001          
AAA         0                   17607386                505101  -1            
BBB         0                   5571068                 505101  -1            
$trex_udiv$ 232                 8810536                 0       0             
$rowid$     768                 15676428                505101  505101        

MERGE_COUNT READ_COUNT  WRITE_COUNT LAST_MERGE_TIME        
0           15          505101      2016-12-20 03:49:36.914

START_TIME  MOTIVATION  EXECUTION_TIME  MEMORY_MERGE    MERGED_DELTA_RECORDS    SUCCESS

*/

merge delta of merge_test;

/*
START_TIME              MOTIVATION  EXECUTION_TIME  MEMORY_MERGE    MERGED_DELTA_RECORDS    SUCCESS
2016-12-20 03:56:09.435 HARD        46              FALSE           5001                    TRUE   
*/

With 325 seconds this is the fastest run.
Looking at the time the manual delta merge took, we find that it’s still faster than the last delta merge of the automerge example.

This is – again – due to the fact that there are now a lot fewer concurrent record versions in the table.
Note that MERGED_DELTA_RECORDS shows the number of valid (committed) records that have been moved from delta to main storage.
If I update the same record a hundred times before I commit, then only one record gets merged into the main store.

Now, why do we see a better performance in this case?

To illustrate the resource usage during our loads, see the following diagrams:

In our first case, we see that the memory usage for the delta store as well as the CPU usage for the UPSERT gets larger over time.
Sometime after the last commit is done, an automerge gets triggered, leading to additional CPU and a lot more memory usage.
The result is a relatively well-compressed table (last little blue square).

Note that the number of uncommitted record versions (little orange squares) increases during the whole time.


With the second approach (commits in between and automerge on), we see that CPU and memory usage during the load stay moderate. Only when the merge is performed do memory and CPU usage increase, and the whole process has to wait for the merge to switch over to the delta2 store in order to continue.

Noteworthy here is that the resulting table can easily be larger than the tables produced by the other methods. This is because the column compression algorithms are determined during the first delta merge and won’t change that easily.
This can lead to a situation where the compression algorithms are determined on a subset of data that is not representative of the total (think of loading data with timestamps or dates in sequential order – the first merge may only see one specific date).

With the last option, we see a pattern very similar to the first case.
The only difference here is that the final merge was triggered manually.

So with AUTOMERGE disabled and regular commits we get the best of everything:

  • no waits due to delta merges
  • no increased runtimes due to a high number of outdated uncommitted record versions
  • well-compressed table (since the compression optimisation can work on the whole data set).

This works well as long as the resulting delta merge can be done within the available memory.
And even if other memory structures needed to be unloaded to allow the delta merge to happen, this would have to happen only once and not several times during the loading process.

Smarter merge

I mentioned that this third approach is similar to what SAP BW does but this is only half the truth.
Instead of manually sending a

merge delta of merge_test;

which is the hard instruction to perform a delta merge right now, SAP BW uses a “smart merge”.
The idea here is that, instead of forcing a delta merge, the application lets HANA evaluate whether a merge would be worthwhile, given the current memory statistics and the merge decision functions.

With this flexible approach, the application does not need to care about the specifics of how to decide when a merge should take place, but can still trigger HANA to take care of it.
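
If I remember correctly, such a smart merge request is issued with an additional parameter – roughly like this (a sketch; check the administration guide for the exact syntax of your revision):

-- the application merely "suggests" a merge; HANA decides whether to actually perform it
merge delta of merge_test with parameters ('SMART_MERGE' = 'ON');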

TL;DR

Manually controlling the delta merge can improve the performance of the load process as well as reduce the overall system load during data loading, since multiple merges are avoided. Delta merges are still extremely resource-intensive operations that can have a massive impact on the whole HANA system. Therefore it’s a good idea to have them done as seldom as possible.
This, in combination with a controlled approach to COMMITTING the loaded data, can, as shown above, have quite an effect on the runtime of the data loading.

There you go, now you know.
Lars

I’m done (apparently) – see y’all

SCN is coming to an end as we all know and apparently I managed to get through early.

All missions completed and highest level reached.

Now, that leaves a feeling of achievement, even early in the week 😀


Having this big achievement under my belt, I am a bit disappointed.

No final fight, no end boss, no grand finale! 🙁

Guys, if that’s “gamification” for you, then please, here are some examples of how the cool stuff looked:

Or, for the less arcade-inclined players:

Even TGS had it a bit more glamorous… (although not a lot more)

Anyhow, maybe the SCN team can still come up with an epic, memorable end-of-service webpage.

I’d like that!

But I assume, it will just be a redirect to the new community or to here.

Cheerio!