Category Archives: SAP HANA

What’s your number?

Have you been using the ISNUMERIC() function of MySQL, MS SQL Server or PostgreSQL and wondered what to use in SAP HANA instead?
If so, this is the blog for you.

Just recently I’ve seen a customer using my old workaround (first published here) and then a few days later the related stackoverflow question got updated.

This topic is not dead

The new answer to the SO discussion took a popular approach to the IS_NUMERIC problem:

If the string can successfully be converted into a numeric type, then the value is numeric (IS_NUMERIC returns 1 for TRUE), else it’s not (IS_NUMERIC returns 0 for FALSE).
This is a completely feasible approach (and one used on other platforms as well, e.g. Oracle or PostgreSQL) and it provides a result that lets you decide whether or not the tested value can be turned into a number data type in HANA.

IT DEPENDS, as usual

The problem, or rather the limitation of this approach is, that numeric can mean different things in different circumstances.
Sometimes, one might want to accept ‘123.456,–‘ as numeric. Sometimes “$500.00 –” shall be OK, and so on.
Checking the T-SQL definition of ISNUMERIC (here and here) shows that there are plenty of other options.

This means nothing else than that there are alternative approaches to this problem.
One of those alternatives could be to use regular expressions (regex), which have been supported by HANA for quite some time now (since SP10 if I’m not mistaken).
With tools like the various online regex testers, one can comfortably work out a pattern that matches or doesn’t match the kind of data one is interested in.

It might well be that in your application the question is not whether a string or sub-string can pass IS_NUMERIC(), but rather something more specific.
The customer that I mentioned earlier, for example, wanted to see if a string looks like an ICD10 code (one character followed by two digits, ‘B13’, ‘A00’, etc.). Snipping out the first character and putting the rest into an IS_NUMERIC() function is not ideal here.

Instead, the regex ‘^\D\d{2}.’ can be used to capture the whole pattern.
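For readers who want to play with the pattern outside of HANA first, here is a quick Python sketch of the anchored check (Python is used purely for illustration, and the sample codes are made up). The trailing ‘.’ of the pattern above would additionally require a character after the two digits, so only the anchored core is used here:

```python
import re

# One non-digit followed by exactly two digits at the start of the string -
# the anchored core of the '^\D\d{2}.' pattern from the text.
icd10_like = re.compile(r'^\D\d{2}')

print(bool(icd10_like.match('B13')))    # True  - ICD10-style code
print(bool(icd10_like.match('A00.1')))  # True  - code with sub-classification
print(bool(icd10_like.match('007')))    # False - starts with a digit
```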

Show me how it’s done

Let’s take a rather simple regex that matches any character that

  • is not a digit between 0-9
  • is not a plus (+), minus (-), comma (,) or full stop (.)
  • is not an exponent ‘E’ (as in 1.23E02)

Such a regex is ‘[^0-9-.\,+e]’ (check here) with flags for global, case-insensitive matching.

In SAP HANA SQL we can use this regex in the LOCATE_REGEXPR() function. If it doesn’t find any matching string, then the result is 0, else it will return the location of the first matching character.
Since we are only interested in those cases where no matches were found, we can easily MAP this output to make up an IS_NUMERIC value:
1 = no unwanted characters found, it’s numeric
0 = unwanted characters found, it’s not numeric

MAP (locate_regexpr(START '[^0-9\-\.\,\+e]' 
                    FLAG 'i' 
                    IN <string to be searched>)
     , 0, 1
     , 0)
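The same logic is easy to mirror in Python, which can help with testing the character class before putting it into SQL (an illustrative sketch, not HANA code):

```python
import re

# The same character class as in the SQL expression: any character that is
# not a digit, sign, comma, full stop or exponent marker is "unwanted".
UNWANTED = re.compile(r'[^0-9\-\.\,\+e]', re.IGNORECASE)

def is_numeric(text: str) -> int:
    # LOCATE_REGEXPR returns 0 when no match is found; the MAP turns
    # that 0 into 1 ("numeric") and everything else into 0.
    return 0 if UNWANTED.search(text) else 1

print(is_numeric('1.23E02'))   # 1
print(is_numeric('$500.00'))   # 0
```

Note that, just like the SQL version, this only checks the character set and not the structure: a string like ‘++,,ee’ also passes.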

So, what’s better now?

Using the “try-the-conversion“-approach can only be done via a procedure or a scalar function. This brings some overhead with it for the function invocation, which can be avoided when using the regex directly in the SQL statement.

As an example, I ran a quick test against 10.000.000 unique values (all tests were done against HANA1 1.00.122 and HANA2 2.00.020).

Conversion-approach function

select sum("IS_NUMERIC_CONV"(id_char))
  from <test table>
 where "IS_NUMERIC_CONV"(id_char) = 1
   and "IS_NUMERIC_CONV"(id_char) IS NOT NULL;

This ran for roughly 19.5 seconds with 8 CPU cores at 100% usage.
And this is the runtime when all values actually can be converted correctly. With the same number of non-numeric values, the runtime goes up to 2:45 mins.

Regex-approach pure SQL

select sum(IS_NUM_REGEX)
  from (select MAP (locate_regexpr(START '[^0-9\-\.\,\+e]' 
                                   FLAG 'i' 
                                   IN id_char)
                    , 0, 1
                    , 0) IS_NUM_REGEX
          from <test table>)
 where IS_NUM_REGEX = 1;

This ran for roughly 8.6 seconds with 8 CPU cores at 100% usage. Same runtime for both convertible and non-convertible values, here.

Too good to be true

By now you maybe think: “Gee, that regex thing is a lot faster, but also really ugly code. Let’s put that into a function and have our cake and eat it too!“.

Off we go:

create or replace 
 function "IS_NUMERIC_REGEX" (  -- function name not in the original post, chosen for illustration
                 in stringToCheck nvarchar(5000)) 
         RETURNS isNumeric integer
as
begin
     isNumeric := MAP (locate_regexpr(START '[^0-9\-\.\,\+e]' 
                                      FLAG 'i' 
                                      IN :stringToCheck)
                       , 0, 1
                       , 0);
end;

A statement using this function takes roughly 30 seconds with 8 CPU cores at 100% usage.

No cake for us!

As long as all our input data is clean, the conversion-function approach works nicely but really becomes expensive, when there are many cases, for which the exception handling needs to be executed. The lesson here is probably that it’s not a great (performance) idea to make exception handling part of your main processing path.
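The conversion approach translates to a try/except in Python terms, which makes the lesson easy to see: the happy path is cheap, but every failing value pays for the exception machinery (again just an illustrative analogue, not HANA code):

```python
def is_numeric_conv(text: str) -> int:
    # Conversion approach: attempt the cast and treat failure as
    # "not numeric". With clean data this is fast; with many bad values
    # the raise/catch work dominates - the same effect that pushed the
    # HANA function from ~19.5 seconds to 2:45 minutes.
    try:
        float(text)
        return 1
    except ValueError:
        return 0
```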

A truly high-performing solution would need to be implemented as a native HANA function, so it needs to be carefully checked what exactly the job of IS_NUMERIC() shall be in your application and how many matches/non-matches are expected.

There you go, now you know!


HANA inverts the Thunderdome law

If you’ve watched “Mad Max Beyond Thunderdome”  then “… You know the law: Two men enter, one man leaves …“.

Watch out, he’s coming for you!

HANA, of course, reverses, sorry, disrupts this law by making two (or more) values come out of a function, even when only one input value was provided.

Wait, what?

Alright, this Thunderdome reference really is a massive plug and only used to get your attention. Now that I seem to have it, let us get to the topic.

If you’ve been using SQLScript for some time you are aware of user defined functions (UDF). These UDFs can nowadays handle both scalar and table typed parameters, both for the input and for the output aka result of the function.

The main purpose of functions, just like in other programming languages, is code reuse and making the code that uses them easier to understand.
With some thought about the naming, one can even make up parts of a domain specific language for the problem at hand.
Think of

applyTaxRate(sales_price) as "priceWithGST"

in your SELECT statements instead of

 CASE WHEN sales_price < 10 
 THEN sales_price 
 ELSE sales_price * 1.20 
 END as "priceWithGST"

as an example.

All good so far. Now, with HANA performance considerations are nearly always the main concern, so a question often asked is:
Isn’t using functions bad for performance?

Which I can only answer with
Isn’t using high-level languages bad for performance?
And indeed, it is easy to lose some microseconds and several megabytes of memory by using functions compared to “pure” SQL. But then again, the main goal for a developer is to implement functionality (correctly) that eventually provides value for the users.

So, my recommendation is to use functions where it helps with making the programmers more productive and worry about the performance impact, when one can actually measure it.

Back to the topic of this blog.

I had looked at a question on stackoverflow a while ago which basically asked why using UDFs with more than one result seems to lead to slower execution times the more results are actually used.

So something like

select
      getUserDetails (143776).userName
    , getUserDetails (143776).expiryDate
    , getUserDetails (143776).currentAge
    , getUserDetails (143776).customerSegment
from dummy;

would take around four times as long as

select
    getUserDetails (143776).userName
from dummy;

even though we’re running the same function with the same input parameter.
The programmers’ intuition, well at least mine and that of the OP from stackoverflow, was that HANA would evaluate the function once for the provided input parameter and use the different results from this one execution.

Some testing of the query execution times showed that this could not be the case.
A more in-depth analysis was in order here.

The try setup

For my “learn about the system by poking around“-try I created a very simple scenario: a user account table and a query to find the date for when any user account would expire.
Not wasting time with inventing play-doh data I simply ran:

create column table myusers as (select * from users);

With that in place a UDF was easy to build:

drop function userdetails;
create or replace 
 function userDetails(user_id BIGINT) 
 returns userName NVARCHAR (256),
         expiryDate DATE
as
begin
   select user_name
        , to_date(add_days(valid_from, 180)) as expiryDate
     into userName
        , expiryDate
     from myusers
    where user_id = :user_id;
end;
select userDetails (143776).userName
     , userDetails (143776).expiryDate
from dummy;

_SYS_REPO                    21.01.2018

This should be equivalent to this pure SQL statement:

select user_name
     , to_date(add_days(valid_from, 180)) as expiryDate
  from myusers
 where user_id = 143776;
_SYS_REPO 2018-01-21

Looks good, I’d say.

Checking on the execution times shows:

UDF (multi-result)

successfully executed in 39 ms 67 µs (server processing time: 37 ms 819 µs)
successfully executed in 42 ms 929 µs (server processing time: 41 ms 60 µs)
successfully executed in 44 ms 13 µs (server processing time: 42 ms 492 µs)

pure SQL (multi-result)

successfully executed in 1 ms 191 µs (server processing time: 0 ms 385 µs)
successfully executed in 1 ms 933 µs (server processing time: 0 ms 825 µs)
successfully executed in 2 ms 507 µs (server processing time: 0 ms 827 µs)

While this is a big difference (30 times) it is actually the wrong one for the current question. What we need to look at is the difference between executions with just one or two results.

Running the statements with just the expiryDate output:

UDF (single result)

successfully executed in 29 ms 443 µs (server processing time: 28 ms 200 µs)
successfully executed in 30 ms 61 µs (server processing time: 28 ms 610 µs)
successfully executed in 29 ms 586 µs (server processing time: 28 ms 131 µs)

pure SQL (single result)

successfully executed in 1 ms 917 µs (server processing time: 0 ms 735 µs)
successfully executed in 1 ms 63 µs (server processing time: 0 ms 308 µs)
successfully executed in 2 ms 864 µs (server processing time: 0 ms 742 µs)

And here we do see an important difference: the pure SQL runtime stayed stable, while the UDF ran quite a bit faster when fetching just a single result.

Phase One “Reproducing the issue” successfully done.

Moving on with the analysis, I played around with the statements and wanted to check what happens when you select a non-existing user:

Trouble with no data

select userDetails (-4711).expiryDate
from dummy;

Could not execute 'select userDetails (-4711).expiryDate from dummy' in 40 ms 99 µs . 
SAP DBTech JDBC: [478]: 
user defined function runtime error: "DEVDUDE"."USERDETAILS": 
line 9 col 5 (at pos 131): 
[1299] (range 3) no data found exception: 
no data found at user-defined function (at pos 8)
pure SQL

select to_date(add_days(valid_from, 180)) as expiryDate
  from myusers
 where user_id = -4711;


When there is no matching record we get an error with our UDF. That is likely not what we want, so we need to cater for that in the code.
Here we can use the EXIT HANDLER to catch the ‘NO DATA FOUND’ SQL error.

create or replace 
 function userDetails(user_id BIGINT) 
 returns userName NVARCHAR (256),
         expiryDate DATE
as
begin
   declare exit handler for SQL_ERROR_CODE 1299 
   begin
      userName := NULL;
      expiryDate := NULL;
   end;

   select user_name
        , to_date(add_days(valid_from, 180)) as expiryDate
     into userName
        , expiryDate
     from myusers
    where user_id = :user_id;
end;

With this change, the error is caught and the UDF returns NULL values. That’s not the same as an empty set, but good enough for this case and a lot better than the error.
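In general programming terms the exit handler does what a catch-and-default does elsewhere; a hypothetical Python sketch of the same pattern (names and sample data invented):

```python
def user_details(user_id, accounts):
    # Look up the account; when no row exists ("no data found"),
    # return NULL-like values instead of raising - the same behaviour
    # as the UDF's exit handler.
    try:
        row = accounts[user_id]
    except KeyError:
        return (None, None)
    return (row['user_name'], row['expiry_date'])
```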

select userDetails (-4711).expiryDate
from dummy;


Trouble with tracing

The next step of the analysis should be to actually show that the UDF is executed once for each result.

I looked into the different usual suspects for statement analysis and was rather disappointed:
SQL Plan Cache, Expensive Statement Trace and PlanViz all only show the top-level statement + statistics.

PlanViz shows a “Row Search”-Pop as the final processing node and this contains the projection list but that’s about it.

Name: Row Search
ID: ID_EA341305E062BA47B440FD71F07CA318_0
Execution Time (Inclusive): 19.9 ms
Execution Time (Exclusive): 19.893 ms
Execution Start Time: 6,934.218 ms
Execution End Time: 6,954.174 ms
CPU Time (User): 19.883 ms
Projected Cols: 

So what would be a good way to find out whether the function had been called more than once during the statement execution?
I didn’t want to go for some low-level tracing, so I came up with a modification of the function:

Pudding, proofs… is it Christmas yet?

create sequence pcnt;

drop function userdetails;
create or replace 
 function userDetails(user_id BIGINT) 
 returns userName NVARCHAR (256),
         expiryDate NVARCHAR (30)
as
begin
   declare exit handler for SQL_ERROR_CODE 1299 
   begin
      userName := NULL;
      expiryDate := NULL;
   end;

   select user_name || '_> ' || pcnt.nextval
        , to_nvarchar(add_days(valid_from, 180), 'DD.MM.YYYY') || '_> ' || pcnt.nextval as expiryDate
     into userName
        , expiryDate
     from myusers
    where user_id = :user_id;
end;

I am going to use a sequence to add the current execution count to the output values.

In pure SQL the sequence is bumped up once for each output row:

select user_name || '_> ' || pcnt.nextval
     , to_nvarchar(add_days(valid_from, 180), 'DD.MM.YYYY') || '_> ' || pcnt.nextval as expiryDate
  from myusers
 where user_id = 143776;

Execution …
_SYS_REPO_> 30                 21.01.2018_> 30
Next execution…
_SYS_REPO_> 31                 21.01.2018_> 31

That means, if the UDF is really called once per output argument, then we should see increasing numbers for the same result row.
Let’s see what we get:

select userDetails (143776).userName
     , userDetails (143776).expiryDate
from dummy;

_SYS_REPO_> 32               21.01.2018_> 33

Here we got our proof, the function actually is executed twice.
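The effect is easy to reproduce in any language; a small Python stand-in (illustrative only) that counts its own invocations just like the pcnt sequence does:

```python
calls = 0

def user_details(user_id):
    # Stand-in for the UDF: bump a counter on every invocation.
    global calls
    calls += 1
    return {'userName': '_SYS_REPO', 'expiryDate': '2018-01-21'}

# Fetching each result separately triggers one execution per result:
user_name = user_details(143776)['userName']
expiry    = user_details(143776)['expiryDate']
print(calls)  # 2 - the function really ran twice
```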

Ok… so what?

Alright, up to here I spent your time and attention on explaining how to find out that HANA does not do the thing we would like it to do.
If that was the end of it, you could rightfully ask what all this was for.

Worry not! There is a good end in sight!

The answer lies with using the UDFs in SQLScript blocks.
But not like this:

do begin
declare _userName   NVARCHAR(256);
declare _expiryDate NVARCHAR(30);
    _userName  := userDetails (143776).userName;
    _expiryDate:= userDetails (143776).expiryDate;
    select _userName, _expiryDate from dummy;
end;
:1              :2 
_SYS_REPO_> 34  21.01.2018_> 35

We have to step up our SQLScript game here.
While we’re already avoiding the stupid SELECT … INTO FROM dummy workaround for assigning function results to variables, it’s obviously calling the function twice now.

Instead, we need to use the function in a single assignment.
“How can you assign values to multiple variables in a single assignment?”, you ask.

I did the same until one of our SQLScript developers pointed me to something that is, well, “mentioned” in the documentation. I put “mentioned” in quotation marks, as this non-obvious programming construct is not really explained, but shows up in just one code example (this one).

The solution and answer

do begin
declare _userName   NVARCHAR(256);
declare _expiryDate NVARCHAR(30);
    (_userName, _expiryDate) := userDetails (143776);
    select _userName, _expiryDate from dummy;
end;
:1               :2 
_SYS_REPO_> 36   21.01.2018_> 36

With this multi-result assignment, we not only end up with just one function call but also with much nicer-to-read code.
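The multi-result assignment corresponds to plain tuple unpacking in Python terms; the same counting trick (an illustrative stand-in, as before) confirms the single execution:

```python
calls = 0

def user_details(user_id):
    # Counting stand-in for the UDF.
    global calls
    calls += 1
    return ('_SYS_REPO', '2018-01-21')

# One call, both results - the analogue of
#   (_userName, _expiryDate) := userDetails(143776);
user_name, expiry_date = user_details(143776)
print(calls)  # 1
```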

There you go, now you know.

Cheers from Mad Max-country,



HANA backtrace 36/2017

What to read? Whom to follow?

One of the important things to pick up in any community is finding out who the people are that usually have good, funny or interesting content. Many community platforms make it easy to find out who the “leading contributors”, the “heroes”, the “aces” etc. are. Currently, the platform SAP Community is run on struggles with this, big time.
It is actually easy to miss new blog posts, sometimes even if one has subscribed to a specific author.

So, here is a short list of people that I noticed putting out really good content in the SAP HANA space.
This list is really just what my very biased eye has seen and my colander-like memory has not lost.

If you know more authors or think that your own material should be seen by more HANA-heads then please, do not hold back and add it in the comments section.

Laura Nevin

A SAP colleague up North, close to the Niagara Falls, working as a Development Architect in the HANA documentation team. Her blog “atrium” does not just deliver your vanilla product updates but provides the reader with a whole tour around what’s new and noteworthy in the documentation.

I find her blogs particularly well written and produced and silently wish more people (myself included) would publish such nice blogs.

Lucia Subatin

Yes, also a SAP colleague, but from the other end of the world: Argentina.
Lucia has a lot to say and write about the SAP HANA Express edition.
Actually, that’s her job as she is a SAP HANA Evangelist and apparently a very good one.

Jan Zwickel

Jan is a SAP HANA product manager who has written a number of in-depth explanations of the otherwise mysterious “flags” of the graphical calculation views.
Working from SAP’s center of gravity in Walldorf, Germany he is as close to the source as it gets.

Sharyu Sarode

There’s not much that I know about Sharyu, except that she seems rather experienced in the administration corner of SAP on HANA and has a knack for writing about it.

She is not an SAP employee, which makes it even better that she shares her experiences with us. Judging from her profile page, she works in Australia, just like myself.
Her blogs are well prepared and definitely worth the read.


As mentioned, these pointers are just the content I’ve seen and liked here on SAP Community platform.

I am very happy to see that the reports of the death of the platform and the community seem to have been exaggerated and that there are interesting, read-worthy contributions from both SAP and non-SAP authors.

Thanks for that folks!

Other stuff I like to recommend

As there is so much other stuff going on outside the SAP Community let’s have a quick view at that, too.

Weapons of Math Destruction

If you work with HANA, then Big Data, Data Science, and Predictive are your daily bread. Make sure to get an understanding of the social impact this can have.

Both the book and the talk by the author are highly recommendable.

“Badass: Making Users Awesome”

from the famous Kathy-“Head first”-Sierra is a particularly enjoyable and easy read. A well-designed book with good ideas on how to create successful products and not just a bunch of features.

There you go, now you know!

HANA gets stripped

This is going to be one of those weird blog posts, that basically convey just a single piece of information, but since the new SAP Community platform doesn’t really care for WIKI/document style knowledge storage, I had to come up with a click-bait title to make it a blog post.

I’ve written before on installing HANA cockpit and yesterday I fired it up once again.
Snooping around a bit in the tracefiles (running two HANA 2 instances + XSA stacks on my one 32GB NUC always leaves room for questions like “how much actual processing memory do we have left now?“) I found this jewel of an “error curtain”:

[26144]{-1}[-1/-1] 2017-09-12 13:30:34.208039 w Basis ELF.cpp(00770) :
[26144]{-1}[-1/-1] 2017-09-12 13:30:34.208039 w Basis ELF.cpp(00771) :
Using only dynamic symbol table for symbols of ELF file, no string section
Using only dynamic symbol table for symbols of ELF file, no string section
Using only dynamic symbol table for symbols of ELF file, no string section
Using only dynamic symbol table for symbols of ELF file, no string section
Using only dynamic symbol table for symbols of ELF file, no string section
Using only dynamic symbol table for symbols of ELF file, no string section
Using only dynamic symbol table for symbols of ELF file, no string section
Using only dynamic symbol table for symbols of ELF file, no string section

[... Buzz-Lightyear continuation mode ... ]

Strip it!

To the average advanced computer user like myself, this looks like error messages.

Yes, these messages are marked as ‘w’ like wwwwwarning and the whole system still seems to be doing its thing, but personally, I don’t really like when I see tons of such messages and find precisely zero documents, SAP notes, KBAs or other information sources that can tell me what they mean.
I have the same issue with looking into the CONSOLE app of my MacBook by the way… #who_thinks_of_the_poor_end_user_nerd?

After some HANA source code reading (that really helps every now and then, especially when the trace file message neatly points to C++ file ELF.cpp, lines 770 and 771) I figured out two things:

  • this is really just an info-warning written out when shared libraries are loaded by a HANA process. That usually happens once, at process startup, e.g. when the nameserver starts up.
  • this specific code has been around for several years now. Yet, I have not seen these messages before, which means that something else (aka not this piece of code) must have changed.

The next best thing from here was to look into the trace files of my other HANA 2 installation.
Guess what! No such messages in any of the trace files.

Ok, I thought, so maybe there is something different with the HANA installation that comes with HANA cockpit.

Fighting with ELFes

(sadly, she has nothing to do with this…)

Let’s have a look into that anyway.

The warning message listed the shared object (.so) files that it seems to have issues with.
Checking one of those in my HANA Cockpit installation with the Linux tool “file”:

file /usr/sap/H4C/HDB96/exe/
ELF 64-bit LSB shared object, x86-64, version 1 (SYSV)
, dynamically linked
, BuildID[sha1]=cafedde1f0db3527e5a8f3f69fc75019287f4196
, stripped <<< !

Compare this with my other HANA 2 installation:

file /usr/sap/S20/HDB20/exe/
ELF 64-bit LSB shared object, x86-64, version 1 (SYSV)
, dynamically linked
, BuildID[sha1]=1089da0d45a4fc59a631f0877b0f2786a5282937
, not stripped <<< !

Aha! So, the files actually have been “stripped”.

What that means is that human-readable information, so-called symbols, about the compiled program has been removed from the .so files.
This information is not required to run the program but is extremely useful for debugging and problem analysis.
Without this information, a crash dump would only contain memory addresses for functions, and since dynamic loading is used, these addresses change every time the program is run.
With symbols in the files, on the other hand, a stack trace or a crash dump literally prints out the names of the involved functions and methods.
This is what makes the problem analysis actually possible in most cases.

Huh? I mean why?

“Peachy”, you think, “why on earth would somebody then strip this important info from the files?”
Great question!

The reason is the file size for the .so files.
Symbols can take quite some space in them. This alone might not be a problem since disk space is cheap, but the symbols also get loaded/mapped into memory when the shared object file is used.

And if there is a single precious resource on HANA systems it is the main memory.

How much can we save with this stripping?
First the unstripped version:

ls -lh /usr/sap/S20/HDB20/exe/
-r-xr-xr-x 1 s20adm sapsys 8.0M Jul 25 05:11 /usr/sap/S20/HDB20/exe/

Now the stripped version:

ls -lh /usr/sap/H4C/HDB96/exe/
-r-xr-xr-x 1 h4cadm sapsys 2.2M Jun 26 19:42 /usr/sap/H4C/HDB96/exe/


Down from 8MB to 2.2MB!
And that is just one of 418 shared objects
(counted on my HANA 2 system via “ls -l *.so | wc -l“).

In total that amounts to quite a difference between the binary sizes for the normal HANA installation and the HANA Cockpit one:

“Normal HANA 2”
ls -lh /usr/sap/S20/HDB20/exe/ | grep total
total 5.2G
“HANA Cockpit”
ls -lh /usr/sap/H4C/HDB96/exe/ | grep total
total 1.7G

Obviously, it has been a goal for the HANA cockpit team to make the additional overhead of having its own HANA instance as small as possible.
There is actually another case where memory usage of HANA needed to be minimized: the HANA Express edition.

After quickly firing up my HXE virtual machine (gee, I really love my NUC system for making this so easy) I checked whether it also uses symbol stripping and lo and behold, it does:

“HANA Express edition”
ls -lh /usr/sap/HXE/HDB90/exe/
-r-xr-xr-x 1 hxeadm sapsys 2.2M Jul 24 19:16 /usr/sap/HXE/HDB90/exe/

file /usr/sap/HXE/HDB90/exe/
ELF 64-bit LSB shared object, x86-64, version 1 (SYSV)
, dynamically linked
, BuildID[sha1]=1089da0d45a4fc59a631f0877b0f2786a5282937
, stripped <<< !

For those interested to learn more about symbols and shared libraries here are some pointers:

Finally, to go really down into the deep end of this rabbit hole, can we actually see that HANA uses less memory now?

Yes, we can!

The first option, on OS level:

“HANA Express”
hxeadm@hxehost:/usr/sap/HXE/HDB90> HDB info
hxeadm 1707 1 0.0 21628 2792 sapstart pf=/usr/sap/HXE/SYS/profile/HXE_HDB90_hxehost
hxeadm 1716 1707 0.0 217104 60060 \_ /usr/sap/HXE/HDB90/hxehost/trace/hdb.sapHXE_HDB90 ...
hxeadm 1739 1716 9.5 6880612 5089404 \_ hdbnameserver

We take the process ID of the nameserver and use the pmap tool with it.
As the man page explains, pmap displays information about process memory mappings.

hxeadm@hxehost:/usr/sap/HXE/HDB90> pmap -x 1739

1739: hdbnameserver
000055b587d67000 1932K    1928K    1928K      12K   0K      0K r-xp .../hdbnameserver
000055b587f4b000  248K     248K     248K     248K   0K      0K r--p .../hdbnameserver
000055b587f89000    4K       4K       4K       4K   0K      0K rw-p .../hdbnameserver
00007f0be3758000  712K     520K     288K       8K   0K      0K r-xp .../
00007f0be380a000   16K      16K      16K      16K   0K      0K r--p .../
00007f0be380e000    4K       4K       4K       4K   0K      0K rw-p .../
00007f0c54af6000    4K       4K       4K       4K   0K      0K rw-p [anon]
00007ffccc580000  136K      68K      68K      68K   0K      0K rw-p [stack]
00007ffccc5bc000   12K       0K       0K       0K   0K      0K r--p [vvar]
00007ffccc5bf000    8K       8K       0K       0K   0K      0K r-xp [vdso]
ffffffffff600000    4K       0K       0K       0K   0K      0K r-xp [vsyscall]
Total:        6880616K 5092916K 4825683K 4633372K   0K      0K

5870672K writable-private, 992656K readonly-private, 17288K shared, and 4621300K referenced

(full path to objects /hana/shared/HXE/exe/linuxx86_64/HDB_2.

The output from pmap is usually very long and wide, so I cut out a lot here and shortened the file path.
The important bit is 17288K (~16 MB) shared in the last line.

s20adm@skullbox:/usr/sap/S20/SYS/exe/hdb> HDB info
s20adm 20398 1 0.0 21600 228 sapstart pf=/usr/sap/S20/SYS/profile/S20_HDB20_skullbox.lab.cat5
s20adm 20407 20398 0.0 322428 580 \_ /usr/sap/S20/HDB20/skullbox.lab.cat5/trace/hdb.sapS20_HDB20 ...
s20adm 20429 20407 11.8 9131476 2749264 \_ hdbnameserver

s20adm@skullbox:/usr/sap/S20/SYS/exe/hdb> pmap -x 20429
20429: hdbnameserver
000000679a5a6000 1932K 240K 240K    8K   4K      4K r-xp .../hdbnameserver
000000679a78a000 248K   24K  24K   24K 224K    224K r--p .../hdbnameserver
000000679a7c8000 4K      4K   4K    4K   0K      0K rw-p .../hdbnameserver
00007fd36a355000 76K     0K   0K    0K   0K      0K r--s .../
00007fd36a368000 32K     0K   0K    0K   0K      0K r--s .../
00007fd36a370000 648K    0K   0K    0K   0K      0K r--s .../
00007fd36a412000 1368K   0K   0K    0K   0K      0K r--s .../
00007fd36a568000 116K    0K   0K    0K   0K      0K r--s .../
00007fd36a585000 156K    0K   0K    0K   0K      0K r--s .../
00007fd36a5ac000 80K     0K   0K    0K   0K      0K r--s .../
00007fd46f0d7000 4K      4K   4K    4K   0K      0K rw-p [anon]
00007ffdd3c51000 136K   28K  28K   28K  40K     40K rw-p [stack]
00007ffdd3dd6000 12K     0K   0K    0K   0K      0K r--p [vvar]
00007ffdd3dd9000 8K      8K   0K    0K   0K      0K r-xp [vdso]
ffffffffff600000 4K      0K   0K    0K   0K      0K r-xp [vsyscall]
Total:          8925232K 2753308K 2717475K 2491696K 3128688K 3023716K

5720584K writable-private, 994368K readonly-private, 2210280K shared, and 2569084K referenced

(full path to objects /hana/shared/S20/exe/linuxx86_64/HDB_2.

Same story here, only this time we have 2210280K (2.1 GB) of shared data.

The HANA way

Another approach to compare the memory usage is of course to query the HANA monitoring views:

select name
     , round (value/1024/1024, 2) as value_MB
  from m_memory;
                               HANA 2    HANA Express
NAME                          VALUE_MB       VALUE_MB
SYSTEM_MEMORY_SIZE           32063.550      23546.860
SYSTEM_MEMORY_FREE_SIZE        272.140        246.680
PROCESS_MEMORY_SIZE           8652.050       6717.480
PROCESS_RESIDENT_SIZE         2110.720       4960.260 <<< ~ Linux RSS
PROCESS_CODE_SIZE             2017.950          3.580 <<< ~ Linux shared
PROCESS_STACK_SIZE             146.50          99.690
PROCESS_ALLOCATION_LIMIT     28857.190      21192.170
GLOBAL_ALLOCATION_LIMIT      28857.190      21192.170

More about HANA memory can be found in SAP Note 1999997 – FAQ: SAP HANA Memory

If you see warnings about “using only dynamic symbol table” now, you can put them into perspective.

There you go, now you know.



A new feature! YAY!?

A recent discussion in the SAP internal JAM community for HANA made me aware of a feature added to HANA SQL starting with HANA 1 SPS 12 (don’t ask the revision, just check the latest), respectively HANA 2 SPS 02 (we don’t speak about SPS 00 and SPS 01, do we?): “calculated columns”.

Generated <> Calculated

Like many other functions, this feature seems to have been implemented to

  • tick the box for SQL standard compliance/competition feature equality
  • to provide Core Data Services (CDS) with the SQL implementation for a functionality (all HANA CDS functions are mapped to SQL features in some way, which means that it is not possible to express anything more in CDS than in SQL)

Initially, I thought “calculated columns? We’ve had that for some time!” and associated the “GENERATED ALWAYS AS” (GC) columns, but this is not what “calculated columns” (CC from here on) are. CCs do not store any values; instead, the values get computed at query time.
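As a loose analogy in Python terms (class and attribute names are made up for illustration): a GENERATED column behaves like a value computed once and stored with the row, a calculated column like a property that is re-evaluated on every access:

```python
class UserInfo:
    def __init__(self, first_name, last_name):
        self.first_name = first_name
        self.last_name = last_name
        # "GENERATED ALWAYS AS" analogue: computed once, stored with the row
        self.email_generated = f'{first_name}.{last_name}@example.com'.lower()

    @property
    def email_calculated(self):
        # calculated-column analogue: nothing is stored,
        # the value is recomputed at "query time"
        return f'{self.first_name}.{self.last_name}@example.com'.lower()

u = UserInfo('Jane', 'Doe')
u.last_name = 'Smith'
print(u.email_generated)   # jane.doe@example.com   (stale stored value)
print(u.email_calculated)  # jane.smith@example.com (recomputed)
```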

Look, others have it too!

They are the corresponding feature to Oracle’s “virtual column” and MS SQL Server’s “Computed Column”.
Or, to be more precise, both HANA features CC and GC combined map to the respective features of the other DBMS since these provide the option to persist computed values as a command option (makes sense to me). As mentioned above, the SQL Standard (2003) contains computed columns as an optional feature, which is probably why this obscure functionality is present in all those DBMS products.

What is it for?

While I understand the usage scenarios for GC where the result of the computation gets stored – pre-computing something is about performance and reuse later on – I am not too sure about the use case for CC’s.
The SAP colleague in the mentioned JAM discussion was quite fond of them, as they allow specifying computed columns without using views. That way, common data transformations that can be built with just the columns of one table can be defined very close to the table itself.
This sounds like a good thing.

Now, I have a different perspective on that.

No surprises are good surprises

Let’s say I run a plain old

SELECT * FROM user_info;

my expectation is that HANA provides me with the currently valid contents of the visible table columns.
I definitely do not expect to find any computation in the explain plan. But with expression-based columns, this is exactly what I get:

Of course, this computation adds runtime and memory usage to the statement processing and will change the query plan in more complicated scenarios.

The same surprise will show up when I try to index the column:

create index i_address on user_info (email_address);
Could not execute 'create index i_address on user_info (email_address)'
SAP DBTech JDBC: [7]: feature not supported: 
cannot support create index on calculated field: EMAIL_ADDRESS: line 1 col 38 (at pos 37)

or when I try to partition based on this column

alter table user_info partition by HASH (email_address) PARTITIONS 4;
Could not execute 'alter table user_info partition by HASH (email_address) PARTITIONS 4' in 39 ms 39 µs . 
SAP DBTech JDBC: [2048]: column store error: 
fail to alter partition: [2051] partition specification not valid;
The attribute EMAIL_ADDRESS is not allowed in a partition specification.

Both indexing and partitioning are the main use cases for the CC feature in Oracle and MS SQL Server.
HANA does not support these use cases, so what’s left really is not having to create a view as the main benefit.

Beginning of problems

Obviously one could argue: well, the expectation that HANA should not be computing expressions when I query a base table is just wrong.
As of SQL:2003 – and that was 14 years ago! – it is no longer correct.
So, go change your expectation!

Fair call, but most users don’t have that expectation.
The notion of the “dumb” base table is so strong that things like CCs and triggers are easily overlooked.
And that is where the problem starts: when the expectation about the behaviour is wrong, this usually leads to mistakes, wrong decisions and wasted time and effort.

The poor person who is confronted with a table using CCs later on (might even be the same developer who simply forgot that this table contains CCs) cannot tell from “looking” at the table which columns are real “base” columns and which ones are computed.
It does not show in the table definition UI and it does not show in SELECT TOP 5 * from <table>.
Only when this person digs deeper and checks the CDS source code or the HANA system views (TABLE_COLUMNS), specifically looking for this setting, can this piece of information be seen:

select column_name, data_type_name
     , index_type
     , generated_always_as, generation_type
  from table_columns
 where table_name = 'USER_INFO'
   and schema_name = current_schema
 order by position asc;

I do not like these kinds of surprises and would rather have a base table plus a consumption view that contains the necessary expressions and transformations.
Not only can I create more complex expressions in such a consumption view, but I also have a better chance of understanding where data is stored and where data is computed.
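A minimal sketch of that alternative, reusing the user_info example from the end of this post (the view name is made up):

```sql
-- base table keeps only the stored columns;
-- the expression lives in the consumption view
create view user_info_v as
    select user_part,
           domain_part,
           user_part || '@' || domain_part as email_address
      from user_info;
```

Anyone querying user_info_v sees the computation right there in the view definition, and the base table itself stays free of surprises.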

Pulling the rug?

The only other use case for CCs that came to my mind is changing the data model on DB level while the application(s) remain unchanged.
Such changes could only be made for applications that read from the table; as soon as somebody tries to write to the computed column, this leads to an error.

So, you see, I really struggle with finding a good reason for using this feature.
What about you?
Have you come across a scenario where this was/would have been the absolute life saver? When and where would you use it and why?

Please share your experience with CCs or virtual/computed columns in the comments section.

There you go, now you know!



p.s. I created my user_info table based on a table with made up email addresses:

create column table user_info as (
     select
        substr_before (emailaddress, '@') user_part,
        substr_after (emailaddress, '@') domain_part
     from email_adr);

Adding the computed column is straight forward:

alter table user_info 
      add (email_address NVARCHAR(320) as 
           user_part ||'@'|| domain_part);

Note: only HANA expressions are allowed in the computation part. No function calls or complex SQL statements.

What HANA needs now is some…

… logging!

Yes, you just got click-baited into reading this advertisement for a product enhancement idea that I posted: Standard logging facility for SQLScript

The idea here is simple: when you write application logic in SQLScript then you very likely want to have the option to log what your program does.

This is not the logging I am talking about
(Photo by ABC Photo Archives – © 2011 American Broadcasting Companies, Inc.)

What I am talking about is a language feature in SQLScript that would allow writing out arbitrary messages during the execution of a SQLScript read/write procedure to some logging target. A target could really be anything that can receive the logging information, e.g. a database table, the OS syslog, a logging server, a CSV file, an OS pipe, a Kafka topic…

HANA SQLScript should provide the API for such a logging facility so that developers can focus on their business logic instead.

Especially with more and more application logic spread out over different micro-services, js apps and SQLScript an open and flexible logging feature could help to make sense of what an application does from one end to the other.

Another big technical benefit of a built-in logging facility would be that it could avoid the use of dynamic SQL and all the issues around it (security, performance, …).
Instead of pasting together an INSERT statement, a simple function call could not only invoke a potentially buffered log write but also automatically include common technical information like the current code location, timestamp, session context, user name, etc.
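To illustrate the idea – and nothing more, as no such API exists in SQLScript today: the LOG call, its severity parameter and the table names below are all made up:

```sql
create procedure load_prices (in batch_id integer)
as
begin
    -- ... some business logic ...
    update prices set valid = 'X' where batch = :batch_id;

    -- hypothetical built-in: buffered write of the message plus
    -- code location, timestamp, session context and user name
    log ('INFO', 'updated ' || ::ROWCOUNT || ' price records for batch ' || :batch_id);
end;
```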

If you agree that such a feature is just what is missing in SAP HANA SQLScript, then go and vote for the idea Standard logging facility for SQLScript.

Note that you have to register for the Idea Place first and that you have to be logged on to vote.

HANA 2 and two riddle errors

HANA 2 has been out for a while now – even SP01 of it is available already – so I thought it is about time to get more familiar with it.
Using my trusty “skullbox” (HANA in a pocket… ) I installed HANA 2 and realised that I would also need a HANA Cockpit 2.0 installation to be able to use all of the glorious new features.

Downloading the HANA Cockpit 2.0 package is a breeze as long as you do it via a proper broadband connection.
The installation archive for the database administration tool alone weighs in at about 2.5 GB (sic!).

> ll -h
-rw-r--r-- 1 lars sdba 2.5G May 12 19:58 SAPHANACOCKPIT02P_1-70002299.SAR

Compare this with, say, the current openSUSE Leap Linux distribution (4.7 GB) and one cannot help having certain expectations around usability, the range of features and general awesomeness.
BTW: openSUSE can be downloaded via the torrent network – given the size of the installation binaries for typical SAP applications, that would be nice to have for the SAP Service Marketplace downloads, too.
Instead, in 2017, we still have to use the “Download Manager” tool, which sequentially downloads files from one server at a time.

Once the archive is on my skullbox, the interesting part begins.
Using the graphical installer “hdblcmgui” I triggered off the installation.
Being a “consumer” type of computer user as long as the situation allows, I simply accepted all the default values concerning file paths, instance numbers or names.

Click next, click next

And off the installation went, pushing a full blown HANA 2 instance onto my machine.
“WHAT?” I hear you say… to which I have to respond: “That’s right, the HANA Cockpit 2.0 comes with its very own HANA 2 installation.”
That approach probably makes sense if one is managing many, many instances across an organisation in a centralised fashion (yep, that’s the old business user – IT – DBA/Admin way of thinking), but in times where “DevOps” is THE thing and development teams look after their “own” DBs themselves, this looks like bloat to me.
If anything, this sure will tick up the counter for productive HANA instances…

Surprisingly, the installation of HANA 2 went rather quickly. I suspect that having an SSD in your system really pays off for this sort of workload.
When I was just on the homestretch to the successful-installation-celebration, a dialogue box announced:
“Installation failed – Error occurred while executing SAP HANA Cockpit Components.”

Fortunately, the hdblcmgui tool makes it really easy to look at the log files in order to find out what caused the error.
Kudos to the development colleagues for that!

This is what I found in the error log file:
18:02:19.875 - INFO: Output line 89: [26] --- Installation step: Set the HANA broker default DB ---
18:02:19.875 - INFO: Output line 90: [26] Set the default mapping to : '594863bc-d5f5-0b77-e100-00007f000002'(SYSTEMDB)
18:02:19.933 - INFO: Output line 91: [26] Map org: space: to DB: 594863bc-d5f5-0b77-e100-00007f000002
18:02:19.985 - INFO: Output line 92: [27] --- Installation step: Audit-Log Service (DB) ---
18:02:19.994 - INFO: Output line 93: [27] Creating application auditlog-db
18:02:19.994 - INFO: Output line 94: [27] Creating service auditlog-db-container
18:02:23.161 - INFO: Output line 95: [27] Uploading files for application auditlog-db
18:02:23.417 - INFO: Output line 96: [27] Staging application auditlog-db
18:02:24.454 - INFO: Output line 97: [27] 6/20/17 6:02:23.601 PM [STG/1] ERR /bin/sh: mc: line 1: syntax error: unexpected end of file
18:02:24.454 - INFO: Output line 98: [27] 6/20/17 6:02:23.601 PM [STG/1] ERR /bin/sh: error importing function definition for `mc'
18:02:24.454 - INFO: Output line 99: [27] 6/20/17 6:02:23.604 PM [STG/1] ERR bash: mc: line 1: syntax error: unexpected end of file
18:02:24.454 - INFO: Output line 100: [27] 6/20/17 6:02:23.604 PM [STG/1] ERR bash: error importing function definition for `mc'
18:02:24.454 - INFO: Output line 101: [27] 6/20/17 6:02:24.160 PM [STG/1] ERR bash: mc: line 1: syntax error: unexpected end of file
18:02:24.455 - INFO: Output line 102: [27] 6/20/17 6:02:24.161 PM [STG/1] ERR bash: error importing function definition for `mc'

First off: why is the error message (indicated by ERR) marked as a non-error INFO output line?
That is unnecessarily confusing.

Then looking at the error messages really didn’t help all that much:

/bin/sh: mc: line 1: syntax error: unexpected end of file
/bin/sh: error importing function definition for `mc'

bash: mc: line 1: syntax error: unexpected end of file
bash: error importing function definition for `mc'

“What is function ‘mc’?” you might ask and so did I.
Turns out, many moons ago, when I first installed my NUC system, I opted for some convenience in the dire straits of the Linux terminal console and installed Midnight Commander (a fantastic tool, everyone should have that!).
MidnightCommander’s program file happens to be called ‘mc‘… so, this seems to be a hot lead.

Unfortunately, I’m not a super-duper Linux expert, which is why I wasn’t aware of the way Linux shells technically implement their startup.
Why would I need to know this? Well, after some lengthy research, I found that the error message was caused by shell initialisation scripts stored in the system folder /etc/profile.d.

Without going into too many details here (look here instead), there were two symbolic links in this folder (one of them mc.csh), pointing to files from the MidnightCommander installation.

After deleting those links (you can always recreate them via `ln /usr/share/mc/ /etc/profile.d/` and `ln /usr/share/mc/mc.csh /etc/profile.d/mc.csh`), I thought the problem was fixed and I could simply continue the installation.
For some reason, though, that does not seem to be possible. Instead, I uninstalled the whole half-installed HANA Cockpit setup and started from scratch.

This time around, no script failed due to the “syntax error” and the “unexpected end of file”.

Awesome, until…

Which would have been awesome, had the installation not run into a different error right after that:

18:22:34.688 - INFO: Output line 156: [31] [DEPLOY_SERVICE] Getting multi-target apps in org "HANACockpit" / space "SAP" as COCKPIT_ADMIN...
18:22:34.689 - INFO: Output line 157: [31] [DEPLOY_SERVICE] No multi-target apps found
18:22:34.689 - INFO: Output line 158: [31] [DEPLOY_SERVICE] 
18:22:34.697 - INFO: Output line 159: [31] Checking for existing service lcm-view-grantor
18:22:34.713 - INFO: Output line 160: [31] Creating service lcm-view-grantor
18:22:35.947 - INFO: Output line 161: [31] Checking for existing service lm-service-credentials
18:22:35.960 - INFO: Output line 162: [31] Creating service lm-service-credentials
18:22:36.024 - INFO: Output line 163: [31] Deploying application Product Installer
18:22:36.281 - INFO: Output line 164: [31] [DEPLOY_SERVICE] 
18:22:36.707 - INFO: Output line 165: [31] [DEPLOY_SERVICE] Uploading 1 files:
18:22:36.707 - INFO: Output line 166: [31] [DEPLOY_SERVICE] /hana/shared/H4C/xs/installdata/apps/product-installer/product-installer.mtar
18:22:38.243 - INFO: Output line 167: ERR Failed to upload files
18:22:38.243 - INFO: Output line 168: ERR Error occured during communication with HTTP server
18:22:38.244 - INFO: Output line 169: ERR Broken pipe (Write failed) (local port 63255 to address (localhost), remote host unknown)
18:22:38.255 - INFO: Output line 170: [31] Removing service lcm-view-grantor
18:22:38.258 - INFO: Output line 171: [31] Service binding lcm-view-grantor not found
18:22:38.312 - INFO: Output line 172: [31] ERROR: 
18:22:38.312 - INFO: Output line 173: [31] 
Installing the Product Installer application failed: 
Error occured during communication with HTTP server
18:22:38.312 - INFO: Output line 174: [31] at 

Once again, error messages are labelled as non-error INFO lines, but good to know each timestamp to the millisecond…

This time around, the DEPLOY_SERVICE complains about a “Broken pipe” when writing to a port on localhost, claiming the “remote host is unknown“.
“How odd!” I thought, because “localhost” is not “remote” at all and not “unknown” at all.

A quick ping localhost proved that “localhost” was indeed known and reachable, so what could be the problem?

The key hint – which I was not able to pick up myself; instead I found it in a side comment in one of the few open customer incidents – is that this is not about communication via UDP or TCP, but via HTTP.
HTTP uses cookies for a variety of purposes.
Once again, I am not an expert on HTTP standards, but intensive googling eventually led me to pages explaining how cookie domains are validated.

Count your dots!

The insight from those pages was that proper cookie domains have to contain at least 2 (two!) dots/periods.
That’s right: a fully qualified hostname like `skullbox.cat5` is not suitable for proper cookies.

> hostname -f
skullbox.cat5
Alright, so I changed the domain name of my system (which had been running HANA and other software in different versions for over a year without ever having an issue with the domain name, but hey…) to `skullbox.lab.cat5`.

That’s two dots in that name now:

> hostname -f
skullbox.lab.cat5
To be frank, this error is the one that really frustrated me.

Nowhere in the current or past HANA documentation does it say:
“Make sure to have two dots in your fully-qualified-hostname or we’ll throw random cryptic error messages at you.”

Even with the knowledge of what went wrong, it was a task to find any SAP documentation relating to this.
The closest I found is the following:

“In accordance with the Netscape cookie specification, cookies can be set for one domain only, and a domain must contain two or three dots (.) due to security restrictions.
Each of the seven top level domains (.COM,.EDU,.NET,.ORG,.GOV,.MIL,.INT) must contain at least one further domain component (usually the name of the company or organization), amounting to two dots.
Each domain with a different ending (this includes the top level domains for countries, such as UK, DE, FR, and so on) must consist of two further domain components, that is, these domains must contain at least three dots. For more information see the cookie specification.

Examples of valid domains:

<host> Top level domain -> two domain components
<host> No top level domain -> three domain components”

Rounding off

After another round of reboot and uninstallation of the broken HANA Cockpit setup, hdblcmgui finally went through all the steps, finishing with “green lights”.

So, here we are. 157 lines into the installation of a database monitoring and administration tool that apparently is required to use HANA 2.

Given the information available (before I wrote this blog post), it took me several weeks to get it up and running – and nearly two days straight when I finally decided to get to the bottom of the errors and end up with a proper HANA 2 setup.
I cannot say that I enjoyed the experience or felt that it was time well spent.
Hopefully, having these two nasty problems documented and searchable saves others some time.
There you go, now you know!

Eco-friendly slide recycling with HANA

Hey there!

I recently presented a short session on my experiences with developing and debugging non-classic-SAP HANA solutions at the Leading Insights conference in Melbourne (27./28.3.2017).

And since the slides are prepared in a way that they could be used without my talking as well, I thought, I might just post them. So, there they are on slideshare:

If you feel this is a shameless content plug, you’re probably right, but I got the feeling that this might still be interesting to some readers of the “SAP HANA” tag.

Content-wise, this is what you can expect (just in case you don’t want to swap to page 2 which shows the table of contents):

  • an exciting example for a pure HANA development without any NetWeaver/ERP background
  • some practical tips for finding performance issues with SQL statements
  • and tidbits of performance gone bad and how to fix it.

There you go; now you know.


Make query hints great again…

In case you ever got frustrated by how 1980s-like the SAP HANA hint infrastructure looks, go and support my improvement idea on the SAP Idea Place:

Go and upvote! Now! 🙂

[Update 23.1.2017]: After some of the folks that I really liked to review my idea reported that they don’t have access to it, I decided to put the idea text as a copy in here as well.
So here it is:

Currently (HANA 1 SPS12/HANA 2 SP0) SAP HANA supports the usage of query hints via information view parameters or – most commonly used – the WITH HINT query parameter.

Most of the time, the hints are used to avoid a specific bug in SAP HANA and are only applicable to a specific version/revision or application state.

For all practical matters, the hints don’t serve a functional purpose for any application but are necessary band-aids to enable the application to run successfully on SAP HANA.

As many bugs do get fixed with later SPS/revisions, the hints could and should be reviewed in order to allow the application to leverage the formerly disabled feature again.

Right now, this means that a developer/system operator has to manually document and keep track of all hints used in the system: the reason for the usage, the HANA revision the hint applies to, maybe incident numbers where the hint usage had been discussed, and probably a target HANA version from which the hint shouldn’t be required anymore.

This task is obviously not delivering a lot of value for any developer, so in reality a hint, once introduced, will likely stay in place forever, as nobody remembers what it was used for and whether it’s safe to remove it now.

The proposal is as follows:

Instead of the simple WITH HINT syntax, allow for a more comprehensive hint management: make it possible to identify a specific statement (e.g. via a custom statement identifier in a statement annotation or comment) and provide a HANA function to specify a set of hints for any statement ID, including the already mentioned metadata.

Upon SQL query compilation SAP HANA would automatically match the statement ID against the maintained statement hints (this could be a single table structure) and include the hint into the statement execution.

The fact that this happened (and against which statement id hint setting this match had been performed) should be visible in the usual query execution analysis tools like EXPLAIN PLAN or PLANVIZ.

The proposed solution has the following benefits:

  • Application code does not need to be modified to add hints, as long as the statements can be uniquely identified.
  • The hint application can be controlled centrally, without the involvement of the original developer – essentially the hint can be applied by operations/support staff.
  • The query compiler in HANA can automatically check for the valid HANA versions and ignore the hint for non-matching versions.
  • The hints for a given installation can be transported like a configuration data set across multiple servers.
  • The central list of statements with hint settings allows an efficient follow-up of HANA issues and – given an option to ignore the hint list during plan compilation – also allows for very efficient testing of whether a hint is still required or not.
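To sketch what the central hint store from this proposal could look like (purely illustrative – the table and all column names are made up; nothing like this exists in HANA today):

```sql
create column table statement_hints (
    statement_id   nvarchar(256) primary key, -- matches the statement annotation/comment
    hint_text      nvarchar(2000),            -- the hint(s) to inject
    reason         nvarchar(2000),            -- why the hint was introduced
    incident_no    nvarchar(60),              -- related support incident, if any
    min_hana_rev   nvarchar(20),              -- first revision the hint applies to
    max_hana_rev   nvarchar(20)               -- revision from which it should be retired
);
```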

I haven’t found an obvious way to contact support for the SAP Ideas Place, so my best bet is that in order to view any idea a registration is required.

  • Lars

Merging right. Some tests with data loading in HANA from Down Under.

Based on SAP HANA SPS 12 (HCP trial)

When Jody Hesch recently sent me an email with a question about HANA, he received what everyone receives when sending email questions about HANA. The reply is a version of:

“Usually I don’t do email Q&A as this simply doesn’t help with knowledge sharing.

Instead, I advise everyone to post the question in one of the HANA-related forums
(SAP Community or even Stack Overflow) so that the question and its answers are search- and findable.

That way everyone can benefit from this and you even might get faster and/or better answers than from just writing to me.”

I have not given up on my personal fight against GEIGOKAI and this is what it looks like in daily real life.


You might be inclined to call this reply rude, but it turns out that most people accept it very easily and manage to copy & paste their question into one of the forums – and changing culture is always a bit painful … drip drip drip

And just as many others, Jody managed to post his question for all to see, read and discuss here: Manually managing Delta Merge

While I was doing the due diligence of checking my own ideas against an HCP trial HANA instance (SPS 12.04), the whole explanation and the example got larger than what would be nice for a simple answer, which is why you are now reading this in blog post format.

On loading data and merging

Ok, let’s get to it, then.
Jody’s question was “how can manually handling delta merge make a data load faster?”
To answer this I set up a small test case:

drop table merge_test;
create column table merge_test (
     "ID" INTEGER,
     "AAA" NVARCHAR(20),
     "BBB" INTEGER ,
     PRIMARY KEY ("ID"));

do begin
    declare ts_start timestamp = current_timestamp;
    declare ts_end timestamp;
    declare loops integer = 100;
    declare j integer;
    declare i integer;
    declare with_commit integer = 1;

    truncate table merge_test;
    for j in 0 .. :loops do
        for i in 0 .. :loops*50 do
            upsert merge_test
                values (:i, :i || ' - ' || :j, :i*:j)
                with primary key;
        end for;
        if with_commit = 1 then
            commit;
        end if;
    end for;

    ts_end = current_timestamp;
    -- now show some infos
    select seconds_between(:ts_start, :ts_end) as duration from dummy;
    select count(*) as num_rows, min(id), max(id) from merge_test;
    select column_name, memory_size_in_main, memory_size_in_delta, count, distinct_count
    from m_cs_all_columns
    where table_name ='MERGE_TEST'
    and schema_name =current_user;
    select merge_count, read_count, write_count, last_merge_time
    from m_cs_tables
    where table_name ='MERGE_TEST'
    and schema_name =current_user;
    select start_time, motivation, execution_time, memory_merge, merged_delta_records, success
    from m_delta_merge_statistics
    where table_name='MERGE_TEST'
    and start_time >= :ts_start
    and schema_name = current_user;
end;
What we have here is a simple table and a data loading simulation. The data gets inserted or updated via the UPSERT command and, if the parameter with_commit = 1, a commit is done after each iteration of the outer loop.
After both loops have been left, the code prints the runtime in seconds and selects some statistics from system views.

Next, I ran three experiments.

The first two used the SAP HANA AUTOMERGE feature, where HANA decides when to perform a delta merge.
The third one followed the approach SAP BW uses for data loading: the AUTOMERGE is disabled and the target tables get merged after all data has been loaded.




Run 1 – AUTOMERGE on, no commits during the load (with_commit = 0):

DURATION
386

NUM_ROWS    MIN(ID) MAX(ID)
5001        0       5000

COLUMN_NAME MEMORY_SIZE_IN_MAIN MEMORY_SIZE_IN_DELTA    COUNT   DISTINCT_COUNT
ID          1064                3733044                 505101  5001
AAA         0                   17607386                505101  -1
BBB         0                   5571068                 505101  -1
$trex_udiv$ 232                 8889016                 0       0
$rowid$     768                 15676428                505101  505101

MERGE_COUNT READ_COUNT  WRITE_COUNT LAST_MERGE_TIME
0           7           505101      2016-12-20 03:15:41.986


Easy to see: the whole thing ran for 386 seconds and created a total of 505.101 record versions in the delta store.
Note that during this load NO delta merge happened. This is because we never committed the transaction at all, and the delta merge won’t “relocate” open update transactions to a new delta store.



Run 2 – AUTOMERGE on, commit after each outer loop (with_commit = 1):

DURATION
349

NUM_ROWS    MIN(ID) MAX(ID)
5001        0       5000

COLUMN_NAME MEMORY_SIZE_IN_MAIN MEMORY_SIZE_IN_DELTA    COUNT   DISTINCT_COUNT
ID          723716              9132                    5001    5001
AAA         741076              10052                   5001    5001
BBB         711828              9132                    5001    5001
$trex_udiv$ 273432              1912                    0       0
$rowid$     1504                9132                    5001    5001

MERGE_COUNT READ_COUNT  WRITE_COUNT LAST_MERGE_TIME
5           30          505111      2016-12-20 03:40:22.85

START_TIME              MOTIVATION  EXECUTION_TIME  MEMORY_MERGE  MERGED_DELTA_RECORDS  SUCCESS
2016-12-20 03:36:22.166 AUTO        54              FALSE         80016                 TRUE
2016-12-20 03:37:22.24  AUTO        88              FALSE         90018                 TRUE
2016-12-20 03:38:22.349 AUTO        119             FALSE         85017                 TRUE
2016-12-20 03:39:22.49  AUTO        157             FALSE         85017                 TRUE
2016-12-20 03:40:22.67  AUTO        186             FALSE         85017                 TRUE

Here we see that, with commits in between, the whole thing only took 349 seconds.
As we committed the transaction a couple of times, the automerge had a chance to become active – and did so roughly every minute (5 times).

Now, why is this variant a bit faster overall? The effort to find the currently active record versions grows with the number of versions in the delta store.
With the automerge in between, this number stayed consistently below 100.000 versions, while the first run had to face an increasing number of versions to check, between roughly 100.000 and 505.101.


Finally, the version of data loading that is similar to SAP BW’s approach.
I disabled the automerge via

alter table merge_test disable automerge;

and ran the test again:


Run 3 – AUTOMERGE disabled, commit after each outer loop, manual merge at the end:

DURATION
325

NUM_ROWS    MIN(ID) MAX(ID)
5001        0       5000

COLUMN_NAME MEMORY_SIZE_IN_MAIN MEMORY_SIZE_IN_DELTA    COUNT   DISTINCT_COUNT
ID          1064                3733044                 505101  5001
AAA         0                   17607386                505101  -1
BBB         0                   5571068                 505101  -1
$trex_udiv$ 232                 8810536                 0       0
$rowid$     768                 15676428                505101  505101

MERGE_COUNT READ_COUNT  WRITE_COUNT LAST_MERGE_TIME
0           15          505101      2016-12-20 03:49:36.914

merge delta of merge_test;

START_TIME              MOTIVATION  EXECUTION_TIME  MEMORY_MERGE  MERGED_DELTA_RECORDS  SUCCESS
2016-12-20 03:56:09.435 HARD        46              FALSE         5001                  TRUE

With 325 seconds, this is the fastest run.
Looking at the time the manual delta merge took, we find that it’s still faster than the last delta merge of the automerge example.

This is – again – due to the fact that there are now a lot fewer concurrent record versions in the table.
Note that MERGED_DELTA_RECORDS shows the number of valid (committed) records that have been moved from delta to main storage.
If I update the same record a hundred times before I commit, I would only get one record merged into the main store.

Now, why do we see a better performance in this case?

To illustrate the resource usage during our loads, see the following diagrams:

In our first case, we see that the memory usage for the delta store as well as the CPU usage for the UPSERT gets larger over time.
Sometime after the last commit is done, an automerge gets triggered, leading to additional CPU and a lot more memory usage.
The result is a relatively well-compressed table (last little blue square).

Note that the number of uncommitted record versions (little orange squares) increases during the whole time.

With the second approach (commits in between and automerge on), we see that CPU and memory usage during the load stay moderate. Only when the merge is performed do memory and CPU usage increase, and the whole process has to wait for the merge to switch over to delta2 in order to continue.

Noteworthy here is that the resulting table can easily be larger than the table sizes produced by the other methods. This is because the column compression algorithms are determined during the first delta merge and won’t change easily afterwards.
This can lead to a situation where the compression algorithms are determined on a subset of data that is not representative of the total (think of loading data with timestamps or dates in sequential order – the first merge may only see one specific date).
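To see which compression algorithm was actually chosen per column, one can check the M_CS_COLUMNS monitoring view (a sketch, reusing the merge_test table from above):

```sql
select column_name, compression_type, count, distinct_count
  from m_cs_columns
 where table_name = 'MERGE_TEST'
   and schema_name = current_user;
```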

With the last option, we see a very similar pattern to the first case.
The only difference here is that the final merge was manually triggered.

So with AUTOMERGE disabled and regular commits we get the best of everything:

  • no waits due to delta merges
  • no increased runtimes due to a high number of outdated uncommitted record versions
  • well-compressed table (since the compression optimisation can work on the whole data set).

This works well, as long as the resulting delta merge can be done within the available memory.
And even if other memory structures would need to be unloaded to allow the delta merge to happen, this would have to happen only once and not several times during the loading process.

Smarter merge

I mentioned that this third approach is similar to what SAP BW does, but this is only half the truth.
Instead of manually sending a

merge delta of merge_test;

which gives the hard instruction to perform a delta merge right now, SAP BW uses a “smart merge”.
The idea here is that, instead of forcing a delta merge, the application lets HANA evaluate whether a merge is worthwhile, given the current memory statistics and the merge decision functions.

With this flexible approach, the application does not need to care about the specifics of how and when a merge should take place, but can still trigger HANA to take care of it.
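Syntactically, such a smart merge request looks like this (per the HANA SQL reference; HANA may then decide that no merge is necessary and simply skip it):

```sql
merge delta of merge_test with parameters ('SMART_MERGE' = 'ON');
```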


Manually controlling the delta merge can improve the performance of the load process and reduce the overall system load during data loading, since multiple merges are avoided. Delta merges are still extremely resource-intensive operations that can have a massive impact on the whole HANA system, so it’s a good idea to have them happen as seldom as possible.
This, in combination with a controlled approach to COMMITTING loaded data, can, as shown above, have quite an effect on the runtime of the data loading.

There you go, now you know.