Buffer Get and Release

Disclaimer: I'm getting carried away here and most folks will want to bypass this post now.

I wonder if the following is worthwhile:

Recent Buffer chat got me thinking. Here are some 'facts':
o Buffer create/delete is particularly slow compared to Buffer appending ("+=") and Buffer clearing ("="), which encourages the re-use of perpetual Buffers.
o Many functions need to use a Buffer as a tool for manipulating strings, but the contents of the Buffer are not needed once the function ends.
o There are recursive functions needing such temporary Buffers.

I wonder if I could write functions to replace use of "Buffer create()" and "delete(Buffer)" with functions that allocate from, and release back to, a pool instead. Thus a function that needs a temp Buffer can allocate at the start and release at the end, where it would normally create and delete; or it can use these instead of that notion I embraced about declaring and creating the Buffer globally, even though only the one function uses it.

Skip    skUnclaimed = create()    // KEY: 'int' slot, DATA: 'Buffer'; the pool of unclaimed Buffers
int     nUnclaimed  = 0

Buffer  fBuf_Create()
{     // Return an unclaimed empty Buffer, 'creating' one only if the pool is empty.
        Buffer  buf = null
        if (nUnclaimed > 0)              // pop from the pool
        {  find(skUnclaimed, nUnclaimed, buf)
           delete(skUnclaimed, nUnclaimed)
           nUnclaimed--
        }
        if (null buf) buf = create()     // pool empty - a new one
        return(buf)
}     // end fBuf_Create()

Buffer  fBuf_Create(int Size)
{     // Overloaded: make sure the Buffer starts at the given size.
        // Note: I am unsure about doing this...
        Buffer  buf = fBuf_Create()
        if (Size >= 0) length(buf, Size)
        return(buf)
}     // end fBuf_Create(Size)

void    fBuf_Delete(Buffer &buf)
{     // Release the Buffer back to the pool for use by other functions.
        setempty(buf)                    // sets length to zero, keeps the allocation
        nUnclaimed++
        put(skUnclaimed, nUnclaimed, buf)
        buf = null      // clears only the caller's reference; the pool entry is unaffected
}     // end fBuf_Delete()

void    fBuf_ReleaseAll()
{     // Really delete all pooled Buffers.
        // Calling programs should call this before they exit,
        //      and possibly after other intense computing times.
        Buffer  buf
        for buf in skUnclaimed do delete(buf)
        delete(skUnclaimed)
        skUnclaimed = create()
        nUnclaimed  = 0
}     // end fBuf_ReleaseAll()
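
A function needing a scratch Buffer would then look something like this (a sketch against the functions above; 'reverseOf' is just an illustrative name):

string  reverseOf(string s)
{     // Example use of the pool: reverse a string via a scratch Buffer.
        Buffer  buf = fBuf_Create()
        int     i
        for (i = length(s) - 1; i >= 0; i--) buf += s[i]
        string  result = stringOf(buf)
        fBuf_Delete(buf)      // released to the pool, not destroyed
        return(result)
}     // end reverseOf()

Recursion works too, since each call pops its own Buffer from the pool.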


If the above is worthwhile for huge programs or for use by huge libraries, then I suppose we could figure out the "pop" and "push" mechanics.

Yes, we could use a clever Mathias trick of actually overwriting the existing "create/delete" functions with the ones above, reducing the need to modify some existing code.

  • Louie


llandale - Wed Aug 10 14:30:36 EDT 2011

Re: Buffer Get and Release
Mathias Mamsch - Thu Aug 11 08:40:44 EDT 2011

Hey Louie,

I would go even one step further. Memory allocation is slow for sure; the bad thing is, it gets much slower (by a factor of 100-1000) if you create more than 5000 objects in DOORS (which is not much). Example: you store the contents of one attribute in a Buffer for each object of a module and put them into a Skip ... BOOM! For middle-sized modules your program will get a HUGE performance decrease.

So recycling buffers is not sufficient to get around the performance drop. I implemented a StringBuffer that will store a couple of strings all in one Buffer, and store the string start positions in an array. This way you will have further advantages:

  • You will only use one Buffer to store everything (no unnecessary allocations), just appending each string to the buffer
  • You will only have one allocated object, so other allocations will not experience a performance drop.
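
Roughly, such a StringBuffer amounts to one store Buffer plus an index of start positions. A minimal sketch of the idea (illustrative names; not the actual StringBuffer implementation):

Buffer  bStore   = create(65536)      // all strings live concatenated in here
Array   arIndex  = create(1000, 2)    // col 0: start offset, col 1: length
int     nStrings = 0

int addString(string s)
{     // Append s to the single store Buffer and record where it starts.
        put(arIndex, length(bStore), nStrings, 0)
        put(arIndex, length(s),      nStrings, 1)
        bStore += s
        nStrings++
        return(nStrings - 1)
}

string getString(int idx)
{     // Read a stored string back out via its recorded start/length.
        int ofs = (int get(arIndex, idx, 0))
        int len = (int get(arIndex, idx, 1))
        if (len == 0) return("")
        return(bStore[ofs:ofs + len - 1] "")
}

Only two allocated objects exist no matter how many strings are stored.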

If you are interested I can post some code.

Regards, Mathias

Mathias Mamsch, IT-QBase GmbH, Consultant for Requirement Engineering and D00RS

Re: Buffer Get and Release
llandale - Fri Aug 12 15:21:09 EDT 2011

Mathias Mamsch - Thu Aug 11 08:40:44 EDT 2011

".. gets much (factor 100-1000 times) slower if you create more than 5000 Objects in DOORS (which is not much)" WHAT? Are you saying:
[] Creating the 5000th Buffer/Skip/Whatever causes DOORS to degrade, even if you've deleted many already? i.e. after "for i 0-4999 {buf = create(); delete(buf)}" what follows is degraded.
[] Having 5000 Buffers/Skips/Whatever in existence at one time causes the degrade.
[] 4999 does not?
[] What is "Whatever"?
[] There is some XFLAG or pragma or whatever that can fix this, yes? lol

Golly, I think I've been experiencing this on some old code used on some other huge project I'm not allowed to see.

This is serious trouble I think.

  • Louie

Re: Buffer Get and Release
Mathias Mamsch - Tue Aug 16 04:29:07 EDT 2011

llandale - Fri Aug 12 15:21:09 EDT 2011

It is indeed serious trouble for all DXL programs dealing with large data sets. This made me revive the Array for a lot of datasets where I had used nested Skips, just to get around the performance leak. If you run the attached DXL script, you will get an output like the below.
 

Tick    Interval        Total
15      1000    1000
31      1000    2000
78      1000    3000
125     1000    4000
156     1000    5000
203     1000    6000
250     1000    7000
...
27640   1000    36000
29531   1000    37000
31484   1000    38000
33500   1000    39000

 


It shows how many objects (Counter) can be created in which time. Each line (step) is the creation of 1000 objects. As you can see from the numbers: the first 1000 objects take something like 15 ms (below the measurable time window of DXL). If you compare the difference between 5000 and 6000 objects (another 1000 created) you already have 50 ms (still reasonably fast). Now put this into an Excel sheet and draw a graph (Y = Tick, X = Counter). You will get something like the attached jpg.

From the JPG you can see: after a certain number of allocations (obviously CPU dependent; on another, slower computer the limit in my last test was 5000, on the computer where I did the test now it seems to be somewhere around 12000) the time for allocating another object rises linearly. For my computer, with 40000 objects already created each allocation will take somewhere near 2 ms! This is the boundary I was talking about. As soon as you cross this boundary, your program will get slower and slower with every allocation.

Think about a string manipulation function that allocates a temporary buffer, does some replacements, deletes the buffer and returns the result. If you run it without any allocations present, it will be super fast and return far below 1 ms (like 120 µs). If you run the same function with 40000 objects allocated, the allocation and the deletion will eat probably 4 ms, slowing the function down by a factor of 30-40.

So the performance tip of the week is: each DXL program that deals with large datasets should be designed in a way that the number of allocations needed does not depend on the size of the dataset. Otherwise it will suffer a performance penalty due to DOORS memory management.

The reason for this slowdown, by the way, is that the DXL interpreter keeps track of the allocated objects in a simple linked list, so inserting an item into the list takes more time the longer the list grows. And when I say allocated objects I mean: Buffer, Skip, ModuleVersion, Regexp, Array, OleAutoObj, OleAutoArgs, ... So it should not matter what kind of object you create; the performance penalty should be the same.

Regards, Mathias


Mathias Mamsch, IT-QBase GmbH, Consultant for Requirement Engineering and D00RS


Attachments

attachment_14672594_Memory.zip

Re: Buffer Get and Release
llandale - Wed Aug 17 09:46:19 EDT 2011

Mathias Mamsch - Tue Aug 16 04:29:07 EDT 2011


I sure hope you mean "every data structure in DOORS that is 'create'd" and hope that 'ModuleVersion' was a typo. If not then there is a much bigger problem since you cannot de-allocate that.

I see it only applies to the existence of these objects. Inserting a "delete(b)" at the bottom of the loop reveals no increase in interval times. Except sporadically for some reason it takes 16 extra ticks; perhaps something else is executing on the client.

Tweaked the code, and I think I successfully added the notion of finding out how much slower dealing with the created objects gets vis-a-vis how many exist at that time. So in the middle of the interval I do some looping through a Skip, looking at the Buffers; then proceed with the outer loop. Code below. Here are the printed results, which I slimmed down: the 1st column is the counter, the 2nd is how long the 1000-object interval took, the 3rd is how much longer it took than the previous interval (shows how fast it's slowing down), and the last column is how long finding stuff takes when there are Counter Buffers in existence.

max: 200000    interval = 1000
Counter Ticks   Delta   Find
1000    0       0       0
2000    31      31      0
3000    32      1       15
4000    32      0       15
5000    47      15      16
6000    62      15      16
7000    62      0       16
8000    94      32      15
9000    94      0       16
10000   109     15      16
...
30000   1078    172     78
...
40000   2485    172     109
...
50000   3562    -48     141
...
60000   4453    63      156
...
70000   5218    93      203
...
80000   6063    110     312
...
90000   6937    109     547
...
100000  7687    62      922
...
110000  8500    47      1391
...
120000  9328    78      1968
...
130000  10109   63      2328
...
140000  10922   94      2813
...
150000  11688   63      3218
...
160000  12485   63      3640
...
170000  13235   17      3984
...
180000  14063   78      4312
...
190000  14828   47      4641
191000  15000   172     4672
192000  15015   15      4750
193000  11922   -3093   4735
194000  7609    -4313   4781
195000  7782    173     4812
196000  8094    312     4844
197000  8234    140     4859
198000  8422    188     4891
199000  8703    281     4937
200000  8860    157     5015


Looking at the 80,000 mark, it appears that the vast majority of time is spent creating (2nd column) and far less doing work while that many are created (last column). But as we go down, the create increase stays steady while the find gets far slower. Thus Mathias' conclusion that we MUST limit these 'creates' is important.

I note with much curiosity the 194,000 mark, where suddenly the creates take far less time. I've seen something like that before and tried to get Telelogic to admit that there is indeed a one-time shot at garbage collection, but got neither confirmation nor denial on that.

So perhaps we can chat about techniques for holding back this proliferation, perhaps how to hold information about all the Objects in a module without creating a data structure for each. My immediate issue is that I need to keep track of all the outgoing links for all objects in the module, figuring to mark the ones that should no longer exist and later delete them.

 

// Modified by Landale
pragma runLim,0
 
int       maxObjects = 200000
int     interval = 1000
int     Counter  = 0
 
Skip    skpBuffs = create()             // KEY: 'int' sequence, DATA: 'Buffer'
Skip    skpInterval = create()          // ditto, only has interval entries
Buffer  bCreate, bFound
Buffer  bufResults = create()
bufResults      += "max: " maxObjects "\tinterval = " interval "\n"
bufResults      += "Counter\tTicks\tDelta\tFind\n"
 
int     IntervalStartTime = getTickCount_(),
                TicksThisInterval = 0, IntervalDelta, TicksLastInterval = 0,
                i, FindStartTime, FindEndTime
 
for (Counter=1; Counter<=maxObjects; Counter++)
{
        // Skip sk = create()
        bCreate = create(100)
        put(skpBuffs, Counter, bCreate)
        put(skpInterval, Counter, bCreate)
        // string s = "abc" "" 
        if ((Counter)%interval == 0) {
                //tick = getTickCount_() - IntervalStartTime
                //Counter += counter
                // print "Interval: " tick " -> Count = " counter " Gesamt:" Counter "\n"
                TicksThisInterval       = getTickCount_() - IntervalStartTime
                IntervalDelta   = TicksThisInterval - TicksLastInterval
 
                FindStartTime   = getTickCount_()
                for (i=(Counter-interval)+1; i<=Counter; i++)
                {  find(skpBuffs, i, bFound)
                   bFound       += "a"
                }
                delete(skpInterval)
                skpInterval     = create()
                FindEndTime             = getTickCount_() - FindStartTime
 
                bufResults += Counter "\t" TicksThisInterval "\t" IntervalDelta "\t" FindEndTime "\n"
 
                                // Get ready for next loop:
                IntervalStartTime       = getTickCount_()
                TicksLastInterval       = TicksThisInterval
        }
// delete(bCreate)
}
 
// bufResults += "Interval: " (getTickCount_() - IntervalStartTime) "  Gesamt:" (Counter) "\n"
print tempStringOf(bufResults)
delete(bufResults)
for bCreate in skpBuffs do{delete(bCreate)}
delete(skpBuffs)
delete(skpInterval)

 

Re: Buffer Get and Release
Mathias Mamsch - Thu Aug 18 04:47:14 EDT 2011

llandale - Wed Aug 17 09:46:19 EDT 2011


I think you are mixing up some timings here ... In my previous post I was talking only about the time for allocations (create / delete). And to be clear, as you already stated, this only refers to allocated objects; as soon as you delete them there is no performance loss anymore.

The speed of the operations on skips or buffers does NOT depend on the number of allocated objects. You can test this with the following code for skips or buffers:

int MAX_ALLOCATIONS = 50000
int INTERVAL        = 1000
int iCurrentAllocations = 0 
Array arAllocs = create(MAX_ALLOCATIONS, 1) 
 
// this function will allocate iCount objects and store them in the arAllocs array ...
void doAllocations(int iCount) {
   int i; for i in 0:(iCount-1) do {
       Skip val = create() 
       // Buffer val = create(100) 
       put (arAllocs, val, iCurrentAllocations + i, 0) 
   }
   iCurrentAllocations += iCount
}
 
pragma runLim,0
 
print "Allocated\tDelta\n"
 
// how many times shall we repeat the search
int TESTLOOP = 500000 // use 500000 for buffer test, 20000 for skip test
 
// prepare test data for Skip find: create a skip with 10000 entries
Skip skTest = create() 
{ int i; for i in 1:10000 do put(skTest, i,i) }
 
// prepare test data for buffer append
string sAppend = "1234567890"
Buffer buf = create(TESTLOOP * (length sAppend) + 100) // make sure the buffer does not overflow
 
 
while (iCurrentAllocations < MAX_ALLOCATIONS) { 
    doAllocations INTERVAL // make a couple of allocations => 1000 objects more

    // make this loop as efficient as possible: 
    // --> only put the test code in there 
    int tickStart = getTickCount_() 
    buf = ""
    int i,j; for i in 0:TESTLOOP do { 

        // find a value in the middle of the skip list a lot of times, to measure
        // the speed of find
        // find(skTest, 5000, j) 

        // append a string to a buffer a lot of times to measure the speed of append
        buf += sAppend
    }
    int timeDelta = getTickCount_() - tickStart

    print iCurrentAllocations "\t" timeDelta "\n"
}

 


The speed of a find(Skip, key, ...) only depends on the position of the key in the (sorted) skip. After all, a skip is nothing more than a linked list with a 'fast-forward lane'. Normally for skip lists you could achieve logarithmic find times. For old DOORS versions Telelogic made a bad implementation which only had linear times. IBM, however, seems to have replaced the skip implementation, so this might have improved in DOORS 9+.

The speed of buffer append should also only depend on the length of the appended string, not on the current buffer length and not on the allocations. However, if the buffer capacity is not sufficient, a new memory block will be allocated; I don't think this suffers from the same penalties, as the internal memory of the buffer is not a DXL-allocated object.

The obvious solution for saving allocations is to revive the Array. Arrays have the big advantage of resizing automatically as soon as you go beyond their capacity. They are also incredibly fast. So you can store every link in one line of the array (columns = link properties, e.g. source object, ...), and then use one skip list that stores the position in the array where the links of each object start (key = object, value = line nr of the first link in the array). Then you will have only two allocated objects to store all your links. To retrieve the links of an object, you get the start position in the array and read the links until the source object is no longer the object you are looking for. Note that reading from an array is much faster than finding stuff in a skip.
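
A sketch of that scheme (invented names; using absolute numbers as the object key, and assuming links are inserted grouped by source object):

Array   arLinks = create(10000, 2)    // col 0: source absno, col 1: target absno
Skip    skFirst = create()            // KEY: source absno, DATA: first row in arLinks
int     nLinks  = 0

void    storeLink(int srcAbs, int tgtAbs)
{     // Record one link; the first row per source goes into the index skip.
        int first
        if (!find(skFirst, srcAbs, first)) put(skFirst, srcAbs, nLinks)
        put(arLinks, srcAbs, nLinks, 0)
        put(arLinks, tgtAbs, nLinks, 1)
        nLinks++
}

void    printTargets(int srcAbs)
{     // Walk rows from the first entry until the source column changes.
        int row
        if (!find(skFirst, srcAbs, row)) return
        while (row < nLinks && ((int get(arLinks, row, 0)) == srcAbs))
        {  print srcAbs " -> " (int get(arLinks, row, 1)) "\n"
           row++
        }
}

Just two allocated objects hold all the links, however large the module.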

Hope that clears matters up. Regards, Mathias


Mathias Mamsch, IT-QBase GmbH, Consultant for Requirement Engineering and D00RS


Re: Buffer Get and Release
Mathias Mamsch - Thu Aug 18 04:59:24 EDT 2011

Mathias Mamsch - Thu Aug 18 04:47:14 EDT 2011

Oh and by the way: ModuleVersions (sourceVersion, targetVersion, moduleVersion, ...): all those perms DO return allocated objects, and they need to be deleted by the delete(ModuleVersion) perm. You can test for yourself:
 

int *::+(int *ptr1, int ofs) { int *ptr2 = ptr1; ptr2+=ofs; return ptr2 }
int *::@(int *ptr, int ofs) { int ad = *(ptr + ofs); int *ptr2 = addr_ ad; return ptr2 }
 
int *getCurrentDXLContextPtr () {
    DB x = create ""
    int *ptr = addr_ x
    int *result = ptr @ 48
    destroy x
    return result
}
 
int *getMemoryBlockNodes (int *cc) { return cc @ 0x74 }
int *nextNode (int *memNode) { return memNode @ 8 }
 
int countAllocatedObjects() {
    int *memBlocks = getMemoryBlockNodes getCurrentDXLContextPtr() 
    int count = 0
    while (!null memBlocks) {
        memBlocks = nextNode memBlocks
        count++
    }
    return count
}
 
// comment me in to see the object counts increase; 
// Skip sk = create(); Buffer buf = create() 
 
print "Allocated Objects Before doing stuff:" countAllocatedObjects() "\n"
 
if (null current) error "Please open a module for this test!" 
 
ModuleVersion mod = moduleVersion current 
print "Allocated Objects with one moduleVersion:" countAllocatedObjects() "\n"
 
ModuleVersion mod2 = moduleVersion current 
print "Allocated Objects with two moduleVersions:" countAllocatedObjects() "\n"
 
delete mod; delete mod2
print "Allocated Objects after deletions:" countAllocatedObjects() "\n"

 


This code should work on all DOORS versions (tested on DOORS 8.2 and 9.3). By the way, you can use it to find the real number of allocated objects in any piece of code, to check whether you are suffering from the allocation performance penalty (number of allocations > the magic number on your machine).

Regards, Mathias


Re: Buffer Get and Release
llandale - Thu Aug 18 16:36:14 EDT 2011

Mathias Mamsch - Thu Aug 18 04:59:24 EDT 2011


I never saw "delete(ModuleVersion)" before, and missed it yesterday when I looked for it.

So, an allocated object is anything with an associated "delete" or "destroy"?

I see that if you delete(mod); mod = mod2 you effectively re-allocate it. Scary.

Re: Buffer Get and Release
Mathias Mamsch - Thu Aug 18 17:02:07 EDT 2011

llandale - Thu Aug 18 16:36:14 EDT 2011

No, you do not reallocate it again. Why would you think that?

I like to think of DXL variables like an arrow/pointer pointing to data. When you do
 

|       This is only    This one creates the Skip Object
|         an arrow         and returns an arrow to it
|            |                      |
|            V               /------^--------\
Skip         sk      =           create()
 
// copy the arrow ...
Skip sk2 = sk
 
 // delete the data, sk and sk2 both point to an invalid skip
delete sk
 
// create another skip without even storing a pointer. A new object is
// allocated but no longer accessible through a pointer.
create Skip

 


So you allocate a new object every time you call certain perms like Skip create(). There are even perms which allocate objects that have no deallocation function in DXL, like 'Baseline'. Try doing:

 

 

Baseline b = baseline (1,0,"") // this will allocate a baseline object. 
// how to free it?    
// delete b <- this is not a destructor, but will try to delete the baseline from the module! :-)


and check with the 'allocated objects' code. Now THAT is scary! I cannot even tell you which perms create allocations and which don't. Most of them do, I guess.

Regards, Mathias


Re: Buffer Get and Release
llandale - Thu Aug 18 17:36:47 EDT 2011

Mathias Mamsch - Thu Aug 18 17:02:07 EDT 2011


I know I'm thick about this internal stuff, also thick about "*" and "addr_", and frankly cannot follow your code, but I presume it's following a linked list.

I get these results from your tweaked code below:

#Units    After...
0       .. Start
1       .. mod = 
2       .. mod2 = 
1       .. delete(mod)
2       .. mod = mod2
1       .. delete mod2
0       .. delete mod


Seems to me that there is 1 unit left after <delete(mod)>, but there are again 2 after <mod=mod2>, and still one left after <delete mod2>. Seems like another one was allocated.

And it seems to me that if you can follow this linked list, you could also de-allocate any such allocated object, like a "Baseline" that has no "delete" command, by removing it from the list.

  • Louie


/*
    Posted by Mathias Mamsch 18-Aug-2011
    http://www.ibm.com/developerworks/forums/post!reply.jspa?messageID=14673586
    Tweaked by Landale
*/

int *::+(int *ptr1, int ofs) { int *ptr2 = ptr1; ptr2 += ofs; return ptr2 }
int *::@(int *ptr,  int ofs) { int ad = *(ptr + ofs); int *ptr2 = addr_ ad; return ptr2 }

int *getCurrentDXLContextPtr () {
    DB x = create ""
    int *ptr = addr_ x
    int *result = ptr @ 48
    destroy x
    return result
}

int *getMemoryBlockNodes (int *cc) { return cc @ 0x74 }    // 0x74 = 116
int *nextNode      (int *memNode) { return memNode @ 8 }

int countAllocatedObjects() {
    int *memBlocks = getMemoryBlockNodes getCurrentDXLContextPtr()
    int count = 0
    while (!null memBlocks) {
        memBlocks = nextNode memBlocks
        count++
    }
    return count
}

void DumpCount(string Label)
{   // dump the number of allocated units with the Label
    print countAllocatedObjects() "\t.. " Label "\n"
}

print "#Units\tAfter...\n"
// comment me in to see the object counts increase:
// Skip sk = create(); Buffer buf = create()
DumpCount("Start")

if (null current) error "Please open a module for this test!"

ModuleVersion mod = moduleVersion current
DumpCount("mod = ")

ModuleVersion mod2 = moduleVersion current
DumpCount("mod2 = ")

// print "\t\t(mod == mod2): " (mod == mod2) "\n"

delete mod
DumpCount("delete(mod)")

mod = mod2
DumpCount("mod = mod2")

delete mod2
DumpCount("delete mod2")
delete mod
DumpCount("delete mod")

Re: Buffer Get and Release
Mathias Mamsch - Thu Aug 18 17:58:05 EDT 2011

llandale - Thu Aug 18 17:36:47 EDT 2011


Argh ... now you scared me :-) OK, what I said is the normal case ... UNFORTUNATELY, there are a lot of 'custom' ::= operators defined in DXL.
 

ModuleVersion ::= (ModuleVersion &, ModuleVersion)

 


seems to be such a case, with special functionality to create a COPY of that object. I would assume that every object that has this special operator defined, with a single argument and matching return type, might show that behaviour.

In all other cases the default one:

_k ::= (_k&, _k)



will kick in, which only creates a new reference. So the same behaviour might be observed for 'Comment' and 'Discussion', from what I can see in the perms list. Skips, for example, do not show that behaviour (I don't dare to test, so as not to lose the last bit of sanity). That makes things even worse... Good find ;-) Regards, Mathias
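Putting llandale's measurements and the custom ::= together, the ModuleVersion case boils down to this (a hedged reading of the thread, not documented behaviour):

```dxl
// Because ModuleVersion ::= allocates a COPY, every assignment is
// its own allocation and needs its own delete.
ModuleVersion a = moduleVersion current   // allocation #1
ModuleVersion b = a                       // allocation #2 (a copy!)
delete a                                  // frees #1
delete b                                  // frees #2 -- both deletes needed
```

For types with the default _k ::= (Skip, Buffer, ...), the assignment would only copy the reference and no extra delete would be required.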


Re: Buffer Get and Release
llandale - Fri Aug 19 15:26:13 EDT 2011

Mathias Mamsch - Thu Aug 18 17:58:05 EDT 2011


Surely someone with a full and clever understanding of "*" and "addr_" and the internal data structures could write this function, yes?

void  delete(Comment &cmnt)
{
}

 

  • Louie

 

Re: Buffer Get and Release
Mathias Mamsch - Fri Aug 19 18:09:53 EDT 2011

llandale - Fri Aug 19 15:26:13 EDT 2011


That is a tricky subject - I am not sure an understanding of pointers (*) and the ability to give a DXL variable another type (using addr_) helps you with this. The problem here is: as I already stated, DOORS keeps its memory nodes in a simple linked list. The list is stored in the DXL context. So my code basically follows this linked list and counts its entries. (By the way: I get the pointer to the current DXL context by creating a dialog, which stores a reference to its DXL context, reading the value from there, and destroying it right away. From the DXL context I get a reference to the memory allocation list. The fancy @ and * stuff is only for making it easier to say 'take this memory address, advance 116 bytes and read whatever is stored there'.)

Anyway: An entry in the memory allocation list consists of
  • A pointer to the next node (it is a list after all)
  • A pointer to the object data (i.e. for a skip list, the memory location where the skip is stored).
  • A pointer to a destructor function for this object (i.e. a function that will clean up the object).

So what the delete(Skip) perm does, for example, is
a) call the destructor function of the skip list
b) unlink the skip's entry from the memory allocation list

Now we could manually unlink any variable from the memory allocation list, effectively leaking the memory and creating a true global variable whose value persists after the DXL ends. This would get around the performance penalty but still leak the memory. To really write a delete(Comment &) function you would need to call the destructor function, but we cannot call it from DXL directly. So this would require another bad hack.
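To make the node layout concrete, here is a read-only sketch that finds the list node owning a given object, reusing the pointer helpers from the code earlier in the thread. Only the next-pointer offset (8) is established by that code; the object-pointer offset used here (4) is a pure guess:

```dxl
// Sketch only, not for production: walk the allocation list and return
// the node whose stored object pointer equals objAddr (obtained e.g.
// as (addr_ someSkip) int). Offset 4 for the object pointer is a
// guessed placeholder; offset 8 for 'next' comes from nextNode above.
int *findNodeFor (int objAddr) {
    int *node = getMemoryBlockNodes getCurrentDXLContextPtr()
    while (!null node) {
        if (*(node + 4) == objAddr) break
        node = nextNode node
    }
    return node   // null if no node stores objAddr
}
```

Actually unlinking the node would additionally require writing the previous node's next pointer, which these read-only helpers cannot do.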

Is this of immediate concern for you? To delete Comment, Discussion or Baseline objects? Regards, Mathias


Re: Buffer Get and Release
llandale - Mon Aug 22 14:03:38 EDT 2011

Mathias Mamsch - Fri Aug 19 18:09:53 EDT 2011

If some data type creates a structure, it may be beneficial to be able to delete it. But mostly I just wanted to see if there is a limit to your cleverness.

You cannot call a function given the address of that function?

Re: Buffer Get and Release
Mathias Mamsch - Mon Aug 22 15:58:00 EDT 2011

llandale - Mon Aug 22 14:03:38 EDT 2011

There is just a limit to my time, unfortunately. Given endless time, there would probably be no limit to my nosiness ;-) Yes, I could call a machine-code function given its address (see this post: https://www.ibm.com/developerworks/forums/thread.jspa?messageID=14545698&#14545698), but as I said, this would require another bad hack. For this purpose it would be pretty pointless, because I would not want to have code like this in a production environment unless I could not avoid it at all. It is good to know about that stuff, but I guess that's enough for writing efficient DXL code.

But there are other purposes for which I consider implementing such a thing. Extending DOORS DXL with fast C or assembler code would allow a completely new way of interfacing with other applications. Instead of communicating over COM (writing a COM server for your application), you could talk directly to the other program's API. I am also thinking about fast export/data exchange/change management tools, which would become possible with fast string processing. If I had a good SHA1 function available from DXL, I could do awesome change management and lightning-fast comparison. Calling the Windows API could give a new way to build user interfaces (e.g. solve the DOORS IDLE problem).

But on the other hand, I guess in the end DXL will die sooner or later. So why bother ... Regards, Mathias


Re: Buffer Get and Release
llandale - Mon Aug 22 18:55:04 EDT 2011

Mathias Mamsch - Mon Aug 22 15:58:00 EDT 2011

Sounds like fear of an "unloaded gun" to me :) There's hope for you yet.

I don't believe DXL is going anywhere; IBM and the community have FAR too much invested in it to let it go away. If they can maintain COBOL and RPG, they can maintain DXL. Perhaps "frozen", however.

Re: Buffer Get and Release
SystemAdmin - Mon Aug 22 19:02:26 EDT 2011

I'm just getting back from vacation so I'm late to join the thread.

I have definitely seen this non-linear behavior as more and more objects are allocated and released. For certain repetitive applications, I got major performance improvement by emptying and reusing objects in lieu of deleting them and re-allocating new ones.

Is there a parallel to the database world and SQL connections? I know getting connections to a database is regarded as an "expensive" operation. As a result, many websites use a "connection pool" of connections that can be tuned per application need.

Would a similar utility be of use for DXL apps for Skip lists or Buffers?
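Such a pool could be sketched with core Buffer and Skip perms only, much like the fBuf_Create/fBuf_Delete idea at the start of this thread. BufGet, BufRelease, and BufDrain are made-up names, and an int-keyed Skip stands in for a stack:

```dxl
// Sketch of a simple Buffer pool: reuse released Buffers instead of
// paying for create()/delete() pairs.
Skip freeBufs  = create()   // int slot -> Buffer (used as a stack)
int  freeCount = 0

Buffer BufGet()
{   // hand out a pooled Buffer if one is free, else create a new one
    Buffer b
    if (freeCount > 0 && find(freeBufs, freeCount, b))
    {   delete(freeBufs, freeCount)
        freeCount--
    }
    else b = create()
    b = ""              // always hand it out empty
    return b
}

void BufRelease(Buffer b)
{   // return a Buffer to the pool instead of deleting it
    freeCount++
    put(freeBufs, freeCount, b)
}

void BufDrain()
{   // really delete all pooled Buffers; call before the script exits
    Buffer b
    for b in freeBufs do { delete b }
    delete freeBufs
    freeBufs  = create()
    freeCount = 0
}
```

The same pattern would work for Skips, with a setempty-style loop clearing the reused skip on the way out.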

Mathias, do you have an "object pool" manager, or was it something informal?

What is a good tool for monitoring different variations, to discover the best tuning for a situation? I've used some generic Microsoft monitoring software, but they changed the name a few years ago and I forgot what they call it now.

Re: Buffer Get and Release
Mathias Mamsch - Tue Aug 23 05:51:51 EDT 2011

SystemAdmin - Mon Aug 22 19:02:26 EDT 2011

I am pretty sure that there is no performance penalty if you delete all allocated objects properly. However, my experience is that most of the time you don't. Using the code above you can see whether you did; if you didn't, you will see your code run slower and slower, because allocations become more and more expensive. Just reusing objects only cures the symptom, not the cause. Sure, when you have 40000 allocated objects every allocation becomes very expensive, so reusing an existing object brings a big performance gain. But if you get rid of the allocations themselves (in most of my code I found unnecessary allocations, like ModuleVersions and such, which I simply did not free out of laziness), then allocations will be super fast and you will be happy again.

To detect the allocations I simply used the code above to check the number of allocated objects. I have a MemoryManagement library which tracks the most common allocations and deallocations, to tell me exactly where in the code a leak happens. This is accomplished by hooking all DXL functions that allocate or deallocate a variable. For OLE objects, for example, this looks like:
 

string oleGetOld (OleAutoObj o, string n, OleAutoArgs a, OleAutoObj& r) { return oleGet(o,n,a,r) } 
string oleGetOld (OleAutoObj o, string s, OleAutoObj &r) { return oleGet(o,s,r) }
string oleMethodOld (OleAutoObj o, string s, OleAutoArgs a, OleAutoObj &r) { return oleMethod(o,s,a,r) }
bool oleCloseAutoObjectOld (OleAutoObj& r) { return oleCloseAutoObject(r) }
 
string oleGet (OleAutoObj o, string n, OleAutoArgs a, OleAutoObj &r) { string res = oleGetOld(o,n,a,r); OleAutoObj rr = r; addAllocatedObject ((addr_ rr) int, "OleAutoObj"); return res } 
string oleGet (OleAutoObj o, string s, OleAutoObj &r) { string res = oleGetOld(o,s,r); OleAutoObj rr = r; addAllocatedObject ((addr_ rr) int, "OleAutoObj"); return res  }
string oleMethod (OleAutoObj o, string s, OleAutoArgs a, OleAutoObj &r) { string res = oleMethodOld(o,s,a,r); OleAutoObj rr = r; addAllocatedObject ((addr_ rr) int, "OleAutoObj"); return res  }
bool oleCloseAutoObject (OleAutoObj& r) { OleAutoObj rr = r; removeAllocatedObject ((addr_ rr) int, "OleAutoObj");  return oleCloseAutoObjectOld(r); }

 


The addAllocatedObject and removeAllocatedObject functions keep track of allocations and deallocations, and use dxlHere() to store the position in the code where each allocation and deallocation happens. Additionally, I check the list of memory nodes to see how many unknown allocations I have. For unknown allocations I keep adding hooks for the perms that are responsible for allocating and deallocating, until I find the piece of code that is leaking the objects. This is how I remove unnecessary allocations.

The second part of the optimization strategy is that you need new ways of memory management to cope with a large number of objects. Instead of storing structs in DxlObjects, for example, I now store them in arrays: each object is one line, each property one column. I then add housekeeping functions that reuse lines in the array. So instead of storing 40000 objects in one skip each, I allocate one array with 40000 lines.
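The housekeeping could be sketched like this; a minimal sketch assuming the standard Array and Skip perms, with an int-keyed Skip standing in for the free-line stack:

```dxl
// Sketch of the array-as-object-store idea: one line per object, one
// column per property; freed lines are remembered and reused, so the
// array (plus one skip) is the only allocation no matter how many
// "objects" live in it.
Array objStore  = create(3, 100)  // 3 properties; DXL arrays grow
Skip  freeLines = create()        // int slot -> int free line number
int   freeTop   = 0               // top of the free-line stack
int   highLine  = 0               // next never-used line

int allocObject () {
    // reuse a freed line if any, else take the next fresh one
    int line
    if (freeTop > 0 && find(freeLines, freeTop, line))
    {   delete(freeLines, freeTop)
        freeTop--
    }
    else { line = highLine; highLine++ }
    return line
}

void freeObject (int line) { freeTop++; put(freeLines, freeTop, line) }

// properties are then plain puts/gets on the object's line, e.g.:
// put(objStore, "some value", 0, line)
// string v = get(objStore, 0, line)
```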

Structs that have Buffers or Skips as properties are problematic: every struct instance will still allocate one or more buffers or skips, even though the structs themselves are all stored in one array. This is where additional datatypes are needed. For example, I wrote a 'StringBuffer' class which keeps a number of (immutable) strings and buffers together in one Buffer and stores the string start positions. But that is really an open problem. The goal would be a drop-in replacement for Skips or Buffers with the memory management replaced, so the number of allocations is minimal.

The last piece of the puzzle I am investigating is the DOORS string table. Despite my hasty comments in earlier posts, the string table is very much alive, and there is even a perm to print its contents: void printStrTab_() ;-) Unfortunately you will only see the output if you give DOORS its console back. I am still investigating how that thing really works, and when exactly and why a performance loss occurs, because I noticed you can create a great many strings without any performance loss. Also, string space is freed by DOORS under certain circumstances when the DXL ends: constructing 50MB of strings in a DXL will not make them stay alive after the DXL ends. So there are still many open questions, which I will be able to answer sometime later. I already have access to the string table, so now I need to do tests to find out what is going on. I will keep you posted.

Regards, Mathias


Re: Buffer Get and Release
llandale - Tue Aug 23 12:34:04 EDT 2011

Mathias Mamsch - Tue Aug 23 05:51:51 EDT 2011

I am pretty sure that there is no performance penalty if you delete all allocated objects properly. However my experience is, most of the time you don't. Using the code above you can see if you did. If you did, you will experience your code running slower, because allocations become more and more expensive. Just reusing objects does only cure the symptom, not the cause. Sure when you have 40000 allocated objects every allocation becomes very expensive, so reusing an existing object will bring you a big performance gain. But If you get rid of the allocations itself (in most of my code I found unnecessary allocations, like ModuleVersions and stuff, which I simply did not free out of lazyness), then allocations will be super fast and you will be happy again.

To detect the allocations I simply used the code above, to check the number of allocated objects. I have a MemoryManagement library which will track the most common allocations and deallocations to tell me exactly where in the code the leak happens. This is accomplished by hooking all DXL functions that allocate and deallocate a variable. For OLE objects for example this looks like:
 

string oleGetOld (OleAutoObj o, string n, OleAutoArgs a, OleAutoObj& r) { return oleGet(o,n,a,r) } 
string oleGetOld (OleAutoObj o, string s, OleAutoObj &r) { return oleGet(o,s,r) }
string oleMethodOld (OleAutoObj o, string s, OleAutoArgs a, OleAutoObj &r) { return oleMethod(o,s,a,r) }
bool oleCloseAutoObjectOld (OleAutoObj& r) { return oleCloseAutoObject(r) }
 
string oleGet (OleAutoObj o, string n, OleAutoArgs a, OleAutoObj &r) { string res = oleGetOld(o,n,a,r); OleAutoObj rr = r; addAllocatedObject ((addr_ rr) int, "OleAutoObj"); return res } 
string oleGet (OleAutoObj o, string s, OleAutoObj &r) { string res = oleGetOld(o,s,r); OleAutoObj rr = r; addAllocatedObject ((addr_ rr) int, "OleAutoObj"); return res  }
string oleMethod (OleAutoObj o, string s, OleAutoArgs a, OleAutoObj &r) { string res = oleMethodOld(o,s,a,r); OleAutoObj rr = r; addAllocatedObject ((addr_ rr) int, "OleAutoObj"); return res  }
bool oleCloseAutoObject (OleAutoObj& r) { OleAutoObj rr = r; removeAllocatedObject ((addr_ rr) int, "OleAutoObj");  return oleCloseAutoObjectOld(r); }

 


I have the addAllocatedObject and removeAllocatedObject which keep track of allocations and deallocations and will use dxlHere() to store the position in the code, where the allocation and deallocation happens. Additionally I check the list of memoryNodes to see how many unknown allocations I have. For unknown allocations I continuously add the perms that are responsible for allocating and deallocating so I can find the piece of code that is leaking the objects. This is how I remove unnecessary allocations.

The second part of the optimization strategy is that you need new ways of memory management to cope with a large amount of objects. Instead of storing structs in DxlObjects for example I store them in arrays now, each object one line, each property one column. I then add housekeeping functions which will reuse lines in the array. So instead of storing 40000 Objects in one skip each, I allocate one array with 40000 lines.

Problematic are structs that have Buffers or Skips as properties. Every struct instance will still allocate one or more buffers or skips, even though they are all stored in one array. This is where additional datatypes are needed. For example I wrote a 'StringBuffer' class which will keep a couple of (immutable) strings and buffers all in one Buffer and store the string starts. But that is really an open problem. The goal would be to have a drop-in replacement for Skips or Buffers with Memory Management replaced, so the number of allocations is minimal.

The last piece of the puzzle I am investigating in is the DOORS string table. Despite my hasty comments in earlier posts the string table is very much alive and there is even a perm to print its contents: void printStrTab_ () ;-) Unfortunately you will only see the output if you give DOORS its console back. I am still investigating on how that thing really works and when exactly and why a performance loss will occur. Because I noticed you can create really many strings without any performance loss. Also string space will be freed by DOORS under certain circumstances when the DXL ends. Constructing 50MB strings in a DXL will not make them stay alive after the DXL ends. So there are still many open questions which I will be able to answer somewhen later. I already have access to the string table, so now I need to do tests to find out what is going on. I will keep you posted.

Regards, Mathias

Mathias Mamsch, IT-QBase GmbH, Consultant for Requirement Engineering and D00RS

Good to know I'm not the only one that gets carried away.

But surely, if you can look for places that allocate and insert your clever code, can't
you just make sure there is a 'delete' associated with each one? I wonder how you find
"Comment" allocations that have no associated 'create', and what you do about them
since they have no 'delete'. When you find a rogue allocation, can you determine
which 'type' it is?

I wonder about the following functions:
[] Allocations_RememberCurrent() - finds and remembers all currently active allocations
[] Allocations_DeleteRogues() - finds all current allocations; those that were not
... remembered by the previous function are deleted.

So a main program can do this:
    get list of all modules to process
    for each module
    {   Allocations_RememberCurrent()
        process module, creating lots of allocations
        Allocations_DeleteRogues()
    }
Then a leak within a large module will slow down only that module, and speed picks
back up again when it's done.
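Louie's two functions can be modeled in Python (a sketch only, not DXL; the Allocations class and its method names paraphrase the functions named above): snapshot the live set before processing a module, then free everything that appeared since.

```python
class Allocations:
    """Toy model of snapshot-then-free allocation housekeeping."""

    def __init__(self):
        self.live = set()
        self.snapshot = set()

    def create(self):
        obj = object()
        self.live.add(obj)
        return obj

    def remember_current(self):
        # Allocations_RememberCurrent(): remember everything currently live
        self.snapshot = set(self.live)

    def delete_rogues(self):
        # Allocations_DeleteRogues(): free whatever was not remembered
        rogues = self.live - self.snapshot
        self.live -= rogues
        return len(rogues)

allocs = Allocations()
keep = allocs.create()          # allocated before the module loop, survives
allocs.remember_current()
for _ in range(5):
    allocs.create()             # leaks made while processing one module
print(allocs.delete_rogues())   # → 5
print(keep in allocs.live)      # → True
```

Mathias's caveat below still applies to the real thing: you would have to call the right delete perm per object type, and globals allocated inside the loop would be swept away too.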

  • Louie

Re: Buffer Get and Release
Mathias Mamsch - Tue Aug 23 15:50:36 EDT 2011

llandale - Tue Aug 23 12:34:04 EDT 2011

I think you did not get it right. I only need to include that MemoryManagement.inc include file at the start of the program. The file hooks (= replaces) all allocation functions to track where allocations happen. I do not need to change the code of the program at all: just include the file at the top and insert a GiveMeStatistics() at some sane place where I know that most of the allocations should have been freed (usually at the end). Then the include file will tell me exactly where I allocated a Skip, Buffer, ModuleVersion, Baseline, OleAutoObj, Module or Array anywhere in the code, or in the include code I called. This is a huge time saver for finding memory leaks in programs. And I need it only for debugging purposes; I can remove it after I have resolved the leaks.

For those Comments and such: if there is no delete function, I still hook the creates to see how many I have and where in the code I allocated them, so I can see if there is a problem. If so, I would need to reduce the allocations somehow, since I cannot just insert a delete.

Rogue allocations (that is, allocations by functions that I did not hook from the include file) are a problem, since I do not know where in the code they happened. In this case I would first find out the address of the destructor, and then go and search for the data type with that destructor. When I know the data type, I add the allocation and destruction functions to the include file. In the next run I will then know where the allocations happened.

As for your idea of storing the allocation list and restoring it after the run: you would have the same problem as before; you would need to free the objects, not just leak them. Therefore you could create a list of destructor addresses with their corresponding data types and, using that, find each object in the list and call the appropriate delete function. Might work. Problematic would be global variables that were allocated during the loop, like a skip list with error messages or the like: you could not decide which variables to keep. So I guess finding the places where the allocations happen and making sure they get deleted might be the better approach.

Regards, Mathias

Mathias Mamsch, IT-QBase GmbH, Consultant for Requirement Engineering and D00RS

Re: Buffer Get and Release
pommCannelle - Thu Dec 12 05:39:37 EST 2013

Mathias Mamsch - Tue Aug 16 04:29:07 EDT 2011

It is indeed serious trouble for all DXL programs dealing with large data sets. This made me revive the array for a lot of datasets where I had used nested skips, just to get around the performance leak. If you run the attached DXL script, you will get output like the following:

Tick    Interval Counter        Total
15      1000    1000
31      1000    2000
78      1000    3000
125     1000    4000
156     1000    5000
203     1000    6000
250     1000    7000
...
27640   1000    36000
29531   1000    37000
31484   1000    38000
33500   1000    39000

It shows how many objects (Counter) can be created in what time. Each line (step) is the creation of 1000 objects. As you can see from the numbers, the first 1000 objects take something like 15 ms (below the measurable time window in DXL). If you compare the difference between 5000 and 6000 objects (another 1000 created), you already have about 50 ms (still reasonably fast). Now put this into an Excel sheet and draw a graph (Y = Tick, X = Counter). You will get something like the attached JPG.

From the JPG you can see: after a certain number of allocations (obviously CPU dependent; in the last test I did, on a slower computer, it was 5000; on the computer where I did this test the limit seems to be somewhere around 12000), the time for allocating another object rises linearly. On my computer, with 40000 objects already created, each allocation takes somewhere near 2 ms! This is the boundary I was talking about. As soon as you cross this boundary, your program will get slower and slower with every allocation.

Think about a string manipulation function that allocates a temporary buffer, does some replacements, deletes the buffer and returns the result. If you run it without any allocations present, it will be super fast and return far below 1 ms (like 120 µs). If you run the same function with 40000 objects allocated, the allocation and the deletion will probably eat 4 ms, slowing the function down by a factor of 30-40.

So the performance tip of the week is: each DXL program that deals with large datasets should be designed so that the number of allocations needed does not depend on the size of the dataset. Otherwise it will suffer a performance penalty due to DOORS memory management.

The reason for this slowdown, by the way, is that the DXL interpreter keeps track of the allocated objects in a simple linked list, so inserting an item into the list takes more time the longer the list grows. And when I say allocated objects I mean: Buffer, Skip, ModuleVersion, Regexp, Array, OleAutoObj, OleAutoArgs, ... So it should not matter what kind of object you create; the performance penalty should be the same.
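That linked-list explanation can be modeled in a few lines of Python (a toy model only, not the real DOORS allocator): if every allocation has to walk the whole bookkeeping list before linking the new node, the cost per allocation grows linearly with the number of live objects, which matches the shape of the graph above.

```python
def allocation_steps(existing):
    """Steps to append one node to a singly linked list holding `existing` nodes,
    assuming the allocator walks to the end before linking (the toy model)."""
    return existing + 1  # walk past every existing node, then link the new one

def total_steps(n):
    """Total steps to allocate n objects starting from an empty list."""
    return sum(allocation_steps(i) for i in range(n))

# Each batch of 1000 allocations costs more than the previous one:
print(total_steps(1000) - total_steps(0))      # steps for objects 1..1000    → 500500
print(total_steps(2000) - total_steps(1000))   # steps for objects 1001..2000 → 1500500
```

Objects 1001 to 2000 cost three times as many steps as the first thousand, which is the "slower and slower with every allocation" effect in miniature.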

Regards, Mathias

Mathias Mamsch, IT-QBase GmbH, Consultant for Requirement Engineering and D00RS

Hi guys !

First of all, I love the graphics entry ! ;)
I just want to add a piece of advice here: avoid print if you need a correct measurement.

Here is code using print:

pragma runLim,0
print "Tick\tInterval Counter\tTotal\n"
int tickStart = getTickCount_()
string dxlCode = "int counter = 0; while (counter < 5000 ) { Buffer b = create(100) ; counter++; delete b}"

int i
for i in 1:500 do {
    eval_ dxlCode
    int tick = getTickCount_() - tickStart
    print tick "\t1000\t" (i*1000) "\n"
}

The last three lines of my print:

4633    1000    498000
4648    1000    499000
4664    1000    500000

Now a version without print:

pragma runLim,0

void combine ( Buffer b, string s ) {
    Buffer tmp = create
    tmp = s
    combine ( b, tmp, 0 )
    delete tmp
}

print "Tick\tInterval Counter\tTotal\n"
int tickStart = getTickCount_()
string dxlCode = "int counter = 0; while (counter < 5000 ) { Buffer b = create(100) ; counter++; delete b}"
Buffer b = create
int i = 0
for i in 1:500 do {
    eval_ dxlCode
    int tick = getTickCount_() - tickStart
    combine ( b, tick " \t1000\t" (i*1000) "\n" )
}

print tempStringOf b
delete b

And my last 3 lines:

2247     1000    498000
2247     1000    499000
2262     1000    500000

2262 ms without print against 4664 ms for the "using print" version.
Using the combine function I create/destroy some extra objects, but even with this disadvantage the code without print is more than twice as fast.

So ... how do you usually follow the progress of your running scripts? Perhaps this can have a big impact on their overall performance ...

PS: yes, I know my print is not correct ... 1000 should be replaced by 5000 ... but there is no impact on performance o;)

Re: Buffer Get and Release
llandale - Thu Dec 12 16:50:13 EST 2013

pommCannelle - Thu Dec 12 05:39:37 EST 2013


Didn't study your code, but yes, each print statement takes a little longer than the previous one, and thousands of prints will drastically slow down your code; and yes, staging your desired print messages in a buffer and then printing the buffer once is much faster.
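The stage-then-print pattern looks like this in Python (an analogue, not DXL): collect the lines as you go and pay the output cost once at the end instead of on every iteration.

```python
# Stage report lines in a list instead of printing inside the loop.
lines = []
for i in range(1, 6):
    lines.append(f"{i}\t1000\t{i * 1000}")  # staged; nothing printed yet

report = "\n".join(lines)  # one big string built once...
print(report)              # ...and one single print at the end
```

In DXL the list-plus-join becomes a Buffer that the loop appends to, printed once after the loop, which is exactly what the combine version above does.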

-Louie

Re: Buffer Get and Release
Mathias Mamsch - Fri Dec 13 06:29:04 EST 2013

pommCannelle - Thu Dec 12 05:39:37 EST 2013


First of all, why would we include a print statement in the measurement at all? Normally you do something like:

int iStart = getTickCount_()
// ... time critical code ... don't print here!
int iEnd = getTickCount_()
// here you can print whatever you want...

You need to differentiate between a print in the DXL interaction window and a print in batch mode or via cout << ... Of course print statements in the DXL interaction window get slower, because DOORS needs to refill the rich-text box with more content every time, so the graphics updates become slower. So print is a really "heavy" operation in interactive mode.

So if I understand your question correctly you are asking about how to profile your DXL programs?

The most important rule for optimizing is to first find out where your program spends most of its runtime. On larger programs I normally go with logging timestamps around certain parts of the code. Normally you will have some kind of logging library that lets you create log files for your programs. On BranchManager, when branching a module, for example, I would log the time before and after opening the module, the time for creating the objects, the time for copying the attribute values, the time for creating the links, etc. This way you know where your program takes the most time. Then you optimize there.

The first thing to look for is logic errors. This is where you save the most time: opening and closing modules too often, repeating expensive calculations that could be cached, etc. All DXL programs that handle large strings or many allocations need to take extra care to avoid memory leaks and do correct string handling (avoid cluttering the string table and do not copy large strings too often). Only sometimes, when you have functions that are called very often (e.g. a string replace), is the detailed performance of the basic DXL operations relevant. This is a science of its own; I did a lot of profiling of the basic DXL operations and compared, for example, how long a function call takes versus an assignment. If you are interested, I can make a tutorial about profiling DXL.
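The timestamp-logging approach can be sketched in Python (a hypothetical helper, not a real DXL or BranchManager API; the SectionTimer name is invented): wrap each section of the program in a measurement and log the elapsed time per section, then optimize the section that dominates.

```python
import time

class SectionTimer:
    """Log elapsed time per named section of a program."""

    def __init__(self):
        self.log = []  # (section name, elapsed seconds)

    def measure(self, name, fn):
        start = time.perf_counter()
        result = fn()  # run the section
        self.log.append((name, time.perf_counter() - start))
        return result

timer = SectionTimer()
timer.measure("open module", lambda: sum(range(100000)))      # stand-in work
timer.measure("copy attributes", lambda: sorted(range(1000)))  # stand-in work
for name, secs in timer.log:
    print(f"{name}: {secs * 1000:.1f} ms")
```

In DXL the same shape works with getTickCount_() before and after each section, written to a log file.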

Regards, Mathias

Re: Buffer Get and Release
Gustavo Delfino - Wed Nov 09 16:24:10 EST 2016

Mathias Mamsch - Thu Aug 18 04:59:24 EDT 2011

Oh, and by the way: ModuleVersions (sourceVersion, targetVersion, moduleVersion, ...) are all allocated objects and need to be deleted with the delete(ModuleVersion) perm. You can test for yourself:

int *::+(int *ptr1, int ofs) { int *ptr2 = ptr1; ptr2 += ofs; return ptr2 }
int *::@(int *ptr, int ofs) { int ad = *(ptr + ofs); int *ptr2 = addr_ ad; return ptr2 }

int *getCurrentDXLContextPtr () {
    DB x = create ""
    int *ptr = addr_ x
    int *result = ptr @ 48
    destroy x
    return result
}

int *getMemoryBlockNodes (int *cc) { return cc @ 0x74 }
int *nextNode (int *memNode) { return memNode @ 8 }

int countAllocatedObjects() {
    int *memBlocks = getMemoryBlockNodes getCurrentDXLContextPtr()
    int count = 0
    while (!null memBlocks) {
        memBlocks = nextNode memBlocks
        count++
    }
    return count
}

// comment me in to see the object counts increase:
// Skip sk = create(); Buffer buf = create()

print "Allocated Objects Before doing stuff:" countAllocatedObjects() "\n"

if (null current) error "Please open a module for this test!"

ModuleVersion mod = moduleVersion current
print "Allocated Objects with one moduleVersion:" countAllocatedObjects() "\n"

ModuleVersion mod2 = moduleVersion current
print "Allocated Objects with two moduleVersions:" countAllocatedObjects() "\n"

delete mod; delete mod2
print "Allocated Objects after deletions:" countAllocatedObjects() "\n"

This code should work on all DOORS versions (I tested it on DOORS 8.2 & 9.3). You can use it, by the way, to find the real number of allocated objects in any piece of code, to check whether you suffer from the allocation performance penalty (number of allocations > the magic number on your computer).

Regards, Mathias

Mathias Mamsch, IT-QBase GmbH, Consultant for Requirement Engineering and D00RS
Dear Mathias,

Is this code for countAllocatedObjects() supposed to work with DOORS 9.6.1.6? On my PC it results in an exception.

Regards, Gustavo

DOORS: **** Translating a structured exception ****
DOORS: Version 9.6.1.6, build number 96451, built on Mar 23 2016 22:04:27.
DOORS: Microsoft Windows 7
DOORS: DOORS: 64 percent of memory is in use.
DOORS: There are 8302096 total Kbytes of physical memory.
DOORS: There are 2946524 free Kbytes of physical memory.
DOORS: There are 16602332 total Kbytes of paging file.
DOORS: There are 9938340 free Kbytes of paging file.
DOORS: There are ffffff80 total Kbytes of virtual memory.
DOORS: There are ffc5816c free Kbytes of virtual memory.

DOORS: argv[0]: C:\Program Files\IBM\Rational\DOORS\9.6\bin\doors.exe
DOORS: argv[1]: -d
DOORS: argv[2]: <REMOVED>
DOORS: Exception timestamp: 09/11/2016 at 16:16:15
DOORS: doors.exe caused an EXCEPTION_ACCESS_VIOLATION in module doors.exe at 0000000041000058
DOORS: 0x00000141000058 doors.exe
0x000001410012cc        doors.exe
0x000001410008f2        doors.exe
0x00000141006db0        doors.exe
0x000001410008f2        doors.exe
0x000001410012cc        doors.exe
0x000001410008f2        doors.exe
0x0000014100445d        doors.exe
0x000001410040f2        doors.exe
0x00000141003f48        doors.exe
0x000001410084d6        doors.exe
0x0000014100b64e        doors.exe
0x00000140151cef        doors.exe
0x000001402adf8e        doors.exe
0x00000140b3660f        doors.exe
0x00000140b34c8d        doors.exe
0x00000140b2b785        doors.exe
0x00000077199bdd        USER32.dll,     TranslateMessageEx()+00000669 byte(s)
0x00000077196a9c        USER32.dll,     SetTimer()+00000364 byte(s)
0x00000077196ba1        USER32.dll,     SendMessageW()+00000093 byte(s)
0x0007fefb340c73        COMCTL32.dll,   TaskDialog()+00205139 byte(s)
0x0007fefb3448b2        COMCTL32.dll,   TaskDialog()+00220562 byte(s)
0x00000077199bdd        USER32.dll,     TranslateMessageEx()+00000669 byte(s)
0x00000077193ba4        USER32.dll,     CallWindowProcW()+00000156 byte(s)
0x00000077193b20        USER32.dll,     CallWindowProcW()+00000024 byte(s)
0x00000140b3aa7d        doors.exe
0x00000140b3aad6        doors.exe
0x00000077199bdd        USER32.dll,     TranslateMessageEx()+00000669 byte(s)
0x000000771998e2        USER32.dll,     TranslateMessage()+00000482 byte(s)
0x00000140b23d56        doors.exe
0x0000013fd93305        doors.exe
0x0000013fd7d962        doors.exe
0x0000013fd7e081        doors.exe
0x000000772959cd        kernel32.dll,   BaseThreadInitThunk()+00000013 byte(s)
0x000000773ca2e1        ntdll.dll,      RtlUserThreadStart()+00000033 byte(s)

DOORS: **** end of event ****
DOORS: Writing exception details...
DOORS: Exception details have been written to: C:\Users\delfinog\AppData\Local\Temp\DOORS-96451-2016_11_09-16_16_15-1292-4964.dmp

 

Re: Buffer Get and Release
Mathias Mamsch - Thu Nov 10 03:03:07 EST 2016

Gustavo Delfino - Wed Nov 09 16:24:10 EST 2016


Under DOORS 9.6.x 32-bit, probably. Under 64-bit the code does not work anymore. Please read my post on 64-bit memory management to find out why and how compatibility can be achieved.

I am sure someone has already adapted the code for 64-bit. If someone could post the modified code, that would be nice.

Regards, Mathias