Opened 7 years ago

Closed 7 years ago

#2088 closed task (fixed)

introduce MemorySegment class

Reported by: jinmei
Owned by: muks
Priority: medium
Milestone: Sprint-20120717
Component: data source
Version:
Keywords:
Cc:
CVSS Scoring:
Parent Tickets:
Sensitive: no
Defect Severity: N/A
Sub-Project: DNS
Feature Depending on Ticket: scalable inmemory
Estimated Difficulty: 4
Add Hours to Ticket: 0
Total Hours: 0
Internal?: no

Description (last modified by jinmei)

We'll introduce this class so that various types of memory segment ('local',
'shared memory', 'mmap') can be used transparently.

  • define an abstract base class. We only need the simplest allocator and deallocator:
      void* allocate(size_t size);
      void deallocate(void* ptr, size_t size);
    
    note: the size parameter for deallocate is not mandatory functionality-wise, but we'll need it if we want to detect memory leaks.
  • it'd also be better to have "allMemoryDeallocated" for memory leak detection:
      bool allMemoryDeallocated() const;
    
  • define and implement a LocalMemorySegment derived class. It internally uses malloc/free or new/delete. Not sure whether this should be a "nothrow" version (probably not, but think about it)

No dependency. Can start anytime.
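Sketched in C++, the interface described above might look like the following (illustrative only; names follow the description, and the real code will live on the branch):

```cpp
#include <cstddef>   // std::size_t
#include <cstdlib>   // std::malloc, std::free

// Abstract interface: concrete segments ('local', shared memory,
// mmap) supply the actual allocation strategy.
class MemorySegment {
public:
    virtual ~MemorySegment() {}
    virtual void* allocate(std::size_t size) = 0;
    // The size parameter is redundant for free()-style allocators,
    // but it lets us track outstanding bytes and detect leaks.
    virtual void deallocate(void* ptr, std::size_t size) = 0;
    virtual bool allMemoryDeallocated() const = 0;
};

// Simplest concrete segment: plain malloc()/free() with a byte
// counter for leak detection.
class LocalMemorySegment : public MemorySegment {
public:
    LocalMemorySegment() : allocated_size_(0) {}
    virtual void* allocate(std::size_t size) {
        void* ptr = std::malloc(size);
        if (ptr != NULL) {
            allocated_size_ += size;
        }
        return (ptr);
    }
    virtual void deallocate(void* ptr, std::size_t size) {
        allocated_size_ -= size;
        std::free(ptr);
    }
    virtual bool allMemoryDeallocated() const {
        return (allocated_size_ == 0);
    }
private:
    std::size_t allocated_size_;
};
```

A caller would hold a MemorySegment reference (or pointer), so the concrete segment type can be swapped without touching the calling code.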

Change History (24)

comment:1 Changed 7 years ago by jinmei

  • Description modified (diff)

comment:2 Changed 7 years ago by jinmei

  • Description modified (diff)

comment:3 Changed 7 years ago by shane

  • Estimated Difficulty changed from 0 to 4

comment:4 Changed 7 years ago by jinmei

  • Feature Depending on Ticket changed from in-memory NSEC to scalable inmemory

comment:5 Changed 7 years ago by shane

  • Milestone changed from Next-Sprint-Proposed to Sprint-20120717

comment:6 Changed 7 years ago by muks

  • Owner set to muks
  • Status changed from new to assigned

Picking

comment:7 Changed 7 years ago by muks

I have added an initial implementation on the branch. These could very well not be the implementations we are looking for. From the bug report, it's not clear how these methods will be used:

  • Is deallocate() meant to free smaller blocks than what allocate() returns?
  • Because these methods will be virtual, a MemorySegment object is necessary. Will this be a singleton?
  • Does allMemoryDeallocated() need anything fancy like keeping a set of all allocated buffers, or is a counter of the total allocated size sufficient? (The counter is implemented.)

Branch trac2088 contains the current implementation.

comment:8 Changed 7 years ago by muks

  • Owner changed from muks to UnAssigned
  • Status changed from assigned to reviewing

The branch is now ready for review.

comment:9 follow-up: Changed 7 years ago by jinmei

Some quick comments:

  • I myself am not really sure, but I guess we'd probably like allocate to throw when it fails to allocate the memory.
  • what if ptr is NULL in deallocate? What if size is too large?
  • unittest: test body can be in an unnamed namespace, and we normally do so.
  • there are some style issues, mainly about the position of *, e.g.
    +    void *ptr = malloc(size);
    

comment:10 in reply to: ↑ 9 ; follow-up: Changed 7 years ago by muks

Hi Jinmei

Replying to jinmei:

Some quick comments:

  • I myself am not really sure, but I guess we'd probably like allocate to throw when it fails to allocate the memory.

"Fails to allocate memory" depends on the implementation, so whether it would throw or not would depend on the implementation.

For the malloc() implementation, malloc() could return NULL. There are arguments both for and against NULL-testing this pointer, and I side with not NULL-testing it unless we think it's reasonable for it to fail (such as for very large buffers). For small sizes, if malloc() returns NULL, in typical programs it is a fatal situation; throwing here isn't likely to help and could have issues of its own. If large sizes are requested, then it's better left to the caller to handle a NULL return. For a well-written program running on a system reasonably configured for it, there should not be an unexpected out-of-memory issue in malloc(). If there is, it would be due to some programming (leak) or system configuration error.

There is one more issue. In some demand-paging kernels (Linux included), allocations are overcommitted in the default configuration (unless a very, very large buffer is requested or some other conditions exist). A typical malloc() will then never return NULL: libc requests more memory, and the kernel always allows the brk() to pass and creates table entries. Only when there's a page fault does the kernel try to find a frame for the page, OOM-killing processes then if necessary. So NULL checking of malloc() output won't be useful.

NULL checking could be useful in other MemorySegment implementations.
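As an aside, the Linux overcommit policy described above can be inspected at runtime; a Linux-specific sketch (the helper name overcommitPolicy is made up for illustration):

```cpp
#include <fstream>
#include <string>

// Read the kernel's VM overcommit policy (Linux-specific).
// "0" = heuristic overcommit (the default), "1" = always overcommit,
// "2" = strict accounting - the mode in which malloc() can really
// return NULL instead of the OOM killer firing on a later page fault.
std::string overcommitPolicy() {
    std::ifstream f("/proc/sys/vm/overcommit_memory");
    std::string mode;
    if (f && std::getline(f, mode)) {
        return (mode);
    }
    return ("unknown");   // not Linux, or /proc not mounted
}
```

Under modes 0 and 1 a NULL check on malloc() is indeed mostly dead code for ordinary sizes; only mode 2 makes it regularly reachable.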

  • what if ptr is NULL in deallocate? What if size is too large?

Deallocating NULL would be a fatal programming mistake, not something that catching can fix. It's better for the process to segfault and dump its core then.

If size is too large, we can probably add a trivial check against the allocated count. It will only catch problems where the size is larger than the allocated count. It could also be caught in the system's implementation, if that implementation takes the size of the memory to deallocate. Even in the case of free(), at the end we'd have a non-zero allocated count pointing to a bug. But we can add this check, as it'd catch one type of problem.

  • unittest: test body can be in an unnamed namespace, and we normally do so.

Fixed. :)

  • there are some style issues, mainly about the position of *, e.g.
    +    void *ptr = malloc(size);
    

Fixed. :)

comment:11 Changed 7 years ago by jelte

  • Owner changed from UnAssigned to jelte

comment:12 Changed 7 years ago by jelte

  • Owner changed from jelte to muks

Two additional comments:

  • The base class should have a virtual destructor
  • IIRC, we agreed on -dev at some point that methods overriding a virtual method should also be declared virtual (to show that they are overriding something and are not local to the derived class only)

comment:13 in reply to: ↑ 10 ; follow-up: Changed 7 years ago by jinmei

Replying to muks:

Hi Jinmei

Replying to jinmei:

Some quick comments:

  • I myself am not really sure, but I guess we'd probably like allocate to throw when it fails to allocate the memory.

"Fails to allocate memory" depends on the implementation, so whether it would throw or not would depend on the implementation.

First, let me clarify: by "fail to allocate memory" I actually meant
the case where malloc() returns NULL.

For the malloc() implementation, malloc() could return NULL. There are arguments both for and against NULL-testing this pointer, and I side with not NULL-testing it unless we think it's reasonable for it to fail (such as for very large buffers). For small sizes, if malloc() returns NULL, in typical programs it is a fatal situation; throwing here isn't likely to help and could have issues of its own. If large sizes are requested, then it's better left to the caller to handle a NULL return. For a well-written program running on a system reasonably configured for it, there should not be an unexpected out-of-memory issue in malloc(). If there is, it would be due to some programming (leak) or system configuration error.

I tend to agree that when malloc() returns NULL for a reasonable size
it would be a leak or some system-level error. But for the rest of
the above I either don't understand it or disagree with it.

I don't know what you mean by issues with throwing itself, but our
overall implementation generally expects that when the free store
operator 'new' fails to allocate memory, bad_alloc is thrown
(although we generally only catch it at the highest level of the
application and exit immediately). Since MemorySegment::allocate() is
a kind of wrapper for the standard memory allocator, I think this
provides a more consistent interface.

I'm not sure what you mean by 'not NULL testing'. The current
implementation in the branch actually does test if the returned value
is NULL or not:

    void* ptr = malloc(size);

    if (ptr != NULL) {
        allocated_size_ += size;
    }

If you mean by 'not NULL testing' that we'd let the caller handle the
situation where the pointer is NULL, I'd say it'll just result in the
same behavior (throwing an exception) while forcing the caller to
repeat the same pattern. As noted above, our overall interface
expects memory allocation to generally succeed, or to result in an
exception in the worst case, so the caller would often have no other
way to handle it than by throwing an exception itself (this is
generally the case for constructors, and also applies to many
methods/functions that return void).

If you actually mean by 'not NULL testing' that even the caller
doesn't care if MemorySegment::allocate() returns NULL and will let
the program subsequently segfault or something, I've never heard of
that argument as a reasonable design option. At least I personally
strongly oppose it. If the caller immediately tries to dereference
the returned pointer, it will immediately cause a segfault, and the
end result is not so different (I still prefer making it more explicit
via an exception (with which we could handle it a bit more gracefully
if we want) or assert()ing it, but at this level I see there are both
for and against camps). But if it simply stores the pointer somewhere
and returns, basically assuming the pointer is valid, it will create a
strange inconsistent state in the system, which will lead to a very
strange crash (if we are relatively lucky) or some awkward behavior
(if we are more unlucky) that is very difficult to debug, because the
real cause of the problem is far from where the visible issue happens.

So, to me, the only possible options were

  1. MemorySegment::allocate() throws if malloc() returns NULL, or
  2. MemorySegment::allocate() does nothing if malloc() returns NULL and passes it to caller. We request the caller handle that situation appropriately (in documentation), which would be throwing an exception in practice anyway.

Originally I was not sure which one is better in this specific case
because, in the limited usage of MemorySegment::allocate(), it might
be more convenient for the caller to explicitly catch the rare
failure case (rather than doing try-catch or using some wrapper class
for RAII). But I now expect that will probably not be the case.
Since option #1 is more consistent with other parts of our internal
interfaces, I now prefer this option. But if you meant option #2 for
some specific reason, we can discuss it.
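The two options could be sketched like this (hypothetical helper names; the huge request size in the usage note is chosen so that the allocation reliably fails on common platforms):

```cpp
#include <cstddef>
#include <cstdlib>
#include <new>       // std::bad_alloc

// Option #1: the wrapper throws, mirroring what operator new does.
void* allocateOrThrow(std::size_t size) {
    void* ptr = std::malloc(size);
    if (ptr == NULL) {
        throw std::bad_alloc();
    }
    return (ptr);
}

// Option #2: pass NULL through; the caller must check, and in
// practice would usually just throw an exception itself.
void* allocateOrNull(std::size_t size) {
    return (std::malloc(size));
}
```

With a request of SIZE_MAX / 2, option #1 throws bad_alloc right at the allocation point while option #2 hands the caller a NULL to deal with; either way the failure is explicit rather than a later segfault.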

There is one more issue. In some demand-paging kernels (Linux included), allocations are overcommitted in the default configuration (unless a very, very large buffer is requested or some other conditions exist). A typical malloc() will then never return NULL: libc requests more memory, and the kernel always allows the brk() to pass and creates table entries. Only when there's a page fault does the kernel try to find a frame for the page, OOM-killing processes then if necessary. So NULL checking of malloc() output won't be useful.

This doesn't seem to be an "issue" to me, just another instance of
being locked into a particular implementation. The fact that there's
an implementation of malloc() that never returns NULL doesn't mean we
"shouldn't" handle this error case. And there are in fact other
implementations of malloc() in the real world that can really return
NULL under memory shortage; I encountered that situation with BIND 9 -
if my memory is correct it was on Solaris.

NULL checking could be useful in other MemorySegment implementations.

If we use boost memory segment libraries, they throw by default.
Also, IMO the base class should provide the unified interface on this
point anyway; we shouldn't change the behavior depending on details of
the underlying segment implementation.

  • what if ptr is NULL in deallocate? What if size is too large?

Deallocating NULL would be a fatal programming mistake, not something that catching can fix. It's better for the process to segfault and dump its core then.

First off, free() won't segfault with NULL. So segfault won't
happen with this implementation:

void
MemorySegmentLocal::deallocate(void* ptr, size_t size) {
    allocated_size_ -= size;
    free(ptr);
}

Secondly, I agree that an attempt to deallocate a NULL pointer is
generally a program error (though not necessarily so - sometimes it
can be a reasonable idiom for making the code simpler by eliminating
the need for an if block), and I also agree that we cannot directly
fix the error by catching it. But that doesn't mean explicitly
catching it makes no sense - it helps surface a possible bug earlier
and more clearly. I personally prefer such stricter checks and the
benefit they provide, but I see this is a case where two camps exist.
Now that Jelte seems to have become the official reviewer of this
branch, I'd leave it to you two. In either case, I think the
expected behavior should be documented.

If size is too large, we can probably add a trivial check against the allocated count. It will only catch problems where the size is larger than the allocated count. It could also be caught in the system's implementation, if that implementation takes the size of the memory to deallocate. Even in the case of free(), at the end we'd have a non-zero allocated count pointing to a bug. But we can add this check, as it'd catch one type of problem.

If size is too large, allocated_size_ will become an obviously
invalid, very large value. I don't like to leave the object in such a
clearly broken state. Also, if this happens, it's quite likely that
the caller is passing a pointer that was not really allocated by this
segment. IMO it's better to catch such an error as soon as possible.

But I'd leave the decision on this point to you and Jelte, too.

One other thing: the base class destructor should be declared and
defined explicitly and be made virtual.

comment:14 in reply to: ↑ 13 ; follow-up: Changed 7 years ago by muks

Hi Jinmei

Replying to jinmei:

I'm not sure what you mean by 'not NULL testing'. The current
implementation in the branch actually does test if the returned value
is NULL or not:

    void* ptr = malloc(size);

    if (ptr != NULL) {
        allocated_size_ += size;
    }

This test is present so that if malloc() does return NULL, for some large size for example, the allocated_size_ variable doesn't become bogus. It means _we_ don't NULL-test (to fail the allocate), but the caller _may_ NULL-test the returned pointer.

If you mean by 'not NULL testing' that we'd let the caller handle the
situation where the pointer is NULL, I'd say it'll just result in the
same behavior (throwing an exception) while forcing the caller to
repeat the same pattern. As noted above, our overall interface
expects memory allocation to generally succeed, or to result in an
exception in the worst case, so the caller would often have no other
way to handle it than by throwing an exception itself (this is
generally the case for constructors, and also applies to many
methods/functions that return void).

If you actually mean by 'not NULL testing' that even the caller
doesn't care if MemorySegment::allocate() returns NULL and will let
the program subsequently segfault or something, I've never heard of
that argument as a reasonable design option. At least I personally
strongly oppose it. If the caller immediately tries to dereference
the returned pointer, it will immediately cause a segfault, and the
end result is not so different (I still prefer making it more explicit
via an exception (with which we could handle it a bit more gracefully
if we want) or assert()ing it, but at this level I see there are both
for and against camps). But if it simply stores the pointer somewhere
and returns, basically assuming the pointer is valid, it will create a
strange inconsistent state in the system, which will lead to a very
strange crash (if we are relatively lucky) or some awkward behavior
(if we are more unlucky) that is very difficult to debug, because the
real cause of the problem is far from where the visible issue happens.

When we are not able to allocate (by this, I don't mean large buffer requests that may fail, but smaller sizes where failure is unexpected), even if we handle this case there's not much else that can help the process. It's an unreasonable condition for a typical process to continue beyond that point without requesting more from the data segment, even during cleanup (unless we just let it fail without any cleanup).

It's not unreasonable to not handle an unexpected NULL return. There are people who do it and people who don't (a matter of personal style from their experience). The effect being far away from the cause is going to happen even if we test for NULL (overcommitting policy).

So, to me, the only possible options were

  1. MemorySegment::allocate() throws if malloc() returns NULL, or
  2. MemorySegment::allocate() does nothing if malloc() returns NULL and passes it to caller. We request the caller handle that situation appropriately (in documentation), which would be throwing an exception in practice anyway.

Originally I was not sure which one is better in this specific case
because, in the limited usage of MemorySegment::allocate(), it might
be more convenient for the caller to explicitly catch the rare
failure case (rather than doing try-catch or using some wrapper class
for RAII). But I now expect that will probably not be the case.
Since option #1 is more consistent with other parts of our internal
interfaces, I now prefer this option. But if you meant option #2 for
some specific reason, we can discuss it.

We are trying to find the best way to do this. If you feel allocate() should throw, we'll add it. I don't mind either way, as I don't think one is less correct than the other or that it makes a practical difference.

There is one more issue. In some demand-paging kernels (Linux included), allocations are overcommitted in the default configuration (unless a very, very large buffer is requested or some other conditions exist). A typical malloc() will then never return NULL: libc requests more memory, and the kernel always allows the brk() to pass and creates table entries. Only when there's a page fault does the kernel try to find a frame for the page, OOM-killing processes then if necessary. So NULL checking of malloc() output won't be useful.

This doesn't seem to be an "issue" to me, just another instance of
being locked into a particular implementation. The fact that there's
an implementation of malloc() that never returns NULL doesn't mean we
"shouldn't" handle this error case. And there are in fact other
implementations of malloc() in the real world that can really return
NULL under memory shortage; I encountered that situation with BIND 9 -
if my memory is correct it was on Solaris.

This becomes an issue because code that NULL-tests libc-returned pointers will be dead code, and termination happens randomly when the pages are used.

NULL checking could be useful in other MemorySegment implementations.

If we use boost memory segment libraries, they throw by default.
Also, IMO the base class should provide the unified interface on this
point anyway; we shouldn't change the behavior depending on details of
the underlying segment implementation.

  • what if ptr is NULL in deallocate? What if size is too large?

Deallocating NULL would be a fatal programming mistake, not something that catching can fix. It's better for the process to segfault and dump its core then.

First off, free() won't segfault with NULL. So segfault won't
happen with this implementation:

void
MemorySegmentLocal::deallocate(void* ptr, size_t size) {
    allocated_size_ -= size;
    free(ptr);
}

I didn't mean passing NULL to free(), but NULL to deallocate() regardless of its implementation.

Secondly, I agree that an attempt to deallocate a NULL pointer is
generally a program error (though not necessarily so - sometimes it
can be a reasonable idiom for making the code simpler by eliminating
the need for an if block), and I also agree that we cannot directly
fix the error by catching it. But that doesn't mean explicitly
catching it makes no sense - it helps surface a possible bug earlier
and more clearly. I personally prefer such stricter checks and the
benefit they provide, but I see this is a case where two camps exist.
Now that Jelte seems to have become the official reviewer of this
branch, I'd leave it to you two. In either case, I think the
expected behavior should be documented.

If you want it handled by simply allowing NULLs (i.e., we allow NULL in deallocate() and simply decrement the size), the code does that. deallocate() should be documented for the NULL case so it's consistent.

If size is too large, we can probably add a trivial check against the allocated count. It will only catch problems where the size is larger than the allocated count. It could also be caught in the system's implementation, if that implementation takes the size of the memory to deallocate. Even in the case of free(), at the end we'd have a non-zero allocated count pointing to a bug. But we can add this check, as it'd catch one type of problem.

If size is too large, allocated_size_ will become an obviously
invalid, very large value. I don't like to leave the object in such a
clearly broken state. Also, if this happens, it's quite likely that
the caller is passing a pointer that was not really allocated by this
segment. IMO it's better to catch such an error as soon as possible.

I agree... it's better to have this check than not have it.

One other thing: the base class destructor should be declared and
defined explicitly and be made virtual.

OK.

comment:15 Changed 7 years ago by muks

  • Owner changed from muks to jelte

These have been added:

* c0af8f8 [2088] Throw isc::OutOfRange when size passed to deallocate() is larger than what was allocated
* aa584b6 [2088] Throw bad_alloc when MemorySegment cannot allocate the required space
* fb9bdb9 [2088] Mark virtual methods as virtual even in derived classes
* 96c432d [2088] Add virtual destructor to MemorySegment

Back to review.

comment:16 Changed 7 years ago by muks

Added two more patches:

* 25cc38b [2088] Handle NULL deallocate as a nop
* 3e70efc [2088] Remove obsolete include
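Taken together, the commits listed in the last two comments imply behaviour along these lines (a sketch, not the branch's actual code; the branch throws isc::OutOfRange, for which std::out_of_range stands in here to keep the example self-contained):

```cpp
#include <cstddef>
#include <cstdlib>
#include <new>         // std::bad_alloc
#include <stdexcept>   // std::out_of_range

class MemorySegmentLocal {
public:
    MemorySegmentLocal() : allocated_size_(0) {}
    virtual ~MemorySegmentLocal() {}              // 96c432d

    virtual void* allocate(std::size_t size) {
        void* ptr = std::malloc(size);
        if (ptr == NULL) {
            throw std::bad_alloc();               // aa584b6
        }
        allocated_size_ += size;
        return (ptr);
    }

    virtual void deallocate(void* ptr, std::size_t size) {
        if (ptr == NULL) {
            return;                               // 25cc38b: NULL is a nop
        }
        if (size > allocated_size_) {
            // c0af8f8: reject sizes larger than what was allocated
            throw std::out_of_range("deallocate: size too large");
        }
        allocated_size_ -= size;
        std::free(ptr);
    }

    virtual bool allMemoryDeallocated() const {
        return (allocated_size_ == 0);
    }

private:
    std::size_t allocated_size_;
};
```

The NULL nop and the size check make misuse visible at the deallocate() call instead of silently corrupting the counter.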

comment:17 follow-up: Changed 7 years ago by jelte

  • Owner changed from jelte to muks

Two minor things (trivial compared to the above):

  • for the current implementation, I think tracking the allocated size would not really be useful by itself, so maybe we should add a little documentation to the main .h file explaining why it is worthwhile
  • MemorySegmentLocal.TestBadDeallocate leaves memory allocated ;)

dunno if jinmei has any more followup, but giving it back to muks now

comment:18 in reply to: ↑ 14 ; follow-up: Changed 7 years ago by jinmei

Replying to muks:

When we are not able to allocate (by this, I don't mean large buffer requests that may fail, but smaller sizes where failure is unexpected), even if we handle this case there's not much else that can help the process. It's an unreasonable condition for a typical process to continue beyond that point without requesting more from the data segment, even during cleanup (unless we just let it fail without any cleanup).

As I admitted, I tend to agree that if malloc() returns NULL for a
reasonably small size of memory, it's quite unlikely we can recover
from that situation in practice. That's why we generally catch
bad_alloc (as part of other std::exceptions) only at the top level of
the program, and only to immediately terminate.

But,

It's not unreasonable to not handle an unexpected NULL
return. There are people who do it and people who don't (a matter of
personal style from their experience).

this is the first time I've heard that intentionally ignoring this
case could be considered a style matter (at least unless one only
expects prototype-level quality, or expects the code to be used on
some limited systems where malloc() either returns non-NULL or makes
the program terminate). Any production-quality software I've seen
that is expected to be portable catches this error mode and somehow
tries to handle it explicitly, if only to terminate the program. Out
of curiosity, exactly who are the people ignoring an unexpected NULL
from malloc() (other than you:-)? Is there any well known project
that adopts that style?

The effect being far away from the cause is going to happen even if
we test for NULL (overcommitting policy).

I don't think that's true at least if we throw an exception. If
allocate() is implemented this way:

    void* ptr = malloc(requested_size);
    if (ptr == NULL) {
        throw bad_alloc();
    }
    return (ptr);

the failure will surface right at the allocation point, unless the
caller explicitly catches this exception and intentionally handles it
in an inappropriate way (like just ignoring it and using a NULL
pointer instead).

So, to me, the only possible options were

[...]

We are trying to find the best way for this. If you feel allocate() should throw, we'll add it. I don't mind both ways as I don't think either is less correct or makes a practical difference.

I prefer throwing an exception, mainly for the reason described in
the previous paragraph. If there's a reason to, I'm also okay with
passing NULL through to the caller, requesting the caller to handle
that case properly. I just don't like ignoring this case both within
allocate() and in the caller (and letting the program die in some
random way or go crazy).

This becomes an issue because code that NULL-tests libc-returned pointers will be dead code, and termination happens randomly when the pages are used.

I don't understand this...sure, if a particular system ensures that
malloc() either returns non-NULL or makes the program terminate, the
if+throw in the above code snippet is effectively dead code for that
platform. But it won't cause a compile error, and while the
additional if is an unnecessary cost, we generally don't expect this
code to be super efficient, so the additional cost should be
marginal. If we really care about the redundancy, we could ifdef-out
that part for that particular system - then we can still ensure
safety for the others. I simply don't understand the "termination
happens randomly" part. If malloc() could make the program
terminate, termination could happen, but it's not because of the
added if. If this meant something else, I simply didn't understand
it.

comment:19 in reply to: ↑ 18 ; follow-up: Changed 7 years ago by muks

Replying to jinmei:

this is the first time I've heard that intentionally ignoring this
case could be considered a style matter (at least unless one only
expects prototype-level quality, or expects the code to be used on
some limited systems where malloc() either returns non-NULL or makes
the program terminate). Any production-quality software I've seen
that is expected to be portable catches this error mode and somehow
tries to handle it explicitly, if only to terminate the program. Out
of curiosity, exactly who are the people ignoring an unexpected NULL
from malloc() (other than you:-)? Is there any well known project
that adopts that style?

"Intentionally ignoring" doesn't quite represent what happens. When every allocation is NULL-tested, in C there is error-propagation code that fails each function up to the top level (unless it's a thin wrapper around malloc() that aborts immediately, such as in glib, which aborts on every failure). In C++, we have the exception. In the case where we don't test, the process creates a core when it segfaults. This does not cause some random behavior (unless you do arithmetic with that pointer and somehow point it outside page 0 before dereferencing it). As soon as that pointer is used indirectly, the process will segfault and the core tells you where.

Not every malloc() output goes unchecked, just the ones that are expected to pass. If you can recover from the failure, then by all means, NULL test the output.

Regarding examples, at other workplaces we didn't test the output of many malloc()s where failure was not reasonable, such as for structs. If you do a Google search, you'll find people discussing it. People are overwhelmingly for NULL testing, but not doing it isn't unrepresented. :) There are some such cases of malloc() use in GIMP, but these are not representative: GIMP doesn't do any NULL testing itself, but uses glib's g_new() and g_slice_alloc() for performance, which simply abort when they are unable to allocate.

The effect being far away from the cause is going to happen even if
we test for NULL (overcommitting policy).

I don't think that's true at least if we throw an exception. If
allocate() is implemented this way:

I think you missed the overcommitting policy part. You can't throw an exception if the effect is far from the cause. If memory is overcommitted, NULL is not returned and no exception is thrown.

This becomes an issue because code that NULL-tests libc-returned pointers will be dead code, and termination happens randomly when the pages are used.

I don't understand this...sure, if a particular system ensures that
malloc() either returns non-NULL or makes the program terminate, the
if+throw in the above code snippet is effectively dead code for that
platform. But it won't cause a compile error, and while the
additional if is an unnecessary cost, we generally don't expect this
code to be super efficient, so the additional cost should be
marginal. If we really care about the redundancy, we could ifdef-out
that part for that particular system - then we can still ensure
safety for the others. I simply don't understand the "termination
happens randomly" part. If malloc() could make the program
terminate, termination could happen, but it's not because of the
added if. If this meant something else, I simply didn't understand
it.

It's not a matter of performance or extra code. What I meant is that in such overcommit cases (this quoted discussion) the code is effectively dead code (what we think would happen, and carefully write code for, doesn't happen). The effect of memory exhaustion happens far away from the malloc() that returns non-NULL.

You and I understand what each other is saying at least. I think both approaches are equivalent in practice. You think checking is superior as the process gracefully exits at top-level and reports an allocation problem. Let's agree to disagree on this. :) The code now throws, which I'm fine with too.

comment:20 in reply to: ↑ 17 Changed 7 years ago by muks

  • Owner changed from muks to jelte

Replying to jelte:

Two minor things (trivial compared to the above):

  • for the current implementation, I think tracking the allocated size would not really be useful by itself, so maybe we should add a little documentation to the main .h file explaining why it is worthwhile

Done :)

  • MemorySegmentLocal.TestBadDeallocate leaves memory allocated ;)

Done :)

comment:21 in reply to: ↑ 19 ; follow-up: Changed 7 years ago by jinmei

Replying to muks:

You and I understand what each other is saying at least. I think both approaches are equivalent in practice. You think checking is superior as the process gracefully exits at top-level and reports an allocation problem. Let's agree to disagree on this. :) The code now throws, which I'm fine with too.

Okay...I'm feeling a bit guilty if you were not convinced but changed
the code, but if you can live with that let's move forward with it for
now. I'd leave the ticket to you and Jelte at this point.

comment:22 Changed 7 years ago by jelte

  • Owner changed from jelte to muks

I'm fine with either approach, and my smaller points have been addressed, so I think you can go ahead.

comment:23 in reply to: ↑ 21 Changed 7 years ago by muks

Replying to jinmei:

Okay...I'm feeling a bit guilty if you were not convinced but changed
the code, but if you can live with that let's move forward with it for
now. I'd leave the ticket to you and Jelte at this point.

No need to feel guilty. I appreciate how you evaluate everything on its merit alone. There was surprise expressed at how long this thread was getting, so I wanted to move it along too. :) I'm fine with the throwing behaviour.

comment:24 Changed 7 years ago by muks

  • Resolution set to fixed
  • Status changed from reviewing to closed

Merged to master in commit 9726cd1627ba4c074f65e57799e8b77ec158a491:

* c9f90e8 [2088] Deallocate all memory in TestBadDeallocate unit test
* 59c5e16 [2088] Add comments about size argument to deallocate()
* 25cc38b [2088] Handle NULL deallocate as a nop
* 3e70efc [2088] Remove obsolete include
* c0af8f8 [2088] Throw isc::OutOfRange when size passed to deallocate() is larger than what was allocated
* aa584b6 [2088] Throw bad_alloc when MemorySegment cannot allocate the required space
* fb9bdb9 [2088] Mark virtual methods as virtual even in derived classes
* 96c432d [2088] Add virtual destructor to MemorySegment
* 16f7c06 [2088] Fix coding style issues
* 093aab9 [2088] Put test in anonymous namespace
* 05a1e1a [2088] Indent test code
* 2269069 [2088] Rename test
* a4296e2 [2088] Document MemorySegment classes
* 00c9a91 [2088] Add initial MemorySegment and MemorySegmentLocal classes

Resolving as fixed. Thank you for the reviews Jinmei and Jelte.
