Opened 8 years ago

Closed 7 years ago

Last modified 7 years ago

#1828 closed defect (fixed)

Python tests print warnings about unclosed file handles / sockets

Reported by: muks Owned by: muks
Priority: medium Milestone: Sprint-20120619
Component: Unclassified Version:
Keywords: Cc:
CVSS Scoring: Parent Tickets:
Sensitive: no Defect Severity: N/A
Sub-Project: DNS Feature Depending on Ticket:
Estimated Difficulty: 4 Add Hours to Ticket: 0
Total Hours: 0 Internal?: no

Description (last modified by muks)

Python tests in src/lib print warnings about unclosed file handles / sockets. They should be closed before they go out of scope.

<jelte> openbsd has a default open file limit of 128, so we need to do more cleanup in our python unit tests

Subtickets

Change History (29)

comment:1 Changed 8 years ago by muks

  • Description modified (diff)

comment:2 Changed 8 years ago by muks

  • Description modified (diff)
  • Summary changed from Python tests in src/lib print warnings about unclosed file handles to Python tests in src/lib print warnings about unclosed file handles / sockets

comment:3 Changed 8 years ago by muks

  • Milestone changed from New Tasks to Next-Sprint-Proposed

comment:4 Changed 8 years ago by jelte

  • Milestone changed from Next-Sprint-Proposed to Sprint-20120417

comment:5 Changed 8 years ago by muks

  • Owner set to muks
  • Status changed from new to assigned

Picking

comment:6 Changed 8 years ago by muks

  • Summary changed from Python tests in src/lib print warnings about unclosed file handles / sockets to Python tests print warnings about unclosed file handles / sockets

It's not just src/lib/

comment:7 Changed 8 years ago by shane

Probably we can close #845 when this is resolved. (To be honest I'm not sure why these warnings exist - don't we use modern languages like Python so we don't have to worry about this nonsense?)

comment:8 follow-up: Changed 8 years ago by muks

  • Owner changed from muks to UnAssigned
  • Status changed from assigned to reviewing

9 warnings remain in the trac1828 branch. These are not easily fixed: the problem is not obvious from the warning messages, and reviewing the code doesn't reveal anything. We would end up spending a lot of time on these remaining 9.

Over 100 reports have been fixed in trac1828. This bug was created so that we can get the tests passing on OpenBSD without running out of fds. I think we should merge this into master now, and create a new lower-priority bug for the remaining 9 (which are in tests anyway and not critical). 9 fds shouldn't cause issues on OpenBSD.

I'm putting it up for review. Maybe the reviewer can run make check and have a look too.

Last edited 8 years ago by muks (previous) (diff)

comment:9 in reply to: ↑ 8 ; follow-up: Changed 8 years ago by jinmei

I've not fully reviewed the code yet, but have some initial comments:

First off, do we really need things like this?

-        open(self.pid_file, "w").write('dummy data\n')
+        fd = open(self.pid_file, "w")
+        fd.write('dummy data\n')
+        fd.close()

As Shane mentioned in #845, this is very counter intuitive, and if we
really need to explicitly close opened files this way, it seems to me
something is very wrong. At the very least I don't think we can
reliably keep this convention in code we'll develop in future (except
by seeing the warnings and modifying the code to silence them).

I've written the following simple test code:

while True:
    open('f.py', 'r')

and ran it with 'python -W default'. It produced the warning, but
otherwise it just kept (busy) running, suggesting the temporary file object
is closed at the end of the loop:

% python3.2 -W default f.py
f.py:4: ResourceWarning: unclosed file <_io.TextIOWrapper name='f.py' mode='r' encoding='UTF-8'>
  open('f.py', 'r')
(no other disruption)

So, I suspect there are only a small number of real leaks (or very delayed
closes), and the others are false alarms. I also suspect this warning
is quite naive and just performs simple syntax-level checks (e.g.,
whether an explicit close() follows an open()). While overly noisy
warnings are not a good thing, I don't like to tweak our code in a
counter intuitive manner just to silence the warning.

My suggestion is to identify the real leaks and fix them, leave the
others open, and, ideally, suppress this warning (in general I don't
like suppressing warnings, but in this specific case I think it does
more harm than good).

See also this: http://projects.scipy.org/scipy/ticket/1385
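
For reference, such a blanket suppression would amount to roughly the following one-liner (a sketch only; whether it is a good idea is debated later in this ticket):

    import warnings

    # A sketch: silence every ResourceWarning for the whole test run.
    # Something important could be missed this way, which is the main
    # objection raised below.
    warnings.simplefilter('ignore', ResourceWarning)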

One other minor comment at this moment: for the real (few, I suspect)
cases we need to store the result of open() in a variable, 'fd'
doesn't seem to be a good name because the return value is a 'file
object', not a descriptor. I'd use something like 'fobj', 'file', etc.

comment:10 Changed 8 years ago by jinmei

  • Owner changed from UnAssigned to muks

comment:11 in reply to: ↑ 9 ; follow-up: Changed 8 years ago by muks

  • Owner changed from muks to jinmei

Replying to jinmei:

I've not fully reviewed the code yet, but have some initial comments:

First off, do we really need things like this?

-        open(self.pid_file, "w").write('dummy data\n')
+        fd = open(self.pid_file, "w")
+        fd.write('dummy data\n')
+        fd.close()

As Shane mentioned in #845, this is very counter intuitive, and if we
really need to explicitly close opened files this way, it seems to me
something is very wrong. At the very least I don't think we can
reliably keep this convention in code we'll develop in future (except
by seeing the warnings and modifying the code to silence them).

By counter-intuitive, I gather you mean having to close() the file/socket object manually, and not the verbosity of the code itself.

I've written the following simple test code:

while True:
    open('f.py', 'r')

Nod. It does destroy the object (and close() the underlying fd) at the block exit where it goes out of scope.

The interpreter also immediately destroys an old object when a variable is rebound, though it may not be right to assume that this non-lazy behavior (it appears to be refcount-based) will exist in future versions or other interpreters.

a = Something()
a = Something() # first Something is destroyed immediately after
                # the second Something's construction, during assignment
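
A minimal, CPython-specific sketch of the same point (the class name is made up for illustration):

    class Something:
        def __del__(self):
            print("destroyed")

    a = Something()
    a = Something()   # under CPython's refcounting, "destroyed" is printed
                      # here, as soon as the first object is rebound away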

So, I suspect there are only a small number of real leaks (or very delayed
closes), and the others are false alarms. I also suspect this warning
is quite naive and just performs simple syntax-level checks (e.g.,
whether an explicit close() follows an open()). While overly noisy
warnings are not a good thing, I don't like to tweak our code in a
counter intuitive manner just to silence the warning.

My suggestion is to identify the real leaks and fix them, leave the
others open, and, ideally, suppress this warning (in general I don't
like suppressing warnings, but in this specific case I think it does
more harm than good).

See also this: http://projects.scipy.org/scipy/ticket/1385

This ticket explains the reason why we are seeing these warnings (unittests enabling warnings at default level). In that ticket's case, they are also silencing the warnings by closing the fds.

I don't think we should call simplefilter() to suppress all ResourceWarning warnings, as something important may be missed. It seems to be possible to suppress on a line-by-line basis, but it involves putting line numbers for suppressions into the code, which is, IMO, a worse hack than actually closing the objects.
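
For illustration, such per-line suppression would look roughly like this (the module name and line number below are made up):

    import warnings

    # Hypothetical example: ignore the ResourceWarning reported for one
    # specific line of one test module.  Both values are placeholders, and
    # keeping them in sync with the code is exactly the hack objected to
    # above.
    warnings.filterwarnings('ignore', category=ResourceWarning,
                            module='xfrout_test', lineno=42)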

I have checked every case in the patches in the trac1828 branch now to see if there are actual (albeit temporary) leaks that could cause exhaustion of fds. In each of these cases, the object goes out of scope (it exits the basic block, or the variable is overwritten), and these don't happen in a recusive style of looping where the basic block is not exited before other fds are alloc'd. So the patches only help in suppressing the ResourceWarnings; the close() on the underlying fds should happen anyway without the patches.

Also, the warnings are not caused by syntax checks but happen at runtime in the .c module, when the last dereference causes a check on the underlying fd to see if it's a valid one (!= -1 in socketmodule.c, or > 0 in fileio.c). I don't know why the Python devs added such checks, but they evidently expect the explicit .close() to have been performed by then.

So in light of this, shall we keep this branch to suppress the warnings, or abandon it?

comment:13 in reply to: ↑ 11 ; follow-up: Changed 8 years ago by jinmei

Replying to muks:

My suggestion is to identify the real leaks and fix them, leave the
others open, and, ideally, suppress this warning (in general I don't
like suppressing warnings, but in this specific case I think it does
more harm than good).

See also this: http://projects.scipy.org/scipy/ticket/1385

This ticket explains the reason why we are seeing these warnings (unittests enabling warnings at default level). In that ticket's case, they are also silencing the warnings by closing the fds.

I don't think we should call simplefilter() to suppress all ResourceWarning warnings, as something important may be missed. It seems to be possible to suppress on a line-by-line basis, but it involves putting line numbers for suppressions into the code, which is, IMO, a worse hack than actually closing the objects.

I agree with this.

I have checked every case in the patches in the trac1828 branch now to see if there are actual (albeit temporary) leaks that could cause exhaustion of fds. In each of these cases, the object goes out of scope (it exits the basic block, or the variable is overwritten), and these don't happen in a recusive style of looping where the basic block is not exited before other fds are alloc'd. So the patches only help in suppressing the ResourceWarnings; the close() on the underlying fds should happen anyway without the patches.

I'm afraid I don't understand this...

  • In "the object goes out of scope" case, the underlying FD should be immediately closed (with the warning), so it basically shouldn't cause the descriptor exhaustion (as long as the scope is sufficiently small), right?
  • "these don't happen in a recu(r)sive style..." is not clear to me. Are these the points that actually cause the FD exhaustion?

But, in any case,

Also, the warnings are not caused by syntax checks but happen at runtime in the .c module, when the last dereference causes a check on the underlying fd to see if it's a valid one (!= -1 in socketmodule.c, or > 0 in fileio.c). I don't know why the Python devs added such checks, but they evidently expect the explicit .close() to have been performed by then.

hmm, I see, you're probably right on this. So it appears that doing
explicit close() is the expected practice in Python, even if the file
object will be GC'ed eventually and the file will be closed at that
point. In that case I don't object to following that convention.

But since it's not so obvious why we need to do the seemingly
redundant close(), I'd suggest updating the coding guideline page
http://bind10.isc.org/wiki/CodingGuidelines#PythonStyle
to explain this issue and what we generally should do in our code. I
also suggest raising this at the bind10-dev list so other developers
can recognize it.

Regarding the branch itself...

  • (repeating my original comment) I'd suggest using 'file' or at least something different from 'fd' to represent file object.
  • it would probably be even better to use the with idiom so the code is more exception safe in terms of the timing of close(). That is, instead of
        file = open(self.pid_file, "w")
        file.write('dummy data\n')
        file.close()
    
    do this:
        # or in this simple case it could be a one-liner.
        with open(self.pid_file, "w") as file:
            file.write('dummy data\n')
    

comment:14 Changed 8 years ago by jinmei

  • Owner changed from jinmei to muks

comment:15 in reply to: ↑ 13 ; follow-up: Changed 8 years ago by muks

  • Owner changed from muks to jinmei

Replying to jinmei:

I'm afraid I don't understand this...

  • In "the object goes out of scope" case, the underlying FD should be immediately closed (with the warning), so it basically shouldn't cause the descriptor exhaustion (as long as the scope is sufficiently small), right?
  • "these don't happen in a recu(r)sive style..." is not clear to me. Are these the points that actually cause the FD exhaustion?

None of the cases that print these warnings should cause fd exhaustion with CPython.

But since it's not so obvious why we need to do the seemingly
redundant close(), I'd suggest updating the coding guideline page
http://bind10.isc.org/wiki/CodingGuidelines#PythonStyle
to explain this issue and what we generally should do in our code. I
also suggest raising this at the bind10-dev list so other developers
can recognize it.

http://bind10.isc.org/wiki/CodingGuidelines#Closeallfileandsocketobjects

I have mailed bind10-dev too.

Regarding the branch itself...

  • (repeating my original comment) I'd suggest using 'file' or at least something different from 'fd' to represent file object.
  • it would probably be even better to use the with idiom so the code is more exception safe in terms of the timing of close(). That is, instead of
        file = open(self.pid_file, "w")
        file.write('dummy data\n')
        file.close()
    
    do this:
        # or in this simple case it could be a one-liner.
        with open(self.pid_file, "w") as file:
            file.write('dummy data\n')
    

This still doesn't call .close().

comment:16 in reply to: ↑ 15 ; follow-up: Changed 8 years ago by jinmei

Replying to muks:

  • In "the object goes out of scope" case, the underlying FD should be immediately closed (with the warning), so it basically shouldn't cause the descriptor exhaustion (as long as the scope is sufficiently small), right?
  • "these don't happen in a recu(r)sive style..." is not clear to me. Are these the points that actually cause the FD exhaustion?

None of the cases that print these warnings should cause fd exhaustion with CPython.

Hmm, I'm getting confused. So the changes in this branch are
irrelevant to the OpenBSD's FD exhaustion problem?

Regarding the branch itself...

  • (repeating my original comment) I'd suggest using 'file' or at least something different from 'fd' to represent file object.
  • it would probably be even better to use the with idiom so the code is more exception safe in terms of the timing of close(). That is, instead of
        file = open(self.pid_file, "w")
        file.write('dummy data\n')
        file.close()
    
    do this:
        # or in this simple case it could be a one-liner.
        with open(self.pid_file, "w") as file:
            file.write('dummy data\n')
    

This still doesn't call .close().

The with idiom should mean implicit close(), at least in my
understanding. See http://docs.python.org/py3k/library/io.html#module-io
(and look for "with statement").
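
For example (illustrative only, not code from the branch), the file object is closed when the block exits, even if an exception is raised inside it:

    with open('f.py', 'r') as f:
        data = f.read()
    print(f.closed)   # True: __exit__ called f.close() when the block ended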

comment:17 Changed 8 years ago by jinmei

  • Owner changed from jinmei to muks

comment:18 in reply to: ↑ 16 ; follow-up: Changed 8 years ago by muks

Replying to jinmei:

None of the cases that print these warnings should cause fd exhaustion with CPython.

Hmm, I'm getting confused. So the changes in this branch are
irrelevant to the OpenBSD's FD exhaustion problem?

The changes in this branch should fix the warnings, but not any fd exhaustion. None of the warnings printed by Python during the tests point to fd exhaustion. The bug itself was created assuming that there was fd exhaustion when close() was not called (due to the warnings).

Regarding the branch itself...

  • (repeating my original comment) I'd suggest using 'file' or at least something different from 'fd' to represent file object.
  • it would probably be even better to use the with idiom so the code is more exception safe in terms of the timing of close(). That is, instead of
        file = open(self.pid_file, "w")
        file.write('dummy data\n')
        file.close()
    
    do this:
        # or in this simple case it could be a one-liner.
        with open(self.pid_file, "w") as file:
            file.write('dummy data\n')
    

This still doesn't call .close().

The with idiom should mean implicit close(), at least in my
understanding. See http://docs.python.org/py3k/library/io.html#module-io
(and look for "with statement").

You are right. Using the with statement is indeed better, so I'll rewrite the patches to use it.

comment:19 in reply to: ↑ 18 Changed 8 years ago by muks

  • Owner changed from muks to jinmei

Replying to muks:

The with idiom should mean implicit close(), at least in my
understanding. See http://docs.python.org/py3k/library/io.html#module-io
(and look for "with statement").

You are right. Using the with statement is indeed better, so I'll rewrite the patches to use it.

Done now.

comment:20 follow-up: Changed 7 years ago by jinmei

general

  • is it possible to make the test fail if this warning is triggered? since the need for it isn't obvious, it's pretty easy to introduce new ones without an automatic check to enforce it.
  • related to the previous point, I didn't check if this branch fixes all current warnings.

bindctl_test.py

Maybe it's a matter of taste, but I guess this may be better:

        with open(os.devnull, 'w') as f:
            sys.stdout = f
            cmd = bindcmd.BindCmdInterpreter()
            ...
            os.remove(csvfilename)

In the original trac1828 version it's not so obvious to me why we
do sys.stdout.close(). And the with version is more exception
safe.

Also, while it'd probably be beyond the scope of this task, it'd be
better to restore sys.stdout in tearDown(); otherwise, if this test
fails, sys.stdout will be kept faked in subsequent tests.

ddns_test

  • I suspect the intent of this close may not be obvious and suggest adding some comments about why we do it:
            if self.ddns_server._listen_socket is not None:
                self.ddns_server._listen_socket.close()
    
  • TestDDNSServer.test_listen: it rather seems to be the task of DDNSServer. Since it's the one who opens the socket on construction, I think it should explicitly close it at least in shutdown_cleanup(). Then this test would call this method on ddnss, rather than trying to close it by itself.

msgq_test

  • I think this one should be fixed within msgq: setup_listener() should close the socket in the except block.
  • I'd simplify infinite_sender() as follows:
            with socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)[1] as write:
                msgq.register_socket(write)
                # Keep sending while it is not closed by the msgq
                try:
                    while True:
                        sender(msgq, write)
                except socket.error:
                    pass
    
  • infinite_sender/send_many: like others, the intent of the close() isn't obvious to me. But in these cases I cannot come up with a cleaner alternative. If this is the least bad one, I'd at least clarify the intent in comments.

xfrout_test

  • test_guess_remote: with should be able to be used in these cases.
  • test_sock_file_in_use: the same comment as for bindctl_test.py applies. besides, I don't even understand why we need to tweak stdout here. I guess it was for much older versions that dumped noisy logs directly to stdout. I don't see any noise or other failures even if I remove this trick. I suggest we simply stop tweaking.
  • test_remove_unused_sock_file_dir: likewise.

xfrout.py

  • _guess_remote: I'd prefer using with
            if socket.has_ipv6:
                with socket.fromfd(sock_fd, socket.AF_INET6, socket.SOCK_STREAM) \
                        as s:
                    peer = s.getpeername()
            else:
                # To make it work even on hosts without IPv6 support
                # (Any idea how to simulate this in test?)
                with socket.fromfd(sock_fd, socket.AF_INET, socket.SOCK_STREAM) \
                        as s:
                    peer = s.getpeername()
    
    although admittedly this version doesn't seem to look very simple. In any case, 'close()' can be done much earlier than the current position.
  • _sock_file_in_use: I'd use with here (and it would make it much simpler)

zonemgr_test

  • I don't see the need for tweaking stderr here in the first place. Also, zonemgr itself has this problem for the socketpair. What we should do is to close them appropriately.

sockcreator_test

  • at least this can be revised with with:
            p1 = socket.fromfd(t2.read_fd(), socket.AF_UNIX, socket.SOCK_STREAM)
    

socketsession_test

  • wouldn't it be better to close self.start_listen in tearDown() (and maybe in start_listen() if it can be called multiple times)?
  • check_push_and_pop: at least sock should be able to be managed with with.
  • test_bad_pop: can use with:
            with self.accept_forwarder() as accept_sock:
                receiver = SocketSessionReceiver(accept_sock)
                s.close()
                self.assertRaises(SocketSessionError, receiver.pop)
    

comment:21 Changed 7 years ago by jinmei

  • Owner changed from jinmei to muks

comment:22 in reply to: ↑ 20 ; follow-up: Changed 7 years ago by muks

  • Owner changed from muks to jinmei

Replying to jinmei:

general

  • is it possible to make the test fail if this warning is triggered? since the need for it isn't obvious, it's pretty easy to introduce new ones without an automatic check to enforce it.
  • related to the previous point, I didn't check if this branch fixes all current warnings.

It is possible to assert whether a warning is generated by using `warnings.catch_warnings()', but as you note, not all warnings are fixed by this branch, as the reasons for some are not obvious.
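
A rough sketch of what such a check could look like in a unit test (illustrative only, not code from the branch):

    import gc
    import os
    import unittest
    import warnings

    class UnclosedFileCheck(unittest.TestCase):
        def test_warning_is_detected(self):
            with warnings.catch_warnings(record=True) as caught:
                warnings.simplefilter('always', ResourceWarning)
                open(os.devnull, 'r')   # deliberately drop the file object
                gc.collect()            # make sure it is reclaimed now
            self.assertTrue(any(issubclass(w.category, ResourceWarning)
                                for w in caught))

    if __name__ == '__main__':
        unittest.main()

Inverting the filter to 'error' instead of recording would turn any such warning into an outright test failure.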

bindctl_test.py

Maybe it's a matter of taste, but I guess this may be better:

        with open(os.devnull, 'w') as f:
            sys.stdout = f
            cmd = bindcmd.BindCmdInterpreter()
            ...
            os.remove(csvfilename)

In the original trac1828 version it's not so obvious to me why we
do sys.stdout.close(). And the with version is more exception
safe.

Also, while it'd probably be beyond the scope of this task, it'd be
better to restore sys.stdout in tearDown(); otherwise, if this test
fails, sys.stdout will be kept faked in subsequent tests.

Done :)

ddns_test

  • I suspect the intent of this close may not be obvious and suggest adding some comments about why we do it:
            if self.ddns_server._listen_socket is not None:
                self.ddns_server._listen_socket.close()
    

Comment has been added.

  • TestDDNSServer.test_listen: it rather seems to be the task of DDNSServer. Since it's the one who opens the socket on construction, I think it should explicitly close it at least in shutdown_cleanup(). Then this test would call this method on ddnss, rather than trying to close it by itself.

Done.
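
The change is essentially along these lines (a simplified sketch, not the exact committed code):

    # Close the listen socket opened in the constructor so tests (and a
    # normal shutdown) don't leave the fd dangling.
    def shutdown_cleanup(self):
        if self._listen_socket is not None:
            self._listen_socket.close()
            self._listen_socket = None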

msgq_test

  • I think this one should be fixed within msgq: setup_listener() should close the socket in the except block.

Done.
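
The idea is roughly the following (a sketch with illustrative names, not the actual msgq code): if binding or listening fails, close the socket that was just created before re-raising.

    import socket

    def setup_listener(self):
        self.listen_socket = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            self.listen_socket.bind(self.socket_file)
            self.listen_socket.listen(1024)
        except socket.error:
            # Don't leak the fd if setup fails part-way through.
            self.listen_socket.close()
            raise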

  • I'd simplify infinite_sender() as follows:
            with socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)[1] as write:
                msgq.register_socket(write)
                # Keep sending while it is not closed by the msgq
                try:
                    while True:
                        sender(msgq, write)
                except socket.error:
                    pass
    

This with statement leaves pair[0] open and generates a warning.

  • infinite_sender/send_many: like others, the intent of the close() isn't obvious to me. But in these cases I cannot come up with a cleaner alternative. If this is the least bad one, I'd at least clarify the intent in comments.

For the same reason as above, I've not used with for these. The .close() itself should not need explanation, no? We are simply closing the object when we are done with it.

xfrout_test

  • test_guess_remote: with should be able to be used in these cases.

Uses with now.

  • test_sock_file_in_use: the same comment as for bindctl_test.py applies. besides, I don't even understand why we need to tweak stdout here. I guess it was for much older versions that dumped noisy logs directly to stdout. I don't see any noise or other failures even if I remove this trick. I suggest we simply stop tweaking.

Done.

  • test_remove_unused_sock_file_dir: likewise.

Done.

xfrout.py

  • _guess_remote: I'd prefer using with
            if socket.has_ipv6:
                with socket.fromfd(sock_fd, socket.AF_INET6, socket.SOCK_STREAM) \
                        as s:
                    peer = s.getpeername()
            else:
                # To make it work even on hosts without IPv6 support
                # (Any idea how to simulate this in test?)
                with socket.fromfd(sock_fd, socket.AF_INET, socket.SOCK_STREAM) \
                        as s:
                    peer = s.getpeername()
    
    although admittedly this version doesn't seem to look very simple. In any case, 'close()' can be done much earlier than the current position.

I've made it look like this now:

        if socket.has_ipv6:
            socktype = socket.AF_INET6
        else:
            # To make it work even on hosts without IPv6 support
            # (Any idea how to simulate this in test?)
            socktype = socket.AF_INET

        with socket.fromfd(sock_fd, socktype, socket.SOCK_STREAM) as sock:
            peer = sock.getpeername()

  • _sock_file_in_use: I'd use with here (and it would make it much simpler)

Done.

zonemgr_test

  • I don't see the need for tweaking stderr here in the first place. Also, zonemgr itself has this problem for the socketpair. What we should do is to close them appropriately.

Done.
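
The intended cleanup is along these lines (a sketch with illustrative names, not the actual zonemgr test code): close both ends of the socketpair created in setUp() when each test finishes.

    def tearDown(self):
        # Close both ends of the socketpair so no fds (or warnings) leak
        # between tests.
        self.read_sock.close()
        self.write_sock.close()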

sockcreator_test

  • at least this can be revised with with:
            p1 = socket.fromfd(t2.read_fd(), socket.AF_UNIX, socket.SOCK_STREAM)
    

Done.

socketsession_test

  • wouldn't it be better to close self.start_listen in tearDown() (and maybe in start_listen() if it can be called multiple times)?

This has been fixed now.

  • check_push_and_pop: at least sock should be able to be managed with with.

Done.

  • test_bad_pop: can use with:
            with self.accept_forwarder() as accept_sock:
                receiver = SocketSessionReceiver(accept_sock)
                s.close()
                self.assertRaises(SocketSessionError, receiver.pop)
    

Done.

comment:23 in reply to: ↑ 22 ; follow-up: Changed 7 years ago by jinmei

Replying to muks:

  • is it possible to make the test fail if this warning is triggered? since the need for it isn't obvious, it's pretty easy to introduce new ones without an automatic check to enforce it.
  • related to the previous point, I didn't check if this branch fixes all current warnings.

It is possible to assert whether a warning is generated by using `warnings.catch_warnings()', but as you note, not all warnings are fixed by this branch, as the reasons for some are not obvious.

Okay, then I suggest creating a separate followup ticket so we can
eventually fix all known issues and run automatic checker.

  • I'd simplify infinite_sender() as follows:
            with socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)[1] as write:
                msgq.register_socket(write)
                # Keep sending while it is not closed by the msgq
                try:
                    while True:
                        sender(msgq, write)
                except socket.error:
                    pass
    

This with statement leaves pair[0] open and generates a warning.

Ah, right, I indeed realized it but I'm not sure why I still made this
comment.

  • infinite_sender/send_many: like others, the intent of the close() isn't obvious to me. But in these cases I cannot come up with a cleaner alternative. If this is the least bad one, I'd at least clarify the intent in comments.

For the same reason as above, I've not used with for these. The .close() itself should not need explanation, no? We are simply closing the object when we are done with it.

I guess my concern is subtle, but I still think it's not obvious why
we need to do the explicit close in general (and someone may think
it's just redundant and try to "clean them up"). Also, in this
particular case there's quite a lot of code between the points of open
and close, which makes me wonder what would happen if any of
the code raises an exception (i.e., if the close() is really
necessary, it would just look wrong if the code isn't exception safe
in that sense, and the larger amount of code makes that harder to tell).

Maybe one compromise is to add a comment like this:

        # Explicitly close temporary socket pair as the Python interpreter
        # expects it.  It may not be 100% exception safe, but since this is
        # only for tests we prefer brevity.
        queue.close()
        out.close()

Some other comments follow:

general

Maybe we need a changelog?

ddns.py

  • the docstring for shutdown_cleanup is now not 100% correct, so I've committed a suggested change (part of which was moved from docstring). Please check.

ddns_test.py

  • the intent of calling shutdown_cleanup is probably not so obvious. I've committed a suggested comment. Please check.

xfrout.py

  • the revised version of _guess_remote looks nice, but I'd rename 'socktype' to 'sock_family' or something. For sockets, the term 'type' suggests SOCK_STREAM/SOCK_DGRAM, etc.
  • _sock_file_in_use: the revised version changed the semantics a bit in case socket.socket() raises an exception. I'd suggest revising it to:
            try:
                with socket.socket(socket.AF_UNIX) as sock:
                    sock.connect(sock_file)
            except socket.error as err:
                return False
            else:
                return True
    

comment:24 Changed 7 years ago by jinmei

  • Owner changed from jinmei to muks

comment:25 in reply to: ↑ 23 ; follow-up: Changed 7 years ago by muks

  • Owner changed from muks to jinmei

Replying to jinmei:

Okay, then I suggest creating a separate followup ticket so we can
eventually fix all known issues and run automatic checker.

#2043 has been filed for it.

I guess my concern is subtle, but I still think it's not obvious why
we need to do the explicit close in general (and someone may think
it's just redundant and try to "clean them up"). Also, in this
particular case there's quite a lot of code between the points of open
and close, which makes me wonder what would happen if any of
the code raises an exception (i.e., if the close() is really
necessary, it would just look wrong if the code isn't exception safe
in that sense, and the larger amount of code makes that harder to tell).

Maybe one compromise is to add a comment like this:

        # Explicitly close temporary socket pair as the Python interpreter
        # expects it.  It may not be 100% exception safe, but since this is
        # only for tests we prefer brevity.
        queue.close()
        out.close()

Done.

Some other comments follow:

general

Maybe we need a changelog?

How is this:

XXX.    [bug]   	muks
	A number of warnings reported by Python about unclosed file and
	socket objects were fixed. Some related code was also made safer.
        (Trac #128, git ...)

ddns.py

  • the docstring for shutdown_cleanup is now not 100% correct, so I've committed a suggested change (part of which was moved from docstring). Please check.

Looks fine :)

ddns_test.py

  • the intent of calling shutdown_cleanup is probably not so obvious. I've committed a suggested comment. Please check.

This looks good too.

xfrout.py

  • the revised version of _guess_remote looks nice, but I'd rename 'socktype' to 'sock_family' or something. For sockets, the term 'type' suggests SOCK_STREAM/SOCK_DGRAM, etc.

Updated the variable name.

  • _sock_file_in_use: the revised version changed the semantics a bit in case socket.socket() raises an exception. I'd suggest revising it to:
            try:
                with socket.socket(socket.AF_UNIX) as sock:
                    sock.connect(sock_file)
            except socket.error as err:
                return False
            else:
                return True
    

Done. :)

comment:26 in reply to: ↑ 25 Changed 7 years ago by jinmei

Replying to muks:

Looks okay, please merge.

One note:

XXX.    [bug]   	muks
	A number of warnings reported by Python about unclosed file and
	socket objects were fixed. Some related code was also made safer.
        (Trac #128, git ...)

"128" should be "1828".

comment:27 Changed 7 years ago by jinmei

  • Owner changed from jinmei to muks

comment:28 Changed 7 years ago by muks

  • Resolution set to fixed
  • Status changed from reviewing to closed

Merged to master in commit 464682a2180c672f1ed12d8a56fd0a5ab3eb96ed:

* e295b40 [1828] Make code safer in case socket.socket() raises an exception
* c37cc2d [1828] Rename variable to better match its use
* c4b20d7 [1828] Add comments about the close() calls
* 6ad0070 [1828] a couple of suggested clarification comments
* 1fff16e [1828] style fix: folded a long line.
* 74b4b2f [1828] Cleanup code by using with (for accept_sock)
* ff9dd7d [1828] Make code safer by using with (for sock)
* 8e741d8 [1828] Close the listen_sock in tearDown()
* 7d741f7 [1828] Cleanup code by using with
* d295c9e [1828] Close sockets created by socketpair when done
* 74dbc76 [1828] Don't redirect stderr in tests
* 9040c49 [1828] Cleanup code by using with
* bbeee53 [1828] Don't redirect stdout in some tests
* 86b4a93 [1828] Make code safer by using with
* 5744904 [1828] Make code safer by using with
* d96e91d [1828] Close MsgQ socket in except block
* 3dd46b7 [1828] Cleanup listen socket in DDNSServer.shutdown_cleanup()
* 1ece2ea [1828] Make code safer by using with
* 933d2b6 [1828] Use with statement to auto-close file handles when done using them
* aef9cc2 [1828] Close open handles when done using them (contd.)
* 6725498 [1828] Close open handles when done using them (contd.)
* 542d531 [1828] Close open handles when done using them (contd.)
* fdb3dfb [1828] Close open handles when done using them (contd.)
* eae421b [1828] Close open handles when done using them (contd.)
* a9d5033 [1828] Close open handles when done

Resolving as fixed. Thank you for the reviews Jinmei.

comment:29 Changed 7 years ago by muks

The use of with statements on socket objects caused a problem with CPython versions < 3.2, where socket objects are not context managers. So these changes were reverted to plain construction + explicit close(). This was done in the trac1828_2 branch.
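
If with-style cleanup were still wanted on those older interpreters, contextlib.closing() would be one portable option (a sketch; the socket path below is a placeholder):

    import socket
    from contextlib import closing

    sock_file = '/tmp/example.sock'   # placeholder path for illustration

    # closing() wraps any object that has a close() method, giving with-style
    # cleanup even where socket objects are not context managers (CPython < 3.2).
    try:
        with closing(socket.socket(socket.AF_UNIX)) as sock:
            sock.connect(sock_file)
        in_use = True
    except socket.error:
        in_use = False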
