Opened 8 years ago

Closed 8 years ago

#1772 closed defect (fixed)

"two bind10 instances" lettuce test fails

Reported by: jinmei
Owned by: jinmei
Priority: medium
Milestone: Sprint-20120403
Component: ~Boss of BIND (obsolete)
Version:
Keywords:
Cc:
CVSS Scoring:
Parent Tickets:
Sensitive: no
Defect Severity: N/A
Sub-Project: DNS
Feature Depending on Ticket:
Estimated Difficulty: 5
Add Hours to Ticket: 0
Total Hours: 1.75
Internal?: no

Description

I've not looked into the details, but it's always reproducible in my
environment.

  Scenario: two bind10 instances                                                                   # features/example.feature:128
    When I start bind10 with configuration example.org.config as bind10_one                        # features/terrain/bind10_control.py:22
    Traceback (most recent call last):
      File "/Library/Python/2.7/site-packages/lettuce/core.py", line 117, in __call__
        ret = self.function(self.step, *args, **kw)
      File "/Users/jinmei/src/isc/git/bind10-1747/tests/lettuce/features/terrain/bind10_control.py", line 67, in start_bind10
        "BIND10_STARTUP_ERROR"])
      File "/Users/jinmei/src/isc/git/bind10-1747/tests/lettuce/features/terrain/terrain.py", line 317, in wait_for_stderr_str
        only_new)
      File "/Users/jinmei/src/isc/git/bind10-1747/tests/lettuce/features/terrain/terrain.py", line 220, in wait_for_stderr_str
        strings, only_new)
      File "/Users/jinmei/src/isc/git/bind10-1747/tests/lettuce/features/terrain/terrain.py", line 205, in _wait_for_output_str
        assert False, "Timeout waiting for process output: " + str(strings)
    AssertionError: Timeout waiting for process output: ['BIND10_STARTUP_COMPLETE', 'BIND10_STARTUP_ERROR']

Subtickets

Attachments (1)

Change History (13)

comment:1 follow-up: Changed 8 years ago by jelte

Can you upload the stderr file left by this test (from the tests/lettuce/output/ dir)?

comment:2 in reply to: ↑ 1 Changed 8 years ago by jinmei

Replying to jelte:

Can you upload the stderr file left by this test (from the tests/lettuce/output/ dir)?

done.

comment:3 Changed 8 years ago by jelte

  • Estimated Difficulty changed from 0 to 5

comment:4 Changed 8 years ago by jelte

  • Milestone changed from Next-Sprint-Proposed to Sprint-20120403

comment:5 Changed 8 years ago by jinmei

  • Owner set to jinmei
  • Status changed from new to accepted

comment:6 Changed 8 years ago by jinmei

trac1772 is ready for review.

It turns out that on my laptop there's an application (always running
from boot to shutdown) that uses a port (127.0.0.1:47807) also used by
the failing tests.

In this branch I used [::1]:47807 and confirmed it solved *my* problem.
Of course this is a hack, but I thought it was the least bad option
considering the various tradeoffs. See the commit log for e3f4b29.

As Jelte guessed, #1795 failed for the same reason, and the same hack
solved that problem, too. So I also fixed it in this ticket. If the
proposed solution is okay, we can also close #1795.
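For illustration (this snippet is not part of the branch; the function name is hypothetical), the quickest way to confirm this kind of conflict is simply to try binding the address yourself and check for EADDRINUSE:

```python
import errno
import socket

def port_in_use(host, port):
    """Return True if (host, port) is already bound by some process."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind((host, port))
    except socket.error as e:
        # Another process already holds the address.
        if e.errno == errno.EADDRINUSE:
            return True
        raise
    finally:
        s.close()
    return False

# e.g. port_in_use('127.0.0.1', 47807) returns True on a machine where
# some other application holds the port the lettuce tests want.
```

Running this against 127.0.0.1:47807 on an affected machine would make the cause of the startup timeout obvious without digging through stderr.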

comment:7 Changed 8 years ago by jinmei

  • Owner changed from jinmei to UnAssigned
  • Status changed from accepted to reviewing

comment:8 follow-up: Changed 8 years ago by jelte

I don't suppose there's a range of ports that is reserved for testing/temporary services?

comment:9 in reply to: ↑ 8 Changed 8 years ago by jinmei

Replying to jelte:

I don't suppose there's a range of ports that is reserved for testing/temporary services?

"Dynamic ports" (49152-65535) are probably better (see RFC 6335),
although they then have another issue of conflicting with client
apps that happen to use those ports. So, basically, I don't think
there's an easy solution like just choosing a different fixed port.
(I'm not saying there are no solutions, but non-easy solutions will
be, well, non-easy.)

comment:10 follow-up: Changed 8 years ago by jelte

  • Owner changed from UnAssigned to jinmei
  • Total Hours changed from 0 to 0.5

I'm fine with any range of ports or addresses, btw. I also don't mind ::1, though I do wonder whether we want to be able to run this on a non-v6 machine (if not, then all is fine imo; in fact, it may be good to explicitly test v6 as well). I was wondering whether perhaps we could also default to different 127.0.0.0/8 addresses (i.e. 127.0.0.5 or whatever).

I don't really feel like building in some unused-port detection for this class of listening ports, but perhaps we could write a small tool that finds one (obviously, such a setup would be subject to race conditions if other applications steal the port between finding it and running the tests), though even then I wonder if it's worth the trouble.

I do worry about that loop, but I'm not sure how it can be prevented if it's talking to a totally different application.

Anyway, I think this branch is fine to merge as an interim solution.
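The "small tool that finds one" mentioned above could be sketched as follows (a hypothetical helper, not part of the branch): bind to port 0 and let the OS pick an unused port. As noted, this is inherently racy, since another process may grab the port between this call and the tests actually using it.

```python
import socket

def find_free_port(host='127.0.0.1'):
    """Ask the OS for a currently unused TCP port by binding port 0.

    Racy by nature: the port is released again before the caller
    uses it, so another process could steal it in the meantime.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind((host, 0))  # port 0 means "pick any free port"
        return s.getsockname()[1]
    finally:
        s.close()
```

The test harness would substitute the returned port into the bind10 configuration before starting the processes; the race window is small enough that this would likely have avoided the conflict reported here.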

comment:11 in reply to: ↑ 10 Changed 8 years ago by jinmei

Replying to jelte:

I'm fine with any range of ports or addresses, btw. I also don't mind ::1, though I do wonder whether we want to be able to run this on a non-v6 machine (if not, then all is fine imo; in fact, it may be good to explicitly test v6 as well). I was wondering whether perhaps we could also default to different 127.0.0.0/8 addresses (i.e. 127.0.0.5 or whatever).

The system tests use some 10/8 addresses and IPv6 ULAs. Abusing 10/8
would also be bad, but as long as we keep using the system tests and
keep following these conventions for them, adopting the same convention
here may be one easier option. If this sounds like an acceptable option,
we can create a ticket for it.

Anyway, I think this branch is fine to merge as an interim solution.

Okay, merge done, closing this ticket.

Thanks,

comment:12 Changed 8 years ago by jinmei

  • Resolution set to fixed
  • Status changed from reviewing to closed
  • Total Hours changed from 0.5 to 1.75