#210 BIND does not start (../../../lib/dns/rdataset.c:252: REQUIRE(rdataset->methods != ((void *)0)) failed)
Opened by ruslanvd. Modified

Hi.
We run a FreeIPA server; after it was restarted, BIND no longer starts.
On subsequent starts/restarts, it stops after a few seconds.
All attempts to start the service with its existing configuration lead nowhere.
We use 40 DNS zones.
After disabling and re-enabling certain DNS zones used by FreeIPA, BIND was able to start, but only after disabling all reverse zones containing PTR records, plus one more zone (for some reason the service would not start with that one either, although it contained no errors, syntactic or otherwise). All of these zones worked without problems until 13.10.2021; no syntax or other errors were found in them.

Please tell me how to solve this problem.

Thank you in advance.


cat /etc/*release

Fedora release 34 (Thirty Four)


named -v

BIND 9.16.21-RH (Extended Support Version)


dnf list installed | grep bind

bind.x86_64 32:9.16.21-1.fc34 @updates
bind-dnssec-doc.noarch 32:9.16.21-1.fc34 @updates
bind-dnssec-utils.x86_64 32:9.16.21-1.fc34 @updates
bind-dyndb-ldap.x86_64 11.9-5.fc34 @updates
bind-libs.x86_64 32:9.16.21-1.fc34 @updates
bind-license.noarch 32:9.16.21-1.fc34 @updates
bind-pkcs11-libs.x86_64 32:9.16.21-1.fc34 @updates
bind-pkcs11-utils.x86_64 32:9.16.21-1.fc34 @updates
bind-utils.x86_64 32:9.16.21-1.fc34 @updates
jackson-databind.noarch 2.11.4-2.fc34 @fedora
python3-bind.noarch 32:9.16.21-1.fc34 @updates
rpcbind.x86_64 1.2.6-0.fc34 @updates
samba-winbind.x86_64 2:4.14.8-0.fc34 @updates
samba-winbind-modules.x86_64 2:4.14.8-0.fc34 @updates


The startup behavior is described in the attached file, log_start.txt.



log_start contains a dump from a different version, 9.16.20. But I doubt there was any change related to this issue in the BIND 9.16.21 release. Would it be possible to share at least a query log or, better, the core dump produced?

I think it might be caused by some odd referral to one of the zones, especially if the failure did not begin with a recent version upgrade. But without additional data, this is just speculation.

Can you please create a bug in Bugzilla and attach some core dumps privately?


Thanks. We have already solved our problem. The cause was that we used a large number of NS servers, and their data exceeded 512 bytes. Shortly before the problem appeared, we had added a few more NS servers from our other DCs, which may have been the trigger. These answers helped: https://serverfault.com/questions/726211/how-many-is-too-many-name-servers-ns/726567
It would be nice if future FreeIPA releases provided all such restrictions and limits for DNS; in particular, the web interface should refuse to add data in excess of the limits.
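To illustrate the arithmetic behind this, here is a rough sketch of how an NS response can outgrow the classic 512-byte plain-UDP DNS message limit. The zone and server names are hypothetical, and the estimate assumes the owner name is compressed to a 2-byte pointer while nameserver names are not compressed at all (a worst case; real responses compress more):

```python
# Rough estimate of DNS wire-format size for an NS response (hypothetical names).
# Assumptions: owner name in each answer is a 2-byte compression pointer;
# nameserver names are NOT compressed (worst case). Real responses compress more.

def wire_len(name: str) -> int:
    """Uncompressed domain name length on the wire:
    one length byte per label, the label bytes, plus the terminating root byte."""
    labels = [l for l in name.rstrip(".").split(".") if l]
    return sum(1 + len(l) for l in labels) + 1

def ns_response_size(qname: str, nameservers: list[str]) -> int:
    header = 12                      # fixed 12-byte DNS header
    question = wire_len(qname) + 4   # QNAME + QTYPE + QCLASS
    answers = 0
    for ns in nameservers:
        # 2-byte owner pointer + TYPE/CLASS/TTL/RDLENGTH (10 bytes) + RDATA
        answers += 2 + 10 + wire_len(ns)
    return header + question + answers

# Hypothetical zone with many nameservers spread across data centers:
zone = "example.internal."
servers = [f"dc{i:02d}-ns.example.internal." for i in range(1, 21)]
size = ns_response_size(zone, servers)
print(size, "bytes;", "exceeds" if size > 512 else "fits in", "the 512-byte plain-UDP limit")
```

With around 20 nameservers of this name length, the estimate lands well past 512 bytes, which matches the trigger described above; EDNS0 or TCP is then needed to carry the full answer.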
Thanks again.

Metadata Update from @ruslanvd:
- Issue close_status updated to: fixed
- Issue status updated to: Closed (was: Open)

Metadata Update from @ruslanvd:
- Issue status updated to: Open (was: Closed)

I don't think there should be any such limit, and certainly not one triggered merely by a higher number of nameservers and the resulting large response to an NS query. It should not be limited in the web interface; such a limit belongs, at most, in best-practices documentation. named MUST NOT crash when this limit is reached. It should be able to fall back to TCP, where no such limit matters.

Thanks for sharing what might trigger it. I will try to reproduce it and eventually fix it.
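The TCP fallback mentioned above works because a server that cannot fit its answer into the UDP payload sets the TC (truncation) bit in the response header, and the client then retries the same query over TCP. A minimal sketch of that check, using only the standard library and fabricated header bytes for illustration:

```python
# Minimal sketch of the TC-bit check a resolver performs before retrying over TCP.
# The 16-bit flags word at bytes 2-3 of the DNS header carries TC at mask 0x0200.
import struct

def is_truncated(message: bytes) -> bool:
    """Return True if the DNS message has the TC (truncation) bit set."""
    if len(message) < 12:
        raise ValueError("DNS message shorter than the 12-byte header")
    (flags,) = struct.unpack("!H", message[2:4])
    return bool(flags & 0x0200)

# Fabricated 12-byte response headers for illustration (ID=0x1234, QR bit set):
truncated_hdr = struct.pack("!HHHHHH", 0x1234, 0x8200, 1, 0, 0, 0)
complete_hdr  = struct.pack("!HHHHHH", 0x1234, 0x8000, 1, 0, 0, 0)
print(is_truncated(truncated_hdr))  # True  -> client retries the query over TCP
print(is_truncated(complete_hdr))   # False -> UDP answer is complete
```

A client that sees `True` here should discard the truncated UDP answer and reissue the query over TCP, where the 512-byte limit does not apply.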


Thanks. Should we perhaps reproduce this on a test node and send you all the results?
