# core
z
Hey team, I'm looking into building out nftables support to function similarly to the existing iptables table. As noted in the issue here, nftables does not seem to expose the same proc interface that the iptables functionality relies on. The nft docs here explain the available userspace interfaces. After exploring the options a bit, it seems like we could bring in libnftables to export rules as JSON and parse rows from there, with a similar output structure to the iptables table. libnftables appears to be a simple, stable API from the history we have available, but I wanted to get some early feedback: how do others feel about bringing in a library like this? Is this a realistic approach?
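To make the JSON-to-rows idea concrete, here's a rough sketch assuming the ruleset JSON shape documented in libnftables-json(5) (the same thing "nft -j list ruleset" prints). The sample document, function name, and column choices are all illustrative, not an actual osquery table spec:

```python
import json

# Sample output in the shape produced by libnftables' JSON export
# ("nft -j list ruleset"); schema per libnftables-json(5). The values
# here are made up for illustration.
SAMPLE = """
{"nftables": [
  {"metainfo": {"version": "1.0.2", "json_schema_version": 1}},
  {"table": {"family": "ip", "name": "filter", "handle": 1}},
  {"chain": {"family": "ip", "table": "filter", "name": "INPUT",
             "handle": 1, "type": "filter", "hook": "input",
             "prio": 0, "policy": "accept"}},
  {"rule": {"family": "ip", "table": "filter", "chain": "INPUT",
            "handle": 2,
            "expr": [{"match": {"op": "==",
                                "left": {"payload": {"protocol": "tcp",
                                                     "field": "dport"}},
                                "right": 22}},
                     {"accept": null}]}}
]}
"""

def rules_to_rows(doc):
    """Flatten the nftables JSON ruleset into rows, one per rule,
    loosely mirroring the columns of osquery's iptables table."""
    rows = []
    for obj in doc.get("nftables", []):
        rule = obj.get("rule")
        if rule is None:
            continue  # skip metainfo/table/chain objects
        rows.append({
            "family": rule.get("family", ""),
            "table": rule.get("table", ""),
            "chain": rule.get("chain", ""),
            "handle": rule.get("handle", -1),
            # keep the expression list as raw JSON for now
            "expr": json.dumps(rule.get("expr", [])),
        })
    return rows

rows = rules_to_rows(json.loads(SAMPLE))
```

In the real table this JSON would come from libnftables itself (buffered output from a "list ruleset" command) rather than shelling out to nft, but the row-flattening step would look about the same.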
s
My gut sense is that nftables is something we’ll need, and if we can only get it via a library then we should pull in one of those. @alessandrogario @Stefano Bonicatti y’all are linux folks, what do you think? (FYI, Zack works with me at Kolide)
s
I mean, the usual issues are:
1. An additional dependency to manage
2. The language used by the dependency and how complicated it is to build (what dependencies it wants)
3. Since the library interfaces with the system, the question is how backward compatible it is, since some of those interfaces may have changed between the current minimum supported distro version (CentOS 7) and whatever is considered current
And as far as I recall, some of the Netlink protocol structures, as exposed by the system headers, do change
so the question is whether those higher-level libraries are handling that
maybe 4. build time. But I think those are C libraries, which are normally small/quick to build anyway. Just to clarify, are we talking about https://netfilter.org/projects/libnftnl/index.html ? (and the https://netfilter.org/projects/libmnl/index.html dependency)
Oh I see, there "is" a libnftables, but it's inside the nftables project itself
so it's technically 3 libraries
s
Yeah, (1) is the thing that always worries me most. Zack tells me that it’s basically the only way he’s found.
z
yeah, a second option is using only libnftnl (+libmnl) and working with the structs a lot more intimately. libnftables would be an additional library, but it would do a lot of the work for us and could let us work directly from JSON
s
That being said, for 3. the only way I know is to manually check all the structures used across versions, or build a test binary with the osquery toolchain and test it at both ends of the distro versions we support. That still doesn't guarantee that things won't break in the future, but unfortunately backward compatibility is not something that gets talked about much, since the common expectation is that you're rebuilding for each kernel as necessary
z
okay I can dig into the compatibility a bit more there, thank you!
s
With libaudit and the specific netlink protocol it uses, for instance, you have to send the size of your data structures with the requests to the kernel (so that the kernel knows what you are using), and for that to work those data structures only change at the end, so no earlier bytes get shifted around. The other direction is the kernel response, where the expected buffer size is fixed to the maximum and the userspace has to interpret it correctly. And here there's the second issue obviously: if the userspace thinks the structure should be bigger (and doesn't check how many bytes it has actually been handed), it will parse something bogus. For instance we can see that audit_status, which is used internally in the library to store the answer from the kernel about the status of audit, has changed between 3.19 and latest: https://elixir.bootlin.com/linux/v3.19.8/source/include/uapi/linux/audit.h#L406 https://elixir.bootlin.com/linux/latest/source/include/uapi/linux/audit.h#L464
The check on that last member is only done in the tools, so I guess that the new member remains 0-initialized.
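To make the length-checking point concrete, here's a small Python sketch with a made-up fixed-size layout standing in for something like audit_status (the real struct and field names differ): a consumer that checks how many bytes the kernel actually handed back can parse both the old layout and the grown one safely, defaulting the appended field to 0.

```python
import struct

# Hypothetical kernel reply struct: old kernels send 3 u32 fields,
# newer kernels append a 4th at the end. Field names are illustrative,
# not the real audit_status layout.
OLD_FMT = "<3I"   # mask, enabled, failure
NEW_FMT = "<4I"   # mask, enabled, failure, plus a new trailing field

def parse_reply(buf):
    """Parse only as many trailing fields as the kernel actually sent,
    instead of blindly assuming the newest layout."""
    if len(buf) >= struct.calcsize(NEW_FMT):
        mask, enabled, failure, extra = struct.unpack_from(NEW_FMT, buf)
    elif len(buf) >= struct.calcsize(OLD_FMT):
        mask, enabled, failure = struct.unpack_from(OLD_FMT, buf)
        extra = 0  # mirrors the 0-initialized new member
    else:
        raise ValueError("reply shorter than the oldest known layout")
    return {"mask": mask, "enabled": enabled,
            "failure": failure, "extra": extra}

old_reply = struct.pack("<3I", 1, 1, 2)     # what an old kernel sends
new_reply = struct.pack("<4I", 1, 1, 2, 7)  # what a current kernel sends
```

The bug described above is exactly the elif branch missing: unpacking NEW_FMT against the old 12-byte reply would either raise or, with a maximum-sized zeroed buffer, silently read bytes the kernel never wrote.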
s
Do we have a good pointer for how to bring in new libraries? I know we can dig through commit logs; I can’t remember if there’s a better doc
s
I mean, there isn't a playbook; it really depends library by library, on which build system it uses. For configuring the library you can check one of the READMEs we have for libraries that use a similar build system (CMake vs autotools/configure script). You should be using CentOS 7 to do the first configuration. The general idea is to use the osquery toolchain, remove any dependency which is not strictly necessary, and have whatever has dependencies point to the ones you're introducing (or that already exist) in osquery. All generated files that are necessary for the build have to be saved in the project. Otherwise from there it's "just" adding the library as a submodule pointing to a src directory under another dir with the name of the library here: https://github.com/osquery/osquery/tree/master/libraries/cmake/source Then you have to write the CMakeLists.txt that will build the library as its original build system was doing, but with everything hardcoded. Finally create a .cmake file here: https://github.com/osquery/osquery/tree/master/libraries/cmake/source/modules which imports the submodule, and add the name of the libraries here too: https://github.com/osquery/osquery/blob/4a8d99b87be22cf0352a3cf1b7320ffc47461072/CMakeLists.txt#L124
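As a very rough illustration of those last steps, the module file boils down to hardcoding what the original build system was doing. Everything below (file names, target names, variables) is hypothetical; copy the shape from an existing module under libraries/cmake/source/modules rather than from this sketch.

```cmake
# Hypothetical sketch of a libraries/cmake/source/modules entry for an
# imagined vendored nftables; mirror an existing module instead.
function(nftablesMain)
  set(library_root "${CMAKE_CURRENT_SOURCE_DIR}/libraries/cmake/source/nftables/src")

  # Build the sources the original autotools build would have compiled,
  # with configure results hardcoded (generated headers checked in).
  add_library(thirdparty_nftables
    "${library_root}/src/libnftables.c"
    # ... remaining source files, listed explicitly
  )

  target_include_directories(thirdparty_nftables PUBLIC
    "${library_root}/include"
  )

  # Point its dependencies at the other vendored libraries
  target_link_libraries(thirdparty_nftables PUBLIC
    thirdparty_libnftnl
    thirdparty_libmnl
  )
endfunction()

nftablesMain()
```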
z
this is all really helpful, thank you!
s
actually, forgot one step (but this would be when you open the PR): this file needs to be updated; especially check if the libraries have an entry in the NIST database: https://github.com/osquery/osquery/blob/master/libraries/third_party_libraries_manifest.json