Merge remote-tracking branch 'nixos/master' into bundlerenv_usecases

This commit is contained in:
Judson 2017-04-30 12:27:52 -07:00
commit b2065a2790
No known key found for this signature in database
GPG key ID: 1817B08954BF0B7D
891 changed files with 32098 additions and 18808 deletions

View file

@ -37,16 +37,9 @@
</para>
<para>
In Nixpkgs, these three platforms are defined as attribute sets under the names <literal>buildPlatform</literal>, <literal>hostPlatform</literal>, and <literal>targetPlatform</literal>.
All are guaranteed to contain at least a <varname>platform</varname> field, which contains detailed information on the platform.
All three are always defined at the top level, so one can get at them just like a dependency in a function that is imported with <literal>callPackage</literal>:
<programlisting>{ stdenv, buildPlatform, hostPlatform, fooDep, barDep, .. }: ...</programlisting>
</para>
<warning><para>
These platforms should all have the same structure in all scenarios, but that is currently not the case.
When not cross-compiling, they will each contain a <literal>system</literal> field with a short 2-part, hyphen-separated summary string name for the platform.
But, when cross-compiling, <literal>hostPlatform</literal> and <literal>targetPlatform</literal> may instead contain <literal>config</literal> with a fuller 3- or 4-part string in the manner of LLVM.
We should have all 3 platforms always contain both, and maybe give <literal>config</literal> a better name while we are at it.
</para></warning>
<variablelist>
<varlistentry>
<term><varname>buildPlatform</varname></term>
@ -83,7 +76,7 @@
Nixpkgs tries to avoid this where possible too, but still, because the concept of a target platform is so ingrained now in Autoconf and other tools, it is best to support it as is.
Tools like LLVM that don't need up-front target platforms can safely ignore it like normal packages, and it will do no harm.
</para>
</listitem>
</listitem>
</varlistentry>
</variablelist>
<note><para>
@ -91,6 +84,56 @@
This field is defined as <varname>hostPlatform</varname> when the host and build platforms differ, but is otherwise not defined at all.
This field is obsolete and will soon disappear—please do not use it.
</para></note>
<para>
The exact schema these fields follow is a bit ill-defined due to a long and convoluted evolution, but this is slowly being cleaned up.
For now, here are a few fields you can count on them containing:
</para>
<variablelist>
<varlistentry>
<term><varname>system</varname></term>
<listitem>
<para>
This is a two-component shorthand for the platform.
Examples of this would be "x86_64-darwin" and "i686-linux"; see <literal>lib.systems.doubles</literal> for more.
This format isn't very standard, but has built-in support in Nix, such as the <varname>builtins.currentSystem</varname> impure string.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term><varname>config</varname></term>
<listitem>
<para>
This is a 3- or 4- component shorthand for the platform.
Examples of this would be "x86_64-unknown-linux-gnu" and "aarch64-apple-darwin14".
This is a standard format called the "LLVM target triple", as it was pioneered by LLVM and is traditionally used just for the <varname>targetPlatform</varname>.
This format is strictly more informative than the previous field's format, which could analogously be termed the "Nix host double".
This needs a better name than <varname>config</varname>!
</para>
</listitem>
</varlistentry>
<varlistentry>
<term><varname>parsed</varname></term>
<listitem>
<para>
This is a Nix representation of a parsed LLVM target triple with white-listed components.
This can be specified directly, or actually parsed from the <varname>config</varname>.
[Technically, only one need be specified and the others can be inferred, though the precision of inference may not be very good.]
See <literal>lib.systems.parse</literal> for the exact representation, along with some <literal>is*</literal> predicates.
These predicates are superior to the ones in <varname>stdenv</varname> as they aren't tied to the build platform (host, as previously discussed, would be a saner default).
</para>
</listitem>
</varlistentry>
<varlistentry>
<term><varname>platform</varname></term>
<listitem>
<para>
This is, quite frankly, a dumping ground of ad-hoc settings (it's an attribute set).
See <literal>lib.systems.platforms</literal> for examples; there's hopefully one in there that will work verbatim for each platform one is working on.
Please help us triage these flags and give them better homes!
</para>
</listitem>
</varlistentry>
</variablelist>
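<para>
As a hedged sketch (only the documented field names are real; the other attribute names are illustrative), a package function can consume these fields directly:
<programlisting>{ lib, hostPlatform, ... }:

{
  # Short Nix double, e.g. "x86_64-linux":
  nixDouble  = hostPlatform.system;
  # Fuller LLVM-style triple, e.g. "x86_64-unknown-linux-gnu":
  llvmTriple = hostPlatform.config;
  # Structured predicate over the parsed representation:
  is64bit    = lib.systems.parse.is64Bit hostPlatform.parsed;
  # Ad-hoc platform settings, when present:
  kernelArch = hostPlatform.platform.kernelArch or null;
}</programlisting>
</para>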
</section>
<section>
@ -124,6 +167,11 @@
Because of this, a best-of-both-worlds solution is in the works with no splicing or explicit access of <varname>buildPackages</varname> needed.
For now, feel free to use either method.
</para>
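<para>
As an illustrative sketch (<varname>fooDep</varname> is a placeholder, as above), explicit access of <varname>buildPackages</varname> looks like this:
<programlisting>{ stdenv, buildPackages, fooDep }:

stdenv.mkDerivation {
  name = "cross-example-1.0";
  src = ./.;
  # Run the build-platform variant of fooDep at build time,
  # while linking against the host-platform fooDep:
  nativeBuildInputs = [ buildPackages.fooDep ];
  buildInputs = [ fooDep ];
}</programlisting>
</para>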
<note><para>
There is also a "backlink" <varname>__targetPackages</varname>, yielding a package set whose <varname>buildPackages</varname> is the current package set.
This is a hack, though, to accommodate compilers with lousy build systems.
Please do not use this unless you are absolutely sure you are packaging such a compiler and there is no other way.
</para></note>
</section>
</section>

View file

@ -628,6 +628,9 @@ with import <nixpkgs> {};
In contrast to `python.buildEnv`, `python.withPackages` does not support the more advanced options
such as `ignoreCollisions = true` or `postBuild`. If you need them, you have to use `python.buildEnv`.
Python 2 namespace packages may provide `__init__.py` files that collide. In that case `python.buildEnv`
should be used with `ignoreCollisions = true`.
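A hedged sketch of such an environment (the packages are only examples of Python 2 namespace packages):

```nix
with import <nixpkgs> {};

(python27.buildEnv.override {
  extraLibs = with python27Packages; [ zope_interface zope_component ];
  # Tolerate the colliding __init__.py files these namespace packages ship:
  ignoreCollisions = true;
}).env
```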
### Development mode
Development or editable mode is supported. To develop Python packages

View file

@ -16,8 +16,7 @@ $ cd sensu
$ cat > Gemfile
source 'https://rubygems.org'
gem 'sensu'
$ nix-shell -p bundler --command "bundler package --path /tmp/vendor/bundle"
$ $(nix-build '<nixpkgs>' -A bundix)/bin/bundix
$ $(nix-build '<nixpkgs>' -A bundix)/bin/bundix --magic
$ cat > default.nix
{ lib, bundlerEnv, ruby }:

View file

@ -17,8 +17,8 @@ into the `environment.systemPackages` or bring them into scope with
`nix-shell -p rustStable.rustc -p rustStable.cargo`.
There are also `rustBeta` and `rustNightly` package sets available.
These are not updated very regularly. For daily builds see
[Using the Rust nightlies overlay](#using-the-rust-nightlies-overlay)
These are not updated very regularly. For daily builds use either rustup from
nixpkgs or the [Rust nightlies overlay](#using-the-rust-nightlies-overlay).
## Packaging Rust applications

View file

@ -16,7 +16,6 @@
<section><title>Installing a split package</title>
<para>When installing a package via <varname>systemPackages</varname> or <command>nix-env</command> you have several options:</para>
<warning><para>Currently <command>nix-env</command> almost always installs all outputs until https://github.com/NixOS/nix/pull/815 gets merged.</para></warning>
<itemizedlist>
<listitem><para>You can install particular outputs explicitly, as each is available in the Nix language as an attribute of the package. The <varname>outputs</varname> attribute contains a list of output names.</para></listitem>
<listitem><para>You can let it use the default outputs. These are handled by <varname>meta.outputsToInstall</varname> attribute that contains a list of output names.</para>

View file

@ -2,7 +2,7 @@
let
inherit (builtins) head tail length;
inherit (import ./trivial.nix) or;
inherit (import ./trivial.nix) and or;
inherit (import ./default.nix) fold;
inherit (import ./strings.nix) concatStringsSep;
inherit (import ./lists.nix) concatMap concatLists all deepSeqList;
@ -116,7 +116,7 @@ rec {
listToAttrs (concatMap (name: let v = set.${name}; in if pred name v then [(nameValuePair name v)] else []) (attrNames set));
/* Filter an attribute set recursivelly by removing all attributes for
/* Filter an attribute set recursively by removing all attributes for
which the given predicate return false.
Example:
@ -334,7 +334,7 @@ rec {
value = f name (catAttrs name sets);
}) names);
/* Implentation note: Common names appear multiple times in the list of
/* Implementation note: Common names appear multiple times in the list of
names, hopefully this does not affect the system because the maximal
laziness avoid computing twice the same expression and listToAttrs does
not care about duplicated attribute names.
@ -353,7 +353,7 @@ rec {
zipAttrs = zipAttrsWith (name: values: values);
/* Does the same as the update operator '//' except that attributes are
merged until the given pedicate is verified. The predicate should
merged until the given predicate is verified. The predicate should
accept 3 arguments which are the path to reach the attribute, a part of
the first attribute set and a part of the second attribute set. When
the predicate is verified, the value of the first attribute set is
@ -417,18 +417,15 @@ rec {
/* Returns true if the pattern is contained in the set. False otherwise.
FIXME(zimbatm): this example doesn't work !!!
Example:
sys = mkSystem { }
matchAttrs { cpu = { bits = 64; }; } sys
matchAttrs { cpu = {}; } { cpu = { bits = 64; }; }
=> true
*/
matchAttrs = pattern: attrs:
fold or false (attrValues (zipAttrsWithNames (attrNames pattern) (n: values:
matchAttrs = pattern: attrs: assert isAttrs pattern;
fold and true (attrValues (zipAttrsWithNames (attrNames pattern) (n: values:
let pat = head values; val = head (tail values); in
if length values == 1 then false
else if isAttrs pat then isAttrs val && matchAttrs head values
else if isAttrs pat then isAttrs val && matchAttrs pat val
else pat == val
) [pattern attrs]));

View file

@ -39,7 +39,7 @@ let inherit (lib) nv nvs; in
#
# issues:
# * its complicated to understand
# * some "features" such as exact merge behaviour are burried in mergeAttrBy
# * some "features" such as exact merge behaviour are buried in mergeAttrBy
# and defaultOverridableDelayableArgs assuming the default behaviour does
# the right thing in the common case
# * Eelco once said using such fix style functions are slow to evaluate
@ -48,7 +48,7 @@ let inherit (lib) nv nvs; in
# / add patches the way you want without having to declare function arguments
#
# nice features:
# declaring "optional featuers" is modular. For instance:
# declaring "optional features" is modular. For instance:
# flags.curl = {
# configureFlags = ["--with-curl=${curl.dev}" "--with-curlwrappers"];
# buildInputs = [curl openssl];

View file

@ -10,7 +10,7 @@ rec {
/* `overrideDerivation drv f' takes a derivation (i.e., the result
of a call to the builtin function `derivation') and returns a new
derivation in which the attributes of the original are overriden
derivation in which the attributes of the original are overridden
according to the function `f'. The function `f' is called with
the original derivation attributes.
@ -167,7 +167,7 @@ rec {
/* Make a set of packages with a common scope. All packages called
with the provided `callPackage' will be evaluated with the same
arguments. Any package in the set may depend on any other. The
`override' function allows subsequent modification of the package
`overrideScope' function allows subsequent modification of the package
set in a consistent way, i.e. all packages in the set will be
called with the overridden packages. The package sets may be
hierarchical: the packages in the set are called with the scope
@ -177,7 +177,7 @@ rec {
let self = f self // {
newScope = scope: newScope (self // scope);
callPackage = self.newScope {};
override = g:
overrideScope = g:
makeScope newScope
(self_: let super = f self_; in super // g super self_);
packages = f;

View file

@ -1,3 +1,8 @@
/* Library of low-level helper functions for nix expressions.
*
* Please implement (mostly) exhaustive unit tests
* for new functions in `./tests.nix'.
*/
let
# trivial, often used functions
@ -22,8 +27,7 @@ let
# constants
licenses = import ./licenses.nix;
platforms = import ./platforms.nix;
systems = import ./systems.nix;
systems = import ./systems;
# misc
debug = import ./debug.nix;
@ -42,13 +46,15 @@ in
attrsets lists strings stringsWithDeps
customisation maintainers meta sources
modules options types
licenses platforms systems
licenses systems
debug generators misc
sandbox fetchers filesystem;
# back-compat aliases
platforms = systems.doubles;
}
# !!! don't include everything at top-level; perhaps only the most
# commonly used functions.
// trivial // lists // strings // stringsWithDeps // attrsets // sources
// options // types // meta // debug // misc // modules
// systems
// customisation

View file

@ -253,11 +253,11 @@ rec {
# eg { a = 7; } { a = [ 2 3 ]; } becomes { a = [ 7 2 3 ]; }
mergeAttrsConcatenateValues = mergeAttrsWithFunc ( a: b: (toList a) ++ (toList b) );
# merges attributes using //, if a name exisits in both attributes
# merges attributes using //, if a name exists in both attributes
# an error will be triggered unless its listed in mergeLists
# so you can mergeAttrsNoOverride { buildInputs = [a]; } { buildInputs = [a]; } {} to get
# { buildInputs = [a b]; }
# merging buildPhase does'nt really make sense. The cases will be rare where appending /prefixing will fit your needs?
# merging buildPhase doesn't really make sense. The cases will be rare where appending /prefixing will fit your needs?
# in these cases the first buildPhase will override the second one
# ! deprecated, use mergeAttrByFunc instead
mergeAttrsNoOverride = { mergeLists ? ["buildInputs" "propagatedBuildInputs"],

View file

@ -1,4 +1,4 @@
# snippets that can be shared by mutliple fetchers (pkgs/build-support)
# snippets that can be shared by multiple fetchers (pkgs/build-support)
{
proxyImpureEnvVars = [

View file

@ -357,6 +357,11 @@ lib.mapAttrs (n: v: v // { shortName = n; }) rec {
fullName = "Lucent Public License v1.02";
};
miros = {
fullname = "MirOS License";
url = https://opensource.org/licenses/MirOS;
};
# spdx.org does not (yet) differentiate between the X11 and Expat versions
# for details see http://en.wikipedia.org/wiki/MIT_License#Various_versions
mit = spdx {
@ -526,6 +531,11 @@ lib.mapAttrs (n: v: v // { shortName = n; }) rec {
fullName = "Do What The F*ck You Want To Public License";
};
wxWindows = spdx {
spdxId = "WXwindows";
fullName = "wxWindows Library Licence, Version 3.1";
};
zlib = spdx {
spdxId = "Zlib";
fullName = "zlib License";

View file

@ -191,7 +191,7 @@ rec {
*/
optional = cond: elem: if cond then [elem] else [];
/* Return a list or an empty list, dependening on a boolean value.
/* Return a list or an empty list, depending on a boolean value.
Example:
optionals true [ 2 3 ]

View file

@ -215,6 +215,7 @@
heel = "Sergii Paryzhskyi <parizhskiy@gmail.com>";
henrytill = "Henry Till <henrytill@gmail.com>";
hinton = "Tom Hinton <t@larkery.com>";
hodapp = "Chris Hodapp <hodapp87@gmail.com>";
hrdinka = "Christoph Hrdinka <c.nix@hrdinka.at>";
iand675 = "Ian Duncan <ian@iankduncan.com>";
ianwookim = "Ian-Woo Kim <ianwookim@gmail.com>";
@ -238,6 +239,7 @@
jgillich = "Jakob Gillich <jakob@gillich.me>";
jhhuh = "Ji-Haeng Huh <jhhuh.note@gmail.com>";
jirkamarsik = "Jirka Marsik <jiri.marsik89@gmail.com>";
jlesquembre = "José Luis Lafuente <jl@lafuente.me>";
joachifm = "Joachim Fasting <joachifm@fastmail.fm>";
joamaki = "Jussi Maki <joamaki@gmail.com>";
joelmo = "Joel Moberg <joel.moberg@gmail.com>";
@ -387,6 +389,7 @@
paholg = "Paho Lurie-Gregg <paho@paholg.com>";
pakhfn = "Fedor Pakhomov <pakhfn@gmail.com>";
palo = "Ingolf Wanger <palipalo9@googlemail.com>";
panaeon = "Vitalii Voloshyn <vitalii.voloshyn@gmail.com";
paperdigits = "Mica Semrick <mica@silentumbrella.com>";
pashev = "Igor Pashev <pashev.igor@gmail.com>";
patternspandemic = "Brad Christensen <patternspandemic@live.com>";
@ -451,7 +454,7 @@
romildo = "José Romildo Malaquias <malaquias@gmail.com>";
rongcuid = "Rongcui Dong <rongcuid@outlook.com>";
ronny = "Ronny Pfannschmidt <nixos@ronnypfannschmidt.de>";
rszibele = "Richard Szibele <richard_szibele@hotmail.com>";
rszibele = "Richard Szibele <richard@szibele.com>";
rtreffer = "Rene Treffer <treffer+nixos@measite.de>";
rushmorem = "Rushmore Mushambi <rushmore@webenchanter.com>";
rvl = "Rodney Lorrimar <dev+nix@rodney.id.au>";
@ -461,6 +464,7 @@
ryantm = "Ryan Mulligan <ryan@ryantm.com>";
rycee = "Robert Helgesson <robert@rycee.net>";
ryneeverett = "Ryne Everett <ryneeverett@gmail.com>";
rzetterberg = "Richard Zetterberg <richard.zetterberg@gmail.com>";
s1lvester = "Markus Silvester <s1lvester@bockhacker.me>";
samuelrivas = "Samuel Rivas <samuelrivas@gmail.com>";
sander = "Sander van der Burg <s.vanderburg@tudelft.nl>";
@ -477,6 +481,7 @@
shell = "Shell Turner <cam.turn@gmail.com>";
shlevy = "Shea Levy <shea@shealevy.com>";
siddharthist = "Langston Barrett <langston.barrett@gmail.com>";
sigma = "Yann Hodique <yann.hodique@gmail.com>";
simonvandel = "Simon Vandel Sillesen <simon.vandel@gmail.com>";
sjagoe = "Simon Jagoe <simon@simonjagoe.com>";
sjmackenzie = "Stewart Mackenzie <setori88@gmail.com>";
@ -497,6 +502,7 @@
sternenseemann = "Lukas Epple <post@lukasepple.de>";
stesie = "Stefan Siegl <stesie@brokenpipe.de>";
steveej = "Stefan Junker <mail@stefanjunker.de>";
SuprDewd = "Bjarki Ágúst Guðmundsson <suprdewd@gmail.com>";
swarren83 = "Shawn Warren <shawn.w.warren@gmail.com>";
swistak35 = "Rafał Łasocha <me@swistak35.com>";
szczyp = "Szczyp <qb@szczyp.com>";

View file

@ -423,7 +423,7 @@ rec {
in concatMap (def: if getPrio def == highestPrio then [(strip def)] else []) defs;
/* Sort a list of properties. The sort priority of a property is
1000 by default, but can be overriden by wrapping the property
1000 by default, but can be overridden by wrapping the property
using mkOrder. */
sortProperties = defs:
let

View file

@ -1,24 +0,0 @@
let lists = import ./lists.nix; in
rec {
all = linux ++ darwin ++ cygwin ++ freebsd ++ openbsd ++ netbsd ++ illumos;
allBut = platforms: lists.filter (x: !(builtins.elem x platforms)) all;
none = [];
arm = ["armv5tel-linux" "armv6l-linux" "armv7l-linux" ];
i686 = ["i686-linux" "i686-freebsd" "i686-netbsd" "i686-cygwin"];
mips = [ "mips64el-linux" ];
x86_64 = ["x86_64-linux" "x86_64-darwin" "x86_64-freebsd" "x86_64-openbsd" "x86_64-netbsd" "x86_64-cygwin"];
cygwin = ["i686-cygwin" "x86_64-cygwin"];
darwin = ["x86_64-darwin"];
freebsd = ["i686-freebsd" "x86_64-freebsd"];
gnu = linux; /* ++ hurd ++ kfreebsd ++ ... */
illumos = ["x86_64-solaris"];
linux = ["i686-linux" "x86_64-linux" "armv5tel-linux" "armv6l-linux" "armv7l-linux" "aarch64-linux" "mips64el-linux"];
netbsd = ["i686-netbsd" "x86_64-netbsd"];
openbsd = ["i686-openbsd" "x86_64-openbsd"];
unix = linux ++ darwin ++ freebsd ++ openbsd ++ netbsd ++ illumos;
mesaPlatforms = ["i686-linux" "x86_64-linux" "x86_64-darwin" "armv5tel-linux" "armv6l-linux" "armv7l-linux" "aarch64-linux"];
}

View file

@ -126,8 +126,8 @@ rec {
*/
makePerlPath = makeSearchPathOutput "lib" "lib/perl5/site_perl";
/* Dependening on the boolean `cond', return either the given string
or the empty string. Useful to contatenate against a bigger string.
/* Depending on the boolean `cond', return either the given string
or the empty string. Useful to concatenate against a bigger string.
Example:
optionalString true "some-string"

View file

@ -1,126 +0,0 @@
# Define the list of system with their properties. Only systems tested for
# Nixpkgs are listed below
with import ./lists.nix;
with import ./types.nix;
with import ./attrsets.nix;
let
lib = import ./default.nix;
setTypes = type:
mapAttrs (name: value:
setType type ({inherit name;} // value)
);
in
rec {
isSignificantByte = isType "significant-byte";
significantBytes = setTypes "significant-byte" {
bigEndian = {};
littleEndian = {};
};
isCpuType = x: isType "cpu-type" x
&& elem x.bits [8 16 32 64 128]
&& (8 < x.bits -> isSignificantByte x.significantByte);
cpuTypes = with significantBytes;
setTypes "cpu-type" {
arm = { bits = 32; significantByte = littleEndian; };
armv5tel = { bits = 32; significantByte = littleEndian; };
armv7l = { bits = 32; significantByte = littleEndian; };
i686 = { bits = 32; significantByte = littleEndian; };
powerpc = { bits = 32; significantByte = bigEndian; };
x86_64 = { bits = 64; significantByte = littleEndian; };
};
isExecFormat = isType "exec-format";
execFormats = setTypes "exec-format" {
aout = {}; # a.out
elf = {};
macho = {};
pe = {};
unknow = {};
};
isKernel = isType "kernel";
kernels = with execFormats;
setTypes "kernel" {
cygwin = { execFormat = pe; };
darwin = { execFormat = macho; };
freebsd = { execFormat = elf; };
linux = { execFormat = elf; };
netbsd = { execFormat = elf; };
none = { execFormat = unknow; };
openbsd = { execFormat = elf; };
win32 = { execFormat = pe; };
};
isArchitecture = isType "architecture";
architectures = setTypes "architecture" {
apple = {};
pc = {};
unknow = {};
};
isSystem = x: isType "system" x
&& isCpuType x.cpu
&& isArchitecture x.arch
&& isKernel x.kernel;
mkSystem = {
cpu ? cpuTypes.i686,
arch ? architectures.pc,
kernel ? kernels.linux,
name ? "${cpu.name}-${arch.name}-${kernel.name}"
}: setType "system" {
inherit name cpu arch kernel;
};
is64Bit = matchAttrs { cpu = { bits = 64; }; };
isDarwin = matchAttrs { kernel = kernels.darwin; };
isi686 = matchAttrs { cpu = cpuTypes.i686; };
isLinux = matchAttrs { kernel = kernels.linux; };
# This should revert the job done by config.guess from the gcc compiler.
mkSystemFromString = s: let
l = lib.splitString "-" s;
getCpu = name:
attrByPath [name] (throw "Unknow cpuType `${name}'.")
cpuTypes;
getArch = name:
attrByPath [name] (throw "Unknow architecture `${name}'.")
architectures;
getKernel = name:
attrByPath [name] (throw "Unknow kernel `${name}'.")
kernels;
system =
if builtins.length l == 2 then
mkSystem rec {
name = s;
cpu = getCpu (head l);
arch =
if isDarwin system
then architectures.apple
else architectures.pc;
kernel = getKernel (head (tail l));
}
else
mkSystem {
name = s;
cpu = getCpu (head l);
arch = getArch (head (tail l));
kernel = getKernel (head (tail (tail l)));
};
in assert isSystem system; system;
}

23
lib/systems/default.nix Normal file
View file

@ -0,0 +1,23 @@
rec {
doubles = import ./doubles.nix;
parse = import ./parse.nix;
platforms = import ./platforms.nix;
# Elaborate a `localSystem` or `crossSystem` so that it contains everything
# necessary.
#
# `parsed` is inferred from args, both because there are two options with one
# clearly preferred, and to prevent cycles. A simpler fixed point where the RHS
# always just used `final.*` would fail on both counts.
elaborate = args: let
final = {
# Prefer to parse `config` as it is strictly more informative.
parsed = parse.mkSystemFromString (if args ? config then args.config else args.system);
# Either of these can be losslessly-extracted from `parsed` iff parsing succeeds.
system = parse.doubleFromSystem final.parsed;
config = parse.tripleFromSystem final.parsed;
# Just a guess, based on `system`
platform = platforms.selectBySystem final.system;
} // args;
in final;
}
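A hedged usage sketch (the import path assumes evaluation from the root of a nixpkgs checkout):

let systems = import ./lib/systems; in
systems.elaborate { system = "x86_64-linux"; }
# => { system   = "x86_64-linux";
#      config   = "x86_64-unknown-linux-gnu";
#      parsed   = <structured triple from parse.nix>;
#      platform = <the pc64 entry from platforms.nix>; }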

44
lib/systems/doubles.nix Normal file
View file

@ -0,0 +1,44 @@
let lists = import ../lists.nix; in
let parse = import ./parse.nix; in
let inherit (import ../attrsets.nix) matchAttrs; in
let
all = [
"aarch64-linux"
"armv5tel-linux" "armv6l-linux" "armv7l-linux"
"mips64el-linux"
"i686-cygwin" "i686-freebsd" "i686-linux" "i686-netbsd" "i686-openbsd"
"x86_64-cygwin" "x86_64-darwin" "x86_64-freebsd" "x86_64-linux"
"x86_64-netbsd" "x86_64-openbsd" "x86_64-solaris"
];
allParsed = map parse.mkSystemFromString all;
filterDoubles = f: map parse.doubleFromSystem (lists.filter f allParsed);
in rec {
inherit all;
allBut = platforms: lists.filter (x: !(builtins.elem x platforms)) all;
none = [];
arm = filterDoubles (matchAttrs { cpu = { family = "arm"; bits = 32; }; });
i686 = filterDoubles parse.isi686;
mips = filterDoubles (matchAttrs { cpu = { family = "mips"; }; });
x86_64 = filterDoubles parse.isx86_64;
cygwin = filterDoubles parse.isCygwin;
darwin = filterDoubles parse.isDarwin;
freebsd = filterDoubles (matchAttrs { kernel = parse.kernels.freebsd; });
gnu = filterDoubles (matchAttrs { kernel = parse.kernels.linux; abi = parse.abis.gnu; }); # Should be better
illumos = filterDoubles (matchAttrs { kernel = parse.kernels.solaris; });
linux = filterDoubles parse.isLinux;
netbsd = filterDoubles (matchAttrs { kernel = parse.kernels.netbsd; });
openbsd = filterDoubles (matchAttrs { kernel = parse.kernels.openbsd; });
unix = filterDoubles parse.isUnix;
mesaPlatforms = ["i686-linux" "x86_64-linux" "x86_64-darwin" "armv5tel-linux" "armv6l-linux" "armv7l-linux" "aarch64-linux"];
}
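A hypothetical sketch of consuming these lists, e.g. when restricting a package's supported platforms:

let doubles = import ./lib/systems/doubles.nix; in
{
  # Every supported double except Darwin:
  notDarwin = doubles.allBut doubles.darwin;
  # The Linux doubles, same as the old lib.platforms.linux:
  onLinux   = doubles.linux;
}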

181
lib/systems/parse.nix Normal file
View file

@ -0,0 +1,181 @@
# Define the list of systems with their properties.
#
# See https://clang.llvm.org/docs/CrossCompilation.html and
# http://llvm.org/docs/doxygen/html/Triple_8cpp_source.html especially
# Triple::normalize. Parsing should essentially act as a more conservative
# version of that last function.
with import ../lists.nix;
with import ../types.nix;
with import ../attrsets.nix;
let
lib = import ../default.nix;
setTypesAssert = type: pred:
mapAttrs (name: value:
assert pred value;
setType type ({ inherit name; } // value));
setTypes = type: setTypesAssert type (_: true);
in
rec {
isSignificantByte = isType "significant-byte";
significantBytes = setTypes "significant-byte" {
bigEndian = {};
littleEndian = {};
};
isCpuType = isType "cpu-type";
cpuTypes = with significantBytes; setTypesAssert "cpu-type"
(x: elem x.bits [8 16 32 64 128]
&& (if 8 < x.bits
then isSignificantByte x.significantByte
else !(x ? significantByte)))
{
arm = { bits = 32; significantByte = littleEndian; family = "arm"; };
armv5tel = { bits = 32; significantByte = littleEndian; family = "arm"; };
armv6l = { bits = 32; significantByte = littleEndian; family = "arm"; };
armv7a = { bits = 32; significantByte = littleEndian; family = "arm"; };
armv7l = { bits = 32; significantByte = littleEndian; family = "arm"; };
aarch64 = { bits = 64; significantByte = littleEndian; family = "arm"; };
i686 = { bits = 32; significantByte = littleEndian; family = "x86"; };
x86_64 = { bits = 64; significantByte = littleEndian; family = "x86"; };
mips64el = { bits = 32; significantByte = littleEndian; family = "mips"; };
powerpc = { bits = 32; significantByte = bigEndian; family = "powerpc"; };
};
isVendor = isType "vendor";
vendors = setTypes "vendor" {
apple = {};
pc = {};
unknown = {};
};
isExecFormat = isType "exec-format";
execFormats = setTypes "exec-format" {
aout = {}; # a.out
elf = {};
macho = {};
pe = {};
unknown = {};
};
isKernelFamily = isType "kernel-family";
kernelFamilies = setTypes "kernel-family" {
bsd = {};
unix = {};
};
isKernel = x: isType "kernel" x;
kernels = with execFormats; with kernelFamilies; setTypesAssert "kernel"
(x: isExecFormat x.execFormat && all isKernelFamily (attrValues x.families))
{
darwin = { execFormat = macho; families = { inherit unix; }; };
freebsd = { execFormat = elf; families = { inherit unix bsd; }; };
linux = { execFormat = elf; families = { inherit unix; }; };
netbsd = { execFormat = elf; families = { inherit unix bsd; }; };
none = { execFormat = unknown; families = { inherit unix; }; };
openbsd = { execFormat = elf; families = { inherit unix bsd; }; };
solaris = { execFormat = elf; families = { inherit unix; }; };
windows = { execFormat = pe; families = { }; };
} // { # aliases
win32 = kernels.windows;
};
isAbi = isType "abi";
abis = setTypes "abi" {
cygnus = {};
gnu = {};
msvc = {};
eabi = {};
androideabi = {};
gnueabi = {};
gnueabihf = {};
unknown = {};
};
isSystem = isType "system";
mkSystem = { cpu, vendor, kernel, abi }:
assert isCpuType cpu && isVendor vendor && isKernel kernel && isAbi abi;
setType "system" {
inherit cpu vendor kernel abi;
};
is64Bit = matchAttrs { cpu = { bits = 64; }; };
is32Bit = matchAttrs { cpu = { bits = 32; }; };
isi686 = matchAttrs { cpu = cpuTypes.i686; };
isx86_64 = matchAttrs { cpu = cpuTypes.x86_64; };
isDarwin = matchAttrs { kernel = kernels.darwin; };
isLinux = matchAttrs { kernel = kernels.linux; };
isUnix = matchAttrs { kernel = { families = { inherit (kernelFamilies) unix; }; }; };
isWindows = matchAttrs { kernel = kernels.windows; };
isCygwin = matchAttrs { kernel = kernels.windows; abi = abis.cygnus; };
isMinGW = matchAttrs { kernel = kernels.windows; abi = abis.gnu; };
mkSkeletonFromList = l: {
"2" = # We only do 2-part hacks for things Nix already supports
if elemAt l 1 == "cygwin"
then { cpu = elemAt l 0; kernel = "windows"; abi = "cygnus"; }
else { cpu = elemAt l 0; kernel = elemAt l 1; };
"3" = # Awkwards hacks, beware!
if elemAt l 1 == "apple"
then { cpu = elemAt l 0; vendor = "apple"; kernel = elemAt l 2; }
else if (elemAt l 1 == "linux") || (elemAt l 2 == "gnu")
then { cpu = elemAt l 0; kernel = elemAt l 1; abi = elemAt l 2; }
else if (elemAt l 2 == "mingw32") # autotools breaks on -gnu for windows
then { cpu = elemAt l 0; vendor = elemAt l 1; kernel = "windows"; abi = "gnu"; }
else throw "Target specification with 3 components is ambiguous";
"4" = { cpu = elemAt l 0; vendor = elemAt l 1; kernel = elemAt l 2; abi = elemAt l 3; };
}.${toString (length l)}
or (throw "system string has invalid number of hyphen-separated components");
# This should revert the job done by config.guess from the gcc compiler.
mkSystemFromSkeleton = { cpu
, # Optional, but fallback too complex for here.
# Inferred below instead.
vendor ? assert false; null
, kernel
, # Also inferred below
abi ? assert false; null
} @ args: let
getCpu = name: cpuTypes.${name} or (throw "Unknown CPU type: ${name}");
getVendor = name: vendors.${name} or (throw "Unknown vendor: ${name}");
getKernel = name: kernels.${name} or (throw "Unknown kernel: ${name}");
getAbi = name: abis.${name} or (throw "Unknown ABI: ${name}");
system = rec {
cpu = getCpu args.cpu;
vendor =
/**/ if args ? vendor then getVendor args.vendor
else if isDarwin system then vendors.apple
else if isWindows system then vendors.pc
else vendors.unknown;
kernel = getKernel args.kernel;
abi =
/**/ if args ? abi then getAbi args.abi
else if isLinux system then abis.gnu
else if isWindows system then abis.gnu
else abis.unknown;
};
in mkSystem system;
mkSystemFromString = s: mkSystemFromSkeleton (mkSkeletonFromList (lib.splitString "-" s));
doubleFromSystem = { cpu, vendor, kernel, abi, ... }:
if kernel == kernels.windows && abi == abis.cygnus
then "${cpu.name}-cygwin"
else "${cpu.name}-${kernel.name}";
tripleFromSystem = { cpu, vendor, kernel, abi, ... } @ sys: assert isSystem sys; let
optAbi = lib.optionalString (abi != abis.unknown) "-${abi.name}";
in "${cpu.name}-${vendor.name}-${kernel.name}${optAbi}";
}
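A hedged round-trip sketch of the parser (path relative to a nixpkgs checkout):

let parse = import ./lib/systems/parse.nix; in
rec {
  sys    = parse.mkSystemFromString "x86_64-unknown-linux-gnu";
  triple = parse.tripleFromSystem sys;   # "x86_64-unknown-linux-gnu"
  double = parse.doubleFromSystem sys;   # "x86_64-linux"
  linux  = parse.isLinux sys;            # true
  bits64 = parse.is64Bit sys;            # true
}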

486
lib/systems/platforms.nix Normal file
View file

@ -0,0 +1,486 @@
rec {
pcBase = {
name = "pc";
uboot = null;
kernelHeadersBaseConfig = "defconfig";
kernelBaseConfig = "defconfig";
# Build whatever possible as a module, if not stated in the extra config.
kernelAutoModules = true;
kernelTarget = "bzImage";
};
pc64 = pcBase // { kernelArch = "x86_64"; };
pc32 = pcBase // { kernelArch = "i386"; };
pc32_simplekernel = pc32 // {
kernelAutoModules = false;
};
pc64_simplekernel = pc64 // {
kernelAutoModules = false;
};
sheevaplug = {
name = "sheevaplug";
kernelMajor = "2.6";
kernelHeadersBaseConfig = "multi_v5_defconfig";
kernelBaseConfig = "multi_v5_defconfig";
kernelArch = "arm";
kernelAutoModules = false;
kernelExtraConfig = ''
BLK_DEV_RAM y
BLK_DEV_INITRD y
BLK_DEV_CRYPTOLOOP m
BLK_DEV_DM m
DM_CRYPT m
MD y
REISERFS_FS m
BTRFS_FS m
XFS_FS m
JFS_FS m
EXT4_FS m
USB_STORAGE_CYPRESS_ATACB m
# mv cesa requires this sw fallback, for mv-sha1
CRYPTO_SHA1 y
# Fast crypto
CRYPTO_TWOFISH y
CRYPTO_TWOFISH_COMMON y
CRYPTO_BLOWFISH y
CRYPTO_BLOWFISH_COMMON y
IP_PNP y
IP_PNP_DHCP y
NFS_FS y
ROOT_NFS y
TUN m
NFS_V4 y
NFS_V4_1 y
NFS_FSCACHE y
NFSD m
NFSD_V2_ACL y
NFSD_V3 y
NFSD_V3_ACL y
NFSD_V4 y
NETFILTER y
IP_NF_IPTABLES y
IP_NF_FILTER y
IP_NF_MATCH_ADDRTYPE y
IP_NF_TARGET_LOG y
IP_NF_MANGLE y
IPV6 m
VLAN_8021Q m
CIFS y
CIFS_XATTR y
CIFS_POSIX y
CIFS_FSCACHE y
CIFS_ACL y
WATCHDOG y
WATCHDOG_CORE y
ORION_WATCHDOG m
ZRAM m
NETCONSOLE m
# Disable OABI to have seccomp_filter (required for systemd)
# https://github.com/raspberrypi/firmware/issues/651
OABI_COMPAT n
# Fail to build
DRM n
SCSI_ADVANSYS n
USB_ISP1362_HCD n
SND_SOC n
SND_ALI5451 n
FB_SAVAGE n
SCSI_NSP32 n
ATA_SFF n
SUNGEM n
IRDA n
ATM_HE n
SCSI_ACARD n
BLK_DEV_CMD640_ENHANCED n
FUSE_FS m
# systemd uses cgroups
CGROUPS y
# Latencytop
LATENCYTOP y
# Ubi for the mtd
MTD_UBI y
UBIFS_FS y
UBIFS_FS_XATTR y
UBIFS_FS_ADVANCED_COMPR y
UBIFS_FS_LZO y
UBIFS_FS_ZLIB y
UBIFS_FS_DEBUG n
# Kdb, for kernel troubles
KGDB y
KGDB_SERIAL_CONSOLE y
KGDB_KDB y
'';
kernelMakeFlags = [ "LOADADDR=0x0200000" ];
kernelTarget = "uImage";
uboot = "sheevaplug";
# Only for uboot = uboot :
ubootConfig = "sheevaplug_config";
kernelDTB = true; # Beyond 3.10
gcc = {
arch = "armv5te";
float = "soft";
};
};
raspberrypi = {
name = "raspberrypi";
kernelMajor = "2.6";
kernelHeadersBaseConfig = "bcm2835_defconfig";
kernelBaseConfig = "bcmrpi_defconfig";
kernelDTB = true;
kernelArch = "arm";
kernelAutoModules = false;
kernelExtraConfig = ''
BLK_DEV_RAM y
BLK_DEV_INITRD y
BLK_DEV_CRYPTOLOOP m
BLK_DEV_DM m
DM_CRYPT m
MD y
REISERFS_FS m
BTRFS_FS y
XFS_FS m
JFS_FS y
EXT4_FS y
IP_PNP y
IP_PNP_DHCP y
NFS_FS y
ROOT_NFS y
TUN m
NFS_V4 y
NFS_V4_1 y
NFS_FSCACHE y
NFSD m
NFSD_V2_ACL y
NFSD_V3 y
NFSD_V3_ACL y
NFSD_V4 y
NETFILTER y
IP_NF_IPTABLES y
IP_NF_FILTER y
IP_NF_MATCH_ADDRTYPE y
IP_NF_TARGET_LOG y
IP_NF_MANGLE y
IPV6 m
VLAN_8021Q m
CIFS y
CIFS_XATTR y
CIFS_POSIX y
CIFS_FSCACHE y
CIFS_ACL y
ZRAM m
# Disable OABI to have seccomp_filter (required for systemd)
# https://github.com/raspberrypi/firmware/issues/651
OABI_COMPAT n
# Fail to build
DRM n
SCSI_ADVANSYS n
USB_ISP1362_HCD n
SND_SOC n
SND_ALI5451 n
FB_SAVAGE n
SCSI_NSP32 n
ATA_SFF n
SUNGEM n
IRDA n
ATM_HE n
SCSI_ACARD n
BLK_DEV_CMD640_ENHANCED n
FUSE_FS m
# nixos mounts some cgroup
CGROUPS y
# Latencytop
LATENCYTOP y
'';
kernelTarget = "zImage";
uboot = null;
gcc = {
arch = "armv6";
fpu = "vfp";
float = "hard";
};
};
raspberrypi2 = armv7l-hf-multiplatform // {
name = "raspberrypi2";
kernelBaseConfig = "bcm2709_defconfig";
kernelDTB = true;
kernelAutoModules = false;
kernelExtraConfig = ''
BLK_DEV_RAM y
BLK_DEV_INITRD y
BLK_DEV_CRYPTOLOOP m
BLK_DEV_DM m
DM_CRYPT m
MD y
REISERFS_FS m
BTRFS_FS y
XFS_FS m
JFS_FS y
EXT4_FS y
IP_PNP y
IP_PNP_DHCP y
NFS_FS y
ROOT_NFS y
TUN m
NFS_V4 y
NFS_V4_1 y
NFS_FSCACHE y
NFSD m
NFSD_V2_ACL y
NFSD_V3 y
NFSD_V3_ACL y
NFSD_V4 y
NETFILTER y
IP_NF_IPTABLES y
IP_NF_FILTER y
IP_NF_MATCH_ADDRTYPE y
IP_NF_TARGET_LOG y
IP_NF_MANGLE y
IPV6 m
VLAN_8021Q m
CIFS y
CIFS_XATTR y
CIFS_POSIX y
CIFS_FSCACHE y
CIFS_ACL y
ZRAM m
# Disable OABI to have seccomp_filter (required for systemd)
# https://github.com/raspberrypi/firmware/issues/651
OABI_COMPAT n
# Fail to build
DRM n
SCSI_ADVANSYS n
USB_ISP1362_HCD n
SND_SOC n
SND_ALI5451 n
FB_SAVAGE n
SCSI_NSP32 n
ATA_SFF n
SUNGEM n
IRDA n
ATM_HE n
SCSI_ACARD n
BLK_DEV_CMD640_ENHANCED n
FUSE_FS m
# nixos mounts some cgroup
CGROUPS y
# Latencytop
LATENCYTOP y
# Disable the common config Xen, it doesn't build on ARM
XEN? n
'';
kernelTarget = "zImage";
uboot = null;
};
guruplug = sheevaplug // {
# Define `CONFIG_MACH_GURUPLUG' (see
# <http://kerneltrap.org/mailarchive/git-commits-head/2010/5/19/33618>)
# and other GuruPlug-specific things. Requires the `guruplug-defconfig'
# patch.
kernelBaseConfig = "guruplug_defconfig";
#kernelHeadersBaseConfig = "guruplug_defconfig";
};
fuloong2f_n32 = {
name = "fuloong2f_n32";
kernelMajor = "2.6";
kernelHeadersBaseConfig = "fuloong2e_defconfig";
kernelBaseConfig = "lemote2f_defconfig";
kernelArch = "mips";
kernelAutoModules = false;
kernelExtraConfig = ''
MIGRATION n
COMPACTION n
# nixos mounts some cgroup
CGROUPS y
BLK_DEV_RAM y
BLK_DEV_INITRD y
BLK_DEV_CRYPTOLOOP m
BLK_DEV_DM m
DM_CRYPT m
MD y
REISERFS_FS m
EXT4_FS m
USB_STORAGE_CYPRESS_ATACB m
IP_PNP y
IP_PNP_DHCP y
IP_PNP_BOOTP y
NFS_FS y
ROOT_NFS y
TUN m
NFS_V4 y
NFS_V4_1 y
NFS_FSCACHE y
NFSD m
NFSD_V2_ACL y
NFSD_V3 y
NFSD_V3_ACL y
NFSD_V4 y
# Fail to build
DRM n
SCSI_ADVANSYS n
USB_ISP1362_HCD n
SND_SOC n
SND_ALI5451 n
FB_SAVAGE n
SCSI_NSP32 n
ATA_SFF n
SUNGEM n
IRDA n
ATM_HE n
SCSI_ACARD n
BLK_DEV_CMD640_ENHANCED n
FUSE_FS m
# Needed for udev >= 150
SYSFS_DEPRECATED_V2 n
VGA_CONSOLE n
VT_HW_CONSOLE_BINDING y
SERIAL_8250_CONSOLE y
FRAMEBUFFER_CONSOLE y
EXT2_FS y
EXT3_FS y
REISERFS_FS y
MAGIC_SYSRQ y
# The kernel doesn't boot at all, with FTRACE
FTRACE n
'';
kernelTarget = "vmlinux";
uboot = null;
gcc.arch = "loongson2f";
};
beaglebone = armv7l-hf-multiplatform // {
name = "beaglebone";
kernelBaseConfig = "omap2plus_defconfig";
kernelAutoModules = false;
kernelExtraConfig = ""; # TBD kernel config
kernelTarget = "zImage";
uboot = null;
};
armv7l-hf-multiplatform = {
name = "armv7l-hf-multiplatform";
kernelMajor = "2.6"; # Using "2.6" enables 2.6 kernel syscalls in glibc.
kernelHeadersBaseConfig = "multi_v7_defconfig";
kernelBaseConfig = "multi_v7_defconfig";
kernelArch = "arm";
kernelDTB = true;
kernelAutoModules = true;
kernelPreferBuiltin = true;
uboot = null;
kernelTarget = "zImage";
kernelExtraConfig = ''
# Fix broken sunxi-sid nvmem driver.
TI_CPTS y
# Hangs ODROID-XU4
ARM_BIG_LITTLE_CPUIDLE n
'';
gcc = {
# Some table about fpu flags:
# http://community.arm.com/servlet/JiveServlet/showImage/38-1981-3827/blogentry-103749-004812900+1365712953_thumb.png
# Cortex-A5: -mfpu=neon-fp16
# Cortex-A7 (rpi2): -mfpu=neon-vfpv4
# Cortex-A8 (beaglebone): -mfpu=neon
# Cortex-A9: -mfpu=neon-fp16
# Cortex-A15: -mfpu=neon-vfpv4
# More about FPU:
# https://wiki.debian.org/ArmHardFloatPort/VfpComparison
# vfpv3-d16 is what Debian uses and seems to be the best compromise: NEON is not supported in e.g. Scaleway or Tegra 2,
# and the above page suggests NEON is only an improvement with hand-written assembly.
arch = "armv7-a";
fpu = "vfpv3-d16";
float = "hard";
# For Raspberry Pi the 2 the best would be:
# cpu = "cortex-a7";
# fpu = "neon-vfpv4";
};
};
aarch64-multiplatform = {
name = "aarch64-multiplatform";
kernelMajor = "2.6"; # Using "2.6" enables 2.6 kernel syscalls in glibc.
kernelHeadersBaseConfig = "defconfig";
kernelBaseConfig = "defconfig";
kernelArch = "arm64";
kernelDTB = true;
kernelAutoModules = true;
kernelPreferBuiltin = true;
kernelExtraConfig = ''
# Raspberry Pi 3 stuff. Not needed for kernels >= 4.10.
ARCH_BCM2835 y
BCM2835_MBOX y
BCM2835_WDT y
RASPBERRYPI_FIRMWARE y
RASPBERRYPI_POWER y
SERIAL_8250_BCM2835AUX y
SERIAL_8250_EXTENDED y
SERIAL_8250_SHARE_IRQ y
# Cavium ThunderX stuff.
PCI_HOST_THUNDER_ECAM y
'';
uboot = null;
kernelTarget = "Image";
gcc = {
arch = "armv8-a";
};
};
selectBySystem = system: {
"i686-linux" = pc32;
"x86_64-linux" = pc64;
"armv5tel-linux" = sheevaplug;
"armv6l-linux" = raspberrypi;
"armv7l-linux" = armv7l-hf-multiplatform;
"aarch64-linux" = aarch64-multiplatform;
"mips64el-linux" = fuloong2f_n32;
}.${system} or pcBase;
}
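An illustrative lookup (note the fallback to pcBase for anything not listed):

let platforms = import ./lib/systems/platforms.nix; in
{
  rpi      = (platforms.selectBySystem "armv6l-linux").name;      # "raspberrypi"
  fallback = (platforms.selectBySystem "powerpc64le-linux").name; # "pc"
}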

View file

@ -231,7 +231,7 @@ runTests {
};
in {
expr = generators.toJSON {} val;
# trival implementation
# trivial implementation
expected = builtins.toJSON val;
};
@ -243,7 +243,7 @@ runTests {
};
in {
expr = generators.toYAML {} val;
# trival implementation
# trivial implementation
expected = builtins.toJSON val;
};
@ -277,4 +277,14 @@ runTests {
expected = [ "2001" "db8" "0" "0042" "" "8a2e" "370" "" ];
};
testComposeExtensions = {
expr = let obj = makeExtensible (self: { foo = self.bar; });
f = self: super: { bar = false; baz = true; };
g = self: super: { bar = super.baz or false; };
f_o_g = composeExtensions f g;
composed = obj.extend f_o_g;
in composed.foo;
expected = true;
};
}

View file

@ -1,31 +1,40 @@
{ nixpkgs }:
{ nixpkgs ? { outPath = (import ../.).cleanSource ../..; revCount = 1234; shortRev = "abcdef"; }
, # The platforms for which we build Nixpkgs.
supportedSystems ? [ builtins.currentSystem ]
, # Strip most of attributes when evaluating to spare memory usage
scrubJobs ? true
}:
with import ../.. { };
with import ../../pkgs/top-level/release-lib.nix { inherit supportedSystems scrubJobs; };
with lib;
stdenv.mkDerivation {
name = "nixpkgs-lib-tests";
buildInputs = [ nix ];
NIX_PATH="nixpkgs=${nixpkgs}";
{
systems = import ./systems.nix { inherit lib assertTrue; };
buildCommand = ''
datadir="${nix}/share"
export TEST_ROOT=$(pwd)/test-tmp
export NIX_BUILD_HOOK=
export NIX_CONF_DIR=$TEST_ROOT/etc
export NIX_DB_DIR=$TEST_ROOT/db
export NIX_LOCALSTATE_DIR=$TEST_ROOT/var
export NIX_LOG_DIR=$TEST_ROOT/var/log/nix
export NIX_MANIFESTS_DIR=$TEST_ROOT/var/nix/manifests
export NIX_STATE_DIR=$TEST_ROOT/var/nix
export NIX_STORE_DIR=$TEST_ROOT/store
export PAGER=cat
cacheDir=$TEST_ROOT/binary-cache
nix-store --init
moduleSystem = pkgs.stdenv.mkDerivation {
name = "nixpkgs-lib-tests";
buildInputs = [ pkgs.nix ];
NIX_PATH="nixpkgs=${nixpkgs}";
cd ${nixpkgs}/lib/tests
./modules.sh
buildCommand = ''
datadir="${pkgs.nix}/share"
export TEST_ROOT=$(pwd)/test-tmp
export NIX_BUILD_HOOK=
export NIX_CONF_DIR=$TEST_ROOT/etc
export NIX_DB_DIR=$TEST_ROOT/db
export NIX_LOCALSTATE_DIR=$TEST_ROOT/var
export NIX_LOG_DIR=$TEST_ROOT/var/log/nix
export NIX_MANIFESTS_DIR=$TEST_ROOT/var/nix/manifests
export NIX_STATE_DIR=$TEST_ROOT/var/nix
export NIX_STORE_DIR=$TEST_ROOT/store
export PAGER=cat
cacheDir=$TEST_ROOT/binary-cache
nix-store --init
touch $out
'';
cd ${nixpkgs}/lib/tests
./modules.sh
touch $out
'';
};
}

31
lib/tests/systems.nix Normal file
View file

@ -0,0 +1,31 @@
# We assert that the new algorithmic way of generating these lists matches the
# way they were hard-coded before.
#
# One might think "if we exhaustively test, what's the point of procedurally
# calculating the lists anyway?". The answer is one can mindlessly update these
# tests as new platforms become supported, and then just give the diff a quick
# sanity check before committing :).
{ lib, assertTrue }:
with lib.systems.doubles;
let mseteq = x: y: lib.sort lib.lessThan x == lib.sort lib.lessThan y; in
{
all = assertTrue (mseteq all (linux ++ darwin ++ cygwin ++ freebsd ++ openbsd ++ netbsd ++ illumos));
arm = assertTrue (mseteq arm [ "armv5tel-linux" "armv6l-linux" "armv7l-linux" ]);
i686 = assertTrue (mseteq i686 [ "i686-linux" "i686-freebsd" "i686-netbsd" "i686-openbsd" "i686-cygwin" ]);
mips = assertTrue (mseteq mips [ "mips64el-linux" ]);
x86_64 = assertTrue (mseteq x86_64 [ "x86_64-linux" "x86_64-darwin" "x86_64-freebsd" "x86_64-openbsd" "x86_64-netbsd" "x86_64-cygwin" "x86_64-solaris" ]);
cygwin = assertTrue (mseteq cygwin [ "i686-cygwin" "x86_64-cygwin" ]);
darwin = assertTrue (mseteq darwin [ "x86_64-darwin" ]);
freebsd = assertTrue (mseteq freebsd [ "i686-freebsd" "x86_64-freebsd" ]);
gnu = assertTrue (mseteq gnu (linux /* ++ hurd ++ kfreebsd ++ ... */));
illumos = assertTrue (mseteq illumos [ "x86_64-solaris" ]);
linux = assertTrue (mseteq linux [ "i686-linux" "x86_64-linux" "armv5tel-linux" "armv6l-linux" "armv7l-linux" "aarch64-linux" "mips64el-linux" ]);
netbsd = assertTrue (mseteq netbsd [ "i686-netbsd" "x86_64-netbsd" ]);
openbsd = assertTrue (mseteq openbsd [ "i686-openbsd" "x86_64-openbsd" ]);
unix = assertTrue (mseteq unix (linux ++ darwin ++ freebsd ++ openbsd ++ netbsd ++ illumos));
}

View file

@ -30,10 +30,15 @@ rec {
/* boolean and */
and = x: y: x && y;
/* Convert a boolean to a string.
Note that toString on a bool returns "1" for true and "" for false.
*/
boolToString = b: if b then "true" else "false";
/* Merge two attribute sets shallowly, right side trumps left
Example:
mergeAttrs { a = 1; b = 2; } // { b = 3; c = 4; }
mergeAttrs { a = 1; b = 2; } { b = 3; c = 4; }
=> { a = 1; b = 3; c = 4; }
*/
mergeAttrs = x: y: x // y;
@ -80,6 +85,15 @@ rec {
# argument, but it's nice this way if several uses of `extends` are cascaded.
extends = f: rattrs: self: let super = rattrs self; in super // f self super;
# Compose two extending functions of the type expected by 'extends'
# into one where changes made in the first are available in the
# 'super' of the second
composeExtensions =
f: g: self: super:
let fApplied = f self super;
super' = super // fApplied;
in fApplied // g self super';
# Create an overridable, recursive attribute set. For example:
#
# nix-repl> obj = makeExtensible (self: { })
@ -108,6 +122,9 @@ rec {
# Flip the order of the arguments of a binary function.
flip = f: a: b: f b a;
# Apply function if argument is non-null
mapNullable = f: a: if isNull a then a else f a;
# Pull in some builtins not included elsewhere.
inherit (builtins)
pathExists readFile isBool isFunction

View file

@ -52,7 +52,7 @@ rec {
{ # Human-readable representation of the type, should be equivalent to
# the type function name.
name
, # Description of the type, defined recursively by embedding the the wrapped type if any.
, # Description of the type, defined recursively by embedding the wrapped type if any.
description ? null
, # Function applied to each definition that should return true if
# its type-correct, false otherwise.

View file

@ -14,5 +14,5 @@ removeAttrs (import ../../pkgs/top-level/release.nix
supportedSystems = [ "x86_64-linux" ];
})
[ # Remove jobs whose evaluation depends on a writable Nix store.
"tarball" "unstable"
"tarball" "unstable" "darwin-tested"
]

View file

@ -29,8 +29,10 @@ line. For instance, to create a container that has
<literal>root</literal>:
<screen>
# nixos-container create foo --config 'services.openssh.enable = true; \
users.extraUsers.root.openssh.authorizedKeys.keys = ["ssh-dss AAAAB3N…"];'
# nixos-container create foo --config '
services.openssh.enable = true;
users.extraUsers.root.openssh.authorizedKeys.keys = ["ssh-dss AAAAB3N…"];
'
</screen>
</para>
@ -97,8 +99,11 @@ This will build and activate the new configuration. You can also
specify a new configuration on the command line:
<screen>
# nixos-container update foo --config 'services.httpd.enable = true; \
services.httpd.adminAddr = "foo@example.org";'
# nixos-container update foo --config '
services.httpd.enable = true;
services.httpd.adminAddr = "foo@example.org";
networking.firewall.allowedTCPPorts = [ 80 ];
'
# curl http://$(nixos-container show-ip foo)/
&lt;!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">…

View file

@ -35,6 +35,12 @@ or <literal>ext4</literal>, then its best to specify
<option>fsType</option> to ensure that the kernel module is
available.</para>
<note><para>System startup will fail if any of the filesystems fails to mount,
dropping you to the emergency shell.
You can make a mount asynchronous and non-critical by adding
<literal>options = [ "nofail" ];</literal>.
</para></note>
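<para>
As a sketch (the device label and mount point are illustrative), such a non-critical mount can be declared like this:
<programlisting>fileSystems."/data" = {
  device  = "/dev/disk/by-label/data";
  fsType  = "ext4";
  # Do not block or fail boot if this filesystem cannot be mounted:
  options = [ "nofail" ];
};</programlisting>
</para>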
<xi:include href="luks-file-systems.xml" />
</chapter>

View file

@ -34,6 +34,11 @@ ISO, copy its contents verbatim to your drive, then either:
in <link xlink:href="https://www.kernel.org/doc/Documentation/kernel-parameters.txt">
the kernel documentation</link> for more details).</para>
</listitem>
<listitem>
<para>If you want to load the contents of the ISO into RAM after booting
(so you can remove the stick after bootup), you can append the parameter
<literal>copytoram</literal> to the <literal>options</literal> field.</para>
</listitem>
</itemizedlist>
</para>

View file

@ -35,6 +35,8 @@ following incompatible changes:</para>
<itemizedlist>
<listitem>
<para>
Top-level <literal>idea</literal> package collection was renamed.
All JetBrains IDEs are now at <literal>jetbrains</literal>.
</para>
</listitem>
</itemizedlist>

View file

@ -33,42 +33,124 @@
, name ? "nixos-disk-image"
# This prevents errors while checking nix-store validity, see
# https://github.com/NixOS/nix/issues/1134
, fixValidity ? true
, format ? "raw"
}:
with lib;
pkgs.vmTools.runInLinuxVM (
let
# Copied from https://github.com/NixOS/nixpkgs/blob/master/nixos/modules/installer/cd-dvd/channel.nix
# TODO: factor out more cleanly
# Do not include these things:
# - The '.git' directory
# - Result symlinks from nix-build ('result', 'result-2', 'result-bin', ...)
# - VIM/Emacs swap/backup files ('.swp', '.swo', '.foo.swp', 'foo~', ...)
filterFn = path: type: let basename = baseNameOf (toString path); in
if type == "directory" then basename != ".git"
else if type == "symlink" then builtins.match "^result(|-.*)$" basename == null
else builtins.match "^((|\..*)\.sw[a-z]|.*~)$" basename == null;
nixpkgs = builtins.filterSource filterFn pkgs.path;
channelSources = pkgs.runCommand "nixos-${config.system.nixosVersion}" {} ''
mkdir -p $out
cp -prd ${nixpkgs} $out/nixos
chmod -R u+w $out/nixos
if [ ! -e $out/nixos/nixpkgs ]; then
ln -s . $out/nixos/nixpkgs
fi
rm -rf $out/nixos/.git
echo -n ${config.system.nixosVersionSuffix} > $out/nixos/.version-suffix
'';
metaClosure = pkgs.writeText "meta" ''
${config.system.build.toplevel}
${config.nix.package.out}
${channelSources}
'';
prepareImageInputs = with pkgs; [ rsync utillinux parted e2fsprogs lkl fakeroot config.system.build.nixos-prepare-root ] ++ stdenv.initialPath;
# I'm preserving the line below because I'm going to search for it across nixpkgs to consolidate
# image building logic. The comment right below this now appears in 4 different places in nixpkgs :)
# !!! should use XML.
sources = map (x: x.source) contents;
targets = map (x: x.target) contents;
prepareImage = ''
export PATH=${pkgs.lib.makeSearchPathOutput "bin" "bin" prepareImageInputs}
mkdir $out
diskImage=nixos.raw
truncate -s ${toString diskSize}M $diskImage
${if partitioned then ''
parted $diskImage -- mklabel msdos mkpart primary ext4 1M -1s
offset=$((2048*512))
'' else ''
offset=0
''}
mkfs.${fsType} -F -L nixos -E offset=$offset $diskImage
root="$PWD/root"
mkdir -p $root
# Copy arbitrary other files into the image
# Semi-shamelessly copied from make-etc.sh. I (@copumpkin) shall factor this stuff out as part of
# https://github.com/NixOS/nixpkgs/issues/23052.
set -f
sources_=(${concatStringsSep " " sources})
targets_=(${concatStringsSep " " targets})
set +f
for ((i = 0; i < ''${#targets_[@]}; i++)); do
source="''${sources_[$i]}"
target="''${targets_[$i]}"
if [[ "$source" =~ '*' ]]; then
# If the source name contains '*', perform globbing.
mkdir -p $root/$target
for fn in $source; do
rsync -a --no-o --no-g "$fn" $root/$target/
done
else
mkdir -p $root/$(dirname $target)
if ! [ -e $root/$target ]; then
rsync -a --no-o --no-g $source $root/$target
else
echo "duplicate entry $target -> $source"
exit 1
fi
fi
done
# TODO: Nix really likes to chown things it creates to its current user...
fakeroot nixos-prepare-root $root ${channelSources} ${config.system.build.toplevel} closure
echo "copying staging root to image..."
cptofs ${pkgs.lib.optionalString partitioned "-P 1"} -t ${fsType} -i $diskImage $root/* /
'';
in pkgs.vmTools.runInLinuxVM (
pkgs.runCommand name
{ preVM =
''
mkdir $out
diskImage=$out/nixos.${if format == "qcow2" then "qcow2" else "img"}
${pkgs.vmTools.qemu}/bin/qemu-img create -f ${format} $diskImage "${toString diskSize}M"
mv closure xchg/
'';
buildInputs = with pkgs; [ utillinux perl e2fsprogs parted rsync ];
# I'm preserving the line below because I'm going to search for it across nixpkgs to consolidate
# image building logic. The comment right below this now appears in 4 different places in nixpkgs :)
# !!! should use XML.
sources = map (x: x.source) contents;
targets = map (x: x.target) contents;
exportReferencesGraph =
[ "closure" config.system.build.toplevel ];
inherit postVM;
{ preVM = prepareImage;
buildInputs = with pkgs; [ utillinux e2fsprogs ];
exportReferencesGraph = [ "closure" metaClosure ];
postVM = ''
${if format == "raw" then ''
mv $diskImage $out/nixos.img
diskImage=$out/nixos.img
'' else ''
${pkgs.qemu}/bin/qemu-img convert -f raw -O qcow2 $diskImage $out/nixos.qcow2
diskImage=$out/nixos.qcow2
''}
${postVM}
'';
memSize = 1024;
}
''
${if partitioned then ''
# Create a single / partition.
parted /dev/vda mklabel msdos
parted /dev/vda -- mkpart primary ext2 1M -1s
. /sys/class/block/vda1/uevent
mknod /dev/vda1 b $MAJOR $MINOR
rootDisk=/dev/vda1
@ -76,74 +158,34 @@ pkgs.vmTools.runInLinuxVM (
rootDisk=/dev/vda
''}
# Create an empty filesystem and mount it.
mkfs.${fsType} -L nixos $rootDisk
mkdir /mnt
mount $rootDisk /mnt
# Register the paths in the Nix database.
printRegistration=1 perl ${pkgs.pathsFromGraph} /tmp/xchg/closure | \
${config.nix.package.out}/bin/nix-store --load-db --option build-users-group ""
${if fixValidity then ''
# Add missing size/hash fields to the database. FIXME:
# exportReferencesGraph should provide these directly.
${config.nix.package.out}/bin/nix-store --verify --check-contents --option build-users-group ""
'' else ""}
# In case the bootloader tries to write to /dev/sda…
# Some tools assume these exist
ln -s vda /dev/xvda
ln -s vda /dev/sda
# Install the closure onto the image
USER=root ${config.system.build.nixos-install}/bin/nixos-install \
--closure ${config.system.build.toplevel} \
--no-channel-copy \
--no-root-passwd \
${optionalString (!installBootLoader) "--no-bootloader"}
mountPoint=/mnt
mkdir $mountPoint
mount $rootDisk $mountPoint
# Install a configuration.nix.
# Install a configuration.nix
mkdir -p /mnt/etc/nixos
${optionalString (configFile != null) ''
cp ${configFile} /mnt/etc/nixos/configuration.nix
''}
# Remove /etc/machine-id so that each machine cloning this image will get its own id
rm -f /mnt/etc/machine-id
mount --rbind /dev $mountPoint/dev
mount --rbind /proc $mountPoint/proc
mount --rbind /sys $mountPoint/sys
# Copy arbitrary other files into the image
# Semi-shamelessly copied from make-etc.sh. I (@copumpkin) shall factor this stuff out as part of
# https://github.com/NixOS/nixpkgs/issues/23052.
set -f
sources_=($sources)
targets_=($targets)
set +f
# Set up core system link, GRUB, etc.
NIXOS_INSTALL_BOOTLOADER=1 chroot $mountPoint /nix/var/nix/profiles/system/bin/switch-to-configuration boot
for ((i = 0; i < ''${#targets_[@]}; i++)); do
source="''${sources_[$i]}"
target="''${targets_[$i]}"
# TODO: figure out if I should activate, but for now I won't
# chroot $mountPoint /nix/var/nix/profiles/system/activate
if [[ "$source" =~ '*' ]]; then
# The above scripts will generate a random machine-id and we don't want to bake a single ID into all our images
rm -f $mountPoint/etc/machine-id
# If the source name contains '*', perform globbing.
mkdir -p /mnt/$target
for fn in $source; do
rsync -a --no-o --no-g "$fn" /mnt/$target/
done
else
mkdir -p /mnt/$(dirname $target)
if ! [ -e /mnt/$target ]; then
rsync -a --no-o --no-g $source /mnt/$target
else
echo "duplicate entry $target -> $source"
exit 1
fi
fi
done
umount /mnt
umount -R /mnt
# Make sure resize2fs works. Note that resize2fs has stricter criteria for resizing than a normal
# mount, so the `-c 0` and `-i 0` don't affect it. Setting it to `now` doesn't produce deterministic

View file

@ -542,16 +542,20 @@ sub getScreenText {
$self->nest("performing optical character recognition", sub {
my $tmpbase = Cwd::abs_path(".")."/ocr";
my $tmpin = $tmpbase."in.ppm";
my $tmpout = "$tmpbase.ppm";
$self->sendMonitorCommand("screendump $tmpin");
system("ppmtopgm $tmpin | pamscale 4 -filter=lanczos > $tmpout") == 0
or die "cannot scale screenshot";
my $magickArgs = "-filter Catrom -density 72 -resample 300 "
. "-contrast -normalize -despeckle -type grayscale "
. "-sharpen 1 -posterize 3 -negate -gamma 100 "
. "-blur 1x65535";
my $tessArgs = "-c debug_file=/dev/null --psm 11 --oem 2";
$text = `convert $magickArgs $tmpin tiff:- | tesseract - - $tessArgs`;
my $status = $? >> 8;
unlink $tmpin;
system("tesseract $tmpout $tmpbase") == 0 or die "OCR failed";
unlink $tmpout;
$text = read_file("$tmpbase.txt");
unlink "$tmpbase.txt";
die "OCR failed with exit code $status" if $status != 0;
});
return $text;
}

View file

@ -93,7 +93,7 @@ rec {
vms = map (m: m.config.system.build.vm) (lib.attrValues nodes);
ocrProg = tesseract;
ocrProg = tesseract_4.override { enableLanguages = [ "eng" ]; };
# Generate convenience wrappers for running the test driver
# interactively with the specified network, and for starting the
@ -111,7 +111,8 @@ rec {
vms=($(for i in ${toString vms}; do echo $i/bin/run-*-vm; done))
wrapProgram $out/bin/nixos-test-driver \
--add-flags "''${vms[*]}" \
${lib.optionalString enableOCR "--prefix PATH : '${ocrProg}/bin'"} \
${lib.optionalString enableOCR
"--prefix PATH : '${ocrProg}/bin:${imagemagick}/bin'"} \
--run "testScript=\"\$(cat $out/test-script)\"" \
--set testScript '$testScript' \
--set VLANS '${toString vlans}'

View file

@ -6,10 +6,7 @@ let
cfg = config.amazonImage;
in {
imports =
[ ../../../modules/installer/cd-dvd/channel.nix
../../../modules/virtualisation/amazon-image.nix
];
imports = [ ../../../modules/virtualisation/amazon-image.nix ];
options.amazonImage = {
contents = mkOption {

View file

@ -1,15 +1,23 @@
#! /bin/sh -e
#!/usr/bin/env nix-shell
#! nix-shell -i bash -p google-cloud-sdk
BUCKET_NAME=${BUCKET_NAME:-nixos-images}
export NIX_PATH=nixpkgs=../../../..
export NIXOS_CONFIG=$(dirname $(readlink -f $0))/../../../modules/virtualisation/google-compute-image.nix
export TIMESTAMP=$(date +%Y%m%d%H%M)
set -euo pipefail
BUCKET_NAME="${BUCKET_NAME:-nixos-images}"
TIMESTAMP="$(date +%Y%m%d%H%M)"
export TIMESTAMP
nix-build '<nixpkgs/nixos>' \
-A config.system.build.googleComputeImage --argstr system x86_64-linux -o gce --option extra-binary-caches http://hydra.nixos.org -j 10
-A config.system.build.googleComputeImage \
--arg configuration "{ imports = [ <nixpkgs/nixos/modules/virtualisation/google-compute-image.nix> ]; }" \
--argstr system x86_64-linux \
-o gce \
-j 10
img=$(echo gce/*.tar.gz)
if ! gsutil ls gs://${BUCKET_NAME}/$(basename $img); then
gsutil cp $img gs://${BUCKET_NAME}/$(basename $img)
img_path=$(echo gce/*.tar.gz)
img_name=$(basename "$img_path")
img_id=$(echo "$img_name" | sed 's|.raw.tar.gz$||;s|\.|-|g;s|_|-|g')
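# Illustrative: a hypothetical "nixos-image-17.09pre-x86_64-linux.raw.tar.gz" becomes
# the image id "nixos-image-17-09pre-x86-64-linux" after the sed above.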
if ! gsutil ls "gs://${BUCKET_NAME}/$img_name"; then
gsutil cp "$img_path" "gs://${BUCKET_NAME}/$img_name"
fi
gcloud compute images create $(basename $img .raw.tar.gz | sed 's|\.|-|' | sed 's|_|-|') --source-uri gs://${BUCKET_NAME}/$(basename $img)
gcloud compute images create "$img_id" --source-uri "gs://${BUCKET_NAME}/$img_name"

View file

@ -5,7 +5,7 @@ with lib;
let
cfg = config.fonts.fontconfig;
fcBool = x: "<bool>" + (if x then "true" else "false") + "</bool>";
fcBool = x: "<bool>" + (boolToString x) + "</bool>";
# back-supported fontconfig version and package
# version is used for font cache generation
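Note: many hunks in this commit swap the inline conditional for lib.boolToString. A minimal sketch of that helper, assuming it simply mirrors the conditional it replaces:

    boolToString = b: if b then "true" else "false";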

View file

@ -20,7 +20,7 @@ with lib;
let cfg = config.fonts.fontconfig;
fcBool = x: "<bool>" + (if x then "true" else "false") + "</bool>";
fcBool = x: "<bool>" + (boolToString x) + "</bool>";
# back-supported fontconfig version and package
# version is used for font cache generation

View file

@ -2,21 +2,27 @@
with lib;
let
glibcLocales = pkgs.glibcLocales.override {
allLocales = any (x: x == "all") config.i18n.supportedLocales;
locales = config.i18n.supportedLocales;
};
in
{
###### interface
options = {
i18n = {
glibcLocales = mkOption {
type = types.path;
default = pkgs.glibcLocales.override {
allLocales = any (x: x == "all") config.i18n.supportedLocales;
locales = config.i18n.supportedLocales;
};
example = literalExample "pkgs.glibcLocales";
description = ''
Customized pkgs.glibcLocales package.
Changing this option can disable handling of i18n.defaultLocale
and supportedLocales.
'';
};
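# Illustrative override that pins an explicit locale list instead of deriving it
# from i18n.supportedLocales (the locale names are examples):
#   i18n.glibcLocales = pkgs.glibcLocales.override {
#     allLocales = false;
#     locales = [ "en_US.UTF-8/UTF-8" ];
#   };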
defaultLocale = mkOption {
type = types.str;
default = "en_US.UTF-8";
@ -118,7 +124,7 @@ in
'');
environment.systemPackages =
optional (config.i18n.supportedLocales != []) glibcLocales;
optional (config.i18n.supportedLocales != []) config.i18n.glibcLocales;
environment.sessionVariables =
{ LANG = config.i18n.defaultLocale;
@ -126,7 +132,7 @@ in
};
systemd.globalEnvironment = mkIf (config.i18n.supportedLocales != []) {
LOCALE_ARCHIVE = "${glibcLocales}/lib/locale/locale-archive";
LOCALE_ARCHIVE = "${config.i18n.glibcLocales}/lib/locale/locale-archive";
};
# /etc/locale.conf is used by systemd.

View file

@ -1,5 +1,5 @@
{
x86_64-linux = "/nix/store/j6q3pb75q1sbk0xsa5x6a629ph98ycdl-nix-1.11.8";
i686-linux = "/nix/store/4m6ps568l988bbr1p2k3w9raq3rblppi-nix-1.11.8";
x86_64-darwin = "/nix/store/cc5q944yn3j2hrs8k0kxx9r2mk9mni8a-nix-1.11.8";
x86_64-linux = "/nix/store/71im965h634iy99zsmlncw6qhx5jcclx-nix-1.11.9";
i686-linux = "/nix/store/cgvavixkayc36l6kl92i8mxr6k0p2yhy-nix-1.11.9";
x86_64-darwin = "/nix/store/w1c96v5yxvdmq4nvqlxjvg6kp7xa2lag-nix-1.11.9";
}

View file

@ -87,38 +87,6 @@ if ! test -e "$mountPoint"; then
exit 1
fi
# Mount some stuff in the target root directory.
mkdir -m 0755 -p $mountPoint/dev $mountPoint/proc $mountPoint/sys $mountPoint/etc $mountPoint/run $mountPoint/home
mkdir -m 01777 -p $mountPoint/tmp
mkdir -m 0755 -p $mountPoint/tmp/root
mkdir -m 0755 -p $mountPoint/var
mkdir -m 0700 -p $mountPoint/root
mount --rbind /dev $mountPoint/dev
mount --rbind /proc $mountPoint/proc
mount --rbind /sys $mountPoint/sys
mount --rbind / $mountPoint/tmp/root
mount -t tmpfs -o "mode=0755" none $mountPoint/run
rm -rf $mountPoint/var/run
ln -s /run $mountPoint/var/run
for f in /etc/resolv.conf /etc/hosts; do rm -f $mountPoint/$f; [ -f "$f" ] && cp -Lf $f $mountPoint/etc/; done
for f in /etc/passwd /etc/group; do touch $mountPoint/$f; [ -f "$f" ] && mount --rbind -o ro $f $mountPoint/$f; done
cp -Lf "@cacert@" "$mountPoint/tmp/ca-cert.crt"
export SSL_CERT_FILE=/tmp/ca-cert.crt
# For Nix 1.7
export CURL_CA_BUNDLE=/tmp/ca-cert.crt
if [ -n "$runChroot" ]; then
if ! [ -L $mountPoint/nix/var/nix/profiles/system ]; then
echo "$0: installation not finished; cannot chroot into installation directory"
exit 1
fi
ln -s /nix/var/nix/profiles/system $mountPoint/run/current-system
exec chroot $mountPoint "${chrootCommand[@]}"
fi
# Get the path of the NixOS configuration file.
if test -z "$NIXOS_CONFIG"; then
NIXOS_CONFIG=/etc/nixos/configuration.nix
@ -130,121 +98,60 @@ if [ ! -e "$mountPoint/$NIXOS_CONFIG" ] && [ -z "$closure" ]; then
fi
# Create the necessary Nix directories on the target device, if they
# don't already exist.
mkdir -m 0755 -p \
$mountPoint/nix/var/nix/gcroots \
$mountPoint/nix/var/nix/temproots \
$mountPoint/nix/var/nix/userpool \
$mountPoint/nix/var/nix/profiles \
$mountPoint/nix/var/nix/db \
$mountPoint/nix/var/log/nix/drvs
mkdir -m 1775 -p $mountPoint/nix/store
chown @root_uid@:@nixbld_gid@ $mountPoint/nix/store
# There is no daemon in the chroot.
unset NIX_REMOTE
# We don't have locale-archive in the chroot, so clear $LANG.
export LANG=
export LC_ALL=
export LC_TIME=
# Builds will use users that are members of this group
extraBuildFlags+=(--option "build-users-group" "$buildUsersGroup")
# Inherit binary caches from the host
# TODO: will this still work with Nix 1.12 now that it has no perl? Probably not...
binary_caches="$(@perl@/bin/perl -I @nix@/lib/perl5/site_perl/*/* -e 'use Nix::Config; Nix::Config::readConfig; print $Nix::Config::config{"binary-caches"};')"
extraBuildFlags+=(--option "binary-caches" "$binary_caches")
nixpkgs="$(readlink -f "$(nix-instantiate --find-file nixpkgs)")"
export NIX_PATH="nixpkgs=$nixpkgs:nixos-config=$mountPoint/$NIXOS_CONFIG"
unset NIXOS_CONFIG
# Copy Nix to the Nix store on the target device, unless it's already there.
if ! NIX_DB_DIR=$mountPoint/nix/var/nix/db nix-store --check-validity @nix@ 2> /dev/null; then
echo "copying Nix to $mountPoint...."
for i in $(@perl@/bin/perl @pathsFromGraph@ @nixClosure@); do
echo " $i"
chattr -R -i $mountPoint/$i 2> /dev/null || true # clear immutable bit
@rsync@/bin/rsync -a $i $mountPoint/nix/store/
done
# Register the paths in the Nix closure as valid. This is necessary
# to prevent them from being deleted the first time we install
# something. (I.e., Nix will see that, e.g., the glibc path is not
# valid, delete it to get it out of the way, but as a result nothing
# will work anymore.)
chroot $mountPoint @nix@/bin/nix-store --register-validity < @nixClosure@
fi
# TODO: do I need to set NIX_SUBSTITUTERS here or is the --option binary-caches above enough?
# Create the required /bin/sh symlink; otherwise lots of things
# (notably the system() function) won't work.
mkdir -m 0755 -p $mountPoint/bin
# !!! assuming that @shell@ is in the closure
ln -sf @shell@ $mountPoint/bin/sh
# A place to drop temporary closures
trap "rm -rf $tmpdir" EXIT
tmpdir="$(mktemp -d)"
# Build a closure (on the host; we then copy it into the guest)
function closure() {
nix-build "${extraBuildFlags[@]}" --no-out-link -E "with import <nixpkgs> {}; runCommand \"closure\" { exportReferencesGraph = [ \"x\" (buildEnv { name = \"env\"; paths = [ ($1) stdenv ]; }) ]; } \"cp x \$out\""
}
# Build hooks likely won't function correctly in the minimal chroot; just disable them.
unset NIX_BUILD_HOOK
# Make the build below copy paths from the CD if possible. Note that
# /tmp/root in the chroot is the root of the CD.
export NIX_OTHER_STORES=/tmp/root/nix:$NIX_OTHER_STORES
p=@nix@/libexec/nix/substituters
export NIX_SUBSTITUTERS=$p/copy-from-other-stores.pl:$p/download-from-binary-cache.pl
system_closure="$tmpdir/system.closure"
if [ -z "$closure" ]; then
# Get the absolute path to the NixOS/Nixpkgs sources.
nixpkgs="$(readlink -f $(nix-instantiate --find-file nixpkgs))"
nixEnvAction="-f <nixpkgs/nixos> --set -A system"
expr="(import <nixpkgs/nixos> {}).system"
system_root="$(nix-build -E "$expr")"
system_closure="$(closure "$expr")"
else
nixpkgs=""
nixEnvAction="--set $closure"
system_root=$closure
# Create a temporary file ending in .closure (so nixos-prepare-root knows to --import it) to transport the store closure
# to the filesystem we're preparing. Also delete it on exit!
nix-store --export $(nix-store -qR $closure) > $system_closure
fi
# Build the specified Nix expression in the target store and install
# it into the system configuration profile.
echo "building the system configuration..."
NIX_PATH="nixpkgs=/tmp/root/$nixpkgs:nixos-config=$NIXOS_CONFIG" NIXOS_CONFIG= \
chroot $mountPoint @nix@/bin/nix-env \
"${extraBuildFlags[@]}" -p /nix/var/nix/profiles/system $nixEnvAction
channel_root="$(nix-env -p /nix/var/nix/profiles/per-user/root/channels -q nixos --no-name --out-path 2>/dev/null || echo -n "")"
channel_closure="$tmpdir/channel.closure"
nix-store --export $channel_root > $channel_closure
# Populate the target root directory with the basics
@prepare_root@/bin/nixos-prepare-root $mountPoint $channel_root $system_root @nixClosure@ $system_closure $channel_closure
# Copy the NixOS/Nixpkgs sources to the target as the initial contents
# of the NixOS channel.
mkdir -m 0755 -p $mountPoint/nix/var/nix/profiles
mkdir -m 1777 -p $mountPoint/nix/var/nix/profiles/per-user
mkdir -m 0755 -p $mountPoint/nix/var/nix/profiles/per-user/root
srcs=$(nix-env "${extraBuildFlags[@]}" -p /nix/var/nix/profiles/per-user/root/channels -q nixos --no-name --out-path 2>/dev/null || echo -n "")
if [ -z "$noChannelCopy" ] && [ -n "$srcs" ]; then
echo "copying NixOS/Nixpkgs sources..."
chroot $mountPoint @nix@/bin/nix-env \
"${extraBuildFlags[@]}" -p /nix/var/nix/profiles/per-user/root/channels -i "$srcs" --quiet
fi
mkdir -m 0700 -p $mountPoint/root/.nix-defexpr
ln -sfn /nix/var/nix/profiles/per-user/root/channels $mountPoint/root/.nix-defexpr/channels
# Get rid of the /etc bind mounts.
for f in /etc/passwd /etc/group; do [ -f "$f" ] && umount $mountPoint/$f; done
# nixos-prepare-root doesn't currently do anything with file ownership, so we set it up here instead
chown @root_uid@:@nixbld_gid@ $mountPoint/nix/store
mount --rbind /dev $mountPoint/dev
mount --rbind /proc $mountPoint/proc
mount --rbind /sys $mountPoint/sys
# Grub needs an mtab.
ln -sfn /proc/mounts $mountPoint/etc/mtab
# Mark the target as a NixOS installation, otherwise
# switch-to-configuration will chicken out.
touch $mountPoint/etc/NIXOS
# Switch to the new system configuration. This will install Grub with
# a menu default pointing at the kernel/initrd/etc of the new
# configuration.

View file

@ -0,0 +1,105 @@
#! @shell@
# This script's goal is to perform all "static" setup of a filesystem structure from pre-built store paths. Everything
# in here should run in a non-root context and inside a Nix builder. It's designed primarily to be called from image-
# building scripts and from nixos-install, but because it makes very few assumptions about the context in which it runs,
# it could be useful in other contexts as well.
#
# Current behavior:
# - set up basic filesystem structure
# - make Nix store etc.
# - copy Nix, system, channel, and miscellaneous closures to target Nix store
# - register validity of all paths in the target store
# - set up channel and system profiles
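#
# Example invocation (illustrative; the store paths and closure files are placeholders):
#   nixos-prepare-root /mnt /nix/store/...-channel /nix/store/...-system \
#     ./nix.closure ./system.closure ./channel.closure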
# Ensure a consistent umask.
umask 0022
set -e
mountPoint="$1"
channel="$2"
system="$3"
shift 3
closures="$@"
PATH="@coreutils@/bin:@nix@/bin:@perl@/bin:@utillinux@/bin:@rsync@/bin"
if ! test -e "$mountPoint"; then
echo "mount point $mountPoint doesn't exist"
exit 1
fi
# Create a few of the standard directories in the target root directory.
mkdir -m 0755 -p $mountPoint/dev $mountPoint/proc $mountPoint/sys $mountPoint/etc $mountPoint/run $mountPoint/home
mkdir -m 01777 -p $mountPoint/tmp
mkdir -m 0755 -p $mountPoint/tmp/root
mkdir -m 0755 -p $mountPoint/var
mkdir -m 0700 -p $mountPoint/root
ln -s /run $mountPoint/var/run
# Create the necessary Nix directories on the target device
mkdir -m 0755 -p \
$mountPoint/nix/var/nix/gcroots \
$mountPoint/nix/var/nix/temproots \
$mountPoint/nix/var/nix/userpool \
$mountPoint/nix/var/nix/profiles \
$mountPoint/nix/var/nix/db \
$mountPoint/nix/var/log/nix/drvs
mkdir -m 1775 -p $mountPoint/nix/store
# All Nix operations below should operate on our target store, not /nix/store.
# N.B: this relies on Nix 1.12 or higher
export NIX_REMOTE=local?root=$mountPoint
# Copy our closures to the Nix store on the target mount point, unless they're already there.
for i in $closures; do
# We support closures both in the format produced by `nix-store --export` and by `exportReferencesGraph`,
# mostly because there doesn't seem to be a single format that can be produced both outside of a Nix build and
# inside one. See https://github.com/NixOS/nix/issues/1242 for more discussion.
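# For reference (illustrative commands; "$path" and "someDrv" are placeholders), the two
# formats are produced roughly like this:
#   nix-store --export $(nix-store -qR "$path") > system.closure   # serialized closure
#   exportReferencesGraph = [ "x" someDrv ];                       # inside a Nix build; writes the graph to ./x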
if [[ "$i" =~ \.closure$ ]]; then
echo "importing serialized closure $i to $mountPoint..."
nix-store --import < $i
else
# There has to be a better way to do this, right?
echo "copying closure $i to $mountPoint..."
for j in $(perl @pathsFromGraph@ $i); do
echo " $j... "
rsync -a $j $mountPoint/nix/store/
done
nix-store --option build-users-group root --register-validity < $i
fi
done
# Create the required /bin/sh symlink; otherwise lots of things
# (notably the system() function) won't work.
if [ ! -x $mountPoint/@shell@ ]; then
echo "Error: @shell@ wasn't included in the closure" >&2
exit 1
fi
mkdir -m 0755 -p $mountPoint/bin
ln -sf @shell@ $mountPoint/bin/sh
echo "setting the system closure to '$system'..."
nix-env "${extraBuildFlags[@]}" -p $mountPoint/nix/var/nix/profiles/system --set "$system"
ln -sfn /nix/var/nix/profiles/system $mountPoint/run/current-system
# Copy the NixOS/Nixpkgs sources to the target as the initial contents of the NixOS channel.
mkdir -m 0755 -p $mountPoint/nix/var/nix/profiles
mkdir -m 1777 -p $mountPoint/nix/var/nix/profiles/per-user
mkdir -m 0755 -p $mountPoint/nix/var/nix/profiles/per-user/root
if [ -z "$noChannelCopy" ] && [ -n "$channel" ]; then
echo "copying channel..."
nix-env --option build-use-substitutes false "${extraBuildFlags[@]}" -p $mountPoint/nix/var/nix/profiles/per-user/root/channels --set "$channel" --quiet
fi
mkdir -m 0700 -p $mountPoint/root/.nix-defexpr
ln -sfn /nix/var/nix/profiles/per-user/root/channels $mountPoint/root/.nix-defexpr/channels
# Mark the target as a NixOS installation, otherwise switch-to-configuration will chicken out.
touch $mountPoint/etc/NIXOS

View file

@ -4,7 +4,6 @@
{ config, pkgs, modulesPath, ... }:
let
cfg = config.installer;
makeProg = args: pkgs.substituteAll (args // {
@ -17,6 +16,14 @@ let
src = ./nixos-build-vms/nixos-build-vms.sh;
};
nixos-prepare-root = makeProg {
name = "nixos-prepare-root";
src = ./nixos-prepare-root.sh;
nix = pkgs.nixUnstable;
inherit (pkgs) perl pathsFromGraph rsync utillinux coreutils;
};
nixos-install = makeProg {
name = "nixos-install";
src = ./nixos-install.sh;
@ -26,6 +33,7 @@ let
cacert = "${pkgs.cacert}/etc/ssl/certs/ca-bundle.crt";
root_uid = config.ids.uids.root;
nixbld_gid = config.ids.gids.nixbld;
prepare_root = nixos-prepare-root;
nixClosure = pkgs.runCommand "closure"
{ exportReferencesGraph = ["refs" config.nix.package.out]; }
@ -69,6 +77,7 @@ in
environment.systemPackages =
[ nixos-build-vms
nixos-prepare-root
nixos-install
nixos-rebuild
nixos-generate-config
@ -77,7 +86,7 @@ in
];
system.build = {
inherit nixos-install nixos-generate-config nixos-option nixos-rebuild;
inherit nixos-install nixos-prepare-root nixos-generate-config nixos-option nixos-rebuild;
};
};

View file

@ -2,16 +2,6 @@
{
_module.args = {
pkgs_i686 = import ../../.. {
system = "i686-linux";
# FIXME: we enable config.allowUnfree to make packages like
# nvidia-x11 available. This isn't a problem because if the user has
# nixpkgs.config.allowUnfree = false, then evaluation will fail on
# the 64-bit package anyway. However, it would be cleaner to respect
# nixpkgs.config here.
config.allowUnfree = true;
};
utils = import ../../lib/utils.nix pkgs;
};
}

View file

@ -289,6 +289,10 @@
rpc = 271;
geoip = 272;
fcron = 273;
sonarr = 274;
radarr = 275;
jackett = 276;
aria2 = 277;
# When adding a uid, make sure it doesn't match an existing gid. And don't use uids above 399!
@ -547,6 +551,10 @@
#rpc = 271; # unused
#geoip = 272; # unused
fcron = 273;
sonarr = 274;
radarr = 275;
jackett = 276;
aria2 = 277;
# When adding a gid, make sure it doesn't match an existing
# uid. Users and groups with the same name should have equal

View file

@ -42,6 +42,8 @@ let
merge = lib.mergeOneOption;
};
_pkgs = import ../../.. config.nixpkgs;
in
{
@ -97,6 +99,9 @@ in
};
config = {
_module.args.pkgs = import ../../.. config.nixpkgs;
_module.args = {
pkgs = _pkgs;
pkgs_i686 = _pkgs.pkgsi686Linux;
};
};
}

View file

@ -88,7 +88,9 @@
./programs/mtr.nix
./programs/nano.nix
./programs/oblogout.nix
./programs/qt5ct.nix
./programs/screen.nix
./programs/slock.nix
./programs/shadow.nix
./programs/shell.nix
./programs/spacefm.nix
@ -101,7 +103,9 @@
./programs/wvdial.nix
./programs/xfs_quota.nix
./programs/xonsh.nix
./programs/zsh/oh-my-zsh.nix
./programs/zsh/zsh.nix
./programs/zsh/zsh-syntax-highlighting.nix
./rename.nix
./security/acme.nix
./security/apparmor.nix
@ -113,6 +117,7 @@
./security/duosec.nix
./security/grsecurity.nix
./security/hidepid.nix
./security/lock-kernel-modules.nix
./security/oath.nix
./security/pam.nix
./security/pam_usb.nix
@ -419,6 +424,7 @@
./services/networking/i2p.nix
./services/networking/iodine.nix
./services/networking/ircd-hybrid/default.nix
./services/networking/keepalived/default.nix
./services/networking/kippo.nix
./services/networking/kresd.nix
./services/networking/lambdabot.nix
@ -659,6 +665,7 @@
./tasks/scsi-link-power-management.nix
./tasks/swraid.nix
./tasks/trackpoint.nix
./tasks/powertop.nix
./testing/service-runner.nix
./virtualisation/container-config.nix
./virtualisation/containers.nix

View file

@ -0,0 +1,62 @@
# A profile with most (vanilla) hardening options enabled by default,
# potentially at the cost of features and performance.
{ config, lib, pkgs, ... }:
with lib;
{
boot.kernelPackages = mkDefault pkgs.linuxPackages_hardened;
security.hideProcessInformation = mkDefault true;
security.lockKernelModules = mkDefault true;
security.apparmor.enable = mkDefault true;
boot.kernelParams = [
# Overwrite free'd memory
"page_poison=1"
# Disable legacy virtual syscalls
"vsyscall=none"
# Disable hibernation (allows replacing the running kernel)
"nohibernate"
];
# Restrict ptrace() usage to processes with a pre-defined relationship
# (e.g., parent/child)
boot.kernel.sysctl."kernel.yama.ptrace_scope" = mkOverride 500 1;
# Prevent replacing the running kernel image w/o reboot
boot.kernel.sysctl."kernel.kexec_load_disabled" = mkDefault true;
# Restrict access to kernel ring buffer (information leaks)
boot.kernel.sysctl."kernel.dmesg_restrict" = mkDefault true;
# Hide kptrs even for processes with CAP_SYSLOG
boot.kernel.sysctl."kernel.kptr_restrict" = mkOverride 500 2;
# Unprivileged access to bpf() has been used for privilege escalation in
# the past
boot.kernel.sysctl."kernel.unprivileged_bpf_disabled" = mkDefault true;
# Disable bpf() JIT (to eliminate spray attacks)
boot.kernel.sysctl."net.core.bpf_jit_enable" = mkDefault false;
# ... or at least apply some hardening to it
boot.kernel.sysctl."net.core.bpf_jit_harden" = mkDefault true;
# A recurring problem with user namespaces is that there are
# still code paths where the kernel's permission checking logic
# fails to account for namespacing, instead permitting a
# namespaced process to act outside the namespace with the
# same privileges as it would have inside it. This is particularly
# bad in the common case of running as root within the namespace.
#
# Setting the number of allowed userns to 0 effectively disables
# the feature at runtime. Attempting to create a user namespace
# with unshare will then fail with "no space left on device".
boot.kernel.sysctl."user.max_user_namespaces" = mkDefault 0;
}

View file

@ -8,13 +8,14 @@
with lib;
let
cfg = config.programs.command-not-found;
commandNotFound = pkgs.substituteAll {
name = "command-not-found";
dir = "bin";
src = ./command-not-found.pl;
isExecutable = true;
inherit (pkgs) perl;
inherit (cfg) dbPath;
perlFlags = concatStrings (map (path: "-I ${path}/lib/perl5/site_perl ")
[ pkgs.perlPackages.DBI pkgs.perlPackages.DBDSQLite pkgs.perlPackages.StringShellQuote ]);
};
@ -22,50 +23,66 @@ let
in
{
options.programs.command-not-found = {
programs.bash.interactiveShellInit =
''
# This function is called whenever a command is not found.
command_not_found_handle() {
local p=/run/current-system/sw/bin/command-not-found
if [ -x $p -a -f /nix/var/nix/profiles/per-user/root/channels/nixos/programs.sqlite ]; then
# Run the helper program.
$p "$@"
# Retry the command if we just installed it.
if [ $? = 126 ]; then
"$@"
enable = mkEnableOption "command-not-found hook for interactive shell";
dbPath = mkOption {
default = "/nix/var/nix/profiles/per-user/root/channels/nixos/programs.sqlite" ;
description = ''
Absolute path to programs.sqlite.
By default this file will be provided by your channel
(nixexprs.tar.xz).
'';
type = types.path;
};
};
config = mkIf cfg.enable {
programs.bash.interactiveShellInit =
''
# This function is called whenever a command is not found.
command_not_found_handle() {
local p=${commandNotFound}/bin/command-not-found
if [ -x $p -a -f ${cfg.dbPath} ]; then
# Run the helper program.
$p "$@"
# Retry the command if we just installed it.
if [ $? = 126 ]; then
"$@"
else
return 127
fi
else
echo "$1: command not found" >&2
return 127
fi
else
echo "$1: command not found" >&2
return 127
fi
}
'';
}
'';
programs.zsh.interactiveShellInit =
''
# This function is called whenever a command is not found.
command_not_found_handler() {
local p=/run/current-system/sw/bin/command-not-found
if [ -x $p -a -f /nix/var/nix/profiles/per-user/root/channels/nixos/programs.sqlite ]; then
# Run the helper program.
$p "$@"
programs.zsh.interactiveShellInit =
''
# This function is called whenever a command is not found.
command_not_found_handler() {
local p=${commandNotFound}/bin/command-not-found
if [ -x $p -a -f ${cfg.dbPath} ]; then
# Run the helper program.
$p "$@"
# Retry the command if we just installed it.
if [ $? = 126 ]; then
"$@"
# Retry the command if we just installed it.
if [ $? = 126 ]; then
"$@"
fi
else
# Indicate that there was an error so ZSH falls back to its default handler
echo "$1: command not found" >&2
return 127
fi
else
# Indicate that there was an error so ZSH falls back to its default handler
return 127
fi
}
'';
}
'';
environment.systemPackages = [ commandNotFound ];
# TODO: tab completion for uninstalled commands! :-)
environment.systemPackages = [ commandNotFound ];
};
}

View file

@ -8,7 +8,7 @@ use Config;
my $program = $ARGV[0];
my $dbPath = "/nix/var/nix/profiles/per-user/root/channels/nixos/programs.sqlite";
my $dbPath = "@dbPath@";
my $dbh = DBI->connect("dbi:SQLite:dbname=$dbPath", "", "")
or die "cannot open database `$dbPath'";

View file

@ -0,0 +1,31 @@
{ config, lib, pkgs, ... }:
with lib;
{
meta.maintainers = [ maintainers.romildo ];
###### interface
options = {
programs.qt5ct = {
enable = mkOption {
default = false;
type = types.bool;
description = ''
Whether to enable the Qt5 Configuration Tool (qt5ct), a
program that allows users to configure Qt5 settings (theme,
font, icons, etc.) under desktop environments or window
managers without Qt integration.
Official home page: <link xlink:href="https://sourceforge.net/projects/qt5ct/">https://sourceforge.net/projects/qt5ct/</link>
'';
};
};
};
###### implementation
config = mkIf config.programs.qt5ct.enable {
environment.variables.QT_QPA_PLATFORMTHEME = "qt5ct";
environment.systemPackages = [ pkgs.qt5ct ];
};
}

View file

@ -0,0 +1,26 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.programs.slock;
in
{
options = {
programs.slock = {
enable = mkOption {
default = false;
type = types.bool;
description = ''
Whether to install slock screen locker with setuid wrapper.
'';
};
};
};
config = mkIf cfg.enable {
environment.systemPackages = [ pkgs.slock ];
security.wrappers.slock.source = "${pkgs.slock.out}/bin/slock";
};
}

View file

@ -0,0 +1,66 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.programs.zsh.oh-my-zsh;
in
{
options = {
programs.zsh.oh-my-zsh = {
enable = mkOption {
default = false;
description = ''
Enable oh-my-zsh.
'';
};
plugins = mkOption {
default = [];
type = types.listOf(types.str);
description = ''
List of oh-my-zsh plugins
'';
};
custom = mkOption {
default = "";
type = types.str;
description = ''
Path to a custom oh-my-zsh package that overrides the default oh-my-zsh configuration.
'';
};
theme = mkOption {
default = "";
type = types.str;
description = ''
Name of the theme to be used by oh-my-zsh.
'';
};
};
};
config = mkIf cfg.enable {
environment.systemPackages = with pkgs; [ oh-my-zsh ];
programs.zsh.interactiveShellInit = with pkgs; with builtins; ''
# oh-my-zsh configuration generated by NixOS
export ZSH=${oh-my-zsh}/share/oh-my-zsh
${optionalString (length(cfg.plugins) > 0)
"plugins=(${concatStringsSep " " cfg.plugins})"
}
${optionalString (stringLength(cfg.custom) > 0)
"ZSH_CUSTOM=\"${cfg.custom}\""
}
${optionalString (stringLength(cfg.theme) > 0)
"ZSH_THEME=\"${cfg.theme}\""
}
source $ZSH/oh-my-zsh.sh
'';
};
}
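A minimal usage sketch of the new module (illustrative; the plugin and theme names are common oh-my-zsh examples, not something this module provides):

    programs.zsh.oh-my-zsh = {
      enable = true;
      plugins = [ "git" "sudo" ];
      theme = "robbyrussell";
    };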

View file

@ -0,0 +1,53 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.programs.zsh.syntax-highlighting;
in
{
options = {
programs.zsh.syntax-highlighting = {
enable = mkOption {
default = false;
type = types.bool;
description = ''
Enable zsh-syntax-highlighting.
'';
};
highlighters = mkOption {
default = [ "main" ];
# https://github.com/zsh-users/zsh-syntax-highlighting/blob/master/docs/highlighters.md
type = types.listOf(types.enum([
"main"
"brackets"
"pattern"
"cursor"
"root"
"line"
]));
description = ''
Specifies the highlighters to be used by zsh-syntax-highlighting.
The available highlighters are documented here:
https://github.com/zsh-users/zsh-syntax-highlighting/blob/master/docs/highlighters.md
'';
};
};
};
config = mkIf cfg.enable {
environment.systemPackages = with pkgs; [ zsh-syntax-highlighting ];
programs.zsh.interactiveShellInit = with pkgs; with builtins; ''
source ${zsh-syntax-highlighting}/share/zsh-syntax-highlighting/zsh-syntax-highlighting.zsh
${optionalString (length(cfg.highlighters) > 0)
"ZSH_HIGHLIGHT_HIGHLIGHTERS=(${concatStringsSep " " cfg.highlighters})"
}
'';
};
}
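Likewise, a small sketch of the syntax-highlighting options (illustrative):

    programs.zsh.syntax-highlighting = {
      enable = true;
      highlighters = [ "main" "brackets" ];
    };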

View file

@ -84,14 +84,6 @@ in
type = types.bool;
};
enableSyntaxHighlighting = mkOption {
default = false;
description = ''
Enable zsh-syntax-highlighting
'';
type = types.bool;
};
enableAutosuggestions = mkOption {
default = false;
description = ''
@ -130,10 +122,6 @@ in
${if cfg.enableCompletion then "autoload -U compinit && compinit" else ""}
${optionalString (cfg.enableSyntaxHighlighting)
"source ${pkgs.zsh-syntax-highlighting}/share/zsh-syntax-highlighting/zsh-syntax-highlighting.zsh"
}
${optionalString (cfg.enableAutosuggestions)
"source ${pkgs.zsh-autosuggestions}/share/zsh-autosuggestions/zsh-autosuggestions.zsh"
}
@ -143,7 +131,6 @@ in
${cfge.interactiveShellInit}
HELPDIR="${pkgs.zsh}/share/zsh/$ZSH_VERSION/help"
'';
@ -206,8 +193,7 @@ in
environment.etc."zinputrc".source = ./zinputrc;
environment.systemPackages = [ pkgs.zsh ]
++ optional cfg.enableCompletion pkgs.nix-zsh-completions
++ optional cfg.enableSyntaxHighlighting pkgs.zsh-syntax-highlighting;
++ optional cfg.enableCompletion pkgs.nix-zsh-completions;
environment.pathsToLink = optional cfg.enableCompletion "/share/zsh";

View file

@ -204,5 +204,8 @@ with lib;
"Set the option `services.xserver.displayManager.sddm.package' instead.")
(mkRemovedOptionModule [ "fonts" "fontconfig" "forceAutohint" ] "")
(mkRemovedOptionModule [ "fonts" "fontconfig" "renderMonoTTFAsBitmap" ] "")
# ZSH
(mkRenamedOptionModule [ "programs" "zsh" "enableSyntaxHighlighting" ] [ "programs" "zsh" "syntax-highlighting" "enable" ])
];
}

View file

@ -19,6 +19,12 @@ let
'';
};
domain = mkOption {
type = types.nullOr types.str;
default = null;
description = "Domain to fetch certificate for (defaults to the entry name)";
};
email = mkOption {
type = types.nullOr types.str;
default = null;
@ -157,9 +163,10 @@ in
servicesLists = mapAttrsToList certToServices cfg.certs;
certToServices = cert: data:
let
domain = if data.domain != null then data.domain else cert;
cpath = "${cfg.directory}/${cert}";
rights = if data.allowKeysForGroup then "750" else "700";
cmdline = [ "-v" "-d" cert "--default_root" data.webroot "--valid_min" cfg.validMin ]
cmdline = [ "-v" "-d" domain "--default_root" data.webroot "--valid_min" cfg.validMin ]
++ optionals (data.email != null) [ "--email" data.email ]
++ concatMap (p: [ "-f" p ]) data.plugins
++ concatLists (mapAttrsToList (name: root: [ "-d" (if root == null then name else "${name}:${root}")]) data.extraDomains);

View file

@ -13,7 +13,7 @@ in
{
meta = {
maintainers = with maintainers; [ joachifm ];
maintainers = with maintainers; [ ];
doc = ./grsecurity.xml;
};

View file

@ -26,9 +26,11 @@
<link xlink:href="https://wiki.archlinux.org/index.php/Grsecurity">Arch
Linux wiki page on grsecurity</link>.
<note><para>grsecurity/PaX is only available for the latest linux -stable
kernel; patches against older kernels are available from upstream only for
a fee.</para></note>
<warning><para>Upstream has ceased free support for grsecurity/PaX. See
<link xlink:href="https://grsecurity.net/passing_the_baton.php">
the announcement</link> for more information. Consequently, NixOS
support for grsecurity/PaX also must cease. Enabling this module will
result in a build error.</para></warning>
<note><para>We standardise on a desktop oriented configuration primarily due
to lack of resources. The grsecurity/PaX configuration state space is huge
and each configuration requires quite a bit of testing to ensure that the

View file

@ -0,0 +1,36 @@
{ config, lib, ... }:
with lib;
{
options = {
security.lockKernelModules = mkOption {
type = types.bool;
default = false;
description = ''
Disable kernel module loading once the system is fully initialised.
Module loading is disabled until the next reboot. Problems caused
by delayed module loading can be fixed by adding the module(s) in
question to <option>boot.kernelModules</option>.
'';
};
};
config = mkIf config.security.lockKernelModules {
systemd.services.disable-kernel-module-loading = rec {
description = "Disable kernel module loading";
wantedBy = [ config.systemd.defaultUnit ];
after = [ "systemd-udev-settle.service" "firewall.service" "systemd-modules-load.service" ] ++ wantedBy;
script = "echo -n 1 > /proc/sys/kernel/modules_disabled";
unitConfig.ConditionPathIsReadWrite = "/proc/sys/kernel";
serviceConfig = {
Type = "oneshot";
RemainAfterExit = true;
};
};
};
}
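Enabling the option is a one-liner; modules still needed after boot belong in boot.kernelModules instead (illustrative; "kvm-intel" is just an example module):

    security.lockKernelModules = true;
    boot.kernelModules = [ "kvm-intel" ];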

View file

@ -45,7 +45,7 @@ let
cniConfig = pkgs.buildEnv {
name = "kubernetes-cni-config";
paths = imap (i: entry:
pkgs.writeTextDir "${10+i}-${entry.type}.conf" (builtins.toJSON entry)
pkgs.writeTextDir "${toString (10+i)}-${entry.type}.conf" (builtins.toJSON entry)
) cfg.kubelet.cni.config;
};
@ -597,7 +597,7 @@ in {
(mkIf cfg.kubelet.enable {
systemd.services.kubelet = {
description = "Kubernetes Kubelet Service";
wantedBy = [ "multi-user.target" ];
wantedBy = [ "kubernetes.target" ];
after = [ "network.target" "docker.service" "kube-apiserver.service" ];
path = with pkgs; [ gitMinimal openssh docker utillinux iproute ethtool thin-provisioning-tools iptables ];
preStart = ''
@ -606,14 +606,15 @@ in {
${concatMapStringsSep "\n" (p: "ln -fs ${p.plugins}/* /opt/cni/bin") cfg.kubelet.cni.packages}
'';
serviceConfig = {
Slice = "kubernetes.slice";
ExecStart = ''${cfg.package}/bin/kubelet \
--pod-manifest-path=${manifests} \
--kubeconfig=${kubeconfig} \
--require-kubeconfig \
--address=${cfg.kubelet.address} \
--port=${toString cfg.kubelet.port} \
--register-node=${if cfg.kubelet.registerNode then "true" else "false"} \
--register-schedulable=${if cfg.kubelet.registerSchedulable then "true" else "false"} \
--register-node=${boolToString cfg.kubelet.registerNode} \
--register-schedulable=${boolToString cfg.kubelet.registerSchedulable} \
${optionalString (cfg.kubelet.tlsCertFile != null)
"--tls-cert-file=${cfg.kubelet.tlsCertFile}"} \
${optionalString (cfg.kubelet.tlsKeyFile != null)
@ -621,7 +622,7 @@ in {
--healthz-bind-address=${cfg.kubelet.healthz.bind} \
--healthz-port=${toString cfg.kubelet.healthz.port} \
--hostname-override=${cfg.kubelet.hostname} \
--allow-privileged=${if cfg.kubelet.allowPrivileged then "true" else "false"} \
--allow-privileged=${boolToString cfg.kubelet.allowPrivileged} \
--root-dir=${cfg.dataDir} \
--cadvisor_port=${toString cfg.kubelet.cadvisorPort} \
${optionalString (cfg.kubelet.clusterDns != "")
@ -655,9 +656,10 @@ in {
(mkIf cfg.apiserver.enable {
systemd.services.kube-apiserver = {
description = "Kubernetes Kubelet Service";
wantedBy = [ "multi-user.target" ];
wantedBy = [ "kubernetes.target" ];
after = [ "network.target" "docker.service" ];
serviceConfig = {
Slice = "kubernetes.slice";
ExecStart = ''${cfg.package}/bin/kube-apiserver \
--etcd-servers=${concatStringsSep "," cfg.etcd.servers} \
${optionalString (cfg.etcd.caFile != null)
@ -670,14 +672,14 @@ in {
--bind-address=0.0.0.0 \
${optionalString (cfg.apiserver.advertiseAddress != null)
"--advertise-address=${cfg.apiserver.advertiseAddress}"} \
--allow-privileged=${if cfg.apiserver.allowPrivileged then "true" else "false"} \
--allow-privileged=${boolToString cfg.apiserver.allowPrivileged}\
${optionalString (cfg.apiserver.tlsCertFile != null)
"--tls-cert-file=${cfg.apiserver.tlsCertFile}"} \
${optionalString (cfg.apiserver.tlsKeyFile != null)
"--tls-private-key-file=${cfg.apiserver.tlsKeyFile}"} \
${optionalString (cfg.apiserver.tokenAuth != null)
"--token-auth-file=${cfg.apiserver.tokenAuth}"} \
--kubelet-https=${if cfg.apiserver.kubeletHttps then "true" else "false"} \
--kubelet-https=${boolToString cfg.apiserver.kubeletHttps} \
${optionalString (cfg.apiserver.kubeletClientCaFile != null)
"--kubelet-certificate-authority=${cfg.apiserver.kubeletClientCaFile}"} \
${optionalString (cfg.apiserver.kubeletClientCertFile != null)
@ -713,13 +715,14 @@ in {
(mkIf cfg.scheduler.enable {
systemd.services.kube-scheduler = {
description = "Kubernetes Scheduler Service";
wantedBy = [ "multi-user.target" ];
wantedBy = [ "kubernetes.target" ];
after = [ "kube-apiserver.service" ];
serviceConfig = {
Slice = "kubernetes.slice";
ExecStart = ''${cfg.package}/bin/kube-scheduler \
--address=${cfg.scheduler.address} \
--port=${toString cfg.scheduler.port} \
--leader-elect=${if cfg.scheduler.leaderElect then "true" else "false"} \
--leader-elect=${boolToString cfg.scheduler.leaderElect} \
--kubeconfig=${kubeconfig} \
${optionalString cfg.verbose "--v=6"} \
${optionalString cfg.verbose "--log-flush-frequency=1s"} \
@ -735,16 +738,17 @@ in {
(mkIf cfg.controllerManager.enable {
systemd.services.kube-controller-manager = {
description = "Kubernetes Controller Manager Service";
wantedBy = [ "multi-user.target" ];
wantedBy = [ "kubernetes.target" ];
after = [ "kube-apiserver.service" ];
serviceConfig = {
RestartSec = "30s";
Restart = "on-failure";
Slice = "kubernetes.slice";
ExecStart = ''${cfg.package}/bin/kube-controller-manager \
--address=${cfg.controllerManager.address} \
--port=${toString cfg.controllerManager.port} \
--kubeconfig=${kubeconfig} \
--leader-elect=${if cfg.controllerManager.leaderElect then "true" else "false"} \
--leader-elect=${boolToString cfg.controllerManager.leaderElect} \
${if (cfg.controllerManager.serviceAccountKeyFile!=null)
then "--service-account-private-key-file=${cfg.controllerManager.serviceAccountKeyFile}"
else "--service-account-private-key-file=/var/run/kubernetes/apiserver.key"} \
@ -767,10 +771,11 @@ in {
(mkIf cfg.proxy.enable {
systemd.services.kube-proxy = {
description = "Kubernetes Proxy Service";
wantedBy = [ "multi-user.target" ];
wantedBy = [ "kubernetes.target" ];
after = [ "kube-apiserver.service" ];
path = [pkgs.iptables];
serviceConfig = {
Slice = "kubernetes.slice";
ExecStart = ''${cfg.package}/bin/kube-proxy \
--kubeconfig=${kubeconfig} \
--bind-address=${cfg.proxy.address} \
@ -786,9 +791,10 @@ in {
(mkIf cfg.dns.enable {
systemd.services.kube-dns = {
description = "Kubernetes Dns Service";
wantedBy = [ "multi-user.target" ];
wantedBy = [ "kubernetes.target" ];
after = [ "kube-apiserver.service" ];
serviceConfig = {
Slice = "kubernetes.slice";
ExecStart = ''${cfg.package}/bin/kube-dns \
--kubecfg-file=${kubeconfig} \
--dns-port=${toString cfg.dns.port} \
@ -836,6 +842,11 @@ in {
cfg.proxy.enable ||
cfg.dns.enable
) {
systemd.targets.kubernetes = {
description = "Kubernetes";
wantedBy = [ "multi-user.target" ];
};
systemd.tmpfiles.rules = [
"d /opt/cni/bin 0755 root root -"
"d /var/run/kubernetes 0755 kubernetes kubernetes -"

View file

@ -233,6 +233,7 @@ in
hydra_logo ${cfg.logo}
''}
gc_roots_dir ${cfg.gcRootsDir}
use-substitutes = ${if cfg.useSubstitutes then "1" else "0"}
'';
environment.systemPackages = [ cfg.package ];
@ -328,7 +329,7 @@ in
IN_SYSTEMD = "1"; # to get log severity levels
};
serviceConfig =
{ ExecStart = "@${cfg.package}/bin/hydra-queue-runner hydra-queue-runner -v --option build-use-substitutes ${if cfg.useSubstitutes then "true" else "false"}";
{ ExecStart = "@${cfg.package}/bin/hydra-queue-runner hydra-queue-runner -v --option build-use-substitutes ${boolToString cfg.useSubstitutes}";
ExecStopPost = "${cfg.package}/bin/hydra-queue-runner --unlock";
User = "hydra-queue-runner";
Restart = "always";

View file

@ -21,8 +21,8 @@ let
cassandraConf = ''
cluster_name: ${cfg.clusterName}
num_tokens: 256
auto_bootstrap: ${if cfg.autoBootstrap then "true" else "false"}
hinted_handoff_enabled: ${if cfg.hintedHandOff then "true" else "false"}
auto_bootstrap: ${boolToString cfg.autoBootstrap}
hinted_handoff_enabled: ${boolToString cfg.hintedHandOff}
hinted_handoff_throttle_in_kb: ${builtins.toString cfg.hintedHandOffThrottle}
max_hints_delivery_threads: 2
max_hint_window_in_ms: 10800000 # 3 hours
@ -62,7 +62,7 @@ let
rpc_keepalive: true
rpc_server_type: sync
thrift_framed_transport_size_in_mb: 15
incremental_backups: ${if cfg.incrementalBackups then "true" else "false"}
incremental_backups: ${boolToString cfg.incrementalBackups}
snapshot_before_compaction: false
auto_snapshot: true
column_index_size_in_kb: 64
@ -89,7 +89,7 @@ let
truststore: ${cfg.trustStorePath}
truststore_password: ${cfg.trustStorePassword}
client_encryption_options:
enabled: ${if cfg.clientEncryption then "true" else "false"}
enabled: ${boolToString cfg.clientEncryption}
keystore: ${cfg.keyStorePath}
keystore_password: ${cfg.keyStorePassword}
internode_compression: all

View file

@ -4,8 +4,6 @@ with lib;
let
b2s = x: if x then "true" else "false";
cfg = config.services.mongodb;
mongodb = cfg.package;

View file

@ -25,15 +25,22 @@
path = [ pkgs.bash ];
description = "Disable AMD Card";
after = [ "sys-kernel-debug.mount" ];
requires = [ "sys-kernel-debug.mount" ];
wantedBy = [ "multi-user.target" ];
before = [ "systemd-vconsole-setup.service" "display-manager.service" ];
requires = [ "sys-kernel-debug.mount" "vgaswitcheroo.path" ];
serviceConfig = {
Type = "oneshot";
RemainAfterExit = true;
ExecStart = "${pkgs.bash}/bin/sh -c 'echo -e \"IGD\\nOFF\" > /sys/kernel/debug/vgaswitcheroo/switch; exit 0'";
ExecStop = "${pkgs.bash}/bin/sh -c 'echo ON >/sys/kernel/debug/vgaswitcheroo/switch; exit 0'";
ExecStart = "${pkgs.bash}/bin/sh -c 'echo -e \"IGD\\nOFF\" > /sys/kernel/debug/vgaswitcheroo/switch'";
ExecStop = "${pkgs.bash}/bin/sh -c 'echo ON >/sys/kernel/debug/vgaswitcheroo/switch'";
};
};
systemd.paths."vgaswitcheroo" = {
pathConfig = {
PathExists = "/sys/kernel/debug/vgaswitcheroo/switch";
Unit = "amd-hybrid-graphics.service";
};
wantedBy = ["multi-user.target"];
};
};
}

View file

@ -6,9 +6,7 @@ let
bluez-bluetooth = pkgs.bluez;
cfg = config.hardware.bluetooth;
in
{
in {
###### interface
@ -32,6 +30,8 @@ in
'';
description = ''
Set additional configuration for system-wide bluetooth (/etc/bluetooth/main.conf).
NOTE: We already include [Policy], so any configuration under the Policy group should come first.
'';
};
};
@ -45,7 +45,12 @@ in
environment.systemPackages = [ bluez-bluetooth pkgs.openobex pkgs.obexftp ];
environment.etc = singleton {
source = pkgs.writeText "main.conf" cfg.extraConfig;
source = pkgs.writeText "main.conf" ''
[Policy]
AutoEnable=${lib.boolToString cfg.powerOnBoot}
${cfg.extraConfig}
'';
target = "bluetooth/main.conf";
};
@ -53,29 +58,11 @@ in
services.dbus.packages = [ bluez-bluetooth ];
systemd.packages = [ bluez-bluetooth ];
services.udev.extraRules = optionalString cfg.powerOnBoot ''
ACTION=="add", KERNEL=="hci[0-9]*", ENV{SYSTEMD_WANTS}="bluetooth-power@%k.service"
'';
systemd.services = {
bluetooth = {
wantedBy = [ "bluetooth.target" ];
aliases = [ "dbus-org.bluez.service" ];
};
"bluetooth-power@" = mkIf cfg.powerOnBoot {
description = "Power up bluetooth controller";
after = [
"bluetooth.service"
"suspend.target"
"sys-subsystem-bluetooth-devices-%i.device"
];
wantedBy = [ "suspend.target" ];
serviceConfig.Type = "oneshot";
serviceConfig.ExecStart = "${pkgs.bluez.out}/bin/hciconfig %i up";
};
};
systemd.user.services = {

View file

@ -58,6 +58,9 @@ in
powerManagement.cpuFreqGovernor = null;
systemd.services = {
"systemd-rfkill@".enable = false;
"systemd-rfkill".enable = false;
tlp = {
description = "TLP system startup/shutdown";

View file

@ -4,16 +4,15 @@ with lib;
let
cfg = config.services.graylog;
configBool = b: if b then "true" else "false";
confFile = pkgs.writeText "graylog.conf" ''
is_master = ${configBool cfg.isMaster}
is_master = ${boolToString cfg.isMaster}
node_id_file = ${cfg.nodeIdFile}
password_secret = ${cfg.passwordSecret}
root_username = ${cfg.rootUsername}
root_password_sha2 = ${cfg.rootPasswordSha2}
elasticsearch_cluster_name = ${cfg.elasticsearchClusterName}
elasticsearch_discovery_zen_ping_multicast_enabled = ${configBool cfg.elasticsearchDiscoveryZenPingMulticastEnabled}
elasticsearch_discovery_zen_ping_multicast_enabled = ${boolToString cfg.elasticsearchDiscoveryZenPingMulticastEnabled}
elasticsearch_discovery_zen_ping_unicast_hosts = ${cfg.elasticsearchDiscoveryZenPingUnicastHosts}
message_journal_dir = ${cfg.messageJournalDir}
mongodb_uri = ${cfg.mongodbUri}

View file

@ -21,7 +21,7 @@ in
configure a number of bepasty servers which will be started with
gunicorn.
'';
type = with types ; attrsOf (submodule ({
type = with types ; attrsOf (submodule ({ config, ... } : {
options = {
@ -34,7 +34,6 @@ in
default = "127.0.0.1:8000";
};
dataDir = mkOption {
type = types.str;
description = ''
@ -73,10 +72,28 @@ in
type = types.str;
description = ''
server secret for safe session cookies; it must be set.
Warning: this secret is stored in the WORLD-READABLE Nix store!
It's recommended to use <option>secretKeyFile</option>
which takes precedence over <option>secretKey</option>.
'';
default = "";
};
secretKeyFile = mkOption {
type = types.nullOr types.str;
default = null;
description = ''
A file that contains the server secret for safe session cookies; it must be set.
<option>secretKeyFile</option> takes precedence over <option>secretKey</option>.
Warning: when <option>secretKey</option> is non-empty <option>secretKeyFile</option>
defaults to a file in the WORLD-READABLE Nix store containing that secret.
'';
};
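# Illustrative: keep the secret out of the Nix store by pointing at a file
# provisioned outside of it, e.g.
#   secretKeyFile = "/run/keys/bepasty-secret";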
workDir = mkOption {
type = types.str;
description = ''
@ -87,11 +104,22 @@ in
};
};
config = {
secretKeyFile = mkDefault (
if config.secretKey != ""
then toString (pkgs.writeTextFile {
name = "bepasty-secret-key";
text = config.secretKey;
})
else null
);
};
}));
};
};
config = mkIf cfg.enable {
environment.systemPackages = [ bepasty ];
# creates gunicorn systemd service for each configured server
@ -115,7 +143,7 @@ in
serviceConfig = {
Type = "simple";
PrivateTmp = true;
ExecStartPre = assert server.secretKey != ""; pkgs.writeScript "bepasty-server.${name}-init" ''
ExecStartPre = assert !isNull server.secretKeyFile; pkgs.writeScript "bepasty-server.${name}-init" ''
#!/bin/sh
mkdir -p "${server.workDir}"
mkdir -p "${server.dataDir}"
@ -123,7 +151,7 @@ in
cat > ${server.workDir}/bepasty-${name}.conf <<EOF
SITENAME="${name}"
STORAGE_FILESYSTEM_DIRECTORY="${server.dataDir}"
SECRET_KEY="${server.secretKey}"
SECRET_KEY="$(cat "${server.secretKeyFile}")"
DEFAULT_PERMISSIONS="${server.defaultPermissions}"
${server.extraConfig}
EOF

View file

@ -6,7 +6,7 @@ let
cfg = config.services.cgminer;
convType = with builtins;
v: if isBool v then (if v then "true" else "false") else toString v;
v: if isBool v then boolToString v else toString v;
mergedHwConfig =
mapAttrsToList (n: v: ''"${n}": "${(concatStringsSep "," (map convType v))}"'')
(foldAttrs (n: a: [n] ++ a) [] cfg.hardware);

View file

@ -12,7 +12,7 @@ let
nodes = [ ${concatMapStringsSep "," (s: ''"${s}"'') cfg.nodes}, ]
prefix = "${cfg.prefix}"
log-level = "${cfg.logLevel}"
watch = ${if cfg.watch then "true" else "false"}
watch = ${boolToString cfg.watch}
'';
in {

View file

@ -119,7 +119,7 @@ in {
extraConf = mkOption {
description = ''
Etcd extra configuration. See
<link xlink:href='https://github.com/coreos/etcd/blob/master/Documentation/configuration.md#environment-variables' />
<link xlink:href='https://github.com/coreos/etcd/blob/master/Documentation/op-guide/configuration.md#configuration-flags' />
'';
type = types.attrsOf types.str;
default = {};

View file

@ -22,14 +22,14 @@ in
echo "Creating jackett data directory in /var/lib/jackett/"
mkdir -p /var/lib/jackett/
}
chown -R jackett /var/lib/jackett/
chown -R jackett:jackett /var/lib/jackett/
chmod 0700 /var/lib/jackett/
'';
serviceConfig = {
Type = "simple";
User = "jackett";
Group = "nogroup";
Group = "jackett";
PermissionsStartOnly = "true";
ExecStart = "${pkgs.jackett}/bin/Jackett";
Restart = "on-failure";
@ -37,8 +37,11 @@ in
};
users.extraUsers.jackett = {
uid = config.ids.uids.jackett;
home = "/var/lib/jackett";
group = "jackett";
};
users.extraGroups.jackett.gid = config.ids.gids.jackett;
};
}

View file

@ -5,9 +5,8 @@ with lib;
let
cfg = config.services.matrix-synapse;
logConfigFile = pkgs.writeText "log_config.yaml" cfg.logConfig;
mkResource = r: ''{names: ${builtins.toJSON r.names}, compress: ${fromBool r.compress}}'';
mkListener = l: ''{port: ${toString l.port}, bind_address: "${l.bind_address}", type: ${l.type}, tls: ${fromBool l.tls}, x_forwarded: ${fromBool l.x_forwarded}, resources: [${concatStringsSep "," (map mkResource l.resources)}]}'';
fromBool = x: if x then "true" else "false";
mkResource = r: ''{names: ${builtins.toJSON r.names}, compress: ${boolToString r.compress}}'';
mkListener = l: ''{port: ${toString l.port}, bind_address: "${l.bind_address}", type: ${l.type}, tls: ${boolToString l.tls}, x_forwarded: ${boolToString l.x_forwarded}, resources: [${concatStringsSep "," (map mkResource l.resources)}]}'';
configFile = pkgs.writeText "homeserver.yaml" ''
${optionalString (cfg.tls_certificate_path != null) ''
tls_certificate_path: "${cfg.tls_certificate_path}"
@ -18,7 +17,7 @@ tls_private_key_path: "${cfg.tls_private_key_path}"
${optionalString (cfg.tls_dh_params_path != null) ''
tls_dh_params_path: "${cfg.tls_dh_params_path}"
''}
no_tls: ${fromBool cfg.no_tls}
no_tls: ${boolToString cfg.no_tls}
${optionalString (cfg.bind_port != null) ''
bind_port: ${toString cfg.bind_port}
''}
@ -30,7 +29,7 @@ bind_host: "${cfg.bind_host}"
''}
server_name: "${cfg.server_name}"
pid_file: "/var/run/matrix-synapse.pid"
web_client: ${fromBool cfg.web_client}
web_client: ${boolToString cfg.web_client}
${optionalString (cfg.public_baseurl != null) ''
public_baseurl: "${cfg.public_baseurl}"
''}
@ -58,8 +57,8 @@ media_store_path: "/var/lib/matrix-synapse/media"
uploads_path: "/var/lib/matrix-synapse/uploads"
max_upload_size: "${cfg.max_upload_size}"
max_image_pixels: "${cfg.max_image_pixels}"
dynamic_thumbnails: ${fromBool cfg.dynamic_thumbnails}
url_preview_enabled: ${fromBool cfg.url_preview_enabled}
dynamic_thumbnails: ${boolToString cfg.dynamic_thumbnails}
url_preview_enabled: ${boolToString cfg.url_preview_enabled}
${optionalString (cfg.url_preview_enabled == true) ''
url_preview_ip_range_blacklist: ${builtins.toJSON cfg.url_preview_ip_range_blacklist}
url_preview_ip_range_whitelist: ${builtins.toJSON cfg.url_preview_ip_range_whitelist}
@ -67,10 +66,10 @@ url_preview_url_blacklist: ${builtins.toJSON cfg.url_preview_url_blacklist}
''}
recaptcha_private_key: "${cfg.recaptcha_private_key}"
recaptcha_public_key: "${cfg.recaptcha_public_key}"
enable_registration_captcha: ${fromBool cfg.enable_registration_captcha}
enable_registration_captcha: ${boolToString cfg.enable_registration_captcha}
turn_uris: ${builtins.toJSON cfg.turn_uris}
turn_shared_secret: "${cfg.turn_shared_secret}"
enable_registration: ${fromBool cfg.enable_registration}
enable_registration: ${boolToString cfg.enable_registration}
${optionalString (cfg.registration_shared_secret != null) ''
registration_shared_secret: "${cfg.registration_shared_secret}"
''}
@ -78,15 +77,15 @@ recaptcha_siteverify_api: "https://www.google.com/recaptcha/api/siteverify"
turn_user_lifetime: "${cfg.turn_user_lifetime}"
user_creation_max_duration: ${cfg.user_creation_max_duration}
bcrypt_rounds: ${cfg.bcrypt_rounds}
allow_guest_access: ${fromBool cfg.allow_guest_access}
allow_guest_access: ${boolToString cfg.allow_guest_access}
trusted_third_party_id_servers: ${builtins.toJSON cfg.trusted_third_party_id_servers}
room_invite_state_types: ${builtins.toJSON cfg.room_invite_state_types}
${optionalString (cfg.macaroon_secret_key != null) ''
macaroon_secret_key: "${cfg.macaroon_secret_key}"
''}
expire_access_token: ${fromBool cfg.expire_access_token}
enable_metrics: ${fromBool cfg.enable_metrics}
report_stats: ${fromBool cfg.report_stats}
expire_access_token: ${boolToString cfg.expire_access_token}
enable_metrics: ${boolToString cfg.enable_metrics}
report_stats: ${boolToString cfg.report_stats}
signing_key_path: "/var/lib/matrix-synapse/homeserver.signing.key"
key_refresh_interval: "${cfg.key_refresh_interval}"
perspectives:

View file

@ -41,12 +41,12 @@ let
build-users-group = nixbld
build-max-jobs = ${toString (cfg.maxJobs)}
build-cores = ${toString (cfg.buildCores)}
build-use-sandbox = ${if (builtins.isBool cfg.useSandbox) then (if cfg.useSandbox then "true" else "false") else cfg.useSandbox}
build-use-sandbox = ${if (builtins.isBool cfg.useSandbox) then boolToString cfg.useSandbox else cfg.useSandbox}
build-sandbox-paths = ${toString cfg.sandboxPaths} /bin/sh=${sh} $(echo $extraPaths)
binary-caches = ${toString cfg.binaryCaches}
trusted-binary-caches = ${toString cfg.trustedBinaryCaches}
binary-cache-public-keys = ${toString cfg.binaryCachePublicKeys}
auto-optimise-store = ${if cfg.autoOptimiseStore then "true" else "false"}
auto-optimise-store = ${boolToString cfg.autoOptimiseStore}
${optionalString cfg.requireSignedBinaryCaches ''
signed-binary-caches = *
''}

View file

@ -91,9 +91,11 @@ in
# Copy the database skeleton files to /var/lib/plex/.skeleton
# See the Nix expression for Plex's package for more information on
# why this is done.
test -d "${cfg.dataDir}/.skeleton" || mkdir "${cfg.dataDir}/.skeleton"
install --owner ${cfg.user} --group ${cfg.group} -d "${cfg.dataDir}/.skeleton"
for db in "com.plexapp.plugins.library.db"; do
cp "${cfg.package}/usr/lib/plexmediaserver/Resources/base_$db" "${cfg.dataDir}/.skeleton/$db"
if [ ! -e "${cfg.dataDir}/.skeleton/$db" ]; then
cp "${cfg.package}/usr/lib/plexmediaserver/Resources/base_$db" "${cfg.dataDir}/.skeleton/$db"
fi
chmod u+w "${cfg.dataDir}/.skeleton/$db"
chown ${cfg.user}:${cfg.group} "${cfg.dataDir}/.skeleton/$db"
done
@ -136,6 +138,7 @@ in
Group = cfg.group;
PermissionsStartOnly = "true";
ExecStart = "/bin/sh -c ${cfg.package}/usr/lib/plexmediaserver/Plex\\ Media\\ Server";
KillSignal = "SIGQUIT";
Restart = "on-failure";
};
environment = {

View file

@ -22,14 +22,14 @@ in
echo "Creating radarr data directory in /var/lib/radarr/"
mkdir -p /var/lib/radarr/
}
chown -R radarr /var/lib/radarr/
chown -R radarr:radarr /var/lib/radarr/
chmod 0700 /var/lib/radarr/
'';
serviceConfig = {
Type = "simple";
User = "radarr";
Group = "nogroup";
Group = "radarr";
PermissionsStartOnly = "true";
ExecStart = "${pkgs.radarr}/bin/Radarr";
Restart = "on-failure";
@ -37,8 +37,11 @@ in
};
users.extraUsers.radarr = {
uid = config.ids.uids.radarr;
home = "/var/lib/radarr";
group = "radarr";
};
users.extraGroups.radarr.gid = config.ids.gids.radarr;
};
}

View file

@ -22,14 +22,14 @@ in
echo "Creating sonarr data directory in /var/lib/sonarr/"
mkdir -p /var/lib/sonarr/
}
chown -R sonarr /var/lib/sonarr/
chown -R sonarr:sonarr /var/lib/sonarr/
chmod 0700 /var/lib/sonarr/
'';
serviceConfig = {
Type = "simple";
User = "sonarr";
Group = "nogroup";
Group = "sonarr";
PermissionsStartOnly = "true";
ExecStart = "${pkgs.sonarr}/bin/NzbDrone --no-browser";
Restart = "on-failure";
@ -37,8 +37,11 @@ in
};
users.extraUsers.sonarr = {
uid = config.ids.uids.sonarr;
home = "/var/lib/sonarr";
group = "sonarr";
};
users.extraGroups.sonarr.gid = config.ids.gids.sonarr;
};
}

View file

@ -128,7 +128,7 @@ let
certBits = cfg.pki.auto.bits;
clientExpiration = cfg.pki.auto.expiration.client;
crlExpiration = cfg.pki.auto.expiration.crl;
isAutoConfig = if needToCreateCA then "True" else "False";
isAutoConfig = boolToString needToCreateCA;
}}" > "$out/main.py"
cat > "$out/setup.py" <<EOF
from setuptools import setup

View file

@ -8,7 +8,7 @@ let
conf = pkgs.writeText "collectd.conf" ''
BaseDir "${cfg.dataDir}"
PIDFile "${cfg.pidFile}"
AutoLoadPlugin ${if cfg.autoLoadPlugin then "true" else "false"}
AutoLoadPlugin ${boolToString cfg.autoLoadPlugin}
Hostname "${config.networking.hostName}"
LoadPlugin syslog

View file

@ -5,8 +5,6 @@ with lib;
let
cfg = config.services.grafana;
b2s = val: if val then "true" else "false";
envOptions = {
PATHS_DATA = cfg.dataDir;
PATHS_PLUGINS = "${cfg.dataDir}/plugins";
@ -32,16 +30,16 @@ let
SECURITY_ADMIN_PASSWORD = cfg.security.adminPassword;
SECURITY_SECRET_KEY = cfg.security.secretKey;
USERS_ALLOW_SIGN_UP = b2s cfg.users.allowSignUp;
USERS_ALLOW_ORG_CREATE = b2s cfg.users.allowOrgCreate;
USERS_AUTO_ASSIGN_ORG = b2s cfg.users.autoAssignOrg;
USERS_ALLOW_SIGN_UP = boolToString cfg.users.allowSignUp;
USERS_ALLOW_ORG_CREATE = boolToString cfg.users.allowOrgCreate;
USERS_AUTO_ASSIGN_ORG = boolToString cfg.users.autoAssignOrg;
USERS_AUTO_ASSIGN_ORG_ROLE = cfg.users.autoAssignOrgRole;
AUTH_ANONYMOUS_ENABLED = b2s cfg.auth.anonymous.enable;
AUTH_ANONYMOUS_ENABLED = boolToString cfg.auth.anonymous.enable;
AUTH_ANONYMOUS_ORG_NAME = cfg.auth.anonymous.org_name;
AUTH_ANONYMOUS_ORG_ROLE = cfg.auth.anonymous.org_role;
ANALYTICS_REPORTING_ENABLED = b2s cfg.analytics.reporting.enable;
ANALYTICS_REPORTING_ENABLED = boolToString cfg.analytics.reporting.enable;
} // cfg.extraOptions;
in {

View file

@ -4,7 +4,7 @@ with lib;
let
cfg = config.services.graphite;
writeTextOrNull = f: t: if t == null then null else pkgs.writeTextDir f t;
writeTextOrNull = f: t: mapNullable (pkgs.writeTextDir f) t;
dataDir = cfg.dataDir;
@ -400,7 +400,8 @@ in {
mkdir -p ${cfg.dataDir}/whisper
chmod 0700 ${cfg.dataDir}/whisper
chown -R graphite:graphite ${cfg.dataDir}
chown graphite:graphite ${cfg.dataDir}
chown graphite:graphite ${cfg.dataDir}/whisper
'';
};
})
@ -487,9 +488,11 @@ in {
# create index
${pkgs.python27Packages.graphite_web}/bin/build-index.sh
touch ${dataDir}/db-created
chown graphite:graphite ${cfg.dataDir}
chown graphite:graphite ${cfg.dataDir}/whisper
chown -R graphite:graphite ${cfg.dataDir}/log
chown -R graphite:graphite ${cfg.dataDir}
touch ${dataDir}/db-created
fi
'';
};
@ -526,9 +529,10 @@ in {
mkdir -p ${dataDir}/cache/
chmod 0700 ${dataDir}/cache/
touch ${dataDir}/db-created
chown graphite:graphite ${cfg.dataDir}
chown -R graphite:graphite ${cfg.dataDir}/cache
chown -R graphite:graphite ${cfg.dataDir}
touch ${dataDir}/db-created
fi
'';
};
@ -549,7 +553,7 @@ in {
preStart = ''
if ! test -e ${dataDir}/db-created; then
mkdir -p ${dataDir}
chown -R graphite:graphite ${dataDir}
chown graphite:graphite ${dataDir}
fi
'';
};

View file

@ -34,7 +34,7 @@ let
cap=$(sed -nr 's/.*#%#\s+capabilities\s*=\s*(.+)/\1/p' $file)
wrapProgram $file \
--set PATH "/run/wrappers/bin:/run/current-system/sw/bin:/run/current-system/sw/bin" \
--set PATH "/run/wrappers/bin:/run/current-system/sw/bin" \
--set MUNIN_LIBDIR "${pkgs.munin}/lib" \
--set MUNIN_PLUGSTATE "/var/run/munin"
@ -184,7 +184,7 @@ in
mkdir -p /etc/munin/plugins
rm -rf /etc/munin/plugins/*
PATH="/run/wrappers/bin:/run/current-system/sw/bin:/run/current-system/sw/bin" ${pkgs.munin}/sbin/munin-node-configure --shell --families contrib,auto,manual --config ${nodeConf} --libdir=${muninPlugins} --servicedir=/etc/munin/plugins 2>/dev/null | ${pkgs.bash}/bin/bash
PATH="/run/wrappers/bin:/run/current-system/sw/bin" ${pkgs.munin}/sbin/munin-node-configure --shell --families contrib,auto,manual --config ${nodeConf} --libdir=${muninPlugins} --servicedir=/etc/munin/plugins 2>/dev/null | ${pkgs.bash}/bin/bash
'';
serviceConfig = {
ExecStart = "${pkgs.munin}/sbin/munin-node --config ${nodeConf} --servicedir /etc/munin/plugins/";

View file

@@ -116,6 +116,13 @@ let
The URL scheme with which to fetch metrics from targets.
'';
};
params = mkOption {
type = types.attrsOf (types.listOf types.str);
default = {};
description = ''
Optional HTTP URL parameters.
'';
};
basic_auth = mkOption {
type = types.nullOr (types.submodule {
options = {
@@ -134,7 +141,7 @@ let
};
});
default = null;
apply = x: if x == null then null else _filter x;
apply = x: mapNullable _filter x;
description = ''
Optional http login credentials for metrics scraping.
'';
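 
The null-handling hunks follow a similar pattern: open-coded checks such as if t == null then null else ... (in writeTextOrNull) and if x == null then null else _filter x (in the apply above) become lib.mapNullable, which applies a function only to non-null values. A minimal sketch, again assuming nixpkgs' lib is importable as <nixpkgs/lib>:

let
  lib = import <nixpkgs/lib>;
in
{
  # lib.mapNullable f v  ==  if v == null then null else f v
  applied   = lib.mapNullable (x: x * 2) 21;    # => 42
  untouched = lib.mapNullable (x: x * 2) null;  # => null
}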

View file

@@ -9,7 +9,7 @@ let
extmapFile = pkgs.writeText "extmap.conf" cfg.extmap;
afpToString = x: if builtins.typeOf x == "bool"
then (if x then "true" else "false")
then boolToString x
else toString x;
volumeConfig = name:

View file

@@ -5,7 +5,7 @@ with lib;
let
smbToString = x: if builtins.typeOf x == "bool"
then (if x then "true" else "false")
then boolToString x
else toString x;
cfg = config.services.samba;

View file

@@ -290,14 +290,14 @@ in
shares.total = ${toString settings.client.shares.total}
[storage]
enabled = ${if settings.storage.enable then "true" else "false"}
enabled = ${boolToString settings.storage.enable}
reserved_space = ${settings.storage.reservedSpace}
[helper]
enabled = ${if settings.helper.enable then "true" else "false"}
enabled = ${boolToString settings.helper.enable}
[sftpd]
enabled = ${if settings.sftpd.enable then "true" else "false"}
enabled = ${boolToString settings.sftpd.enable}
${optionalString (settings.sftpd.port != null)
"port = ${toString settings.sftpd.port}"}
${optionalString (settings.sftpd.hostPublicKeyFile != null)

View file

@@ -5,7 +5,6 @@ with lib;
let
cfg = config.services.aiccu;
showBool = b: if b then "true" else "false";
notNull = a: ! isNull a;
configFile = pkgs.writeText "aiccu.conf" ''
${if notNull cfg.username then "username " + cfg.username else ""}
@@ -13,16 +12,16 @@ let
protocol ${cfg.protocol}
server ${cfg.server}
ipv6_interface ${cfg.interfaceName}
verbose ${showBool cfg.verbose}
verbose ${boolToString cfg.verbose}
daemonize true
automatic ${showBool cfg.automatic}
requiretls ${showBool cfg.requireTLS}
automatic ${boolToString cfg.automatic}
requiretls ${boolToString cfg.requireTLS}
pidfile ${cfg.pidFile}
defaultroute ${showBool cfg.defaultRoute}
defaultroute ${boolToString cfg.defaultRoute}
${if notNull cfg.setupScript then cfg.setupScript else ""}
makebeats ${showBool cfg.makeHeartBeats}
noconfigure ${showBool cfg.noConfigure}
behindnat ${showBool cfg.behindNAT}
makebeats ${boolToString cfg.makeHeartBeats}
noconfigure ${boolToString cfg.noConfigure}
behindnat ${boolToString cfg.behindNAT}
${if cfg.localIPv4Override then "local_ipv4_override" else ""}
'';

View file

@@ -0,0 +1,135 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.services.aria2;
homeDir = "/var/lib/aria2";
settingsDir = "${homeDir}";
sessionFile = "${homeDir}/aria2.session";
downloadDir = "${homeDir}/Downloads";
rangesToStringList = map (x: builtins.toString x.from +"-"+ builtins.toString x.to);
settingsFile = pkgs.writeText "aria2.conf"
''
dir=${cfg.downloadDir}
listen-port=${concatStringsSep "," (rangesToStringList cfg.listenPortRange)}
rpc-listen-port=${toString cfg.rpcListenPort}
rpc-secret=${cfg.rpcSecret}
'';
in
{
options = {
services.aria2 = {
enable = mkOption {
type = types.bool;
default = false;
description = ''
Whether or not to enable the headless Aria2 daemon service.
The aria2 daemon can be controlled via its RPC interface using
one of the many available web UIs (http://localhost:6800/ by default).
Targets are downloaded to ${downloadDir} by default and are
accessible to users in the "aria2" group.
'';
};
openPorts = mkOption {
type = types.bool;
default = false;
description = ''
Open the listen and RPC ports defined by the listenPortRange and
rpcListenPort options in the firewall.
'';
};
downloadDir = mkOption {
type = types.string;
default = "${downloadDir}";
description = ''
Directory to store downloaded files.
'';
};
listenPortRange = mkOption {
type = types.listOf types.attrs;
default = [ { from = 6881; to = 6999; } ];
description = ''
Set the UDP listening port range used by DHT (IPv4, IPv6) and the UDP tracker.
'';
};
rpcListenPort = mkOption {
type = types.int;
default = 6800;
description = "Specify a port number for JSON-RPC/XML-RPC server to listen to. Possible Values: 1024-65535";
};
rpcSecret = mkOption {
type = types.string;
default = "aria2rpc";
description = ''
Set the RPC secret authorization token.
See https://aria2.github.io/manual/en/html/aria2c.html#rpc-auth for how this option value is used.
'';
};
extraArguments = mkOption {
type = types.string;
example = "--rpc-listen-all --remote-time=true";
default = "";
description = ''
Additional arguments to be passed to Aria2.
'';
};
};
};
config = mkIf cfg.enable {
# Need to open ports for proper functioning
networking.firewall = mkIf cfg.openPorts {
allowedUDPPortRanges = config.services.aria2.listenPortRange;
allowedTCPPorts = [ config.services.aria2.rpcListenPort ];
};
users.extraUsers.aria2 = {
group = "aria2";
uid = config.ids.uids.aria2;
description = "aria2 user";
home = homeDir;
createHome = false;
};
users.extraGroups.aria2.gid = config.ids.gids.aria2;
systemd.services.aria2 = {
description = "aria2 Service";
after = [ "local-fs.target" "network.target" ];
wantedBy = [ "multi-user.target" ];
preStart = ''
mkdir -m 0770 -p "${homeDir}"
chown aria2:aria2 "${homeDir}"
if [[ ! -d "${config.services.aria2.downloadDir}" ]]
then
mkdir -m 0770 -p "${config.services.aria2.downloadDir}"
chown aria2:aria2 "${config.services.aria2.downloadDir}"
fi
if [[ ! -e "${sessionFile}" ]]
then
touch "${sessionFile}"
chown aria2:aria2 "${sessionFile}"
fi
cp -f "${settingsFile}" "${settingsDir}/aria2.conf"
'';
serviceConfig = {
Restart = "on-abort";
ExecStart = "${pkgs.aria2}/bin/aria2c --enable-rpc --conf-path=${settingsDir}/aria2.conf ${config.services.aria2.extraArguments} --save-session=${sessionFile}";
ExecReload = "${pkgs.coreutils}/bin/kill -HUP $MAINPID";
User = "aria2";
Group = "aria2";
PermissionsStartOnly = true;
};
};
};
}
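 
For context, here is a minimal, hypothetical configuration.nix snippet enabling the new aria2 module defined above; the secret and port values are placeholders for illustration and are not part of this commit:

{
  services.aria2 = {
    enable = true;
    openPorts = true;          # open listenPortRange and rpcListenPort in the firewall
    rpcListenPort = 6800;      # the default shown above
    rpcSecret = "change-me";   # placeholder secret
  };
}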

View file

@@ -33,6 +33,9 @@ let
publish-hinfo=${yesNo publish.hinfo}
publish-workstation=${yesNo publish.workstation}
publish-domain=${yesNo publish.domain}
[reflector]
enable-reflector=${yesNo reflector}
'';
in
@@ -113,6 +116,11 @@ in
description = ''Whether to enable wide-area service discovery.'';
};
reflector = mkOption {
default = false;
description = ''Reflect incoming mDNS requests to all allowed network interfaces.'';
};
publish = {
enable = mkOption {
default = false;

View file

@@ -9,7 +9,6 @@ let
listenAddr = cfg.httpListenAddr + ":" + (toString cfg.httpListenPort);
boolStr = x: if x then "true" else "false";
optionalEmptyStr = b: v: optionalString (b != "") v;
webUIConfig = optionalString cfg.enableWebUI
@@ -31,7 +30,7 @@ let
sharedFoldersRecord =
concatStringsSep "," (map (entry:
let helper = attr: v:
if (entry ? attr) then boolStr entry.attr else boolStr v;
if (entry ? attr) then boolToString entry.attr else boolToString v;
in
''
{
@@ -65,11 +64,11 @@ let
"listening_port": ${toString cfg.listeningPort},
"use_gui": false,
"check_for_updates": ${boolStr cfg.checkForUpdates},
"use_upnp": ${boolStr cfg.useUpnp},
"check_for_updates": ${boolToString cfg.checkForUpdates},
"use_upnp": ${boolToString cfg.useUpnp},
"download_limit": ${toString cfg.downloadLimit},
"upload_limit": ${toString cfg.uploadLimit},
"lan_encrypt_data": ${boolStr cfg.encryptLAN},
"lan_encrypt_data": ${boolToString cfg.encryptLAN},
${webUIConfig}
${sharedFoldersConfig}

View file

@@ -71,8 +71,7 @@ let
# anything ever again ("couldn't resolve ..., giving up on
# it"), so we silently lose time synchronisation. This also
# applies to openntpd.
${config.systemd.package}/bin/systemctl try-restart ntpd.service
${config.systemd.package}/bin/systemctl try-restart openntpd.service
${config.systemd.package}/bin/systemctl try-reload-or-restart ntpd.service openntpd.service || true
fi
${cfg.runHook}

View file

@@ -19,7 +19,7 @@ let
[syncserver]
public_url = ${cfg.publicUrl}
${optionalString (cfg.sqlUri != "") "sqluri = ${cfg.sqlUri}"}
allow_new_users = ${if cfg.allowNewUsers then "true" else "false"}
allow_new_users = ${boolToString cfg.allowNewUsers}
[browserid]
backend = tokenserver.verifiers.LocalVerifier

View file

@@ -8,10 +8,6 @@ let
homeDir = "/var/lib/i2pd";
extip = "EXTIP=\$(${pkgs.curl.bin}/bin/curl -sLf \"http://jsonip.com\" | ${pkgs.gawk}/bin/awk -F'\"' '{print $4}')";
toYesNo = b: if b then "true" else "false";
mkEndpointOpt = name: addr: port: {
enable = mkEnableOption name;
name = mkOption {
@@ -76,10 +72,10 @@ let
i2pdConf = pkgs.writeText "i2pd.conf"
''
ipv4 = ${toYesNo cfg.enableIPv4}
ipv6 = ${toYesNo cfg.enableIPv6}
notransit = ${toYesNo cfg.notransit}
floodfill = ${toYesNo cfg.floodfill}
ipv4 = ${boolToString cfg.enableIPv4}
ipv6 = ${boolToString cfg.enableIPv6}
notransit = ${boolToString cfg.notransit}
floodfill = ${boolToString cfg.floodfill}
netid = ${toString cfg.netid}
${if isNull cfg.bandwidth then "" else "bandwidth = ${toString cfg.bandwidth}" }
${if isNull cfg.port then "" else "port = ${toString cfg.port}"}
@@ -88,14 +84,14 @@ let
transittunnels = ${toString cfg.limits.transittunnels}
[upnp]
enabled = ${toYesNo cfg.upnp.enable}
enabled = ${boolToString cfg.upnp.enable}
name = ${cfg.upnp.name}
[precomputation]
elgamal = ${toYesNo cfg.precomputation.elgamal}
elgamal = ${boolToString cfg.precomputation.elgamal}
[reseed]
verify = ${toYesNo cfg.reseed.verify}
verify = ${boolToString cfg.reseed.verify}
file = ${cfg.reseed.file}
urls = ${builtins.concatStringsSep "," cfg.reseed.urls}
@@ -107,11 +103,11 @@ let
(proto: let portStr = toString proto.port; in
''
[${proto.name}]
enabled = ${toYesNo proto.enable}
enabled = ${boolToString proto.enable}
address = ${proto.address}
port = ${toString proto.port}
${if proto ? keys then "keys = ${proto.keys}" else ""}
${if proto ? auth then "auth = ${toYesNo proto.auth}" else ""}
${if proto ? auth then "auth = ${boolToString proto.auth}" else ""}
${if proto ? user then "user = ${proto.user}" else ""}
${if proto ? pass then "pass = ${proto.pass}" else ""}
${if proto ? outproxy then "outproxy = ${proto.outproxy}" else ""}
@@ -154,9 +150,8 @@ let
i2pdSh = pkgs.writeScriptBin "i2pd" ''
#!/bin/sh
${if isNull cfg.extIp then extip else ""}
${pkgs.i2pd}/bin/i2pd \
--host=${if isNull cfg.extIp then "$EXTIP" else cfg.extIp} \
${if isNull cfg.extIp then "" else "--host="+cfg.extIp} \
--conf=${i2pdConf} \
--tunconf=${i2pdTunnelConf}
'';

View file

@@ -12,7 +12,7 @@ let
substFiles = [ "=>/conf" ./ircd.conf ];
inherit (pkgs) ircdHybrid coreutils su iproute gnugrep procps;
ipv6Enabled = if config.networking.enableIPv6 then "true" else "false";
ipv6Enabled = boolToString config.networking.enableIPv6;
inherit (cfg) serverName sid description adminEmail
extraPort;

View file

@@ -0,0 +1,245 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.services.keepalived;
keepalivedConf = pkgs.writeText "keepalived.conf" ''
global_defs {
${snmpGlobalDefs}
${cfg.extraGlobalDefs}
}
${vrrpInstancesStr}
${cfg.extraConfig}
'';
snmpGlobalDefs = with cfg.snmp; optionalString enable (
optionalString (socket != null) "snmp_socket ${socket}\n"
+ optionalString enableKeepalived "enable_snmp_keepalived\n"
+ optionalString enableChecker "enable_snmp_checker\n"
+ optionalString enableRfc "enable_snmp_rfc\n"
+ optionalString enableRfcV2 "enable_snmp_rfcv2\n"
+ optionalString enableRfcV3 "enable_snmp_rfcv3\n"
+ optionalString enableTraps "enable_traps"
);
vrrpInstancesStr = concatStringsSep "\n" (map (i:
''
vrrp_instance ${i.name} {
interface ${i.interface}
state ${i.state}
virtual_router_id ${toString i.virtualRouterId}
priority ${toString i.priority}
${optionalString i.noPreempt "nopreempt"}
${optionalString i.useVmac (
"use_vmac" + optionalString (i.vmacInterface != null) " ${i.vmacInterface}"
)}
${optionalString i.vmacXmitBase "vmac_xmit_base"}
${optionalString (i.unicastSrcIp != null) "unicast_src_ip ${i.unicastSrcIp}"}
unicast_peer {
${concatStringsSep "\n" i.unicastPeers}
}
virtual_ipaddress {
${concatMapStringsSep "\n" virtualIpLine i.virtualIps}
}
${i.extraConfig}
}
''
) vrrpInstances);
virtualIpLine = (ip:
ip.addr
+ optionalString (notNullOrEmpty ip.brd) " brd ${ip.brd}"
+ optionalString (notNullOrEmpty ip.dev) " dev ${ip.dev}"
+ optionalString (notNullOrEmpty ip.scope) " scope ${ip.scope}"
+ optionalString (notNullOrEmpty ip.label) " label ${ip.label}"
);
notNullOrEmpty = s: !(s == null || s == "");
vrrpInstances = mapAttrsToList (iName: iConfig:
{
name = iName;
} // iConfig
) cfg.vrrpInstances;
vrrpInstanceAssertions = i: [
{ assertion = i.interface != "";
message = "services.keepalived.vrrpInstances.${i.name}.interface option cannot be empty.";
}
{ assertion = i.virtualRouterId >= 0 && i.virtualRouterId <= 255;
message = "services.keepalived.vrrpInstances.${i.name}.virtualRouterId must be an integer between 0..255.";
}
{ assertion = i.priority >= 0 && i.priority <= 255;
message = "services.keepalived.vrrpInstances.${i.name}.priority must be an integer between 0..255.";
}
{ assertion = i.vmacInterface == null || i.useVmac;
message = "services.keepalived.vrrpInstances.${i.name}.vmacInterface has no effect when services.keepalived.vrrpInstances.${i.name}.useVmac is not set.";
}
{ assertion = !i.vmacXmitBase || i.useVmac;
message = "services.keepalived.vrrpInstances.${i.name}.vmacXmitBase has no effect when services.keepalived.vrrpInstances.${i.name}.useVmac is not set.";
}
] ++ flatten (map (virtualIpAssertions i.name) i.virtualIps);
virtualIpAssertions = vrrpName: ip: [
{ assertion = ip.addr != "";
message = "The 'addr' option for an services.keepalived.vrrpInstances.${vrrpName}.virtualIps entry cannot be empty.";
}
];
pidFile = "/run/keepalived.pid";
in
{
options = {
services.keepalived = {
enable = mkOption {
type = types.bool;
default = false;
description = ''
Whether to enable Keepalived.
'';
};
snmp = {
enable = mkOption {
type = types.bool;
default = false;
description = ''
Whether to enable the built-in AgentX subagent.
'';
};
socket = mkOption {
type = types.nullOr types.str;
default = null;
description = ''
Socket to use for connecting to the SNMP master agent. If this value is
set to null, keepalived's default will be used, which is
unix:/var/agentx/master, unless a network namespace is used, in which
case the default is udp:localhost:705.
'';
};
enableKeepalived = mkOption {
type = types.bool;
default = false;
description = ''
Enable SNMP handling of vrrp element of KEEPALIVED MIB.
'';
};
enableChecker = mkOption {
type = types.bool;
default = false;
description = ''
Enable SNMP handling of checker element of KEEPALIVED MIB.
'';
};
enableRfc = mkOption {
type = types.bool;
default = false;
description = ''
Enable SNMP handling of RFC2787 and RFC6527 VRRP MIBs.
'';
};
enableRfcV2 = mkOption {
type = types.bool;
default = false;
description = ''
Enable SNMP handling of RFC2787 VRRP MIB.
'';
};
enableRfcV3 = mkOption {
type = types.bool;
default = false;
description = ''
Enable SNMP handling of RFC6527 VRRP MIB.
'';
};
enableTraps = mkOption {
type = types.bool;
default = false;
description = ''
Enable SNMP traps.
'';
};
};
vrrpInstances = mkOption {
type = types.attrsOf (types.submodule (import ./vrrp-options.nix {
inherit lib;
}));
default = {};
description = "Declarative vhost config";
};
extraGlobalDefs = mkOption {
type = types.lines;
default = "";
description = ''
Extra lines to be added verbatim to the 'global_defs' block of the
configuration file
'';
};
extraConfig = mkOption {
type = types.lines;
default = "";
description = ''
Extra lines to be added verbatim to the configuration file.
'';
};
};
};
config = mkIf cfg.enable {
assertions = flatten (map vrrpInstanceAssertions vrrpInstances);
systemd.timers.keepalived-boot-delay = {
description = "Keepalive Daemon delay to avoid instant transition to MASTER state";
after = [ "network.target" "network-online.target" "syslog.target" ];
requires = [ "network-online.target" ];
wantedBy = [ "multi-user.target" ];
timerConfig = {
OnActiveSec = "5s";
Unit = "keepalived.service";
};
};
systemd.services.keepalived = {
description = "Keepalive Daemon (LVS and VRRP)";
after = [ "network.target" "network-online.target" "syslog.target" ];
wants = [ "network-online.target" ];
serviceConfig = {
Type = "forking";
PIDFile = pidFile;
KillMode = "process";
ExecStart = "${pkgs.keepalived}/sbin/keepalived"
+ " -f ${keepalivedConf}"
+ " -p ${pidFile}"
+ optionalString cfg.snmp.enable " --snmp";
ExecReload = "${pkgs.coreutils}/bin/kill -HUP $MAINPID";
Restart = "always";
RestartSec = "1s";
};
};
};
}

View file

@@ -0,0 +1,50 @@
{ lib } :
with lib;
{
options = {
addr = mkOption {
type = types.str;
description = ''
IP address, optionally with a netmask: IPADDR[/MASK]
'';
};
brd = mkOption {
type = types.nullOr types.str;
default = null;
description = ''
The broadcast address on the interface.
'';
};
dev = mkOption {
type = types.nullOr types.str;
default = null;
description = ''
The name of the device to add the address to.
'';
};
scope = mkOption {
type = types.nullOr types.str;
default = null;
description = ''
The scope of the area where this address is valid.
'';
};
label = mkOption {
type = types.nullOr types.str;
default = null;
description = ''
Each address may be tagged with a label string. In order to preserve
compatibility with Linux-2.0 net aliases, this string must coincide with
the name of the device or must be prefixed with the device name followed
by a colon.
'';
};
};
}

View file

@@ -0,0 +1,121 @@
{ lib } :
with lib;
{
options = {
interface = mkOption {
type = types.str;
description = ''
Interface for inside_network, bound by vrrp.
'';
};
state = mkOption {
type = types.enum [ "MASTER" "BACKUP" ];
default = "BACKUP";
description = ''
Initial state. As soon as the other machine(s) come up, an election will
be held and the machine with the highest "priority" will become MASTER.
So the entry here doesn't matter a whole lot.
'';
};
virtualRouterId = mkOption {
type = types.int;
description = ''
Arbitrary unique number 0..255. Used to differentiate multiple instances
of vrrpd running on the same NIC (and hence same socket).
'';
};
priority = mkOption {
type = types.int;
default = 100;
description = ''
For electing MASTER, the highest priority wins. To be MASTER, make this
50 more than on the other machines.
'';
};
noPreempt = mkOption {
type = types.bool;
default = false;
description = ''
VRRP will normally preempt a lower priority machine when a higher
priority machine comes online. "nopreempt" allows the lower priority
machine to maintain the master role, even when a higher priority machine
comes back online. NOTE: For this to work, the initial state of this
entry must be BACKUP.
'';
};
useVmac = mkOption {
type = types.bool;
default = false;
description = ''
Use VRRP Virtual MAC.
'';
};
vmacInterface = mkOption {
type = types.nullOr types.str;
default = null;
description = ''
Name of the vmac interface to use. keepalived will come up with a name
if you don't specify one.
'';
};
vmacXmitBase = mkOption {
type = types.bool;
default = false;
description = ''
Send/Recv VRRP messages from base interface instead of VMAC interface.
'';
};
unicastSrcIp = mkOption {
type = types.nullOr types.str;
default = null;
description = ''
The default IP for binding vrrpd is the primary IP on the interface. If
you want to hide the location of vrrpd, use this IP as the src_addr for
unicast VRRP packets.
'';
};
unicastPeers = mkOption {
type = types.listOf types.str;
default = [];
description = ''
Do not send VRRP adverts over the VRRP multicast group. Instead, send
adverts to the following list of IP addresses using unicast. This makes
it possible to use the VRRP FSM and its features in networking
environments where multicast is not supported. The IP addresses
specified can be IPv4 as well as IPv6.
'';
};
virtualIps = mkOption {
type = types.listOf (types.submodule (import ./virtual-ip-options.nix {
inherit lib;
}));
default = [];
example = literalExample ''
TODO: Example
'';
description = "Declarative vhost config";
};
extraConfig = mkOption {
type = types.lines;
default = "";
description = ''
Extra lines to be added verbatim to the vrrp_instance section.
'';
};
};
}
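 
Likewise, a minimal, hypothetical configuration.nix snippet wiring the new keepalived module to a single VRRP instance and virtual IP using the options defined above; the interface name, router id, priority, and address are illustrative only:

{
  services.keepalived = {
    enable = true;
    vrrpInstances.VI_1 = {
      interface = "eth0";                 # NIC carrying the VRRP traffic
      state = "MASTER";
      virtualRouterId = 51;               # unique per instance, 0..255
      priority = 150;                     # highest priority wins the election
      virtualIps = [ { addr = "192.168.1.254/24"; dev = "eth0"; } ];
    };
  };
}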

View file

@@ -16,7 +16,7 @@ let
pid_file /run/mosquitto/pid
acl_file ${aclFile}
persistence true
allow_anonymous ${if cfg.allowAnonymous then "true" else "false"}
allow_anonymous ${boolToString cfg.allowAnonymous}
bind_address ${cfg.host}
port ${toString cfg.port}
${listenerConf}

Some files were not shown because too many files have changed in this diff.