Merge branch 'master' into update/flutter

This commit is contained in:
kolaente 2020-07-05 17:12:46 +02:00
commit 0cf41434cd
No known key found for this signature in database
GPG key ID: F40E70337AB24C9B
1801 changed files with 35431 additions and 21149 deletions

3
.github/CODEOWNERS vendored
View file

@ -193,3 +193,6 @@
/nixos/modules/virtualisation/cri-o.nix @NixOS/podman
/nixos/modules/virtualisation/podman.nix @NixOS/podman
/nixos/tests/podman.nix @NixOS/podman
# Blockchains
/pkgs/applications/blockchains @mmahut

32
.github/stale.yml vendored
View file

@ -9,17 +9,33 @@ exemptLabels:
# Label to use when marking an issue as stale
staleLabel: "2.status: stale"
# Comment to post when marking an issue as stale. Set to `false` to disable
markComment: |
Thank you for your contributions.
pulls:
markComment: |
Hello, I'm a bot and I thank you in the name of the community for your contributions.
This has been automatically marked as stale because it has had no activity for 180 days.
Nixpkgs is a busy repository, and unfortunately sometimes PRs get left behind for too long. Nevertheless, we'd like to help committers reach the PRs that are still important. This PR has had no activity for 180 days, and so I marked it as stale, but you can rest assured it will never be closed by a non-human.
If this is still important to you, we ask that you leave a comment below. Your comment can be as simple as "still important to me". This lets people see that at least one person still cares about this. Someone will have to do this at most twice a year if there is no other activity.
If this is still important to you and you'd like to remove the stale label, we ask that you leave a comment. Your comment can be as simple as "still important to me". But there's a bit more you can do:
Here are suggestions that might help resolve this more quickly:
If you received an approval by an unprivileged maintainer and you are just waiting for a merge, you can @ mention someone with merge permissions and ask them to help. You might be able to find someone relevant by using [Git blame](https://git-scm.com/docs/git-blame) on the relevant files, or via [GitHub's web interface](https://docs.github.com/en/github/managing-files-in-a-repository/tracking-changes-in-a-file). You can see if someone's a member of the [nixpkgs-committers](https://github.com/orgs/NixOS/teams/nixpkgs-committers) team, by hovering with the mouse over their username on the web interface, or by searching them directly on [the list](https://github.com/orgs/NixOS/teams/nixpkgs-committers).
If your PR wasn't reviewed at all, it might help to find someone who's perhaps a user of the package or module you are changing, or alternatively, ask once more for a review by the maintainer of the package/module this is about. If you don't know any, you can use [Git blame](https://git-scm.com/docs/git-blame) on the relevant files, or [GitHub's web interface](https://docs.github.com/en/github/managing-files-in-a-repository/tracking-changes-in-a-file) to find someone who touched the relevant files in the past.
If your PR has had reviews and nevertheless got stale, make sure you've responded to all of the reviewer's requests / questions. Usually when PR authors show responsibility and dedication, reviewers (privileged or not) show dedication as well. If you've pushed a change, it's possible the reviewer wasn't notified about your push via email, so you can always [officially request them for a review](https://docs.github.com/en/github/collaborating-with-issues-and-pull-requests/requesting-a-pull-request-review), or just @ mention them and say you've addressed their comments.
Lastly, you can always ask for help at [our Discourse Forum](https://discourse.nixos.org/), or more specifically, [at this thread](https://discourse.nixos.org/t/prs-in-distress/3604) or at [#nixos' IRC channel](https://webchat.freenode.net/#nixos).
issues:
markComment: |
Hello, I'm a bot and I thank you in the name of the community for opening this issue.
To help our human contributors focus on the most-relevant reports, I check up on old issues to see if they're still relevant. This issue has had no activity for 180 days, and so I marked it as stale, but you can rest assured it will never be closed by a non-human.
The community would appreciate your effort in checking if the issue is still valid. If it isn't, please close it.
If the issue persists, and you'd like to remove the stale label, you simply need to leave a comment. Your comment can be as simple as "still important to me". If you'd like it to get more attention, you can ask for help by searching for maintainers and people that previously touched related code and @ mention them in a comment. You can use [Git blame](https://git-scm.com/docs/git-blame) or [GitHub's web interface](https://docs.github.com/en/github/managing-files-in-a-repository/tracking-changes-in-a-file) on the relevant files to find them.
Lastly, you can always ask for help at [our Discourse Forum](https://discourse.nixos.org/) or at [#nixos' IRC channel](https://webchat.freenode.net/#nixos).
1. Search for maintainers and people that previously touched the related code and @ mention them in a comment.
2. Ask on the [NixOS Discourse](https://discourse.nixos.org/).
3. Ask on the [#nixos channel](irc://irc.freenode.net/#nixos) on [irc.freenode.net](https://freenode.net).
# Comment to post when closing a stale issue. Set to `false` to disable
closeComment: false

View file

@ -34,6 +34,7 @@ the main ones:
* [Nix](https://github.com/NixOS/nix) - the purely functional package manager
* [NixOps](https://github.com/NixOS/nixops) - the tool to remotely deploy NixOS machines
* [nixos-hardware](https://github.com/NixOS/nixos-hardware) - NixOS profiles to optimize settings for different hardware
* [Nix RFCs](https://github.com/NixOS/rfcs) - the formal process for making substantial changes to the community
* [NixOS homepage](https://github.com/NixOS/nixos-homepage) - the [NixOS.org](https://nixos.org) website
* [hydra](https://github.com/NixOS/hydra) - our continuous integration system

View file

@ -166,7 +166,7 @@ hello latest de2bf4786de6 About a minute ago 25.2MB
<title>buildLayeredImage</title>
<para>
Create a Docker image with many of the store paths being on their own layer to improve sharing between images.
Create a Docker image with many of the store paths being on their own layer to improve sharing between images. The image is realized into the Nix store as a gzipped tarball. Depending on the intended usage, many users might prefer to use <function>streamLayeredImage</function> instead, which this function uses internally.
</para>
<variablelist>
@ -327,6 +327,27 @@ pkgs.dockerTools.buildLayeredImage {
</section>
</section>
<section xml:id="ssec-pkgs-dockerTools-streamLayeredImage">
<title>streamLayeredImage</title>
<para>
Builds a script which, when run, will stream an uncompressed tarball of a Docker image to stdout. The arguments to this function are as for <function>buildLayeredImage</function>. This method of constructing an image does not realize the image into the Nix store, so it saves on IO and disk/cache space, particularly with large images.
</para>
<para>
The image produced by running the output script can be piped directly into <command>docker load</command>, to load it into the local docker daemon:
<screen><![CDATA[
$(nix-build) | docker load
]]></screen>
</para>
<para>
Alternatively, the image can be piped via <command>gzip</command> into <command>skopeo</command>, e.g. to copy it into a registry:
<screen><![CDATA[
$(nix-build) | gzip --fast | skopeo copy docker-archive:/dev/stdin docker://some_docker_registry/myimage:tag
]]></screen>
</para>
</section>
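As a sketch of the above, a streamed image is defined with the same attributes as buildLayeredImage; the name, tag, and contents here are illustrative only:

  pkgs.dockerTools.streamLayeredImage {
    name = "hello";
    tag = "latest";
    contents = [ pkgs.hello ];  # example contents; any set of store paths works
  }

Running nix-build on this attribute produces the streaming script rather than an image tarball.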
<section xml:id="ssec-pkgs-dockerTools-fetchFromRegistry">
<title>pullImage</title>

View file

@ -67,13 +67,13 @@ A derivation can then be written using `agdaPackages.mkDerivation`. This has sim
+ `libraryName` should be the name that appears in the `*.agda-lib` file, defaulting to `pname`.
+ `libraryFile` should be the file name of the `*.agda-lib` file, defaulting to `${libraryName}.agda-lib`.
### Build phase
### Building Agda packages
The default build phase for `agdaPackages.mkDerivation` simply runs `agda` on the `Everything.agda` file.
If something else is needed to build the package (e.g. `make`) then the `buildPhase` should be overridden.
Additionally, a `preBuild` or `configurePhase` can be used if there are steps that need to be done prior to checking the `Everything.agda` file.
`agda` and the Agda libraries contained in `buildInputs` are made available during the build phase.
### Install phase
### Installing Agda packages
The default install phase copies agda source files, agda interface files (`*.agdai`) and `*.agda-lib` files to the output directory.
This can be overridden.
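As a sketch of a package overriding the default phases, where the name, source, and dependency below are purely illustrative:

  agdaPackages.mkDerivation {
    pname = "my-agda-lib";   # hypothetical library name
    version = "1.0";
    src = ./.;
    buildInputs = [ agdaPackages.standard-library ];  # Agda libraries available while building
    buildPhase = "make";     # replaces the default `agda Everything.agda` invocation
  }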

View file

@ -85,19 +85,19 @@
<title>Installing packages on unsupported systems</title>
<para>
There are also two ways to try compiling a package which has been marked as unsuported for the given system.
There are also two ways to try compiling a package which has been marked as unsupported for the given system.
</para>
<itemizedlist>
<listitem>
<para>
For allowing the build of a broken package once, you can use an environment variable for a single invocation of the nix tools:
For allowing the build of an unsupported package once, you can use an environment variable for a single invocation of the nix tools:
<programlisting>$ export NIXPKGS_ALLOW_UNSUPPORTED_SYSTEM=1</programlisting>
</para>
</listitem>
<listitem>
<para>
For permanently allowing broken packages to be built, you may add <literal>allowUnsupportedSystem = true;</literal> to your user's configuration file, like this:
For permanently allowing unsupported packages to be built, you may add <literal>allowUnsupportedSystem = true;</literal> to your user's configuration file, like this:
<programlisting>
{
allowUnsupportedSystem = true;

View file

@ -469,6 +469,7 @@ rec {
getBin = getOutput "bin";
getLib = getOutput "lib";
getDev = getOutput "dev";
getMan = getOutput "man";
/* Pick the outputs of packages to place in buildInputs */
chooseDevOutputs = drvs: builtins.map getDev drvs;
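A brief sketch of how the new helper behaves alongside its siblings; the packages are illustrative, and each helper falls back to the derivation itself when the requested output does not exist:

  lib.getMan pkgs.man-db    # the "man" output of man-db, if it has one
  lib.getDev pkgs.openssl   # the "dev" output of openssl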

View file

@ -77,7 +77,7 @@ let
genAttrs isDerivation toDerivation optionalAttrs
zipAttrsWithNames zipAttrsWith zipAttrs recursiveUpdateUntil
recursiveUpdate matchAttrs overrideExisting getOutput getBin
getLib getDev chooseDevOutputs zipWithNames zip
getLib getDev getMan chooseDevOutputs zipWithNames zip
recurseIntoAttrs dontRecurseIntoAttrs;
inherit (lists) singleton forEach foldr fold foldl foldl' imap0 imap1
concatMap flatten remove findSingle findFirst any all count

View file

@ -95,6 +95,11 @@ lib.mapAttrs (n: v: v // { shortName = n; }) {
fullName = ''BSD 2-clause "Simplified" License'';
};
bsd2Patent = spdx {
spdxId = "BSD-2-Clause-Patent";
fullName = ''BSD-2-Clause Plus Patent License'';
};
bsd3 = spdx {
spdxId = "BSD-3-Clause";
fullName = ''BSD 3-clause "New" or "Revised" License'';
@ -462,6 +467,11 @@ lib.mapAttrs (n: v: v // { shortName = n; }) {
fullName = "GNU Lesser General Public License v3.0 or later";
};
lgpllr = spdx {
spdxId = "LGPLLR";
fullName = "Lesser General Public License For Linguistic Resources";
};
libpng = spdx {
spdxId = "Libpng";
fullName = "libpng License";
@ -482,6 +492,11 @@ lib.mapAttrs (n: v: v // { shortName = n; }) {
url = "https://opensource.franz.com/preamble.html";
};
llvm-exception = spdx {
spdxId = "LLVM-exception";
fullName = "LLVM Exception"; # LLVM exceptions to the Apache 2.0 License
};
lppl12 = spdx {
spdxId = "LPPL-1.2";
fullName = "LaTeX Project Public License v1.2";
@ -545,6 +560,12 @@ lib.mapAttrs (n: v: v // { shortName = n; }) {
fullName = "Non-Profit Open Software License 3.0";
};
obsidian = {
fullName = "Obsidian End User Agreement";
url = "https://obsidian.md/eula";
free = false;
};
ocamlpro_nc = {
fullName = "OCamlPro Non Commercial license version 1";
url = "https://alt-ergo.ocamlpro.com/http/alt-ergo-2.2.0/OCamlPro-Non-Commercial-License.pdf";

View file

@ -139,6 +139,12 @@
githubId = 1517066;
name = "Aiken Cairncross";
};
aciceri = {
name = "Andrea Ciceri";
email = "andrea.ciceri@autistici.org";
github = "aciceri";
githubId = 2318843;
};
acowley = {
email = "acowley@gmail.com";
github = "acowley";
@ -277,6 +283,12 @@
githubId = 273837;
name = "Mateusz Czapliński";
};
akamaus = {
email = "dmitryvyal@gmail.com";
github = "akamaus";
githubId = 58955;
name = "Dmitry Vyal";
};
akaWolf = {
email = "akawolf0@gmail.com";
github = "akaWolf";
@ -313,6 +325,12 @@
githubId = 2387841;
name = "Alexander Bakker";
};
alexbiehl = {
email = "alexbiehl@gmail.com";
github = "alexbiehl";
githubId = 1876617;
name = "Alex Biehl";
};
alexchapman = {
email = "alex@farfromthere.net";
github = "AJChapman";
@ -1121,6 +1139,12 @@
githubId = 7716744;
name = "Berno Strik";
};
breakds = {
email = "breakds@gmail.com";
github = "breakds";
githubId = 1111035;
name = "Break Yang";
};
brettlyons = {
email = "blyons@fastmail.com";
github = "brettlyons";
@ -1586,6 +1610,12 @@
githubId = 32609395;
name = "B YI";
};
conradmearns = {
email = "conradmearns+github@pm.me";
github = "ConradMearns";
githubId = 5510514;
name = "Conrad Mearns";
};
couchemar = {
email = "couchemar@yandex.ru";
github = "couchemar";
@ -1764,6 +1794,12 @@
email = "christoph.senjak@googlemail.com";
name = "Christoph-Simon Senjak";
};
david-sawatzke = {
email = "d-nix@sawatzke.dev";
github = "david-sawatzke";
githubId = 11035569;
name = "David Sawatzke";
};
david50407 = {
email = "me@davy.tw";
github = "david50407";
@ -2460,6 +2496,12 @@
githubId = 7432848;
name = "Daniel Albert";
};
eskytthe = {
email = "eskytthe@gmail.com";
github = "eskytthe";
githubId = 2544204;
name = "Erik Skytthe";
};
Esteth = {
email = "adam.copp@gmail.com";
name = "Adam Copp";
@ -3518,6 +3560,12 @@
githubId = 117874;
name = "Jeroen de Haas";
};
jduan = {
name = "Jingjing Duan";
email = "duanjingjing@gmail.com";
github = "jduan";
githubId = 452450;
};
jefdaj = {
email = "jefdaj@gmail.com";
github = "jefdaj";
@ -3674,12 +3722,24 @@
githubId = 41977;
name = "Joachim Fasting";
};
joachimschmidt557 = {
email = "joachim.schmidt557@outlook.com";
github = "joachimschmidt557";
githubId = 28556218;
name = "Joachim Schmidt";
};
joamaki = {
email = "joamaki@gmail.com";
github = "joamaki";
githubId = 1102396;
name = "Jussi Maki";
};
jobojeha = {
email = "jobojeha@jeppener.de";
github = "jobojeha";
githubId = 60272884;
name = "Jonathan Jeppener-Haltenhoff";
};
joelburget = {
email = "joelburget@gmail.com";
github = "joelburget";
@ -3901,6 +3961,12 @@
githubId = 2396926;
name = "Justin Woo";
};
jwatt = {
email = "jwatt@broken.watch";
github = "jjwatt";
githubId = 2397327;
name = "Jesse Wattenbarger";
};
jwiegley = {
email = "johnw@newartisans.com";
github = "jwiegley";
@ -4996,6 +5062,10 @@
github = "mdlayher";
githubId = 1926905;
name = "Matt Layher";
keys = [{
longkeyid = "rsa2048/0x77BFE531397EDE94";
fingerprint = "D709 03C8 0BE9 ACDC 14F0 3BFB 77BF E531 397E DE94";
}];
};
meditans = {
email = "meditans@gmail.com";
@ -6256,6 +6326,12 @@
fingerprint = "240B 57DE 4271 2480 7CE3 EAC8 4F74 D536 1C4C A31E";
}];
};
priegger = {
email = "philipp@riegger.name";
github = "priegger";
githubId = 228931;
name = "Philipp Riegger";
};
prikhi = {
email = "pavan.rikhi@gmail.com";
github = "prikhi";
@ -7226,6 +7302,16 @@
githubId = 2770647;
name = "Simon Vandel Sillesen";
};
siriobalmelli = {
email = "sirio@b-ad.ch";
github = "siriobalmelli";
githubId = 23038812;
name = "Sirio Balmelli";
keys = [{
longkeyid = "ed25519/0xF72C4A887F9A24CA";
fingerprint = "B234 EFD4 2B42 FE81 EE4D 7627 F72C 4A88 7F9A 24CA";
}];
};
sivteck = {
email = "sivaram1992@gmail.com";
github = "sivteck";
@ -7800,6 +7886,12 @@
githubId = 1141680;
name = "Thane Gill";
};
TheBrainScrambler = {
email = "esthromeris@riseup.net";
github = "TheBrainScrambler";
githubId = 34945377;
name = "John Smith";
};
thedavidmeister = {
email = "thedavidmeister@gmail.com";
github = "thedavidmeister";
@ -8903,6 +8995,12 @@
githubId = 474343;
name = "Xavier Zwirtz";
};
ymarkus = {
name = "Yannick Markus";
email = "nixpkgs@ymarkus.dev";
github = "ymarkus";
githubId = 62380378;
};
ymeister = {
name = "Yuri Meister";
email = "47071325+ymeister@users.noreply.github.com";
@ -8915,4 +9013,16 @@
github = "cpcloud";
githubId = 417981;
};
davegallant = {
name = "Dave Gallant";
email = "davegallant@gmail.com";
github = "davegallant";
githubId = 4519234;
};
saulecabrera = {
name = "Saúl Cabrera";
email = "saulecabrera@gmail.com";
github = "saulecabrera";
githubId = 1423601;
};
}

View file

@ -2,7 +2,7 @@
# Download patches from debian project
# Usage $0 debian-patches.txt debian-patches.nix
# An example input and output files can be found in applications/graphics/xara/
# An example input and output files can be found in tools/graphics/plotutils
DEB_URL=https://sources.debian.org/data/main
declare -a deb_patches

View file

@ -74,6 +74,7 @@ moonscript,,,,,arobyn
nvim-client,,,,,
penlight,,,,,
rapidjson,,,,,
readline,,,,,
say,,,,,
std__debug,std._debug,,,,
std_normalize,std.normalize,,,,


View file

@ -40,6 +40,7 @@ with lib.maintainers; {
cstrahan
Frostman
kalbasit
mdlayher
mic92
orivej
rvolosatovs
@ -57,6 +58,18 @@ with lib.maintainers; {
scope = "Maintain GNOME desktop environment and platform.";
};
matrix = {
members = [
ma27
pacien
fadenb
mguentner
ekleog
ralith
];
scope = "Maintain the ecosystem around Matrix, a decentralized messenger.";
};
php = {
members = [
aanderse

View file

@ -96,6 +96,47 @@
The options are named identically for all other display managers.
</para>
</simplesect>
<simplesect xml:id="sec-x11--graphics-cards-intel">
<title>Intel Graphics drivers</title>
<para>
There are two choices for Intel Graphics drivers in X.org:
<literal>modesetting</literal> (included in the <package>xorg-server</package> itself)
and <literal>intel</literal> (provided by the package <package>xf86-video-intel</package>).
</para>
<para>
The default and recommended is <literal>modesetting</literal>.
It is a generic driver which uses the kernel
<link xlink:href="https://en.wikipedia.org/wiki/Mode_setting">mode setting</link>
(KMS) mechanism. It supports Glamor (2D graphics acceleration via OpenGL)
and is actively maintained but may perform worse in some cases (like in old chipsets).
</para>
<para>
The second driver, <literal>intel</literal>, is specific to Intel GPUs,
but not recommended by most distributions: it lacks several modern features
(for example, it doesn't support Glamor) and the package hasn't been officially
updated since 2015.
</para>
<para>
The results vary depending on the hardware, so you may have to try both drivers.
Use the option <xref linkend="opt-services.xserver.videoDrivers"/> to set one.
The recommended configuration for modern systems is:
<programlisting>
<xref linkend="opt-services.xserver.videoDrivers"/> = [ "modesetting" ];
<xref linkend="opt-services.xserver.useGlamor"/> = true;
</programlisting>
If you experience screen tearing no matter what, this configuration was
reported to resolve the issue:
<programlisting>
<xref linkend="opt-services.xserver.videoDrivers"/> = [ "intel" ];
<xref linkend="opt-services.xserver.deviceSection"/> = ''
Option "DRI" "2"
Option "TearFree" "true"
'';
</programlisting>
Note that this will likely downgrade the performance compared to
<literal>modesetting</literal> or <literal>intel</literal> with DRI 3 (default).
</para>
</simplesect>
<simplesect xml:id="sec-x11-graphics-cards-nvidia">
<title>Proprietary NVIDIA drivers</title>
<para>

View file

@ -38,7 +38,12 @@ starting VDE switch for network 1
</para>
<para>
The machine state is kept across VM restarts in
<filename>/tmp/vm-state-</filename><varname>machinename</varname>.
You can re-use the VM states coming from a previous run
by setting the <command>--keep-vm-state</command> flag.
<screen>
<prompt>$ </prompt>./result/bin/nixos-run-vms --keep-vm-state
</screen>
The machine state is stored in the
<filename>$TMPDIR/vm-state-</filename><varname>machinename</varname> directory.
</para>
</section>

View file

@ -360,6 +360,18 @@ start_all()
</note>
</listitem>
</varlistentry>
<varlistentry>
<term>
<methodname>wait_for_console_text</methodname>
</term>
<listitem>
<para>
Wait until the supplied regular expression matches a line of the serial
console output. This method is useful when OCR is not possible or
accurate enough.
</para>
</listitem>
</varlistentry>
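A short usage sketch: the method is called on a machine object from the Python test script, like the other wait_for_* helpers; the pattern below is only an example:

  testScript = ''
    machine.start()
    machine.wait_for_console_text("login: ")
  '';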
<varlistentry>
<term>
<methodname>wait_for_window</methodname>

View file

@ -49,7 +49,7 @@
</listitem>
<listitem>
<para>
Click on Settings / Display / Screen and select VBoxVGA as Graphics Controller
Click on Settings / Display / Screen and select VMSVGA as Graphics Controller
</para>
</listitem>
<listitem>

View file

@ -146,7 +146,7 @@
partition. It uses the initially reserved 512MiB at the start of the
disk.
<screen language="commands"><prompt># </prompt>parted /dev/sda -- mkpart ESP fat32 1MiB 512MiB
<prompt># </prompt>parted /dev/sda -- set 3 boot on</screen>
<prompt># </prompt>parted /dev/sda -- set 3 esp on</screen>
</para>
</listitem>
</orderedlist>
@ -513,7 +513,7 @@ Retype new UNIX password: ***</screen>
<prompt># </prompt>parted /dev/sda -- mkpart primary 512MiB -8GiB
<prompt># </prompt>parted /dev/sda -- mkpart primary linux-swap -8GiB 100%
<prompt># </prompt>parted /dev/sda -- mkpart ESP fat32 1MiB 512MiB
<prompt># </prompt>parted /dev/sda -- set 3 boot on</screen>
<prompt># </prompt>parted /dev/sda -- set 3 esp on</screen>
</example>
<example xml:id="ex-install-sequence">

View file

@ -110,6 +110,15 @@ systemd.services.mysql.serviceConfig.ReadWritePaths = [ "/var/data" ];
</programlisting>
</para>
</listitem>
<listitem>
<para>
A new option <link linkend="opt-documentation.man.generateCaches">documentation.man.generateCaches</link>
has been added to automatically generate the <literal>man-db</literal> caches, which are needed by utilities
like <command>whatis</command> and <command>apropos</command>. The caches are generated during the build of
the NixOS configuration: since this can be expensive when a large number of packages are installed, the
feature is disabled by default.
</para>
</listitem>
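Enabling the caches is a single line in the system configuration (the fish module later in this commit sets the same option for man completions):

  documentation.man.generateCaches = true;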
</itemizedlist>
</section>
@ -490,6 +499,21 @@ systemd.services.nginx.serviceConfig.ReadWritePaths = [ "/var/www" ];
<link xlink:href="https://github.com/NixOS/nixpkgs/issues/89205">#89205</link>.
</para>
</listitem>
<listitem>
<para>
In the <literal>resilio</literal> module, <xref linkend="opt-services.resilio.httpListenAddr"/> has been changed to listen to <literal>[::1]</literal> instead of <literal>0.0.0.0</literal>.
</para>
</listitem>
<listitem>
<para>
Radicale's default package has changed from 2.x to 3.x. An upgrade
checklist can be found
<link xlink:href="https://github.com/Kozea/Radicale/blob/3.0.x/NEWS.md#upgrade-checklist">here</link>.
You can use the newer version in the NixOS service by setting the
<literal>package</literal> to <literal>radicale3</literal>, which is done
automatically if <literal>stateVersion</literal> is 20.09 or higher.
</para>
</listitem>
</itemizedlist>
</section>
@ -614,6 +638,50 @@ systemd.services.nginx.serviceConfig.ReadWritePaths = [ "/var/www" ];
queued on the kernel side of the netlink socket.
</para>
</listitem>
<listitem>
<para>
Specifying <link linkend="opt-services.dovecot2.mailboxes">mailboxes</link> in the <package>dovecot2</package> module
as a list is deprecated and will break eval in 21.03. Instead, an attribute-set should be specified where the <literal>name</literal>
should be the key of the attribute.
</para>
<para>
This means that a configuration like this
<programlisting>{
<link linkend="opt-services.dovecot2.mailboxes">services.dovecot2.mailboxes</link> = [
{ name = "Junk";
auto = "create";
}
];
}</programlisting>
should now look like this:
<programlisting>{
<link linkend="opt-services.dovecot2.mailboxes">services.dovecot2.mailboxes</link> = {
Junk.auto = "create";
};
}</programlisting>
</para>
</listitem>
<listitem>
<para>
<package>netbeans</package> was upgraded to 12.0 and now defaults to OpenJDK 11. This might cause problems if your projects depend on packages that were removed in Java 11.
</para>
</listitem>
<listitem>
<para>
<package>nextcloud</package> has been updated to <link xlink:href="https://nextcloud.com/blog/nextcloud-hub-brings-productivity-to-home-office/">v19</link>.
</para>
<para>
If you have an existing installation, please make sure that you're on
<package>nextcloud18</package> before upgrading to <package>nextcloud19</package>
since Nextcloud doesn't support upgrades across multiple major versions.
</para>
</listitem>
<listitem>
<para>
The <literal>nixos-run-vms</literal> script now deletes the
previous run machines states on test startup. You can use the
<literal>--keep-vm-state</literal> flag to match the previous
behaviour and keep the same VM state between different test runs.
</para>
</listitem>
</itemizedlist>
</section>
</section>

View file

@ -17,7 +17,7 @@
, e2fsprogs
, libfaketime
, perl
, lkl
, fakeroot
}:
let
@ -26,7 +26,7 @@ in
pkgs.stdenv.mkDerivation {
name = "ext4-fs.img${lib.optionalString compressImage ".zst"}";
nativeBuildInputs = [ e2fsprogs.bin libfaketime perl lkl ]
nativeBuildInputs = [ e2fsprogs.bin libfaketime perl fakeroot ]
++ lib.optional compressImage zstd;
buildCommand =
@ -37,32 +37,31 @@ pkgs.stdenv.mkDerivation {
${populateImageCommands}
)
# Add the closures of the top-level store objects.
storePaths=$(cat ${sdClosureInfo}/store-paths)
echo "Preparing store paths for image..."
# Create nix/store before copying path
mkdir -p ./rootImage/nix/store
xargs -I % cp -a --reflink=auto % -t ./rootImage/nix/store/ < ${sdClosureInfo}/store-paths
(
GLOBIGNORE=".:.."
shopt -u dotglob
cp -a --reflink=auto ./files/* -t ./rootImage/
)
# Also include a manifest of the closures in a format suitable for nix-store --load-db
cp ${sdClosureInfo}/registration ./rootImage/nix-path-registration
# Make a crude approximation of the size of the target image.
# If the script starts failing, increase the fudge factors here.
numInodes=$(find $storePaths ./files | wc -l)
numDataBlocks=$(du -s -c -B 4096 --apparent-size $storePaths ./files | tail -1 | awk '{ print int($1 * 1.03) }')
numInodes=$(find ./rootImage | wc -l)
numDataBlocks=$(du -s -c -B 4096 --apparent-size ./rootImage | tail -1 | awk '{ print int($1 * 1.10) }')
bytes=$((2 * 4096 * $numInodes + 4096 * $numDataBlocks))
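# As a worked example of this estimate: numInodes=1000 and numDataBlocks=50000
# give bytes = 2*4096*1000 + 4096*50000 = 212992000, i.e. roughly 203 MiB.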
echo "Creating an EXT4 image of $bytes bytes (numInodes=$numInodes, numDataBlocks=$numDataBlocks)"
truncate -s $bytes $img
faketime -f "1970-01-01 00:00:01" mkfs.ext4 -L ${volumeLabel} -U ${uuid} $img
# Also include a manifest of the closures in a format suitable for nix-store --load-db.
cp ${sdClosureInfo}/registration nix-path-registration
cptofs -t ext4 -i $img nix-path-registration /
# Create nix/store before copying paths
faketime -f "1970-01-01 00:00:01" mkdir -p nix/store
cptofs -t ext4 -i $img nix /
echo "copying store paths to image..."
cptofs -t ext4 -i $img $storePaths /nix/store/
echo "copying files to image..."
cptofs -t ext4 -i $img ./files/* /
faketime -f "1970-01-01 00:00:01" fakeroot mkfs.ext4 -L ${volumeLabel} -U ${uuid} -d ./rootImage $img
export EXT2FS_NO_MTAB_OK=yes
# I have sometimes ended up with corrupted images; I suspect this happens when the build machine's disk fills up during the build.

View file

@ -3,7 +3,10 @@ from contextlib import contextmanager, _GeneratorContextManager
from queue import Queue, Empty
from typing import Tuple, Any, Callable, Dict, Iterator, Optional, List
from xml.sax.saxutils import XMLGenerator
import queue
import io
import _thread
import argparse
import atexit
import base64
import codecs
@ -671,6 +674,22 @@ class Machine:
with self.nested("waiting for {} to appear on screen".format(regex)):
retry(screen_matches)
def wait_for_console_text(self, regex: str) -> None:
self.log("waiting for {} to appear on console".format(regex))
# Buffer the console output, this is needed
# to match multiline regexes.
console = io.StringIO()
while True:
try:
console.write(self.last_lines.get(block=False))
except queue.Empty:
self.sleep(1)
continue
console.seek(0)
matches = re.search(regex, console.read())
if matches is not None:
return
def send_key(self, key: str) -> None:
key = CHAR_TO_KEY.get(key, key)
self.send_monitor_command("sendkey {}".format(key))
@ -734,11 +753,16 @@ class Machine:
self.monitor, _ = self.monitor_socket.accept()
self.shell, _ = self.shell_socket.accept()
# Store last serial console lines for use
# of wait_for_console_text
self.last_lines: Queue = Queue()
def process_serial_output() -> None:
assert self.process.stdout is not None
for _line in self.process.stdout:
# Ignore undecodable bytes that may occur in boot menus
line = _line.decode(errors="ignore").replace("\r", "").rstrip()
self.last_lines.put(line + "\n")  # keep line boundaries so multiline regexes can match
eprint("{} # {}".format(self.name, line))
self.logger.enqueue({"msg": line, "machine": self.name})
@ -751,6 +775,11 @@ class Machine:
self.log("QEMU running (pid {})".format(self.pid))
def cleanup_statedir(self) -> None:
self.log("delete the VM state directory")
if os.path.isdir(self.state_dir):
shutil.rmtree(self.state_dir)
def shutdown(self) -> None:
if not self.booted:
return
@ -889,6 +918,15 @@ def subtest(name: str) -> Iterator[None]:
if __name__ == "__main__":
arg_parser = argparse.ArgumentParser()
arg_parser.add_argument(
"-K",
"--keep-vm-state",
help="re-use a VM state coming from a previous run",
action="store_true",
)
(cli_args, vm_scripts) = arg_parser.parse_known_args()
log = Logger()
vlan_nrs = list(dict.fromkeys(os.environ.get("VLANS", "").split()))
@ -896,8 +934,10 @@ if __name__ == "__main__":
for nr, vde_socket, _, _ in vde_sockets:
os.environ["QEMU_VDE_SOCKET_{}".format(nr)] = vde_socket
vm_scripts = sys.argv[1:]
machines = [create_machine({"startCommand": s}) for s in vm_scripts]
for machine in machines:
if not cli_args.keep_vm_state:
machine.cleanup_statedir()
machine_eval = [
"{0} = machines[{1}]".format(m.name, idx) for idx, m in enumerate(machines)
]
@ -911,7 +951,6 @@ if __name__ == "__main__":
continue
log.log("killing {} (pid {})".format(machine.name, machine.pid))
machine.process.kill()
for _, _, process, _ in vde_sockets:
process.terminate()
log.close()

View file

@ -114,7 +114,7 @@ rec {
imagemagick_tiff = imagemagick_light.override { inherit libtiff; };
# Generate onvenience wrappers for running the test driver
# Generate convenience wrappers for running the test driver
# interactively with the specified network, and for starting the
# VMs from the command line.
driver = let warn = if skipLint then lib.warn "Linting is disabled!" else lib.id; in warn (runCommand testDriverName

View file

@ -278,7 +278,14 @@ in
(mkRemovedOptionModule [ "fonts" "fontconfig" "hinting" "style" ] "")
(mkRemovedOptionModule [ "fonts" "fontconfig" "forceAutohint" ] "")
(mkRemovedOptionModule [ "fonts" "fontconfig" "renderMonoTTFAsBitmap" ] "")
];
] ++ lib.forEach [ "enable" "substitutions" "preset" ]
(opt: lib.mkRemovedOptionModule [ "fonts" "fontconfig" "ultimate" "${opt}" ] ''
The fonts.fontconfig.ultimate module and configuration are obsolete.
The repository has since been archived and activity has ceased.
https://github.com/bohoomil/fontconfig-ultimate/issues/171.
No action should be needed for font configuration, as the fonts.fontconfig
module is already used by default.
'');
options = {

View file

@ -27,6 +27,15 @@ let
hashedPasswordDescription = ''
To generate hashed password install <literal>mkpasswd</literal>
package and run <literal>mkpasswd -m sha-512</literal>.
For password-less logins without password prompt, use
the empty string <literal>""</literal>.
For logins with a fixed password (including the empty-string password with
prompt), use one of the un-hashed password options instead, such as
<option>users.users.&lt;name?&gt;.password</option>.
Such unprotected logins should only be used for e.g. bootable live systems.
'';
userOpts = { name, config, ... }: {
@ -626,7 +635,7 @@ in {
then
''
The password hash of user "${name}" may be invalid. You must set a
valid hash or the user will be locked out of his account. Please
valid hash or the user will be locked out of their account. Please
check the value of option `users.users."${name}".hashedPassword`.
''
else null

View file

@ -22,11 +22,22 @@ in {
example = literalExample "pkgs.device-tree_rpi";
type = types.path;
description = ''
The package containing the base device-tree (.dtb) to boot. Contains
The path containing the base device-tree (.dtb) to boot. Contains
device trees bundled with the Linux kernel by default.
'';
};
name = mkOption {
default = null;
example = "some-dtb.dtb";
type = types.nullOr types.str;
description = ''
The name of an explicit dtb to be loaded, relative to the dtb base.
Useful in extlinux scenarios if the bootloader doesn't pick the
right .dtb file from FDTDIR.
'';
};
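# Put together, selecting an explicit dtb might look like this sketch
# (the file name is this option's example value):
#   hardware.deviceTree = {
#     enable = true;
#     name = "some-dtb.dtb";  # resolved relative to the dtb base
#   };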
overlays = mkOption {
default = [];
example = literalExample

View file

@ -84,7 +84,7 @@ in {
model = mkOption {
type = types.str;
example = literalExample ''
gutenprint.''${lib.version.majorMinor (lib.getVersion pkgs.cups)}://brother-hl-5140/expert
gutenprint.''${lib.versions.majorMinor (lib.getVersion pkgs.gutenprint)}://brother-hl-5140/expert
'';
description = ''
Location of the ppd driver file for the printer.

View file

@ -2,12 +2,6 @@
# nix-build nixos -I nixos-config=nixos/modules/installer/cd-dvd/sd-image-aarch64.nix -A config.system.build.sdImage
{ config, lib, pkgs, ... }:
let
extlinux-conf-builder =
import ../../system/boot/loader/generic-extlinux-compatible/extlinux-conf-builder.nix {
pkgs = pkgs.buildPackages;
};
in
{
imports = [
../../profiles/base.nix
@ -56,7 +50,7 @@ in
'';
populateRootCommands = ''
mkdir -p ./files/boot
${extlinux-conf-builder} -t 3 -c ${config.system.build.toplevel} -d ./files/boot
${config.boot.loader.generic-extlinux-compatible.populateCmd} -c ${config.system.build.toplevel} -d ./files/boot
'';
};

View file

@ -2,12 +2,6 @@
# nix-build nixos -I nixos-config=nixos/modules/installer/cd-dvd/sd-image-armv7l-multiplatform.nix -A config.system.build.sdImage
{ config, lib, pkgs, ... }:
let
extlinux-conf-builder =
import ../../system/boot/loader/generic-extlinux-compatible/extlinux-conf-builder.nix {
pkgs = pkgs.buildPackages;
};
in
{
imports = [
../../profiles/base.nix
@ -53,7 +47,7 @@ in
'';
populateRootCommands = ''
mkdir -p ./files/boot
${extlinux-conf-builder} -t 3 -c ${config.system.build.toplevel} -d ./files/boot
${config.boot.loader.generic-extlinux-compatible.populateCmd} -c ${config.system.build.toplevel} -d ./files/boot
'';
};

View file

@ -2,12 +2,6 @@
# nix-build nixos -I nixos-config=nixos/modules/installer/cd-dvd/sd-image-raspberrypi.nix -A config.system.build.sdImage
{ config, lib, pkgs, ... }:
let
extlinux-conf-builder =
import ../../system/boot/loader/generic-extlinux-compatible/extlinux-conf-builder.nix {
pkgs = pkgs.buildPackages;
};
in
{
imports = [
../../profiles/base.nix
@ -42,7 +36,7 @@ in
'';
populateRootCommands = ''
mkdir -p ./files/boot
${extlinux-conf-builder} -t 3 -c ${config.system.build.toplevel} -d ./files/boot
${config.boot.loader.generic-extlinux-compatible.populateCmd} -c ${config.system.build.toplevel} -d ./files/boot
'';
};

View file

@ -18,6 +18,7 @@
sdImage = {
firmwareSize = 128;
firmwarePartitionName = "NIXOS_BOOT";
# This is a hack to avoid replicating config.txt from boot.loader.raspberryPi
populateFirmwareCommands =
"${config.system.build.installBootLoader} ${config.system.build.toplevel} -d ./firmware";
@ -25,6 +26,12 @@
populateRootCommands = "";
};
fileSystems."/boot/firmware" = {
# This effectively "renames" the loaOf entry set in sd-image.nix
mountPoint = "/boot";
neededForBoot = true;
};
# the installation media is also the installation target,
# so we don't want to provide the installation configuration.nix.
installer.cloneConfig = false;

View file

@ -63,6 +63,14 @@ in
'';
};
firmwarePartitionName = mkOption {
type = types.str;
default = "FIRMWARE";
description = ''
Name of the filesystem which holds the boot firmware.
'';
};
rootPartitionUUID = mkOption {
type = types.nullOr types.str;
default = null;
@ -91,7 +99,7 @@ in
};
populateRootCommands = mkOption {
example = literalExample "''\${extlinux-conf-builder} -t 3 -c \${config.system.build.toplevel} -d ./files/boot''";
example = literalExample "''\${config.boot.loader.generic-extlinux-compatible.populateCmd} -c \${config.system.build.toplevel} -d ./files/boot''";
description = ''
Shell commands to populate the ./files directory.
All files in that directory are copied to the
@ -114,7 +122,7 @@ in
config = {
fileSystems = {
"/boot/firmware" = {
device = "/dev/disk/by-label/FIRMWARE";
device = "/dev/disk/by-label/${config.sdImage.firmwarePartitionName}";
fsType = "vfat";
# Alternatively, this could be removed from the configuration.
# The filesystem is not needed at runtime, it could be treated
@ -178,7 +186,7 @@ in
# Create a FAT32 /boot/firmware partition of suitable size into firmware_part.img
eval $(partx $img -o START,SECTORS --nr 1 --pairs)
truncate -s $((SECTORS * 512)) firmware_part.img
faketime "1970-01-01 00:00:00" mkfs.vfat -i ${config.sdImage.firmwarePartitionID} -n FIRMWARE firmware_part.img
faketime "1970-01-01 00:00:00" mkfs.vfat -i ${config.sdImage.firmwarePartitionID} -n ${config.sdImage.firmwarePartitionName} firmware_part.img
# Populate the files intended for /boot/firmware
mkdir firmware

View file

@ -628,6 +628,7 @@ EOF
write_file($fn, <<EOF);
@configuration@
EOF
print STDERR "For more hardware-specific settings, see https://github.com/NixOS/nixos-hardware"
} else {
print STDERR "warning: not overwriting existing $fn\n";
}

View file

@ -102,6 +102,16 @@ in
'';
};
man.generateCaches = mkOption {
type = types.bool;
default = false;
description = ''
Whether to generate the manual page index caches using
<literal>mandb(8)</literal>. This allows searching for a page or
keyword using utilities like <literal>apropos(1)</literal>.
'';
};
info.enable = mkOption {
type = types.bool;
default = true;
@ -187,7 +197,33 @@ in
environment.systemPackages = [ pkgs.man-db ];
environment.pathsToLink = [ "/share/man" ];
environment.extraOutputsToInstall = [ "man" ] ++ optional cfg.dev.enable "devman";
environment.etc."man.conf".source = "${pkgs.man-db}/etc/man_db.conf";
environment.etc."man_db.conf".text =
let
manualPages = pkgs.buildEnv {
name = "man-paths";
paths = config.environment.systemPackages;
pathsToLink = [ "/share/man" ];
extraOutputsToInstall = ["man"];
ignoreCollisions = true;
};
manualCache = pkgs.runCommandLocal "man-cache" { }
''
echo "MANDB_MAP ${manualPages}/share/man $out" > man.conf
${pkgs.man-db}/bin/mandb -C man.conf -psc
'';
in
''
# Manual pages paths for NixOS
MANPATH_MAP /run/current-system/sw/bin /run/current-system/sw/share/man
MANPATH_MAP /run/wrappers/bin /run/current-system/sw/share/man
${optionalString cfg.man.generateCaches ''
# Generated manual pages cache for NixOS (immutable)
MANDB_MAP /run/current-system/sw/share/man ${manualCache}
''}
# Manual pages caches for NixOS
MANDB_MAP /run/current-system/sw/share/man /var/cache/man/nixos
'';
})
(mkIf cfg.info.enable {

View file

@ -589,6 +589,7 @@
./services/networking/autossh.nix
./services/networking/bird.nix
./services/networking/bitlbee.nix
./services/networking/blockbook-frontend.nix
./services/networking/charybdis.nix
./services/networking/cjdns.nix
./services/networking/cntlm.nix
@ -685,6 +686,7 @@
./services/networking/ocserv.nix
./services/networking/ofono.nix
./services/networking/oidentd.nix
./services/networking/onedrive.nix
./services/networking/openfire.nix
./services/networking/openvpn.nix
./services/networking/ostinato.nix
@ -757,6 +759,7 @@
./services/networking/v2ray.nix
./services/networking/vsftpd.nix
./services/networking/wakeonlan.nix
./services/networking/wasabibackend.nix
./services/networking/websockify.nix
./services/networking/wg-quick.nix
./services/networking/wicd.nix
@ -830,6 +833,7 @@
./services/web-apps/atlassian/crowd.nix
./services/web-apps/atlassian/jira.nix
./services/web-apps/codimd.nix
./services/web-apps/convos.nix
./services/web-apps/cryptpad.nix
./services/web-apps/documize.nix
./services/web-apps/dokuwiki.nix
@ -934,6 +938,7 @@
./system/boot/grow-partition.nix
./system/boot/initrd-network.nix
./system/boot/initrd-ssh.nix
./system/boot/initrd-openvpn.nix
./system/boot/kernel.nix
./system/boot/kexec.nix
./system/boot/loader/efi.nix

View file

@ -102,6 +102,9 @@ in
programs.fish.shellAliases = mapAttrs (name: mkDefault) cfge.shellAliases;
# Required for man completions
documentation.man.generateCaches = true;
environment.etc."fish/foreign-env/shellInit".text = cfge.shellInit;
environment.etc."fish/foreign-env/loginShellInit".text = cfge.loginShellInit;
environment.etc."fish/foreign-env/interactiveShellInit".text = cfge.interactiveShellInit;

View file

@ -3,7 +3,7 @@
with lib;
{
meta.maintainers = maintainers.fabianhauser;
meta.maintainers = pkgs.hamster.meta.maintainers;
options.programs.hamster.enable =
mkEnableOption "Whether to enable hamster time tracking.";

View file

@ -173,7 +173,9 @@ in
config = mkIf cfg.enable {
security.sudo.extraRules = [
# We `mkOrder 600` so that the default rule shows up first, but there is
# still enough room for a user to `mkBefore` it.
security.sudo.extraRules = mkOrder 600 [
{ groups = [ "wheel" ];
commands = [ { command = "ALL"; options = (if cfg.wheelNeedsPassword then [ "SETENV" ] else [ "NOPASSWD" "SETENV" ]); } ];
}

View file

@ -21,6 +21,12 @@ let
${optionalString (cfg.network.listenAddress != "any") ''bind_to_address "${cfg.network.listenAddress}"''}
${optionalString (cfg.network.port != 6600) ''port "${toString cfg.network.port}"''}
${optionalString (cfg.fluidsynth) ''
decoder {
plugin "fluidsynth"
soundfont "${pkgs.soundfont-fluid}/share/soundfonts/FluidR3_GM2-2.sf2"
}
''}
${cfg.extraConfig}
'';
@ -133,6 +139,14 @@ in {
parameter is omitted from the configuration.
'';
};
fluidsynth = mkOption {
type = types.bool;
default = false;
description = ''
If set, add fluidsynth soundfont and configure the plugin.
'';
};
};
};
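A minimal sketch of enabling the new option:

  services.mpd = {
    enable = true;
    fluidsynth = true;  # adds the fluidsynth decoder block with the bundled FluidR3 soundfont
  };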

View file

@ -67,7 +67,7 @@ in
type = types.bool;
default = false;
description = ''
Wether to enable the slurm control daemon.
Whether to enable the slurm control daemon.
Note that the standard authentication method is "munge".
The "munge" service needs to be provided with a password file in order for
slurm to work properly (see <literal>services.munge.password</literal>).
@ -135,7 +135,7 @@ in
type = types.bool;
default = false;
description = ''
Wether to provide a slurm.conf file.
Whether to provide a slurm.conf file.
Enable this option if you do not run a slurm daemon on this host
(i.e. <literal>server.enable</literal> and <literal>client.enable</literal> are <literal>false</literal>)
but you still want to run slurm commands from this host.

View file

@ -25,7 +25,7 @@ let
change_source = [ ${concatStringsSep "," cfg.changeSource} ],
schedulers = [ ${concatStringsSep "," cfg.schedulers} ],
builders = [ ${concatStringsSep "," cfg.builders} ],
status = [ ${concatStringsSep "," cfg.status} ],
services = [ ${concatStringsSep "," cfg.reporters} ],
)
for step in [ ${concatStringsSep "," cfg.factorySteps} ]:
factory.addStep(step)
@ -119,10 +119,10 @@ in {
default = [ "worker.Worker('example-worker', 'pass')" ];
};
status = mkOption {
reporters = mkOption {
default = [];
type = types.listOf types.str;
description = "List of status notification endpoints.";
description = "List of reporter objects used to present build status to various users.";
};
user = mkOption {
@ -276,6 +276,10 @@ in {
imports = [
(mkRenamedOptionModule [ "services" "buildbot-master" "bpPort" ] [ "services" "buildbot-master" "pbPort" ])
(mkRemovedOptionModule [ "services" "buildbot-master" "status" ] ''
Since Buildbot 0.9.0, status targets are deprecated and ignored.
Review your configuration and migrate to reporters (available at services.buildbot-master.reporters).
'')
];
meta.maintainers = with lib.maintainers; [ nand0p mic92 ];
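For illustration, a reporter entry is a raw Python expression spliced into master.cfg; this hypothetical example assumes Buildbot's standard reporters plugin namespace is in scope there, as it is for the workers default above:

  services.buildbot-master.reporters = [
    "reporters.MailNotifier(fromaddr='buildbot@example.org')"
  ];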

View file

@ -1,14 +1,16 @@
{ config, lib, pkgs, ... }:
with builtins;
with lib;
let
cfg = config.services.gitlab-runner;
hasDocker = config.virtualisation.docker.enable;
hashedServices = with builtins; (mapAttrs' (name: service: nameValuePair
"${name}_${config.networking.hostName}_${
hashedServices = mapAttrs'
(name: service: nameValuePair
"${name}_${config.networking.hostName}_${
substring 0 12
(hashString "md5" (unsafeDiscardStringContext (toJSON service)))}"
service)
cfg.services);
cfg.services;
configPath = "$HOME/.gitlab-runner/config.toml";
configureScript = pkgs.writeShellScriptBin "gitlab-runner-configure" (
if (cfg.configFile != null) then ''
@ -76,7 +78,7 @@ let
++ map (v: "--docker-allowed-images ${escapeShellArg v}") service.dockerAllowedImages
++ map (v: "--docker-allowed-services ${escapeShellArg v}") service.dockerAllowedServices
)
))} && sleep 1
))} && sleep 1 || exit 1
fi
'') hashedServices)}
@ -89,8 +91,17 @@ let
# update global options
remarshal --if toml --of json ${configPath} \
| jq -cM '.check_interval = ${toString cfg.checkInterval} |
.concurrent = ${toString cfg.concurrent}' \
| jq -cM ${escapeShellArg (concatStringsSep " | " [
".check_interval = ${toJSON cfg.checkInterval}"
".concurrent = ${toJSON cfg.concurrent}"
".sentry_dsn = ${toJSON cfg.sentryDSN}"
".listen_address = ${toJSON cfg.prometheusListenAddress}"
".session_server.listen_address = ${toJSON cfg.sessionServer.listenAddress}"
".session_server.advertise_address = ${toJSON cfg.sessionServer.advertiseAddress}"
".session_server.session_timeout = ${toJSON cfg.sessionServer.sessionTimeout}"
"del(.[] | nulls)"
"del(.session_server[] | nulls)"
])} \
| remarshal --if json --of toml \
| sponge ${configPath}
@ -141,6 +152,66 @@ in
0 does not mean unlimited.
'';
};
sentryDSN = mkOption {
type = types.nullOr types.str;
default = null;
example = "https://public:private@host:port/1";
description = ''
Data Source Name for tracking of all system level errors to Sentry.
'';
};
prometheusListenAddress = mkOption {
type = types.nullOr types.str;
default = null;
example = "localhost:8080";
description = ''
Address (&lt;host&gt;:&lt;port&gt;) on which the Prometheus metrics HTTP server
should be listening.
'';
};
sessionServer = mkOption {
type = types.submodule {
options = {
listenAddress = mkOption {
type = types.nullOr types.str;
default = null;
example = "0.0.0.0:8093";
description = ''
An internal URL to be used for the session server.
'';
};
advertiseAddress = mkOption {
type = types.nullOr types.str;
default = null;
example = "runner-host-name.tld:8093";
description = ''
The URL that the Runner will expose to GitLab to be used
to access the session server.
Falls back to <option>listenAddress</option> if not defined.
'';
};
sessionTimeout = mkOption {
type = types.int;
default = 1800;
description = ''
How long in seconds the session can stay active after
the job completes (which will block the job from finishing).
'';
};
};
};
default = { };
example = literalExample ''
{
listenAddress = "0.0.0.0:8093";
}
'';
description = ''
The session server allows the user to interact with jobs
that the Runner is responsible for. A good example of this is the
<link xlink:href="https://docs.gitlab.com/ee/ci/interactive_web_terminal/index.html">interactive web terminal</link>.
'';
};
gracefulTermination = mkOption {
type = types.bool;
default = false;

View file

@ -5,14 +5,14 @@ with lib;
let
cfg = config.services.openldap;
openldap = pkgs.openldap;
openldap = cfg.package;
dataFile = pkgs.writeText "ldap-contents.ldif" cfg.declarativeContents;
configFile = pkgs.writeText "slapd.conf" ((optionalString cfg.defaultSchemas ''
include ${pkgs.openldap.out}/etc/schema/core.schema
include ${pkgs.openldap.out}/etc/schema/cosine.schema
include ${pkgs.openldap.out}/etc/schema/inetorgperson.schema
include ${pkgs.openldap.out}/etc/schema/nis.schema
include ${openldap.out}/etc/schema/core.schema
include ${openldap.out}/etc/schema/cosine.schema
include ${openldap.out}/etc/schema/inetorgperson.schema
include ${openldap.out}/etc/schema/nis.schema
'') + ''
${cfg.extraConfig}
database ${cfg.database}
@ -46,6 +46,18 @@ in
";
};
package = mkOption {
type = types.package;
default = pkgs.openldap;
description = ''
OpenLDAP package to use.
This can be used to, for example, set an OpenLDAP package
with custom overrides to enable modules or other
functionality.
'';
};
user = mkOption {
type = types.str;
default = "openldap";
@ -152,10 +164,10 @@ in
";
example = literalExample ''
'''
include ${pkgs.openldap.out}/etc/schema/core.schema
include ${pkgs.openldap.out}/etc/schema/cosine.schema
include ${pkgs.openldap.out}/etc/schema/inetorgperson.schema
include ${pkgs.openldap.out}/etc/schema/nis.schema
include ${openldap.out}/etc/schema/core.schema
include ${openldap.out}/etc/schema/cosine.schema
include ${openldap.out}/etc/schema/inetorgperson.schema
include ${openldap.out}/etc/schema/nis.schema
database bdb
suffix dc=example,dc=org
@ -232,7 +244,7 @@ in
};
meta = {
maintainers = lib.maintainers.mic92;
maintainers = [ lib.maintainers.mic92 ];
};

View file

@ -1,18 +1,35 @@
{ config, pkgs, lib, ... }:
with lib;
let
cfg = config.services.undervolt;
in {
cliArgs = lib.cli.toGNUCommandLineShell {} {
inherit (cfg)
verbose
temp
;
# `core` and `cache` are both intentionally set to `cfg.coreOffset` as according to the undervolt docs:
#
# Core or Cache offsets have no effect. It is not possible to set different offsets for
# CPU Core and Cache. The CPU will take the smaller of the two offsets, and apply that to
# both CPU and Cache. A warning message will be displayed if you attempt to set different offsets.
core = cfg.coreOffset;
cache = cfg.coreOffset;
gpu = cfg.gpuOffset;
uncore = cfg.uncoreOffset;
analogio = cfg.analogioOffset;
temp-bat = cfg.tempBat;
temp-ac = cfg.tempAc;
};
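# A sketch of the rendering (values illustrative):
#   lib.cli.toGNUCommandLineShell {} { verbose = true; core = -50; }
# yields the shell-escaped string "'--verbose' '--core' '-50'"; null attributes
# are dropped, which is why unset options simply disappear from the command line.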
in
{
options.services.undervolt = {
enable = mkOption {
type = types.bool;
default = false;
description = ''
Whether to undervolt intel cpus.
'';
};
enable = mkEnableOption ''
Undervolting service for Intel CPUs.
Warning: This service is not endorsed by Intel and may permanently damage your hardware. Use at your own risk!
'';
verbose = mkOption {
type = types.bool;
@ -32,58 +49,58 @@ in {
};
coreOffset = mkOption {
type = types.nullOr types.str;
type = types.nullOr types.int;
default = null;
description = ''
The amount of voltage to offset the CPU cores by. Accepts a floating point number.
The amount of voltage in mV to offset the CPU cores by.
'';
};
gpuOffset = mkOption {
type = types.nullOr types.str;
type = types.nullOr types.int;
default = null;
description = ''
The amount of voltage to offset the GPU by. Accepts a floating point number.
The amount of voltage in mV to offset the GPU by.
'';
};
uncoreOffset = mkOption {
type = types.nullOr types.str;
type = types.nullOr types.int;
default = null;
description = ''
The amount of voltage to offset uncore by. Accepts a floating point number.
The amount of voltage in mV to offset uncore by.
'';
};
analogioOffset = mkOption {
type = types.nullOr types.str;
type = types.nullOr types.int;
default = null;
description = ''
The amount of voltage to offset analogio by. Accepts a floating point number.
The amount of voltage in mV to offset analogio by.
'';
};
temp = mkOption {
type = types.nullOr types.str;
type = types.nullOr types.int;
default = null;
description = ''
The temperature target. Accepts a floating point number.
The temperature target in Celsius degrees.
'';
};
tempAc = mkOption {
type = types.nullOr types.str;
type = types.nullOr types.int;
default = null;
description = ''
The temperature target on AC power. Accepts a floating point number.
The temperature target on AC power in Celsius degrees.
'';
};
tempBat = mkOption {
type = types.nullOr types.str;
type = types.nullOr types.int;
default = null;
description = ''
The temperature target on battery power. Accepts a floating point number.
The temperature target on battery power in Celsius degrees.
'';
};
};
@ -100,24 +117,7 @@ in {
serviceConfig = {
Type = "oneshot";
Restart = "no";
# `core` and `cache` are both intentionally set to `cfg.coreOffset` as according to the undervolt docs:
#
# Core or Cache offsets have no effect. It is not possible to set different offsets for
# CPU Core and Cache. The CPU will take the smaller of the two offsets, and apply that to
# both CPU and Cache. A warning message will be displayed if you attempt to set different offsets.
ExecStart = ''
${pkgs.undervolt}/bin/undervolt \
${optionalString cfg.verbose "--verbose"} \
${optionalString (cfg.coreOffset != null) "--core ${cfg.coreOffset}"} \
${optionalString (cfg.coreOffset != null) "--cache ${cfg.coreOffset}"} \
${optionalString (cfg.gpuOffset != null) "--gpu ${cfg.gpuOffset}"} \
${optionalString (cfg.uncoreOffset != null) "--uncore ${cfg.uncoreOffset}"} \
${optionalString (cfg.analogioOffset != null) "--analogio ${cfg.analogioOffset}"} \
${optionalString (cfg.temp != null) "--temp ${cfg.temp}"} \
${optionalString (cfg.tempAc != null) "--temp-ac ${cfg.tempAc}"} \
${optionalString (cfg.tempBat != null) "--temp-bat ${cfg.tempBat}"}
'';
ExecStart = "${pkgs.undervolt}/bin/undervolt ${cliArgs}";
};
};

View file

@ -125,6 +125,8 @@ let
mailboxConfig = mailbox: ''
mailbox "${mailbox.name}" {
auto = ${toString mailbox.auto}
'' + optionalString (mailbox.autoexpunge != null) ''
autoexpunge = ${mailbox.autoexpunge}
'' + optionalString (mailbox.specialUse != null) ''
special_use = \${toString mailbox.specialUse}
'' + "}";
@ -132,8 +134,9 @@ let
mailboxes = { ... }: {
options = {
name = mkOption {
type = types.strMatching ''[^"]+'';
type = types.nullOr (types.strMatching ''[^"]+'');
example = "Spam";
default = null;
description = "The name of the mailbox.";
};
auto = mkOption {
@ -148,6 +151,15 @@ let
example = "Junk";
description = "Null if no special use flag is set. Other than that every use flag mentioned in the RFC is valid.";
};
autoexpunge = mkOption {
type = types.nullOr types.str;
default = null;
example = "60d";
description = ''
Automatically remove all email in the mailbox that is older than the
specified time.
'';
};
};
};
in
@ -323,9 +335,24 @@ in
};
mailboxes = mkOption {
type = types.listOf (types.submodule mailboxes);
default = [];
example = [ { name = "Spam"; specialUse = "Junk"; auto = "create"; } ];
type = with types; let m = submodule mailboxes; in either (listOf m) (attrsOf m);
default = {};
apply = x:
if isList x then warn "Declaring `services.dovecot2.mailboxes' as a list is deprecated and will break eval in 21.03!" x
else mapAttrsToList (name: value:
if value.name != null
then throw ''
When specifying dovecot2 mailboxes as attributes, declaring
a `name'-attribute is prohibited! The name ${value.name} should
be the attribute key!
''
else value // { inherit name; }
) x;
example = literalExample ''
{
Spam = { specialUse = "Junk"; auto = "create"; };
}
'';
description = "Configure mailboxes and auto create or subscribe them.";
};

View file

@ -6,42 +6,46 @@ let
cfg = config.services.mailman;
pythonEnv = pkgs.python3.withPackages (ps:
[ps.mailman ps.mailman-web]
++ lib.optional cfg.hyperkitty.enable ps.mailman-hyperkitty
++ cfg.extraPythonPackages);
# This deliberately doesn't use recursiveUpdate so users can
# override the defaults.
settings = {
webSettings = {
DEFAULT_FROM_EMAIL = cfg.siteOwner;
SERVER_EMAIL = cfg.siteOwner;
ALLOWED_HOSTS = [ "localhost" "127.0.0.1" ] ++ cfg.webHosts;
COMPRESS_OFFLINE = true;
STATIC_ROOT = "/var/lib/mailman-web/static";
STATIC_ROOT = "/var/lib/mailman-web-static";
MEDIA_ROOT = "/var/lib/mailman-web/media";
LOGGING = {
version = 1;
disable_existing_loggers = true;
handlers.console.class = "logging.StreamHandler";
loggers.django = {
handlers = [ "console" ];
level = "INFO";
};
};
HAYSTACK_CONNECTIONS.default = {
ENGINE = "haystack.backends.whoosh_backend.WhooshEngine";
PATH = "/var/lib/mailman-web/fulltext-index";
};
} // cfg.webSettings;
settingsJSON = pkgs.writeText "settings.json" (builtins.toJSON settings);
webSettingsJSON = pkgs.writeText "settings.json" (builtins.toJSON webSettings);
mailmanCfg = ''
[mailman]
site_owner: ${cfg.siteOwner}
layout: fhs
[paths.fhs]
bin_dir: ${pkgs.python3Packages.mailman}/bin
var_dir: /var/lib/mailman
queue_dir: $var_dir/queue
template_dir: $var_dir/templates
log_dir: $var_dir/log
lock_dir: $var_dir/lock
etc_dir: /etc
ext_dir: $etc_dir/mailman.d
pid_file: /run/mailman/master.pid
'' + optionalString cfg.hyperkitty.enable ''
[archiver.hyperkitty]
class: mailman_hyperkitty.Archiver
enable: yes
configuration: /var/lib/mailman/mailman-hyperkitty.cfg
# TODO: Should this be RFC42-ised so that users can set additional options without modifying the module?
mtaConfig = pkgs.writeText "mailman-postfix.cfg" ''
[postfix]
postmap_command: ${pkgs.postfix}/bin/postmap
transport_file_type: hash
'';
mailmanCfg = lib.generators.toINI {} cfg.settings;
mailmanHyperkittyCfg = pkgs.writeText "mailman-hyperkitty.cfg" ''
[general]
# This is your HyperKitty installation, preferably on the localhost. This
@ -84,7 +88,7 @@ in {
type = types.package;
default = pkgs.mailman;
defaultText = "pkgs.mailman";
example = "pkgs.mailman.override { archivers = []; }";
example = literalExample "pkgs.mailman.override { archivers = []; }";
description = "Mailman package to use";
};
@ -98,18 +102,6 @@ in {
'';
};
webRoot = mkOption {
type = types.path;
default = "${pkgs.mailman-web}/${pkgs.python3.sitePackages}";
defaultText = "\${pkgs.mailman-web}/\${pkgs.python3.sitePackages}";
description = ''
The web root for the Hyperkitty + Postorius apps provided by Mailman.
This variable can be set, of course, but it mainly exists so that site
admins can refer to it in their own hand-written web server
configuration files.
'';
};
webHosts = mkOption {
type = types.listOf types.str;
default = [];
@ -124,7 +116,7 @@ in {
webUser = mkOption {
type = types.str;
default = config.services.httpd.user;
default = "mailman-web";
description = ''
User to run mailman-web as
'';
@ -138,6 +130,22 @@ in {
'';
};
serve = {
enable = mkEnableOption "Automatic nginx and uwsgi setup for mailman-web";
};
extraPythonPackages = mkOption {
description = "Packages to add to the python environment used by mailman and mailman-web";
type = types.listOf types.package;
default = [];
};
settings = mkOption {
description = "Settings for mailman.cfg";
type = types.attrsOf (types.attrsOf types.str);
default = {};
};
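Because cfg.settings is rendered with lib.generators.toINI, each top-level attribute becomes an INI section in mailman.cfg. A hedged sketch of a user-level override (the [archiver.prototype] section is an assumption about Mailman's own config schema, not something this module defines):

  services.mailman.settings = {
    # overrides the mkDefault set by the module below
    mailman.site_owner = "postmaster@example.org";
    # extra sections pass through to mailman.cfg verbatim
    "archiver.prototype".enable = "no";
  };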
hyperkitty = {
enable = mkEnableOption "the Hyperkitty archiver for Mailman";
@ -158,6 +166,35 @@ in {
config = mkIf cfg.enable {
services.mailman.settings = {
mailman.site_owner = lib.mkDefault cfg.siteOwner;
mailman.layout = "fhs";
"paths.fhs" = {
bin_dir = "${pkgs.python3Packages.mailman}/bin";
var_dir = "/var/lib/mailman";
queue_dir = "$var_dir/queue";
template_dir = "$var_dir/templates";
log_dir = "/var/log/mailman";
lock_dir = "$var_dir/lock";
etc_dir = "/etc";
ext_dir = "$etc_dir/mailman.d";
pid_file = "/run/mailman/master.pid";
};
mta.configuration = lib.mkDefault "${mtaConfig}";
"archiver.hyperkitty" = lib.mkIf cfg.hyperkitty.enable {
class = "mailman_hyperkitty.Archiver";
enable = "yes";
configuration = "/var/lib/mailman/mailman-hyperkitty.cfg";
};
} // (let
loggerNames = ["root" "archiver" "bounce" "config" "database" "debug" "error" "fromusenet" "http" "locks" "mischief" "plugins" "runner" "smtp"];
loggerSectionNames = map (n: "logging.${n}") loggerNames;
in lib.genAttrs loggerSectionNames (name: { handler = "stderr"; })
);
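To illustrate the genAttrs call above: it expands loggerNames into one INI section per logger, roughly equivalent to writing out by hand (first two entries shown):

  {
    "logging.root" = { handler = "stderr"; };
    "logging.archiver" = { handler = "stderr"; };
    # ...and so on for every name in loggerNames
  }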
assertions = let
inherit (config.services) postfix;
@ -183,7 +220,17 @@ in {
(requirePostfixHash [ "config" "local_recipient_maps" ] "postfix_lmtp")
];
users.users.mailman = { description = "GNU Mailman"; isSystemUser = true; };
users.users.mailman = {
description = "GNU Mailman";
isSystemUser = true;
group = "mailman";
};
users.users.mailman-web = lib.mkIf (cfg.webUser == "mailman-web") {
description = "GNU Mailman web interface";
isSystemUser = true;
group = "mailman";
};
users.groups.mailman = {};
environment.etc."mailman.cfg".text = mailmanCfg;
@ -198,14 +245,35 @@ in {
import json
with open('${settingsJSON}') as f:
with open('${webSettingsJSON}') as f:
globals().update(json.load(f))
with open('/var/lib/mailman-web/settings_local.json') as f:
globals().update(json.load(f))
'';
environment.systemPackages = [ cfg.package ] ++ (with pkgs; [ mailman-web ]);
services.nginx = mkIf cfg.serve.enable {
enable = mkDefault true;
virtualHosts."${lib.head cfg.webHosts}" = {
serverAliases = cfg.webHosts;
locations = {
"/".extraConfig = "uwsgi_pass unix:/run/mailman-web.socket;";
"/static/".alias = webSettings.STATIC_ROOT + "/";
};
};
};
environment.systemPackages = [ (pkgs.buildEnv {
name = "mailman-tools";
# We don't want to pollute the system PATH with a python
# interpreter etc. so let's pick only the stuff we actually
# want from pythonEnv
pathsToLink = ["/bin"];
paths = [pythonEnv];
postBuild = ''
find $out/bin/ -mindepth 1 -not -name "mailman*" -delete
'';
}) ];
services.postfix = {
recipientDelimiter = "+"; # bake recipient addresses in mail envelopes via VERP
@ -214,181 +282,156 @@ in {
};
};
systemd.services.mailman = {
description = "GNU Mailman Master Process";
after = [ "network.target" ];
restartTriggers = [ config.environment.etc."mailman.cfg".source ];
wantedBy = [ "multi-user.target" ];
serviceConfig = {
ExecStart = "${cfg.package}/bin/mailman start";
ExecStop = "${cfg.package}/bin/mailman stop";
User = "mailman";
Type = "forking";
RuntimeDirectory = "mailman";
PIDFile = "/run/mailman/master.pid";
};
systemd.sockets.mailman-uwsgi = lib.mkIf cfg.serve.enable {
wantedBy = ["sockets.target"];
before = ["nginx.service"];
socketConfig.ListenStream = "/run/mailman-web.socket";
};
systemd.services.mailman-settings = {
description = "Generate settings files (including secrets) for Mailman";
before = [ "mailman.service" "mailman-web.service" "hyperkitty.service" "httpd.service" "uwsgi.service" ];
requiredBy = [ "mailman.service" "mailman-web.service" "hyperkitty.service" "httpd.service" "uwsgi.service" ];
path = with pkgs; [ jq ];
script = ''
mailmanDir=/var/lib/mailman
mailmanWebDir=/var/lib/mailman-web
mailmanCfg=$mailmanDir/mailman-hyperkitty.cfg
mailmanWebCfg=$mailmanWebDir/settings_local.json
install -m 0700 -o mailman -g nogroup -d $mailmanDir
install -m 0700 -o ${cfg.webUser} -g nogroup -d $mailmanWebDir
if [ ! -e $mailmanWebCfg ]; then
hyperkittyApiKey=$(tr -dc A-Za-z0-9 < /dev/urandom | head -c 64)
secretKey=$(tr -dc A-Za-z0-9 < /dev/urandom | head -c 64)
mailmanWebCfgTmp=$(mktemp)
jq -n '.MAILMAN_ARCHIVER_KEY=$archiver_key | .SECRET_KEY=$secret_key' \
--arg archiver_key "$hyperkittyApiKey" \
--arg secret_key "$secretKey" \
>"$mailmanWebCfgTmp"
chown ${cfg.webUser} "$mailmanWebCfgTmp"
mv -n "$mailmanWebCfgTmp" $mailmanWebCfg
fi
hyperkittyApiKey="$(jq -r .MAILMAN_ARCHIVER_KEY $mailmanWebCfg)"
mailmanCfgTmp=$(mktemp)
sed "s/@API_KEY@/$hyperkittyApiKey/g" ${mailmanHyperkittyCfg} >"$mailmanCfgTmp"
chown mailman "$mailmanCfgTmp"
mv "$mailmanCfgTmp" $mailmanCfg
'';
serviceConfig = {
Type = "oneshot";
# RemainAfterExit makes restartIfChanged work for this service, so
# downstream services will get updated automatically when things like
# services.mailman.hyperkitty.baseUrl change. Otherwise users have to
# restart things manually, which is confusing.
RemainAfterExit = "yes";
systemd.services = {
mailman = {
description = "GNU Mailman Master Process";
after = [ "network.target" ];
restartTriggers = [ config.environment.etc."mailman.cfg".source ];
wantedBy = [ "multi-user.target" ];
serviceConfig = {
ExecStart = "${pythonEnv}/bin/mailman start";
ExecStop = "${pythonEnv}/bin/mailman stop";
User = "mailman";
Group = "mailman";
Type = "forking";
RuntimeDirectory = "mailman";
LogsDirectory = "mailman";
PIDFile = "/run/mailman/master.pid";
};
};
};
systemd.services.mailman-web = {
description = "Init Postorius DB";
before = [ "httpd.service" "uwsgi.service" ];
requiredBy = [ "httpd.service" "uwsgi.service" ];
restartTriggers = [ config.environment.etc."mailman3/settings.py".source ];
script = ''
${pkgs.mailman-web}/bin/mailman-web migrate
rm -rf static
${pkgs.mailman-web}/bin/mailman-web collectstatic
${pkgs.mailman-web}/bin/mailman-web compress
'';
serviceConfig = {
User = cfg.webUser;
Type = "oneshot";
# Similar to mailman-settings.service, this makes restartTriggers work
# properly for this service.
RemainAfterExit = "yes";
WorkingDirectory = "/var/lib/mailman-web";
mailman-settings = {
description = "Generate settings files (including secrets) for Mailman";
before = [ "mailman.service" "mailman-web-setup.service" "mailman-uwsgi.service" "hyperkitty.service" ];
requiredBy = [ "mailman.service" "mailman-web-setup.service" "mailman-uwsgi.service" "hyperkitty.service" ];
path = with pkgs; [ jq ];
script = ''
mailmanDir=/var/lib/mailman
mailmanWebDir=/var/lib/mailman-web
mailmanCfg=$mailmanDir/mailman-hyperkitty.cfg
mailmanWebCfg=$mailmanWebDir/settings_local.json
install -m 0775 -o mailman -g mailman -d /var/lib/mailman-web-static
install -m 0770 -o mailman -g mailman -d $mailmanDir
install -m 0770 -o ${cfg.webUser} -g mailman -d $mailmanWebDir
if [ ! -e $mailmanWebCfg ]; then
hyperkittyApiKey=$(tr -dc A-Za-z0-9 < /dev/urandom | head -c 64)
secretKey=$(tr -dc A-Za-z0-9 < /dev/urandom | head -c 64)
mailmanWebCfgTmp=$(mktemp)
jq -n '.MAILMAN_ARCHIVER_KEY=$archiver_key | .SECRET_KEY=$secret_key' \
--arg archiver_key "$hyperkittyApiKey" \
--arg secret_key "$secretKey" \
>"$mailmanWebCfgTmp"
chown root:mailman "$mailmanWebCfgTmp"
chmod 440 "$mailmanWebCfgTmp"
mv -n "$mailmanWebCfgTmp" "$mailmanWebCfg"
fi
hyperkittyApiKey="$(jq -r .MAILMAN_ARCHIVER_KEY "$mailmanWebCfg")"
mailmanCfgTmp=$(mktemp)
sed "s/@API_KEY@/$hyperkittyApiKey/g" ${mailmanHyperkittyCfg} >"$mailmanCfgTmp"
chown mailman:mailman "$mailmanCfgTmp"
mv "$mailmanCfgTmp" "$mailmanCfg"
'';
};
};
systemd.services.mailman-daily = {
description = "Trigger daily Mailman events";
startAt = "daily";
restartTriggers = [ config.environment.etc."mailman.cfg".source ];
serviceConfig = {
ExecStart = "${cfg.package}/bin/mailman digests --send";
User = "mailman";
mailman-web-setup = {
description = "Prepare mailman-web files and database";
before = [ "uwsgi.service" "mailman-uwsgi.service" ];
requiredBy = [ "mailman-uwsgi.service" ];
restartTriggers = [ config.environment.etc."mailman3/settings.py".source ];
script = ''
[[ -e "${webSettings.STATIC_ROOT}" ]] && find "${webSettings.STATIC_ROOT}/" -mindepth 1 -delete
${pythonEnv}/bin/mailman-web migrate
${pythonEnv}/bin/mailman-web collectstatic
${pythonEnv}/bin/mailman-web compress
'';
serviceConfig = {
User = cfg.webUser;
Group = "mailman";
Type = "oneshot";
WorkingDirectory = "/var/lib/mailman-web";
};
};
};
systemd.services.hyperkitty = {
inherit (cfg.hyperkitty) enable;
description = "GNU Hyperkitty QCluster Process";
after = [ "network.target" ];
restartTriggers = [ config.environment.etc."mailman3/settings.py".source ];
wantedBy = [ "mailman.service" "multi-user.target" ];
serviceConfig = {
ExecStart = "${pkgs.mailman-web}/bin/mailman-web qcluster";
User = cfg.webUser;
WorkingDirectory = "/var/lib/mailman-web";
mailman-uwsgi = mkIf cfg.serve.enable (let
uwsgiConfig.uwsgi = {
type = "normal";
plugins = ["python3"];
home = pythonEnv;
module = "mailman_web.wsgi";
};
uwsgiConfigFile = pkgs.writeText "uwsgi-mailman.json" (builtins.toJSON uwsgiConfig);
in {
wantedBy = ["multi-user.target"];
requires = ["mailman-uwsgi.socket" "mailman-web-setup.service"];
restartTriggers = [ config.environment.etc."mailman3/settings.py".source ];
serviceConfig = {
# Since the mailman-web settings.py obstinately creates a logs
# dir in the cwd, change to the (writable) runtime directory before
# starting uwsgi.
ExecStart = "${pkgs.coreutils}/bin/env -C $RUNTIME_DIRECTORY ${pkgs.uwsgi.override { plugins = ["python3"]; }}/bin/uwsgi --json ${uwsgiConfigFile}";
User = cfg.webUser;
Group = "mailman";
RuntimeDirectory = "mailman-uwsgi";
};
});
mailman-daily = {
description = "Trigger daily Mailman events";
startAt = "daily";
restartTriggers = [ config.environment.etc."mailman.cfg".source ];
serviceConfig = {
ExecStart = "${pythonEnv}/bin/mailman digests --send";
User = "mailman";
Group = "mailman";
};
};
};
systemd.services.hyperkitty-minutely = {
inherit (cfg.hyperkitty) enable;
description = "Trigger minutely Hyperkitty events";
startAt = "minutely";
restartTriggers = [ config.environment.etc."mailman3/settings.py".source ];
serviceConfig = {
ExecStart = "${pkgs.mailman-web}/bin/mailman-web runjobs minutely";
User = cfg.webUser;
WorkingDirectory = "/var/lib/mailman-web";
hyperkitty = lib.mkIf cfg.hyperkitty.enable {
description = "GNU Hyperkitty QCluster Process";
after = [ "network.target" ];
restartTriggers = [ config.environment.etc."mailman3/settings.py".source ];
wantedBy = [ "mailman.service" "multi-user.target" ];
serviceConfig = {
ExecStart = "${pythonEnv}/bin/mailman-web qcluster";
User = cfg.webUser;
Group = "mailman";
WorkingDirectory = "/var/lib/mailman-web";
};
};
};
systemd.services.hyperkitty-quarter-hourly = {
inherit (cfg.hyperkitty) enable;
description = "Trigger quarter-hourly Hyperkitty events";
startAt = "*:00/15";
restartTriggers = [ config.environment.etc."mailman3/settings.py".source ];
serviceConfig = {
ExecStart = "${pkgs.mailman-web}/bin/mailman-web runjobs quarter_hourly";
User = cfg.webUser;
WorkingDirectory = "/var/lib/mailman-web";
};
};
systemd.services.hyperkitty-hourly = {
inherit (cfg.hyperkitty) enable;
description = "Trigger hourly Hyperkitty events";
startAt = "hourly";
restartTriggers = [ config.environment.etc."mailman3/settings.py".source ];
serviceConfig = {
ExecStart = "${pkgs.mailman-web}/bin/mailman-web runjobs hourly";
User = cfg.webUser;
WorkingDirectory = "/var/lib/mailman-web";
};
};
systemd.services.hyperkitty-daily = {
inherit (cfg.hyperkitty) enable;
description = "Trigger daily Hyperkitty events";
startAt = "daily";
restartTriggers = [ config.environment.etc."mailman3/settings.py".source ];
serviceConfig = {
ExecStart = "${pkgs.mailman-web}/bin/mailman-web runjobs daily";
User = cfg.webUser;
WorkingDirectory = "/var/lib/mailman-web";
};
};
systemd.services.hyperkitty-weekly = {
inherit (cfg.hyperkitty) enable;
description = "Trigger weekly Hyperkitty events";
startAt = "weekly";
restartTriggers = [ config.environment.etc."mailman3/settings.py".source ];
serviceConfig = {
ExecStart = "${pkgs.mailman-web}/bin/mailman-web runjobs weekly";
User = cfg.webUser;
WorkingDirectory = "/var/lib/mailman-web";
};
};
systemd.services.hyperkitty-yearly = {
inherit (cfg.hyperkitty) enable;
description = "Trigger yearly Hyperkitty events";
startAt = "yearly";
restartTriggers = [ config.environment.etc."mailman3/settings.py".source ];
serviceConfig = {
ExecStart = "${pkgs.mailman-web}/bin/mailman-web runjobs yearly";
User = cfg.webUser;
WorkingDirectory = "/var/lib/mailman-web";
};
};
} // flip lib.mapAttrs' {
"minutely" = "minutely";
"quarter_hourly" = "*:00/15";
"hourly" = "hourly";
"daily" = "daily";
"weekly" = "weekly";
"yearly" = "yearly";
} (name: startAt:
lib.nameValuePair "hyperkitty-${name}" (lib.mkIf cfg.hyperkitty.enable {
description = "Trigger ${name} Hyperkitty events";
inherit startAt;
restartTriggers = [ config.environment.etc."mailman3/settings.py".source ];
serviceConfig = {
ExecStart = "${pythonEnv}/bin/mailman-web runjobs ${name}";
User = cfg.webUser;
Group = "mailman";
WorkingDirectory = "/var/lib/mailman-web";
};
}));
};
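The flip lib.mapAttrs' call above turns each name/calendar pair into a hyperkitty-<name> timer service; spelled out, the first pair is roughly equivalent to:

  systemd.services.hyperkitty-minutely = lib.mkIf cfg.hyperkitty.enable {
    description = "Trigger minutely Hyperkitty events";
    startAt = "minutely";
    restartTriggers = [ config.environment.etc."mailman3/settings.py".source ];
    serviceConfig.ExecStart = "${pythonEnv}/bin/mailman-web runjobs minutely";
    # User, Group and WorkingDirectory as in the generic definition
  };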
meta = {
maintainers = with lib.maintainers; [ lheckemann ];
doc = ./mailman.xml;
};
}


@ -0,0 +1,59 @@
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xi="http://www.w3.org/2001/XInclude"
version="5.0"
xml:id="module-services-mailman">
<title>Mailman</title>
<para>
<link xlink:href="https://www.list.org">Mailman</link> is free
software for managing electronic mail discussion and e-newsletter
lists. Mailman and its web interface can be configured using the
corresponding NixOS module. Note that this service is best used with
an existing, securely configured Postfix setup, as it does not automatically configure this.
</para>
<section xml:id="module-services-mailman-basic-usage">
<title>Basic usage</title>
<para>
For a basic configuration, the following settings are suggested:
<programlisting>{ config, ... }: {
services.postfix = {
enable = true;
relayDomains = ["hash:/var/lib/mailman/data/postfix_domains"];
sslCert = config.security.acme.certs."lists.example.org".directory + "/full.pem";
sslKey = config.security.acme.certs."lists.example.org".directory + "/key.pem";
config = {
transport_maps = ["hash:/var/lib/mailman/data/postfix_lmtp"];
local_recipient_maps = ["hash:/var/lib/mailman/data/postfix_lmtp"];
};
};
services.mailman = {
<link linkend="opt-services.mailman.enable">enable</link> = true;
<link linkend="opt-services.mailman.serve.enable">serve.enable</link> = true;
<link linkend="opt-services.mailman.hyperkitty.enable">hyperkitty.enable</link> = true;
<link linkend="opt-services.mailman.webHosts">webHosts</link> = ["lists.example.org"];
<link linkend="opt-services.mailman.siteOwner">siteOwner</link> = "mailman@example.org";
};
<link linkend="opt-services.nginx.virtualHosts._name_.enableACME">services.nginx.virtualHosts."lists.example.org".enableACME</link> = true;
<link linkend="opt-networking.firewall.allowedTCPPorts">networking.firewall.allowedTCPPorts</link> = [ 25 80 443 ];
}</programlisting>
</para>
<para>
DNS records will also be required:
<itemizedlist>
<listitem><para><literal>AAAA</literal> and <literal>A</literal> records pointing to the host in question, in order for browsers to be able to discover the address of the web server;</para></listitem>
<listitem><para>An <literal>MX</literal> record pointing to a domain name at which the host is reachable, in order for other mail servers to be able to deliver emails to the mailing lists it hosts.</para></listitem>
</itemizedlist>
</para>
<para>
After this has been done and appropriate DNS records have been
set up, the Postorius mailing list manager and the Hyperkitty
archive browser will be available at
https://lists.example.org/. Note that this setup is not
sufficient to deliver emails to most email providers nor to
avoid spam -- a number of additional measures for authenticating
incoming and outgoing mails, such as SPF, DMARC and DKIM, are
necessary, but outside the scope of the Mailman module.
</para>
</section>
</chapter>


@ -240,6 +240,7 @@ in {
'');
serviceConfig = {
ExecStart = "${package}/bin/hass --config '${cfg.configDir}'";
ExecReload = "${pkgs.coreutils}/bin/kill -HUP $MAINPID";
User = "hass";
Group = "hass";
Restart = "on-failure";


@ -727,5 +727,6 @@ in {
];
meta.doc = ./matrix-synapse.xml;
meta.maintainers = teams.matrix.members;
}


@ -39,6 +39,7 @@ let
"node"
"postfix"
"postgres"
"redis"
"rspamd"
"snmp"
"surfboard"
@ -171,15 +172,6 @@ in
(opt: lib.mkRemovedOptionModule [ "services" "prometheus" "${opt}" ] ''
The prometheus exporters are now configured using `services.prometheus.exporters'.
See the 18.03 release notes for more information.
'' ))
++ (lib.forEach [ "enable" "substitutions" "preset" ]
(opt: lib.mkRemovedOptionModule [ "fonts" "fontconfig" "ultimate" "${opt}" ] ''
The fonts.fontconfig.ultimate module and configuration is obsolete.
The repository has since been archived and activity has ceased.
https://github.com/bohoomil/fontconfig-ultimate/issues/171.
No action should be needed for font configuration, as the fonts.fontconfig
module is already used by default.
'' ));
options.services.prometheus.exporters = mkOption {


@ -0,0 +1,19 @@
{ config, lib, pkgs, options }:
with lib;
let
cfg = config.services.prometheus.exporters.redis;
in
{
port = 9121;
serviceOpts = {
serviceConfig = {
ExecStart = ''
${pkgs.prometheus-redis-exporter}/bin/redis_exporter \
-web.listen-address ${cfg.listenAddress}:${toString cfg.port} \
${concatStringsSep " \\\n " cfg.extraFlags}
'';
};
};
}
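A minimal enablement sketch (listenAddress, port and extraFlags come from the shared exporter options; the -redis.addr flag is an assumption about the redis_exporter CLI, check its --help):

  services.prometheus.exporters.redis = {
    enable = true;
    extraFlags = [ "-redis.addr=redis://127.0.0.1:6379" ];
  };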


@ -32,7 +32,10 @@ in {
environment.systemPackages = [ pkgs.tuptime ];
users.users.tuptime.description = "tuptime database owner";
users = {
groups._tuptime.members = [ "_tuptime" ];
users._tuptime.description = "tuptime database owner";
};
systemd = {
services = {
@ -45,7 +48,7 @@ in {
serviceConfig = {
StateDirectory = "tuptime";
Type = "oneshot";
User = "tuptime";
User = "_tuptime";
RemainAfterExit = true;
ExecStart = "${pkgs.tuptime}/bin/tuptime -x";
ExecStop = "${pkgs.tuptime}/bin/tuptime -xg";
@ -57,7 +60,7 @@ in {
serviceConfig = {
StateDirectory = "tuptime";
Type = "oneshot";
User = "tuptime";
User = "_tuptime";
ExecStart = "${pkgs.tuptime}/bin/tuptime -x";
};
};


@ -12,6 +12,19 @@ let
(optionalString (cfg.defaultMode == "norouting") "--routing=none")
] ++ cfg.extraFlags);
splitMulitaddr = addrRaw: lib.tail (lib.splitString "/" addrRaw);
multiaddrToListenStream = addrRaw: let
addr = splitMulitaddr addrRaw;
s = builtins.elemAt addr;
in if s 0 == "ip4" && s 2 == "tcp"
then "${s 1}:${s 3}"
else if s 0 == "ip6" && s 2 == "tcp"
then "[${s 1}]:${s 3}"
else if s 0 == "unix"
then "/${lib.concatStringsSep "/" (lib.tail addr)}"
else null; # not valid for listen stream, skip
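For example, evaluating the helper against the defaults used below (the /dns4 case shows the null fallthrough):

  multiaddrToListenStream "/ip4/0.0.0.0/tcp/4001"     # => "0.0.0.0:4001"
  multiaddrToListenStream "/ip6/::/tcp/4001"          # => "[::]:4001"
  multiaddrToListenStream "/unix/run/ipfs.sock"       # => "/run/ipfs.sock"
  multiaddrToListenStream "/dns4/example.org/tcp/80"  # => null (skipped)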
in {
###### interface
@ -80,7 +93,10 @@ in {
swarmAddress = mkOption {
type = types.listOf types.str;
default = [ "/ip4/0.0.0.0/tcp/4001" "/ip6/::/tcp/4001" ];
default = [
"/ip4/0.0.0.0/tcp/4001"
"/ip6/::/tcp/4001"
];
description = "Where IPFS listens for incoming p2p connections";
};
@ -250,14 +266,18 @@ in {
systemd.sockets.ipfs-gateway = {
wantedBy = [ "sockets.target" ];
socketConfig.ListenStream = [ "" ]
++ lib.optional (cfg.gatewayAddress == opt.gatewayAddress.default) [ "127.0.0.1:8080" "[::1]:8080" ];
socketConfig.ListenStream = let
fromCfg = multiaddrToListenStream cfg.gatewayAddress;
in [ "" ] ++ lib.optional (fromCfg != null) fromCfg;
};
systemd.sockets.ipfs-api = {
wantedBy = [ "sockets.target" ];
socketConfig.ListenStream = [ "" "%t/ipfs.sock" ]
++ lib.optional (cfg.apiAddress == opt.apiAddress.default) [ "127.0.0.1:5001" "[::1]:5001" ];
# We also include "%t/ipfs.sock" because there is no way to put the "%t"
# in the multiaddr.
socketConfig.ListenStream = let
fromCfg = multiaddrToListenStream cfg.apiAddress;
in [ "" "%t/ipfs.sock" ] ++ lib.optional (fromCfg != null) fromCfg;
};
};


@ -124,7 +124,7 @@ in {
<literal>"iponly"</literal>: specifies no authentication. ACLs authorization is used.
</para></listitem>
<listitem><para>
<literal>"strong"</literal>: authentication by username/password. If user is not registered his access is denied regardless of ACLs.
<literal>"strong"</literal>: authentication by username/password. If user is not registered their access is denied regardless of ACLs.
</para></listitem>
</itemizedlist>


@ -0,0 +1,272 @@
{ config, lib, pkgs, ... }:
with lib;
let
eachBlockbook = config.services.blockbook-frontend;
blockbookOpts = { config, lib, name, ...}: {
options = {
enable = mkEnableOption "blockbook-frontend application";
package = mkOption {
type = types.package;
default = pkgs.blockbook;
description = "Which blockbook package to use.";
};
user = mkOption {
type = types.str;
default = "blockbook-frontend-${name}";
description = "The user as which to run blockbook-frontend-${name}.";
};
group = mkOption {
type = types.str;
default = "${config.user}";
description = "The group as which to run blockbook-frontend-${name}.";
};
certFile = mkOption {
type = types.nullOr types.path;
default = null;
example = "/etc/secrets/blockbook-frontend-${name}/certFile";
description = ''
To enable SSL, specify the path to the certificate files without extension.
Expects <filename>certFile.crt</filename> and <filename>certFile.key</filename>.
'';
};
configFile = mkOption {
type = with types; nullOr path;
default = null;
example = "${config.dataDir}/config.json";
description = "Location of the blockbook configuration file.";
};
coinName = mkOption {
type = types.str;
default = "Bitcoin";
example = "Bitcoin";
description = ''
See <link xlink:href="https://github.com/trezor/blockbook/blob/master/bchain/coins/blockchain.go#L61"/>
for the current list of coins supported in master (Note: may differ from release).
'';
};
cssDir = mkOption {
type = types.path;
default = "${config.package}/share/css/";
example = "${config.dataDir}/static/css/";
description = ''
Location of the dir with <filename>main.css</filename> CSS file.
By default, the one shipped with the package is used.
'';
};
dataDir = mkOption {
type = types.path;
default = "/var/lib/blockbook-frontend-${name}";
description = "Location of blockbook-frontend-${name} data directory.";
};
debug = mkOption {
type = types.bool;
default = false;
description = "Debug mode: return more verbose errors and reload templates on each request.";
};
internal = mkOption {
type = types.nullOr types.str;
default = ":9030";
example = ":9030";
description = "Internal http server binding <literal>[address]:port</literal>.";
};
messageQueueBinding = mkOption {
type = types.str;
default = "tcp://127.0.0.1:38330";
example = "tcp://127.0.0.1:38330";
description = "Message Queue Binding <literal>address:port</literal>.";
};
public = mkOption {
type = types.nullOr types.str;
default = ":9130";
example = ":9130";
description = "Public http server binding <literal>[address]:port</literal>.";
};
rpc = {
url = mkOption {
type = types.str;
default = "http://127.0.0.1";
description = "URL for JSON-RPC connections.";
};
port = mkOption {
type = types.port;
default = 8030;
description = "Port for JSON-RPC connections.";
};
user = mkOption {
type = types.str;
default = "rpc";
example = "rpc";
description = "Username for JSON-RPC connections.";
};
password = mkOption {
type = types.str;
default = "rpc";
example = "rpc";
description = ''
RPC password for JSON-RPC connections.
Warning: this is stored in cleartext in the Nix store!
Use <literal>configFile</literal> or <literal>passwordFile</literal> if needed.
'';
};
passwordFile = mkOption {
type = types.nullOr types.path;
default = null;
description = ''
File containing password of the RPC user.
Note: This option is ignored when <literal>configFile</literal> is used.
'';
};
};
sync = mkOption {
type = types.bool;
default = true;
description = "Synchronizes until the tip; if used together with zeromq, keeps the index synchronized.";
};
templateDir = mkOption {
type = types.path;
default = "${config.package}/share/templates/";
example = "${config.dataDir}/templates/static/";
description = "Location of the HTML templates. By default, ones shipped with the package are used.";
};
extraConfig = mkOption {
type = types.attrs;
default = {};
example = literalExample '' {
alternative_estimate_fee = "whatthefee-disabled";
alternative_estimate_fee_params = "{\"url\": \"https://whatthefee.io/data.json\", \"periodSeconds\": 60}";
fiat_rates = "coingecko";
fiat_rates_params = "{\"url\": \"https://api.coingecko.com/api/v3\", \"coin\": \"bitcoin\", \"periodSeconds\": 60}";
coin_shortcut = "BTC";
coin_label = "Bitcoin";
xpub_magic = 76067358;
xpub_magic_segwit_p2sh = 77429938;
xpub_magic_segwit_native = 78792518;
}'';
description = ''
Additional configurations to be appended to <filename>coin.conf</filename>.
Overrides any already defined configuration options.
See <link xlink:href="https://github.com/trezor/blockbook/tree/master/configs/coins"/>
for current configuration options supported in master (Note: may differ from release).
'';
};
extraCmdLineOptions = mkOption {
type = types.listOf types.str;
default = [];
example = [ "-workers=1" "-dbcache=0" "-logtostderr" ];
description = ''
Extra command line options to pass to Blockbook.
Run blockbook --help to list all available options.
'';
};
};
};
in
{
# interface
options = {
services.blockbook-frontend = mkOption {
type = types.attrsOf (types.submodule blockbookOpts);
default = {};
description = "Specification of one or more blockbook-frontend instances.";
};
};
# implementation
config = mkIf (eachBlockbook != {}) {
systemd.services = mapAttrs' (blockbookName: cfg: (
nameValuePair "blockbook-frontend-${blockbookName}" (
let
configFile = if cfg.configFile != null then cfg.configFile else
pkgs.writeText "config.conf" (builtins.toJSON ( {
coin_name = "${cfg.coinName}";
rpc_user = "${cfg.rpc.user}";
rpc_pass = "${cfg.rpc.password}";
rpc_url = "${cfg.rpc.url}:${toString cfg.rpc.port}";
message_queue_binding = "${cfg.messageQueueBinding}";
} // cfg.extraConfig)
);
in {
description = "blockbook-frontend-${blockbookName} daemon";
after = [ "network.target" ];
wantedBy = [ "multi-user.target" ];
preStart = ''
ln -sf ${cfg.templateDir} ${cfg.dataDir}/static/
ln -sf ${cfg.cssDir} ${cfg.dataDir}/static/
${optionalString (cfg.rpc.passwordFile != null && cfg.configFile == null) ''
CONFIGTMP=$(mktemp)
${pkgs.jq}/bin/jq ".rpc_pass = \"$(cat ${cfg.rpc.passwordFile})\"" ${configFile} > $CONFIGTMP
mv $CONFIGTMP ${cfg.dataDir}/${blockbookName}-config.json
''}
'';
serviceConfig = {
User = cfg.user;
Group = cfg.group;
ExecStart = ''
${cfg.package}/bin/blockbook \
${if (cfg.rpc.passwordFile != null && cfg.configFile == null) then
"-blockchaincfg=${cfg.dataDir}/${blockbookName}-config.json"
else
"-blockchaincfg=${configFile}"
} \
-datadir=${cfg.dataDir} \
${optionalString (cfg.sync != false) "-sync"} \
${optionalString (cfg.certFile != null) "-certfile=${toString cfg.certFile}"} \
${optionalString (cfg.debug != false) "-debug"} \
${optionalString (cfg.internal != null) "-internal=${toString cfg.internal}"} \
${optionalString (cfg.public != null) "-public=${toString cfg.public}"} \
${toString cfg.extraCmdLineOptions}
'';
Restart = "on-failure";
WorkingDirectory = cfg.dataDir;
LimitNOFILE = 65536;
};
}
) )) eachBlockbook;
systemd.tmpfiles.rules = flatten (mapAttrsToList (blockbookName: cfg: [
"d ${cfg.dataDir} 0750 ${cfg.user} ${cfg.group} - -"
"d ${cfg.dataDir}/static 0750 ${cfg.user} ${cfg.group} - -"
]) eachBlockbook);
users.users = mapAttrs' (blockbookName: cfg: (
nameValuePair "blockbook-frontend-${blockbookName}" {
name = cfg.user;
group = cfg.group;
home = cfg.dataDir;
isSystemUser = true;
})) eachBlockbook;
users.groups = mapAttrs' (instanceName: cfg: (
nameValuePair "${cfg.group}" { })) eachBlockbook;
};
}
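A hedged usage sketch for the module (the instance name "btc" and the secret path are placeholders):

  services.blockbook-frontend."btc" = {
    enable = true;
    coinName = "Bitcoin";
    rpc = {
      user = "rpc";
      passwordFile = "/etc/secrets/blockbook-btc-rpc";  # hypothetical path
    };
  };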


@ -77,6 +77,8 @@ in {
AmbientCapabilities = "CAP_NET_ADMIN CAP_NET_RAW";
NoNewPrivileges = true;
DynamicUser = true;
Type = "notify";
NotifyAccess = "main";
ExecStart = "${getBin cfg.package}/bin/corerad -c=${cfg.configFile}";
Restart = "on-failure";
};


@ -11,8 +11,6 @@ let
# build nsd with the options needed for the given config
nsdPkg = pkgs.nsd.override {
configFile = "${configFile}/nsd.conf";
bind8Stats = cfg.bind8Stats;
ipv6 = cfg.ipv6;
ratelimit = cfg.ratelimit.enable;
@ -897,7 +895,10 @@ in
+ "want, please enable 'services.nsd.rootServer'.";
};
environment.systemPackages = [ nsdPkg ];
environment = {
systemPackages = [ nsdPkg ];
etc."nsd/nsd.conf".source = "${configFile}/nsd.conf";
};
users.groups.${username}.gid = config.ids.gids.nsd;


@ -0,0 +1,72 @@
{ config, lib, pkgs, ... }:
let
cfg = config.services.onedrive;
onedriveLauncher = pkgs.writeShellScriptBin
"onedrive-launcher"
''
# XDG_CONFIG_HOME is not recognized in the environment here.
if [ -f "$HOME/.config/onedrive-launcher" ]
then
# Hopefully using underscore boundary helps locate variables
for _onedrive_config_dirname_ in $(grep -v '^[ \t]*#' "$HOME/.config/onedrive-launcher")
do
systemctl --user start onedrive@$_onedrive_config_dirname_
done
else
systemctl --user start onedrive@onedrive
fi
''
;
in {
### Documentation
# meta.doc = ./onedrive.xml;
### Interface
options.services.onedrive = {
enable = lib.mkOption {
type = lib.types.bool;
default = false;
description = "Enable OneDrive service";
};
package = lib.mkOption {
type = lib.types.package;
default = pkgs.onedrive;
defaultText = "pkgs.onedrive";
example = lib.literalExample "pkgs.onedrive";
description = ''
OneDrive package to use.
'';
};
};
### Implementation
config = lib.mkIf cfg.enable {
environment.systemPackages = [ cfg.package ];
systemd.user.services."onedrive@" = {
description = "Onedrive sync service";
serviceConfig = {
Type = "simple";
ExecStart = ''
${cfg.package}/bin/onedrive --monitor --verbose --confdir=%h/.config/%i
'';
Restart="on-failure";
RestartSec=3;
RestartPreventExitStatus=3;
};
};
systemd.user.services.onedrive-launcher = {
wantedBy = [ "default.target" ];
serviceConfig = {
Type = "oneshot";
ExecStart = "${onedriveLauncher}/bin/onedrive-launcher";
};
};
};
}


@ -0,0 +1,34 @@
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xi="http://www.w3.org/2001/XInclude"
version="5.0"
xml:id="onedrive">
<title>Microsoft OneDrive</title>
<para>
Microsoft OneDrive is a popular cloud file-hosting service, used by 85% of Fortune 500 companies. NixOS uses a popular OneDrive client for Linux maintained by GitHub user abraunegg. The Linux client is excellent and allows customization of which files or paths to download, much like the default Windows OneDrive client by Microsoft itself. The client allows syncing with multiple OneDrive accounts at the same time, of any type: OneDrive personal, OneDrive business, Office365 and SharePoint libraries, without any additional charge.
</para>
<para>
For more information, guides and documentation, see <link xlink:href="https://abraunegg.github.io/"/>.
</para>
<para>
To enable OneDrive support, add the following to your <filename>configuration.nix</filename>:
<programlisting>
<xref linkend="opt-services.onedrive.enable"/> = true;
</programlisting>
This installs the <literal>onedrive</literal> package and a service <literal>onedrive-launcher</literal> which will instantiate a <literal>onedrive</literal> service for all your OneDrive accounts. Follow the steps in the documentation of the onedrive client to set up your accounts. To use the service with multiple accounts, create a file named <filename>onedrive-launcher</filename> in <filename>~/.config</filename> and add the filename of each config directory, relative to <filename>~/.config</filename>, one per line. For example, if you have two OneDrive accounts with configs in <filename>~/.config/onedrive_bob_work</filename> and <filename>~/.config/onedrive_bob_personal</filename>, add the following lines:
<programlisting>
onedrive_bob_work
# Not in use:
# onedrive_bob_office365
onedrive_bob_personal
</programlisting>
No such file needs to be created if you are using only a single OneDrive account with its config in the default location <filename>~/.config/onedrive</filename>; in the absence of <filename>~/.config/onedrive-launcher</filename>, only a single service is instantiated, with the default config path.
</para>
<para>
If you wish to use a custom OneDrive package, say from another channel, add the following line:
<programlisting>
<xref linkend="opt-services.onedrive.package"/> = pkgs.unstable.onedrive;
</programlisting>
</para>
</chapter>


@ -8,8 +8,10 @@ let
confFile = pkgs.writeText "radicale.conf" cfg.config;
# This enables us to default to version 2 while still not breaking configurations of people with version 1
defaultPackage = if versionAtLeast config.system.stateVersion "17.09" then {
defaultPackage = if versionAtLeast config.system.stateVersion "20.09" then {
pkg = pkgs.radicale3;
text = "pkgs.radicale3";
} else if versionAtLeast config.system.stateVersion "17.09" then {
pkg = pkgs.radicale2;
text = "pkgs.radicale2";
} else {
@ -35,8 +37,9 @@ in
defaultText = defaultPackage.text;
description = ''
Radicale package to use. This defaults to version 1.x if
<literal>system.stateVersion &lt; 17.09</literal> and version 2.x
otherwise.
<literal>system.stateVersion &lt; 17.09</literal>, version 2.x if
<literal>17.09 &le; system.stateVersion &lt; 20.09</literal>, and
version 3.x otherwise.
'';
};
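If the stateVersion-based default is not what you want, the package can be pinned explicitly; a one-line sketch:

  services.radicale.package = pkgs.radicale3;  # opt in to 3.x regardless of stateVersion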


@ -109,8 +109,8 @@ in
httpListenAddr = mkOption {
type = types.str;
default = "0.0.0.0";
example = "1.2.3.4";
default = "[::1]";
example = "0.0.0.0";
description = ''
HTTP address to bind to.
'';
@ -206,16 +206,16 @@ in
If you would like to be able to modify the contents of this
directories, it is recommended that you make your user a
member of the <literal>resilio</literal> group.
member of the <literal>rslsync</literal> group.
Directories in this list should be in the
<literal>resilio</literal> group, and that group must have
<literal>rslsync</literal> group, and that group must have
write access to the directory. It is also recommended that
<literal>chmod g+s</literal> is applied to the directory
so that any sub directories created will also belong to
the <literal>resilio</literal> group. Also,
<literal>setfacl -d -m group:resilio:rwx</literal> and
<literal>setfacl -m group:resilio:rwx</literal> should also
the <literal>rslsync</literal> group. Also,
<literal>setfacl -d -m group:rslsync:rwx</literal> and
<literal>setfacl -m group:rslsync:rwx</literal> should also
be applied so that the sub directories are writable by
the group.
'';


@ -15,7 +15,11 @@ let
listen:
(
{ host: "${cfg.listenAddress}"; port: "${toString cfg.port}"; }
${
concatMapStringsSep ",\n"
(addr: ''{ host: "${addr}"; port: "${toString cfg.port}"; }'')
cfg.listenAddresses
}
);
${cfg.appendConfig}
@ -33,6 +37,10 @@ let
'';
in
{
imports = [
(mkRenamedOptionModule [ "services" "sslh" "listenAddress" ] [ "services" "sslh" "listenAddresses" ])
];
options = {
services.sslh = {
enable = mkEnableOption "sslh";
@ -55,10 +63,10 @@ in
description = "Will the services behind sslh (Apache, sshd and so on) see the external IP and ports as if the external world connected directly to them";
};
listenAddress = mkOption {
type = types.str;
default = "0.0.0.0";
description = "Listening address or hostname.";
listenAddresses = mkOption {
type = types.coercedTo types.str singleton (types.listOf types.str);
default = [ "0.0.0.0" "[::]" ];
description = "Listening addresses or hostnames.";
};
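Thanks to coercedTo, both the old single-string form and the new list form still evaluate; a sketch (the addresses are documentation examples):

  # A bare string is coerced to a singleton list...
  services.sslh.listenAddresses = "0.0.0.0";
  # ...or provide an explicit list:
  services.sslh.listenAddresses = [ "192.0.2.1" "[2001:db8::1]" ];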
port = mkOption {


@ -0,0 +1,158 @@
{ config, lib, pkgs, ... }:
let
cfg = config.services.wasabibackend;
inherit (lib) mkEnableOption mkIf mkOption optionalAttrs optionalString types;
confOptions = {
BitcoinRpcConnectionString = "${cfg.rpc.user}:${cfg.rpc.password}";
} // optionalAttrs (cfg.network == "mainnet") {
Network = "Main";
MainNetBitcoinP2pEndPoint = "${cfg.endpoint.ip}:${toString cfg.endpoint.port}";
MainNetBitcoinCoreRpcEndPoint = "${cfg.rpc.ip}:${toString cfg.rpc.port}";
} // optionalAttrs (cfg.network == "testnet") {
Network = "TestNet";
TestNetBitcoinP2pEndPoint = "${cfg.endpoint.ip}:${toString cfg.endpoint.port}";
TestNetBitcoinCoreRpcEndPoint = "${cfg.rpc.ip}:${toString cfg.rpc.port}";
} // optionalAttrs (cfg.network == "regtest") {
Network = "RegTest";
RegTestBitcoinP2pEndPoint = "${cfg.endpoint.ip}:${toString cfg.endpoint.port}";
RegTestBitcoinCoreRpcEndPoint = "${cfg.rpc.ip}:${toString cfg.rpc.port}";
};
configFile = pkgs.writeText "wasabibackend.conf" (builtins.toJSON confOptions);
in {
options = {
services.wasabibackend = {
enable = mkEnableOption "Wasabi backend service";
dataDir = mkOption {
type = types.path;
default = "/var/lib/wasabibackend";
description = "The data directory for the Wasabi backend node.";
};
customConfigFile = mkOption {
type = types.nullOr types.path;
default = null;
description = "Defines the path to a custom configuration file that is copied to the user's directory. Overrides any config options.";
};
network = mkOption {
type = types.enum [ "mainnet" "testnet" "regtest" ];
default = "mainnet";
description = "The network to use for the Wasabi backend service.";
};
endpoint = {
ip = mkOption {
type = types.str;
default = "127.0.0.1";
description = "IP address for P2P connection to bitcoind.";
};
port = mkOption {
type = types.port;
default = 8333;
description = "Port for P2P connection to bitcoind.";
};
};
rpc = {
ip = mkOption {
type = types.str;
default = "127.0.0.1";
description = "IP address for RPC connection to bitcoind.";
};
port = mkOption {
type = types.port;
default = 8332;
description = "Port for RPC connection to bitcoind.";
};
user = mkOption {
type = types.str;
default = "bitcoin";
description = "RPC user for the bitcoin endpoint.";
};
password = mkOption {
type = types.str;
default = "password";
description = "RPC password for the bitcoin endpoint. Warning: this is stored in cleartext in the Nix store! Use <literal>customConfigFile</literal> or <literal>passwordFile</literal> if needed.";
};
passwordFile = mkOption {
type = types.nullOr types.path;
default = null;
description = "File that contains the password of the RPC user.";
};
};
user = mkOption {
type = types.str;
default = "wasabibackend";
description = "The user as which to run the wasabibackend node.";
};
group = mkOption {
type = types.str;
default = cfg.user;
description = "The group as which to run the wasabibackend node.";
};
};
};
config = mkIf cfg.enable {
systemd.tmpfiles.rules = [
"d '${cfg.dataDir}' 0770 '${cfg.user}' '${cfg.group}' - -"
];
systemd.services.wasabibackend = {
description = "wasabibackend server";
wantedBy = [ "multi-user.target" ];
after = [ "network-online.target" ];
environment = {
DOTNET_PRINT_TELEMETRY_MESSAGE = "false";
DOTNET_CLI_TELEMETRY_OPTOUT = "true";
};
preStart = ''
mkdir -p ${cfg.dataDir}/.walletwasabi/backend
${if cfg.customConfigFile != null then ''
cp -v ${cfg.customConfigFile} ${cfg.dataDir}/.walletwasabi/backend/Config.json
'' else ''
cp -v ${configFile} ${cfg.dataDir}/.walletwasabi/backend/Config.json
${optionalString (cfg.rpc.passwordFile != null) ''
CONFIGTMP=$(mktemp)
cat ${cfg.dataDir}/.walletwasabi/backend/Config.json | ${pkgs.jq}/bin/jq --arg rpconnection "${cfg.rpc.user}:$(cat "${cfg.rpc.passwordFile}")" '. + { BitcoinRpcConnectionString: $rpconnection }' > $CONFIGTMP
mv $CONFIGTMP ${cfg.dataDir}/.walletwasabi/backend/Config.json
''}
''}
chmod ug+w ${cfg.dataDir}/.walletwasabi/backend/Config.json
'';
serviceConfig = {
User = cfg.user;
Group = cfg.group;
ExecStart = "${pkgs.wasabibackend}/bin/WasabiBackend";
ProtectSystem = "full";
};
};
users.users.${cfg.user} = {
name = cfg.user;
group = cfg.group;
description = "wasabibackend daemon user";
home = cfg.dataDir;
isSystemUser = true;
};
users.groups.${cfg.group} = {};
};
}
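A minimal enablement sketch (the bitcoind credentials and secret path are placeholders):

  services.wasabibackend = {
    enable = true;
    network = "mainnet";
    rpc = {
      user = "bitcoin";
      passwordFile = "/etc/secrets/bitcoind-rpc";  # hypothetical path
    };
  };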


@ -107,6 +107,7 @@ in
++ cfg.lockOn.extraTargets;
before = optional cfg.lockOn.suspend "systemd-suspend.service"
++ optional cfg.lockOn.hibernate "systemd-hibernate.service"
++ optional (cfg.lockOn.hibernate || cfg.lockOn.suspend) "systemd-suspend-then-hibernate.service"
++ cfg.lockOn.extraTargets;
serviceConfig = {
Type = "forking";


@ -159,7 +159,7 @@ in
type = types.bool;
default = false;
description = ''
Wheter to enable Tor control socket. Control socket is created
Whether to enable Tor control socket. Control socket is created
in <literal>${torRunDirectory}/control</literal>
'';
};


@ -93,7 +93,7 @@ in
type = types.bool;
default = true;
description = ''
Wheter to enable HSTS if HTTPS is also enabled.
Whether to enable HSTS if HTTPS is also enabled.
'';
};
maxAgeSeconds = mkOption {
@ -385,7 +385,7 @@ in
type = types.bool;
default = true;
description = ''
Wether to enable email registration.
Whether to enable email registration.
'';
};
allowGravatar = mkOption {


@ -0,0 +1,72 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.services.convos;
in
{
options.services.convos = {
enable = mkEnableOption "Convos";
listenPort = mkOption {
type = types.port;
default = 3000;
example = 8080;
description = "Port the web interface should listen on";
};
listenAddress = mkOption {
type = types.str;
default = "*";
example = "127.0.0.1";
description = "Address or host the web interface should listen on";
};
reverseProxy = mkOption {
type = types.bool;
default = false;
description = ''
Enables reverse proxy support. This will allow Convos to automatically
pick up the <literal>X-Forwarded-For</literal> and
<literal>X-Request-Base</literal> HTTP headers set in your reverse proxy
web server. Note that enabling this option without a reverse proxy in
front will be a security issue.
'';
};
};
config = mkIf cfg.enable {
systemd.services.convos = {
description = "Convos Service";
wantedBy = [ "multi-user.target" ];
after = [ "network.target" ];
environment = {
CONVOS_HOME = "%S/convos";
CONVOS_REVERSE_PROXY = if cfg.reverseProxy then "1" else "0";
MOJO_LISTEN = "http://${toString cfg.listenAddress}:${toString cfg.listenPort}";
};
serviceConfig = {
ExecStart = "${pkgs.convos}/bin/convos daemon";
Restart = "on-failure";
StateDirectory = "convos";
WorkingDirectory = "%S/convos";
DynamicUser = true;
MemoryDenyWriteExecute = true;
ProtectHome = true;
ProtectClock = true;
ProtectHostname = true;
ProtectKernelTunables = true;
ProtectKernelModules = true;
ProtectKernelLogs = true;
ProtectControlGroups = true;
PrivateDevices = true;
PrivateMounts = true;
PrivateUsers = true;
LockPersonality = true;
RestrictRealtime = true;
RestrictNamespaces = true;
RestrictAddressFamilies = [ "AF_INET" "AF_INET6"];
SystemCallFilter = "@system-service";
SystemCallArchitectures = "native";
CapabilityBoundingSet = "";
};
};
};
}
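A sketch of running Convos behind a local reverse proxy (the proxy configuration itself is left out; reverseProxy is only safe because something sits in front):

  services.convos = {
    enable = true;
    listenAddress = "127.0.0.1";
    listenPort = 3000;
    reverseProxy = true;
  };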


@ -17,6 +17,10 @@ let
lib.generators.toGitINI cfg.settings
);
replicationConfig = pkgs.writeText "replication.conf" (
lib.generators.toGitINI cfg.replicationSettings
);
# Wrap the gerrit java with all the java options so it can be called
# like a normal CLI app
gerrit-cli = pkgs.writeShellScriptBin "gerrit" ''
@ -106,6 +110,15 @@ in
'';
};
replicationSettings = mkOption {
type = gitIniType;
default = {};
description = ''
Replication configuration. This will be generated to the
<literal>etc/replication.config</literal> file.
'';
};
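Since replicationSettings is rendered with toGitINI, a replication remote would be declared roughly as below (the remote name and URL are placeholders, and the nested attrset-to-subsection mapping is an assumption about toGitINI; the ${name} placeholder is expanded by the plugin, hence the escaping). Remember the assertion further down: the replication plugin must be in builtinPlugins.

  services.gerrit = {
    builtinPlugins = [ "replication" ];
    replicationSettings = {
      gerrit.replicateOnStartup = true;
      remote.backup = {
        url = "ssh://git@mirror.example.org/\${name}.git";
        mirror = true;
      };
    };
  };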
plugins = mkOption {
type = types.listOf types.package;
default = [];
@ -138,6 +151,13 @@ in
config = mkIf cfg.enable {
assertions = [
{
assertion = cfg.replicationSettings != {} -> elem "replication" cfg.builtinPlugins;
message = "Gerrit replicationSettings require enabling the replication plugin";
}
];
services.gerrit.settings = {
cache.directory = "/var/cache/gerrit";
container.heapLimit = cfg.jvmHeapLimit;
@ -194,6 +214,7 @@ in
# copy the config, keep it mutable because Gerrit
ln -sfv ${gerritConfig} etc/gerrit.config
ln -sfv ${replicationConfig} etc/replication.config
# install the plugins
rm -rf plugins


@ -69,7 +69,7 @@ in {
package = mkOption {
type = types.package;
description = "Which package to use for the Nextcloud instance.";
relatedPackages = [ "nextcloud17" "nextcloud18" ];
relatedPackages = [ "nextcloud17" "nextcloud18" "nextcloud19" ];
};
maxUploadSize = mkOption {
@ -303,6 +303,14 @@ in {
'';
};
};
occ = mkOption {
type = types.package;
default = occ;
internal = true;
description = ''
The nextcloud-occ program preconfigured to target this Nextcloud instance.
'';
};
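The internal occ option lets other NixOS config call the preconfigured CLI; a hedged sketch (the service name is made up):

  systemd.services.nextcloud-status-check = {
    serviceConfig.Type = "oneshot";
    script = "${config.services.nextcloud.occ}/bin/nextcloud-occ status";
  };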
};
config = mkIf cfg.enable (mkMerge [
@ -336,7 +344,16 @@ in {
server, and wait until the upgrade to 17 is finished.
Then, set `services.nextcloud.package` to `pkgs.nextcloud18` to upgrade to
Nextcloud version 18.
Nextcloud version 18. Please note that Nextcloud 19 is already out, and it's
recommended to upgrade to nextcloud19 once the upgrade to 18 has finished.
'')
++ (optional (versionOlder cfg.package.version "19") ''
A legacy Nextcloud install (from before NixOS 20.09/unstable) may be installed.
If/after nextcloud18 is installed successfully, you can safely upgrade to
nextcloud19. If not, please upgrade to nextcloud18 first, since Nextcloud doesn't
support upgrades that skip multiple versions (i.e. an upgrade from 17 to 19 isn't
possible, but an upgrade from 18 to 19 is).
'');
services.nextcloud.package = with pkgs;
@ -348,7 +365,8 @@ in {
`pkgs.nextcloud`.
''
else if versionOlder stateVersion "20.03" then nextcloud17
else nextcloud18
else if versionOlder stateVersion "20.09" then nextcloud18
else nextcloud19
);
}
@ -360,6 +378,11 @@ in {
};
systemd.services = {
# When upgrading the Nextcloud package, Nextcloud can report errors such as
# "The files of the app [all apps in /var/lib/nextcloud/apps] were not replaced correctly"
# Restarting phpfpm on Nextcloud package update fixes these issues (but this is a workaround).
phpfpm-nextcloud.restartTriggers = [ cfg.package ];
nextcloud-setup = let
c = cfg.config;
writePhpArrary = a: "[${concatMapStringsSep "," (val: ''"${toString val}"'') a}]";


@ -161,5 +161,11 @@
};
}</programlisting>
</para>
<para>
Ideally we should make sure that it's possible to jump two NixOS versions forward:
i.e. the warnings and the logic in the module should guide a user upgrading from a
Nextcloud on e.g. 19.09 to a Nextcloud on 20.09.
</para>
</section>
</chapter>


@ -708,6 +708,7 @@ in
wantedBy = [ "multi-user.target" ];
wants = concatLists (map (hostOpts: [ "acme-${hostOpts.hostName}.service" "acme-selfsigned-${hostOpts.hostName}.service" ]) vhostsACME);
after = [ "network.target" "fs.target" ] ++ map (hostOpts: "acme-selfsigned-${hostOpts.hostName}.service") vhostsACME;
before = map (hostOpts: "acme-${hostOpts.hostName}.service") vhostsACME;
path = [ pkg pkgs.coreutils pkgs.gnugrep ];


@ -693,6 +693,10 @@ in
wantedBy = [ "multi-user.target" ];
wants = concatLists (map (vhostConfig: ["acme-${vhostConfig.serverName}.service" "acme-selfsigned-${vhostConfig.serverName}.service"]) acmeEnabledVhosts);
after = [ "network.target" ] ++ map (vhostConfig: "acme-selfsigned-${vhostConfig.serverName}.service") acmeEnabledVhosts;
# Nginx needs to be started in order to be able to request certificates
# (it's hosting the acme challenge after all)
# This fixes https://github.com/NixOS/nixpkgs/issues/81842
before = map (vhostConfig: "acme-${vhostConfig.serverName}.service") acmeEnabledVhosts;
stopIfChanged = false;
preStart = ''
${cfg.preStart}


@ -0,0 +1,81 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.boot.initrd.network.openvpn;
in
{
options = {
boot.initrd.network.openvpn.enable = mkOption {
type = types.bool;
default = false;
description = ''
Starts an OpenVPN client during initrd boot. It can be used to e.g.
remotely access the SSH service controlled by
<option>boot.initrd.network.ssh</option>, or other included network
services. The service is killed when stage-1 boot is finished.
'';
};
boot.initrd.network.openvpn.configuration = mkOption {
type = types.path; # Same type as boot.initrd.secrets
description = ''
The configuration file for OpenVPN.
<warning>
<para>
Unless your bootloader supports initrd secrets, this configuration
is stored insecurely in the global Nix store.
</para>
</warning>
'';
example = "./configuration.ovpn";
};
};
config = mkIf (config.boot.initrd.network.enable && cfg.enable) {
assertions = [
{
assertion = cfg.configuration != null;
message = "You should specify a configuration for initrd OpenVPN";
}
];
# Add kernel modules needed for OpenVPN
boot.initrd.kernelModules = [ "tun" "tap" ];
# Add openvpn and ip binaries to the initrd
# The shared libraries are required for DNS resolution
boot.initrd.extraUtilsCommands = ''
copy_bin_and_libs ${pkgs.openvpn}/bin/openvpn
copy_bin_and_libs ${pkgs.iproute}/bin/ip
cp -pv ${pkgs.glibc}/lib/libresolv.so.2 $out/lib
cp -pv ${pkgs.glibc}/lib/libnss_dns.so.2 $out/lib
'';
boot.initrd.secrets = {
"/etc/initrd.ovpn" = cfg.configuration;
};
# openvpn --version would exit with 1 instead of 0
boot.initrd.extraUtilsCommandsTest = ''
$out/bin/openvpn --show-gateway
'';
# Add `iproute /bin/ip` to the config, to ensure that openvpn
# is able to set the routes
boot.initrd.network.postCommands = ''
(cat /etc/initrd.ovpn; echo -e '\niproute /bin/ip') | \
openvpn /dev/stdin &
'';
};
}
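A usage sketch (the .ovpn file is a placeholder; it is copied into the initrd as a secret per boot.initrd.secrets above):

  boot.initrd.network.enable = true;
  boot.initrd.network.openvpn = {
    enable = true;
    configuration = ./initrd-client.ovpn;  # hypothetical file next to configuration.nix
  };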


@ -54,7 +54,7 @@ let
type = types.bool // { merge = mergeFalseByDefault; };
default = false;
description = ''
Wether option should generate a failure when unused.
Whether option should generate a failure when unused.
Upon merging values, mandatory wins over optional.
'';
};


@ -4,11 +4,15 @@ with lib;
let
blCfg = config.boot.loader;
dtCfg = config.hardware.deviceTree;
cfg = blCfg.generic-extlinux-compatible;
timeoutStr = if blCfg.timeout == null then "-1" else toString blCfg.timeout;
# The builder used to write during system activation
builder = import ./extlinux-conf-builder.nix { inherit pkgs; };
# The builder exposed in populateCmd, which runs on the build architecture
populateBuilder = import ./extlinux-conf-builder.nix { pkgs = pkgs.buildPackages; };
in
{
options = {
@ -34,11 +38,28 @@ in
Maximum number of configurations in the boot menu.
'';
};
populateCmd = mkOption {
type = types.str;
readOnly = true;
description = ''
Contains the builder command used to populate an image,
honoring all options except the <literal>-c &lt;path-to-default-configuration&gt;</literal>
argument.
Useful to have for <literal>sdImage.populateRootCommands</literal>.
'';
};
};
};
config = mkIf cfg.enable {
system.build.installBootLoader = "${builder} -g ${toString cfg.configurationLimit} -t ${timeoutStr} -c";
system.boot.loader.id = "generic-extlinux-compatible";
};
config = let
builderArgs = "-g ${toString cfg.configurationLimit} -t ${timeoutStr}" + lib.optionalString (dtCfg.name != null) " -n ${dtCfg.name}";
in
mkIf cfg.enable {
system.build.installBootLoader = "${builder} ${builderArgs} -c";
system.boot.loader.id = "generic-extlinux-compatible";
boot.loader.generic-extlinux-compatible.populateCmd = "${populateBuilder} ${builderArgs}";
};
}
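populateCmd is meant to be consumed by image builders; a hedged sketch of how an sd-image build would use it:

  sdImage.populateRootCommands = ''
    mkdir -p ./files/boot
    ${config.boot.loader.generic-extlinux-compatible.populateCmd} \
      -c ${config.system.build.toplevel} -d ./files/boot
  '';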


@ -6,7 +6,7 @@ export PATH=/empty
for i in @path@; do PATH=$PATH:$i/bin; done
usage() {
echo "usage: $0 -t <timeout> -c <path-to-default-configuration> [-d <boot-dir>] [-g <num-generations>]" >&2
echo "usage: $0 -t <timeout> -c <path-to-default-configuration> [-d <boot-dir>] [-g <num-generations>] [-n <dtbName>]" >&2
exit 1
}
@ -15,7 +15,7 @@ default= # Default configuration
target=/boot # Target directory
numGenerations=0 # Number of other generations to include in the menu
while getopts "t:c:d:g:" opt; do
while getopts "t:c:d:g:n:" opt; do
case "$opt" in
t) # U-Boot interprets '0' as infinite and negative as instant boot
if [ "$OPTARG" -lt 0 ]; then
@ -29,6 +29,7 @@ while getopts "t:c:d:g:" opt; do
c) default="$OPTARG" ;;
d) target="$OPTARG" ;;
g) numGenerations="$OPTARG" ;;
n) dtbName="$OPTARG" ;;
\?) usage ;;
esac
done
@ -96,7 +97,17 @@ addEntry() {
echo " LINUX ../nixos/$(basename $kernel)"
echo " INITRD ../nixos/$(basename $initrd)"
if [ -d "$dtbDir" ]; then
echo " FDTDIR ../nixos/$(basename $dtbs)"
# if a dtbName was specified explicitly, use that, else use FDTDIR
if [ -n "$dtbName" ]; then
echo " FDT ../nixos/$(basename $dtbs)/${dtbName}"
else
echo " FDTDIR ../nixos/$(basename $dtbs)"
fi
else
if [ -n "$dtbName" ]; then
echo "Explicitly requested dtbName $dtbName, but there's no FDTDIR - bailing out." >&2
exit 1
fi
fi
echo " APPEND systemConfig=$path init=$path/init $extraParams"
}


@ -55,6 +55,7 @@ let
storePath = config.boot.loader.grub.storePath;
bootloaderId = if args.efiBootloaderId == null then "NixOS${efiSysMountPoint'}" else args.efiBootloaderId;
timeout = if config.boot.loader.timeout == null then -1 else config.boot.loader.timeout;
users = if cfg.users == {} || cfg.version != 1 then cfg.users else throw "GRUB version 1 does not support user accounts.";
inherit efiSysMountPoint;
inherit (args) devices;
inherit (efi) canTouchEfiVariables;
@ -137,6 +138,67 @@ in
'';
};
users = mkOption {
default = {};
example = {
root = { hashedPasswordFile = "/path/to/file"; };
};
description = ''
User accounts for GRUB. When specified, the GRUB command line and
all boot options except the default are password-protected.
All passwords and hashes provided will be stored in /boot/grub/grub.cfg,
and will be visible to any local user who can read this file. Additionally,
any passwords and hashes provided directly in a Nix configuration
(as opposed to external files) will be copied into the Nix store, and
will be visible to all local users.
'';
type = with types; attrsOf (submodule {
options = {
hashedPasswordFile = mkOption {
example = "/path/to/file";
default = null;
type = with types; uniq (nullOr str);
description = ''
Specifies the path to a file containing the password hash
for the account, generated with grub-mkpasswd-pbkdf2.
This hash will be stored in /boot/grub/grub.cfg, and will
be visible to any local user who can read this file.
'';
};
hashedPassword = mkOption {
example = "grub.pbkdf2.sha512.10000.674DFFDEF76E13EA...2CC972B102CF4355";
default = null;
type = with types; uniq (nullOr str);
description = ''
Specifies the password hash for the account,
generated with grub-mkpasswd-pbkdf2.
This hash will be copied to the Nix store, and will be visible to all local users.
'';
};
passwordFile = mkOption {
example = "/path/to/file";
default = null;
type = with types; uniq (nullOr str);
description = ''
Specifies the path to a file containing the
clear text password for the account.
This password will be stored in /boot/grub/grub.cfg, and will
be visible to any local user who can read this file.
'';
};
password = mkOption {
example = "Pa$$w0rd!";
default = null;
type = with types; uniq (nullOr str);
description = ''
Specifies the clear text password for the account.
This password will be copied to the Nix store, and will be visible to all local users.
'';
};
};
});
};
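A sketch of locking down the menu (the hash file path is a placeholder; generate the PBKDF2 hash with grub-mkpasswd-pbkdf2):

  boot.loader.grub.users.root = {
    # keeps the hash out of the world-readable Nix store
    hashedPasswordFile = "/etc/nixos/secrets/grub-root.pbkdf2";  # hypothetical path
  };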
mirroredBoots = mkOption {
default = [ ];
example = [


@ -247,6 +247,45 @@ if ($grubVersion == 1) {
}
else {
my @users = ();
foreach my $user ($dom->findnodes('/expr/attrs/attr[@name = "users"]/attrs/attr')) {
my $name = $user->findvalue('@name') or die;
my $hashedPassword = $user->findvalue('./attrs/attr[@name = "hashedPassword"]/string/@value');
my $hashedPasswordFile = $user->findvalue('./attrs/attr[@name = "hashedPasswordFile"]/string/@value');
my $password = $user->findvalue('./attrs/attr[@name = "password"]/string/@value');
my $passwordFile = $user->findvalue('./attrs/attr[@name = "passwordFile"]/string/@value');
if ($hashedPasswordFile) {
open(my $f, '<', $hashedPasswordFile) or die "Can't read file '$hashedPasswordFile'!";
$hashedPassword = <$f>;
chomp $hashedPassword;
}
if ($passwordFile) {
open(my $f, '<', $passwordFile) or die "Can't read file '$passwordFile'!";
$password = <$f>;
chomp $password;
}
if ($hashedPassword) {
if (index($hashedPassword, "grub.pbkdf2.") == 0) {
$conf .= "\npassword_pbkdf2 $name $hashedPassword";
}
else {
die "Password hash for GRUB user '$name' is not valid!";
}
}
elsif ($password) {
$conf .= "\npassword $name $password";
}
else {
die "GRUB user '$name' has no password!";
}
push(@users, $name);
}
if (@users) {
$conf .= "\nset superusers=\"" . join(' ',@users) . "\"\n";
}
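# For illustration (user name hypothetical), the fragment this appends to
# grub.cfg for a single user "alice" looks roughly like:
#
#   password_pbkdf2 alice grub.pbkdf2.sha512.10000.674DFF...
#   set superusers="alice"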
if ($copyKernels == 0) {
$conf .= "
" . $grubStore->search;
@ -350,7 +389,7 @@ sub copyToKernelsDir {
}
sub addEntry {
my ($name, $path) = @_;
my ($name, $path, $options) = @_;
return unless -e "$path/kernel" && -e "$path/initrd";
my $kernel = copyToKernelsDir(Cwd::abs_path("$path/kernel"));
@ -396,7 +435,7 @@ sub addEntry {
$conf .= " " . ($xen ? "module" : "kernel") . " $kernel $kernelParams\n";
$conf .= " " . ($xen ? "module" : "initrd") . " $initrd\n\n";
} else {
$conf .= "menuentry \"$name\" {\n";
$conf .= "menuentry \"$name\" " . ($options||"") . " {\n";
$conf .= $grubBoot->search . "\n";
if ($copyKernels == 0) {
$conf .= $grubStore->search . "\n";
@ -413,7 +452,7 @@ sub addEntry {
# Add default entries.
$conf .= "$extraEntries\n" if $extraEntriesBeforeNixOS;
addEntry("NixOS - Default", $defaultConfig);
addEntry("NixOS - Default", $defaultConfig, "--unrestricted");
$conf .= "$extraEntries\n" unless $extraEntriesBeforeNixOS;


@ -1,4 +1,7 @@
#! @bash@/bin/sh -e
#! @bash@/bin/sh
# This can end up being called with the shebang disregarded, so set options explicitly.
set -e
shopt -s nullglob


@ -47,9 +47,9 @@ def write_loader_conf(profile, generation):
if "@timeout@" != "":
f.write("timeout @timeout@\n")
if profile:
f.write("default nixos-%s-generation-%d.conf\n".format(profile, generation))
f.write("default nixos-%s-generation-%d.conf\n" % (profile, generation))
else:
f.write("default nixos-generation-%d.conf\n".format(generation))
f.write("default nixos-generation-%d.conf\n" % (generation))
if not @editor@:
f.write("editor 0\n");
f.write("console-mode @consoleMode@\n");


@ -102,6 +102,8 @@ in
systemd.services.plymouth-poweroff.wantedBy = [ "poweroff.target" ];
systemd.services.plymouth-reboot.wantedBy = [ "reboot.target" ];
systemd.services.plymouth-read-write.wantedBy = [ "sysinit.target" ];
systemd.services.systemd-ask-password-plymouth.wantedBy = ["multi-user.target"];
systemd.paths.systemd-ask-password-plymouth.wantedBy = ["multi-user.target"];
boot.initrd.extraUtilsCommands = ''
copy_bin_and_libs ${pkgs.plymouth}/bin/plymouthd
@ -146,6 +148,7 @@ in
# We use `mkAfter` to ensure that LUKS password prompt would be shown earlier than the splash screen.
boot.initrd.preLVMCommands = mkAfter ''
mkdir -p /etc/plymouth
mkdir -p /run/plymouth
ln -s ${configFile} /etc/plymouth/plymouthd.conf
ln -s $extraUtils/share/plymouth/plymouthd.defaults /etc/plymouth/plymouthd.defaults
ln -s $extraUtils/share/plymouth/logo.png /etc/plymouth/logo.png


@ -233,6 +233,7 @@ in rec {
path = mkOption {
default = [];
type = with types; listOf (oneOf [ package str ]);
apply = ps: "${makeBinPath ps}:${makeSearchPathOutput "bin" "sbin" ps}";
description = ''
Packages added to the service's <envar>PATH</envar>


@ -128,7 +128,10 @@ in
Nice = 19;
IOSchedulingClass = "idle";
ExecStart = "${pkgs.btrfs-progs}/bin/btrfs scrub start -B ${fs}";
ExecStop = "${pkgs.btrfs-progs}/bin/btrfs scrub cancel ${fs}";
# if the service is stopped before the scrub ends, cancel it
ExecStop = pkgs.writeShellScript "btrfs-scrub-maybe-cancel" ''
(${pkgs.btrfs-progs}/bin/btrfs scrub status ${fs} | ${pkgs.gnugrep}/bin/grep finished) || ${pkgs.btrfs-progs}/bin/btrfs scrub cancel ${fs}
'';
};
};
in listToAttrs (map scrubService cfgScrub.fileSystems);


@ -197,9 +197,7 @@ in
Request encryption keys or passwords for all encrypted datasets on import.
For root pools the encryption key can be supplied via both an
interactive prompt (keylocation=prompt) and from a file
(keylocation=file://). Note that for data pools the encryption key can
be only loaded from a file and not via interactive prompt since the
import is processed in a background systemd service.
(keylocation=file://).
'';
};
@ -490,7 +488,11 @@ in
description = "Import ZFS pool \"${pool}\"";
# we need systemd-udev-settle until https://github.com/zfsonlinux/zfs/pull/4943 is merged
requires = [ "systemd-udev-settle.service" ];
after = [ "systemd-udev-settle.service" "systemd-modules-load.service" ];
after = [
"systemd-udev-settle.service"
"systemd-modules-load.service"
"systemd-ask-password-console.service"
];
wantedBy = (getPoolMounts pool) ++ [ "local-fs.target" ];
before = (getPoolMounts pool) ++ [ "local-fs.target" ];
unitConfig = {
@ -515,7 +517,20 @@ in
done
poolImported "${pool}" || poolImport "${pool}" # Try one last time, e.g. to import a degraded pool.
if poolImported "${pool}"; then
${optionalString cfgZfs.requestEncryptionCredentials "\"${packages.zfsUser}/sbin/zfs\" load-key -r \"${pool}\""}
${optionalString cfgZfs.requestEncryptionCredentials ''
${packages.zfsUser}/sbin/zfs list -rHo name,keylocation ${pool} | while IFS=$'\t' read ds kl; do
(case "$kl" in
none )
;;
prompt )
${config.systemd.package}/bin/systemd-ask-password "Enter key for $ds:" | ${packages.zfsUser}/sbin/zfs load-key "$ds"
;;
* )
${packages.zfsUser}/sbin/zfs load-key "$ds"
;;
esac) < /dev/null # protect the "while read ds kl" loop in case anything reads stdin
done
''}
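# Sketch of what the loop above consumes: `zfs list -rHo name,keylocation`
# prints tab-separated pairs, e.g. (hypothetical datasets):
#
#   rpool           none
#   rpool/safe      prompt
#   rpool/backup    file:///etc/zfs/backup.key
#
# "prompt" datasets are now asked for interactively via systemd-ask-password,
# while file-based keys keep loading as before.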
echo "Successfully imported ${pool}"
else
exit 1


@ -34,6 +34,25 @@ in
'';
};
containersConf = mkOption {
default = {};
description = "containers.conf configuration";
type = types.submodule {
options = {
extraConfig = mkOption {
type = types.lines;
default = "";
description = ''
Extra configuration that should be put in the containers.conf
configuration file
'';
};
};
};
};
registries = {
search = mkOption {
type = types.listOf types.str;
@ -93,6 +112,12 @@ in
config = lib.mkIf cfg.enable {
environment.etc."containers/containers.conf".text = ''
[network]
cni_plugin_dirs = ["${pkgs.cni-plugins}/bin/"]
'' + cfg.containersConf.extraConfig;
environment.etc."containers/registries.conf".source = toTOML "registries.conf" {
registries = lib.mapAttrs (n: v: { registries = v; }) cfg.registries;
};
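# Usage sketch (settings hypothetical): extraConfig is appended after the
# generated [network] section, e.g.
#
#   virtualisation.containers.containersConf.extraConfig = ''
#     [engine]
#     detach_keys = "ctrl-e,e"
#   '';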


@ -28,6 +28,10 @@ let
in
{
imports = [
(lib.mkRenamedOptionModule [ "virtualisation" "podman" "libpod" ] [ "virtualisation" "containers" "containersConf" ])
];
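# Migration sketch: a configuration that previously set
#   virtualisation.podman.libpod.extraConfig = "...";
# is now rewritten by the rename above to
#   virtualisation.containers.containersConf.extraConfig = "...";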
meta = {
maintainers = lib.teams.podman.members;
};
@ -67,25 +71,6 @@ in
'';
};
libpod = mkOption {
default = {};
description = "Libpod configuration";
type = types.submodule {
options = {
extraConfig = mkOption {
type = types.lines;
default = "";
description = ''
Extra configuration that should be put in the libpod.conf
configuration file
'';
};
};
};
};
package = lib.mkOption {
type = types.package;
default = podmanPackage;
@ -103,11 +88,6 @@ in
environment.systemPackages = [ cfg.package ]
++ lib.optional cfg.dockerCompat dockerCompat;
environment.etc."containers/libpod.conf".text = ''
cni_plugin_dir = ["${pkgs.cni-plugins}/bin/"]
'' + cfg.libpod.extraConfig;
environment.etc."cni/net.d/87-podman-bridge.conflist".source = copyFile "${pkgs.podman-unwrapped.src}/cni/87-podman-bridge.conflist";
# Enable common /etc/containers configuration


@ -46,6 +46,13 @@ let
description = "Extra options passed to device flag.";
};
name = mkOption {
type = types.nullOr types.str;
default = null;
description =
"A name for the drive. Must be unique in the drives list. Not passed to qemu.";
};
};
};
@ -74,6 +81,24 @@ let
drivesCmdLine = drives: concatStringsSep " " (imap1 driveCmdline drives);
# Creates a device name from a 1-based numerical index, e.g.
# * `driveDeviceName 1` -> `/dev/vda`
# * `driveDeviceName 2` -> `/dev/vdb`
driveDeviceName = idx:
let letter = elemAt lowerChars (idx - 1);
in if cfg.qemu.diskInterface == "scsi" then
"/dev/sd${letter}"
else
"/dev/vd${letter}";
lookupDriveDeviceName = driveName: driveList:
(findSingle (drive: drive.name == driveName)
(throw "Drive ${driveName} not found")
(throw "Multiple drives named ${driveName}") driveList).device;
addDeviceNames =
imap1 (idx: drive: drive // { device = driveDeviceName idx; });
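# Sketch with a hypothetical two-drive list: given
#   [ { name = "root"; ... } { name = "boot"; ... } ]
# addDeviceNames assigns /dev/vda and /dev/vdb (or /dev/sda and /dev/sdb
# with the scsi disk interface), and `lookupDriveDeviceName "boot" drives`
# then returns "/dev/vdb".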
# Shell script to start the VM.
startVM =
''
@ -396,6 +421,7 @@ in
mkOption {
type = types.listOf (types.submodule driveOpts);
description = "Drives passed to qemu.";
apply = addDeviceNames;
};
diskInterface =
@ -512,8 +538,7 @@ in
optional cfg.writableStore "overlay"
++ optional (cfg.qemu.diskInterface == "scsi") "sym53c8xx";
virtualisation.bootDevice =
mkDefault (if cfg.qemu.diskInterface == "scsi" then "/dev/sda" else "/dev/vda");
virtualisation.bootDevice = mkDefault (driveDeviceName 1);
virtualisation.pathsInNixDB = [ config.system.build.toplevel ];
@ -542,25 +567,20 @@ in
];
virtualisation.qemu.drives = mkMerge [
[{
name = "root";
file = "$NIX_DISK_IMAGE";
driveExtraOpts.cache = "writeback";
driveExtraOpts.werror = "report";
}]
(mkIf cfg.useBootLoader [
{
file = "$NIX_DISK_IMAGE";
driveExtraOpts.cache = "writeback";
driveExtraOpts.werror = "report";
}
{
name = "boot";
file = "$TMPDIR/disk.img";
driveExtraOpts.media = "disk";
deviceExtraOpts.bootindex = "1";
}
])
(mkIf (!cfg.useBootLoader) [
{
file = "$NIX_DISK_IMAGE";
driveExtraOpts.cache = "writeback";
driveExtraOpts.werror = "report";
}
])
(imap0 (idx: _: {
file = "$(pwd)/empty${toString idx}.qcow2";
driveExtraOpts.werror = "report";
@ -608,7 +628,7 @@ in
};
} // optionalAttrs cfg.useBootLoader
{ "/boot" =
{ device = "/dev/vdb2";
{ device = "${lookupDriveDeviceName "boot" cfg.qemu.drives}2";
fsType = "vfat";
options = [ "ro" ];
noCheck = true; # fsck fails on a r/o filesystem


@ -68,7 +68,7 @@ in
SUBSYSTEM=="misc", KERNEL=="vboxguest", TAG+="systemd"
'';
} (mkIf cfg.x11 {
services.xserver.videoDrivers = mkOverride 50 [ "virtualbox" "modesetting" ];
services.xserver.videoDrivers = mkOverride 50 [ "vmware" "virtualbox" "modesetting" ];
services.xserver.config =
''


@ -72,6 +72,7 @@ in {
audiocontroller = "ac97";
audio = "alsa";
audioout = "on";
graphicscontroller = "vmsvga";
rtcuseutc = "on";
usb = "on";
usbehci = "on";


@ -48,10 +48,9 @@ in import ./make-test-python.nix ({ lib, ... }: {
security.acme.certs."standalone.test" = {
webroot = "/var/lib/acme/acme-challenges";
};
systemd.targets."acme-finished-standalone.test" = {};
systemd.services."acme-standalone.test" = {
wants = [ "acme-finished-standalone.test.target" ];
before = [ "acme-finished-standalone.test.target" ];
systemd.targets."acme-finished-standalone.test" = {
after = [ "acme-standalone.test.service" ];
wantedBy = [ "acme-standalone.test.service" ];
};
services.nginx.enable = true;
services.nginx.virtualHosts."standalone.test" = {
@ -68,11 +67,9 @@ in import ./make-test-python.nix ({ lib, ... }: {
# A target remains active. Use this to probe the fact that
# a service fired even though it is not RemainAfterExit
systemd.targets."acme-finished-a.example.test" = {};
systemd.services."acme-a.example.test" = {
wants = [ "acme-finished-a.example.test.target" ];
before = [ "acme-finished-a.example.test.target" ];
after = [ "nginx.service" ];
systemd.targets."acme-finished-a.example.test" = {
after = [ "acme-a.example.test.service" ];
wantedBy = [ "acme-a.example.test.service" ];
};
services.nginx.enable = true;
@ -89,11 +86,9 @@ in import ./make-test-python.nix ({ lib, ... }: {
security.acme.server = "https://acme.test/dir";
specialisation.second-cert.configuration = {pkgs, ...}: {
systemd.targets."acme-finished-b.example.test" = {};
systemd.services."acme-b.example.test" = {
wants = [ "acme-finished-b.example.test.target" ];
before = [ "acme-finished-b.example.test.target" ];
after = [ "nginx.service" ];
systemd.targets."acme-finished-b.example.test" = {
after = [ "acme-b.example.test.service" ];
wantedBy = [ "acme-b.example.test.service" ];
};
services.nginx.virtualHosts."b.example.test" = {
enableACME = true;
@ -104,6 +99,7 @@ in import ./make-test-python.nix ({ lib, ... }: {
'';
};
};
specialisation.dns-01.configuration = {pkgs, config, nodes, lib, ...}: {
security.acme.certs."example.test" = {
domain = "*.example.test";
@ -115,10 +111,12 @@ in import ./make-test-python.nix ({ lib, ... }: {
user = config.services.nginx.user;
group = config.services.nginx.group;
};
systemd.targets."acme-finished-example.test" = {};
systemd.targets."acme-finished-example.test" = {
after = [ "acme-example.test.service" ];
wantedBy = [ "acme-example.test.service" ];
};
systemd.services."acme-example.test" = {
wants = [ "acme-finished-example.test.target" ];
before = [ "acme-finished-example.test.target" "nginx.service" ];
before = [ "nginx.service" ];
wantedBy = [ "nginx.service" ];
};
services.nginx.virtualHosts."c.example.test" = {
@ -132,6 +130,26 @@ in import ./make-test-python.nix ({ lib, ... }: {
'';
};
};
# When nginx depends on a service that is slow to start up, requesting
# certificates used to fail. Reproducer for https://github.com/NixOS/nixpkgs/issues/81842
specialisation.slow-startup.configuration = { pkgs, config, nodes, lib, ...}: {
systemd.services.my-slow-service = {
wantedBy = [ "multi-user.target" "nginx.service" ];
before = [ "nginx.service" ];
preStart = "sleep 5";
script = "${pkgs.python3}/bin/python -m http.server";
};
systemd.targets."acme-finished-d.example.com" = {
after = [ "acme-d.example.com.service" ];
wantedBy = [ "acme-d.example.com.service" ];
};
services.nginx.virtualHosts."d.example.com" = {
forceSSL = true;
enableACME = true;
locations."/".proxyPass = "http://localhost:8000";
};
};
};
client = {nodes, lib, ...}: {
@ -207,5 +225,15 @@ in import ./make-test-python.nix ({ lib, ... }: {
client.succeed(
"curl --cacert /tmp/ca.crt https://c.example.test/ | grep -qF 'hello world'"
)
with subtest("Can request certificate of nginx when startup is delayed"):
webserver.succeed(
"${switchToNewServer}"
)
webserver.succeed(
"/run/current-system/specialisation/slow-startup/bin/switch-to-configuration test"
)
webserver.wait_for_unit("acme-finished-d.example.com.target")
client.succeed("curl --cacert /tmp/ca.crt https://d.example.com/")
'';
})


@ -33,6 +33,7 @@ in
bees = handleTest ./bees.nix {};
bind = handleTest ./bind.nix {};
bittorrent = handleTest ./bittorrent.nix {};
blockbook-frontend = handleTest ./blockbook-frontend.nix {};
buildkite-agents = handleTest ./buildkite-agents.nix {};
boot = handleTestOn ["x86_64-linux"] ./boot.nix {}; # syslinux is unsupported on aarch64
boot-stage1 = handleTest ./boot-stage1.nix {};
@ -65,6 +66,7 @@ in
containers-portforward = handleTest ./containers-portforward.nix {};
containers-restart_networking = handleTest ./containers-restart_networking.nix {};
containers-tmpfs = handleTest ./containers-tmpfs.nix {};
convos = handleTest ./convos.nix {};
corerad = handleTest ./corerad.nix {};
couchdb = handleTest ./couchdb.nix {};
deluge = handleTest ./deluge.nix {};
@ -124,6 +126,7 @@ in
grafana = handleTest ./grafana.nix {};
graphite = handleTest ./graphite.nix {};
graylog = handleTest ./graylog.nix {};
grub = handleTest ./grub.nix {};
gvisor = handleTest ./gvisor.nix {};
hadoop.hdfs = handleTestOn [ "x86_64-linux" ] ./hadoop/hdfs.nix {};
hadoop.yarn = handleTestOn [ "x86_64-linux" ] ./hadoop/yarn.nix {};
@ -148,6 +151,7 @@ in
incron = handleTest ./incron.nix {};
influxdb = handleTest ./influxdb.nix {};
initrd-network-ssh = handleTest ./initrd-network-ssh {};
initrd-network-openvpn = handleTest ./initrd-network-openvpn {};
initrdNetwork = handleTest ./initrd-network.nix {};
installer = handleTest ./installer.nix {};
iodine = handleTest ./iodine.nix {};
@ -236,6 +240,7 @@ in
nginx-pubhtml = handleTest ./nginx-pubhtml.nix {};
nginx-sandbox = handleTestOn ["x86_64-linux"] ./nginx-sandbox.nix {};
nginx-sso = handleTest ./nginx-sso.nix {};
nginx-variants = handleTest ./nginx-variants.nix {};
nix-ssh-serve = handleTest ./nix-ssh-serve.nix {};
nixos-generate-config = handleTest ./nixos-generate-config.nix {};
novacomd = handleTestOn ["x86_64-linux"] ./novacomd.nix {};
@ -306,6 +311,7 @@ in
spacecookie = handleTest ./spacecookie.nix {};
spike = handleTest ./spike.nix {};
sonarr = handleTest ./sonarr.nix {};
sslh = handleTest ./sslh.nix {};
strongswan-swanctl = handleTest ./strongswan-swanctl.nix {};
sudo = handleTest ./sudo.nix {};
switchTest = handleTest ./switch-test.nix {};
@ -345,6 +351,7 @@ in
vault = handleTest ./vault.nix {};
victoriametrics = handleTest ./victoriametrics.nix {};
virtualbox = handleTestOn ["x86_64-linux"] ./virtualbox.nix {};
wasabibackend = handleTest ./wasabibackend.nix {};
wireguard = handleTest ./wireguard {};
wordpress = handleTest ./wordpress.nix {};
xandikos = handleTest ./xandikos.nix {};


@ -0,0 +1,28 @@
import ./make-test-python.nix ({ pkgs, ... }: {
name = "blockbook-frontend";
meta = with pkgs.stdenv.lib; {
maintainers = with maintainers; [ maintainers."1000101" ];
};
machine = { ... }: {
services.blockbook-frontend."test" = {
enable = true;
};
services.bitcoind = {
enable = true;
rpc = {
port = 8030;
users.rpc.passwordHMAC = "acc2374e5f9ba9e62a5204d3686616cf$53abdba5e67a9005be6a27ca03a93ce09e58854bc2b871523a0d239a72968033";
};
};
};
testScript = ''
start_all()
machine.wait_for_unit("blockbook-frontend-test.service")
machine.wait_for_open_port(9030)
machine.succeed("curl -sSfL http://localhost:9030 | grep 'Blockbook'")
'';
})


@ -55,30 +55,33 @@ let
server = index: { pkgs, ... }:
let
ip = builtins.elemAt allConsensusServerHosts index;
numConsensusServers = builtins.length allConsensusServerHosts;
thisConsensusServerHost = builtins.elemAt allConsensusServerHosts index;
ip = thisConsensusServerHost; # since we already use IPs to identify servers
in
{
networking.interfaces.eth1.ipv4.addresses = pkgs.lib.mkOverride 0 [
{ address = builtins.elemAt allConsensusServerHosts index; prefixLength = 16; }
{ address = ip; prefixLength = 16; }
];
networking.firewall = firewallSettings;
services.consul =
let
thisConsensusServerHost = builtins.elemAt allConsensusServerHosts index;
in
assert builtins.elem thisConsensusServerHost allConsensusServerHosts;
{
enable = true;
inherit webUi;
extraConfig = defaultExtraConfig // {
server = true;
bootstrap_expect = builtins.length allConsensusServerHosts;
bootstrap_expect = numConsensusServers;
# Tell Consul that we never intend to drop below this many servers.
# Ensures to not permanently lose consensus after temporary loss.
# See https://github.com/hashicorp/consul/issues/8118#issuecomment-645330040
autopilot.min_quorum = numConsensusServers;
retry_join =
# If there's only 1 node in the network, we allow self-join;
# otherwise, the node must not try to join itself, and join only the other servers.
# See https://github.com/hashicorp/consul/issues/2868
if builtins.length allConsensusServerHosts == 1
if numConsensusServers == 1
then allConsensusServerHosts
else builtins.filter (h: h != thisConsensusServerHost) allConsensusServerHosts;
bind_addr = ip;
@ -104,40 +107,123 @@ in {
for m in machines:
m.wait_for_unit("consul.service")
for m in machines:
m.wait_until_succeeds("[ $(consul members | grep -o alive | wc -l) == 5 ]")
def wait_for_healthy_servers():
# See https://github.com/hashicorp/consul/issues/8118#issuecomment-645330040
# for why the `Voter` column of `list-peers` has that info.
# TODO: The `grep true` relies on the fact that currently in
# the output like
# # consul operator raft list-peers
# Node ID Address State Voter RaftProtocol
# server3 ... 192.168.1.3:8300 leader true 3
# server2 ... 192.168.1.2:8300 follower true 3
# server1 ... 192.168.1.1:8300 follower false 3
# `Voter` is the only boolean column.
# Change this to the more reliable way to be defined by
# https://github.com/hashicorp/consul/issues/8118
# once that ticket is closed.
for m in machines:
m.wait_until_succeeds(
"[ $(consul operator raft list-peers | grep true | wc -l) == 3 ]"
)
def wait_for_all_machines_alive():
"""
Note that Serf-"alive" does not mean "Raft"-healthy;
see `wait_for_healthy_servers()` for that instead.
"""
for m in machines:
m.wait_until_succeeds("[ $(consul members | grep -o alive | wc -l) == 5 ]")
wait_for_healthy_servers()
# Also wait for clients to be alive.
wait_for_all_machines_alive()
client1.succeed("consul kv put testkey 42")
client2.succeed("[ $(consul kv get testkey) == 42 ]")
# Test that the cluster can tolerate failures of any single server:
for server in servers:
server.crash()
# For each client, wait until they have connection again
# using `kv get -recurse` before issuing commands.
client1.wait_until_succeeds("consul kv get -recurse")
client2.wait_until_succeeds("consul kv get -recurse")
def rolling_reboot_test(proper_rolling_procedure=True):
"""
Tests that the cluster can tolerate failures of any single server,
following the recommended rolling upgrade procedure from
https://www.consul.io/docs/upgrading#standard-upgrades.
# Do some consul actions while one server is down.
client1.succeed("consul kv put testkey 43")
client2.succeed("[ $(consul kv get testkey) == 43 ]")
client2.succeed("consul kv delete testkey")
Optionally, `proper_rolling_procedure=False` can be given
to wait only for each server to be back `Healthy`, not `Stable`
in the Raft consensus, see Consul setting `ServerStabilizationTime` and
https://github.com/hashicorp/consul/issues/8118#issuecomment-645330040.
"""
# Restart crashed machine.
server.start()
for server in servers:
server.crash()
# For each client, wait until they have connection again
# using `kv get -recurse` before issuing commands.
client1.wait_until_succeeds("consul kv get -recurse")
client2.wait_until_succeeds("consul kv get -recurse")
# Do some consul actions while one server is down.
client1.succeed("consul kv put testkey 43")
client2.succeed("[ $(consul kv get testkey) == 43 ]")
client2.succeed("consul kv delete testkey")
# Restart crashed machine.
server.start()
if proper_rolling_procedure:
# Wait for recovery.
wait_for_healthy_servers()
else:
# NOT proper rolling upgrade procedure, see above.
wait_for_all_machines_alive()
# Wait for client connections.
client1.wait_until_succeeds("consul kv get -recurse")
client2.wait_until_succeeds("consul kv get -recurse")
# Do some consul actions with server back up.
client1.succeed("consul kv put testkey 44")
client2.succeed("[ $(consul kv get testkey) == 44 ]")
client2.succeed("consul kv delete testkey")
def all_servers_crash_simultaneously_test():
"""
Tests that the cluster will eventually come back after all
servers crash simultaneously.
"""
for server in servers:
server.crash()
for server in servers:
server.start()
# Wait for recovery.
for m in machines:
m.wait_until_succeeds("[ $(consul members | grep -o alive | wc -l) == 5 ]")
wait_for_healthy_servers()
# Wait for client connections.
client1.wait_until_succeeds("consul kv get -recurse")
client2.wait_until_succeeds("consul kv get -recurse")
# Do some consul actions with server back up.
# Do some consul actions with servers back up.
client1.succeed("consul kv put testkey 44")
client2.succeed("[ $(consul kv get testkey) == 44 ]")
client2.succeed("consul kv delete testkey")
# Run the tests.
print("rolling_reboot_test()")
rolling_reboot_test()
print("all_servers_crash_simultaneously_test()")
all_servers_crash_simultaneously_test()
print("rolling_reboot_test(proper_rolling_procedure=False)")
rolling_reboot_test(proper_rolling_procedure=False)
'';
})

nixos/tests/convos.nix (new file)

@ -0,0 +1,30 @@
import ./make-test-python.nix ({ lib, pkgs, ... }:
with lib;
let
port = 3333;
in
{
name = "convos";
meta = with pkgs.stdenv.lib.maintainers; {
maintainers = [ sgo ];
};
nodes = {
machine =
{ pkgs, ... }:
{
services.convos = {
enable = true;
listenPort = port;
};
};
};
testScript = ''
machine.wait_for_unit("convos")
machine.wait_for_open_port("${toString port}")
machine.succeed("journalctl -u convos | grep -q 'Listening at.*${toString port}'")
machine.succeed("curl http://localhost:${toString port}/")
'';
})


@ -42,6 +42,20 @@ import ./make-test-python.nix ({ pkgs, ... }: {
"docker rmi ${examples.nix.imageName}",
)
with subtest("The nix binary symlinks are intact"):
docker.succeed(
"docker load --input='${examples.nix}'",
"docker run --rm ${examples.nix.imageName} ${pkgs.bash}/bin/bash -c 'test nix == $(readlink ${pkgs.nix}/bin/nix-daemon)'",
"docker rmi ${examples.nix.imageName}",
)
with subtest("The nix binary symlinks are intact when the image is layered"):
docker.succeed(
"docker load --input='${examples.nixLayered}'",
"docker run --rm ${examples.nixLayered.imageName} ${pkgs.bash}/bin/bash -c 'test nix == $(readlink ${pkgs.nix}/bin/nix-daemon)'",
"docker rmi ${examples.nixLayered.imageName}",
)
with subtest("The pullImage tool works"):
docker.succeed(
"docker load --input='${examples.nixFromDockerHub}'",


@ -43,7 +43,7 @@ import ./make-test-python.nix ({ pkgs, ...} : {
docker.fail("sudo -u noprivs docker ps")
docker.succeed("docker stop sleeping")
# Must match version twice to ensure client and server versions are correct
docker.succeed('[ $(docker version | grep ${pkgs.docker.version} | wc -l) = "2" ]')
# Must match version 4 times to ensure client and server git commits and versions are correct
docker.succeed('[ $(docker version | grep ${pkgs.docker.version} | wc -l) = "4" ]')
'';
})


@ -32,8 +32,9 @@ let
in {
name = "dokuwiki";
meta.maintainers = with pkgs.lib.maintainers; [ "1000101" ];
meta = with pkgs.stdenv.lib; {
maintainers = with maintainers; [ maintainers."1000101" ];
};
machine = { ... }: {
services.dokuwiki."site1.local" = {
aclUse = false;


@ -1,11 +1,6 @@
import ./make-test-python.nix ({ pkgs, ... } :
{
name = "graphite";
meta = {
# Fails on dependency `python-2.7-Twisted`'s test suite
# complaining `ImportError: No module named zope.interface`.
broken = true;
};
nodes = {
one =
{ ... }: {
@ -21,7 +16,7 @@ import ./make-test-python.nix ({ pkgs, ... } :
api = {
enable = true;
port = 8082;
finders = [ pkgs.python3Packages.influxgraph ];
finders = [ ];
};
carbon.enableCache = true;
seyren.enable = false; # Implicitly requires openssl-1.0.2u which is marked insecure
@ -41,10 +36,14 @@ import ./make-test-python.nix ({ pkgs, ... } :
# even if they're still in preStart (which takes quite long for graphiteWeb).
# Wait for ports to open so we're sure the services are up and listening.
one.wait_for_open_port(8080)
one.wait_for_open_port(8082)
one.wait_for_open_port(2003)
one.succeed('echo "foo 1 `date +%s`" | nc -N localhost 2003')
one.wait_until_succeeds(
"curl 'http://localhost:8080/metrics/find/?query=foo&format=treejson' --silent | grep foo >&2"
)
one.wait_until_succeeds(
"curl 'http://localhost:8082/metrics/find/?query=foo&format=treejson' --silent | grep foo >&2"
)
'';
})

nixos/tests/grub.nix (new file)

@ -0,0 +1,60 @@
import ./make-test-python.nix ({ lib, ... }: {
name = "grub";
meta = with lib.maintainers; {
maintainers = [ rnhmjoj ];
};
machine = { ... }: {
virtualisation.useBootLoader = true;
boot.loader.timeout = null;
boot.loader.grub = {
enable = true;
users.alice.password = "supersecret";
# OCR is not accurate enough
extraConfig = "serial; terminal_output serial";
};
};
testScript = ''
def grub_login_as(user, password):
"""
Enters user and password to log into GRUB
"""
machine.wait_for_console_text("Enter username:")
machine.send_chars(user + "\n")
machine.wait_for_console_text("Enter password:")
machine.send_chars(password + "\n")
def grub_select_all_configurations():
"""
Selects "All configurations" from the GRUB menu
to trigger a login request.
"""
machine.send_monitor_command("sendkey down")
machine.send_monitor_command("sendkey ret")
machine.start()
# wait for grub screen
machine.wait_for_console_text("GNU GRUB")
grub_select_all_configurations()
with subtest("Invalid credentials are rejected"):
grub_login_as("wronguser", "wrongsecret")
machine.wait_for_console_text("error: access denied.")
grub_select_all_configurations()
with subtest("Valid credentials are accepted"):
grub_login_as("alice", "supersecret")
machine.send_chars("\n") # press enter to boot
machine.wait_for_console_text("Linux version")
with subtest("Machine boots correctly"):
machine.wait_for_unit("multi-user.target")
'';
})


@ -2,69 +2,53 @@ import ./make-test-python.nix ({ pkgs, ... }:
let
configDir = "/var/lib/foobar";
apiPassword = "some_secret";
mqttPassword = "another_secret";
hassCli = "hass-cli --server http://hass:8123 --password '${apiPassword}'";
mqttPassword = "secret";
in {
name = "home-assistant";
meta = with pkgs.stdenv.lib; {
maintainers = with maintainers; [ dotlambda ];
};
nodes = {
hass =
{ pkgs, ... }:
{
environment.systemPackages = with pkgs; [
mosquitto home-assistant-cli
];
services.home-assistant = {
inherit configDir;
enable = true;
package = pkgs.home-assistant.override {
extraPackages = ps: with ps; [ hbmqtt ];
};
config = {
homeassistant = {
name = "Home";
time_zone = "UTC";
latitude = "0.0";
longitude = "0.0";
elevation = 0;
auth_providers = [
{
type = "legacy_api_password";
api_password = apiPassword;
}
];
};
frontend = { };
mqtt = { # Use hbmqtt as broker
password = mqttPassword;
};
binary_sensor = [
{
platform = "mqtt";
state_topic = "home-assistant/test";
payload_on = "let_there_be_light";
payload_off = "off";
}
];
};
lovelaceConfig = {
title = "My Awesome Home";
views = [ {
title = "Example";
cards = [ {
type = "markdown";
title = "Lovelace";
content = "Welcome to your **Lovelace UI**.";
} ];
} ];
};
lovelaceConfigWritable = true;
nodes.hass = { pkgs, ... }: {
environment.systemPackages = with pkgs; [ mosquitto ];
services.home-assistant = {
inherit configDir;
enable = true;
config = {
homeassistant = {
name = "Home";
time_zone = "UTC";
latitude = "0.0";
longitude = "0.0";
elevation = 0;
};
frontend = {};
# uses embedded mqtt broker
mqtt.password = mqttPassword;
binary_sensor = [{
platform = "mqtt";
state_topic = "home-assistant/test";
payload_on = "let_there_be_light";
payload_off = "off";
}];
logger = {
default = "info";
logs."homeassistant.components.mqtt" = "debug";
};
};
lovelaceConfig = {
title = "My Awesome Home";
views = [{
title = "Example";
cards = [{
type = "markdown";
title = "Lovelace";
content = "Welcome to your **Lovelace UI**.";
}];
}];
};
lovelaceConfigWritable = true;
};
};
testScript = ''
@ -77,28 +61,13 @@ in {
with subtest("Check that Home Assistant's web interface and API can be reached"):
hass.wait_for_open_port(8123)
hass.succeed("curl --fail http://localhost:8123/lovelace")
assert "API running" in hass.succeed(
"curl --fail -H 'x-ha-access: ${apiPassword}' http://localhost:8123/api/"
)
with subtest("Toggle a binary sensor using MQTT"):
assert '"state": "off"' in hass.succeed(
"curl http://localhost:8123/api/states/binary_sensor.mqtt_binary_sensor -H 'x-ha-access: ${apiPassword}'"
)
# wait for broker to become available
hass.wait_until_succeeds(
"mosquitto_pub -V mqttv311 -t home-assistant/test -u homeassistant -P '${mqttPassword}' -m let_there_be_light"
)
assert '"state": "on"' in hass.succeed(
"curl http://localhost:8123/api/states/binary_sensor.mqtt_binary_sensor -H 'x-ha-access: ${apiPassword}'"
)
with subtest("Toggle a binary sensor using hass-cli"):
assert '"state": "on"' in hass.succeed(
"${hassCli} --output json state get binary_sensor.mqtt_binary_sensor"
"mosquitto_sub -V mqttv311 -t home-assistant/test -u homeassistant -P '${mqttPassword}' -W 1 -t '*'"
)
hass.succeed(
"${hassCli} state edit binary_sensor.mqtt_binary_sensor --json='{\"state\": \"off\"}'"
)
assert '"state": "off"' in hass.succeed(
"curl http://localhost:8123/api/states/binary_sensor.mqtt_binary_sensor -H 'x-ha-access: ${apiPassword}'"
"mosquitto_pub -V mqttv311 -t home-assistant/test -u homeassistant -P '${mqttPassword}' -m let_there_be_light"
)
with subtest("Print log to ease debugging"):
output_log = hass.succeed("cat ${configDir}/home-assistant.log")
@ -107,5 +76,9 @@ in {
with subtest("Check that no errors were logged"):
assert "ERROR" not in output_log
# example line: 2020-06-20 10:01:32 DEBUG (MainThread) [homeassistant.components.mqtt] Received message on home-assistant/test: b'let_there_be_light'
with subtest("Check we received the mosquitto message"):
assert "let_there_be_light" in output_log
'';
})


@ -0,0 +1,145 @@
import ../make-test-python.nix ({ lib, ...}:
{
name = "initrd-network-openvpn";
nodes =
let
# Inlining of the shared secret for the
# OpenVPN server and client
secretblock = ''
secret [inline]
<secret>
${lib.readFile ./shared.key}
</secret>
'';
in
{
# Minimal test case to check a successful boot, even with invalid config
minimalboot =
{ ... }:
{
boot.initrd.network = {
enable = true;
openvpn = {
enable = true;
configuration = "/dev/null";
};
};
};
# initrd VPN client
ovpnclient =
{ ... }:
{
virtualisation.useBootLoader = true;
virtualisation.vlans = [ 1 ];
boot.initrd = {
# This command does not fork to keep the VM in the state where
# only the initramfs is loaded
preLVMCommands =
''
/bin/nc -p 1234 -lke /bin/echo TESTVALUE
'';
network = {
enable = true;
# Work around udhcpc only getting a lease on eth0
postCommands = ''
/bin/ip addr add 192.168.1.2/24 dev eth1
'';
# Example configuration for OpenVPN
# This is the main reason for this test
openvpn = {
enable = true;
configuration = "${./initrd.ovpn}";
};
};
};
};
# VPN server and gateway for ovpnclient between vlan 1 and 2
ovpnserver =
{ ... }:
{
virtualisation.vlans = [ 1 2 ];
# Enable NAT and forward port 12345 to port 1234
networking.nat = {
enable = true;
internalInterfaces = [ "tun0" ];
externalInterface = "eth2";
forwardPorts = [ { destination = "10.8.0.2:1234";
sourcePort = 12345; } ];
};
# Trust tun0 and allow the VPN Server to be reached
networking.firewall = {
trustedInterfaces = [ "tun0" ];
allowedUDPPorts = [ 1194 ];
};
# Minimal OpenVPN server configuration
services.openvpn.servers.testserver =
{
config = ''
dev tun0
ifconfig 10.8.0.1 10.8.0.2
${secretblock}
'';
};
};
# Client that resides in the "external" VLAN
testclient =
{ ... }:
{
virtualisation.vlans = [ 2 ];
};
};
testScript =
''
# Minimal test case, checks whether enabling (with invalid config) harms
# the boot process
with subtest("Check for successful boot with broken openvpn config"):
minimalboot.start()
# If we get to multi-user.target, we booted successfully
minimalboot.wait_for_unit("multi-user.target")
minimalboot.shutdown()
# Elaborated test case where the ovpnclient (where this module is used)
# can be reached by testclient only over ovpnserver.
# This is an indirect test for success.
with subtest("Check for connection from initrd VPN client, config as file"):
ovpnserver.start()
testclient.start()
ovpnclient.start()
# Wait until the OpenVPN Server is available
ovpnserver.wait_for_unit("openvpn-testserver.service")
ovpnserver.succeed("ping -c 1 10.8.0.1")
# Wait for the client to connect
ovpnserver.wait_until_succeeds("ping -c 1 10.8.0.2")
# Wait until the testclient has network
testclient.wait_for_unit("network.target")
# Check that ovpnclient is reachable over vlan 1
ovpnserver.succeed("nc -w 2 192.168.1.2 1234 | grep -q TESTVALUE")
# Check that ovpnclient is reachable over tun0
ovpnserver.succeed("nc -w 2 10.8.0.2 1234 | grep -q TESTVALUE")
# Check that ovpnclient is reachable from testclient over the gateway
testclient.succeed("nc -w 2 192.168.2.3 12345 | grep -q TESTVALUE")
'';
})

Some files were not shown because too many files have changed in this diff.