Document the `just test --coverage` commands, the coverage tools and thresholds
for each stack, and known gaps. Wired into the Sphinx toctree after the
contributing page.
## Summary / motivation (required)
Adds three new sections to `docs/contributing.md` to address recurring
pain points in PR reviews:
- **Consider an Add-on First**: encourages contributors to explore the
add-on API before proposing core features, to keep the codebase lean.
- **Linked Issues**: makes it explicit that every non-trivial PR must be
linked to an open issue, with a note that unlinked PRs may be
auto-closed.
- **AI-Assisted Contributions**: sets expectations for AI-generated
code: it's allowed, but contributors must understand and review every
change they submit.
### Details
Read `docs/contributing.md` and verify the three new sections render
correctly and are consistent in tone with the existing content.
Closes #4793
- Add `workflow_dispatch` trigger to CI (with macOS/Windows support)
- Allow prepare-release and release workflows from any branch
- Add `skip-ci-check` input for hotfix releases
- Add `just release::prepare` and `just ci` recipes
- Make `qt/release/build.sh` find uv in CI and local builds
- Change publish-testpypi environment from testpypi to release
- Add anki-release wheel build step
---------
Co-authored-by: Andrew Sanchez <andrewsanchez@users.noreply.github.com>
Co-authored-by: Fernando Lins <1887601+fernandolins@users.noreply.github.com>
This PR migrates Anki Desktop packaging from the legacy
NSIS/uv-based installer to [BeeWare
Briefcase](https://briefcase.readthedocs.io/). This branch integrates
work from many related issues and PRs to deliver cross-platform native
installers (MSI on Windows, .app on macOS, PyInstaller on Linux) with
code signing, notarization, and file association support.
## Integrated PRs
- #4585 — Set up Briefcase
- #4596 — Add Briefcase icons
- #4598 — Handle Briefcase file associations
- #4601 — Add Briefcase app permissions
- #4609 — Customize Briefcase's MSI installer
- #4616 — Set up Briefcase code signing and notarization
- #4618 — Fix Briefcase packaging for x86 Macs
- #4623 — Customize Briefcase's Linux template
- #4627 — List required Debian packages for Briefcase installer
- #4630 — Update Briefcase's Windows template
- #4631 — Rewrite Linux install/uninstall scripts for PyInstaller
- #4638 — Use PyInstaller on Linux
- #4645 — Update installer docs
- #4654 — Disable Briefcase's universal builds for macOS
- #4672 — Deal with existing NSIS installations in MSI installer
- #4676 — Remove duplicate Briefcase icons
- #4677 — Tweak Linux scripts for new installer
- #4709 — Add anki-console.bat to Briefcase's Windows package
## Related Issues
- #4557 — Evaluate BeeWare Briefcase for Anki packaging and distribution
- #4678 — Support native Windows ARM64 builds for Briefcase
- #4688 — Linux installer: migrate to PyInstaller and rewrite install
scripts
- #4689 — Investigate startup performance with Briefcase
- #4690 — Specify required Linux system packages for Briefcase
- #4691 — Investigate Windows ARM64 support with Briefcase
- #4692 — Test on Linux ARM with Briefcase
- #4693 — Separate ARM and Intel macOS releases
- #4694 — Update developer documentation for Briefcase installer
- #4695 — Support upgrade/downgrade with the Briefcase installer
- #4696 — Update user documentation for new installer
- #4702 — Update Briefcase's Windows template with upstream security fix
and OS version check
- #4703 — Follow-up tweaks to Linux install/uninstall scripts
## Related PRs
- #4619 — Enable Windows ARM64 support
- #4632 — Release action
---------
Co-authored-by: Abdo <abdo@abdnh.net>
Co-authored-by: Andrew Sanchez <andrewsanchez@users.noreply.github.com>
Co-authored-by: Fernando Lins <1887601+fernandolins@users.noreply.github.com>
The command for building the image does not include the `--platform`
flag, which prevents the image from running across all architectures. For
example, if I build the image on an ARM system and then try running on
x86, it won’t work.
This issue can be fixed by using `docker buildx` and adding the flag to
include all of the platforms. I have tested this by building the image
with the `linux/arm64` and `linux/amd64` platform flags on an ARM system
and then running a container with that image on an x86 system.
This would be useful in scenarios where the syncserver runs on devices
that cannot do builds.
The correct command would be:
```bash
# Builds the image for each platform listed in --platform
docker buildx build -f <Dockerfile> --platform linux/amd64,linux/arm64,windows/amd64 --no-cache --build-arg ANKI_VERSION=<version> -t anki-sync-server .
```
Reference: https://docs.docker.com/build/building/multi-platform/
This pull request adds a beginner-friendly quick start guide for
building Anki on Windows.
The guide is intended to help first-time contributors set up their
development environment more easily, without replacing or modifying the
existing official documentation.
No existing files were changed; this contribution only adds a
complementary Markdown document.
---------
Co-authored-by: user1823 <92206575+user1823@users.noreply.github.com>
* Docs/Add more required packages to Linux build guide
Updated package installation instructions for Debian/Ubuntu to include gcc-12 and libxkbfile1.
* Add Arch Linux requirements
* ADD Dependencies for linux when building the launcher
I only installed `gcc-aarch64-linux-gnu` (on my x86_64 Debian sid) and the launcher built successfully.
* ADD example build / run instructions for the launcher for linux
* CHORE: ninja format && fix
* ADD: Entries for Mac and Windows
* FIX: Wrong level was applied to a header
* CHORE: ninja fix && format
* FIX: Sentence structure
* FIX: casing (thanks dae)
Co-authored-by: Damien Elmes <dae@users.noreply.github.com>
* FIX: casing
* FIX: binary statement (only linux has amd64 and arm64 versions)
* UPDATE: Include env vars for Win and Mac
* CHANGE: include env vars for Win and Mac in the table instead
* Migrate build system to uv
Closes #3787, and is a step towards #3081 and #4022
This change breaks our PyOxidizer bundling process. While we probably
could update it to work with the new venvs & lockfile, my intention
is to use this as a base to try out a uv-based packager/installer.
Some notes about the changes:
- Use uv for python download + venv installation
- Drop python/requirements* in favour of pyproject files / uv.lock
- Bumped to latest Python 3.9 version. The move to 3.13 should be
a fairly trivial change when we're ready.
- Dropped the old write_wheel.py in favour of uv/hatchling. This has
  the unfortunate side-effect of dropping leading zeros in our wheels,
  which we could try to hack around in the future.
- Switch to Qt 6.7 for the dev repo, as it's the first PyQt version
with a Linux/ARM WebEngine wheel.
- Unified our macOS deployment target with minimum required for ARM.
- Dropped unused fluent python files
- Dropped unused python license generation
- Dropped helpers to run under Qt 5, as our wheels were already
requiring Qt 6 to install.
* Build action to create universal uv binary
* Drop some PyOxidizer-related files
* Use Windows ARM64 cargo/node binaries during build
We can't provide ARM64 wheels to users yet due to #4079, but we can
at least speed up the build.
The rustls -> native-tls change on Windows is because ring requires
clang to compile for ARM64, and I figured it's best to keep our Windows
deps consistent. We already built the wheels with native-tls.
* Make libankihelper a universal library
We were shipping a single arch library in a purelib, leading to
breakages when running on a different platform.
* Use Python wheel for mpv/lame on Windows/Mac
This is convenient, but suboptimal on a Mac at the moment. The first
run of mpv will take a number of seconds for security checks to run,
and our mpv code ends up timing out, repeating the process each time.
Our installer stub will need to invoke mpv once first to get it validated.
We could address this by distributing the audio with the installer/stub,
or perhaps by putting the binaries in a .pkg file that's notarized+stapled
and then included in the wheel.
* Add some helper scripts to build a fully-locked wheel
* Initial macOS launcher prototype
* Add a hidden env var to preload our libs and audio helpers on macOS
* qt/bundle -> qt/launcher
- remove more of the old bundling code
- handle app icon
* Fat binary, notarization & dmg
* Publish wheels on testpypi for testing
* Use our Python pin for the launcher too
* Python cleanups
* Extend launcher to other platforms + more
- Switch to Qt 6.8 for repo default, as 6.7 depends on an older
libwebp/tiff which is unavailable on newer installs
- Drop tools/mac-x86, as we no longer need to test against Qt 5
- Add flags to cross compile wheels on Mac and Linux
- Bump glibc target to 2_36, building on Debian Stable
- Increase mpv timeout on macOS to allow for initial gatekeeper checks
- Ship both arm64 and amd64 uv on Linux, with a bash stub to pick
the appropriate arch.
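The last point can be sketched as a small POSIX stub, purely for illustration: both uv binaries are shipped side by side, and the right one is picked at runtime. The file names `uv.amd64`/`uv.arm64` are assumptions, not the actual names used in the Anki tree.

```shell
#!/bin/sh
# Hypothetical sketch of the Linux arch-picking stub described above.
pick_uv() {
  case "$(uname -m)" in
    x86_64)        echo "uv.amd64" ;;
    aarch64|arm64) echo "uv.arm64" ;;
    *) echo "unsupported architecture: $(uname -m)" >&2; return 1 ;;
  esac
}
# The real stub would then hand off to the chosen binary, e.g.:
#   exec "$(dirname "$0")/$(pick_uv)" "$@"
pick_uv
```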
* Fix pylint on Linux
* Fix failure to run from /usr/local/bin
* Remove remaining pyoxidizer refs, and clean up duplicate release folder
* Rust dep updates
- Rust 1.87 for now (1.88 due out in around a week)
- Nom looks involved, so I left it for now
- prost-reflect depends on a new prost version that got yanked
* Python 3.13 + dep updates
Updated protoc binaries and added a helper to try to fix build breakage.
Ended up being due to an AI-generated update to pip-system-certs that
was not reviewed carefully enough:
https://gitlab.com/alelec/pip-system-certs/-/issues/36
The updated mypy/black needed some tweaks to our files.
* Windows compilation fixes
* Automatically run Anki after installing on Windows
* Touch pyproject.toml upon install, so we check for updates
* Update Python deps
- urllib3 for CVE
- pip-system-certs got fixed
- markdown/pytest also updated
This commit explains how to call a method implemented in one language
from a different language.
It explains how to declare the RPCs, how to call them, and how to
implement them. This is based on examples of code at main at the time
of writing. I used permalinks to ensure that the links remain
relevant even if the specific examples change later.
The last section is about the special case of calling TypeScript from
Python, which does not use RPC but is still relevant in a bridge
document.
This commit also adds a paragraph explaining what protobuf is to the
protobuf documentation, so that new contributors who don't know what
protobuf is can understand why we use it.
Hardcode them to:
SYNC_PORT=8080
SYNC_BASE=/anki_data
If these env variables are passed into the container with different values,
they are ignored.
The reason is that if the user modifies SYNC_BASE, they risk data loss, since
anki-sync-server will no longer write data into the volume. If they change
SYNC_PORT, they also need to change it when mapping this internal port to the
external port of the container, which could be confusing; plus there is no
benefit to allowing this, since it's always possible to change the external
port even if the internal port is fixed to 8080 (e.g. `-p 1234:8080`).
In both cases there is no benefit to making these values configurable, and
there are associated risks.
Unfortunately there is no easy way of implementing this for the
Dockerfile.distroless so it's up to the user not to modify these values.
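A minimal sketch of how an entrypoint might enforce this hardcoding (the actual entrypoint.sh may do it differently; treat this fragment as illustrative only):

```shell
# Force the hardcoded values, overriding anything passed via `docker run -e`.
export SYNC_PORT=8080
export SYNC_BASE=/anki_data
# ...the script would then exec anki-sync-server, which reads these variables.
```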
PUID and PGID are optional env variables to specify the user and group id of
the user that the anki-sync-server process should run with.
This gives more flexibility for solving permission problems with volumes and is
a common pattern for Docker images (e.g. see here:
https://docs.linuxserver.io/general/understanding-puid-and-pgid/)
The anki-sync-server process will write any files with the permissions of the
user it's running with, which can be a problem when you need to access those
files from outside the container or when they are being written into a bind
mount that is owned by a particular user on the host system.
To be able to implement this the entrypoint.sh needs to run as root (since it
needs to create a user and change file permissions). anki-sync-server then
needs to be started with the user 'anki', which is why the new dependency
'su-exec' is required. The user 'anki' and group 'anki-group' can no longer be
created at image build time because then their ids would be fixed.
Also update the build instructions to require building the Docker image inside
the directory where the Dockerfile resides, since the build now needs to copy
entrypoint.sh, and it seems wrong to specify the path
docs/syncserver/entrypoint.sh inside the Dockerfile.
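The entrypoint logic described above can be sketched as a dry run that echoes the root-only commands instead of executing them (the user/group names follow the description; the Alpine `addgroup`/`adduser` flags and paths are assumptions, not the actual script):

```shell
#!/bin/sh
# Dry-run sketch: print the commands the root entrypoint would run before
# dropping privileges with su-exec.
entrypoint_plan() {
  PUID="${PUID:-1000}"
  PGID="${PGID:-1000}"
  echo "addgroup -g $PGID anki-group"
  echo "adduser -D -u $PUID -G anki-group anki"
  echo "chown -R $PUID:$PGID /anki_data"
  # su-exec switches from root to the unprivileged user, then starts the server.
  echo "exec su-exec anki:anki-group anki-sync-server"
}
entrypoint_plan
```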
Now that an ARM wheel is on PyPI, we no longer need to rely on a
system PyQt to build on ARM. The install is skipped when PYTHONPATH
is set, so older distros with glibc <2.39 can continue to use the
system packages instead.
Otherwise data would be lost by default when removing (or re-creating) a
container.
It would be possible to expose the default directory (e.g.
/home/anki/.syncserver) but it would be different for the two Dockerfiles and
less convenient for users of the Docker container to specify such a long path
when naming their volumes.
Setting the permissions is necessary since anki will be running with 'anki'
user permissions inside the container.
* Qt 6.8.1
Bumps minimum glibc to 2.35, and minimum macOS to 12
* Drop generation of Qt5 packaged build
Closes #3615
* Include qt6 requirements in aqt wheel; drop extra deps
* Fix aqt wheels growing over time
* Add myself to CONTRIBUTORS file
* replace localhost with 127.0.0.1 in syncserver Dockerfile
The healthcheck was failing, presumably because localhost was resolving to ::1
(IPv6), as detailed in this issue: https://github.com/maildev/maildev/pull/500
* docs(docker): Change suggested version number
* deps(docker): Bump rust to 1.83.0 and alpine to 3.21.0
* deps(docker): Bump rust to 1.83.0
* CONTRIBUTORS: Add my name
* Add myself to CONTRIBUTORS file
* avoid warning by setting SYNC_PORT as ARG in Dockerfile
1 warning found (use docker --debug to expand):
- UndefinedVar: Usage of undefined variable '$SYNC_PORT'
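The warning can be silenced by declaring the variable before its first use, e.g. (illustrative Dockerfile fragment; the default shown here simply matches the port used elsewhere in these docs):

```dockerfile
# Declare SYNC_PORT as a build argument before first use so BuildKit does
# not warn about an undefined variable.
ARG SYNC_PORT=8080
ENV SYNC_PORT=${SYNC_PORT}
```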
- rslib(http_server): add `is_running()` method
- rslib(sync): introduce `--healthcheck` argument for health probe in distroless
- doc(syncserver): add table comparing Dockerfile and Dockerfile.distroless
- Expand cross-platform support with distroless
- add `Dockerfile.distroless`
- Dockerfile: bump rust `1.79` to `1.80.1`
- Dockerfile: bump alpine `3.20` to `3.20.2`
Note: Implemented an internal health check because distroless images do not include curl (such tools are omitted to reduce image size and attack surface). For more details, see https://blog.sixeyed.com/docker-healthchecks-why-not-to-use-curl-or-iwr/ and https://github.com/GoogleContainerTools/distroless
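A sketch of how the distroless image could wire up the probe (only the `--healthcheck` flag comes from the list above; the binary path and the HEALTHCHECK parameters are assumptions):

```dockerfile
# Re-use the server binary itself as the health probe, since distroless
# images ship no curl/wget. Exec form is required: there is no shell either.
HEALTHCHECK --interval=30s --timeout=5s \
  CMD ["/anki-sync-server", "--healthcheck"]
```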
fix: failed: check:format:rust
typo
remove extra space
fix failed:check:format:rust
update doc
fetch `host` and `port` using envy
fix: failed: check:format:rust
Update doc + add dockerignore
- dockerignore: This helps avoid sending unwanted files and directories to the builder
- add new line
- I am still experimenting with cross-platform compilation; I am getting:
4.337 From https://github.com/ankitects/rust-url
4.337 * [new ref] bb930b8d089f4d30d7d19c12e54e66191de47b88 -> refs/commit/bb930b8d089f4d30d7d19c12e54e66191de47b88
4.397 error: failed to get `percent-encoding-iri` as a dependency of package `anki v0.0.0 (/app/rslib)`
Still checking what the issue could be.
fix: failed: check:format:dprint
* Update base images and introduce health endpoint
sync-server: introduce `/health` endpoint to check if the service is reachable.
bump(alpine): bump alpine base image from `3.19` to `3.20`
bump(rust): bump rust-alpine build image from `1.76` to `1.79`
* fix cargo fmt
* add allow clippy::extra_unused_type_parameters
* Remove unused type param (dae)
* Route /health directly (dae)
* Fix for latest axum (dae)
* Simplify the offline build
The two environment variables OFFLINE_BUILD and NO_VENV jointly provide
the ability to build Anki fully offline. This commit boils them down
into just one, namely OFFLINE_BUILD.
The rationale: first, OFFLINE_BUILD implies the use of a custom,
non-networked Python environment. Second, building Anki with a custom
Python environment in a networked setting is a use case that we
currently do not support. Developers in need of such a solution may
want to give containerized development environments a try. Users could
also look into building Anki fully offline instead.
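For illustration, an offline build might then be invoked like this (the `./ninja` wrapper exists in the Anki repo, but the target name and the pre-provisioned environment are assumptions here):

```shell
# Signal to the build system that no network access should be used; the
# Python environment must already be fully provisioned.
export OFFLINE_BUILD=1
# ./ninja wheels   # run inside the Anki source tree; commented out here
```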
* Add documentation for offline builds.
* Add support for offline generation of Sphinx documentation.
Control installation of Sphinx dependencies via the network through the
OFFLINE_BUILD environment variable.
* Add documentation for offline generation of Sphinx documentation.
* Add `extra` directory as a designated ignored folder
Excludes `extra/` from version tracking, file formatters, and file checks.
* Remove pytest cache from exclusion rules
Python test discovery is easy enough to disable for the workspace in VS Code's settings and pytest does not serve any purpose in the context of the project anyway.