Compare commits

..

30 Commits

Author SHA1 Message Date
SteveLauC
ea13c51b7d chore: release v14.0.1 (#662) 2024-01-25 15:40:52 +08:00
Cat Core
3ed763b884 Fix system updates for Nobara (#661)
* Fix system updates for Nobara

* fmt

* Add os-release test for Nobara

* Make requested changes

* cargo fmt
2024-01-24 19:29:20 +08:00
samhanic
10e1e170b7 fix vscode extensions update step (#650)
* fix vscode extensions update using the new update-extensions cli

* fix non-linux compilation
2024-01-24 10:32:00 +08:00
Sandro
ffa62afc66 Follow up to the follow up in #616 (#660) 2024-01-24 10:22:36 +08:00
SteveLauC
f794329913 feat: skip breaking changes notification with env var (#659)
* feat: skip breaking changes notification with env var

* ci: apply that env in ci
2024-01-23 14:50:35 +08:00
SteveLauC
f9a35c7661 docs: add doc on how to do a new release (#658) 2024-01-23 11:58:09 +08:00
SteveLauC
ed496f3462 chore: fix file name typo[skip ci] (#657)
chore: fix file name typo
2024-01-23 11:50:02 +08:00
Rui Chen
6accdae232 workflows(homebrew): replace Homebrew/actions/bump-formulae with Homebrew/actions/bump-packages (#656)
Signed-off-by: Rui Chen <rui@chenrui.dev>
2024-01-23 10:29:48 +08:00
SteveLauC
96efcc6c0d chore: release v14.0.0 (#652) 2024-01-22 11:13:33 +08:00
SteveLauC
bf72d7bb5a fix: oh-my-zsh step issue #646 (#647) 2024-01-22 09:18:27 +08:00
dependabot[bot]
dadffb1081 chore(deps): bump h2 from 0.3.22 to 0.3.24 (#645)
Bumps [h2](https://github.com/hyperium/h2) from 0.3.22 to 0.3.24.
- [Release notes](https://github.com/hyperium/h2/releases)
- [Changelog](https://github.com/hyperium/h2/blob/v0.3.24/CHANGELOG.md)
- [Commits](https://github.com/hyperium/h2/compare/v0.3.22...v0.3.24)

---
updated-dependencies:
- dependency-name: h2
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-20 12:13:19 +08:00
Ned Wolpert
78dc567226 Added an Audit step for FreeBSD and DragonFly packages. (#640)
* Added an Audit step for FreeBSD and DragonFly.

Allows auditing of the packages to be disabled, since audits are breaking steps.
Current behavior is the default, where if the audit fails topgrade stops. It can
be disabled in the [misc] section independently from other sections.
2024-01-08 09:40:01 +08:00
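A hedged sketch of the opt-out described in the commit above, assuming the new step is exposed under the snake_case name `audit` and turned off through the existing `disable` list in the `[misc]` section (neither detail is spelled out verbatim in this compare view):

```toml
[misc]
# Skip the new FreeBSD/DragonFly package audit while keeping the
# regular package-upgrade steps enabled.
disable = ["audit"]
```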
Mike Wood
362ce4f4f9 fix(os) consider Fedora Kinoite and other immutable distros to be the FedoraImmutable (#638)
* fix(os) consider Fedora Kinoite to be the Fedora Silverblue distribution

* fix(os) support additional Fedora immutable variants

Rename FedoraSilverblue Distribution to FedoraImmutable.  Add test cases for Onyx, Sericea and Silverblue.  Rename upgrade method to match distribution.

Fixes #637
2024-01-08 08:48:48 +08:00
Carrol Cox
ab35cd7b10 feat(pipx-update): add quiet flag for pipx upgrade-all on version 1.4.0+ (#635)
This commit introduces conditional logic to the `run_pipx_update` function that checks the installed version of pipx. If the version is 1.4.0 or higher, the `--quiet` argument is added to the `pipx upgrade-all` command to suppress non-critical output during the upgrade process, adhering to the new feature introduced in pipx 1.4.0 as per the documentation (https://pipx.pypa.io/stable/docs/#pipx-upgrade-all). This change aims to make the upgrade process less verbose and more manageable in automated scripts or CI/CD pipelines where log brevity is beneficial.
2023-12-31 11:38:39 +08:00
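A minimal sketch of the version gate described in the commit above, using the `semver` crate that is already in the dependency tree; the standalone helper below is illustrative and not the project's actual `run_pipx_update` implementation:

```rust
use std::process::Command;

use semver::Version;

/// Build the `pipx upgrade-all` argument list, appending `--quiet`
/// only when the installed pipx is new enough to accept it.
fn pipx_upgrade_args() -> Vec<&'static str> {
    let mut args = vec!["upgrade-all"];
    // `pipx --version` prints a bare version string such as "1.4.0".
    let detected = Command::new("pipx")
        .arg("--version")
        .output()
        .ok()
        .and_then(|out| String::from_utf8(out.stdout).ok())
        .and_then(|s| Version::parse(s.trim()).ok());
    if matches!(detected, Some(v) if v >= Version::new(1, 4, 0)) {
        args.push("--quiet");
    }
    args
}

fn main() {
    println!("pipx {}", pipx_upgrade_args().join(" "));
}
```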
SteveLauC
15f4ad7cd1 refactor: update pip if extern managed and global.break-system-packages is true (#634)
refactor: update pip if extern managed and global.break-system-packages is true
2023-12-30 18:23:33 +08:00
Rebecca Turner
cbfb92041f Skip nix upgrade-nix when Nix is installed in a nix profile (#622)
Make `nix upgrade-nix` a separate step

Also check that Nix can be upgraded before running `nix upgrade-nix` to
work around a bug.

See: <https://github.com/NixOS/nix/issues/5473>
2023-12-21 08:55:32 +08:00
SteveLauC
a506c67cac fix: remove deprecated brew option '--ignore-pinned' (#629) 2023-12-19 17:09:32 +08:00
SteveLauC
788e0412f6 feat: inform users of breaking changes on first run (#619) 2023-12-03 09:52:35 +08:00
Nils
18b37ce3e3 Update config.example.toml (#621)
Added WinGet setting:
enable_winget = true
2023-11-26 08:06:17 +08:00
Jakob Fels
a15e6748c7 Add option to ignore containers to pull (#613) 2023-11-24 16:44:52 +08:00
SteveLauC
c6d0539fd2 chore(deps): bump all deps (#618) 2023-11-24 07:50:41 +08:00
LeSnake
3eb3867944 Bun packages fixes (#617)
* fix running with --only

* fix error when no packages installed
2023-11-23 06:36:00 +08:00
DomGlusk
810315b0e2 Make zinit and zi use parallel updates (#614)
* Update zsh.rs to make zinit and zi use parallel

* run cargo fmt

---------

Co-authored-by: Dominic Gluskin <rhinoarmyleader@gmail.com>
2023-11-22 11:18:41 +08:00
SteveLauC
b461fc2536 refactor: cleanup for #615 (#616) 2023-11-22 09:34:21 +08:00
Sam Vente
7e63977ba0 revert git pushing functionalities (#615) 2023-11-22 09:04:19 +08:00
SteveLauC
78dec892cf docs: migration and breaking changes (#606) 2023-11-12 11:43:58 +08:00
pacjo
9ea6628b5c docs: fix typo in config.example.toml (#603)
docs(config): fix typo (dfault -> default)
2023-11-10 10:32:15 +08:00
LeSnake
465df2e9be feat: add Bun packages step (#599) 2023-11-05 10:34:21 +08:00
SteveLauC
61ef926849 chore: update issue template label (#596) 2023-11-01 08:57:57 +08:00
SteveLauC
7fa38c593e fix: omz remote execution if ZSH is not present (#592) 2023-10-29 18:05:20 +08:00
31 changed files with 1642 additions and 1027 deletions

View File

@@ -2,7 +2,7 @@
 name: Bug report
 about: Topgrade is misbehaving
 title: ''
-labels: 'bug'
+labels: 'C-bug'
 assignees: ''
 ---

View File

@@ -2,7 +2,7 @@
 name: Feature request
 about: Can you please support...?
 title: ''
-labels: ''
+labels: 'C-feature request'
 assignees: ''
 ---

View File

@@ -17,5 +17,5 @@ jobs:
 CONFIG_PATH=~/.config/topgrade.toml;
 if [ -f "$CONFIG_PATH" ]; then rm $CONFIG_PATH; fi
 cargo build;
-./target/debug/topgrade --dry-run --only system;
+TOPGRADE_SKIP_BRKC_NOTIFY=true ./target/debug/topgrade --dry-run --only system;
 stat $CONFIG_PATH;

View File

@@ -29,7 +29,8 @@ jobs:
 if: steps.cache.outputs.cache-hit != 'true'
 run: brew install-bundler-gems
 - name: Bump formulae
-uses: Homebrew/actions/bump-formulae@master
+uses: Homebrew/actions/bump-packages@master
+continue-on-error: true
 with:
 # Custom GitHub access token with only the 'public_repo' scope enabled
 token: ${{secrets.HOMEBREW_ACCESS_TOKEN}}

12
BREAKINGCHANGES.md Normal file
View File

@@ -0,0 +1,12 @@
1. In 13.0.0, we introduced a new feature, pushing git repos, now this feature
has been removed as some users are not satisfied with it.
For configuration entries, the following ones are gone:
```toml
[git]
pull_only_repos = []
push_only_repos = []
pull_arguments = ""
push_arguments = ""
```
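For reference, the pull-only configuration that replaces the removed keys looks like the following; this sketch is assembled from the `config.example.toml` diff further down and is not part of BREAKINGCHANGES.md itself:

```toml
[git]
# Additional git repositories to pull
repos = [
    "~/src/*/",
    "~/.config/something"
]
# Arguments to pass Git when pulling Repositories
arguments = "--rebase --autostash"
```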

0
BREAKINGCHANGES_dev.md Normal file
View File

View File

@@ -101,6 +101,21 @@ Be sure to apply your changes to
 [`config.example.toml`](https://github.com/topgrade-rs/topgrade/blob/master/config.example.toml),
 and have some basic documentations guiding user how to use these options.
+## Breaking changes
+If your PR introduces a breaking change, document it in [`BREAKINGCHANGES_dev.md`][bc_dev],
+it should be written in Markdown and wrapped in 80, for example:
+```md
+1. The configuration location has been updated to x.
+2. The step x has been removed.
+3. ...
+```
+[bc_dev]: https://github.com/topgrade-rs/topgrade/blob/main/BREAKINGCHANGES_dev.md
 ## Before you submit your PR
 Make sure your patch passes the following tests on your host:

1400
Cargo.lock generated

File diff suppressed because it is too large

View File

@@ -5,9 +5,9 @@ categories = ["os"]
 keywords = ["upgrade", "update"]
 license = "GPL-3.0"
 repository = "https://github.com/topgrade-rs/topgrade"
-version = "13.0.0"
+version = "14.0.1"
 authors = ["Roey Darwish Dror <roey.ghost@gmail.com>", "Thomas Schönauer <t.schoenauer@hgs-wt.at>"]
-exclude = ["doc/screenshot.gif"]
+exclude = ["doc/screenshot.gif", "BREAKINGCHANGES_dev.md"]
 edition = "2021"
 readme = "README.md"
@@ -22,26 +22,26 @@ path = "src/main.rs"
 [dependencies]
 home = "~0.5"
 etcetera = "~0.8"
-once_cell = "~1.17"
+once_cell = "~1.18"
 serde = { version = "~1.0", features = ["derive"] }
-toml = "0.5"
+toml = "0.8"
 which_crate = { version = "~4.1", package = "which" }
-shellexpand = "~2.1"
-clap = { version = "~3.1", features = ["cargo", "derive"] }
-clap_complete = "~3.1"
-clap_mangen = "~0.1"
-walkdir = "~2.3"
+shellexpand = "~3.1"
+clap = { version = "~4.4", features = ["cargo", "derive"] }
+clap_complete = "~4.4"
+clap_mangen = "~0.2"
+walkdir = "~2.4"
 console = "~0.15"
 lazy_static = "~1.4"
 chrono = "~0.4"
 glob = "~0.3"
 strum = { version = "~0.24", features = ["derive"] }
 thiserror = "~1.0"
-tempfile = "~3.6"
+tempfile = "~3.8"
 cfg-if = "~1.0"
-tokio = { version = "~1.18", features = ["process", "rt-multi-thread"] }
+tokio = { version = "~1.34", features = ["process", "rt-multi-thread"] }
 futures = "~0.3"
-regex = "~1.7"
+regex = "~1.10"
 semver = "~1.0"
 shell-words = "~1.1"
 color-eyre = "~0.6"
@@ -49,10 +49,10 @@ tracing = { version = "~0.1", features = ["attributes", "log"] }
 tracing-subscriber = { version = "~0.3", features = ["env-filter", "time"] }
 merge = "~0.1"
 regex-split = "~0.1"
-notify-rust = "~4.8"
+notify-rust = "~4.10"
 [package.metadata.generate-rpm]
-assets = [{source = "target/release/topgrade", dest="/usr/bin/topgrade"}]
+assets = [{ source = "target/release/topgrade", dest = "/usr/bin/topgrade" }]
 [package.metadata.generate-rpm.requires]
 git = "*"
@@ -61,8 +61,7 @@ git = "*"
 depends = "$auto,git"
 [target.'cfg(unix)'.dependencies]
-libc = "~0.2"
-nix = "~0.24"
+nix = { version = "~0.27", features = ["hostname", "signal", "user"] }
 rust-ini = "~0.19"
 self_update_crate = { version = "~0.30", default-features = false, optional = true, package = "self_update", features = ["archive-tar", "compression-flate2", "rustls"] }

View File

@@ -42,15 +42,18 @@ The compiled binaries contain a self-upgrading feature.
 Just run `topgrade`.
+Visit the documentation at [topgrade-rs.github.io](https://topgrade-rs.github.io/) for more information.
+> **Warning**
+> Work in Progress
 ## Configuration
 See `config.example.toml` for an example configuration file.
+## Migration and Breaking Changes
+Whenever there is a **breaking change**, the major version number will be bumped,
+and we will document these changes in the release note, please take a look at
+it when updated to a major release.
+> Got a question? Feel free to open an issue or discussion!
 ### Configuration Path
 #### `CONFIG_DIR` on each platform

64
RELEASE_PROCEDURE.md Normal file
View File

@@ -0,0 +1,64 @@
> This document lists the steps that lead to a successful release of Topgrade.
1. Open a PR that:
> Here is an [Example PR](https://github.com/topgrade-rs/topgrade/pull/652)
> that you can refer to.
1. bumps the version number.
> If there are breaking changes, the major version number should be increased.
2. Overwrite [`BREAKINGCHANGES`][breaking_changes] with
[`BREAKINGCHANGES_dev`][breaking_changes_dev], and create a new dev file:
```sh
$ cd topgrade
$ cp BREAKINGCHANGES_dev.md BREAKINGCHANGES.md
$ touch BREAKINGCHANGES_dev.md
```
[breaking_changes_dev]: https://github.com/topgrade-rs/topgrade/blob/main/BREAKINGCHANGES_dev.md
[breaking_changes]: https://github.com/topgrade-rs/topgrade/blob/main/BREAKINGCHANGES.md
2. Check and merge that PR.
3. Go to the [release](https://github.com/topgrade-rs/topgrade/releases) page
and click the [Draft a new release button](https://github.com/topgrade-rs/topgrade/releases/new)
4. Write the release notes
We usually use GitHub's [Automatically generated release notes][auto_gen_release_notes]
functionality to generate release notes, but you write your own one instead.
[auto_gen_release_notes]: https://docs.github.com/en/repositories/releasing-projects-on-github/automatically-generated-release-notes
5. Attaching binaries
You don't need to do this as our CI will automatically do it for you,
binaries for Linux, macOS and Windows will be created and attached.
And the CI will publish the new binary to:
1. AUR
2. PyPi
3. Homebrew (seems that this is not working correctly)
6. Manually release it to Crates.io
> Yeah, this is unfortunate, our CI won't do this for us. We should probably add one.
1. `cd` to the Topgrade directory, make sure that it is the latest version
(i.e., including the PR that bumps the version number).
2. Set up your token with `cargo login`.
3. Dry-run the publish `cargo publish --dry-run`.
4. If step 3 works, then do the final release `cargo publish`.
> You can also take a look at the official tutorial [Publishing on crates.io][doc]
>
> [doc]: https://doc.rust-lang.org/cargo/reference/publishing.html

View File

@@ -32,7 +32,7 @@
 # Arguments to pass tmux when pulling Repositories
 # tmux_arguments = "-S /var/tmux.sock"
-# Do not set the terminal title (dfault: true)
+# Do not set the terminal title (default: true)
 # set_title = true
 # Display the time in step titles (default: true)
@@ -153,33 +153,20 @@
 [git]
+# How many repos to pull at max in parallel
 # max_concurrency = 5
-# Git repositories that you want to pull and push
+# Additional git repositories to pull
 # repos = [
 # "~/src/*/",
 # "~/.config/something"
 # ]
-# Repositories that you only want to pull
-# pull_only_repos = [
-# "~/.config/something_else"
-# ]
-# Repositories that you only want to push
-# push_only_repos = [
-# "~/src/*/",
-# "~/.config/something_third"
-# ]
 # Don't pull the predefined git repos
 # pull_predefined = false
-# Arguments to pass Git when pulling repositories
-# pull_arguments = "--rebase --autostash"
-# Arguments to pass Git when pushing repositories
-# push_arguments = "--all"
+# Arguments to pass Git when pulling Repositories
+# arguments = "--rebase --autostash"
 [windows]
@@ -197,6 +184,9 @@
 # manager such as Scoop or Cargo
 # self_rename = true
+# Enable WinGet upgrade
+# enable_winget = true
 [npm]
 # Use sudo if the NPM directory isn't owned by the current user
@@ -237,4 +227,6 @@
 [distrobox]
 # use_root = false
 # containers = ["archlinux-latest"]
+[containers]
+# ignored_containers = ["ghcr.io/rancher-sandbox/rancher-desktop/rdx-proxy:latest"]

167
src/breaking_changes.rs Normal file
View File

@@ -0,0 +1,167 @@
//! Inform the users of the breaking changes introduced in this major release.
//!
//! Print the breaking changes and possibly a migration guide when:
//! 1. The Topgrade being executed is a new major release
//! 2. This is the first launch of that major release
use crate::terminal::print_separator;
#[cfg(windows)]
use crate::WINDOWS_DIRS;
#[cfg(unix)]
use crate::XDG_DIRS;
use color_eyre::eyre::Result;
use etcetera::base_strategy::BaseStrategy;
use std::{
env::var,
fs::{read_to_string, OpenOptions},
io::Write,
path::PathBuf,
str::FromStr,
};
/// Version string x.y.z
static VERSION_STR: &str = env!("CARGO_PKG_VERSION");
/// Version info
#[derive(Debug)]
pub(crate) struct Version {
_major: u64,
minor: u64,
patch: u64,
}
impl FromStr for Version {
type Err = std::convert::Infallible;
fn from_str(s: &str) -> Result<Self, Self::Err> {
const NOT_SEMVER: &str = "Topgrade version is not semantic";
const NOT_NUMBER: &str = "Topgrade version is not dot-separated numbers";
let mut iter = s.split('.').take(3);
let major = iter.next().expect(NOT_SEMVER).parse().expect(NOT_NUMBER);
let minor = iter.next().expect(NOT_SEMVER).parse().expect(NOT_NUMBER);
let patch = iter.next().expect(NOT_SEMVER).parse().expect(NOT_NUMBER);
// They cannot be all 0s
assert!(
!(major == 0 && minor == 0 && patch == 0),
"Version numbers can not be all 0s"
);
Ok(Self {
_major: major,
minor,
patch,
})
}
}
impl Version {
/// True if this version is a new major release.
pub(crate) fn is_new_major_release(&self) -> bool {
// We have already checked that they cannot all be zeros, so `self.major`
// is guaranteed to be non-zero.
self.minor == 0 && self.patch == 0
}
}
/// Topgrade's breaking changes
///
/// We store them in the compiled binary.
pub(crate) static BREAKINGCHANGES: &str = include_str!("../BREAKINGCHANGES.md");
/// Return platform's data directory.
fn data_dir() -> PathBuf {
#[cfg(unix)]
return XDG_DIRS.data_dir();
#[cfg(windows)]
return WINDOWS_DIRS.data_dir();
}
/// Return Topgrade's keep file path.
///
/// keep file is a file under the data directory containing a major version
/// number, it will be created on first run and is used to check if an execution
/// of Topgrade is the first run of a major release, for more details, see
/// `first_run_of_major_release()`.
fn keep_file_path() -> PathBuf {
let keep_file = "topgrade_keep";
data_dir().join(keep_file)
}
/// If environment variable `TOPGRADE_SKIP_BRKC_NOTIFY` is set to `true`, then
/// we won't notify the user of the breaking changes.
pub(crate) fn should_skip() -> bool {
if let Ok(var) = var("TOPGRADE_SKIP_BRKC_NOTIFY") {
return var.as_str() == "true";
}
false
}
/// True if this is the first execution of a major release.
pub(crate) fn first_run_of_major_release() -> Result<bool> {
let version = VERSION_STR.parse::<Version>().expect("should be a valid version");
let keep_file = keep_file_path();
// disable this lint here as the current code has better readability
#[allow(clippy::collapsible_if)]
if version.is_new_major_release() {
if !keep_file.exists() || read_to_string(&keep_file)? != VERSION_STR {
return Ok(true);
}
}
Ok(false)
}
/// Print breaking changes to the user.
pub(crate) fn print_breaking_changes() {
let header = format!("Topgrade {VERSION_STR} Breaking Changes");
print_separator(header);
let contents = if BREAKINGCHANGES.is_empty() {
"No Breaking changes"
} else {
BREAKINGCHANGES
};
println!("{contents}\n");
}
/// This function will be ONLY executed when the user has confirmed the breaking
/// changes, once confirmed, we write the keep file, which means the first run
/// of this major release is finished.
pub(crate) fn write_keep_file() -> Result<()> {
std::fs::create_dir_all(data_dir())?;
let keep_file = keep_file_path();
let mut file = OpenOptions::new()
.create(true)
.write(true)
.truncate(true)
.open(keep_file)?;
let _ = file.write(VERSION_STR.as_bytes())?;
Ok(())
}
#[cfg(test)]
mod test {
use super::*;
#[test]
fn is_new_major_release_works() {
let first_major_release: Version = "1.0.0".parse().unwrap();
let under_dev: Version = "0.1.0".parse().unwrap();
assert!(first_major_release.is_new_major_release());
assert!(!under_dev.is_new_major_release());
}
#[test]
#[should_panic(expected = "Version numbers can not be all 0s")]
fn invalid_version() {
let all_0 = "0.0.0";
all_0.parse::<Version>().unwrap();
}
}

View File

@@ -7,7 +7,7 @@ use std::path::{Path, PathBuf};
 use std::process::Command;
 use std::{env, fs};
-use clap::{ArgEnum, Parser};
+use clap::{Parser, ValueEnum};
 use clap_complete::Shell;
 use color_eyre::eyre::Context;
 use color_eyre::eyre::Result;
@@ -19,10 +19,10 @@ use serde::Deserialize;
 use strum::{EnumIter, EnumString, EnumVariantNames, IntoEnumIterator};
 use which_crate::which;
-use super::utils::{editor, hostname};
+use super::utils::editor;
 use crate::command::CommandExt;
 use crate::sudo::SudoKind;
-use crate::utils::string_prepend_str;
+use crate::utils::{hostname, string_prepend_str};
 use tracing::{debug, error};
 pub static EXAMPLE_CONFIG: &str = include_str!("../config.example.toml");
@@ -44,7 +44,7 @@ macro_rules! str_value {
 pub type Commands = BTreeMap<String, String>;
-#[derive(ArgEnum, EnumString, EnumVariantNames, Debug, Clone, PartialEq, Eq, Deserialize, EnumIter, Copy)]
+#[derive(ValueEnum, EnumString, EnumVariantNames, Debug, Clone, PartialEq, Eq, Deserialize, EnumIter, Copy)]
 #[clap(rename_all = "snake_case")]
 #[serde(rename_all = "snake_case")]
 #[strum(serialize_all = "snake_case")]
@@ -53,11 +53,13 @@ pub enum Step {
 AppMan,
 Asdf,
 Atom,
+Audit,
 Bin,
 Bob,
 BrewCask,
 BrewFormula,
 Bun,
+BunPackages,
 Cargo,
 Chezmoi,
 Chocolatey,
@@ -158,23 +160,23 @@ pub struct Include {
 paths: Option<Vec<String>>,
 }
+#[derive(Deserialize, Default, Debug, Merge)]
+#[serde(deny_unknown_fields)]
+pub struct Containers {
+#[merge(strategy = crate::utils::merge_strategies::vec_prepend_opt)]
+ignored_containers: Option<Vec<String>>,
+}
 #[derive(Deserialize, Default, Debug, Merge)]
 #[serde(deny_unknown_fields)]
 pub struct Git {
 max_concurrency: Option<usize>,
 #[merge(strategy = crate::utils::merge_strategies::string_append_opt)]
-pull_arguments: Option<String>,
-#[merge(strategy = crate::utils::merge_strategies::string_append_opt)]
-push_arguments: Option<String>,
+arguments: Option<String>,
 #[merge(strategy = crate::utils::merge_strategies::vec_prepend_opt)]
 repos: Option<Vec<String>>,
-#[merge(strategy = crate::utils::merge_strategies::vec_prepend_opt)]
-pull_only_repos: Option<Vec<String>>,
-#[merge(strategy = crate::utils::merge_strategies::vec_prepend_opt)]
-push_only_repos: Option<Vec<String>>,
 pull_predefined: Option<bool>,
 }
@@ -417,6 +419,9 @@ pub struct ConfigFile {
 #[merge(strategy = crate::utils::merge_strategies::inner_merge_opt)]
 git: Option<Git>,
+#[merge(strategy = crate::utils::merge_strategies::inner_merge_opt)]
+containers: Option<Containers>,
 #[merge(strategy = crate::utils::merge_strategies::inner_merge_opt)]
 windows: Option<Windows>,
@@ -616,22 +621,7 @@ impl ConfigFile {
 }
 }
-if let Some(paths) = result.git.as_mut().and_then(|git| git.pull_only_repos.as_mut()) {
-for path in paths.iter_mut() {
-let expanded = shellexpand::tilde::<&str>(&path.as_ref()).into_owned();
-debug!("Path {} expanded to {}", path, expanded);
-*path = expanded;
-}
-}
-if let Some(paths) = result.git.as_mut().and_then(|git| git.push_only_repos.as_mut()) {
-for path in paths.iter_mut() {
-let expanded = shellexpand::tilde::<&str>(&path.as_ref()).into_owned();
-debug!("Path {} expanded to {}", path, expanded);
-*path = expanded;
-}
-}
+debug!("Loaded configuration: {:?}", result);
 Ok(result)
 }
@@ -692,19 +682,19 @@ pub struct CommandLineArgs {
 no_retry: bool,
 /// Do not perform upgrades for the given steps
-#[clap(long = "disable", value_name = "STEP", arg_enum, multiple_values = true)]
+#[clap(long = "disable", value_name = "STEP", value_enum, num_args = 1..)]
 disable: Vec<Step>,
 /// Perform only the specified steps (experimental)
-#[clap(long = "only", value_name = "STEP", arg_enum, multiple_values = true)]
+#[clap(long = "only", value_name = "STEP", value_enum, num_args = 1..)]
 only: Vec<Step>,
 /// Run only specific custom commands
-#[clap(long = "custom-commands", value_name = "NAME", multiple_values = true)]
+#[clap(long = "custom-commands", value_name = "NAME", num_args = 1..)]
 custom_commands: Vec<String>,
 /// Set environment variables
-#[clap(long = "env", value_name = "NAME=VALUE", multiple_values = true)]
+#[clap(long = "env", value_name = "NAME=VALUE", num_args = 1..)]
 env: Vec<String>,
 /// Output debug logs. Alias for `--log-filter debug`.
@@ -724,9 +714,8 @@ pub struct CommandLineArgs {
 short = 'y',
 long = "yes",
 value_name = "STEP",
-arg_enum,
-multiple_values = true,
-min_values = 0
+value_enum,
+num_args = 0..,
 )]
 yes: Option<Vec<Step>>,
@@ -753,7 +742,7 @@ pub struct CommandLineArgs {
 pub log_filter: String,
 /// Print completion script for the given shell and exit
-#[clap(long, arg_enum, hide = true)]
+#[clap(long, value_enum, hide = true)]
 pub gen_completion: Option<Shell>,
 /// Print roff manpage and exit
@@ -859,23 +848,17 @@ impl Config {
 &self.config_file.commands
 }
-/// The list of git repositories to push and pull.
+/// The list of additional git repositories to pull.
 pub fn git_repos(&self) -> Option<&Vec<String>> {
 self.config_file.git.as_ref().and_then(|git| git.repos.as_ref())
 }
-/// The list of additional git repositories to pull.
-pub fn git_pull_only_repos(&self) -> Option<&Vec<String>> {
-self.config_file
-.git
-.as_ref()
-.and_then(|git| git.pull_only_repos.as_ref())
-}
-/// The list of git repositories to push.
-pub fn git_push_only_repos(&self) -> Option<&Vec<String>> {
-self.config_file
-.git
-.as_ref()
-.and_then(|git| git.push_only_repos.as_ref())
+/// The list of docker/podman containers to ignore.
+pub fn containers_ignored_tags(&self) -> Option<&Vec<String>> {
+self.config_file
+.containers
+.as_ref()
+.and_then(|containers| containers.ignored_containers.as_ref())
 }
 /// Tell whether the specified step should run.
@@ -986,19 +969,9 @@ impl Config {
 .and_then(|misc| misc.ssh_arguments.as_ref())
 }
-/// Extra Git arguments for when pushing
-pub fn push_git_arguments(&self) -> Option<&String> {
-self.config_file
-.git
-.as_ref()
-.and_then(|git| git.push_arguments.as_ref())
-}
-/// Extra Git arguments for when pulling
-pub fn pull_git_arguments(&self) -> Option<&String> {
-self.config_file
-.git
-.as_ref()
-.and_then(|git| git.pull_arguments.as_ref())
+/// Extra Git arguments
+pub fn git_arguments(&self) -> Option<&String> {
+self.config_file.git.as_ref().and_then(|git| git.arguments.as_ref())
 }
 /// Extra Tmux arguments

View File

@@ -1,6 +1,6 @@
 //! SIGINT handling in Unix systems.
 use crate::ctrlc::interrupted::set_interrupted;
-use nix::sys::signal;
+use nix::sys::signal::{sigaction, SaFlags, SigAction, SigHandler, SigSet, Signal};
 /// Handle SIGINT. Set the interruption flag.
 extern "C" fn handle_sigint(_: i32) {
@@ -10,12 +10,8 @@ extern "C" fn handle_sigint(_: i32) {
 /// Set the necessary signal handlers.
 /// The function panics on failure.
 pub fn set_handler() {
-let sig_action = signal::SigAction::new(
-signal::SigHandler::Handler(handle_sigint),
-signal::SaFlags::empty(),
-signal::SigSet::empty(),
-);
+let sig_action = SigAction::new(SigHandler::Handler(handle_sigint), SaFlags::empty(), SigSet::empty());
 unsafe {
-signal::sigaction(signal::SIGINT, &sig_action).unwrap();
+sigaction(Signal::SIGINT, &sig_action).unwrap();
 }
 }

View File

@@ -6,19 +6,20 @@ use std::path::PathBuf;
 use std::process::exit;
 use std::time::Duration;
+use crate::breaking_changes::{first_run_of_major_release, print_breaking_changes, should_skip, write_keep_file};
 use clap::CommandFactory;
 use clap::{crate_version, Parser};
 use color_eyre::eyre::Context;
 use color_eyre::eyre::Result;
 use console::Key;
+use etcetera::base_strategy::BaseStrategy;
 #[cfg(windows)]
 use etcetera::base_strategy::Windows;
-use etcetera::base_strategy::{BaseStrategy, Xdg};
+#[cfg(unix)]
+use etcetera::base_strategy::Xdg;
 use once_cell::sync::Lazy;
 use tracing::debug;
-use crate::steps::git::GitAction;
 use self::config::{CommandLineArgs, Config, Step};
 use self::error::StepFailed;
 #[cfg(all(windows, feature = "self-update"))]
@@ -28,6 +29,7 @@ use self::terminal::*;
 use self::utils::{install_color_eyre, install_tracing, update_tracing};
+mod breaking_changes;
 mod command;
 mod config;
 mod ctrlc;
@@ -45,10 +47,11 @@ mod sudo;
 mod terminal;
 mod utils;
-pub static HOME_DIR: Lazy<PathBuf> = Lazy::new(|| home::home_dir().expect("No home directory"));
-pub static XDG_DIRS: Lazy<Xdg> = Lazy::new(|| Xdg::new().expect("No home directory"));
+pub(crate) static HOME_DIR: Lazy<PathBuf> = Lazy::new(|| home::home_dir().expect("No home directory"));
+#[cfg(unix)]
+pub(crate) static XDG_DIRS: Lazy<Xdg> = Lazy::new(|| Xdg::new().expect("No home directory"));
 #[cfg(windows)]
-pub static WINDOWS_DIRS: Lazy<Windows> = Lazy::new(|| Windows::new().expect("No home directory"));
+pub(crate) static WINDOWS_DIRS: Lazy<Windows> = Lazy::new(|| Windows::new().expect("No home directory"));
 fn run() -> Result<()> {
 install_color_eyre()?;
@@ -132,6 +135,22 @@ fn run() -> Result<()> {
 let ctx = execution_context::ExecutionContext::new(run_type, sudo, &git, &config);
 let mut runner = runner::Runner::new(&ctx);
+// If
+//
+// 1. the breaking changes notification shouldnot be skipped
+// 2. this is the first execution of a major release
+//
+// inform user of breaking changes
+if !should_skip() && first_run_of_major_release()? {
+print_breaking_changes();
+if prompt_yesno("Confirmed?")? {
+write_keep_file()?;
+} else {
+exit(1);
+}
+}
 // Self-Update step, this will execute only if:
 // 1. the `self-update` feature is enabled
 // 2. it is not disabled from configuration (env var/CLI opt/file)
@@ -249,14 +268,14 @@ fn run() -> Result<()> {
 runner.execute(Step::Pkg, "DragonFly BSD Packages", || {
 dragonfly::upgrade_packages(&ctx)
 })?;
-dragonfly::audit_packages(&ctx)?;
+runner.execute(Step::Audit, "DragonFly Audit", || dragonfly::audit_packages(&ctx))?;
 }
 #[cfg(target_os = "freebsd")]
 {
 runner.execute(Step::Pkg, "FreeBSD Packages", || freebsd::upgrade_packages(&ctx))?;
 runner.execute(Step::System, "FreeBSD Upgrade", || freebsd::upgrade_freebsd(&ctx))?;
-freebsd::audit_packages(&ctx)?;
+runner.execute(Step::Audit, "FreeBSD Audit", || freebsd::audit_packages(&ctx))?;
 }
 #[cfg(target_os = "openbsd")]
@@ -274,11 +293,13 @@ fn run() -> Result<()> {
 {
 runner.execute(Step::Yadm, "yadm", || unix::run_yadm(&ctx))?;
 runner.execute(Step::Nix, "nix", || unix::run_nix(&ctx))?;
+runner.execute(Step::Nix, "nix upgrade-nix", || unix::run_nix_self_upgrade(&ctx))?;
 runner.execute(Step::Guix, "guix", || unix::run_guix(&ctx))?;
 runner.execute(Step::HomeManager, "home-manager", || unix::run_home_manager(&ctx))?;
 runner.execute(Step::Asdf, "asdf", || unix::run_asdf(&ctx))?;
 runner.execute(Step::Pkgin, "pkgin", || unix::run_pkgin(&ctx))?;
 runner.execute(Step::Bun, "bun", || unix::run_bun(&ctx))?;
+runner.execute(Step::BunPackages, "bun-packages", || unix::run_bun_packages(&ctx))?;
 runner.execute(Step::Shell, "zr", || zsh::run_zr(&ctx))?;
 runner.execute(Step::Shell, "antibody", || zsh::run_antibody(&ctx))?;
 runner.execute(Step::Shell, "antidote", || zsh::run_antidote(&ctx))?;
@@ -332,7 +353,7 @@ fn run() -> Result<()> {
 runner.execute(Step::Vcpkg, "vcpkg", || generic::run_vcpkg_update(&ctx))?;
 runner.execute(Step::Pipx, "pipx", || generic::run_pipx_update(&ctx))?;
 runner.execute(Step::Vscode, "Visual Studio Code extensions", || {
-generic::run_vscode_extensions_upgrade(&ctx)
+generic::run_vscode_extensions_update(&ctx)
 })?;
 runner.execute(Step::Conda, "conda", || generic::run_conda_update(&ctx))?;
 runner.execute(Step::Mamba, "mamba", || generic::run_mamba_update(&ctx))?;
@@ -384,35 +405,35 @@ fn run() -> Result<()> {
 if config.should_run(Step::Emacs) {
 if !emacs.is_doom() {
 if let Some(directory) = emacs.directory() {
-git_repos.insert_if_repo(directory, GitAction::Pull);
+git_repos.insert_if_repo(directory);
 }
 }
-git_repos.insert_if_repo(HOME_DIR.join(".doom.d"), GitAction::Pull);
+git_repos.insert_if_repo(HOME_DIR.join(".doom.d"));
 }
 if config.should_run(Step::Vim) {
-git_repos.insert_if_repo(HOME_DIR.join(".vim"), GitAction::Pull);
-git_repos.insert_if_repo(HOME_DIR.join(".config/nvim"), GitAction::Pull);
+git_repos.insert_if_repo(HOME_DIR.join(".vim"));
+git_repos.insert_if_repo(HOME_DIR.join(".config/nvim"));
 }
-git_repos.insert_if_repo(HOME_DIR.join(".ideavimrc"), GitAction::Pull);
-git_repos.insert_if_repo(HOME_DIR.join(".intellimacs"), GitAction::Pull);
+git_repos.insert_if_repo(HOME_DIR.join(".ideavimrc"));
+git_repos.insert_if_repo(HOME_DIR.join(".intellimacs"));
 if config.should_run(Step::Rcm) {
-git_repos.insert_if_repo(HOME_DIR.join(".dotfiles"), GitAction::Pull);
+git_repos.insert_if_repo(HOME_DIR.join(".dotfiles"));
 }
 #[cfg(unix)]
 {
-git_repos.insert_if_repo(zsh::zshrc(), GitAction::Pull);
+git_repos.insert_if_repo(zsh::zshrc());
 if config.should_run(Step::Tmux) {
-git_repos.insert_if_repo(HOME_DIR.join(".tmux"), GitAction::Pull);
+git_repos.insert_if_repo(HOME_DIR.join(".tmux"));
 }
-git_repos.insert_if_repo(HOME_DIR.join(".config/fish"), GitAction::Pull);
-git_repos.insert_if_repo(XDG_DIRS.config_dir().join("openbox"), GitAction::Pull);
-git_repos.insert_if_repo(XDG_DIRS.config_dir().join("bspwm"), GitAction::Pull);
-git_repos.insert_if_repo(XDG_DIRS.config_dir().join("i3"), GitAction::Pull);
-git_repos.insert_if_repo(XDG_DIRS.config_dir().join("sway"), GitAction::Pull);
+git_repos.insert_if_repo(HOME_DIR.join(".config/fish"));
+git_repos.insert_if_repo(XDG_DIRS.config_dir().join("openbox"));
+git_repos.insert_if_repo(XDG_DIRS.config_dir().join("bspwm"));
+git_repos.insert_if_repo(XDG_DIRS.config_dir().join("i3"));
+git_repos.insert_if_repo(XDG_DIRS.config_dir().join("sway"));
 }
 #[cfg(windows)]
@@ -420,39 +441,24 @@ fn run() -> Result<()> {
 WINDOWS_DIRS
 .cache_dir()
 .join("Packages/Microsoft.WindowsTerminal_8wekyb3d8bbwe/LocalState"),
-GitAction::Pull,
 );
 #[cfg(windows)]
 windows::insert_startup_scripts(&mut git_repos).ok();
 if let Some(profile) = powershell.profile() {
-git_repos.insert_if_repo(profile, GitAction::Pull);
+git_repos.insert_if_repo(profile);
 }
 }
 if config.should_run(Step::GitRepos) {
 if let Some(custom_git_repos) = config.git_repos() {
 for git_repo in custom_git_repos {
-git_repos.glob_insert(git_repo, GitAction::Pull);
-git_repos.glob_insert(git_repo, GitAction::Push);
+git_repos.glob_insert(git_repo);
 }
 }
-if let Some(git_pull_only_repos) = config.git_pull_only_repos() {
-for git_repo in git_pull_only_repos {
-git_repos.glob_insert(git_repo, GitAction::Pull);
-}
-}
-if let Some(git_push_only_repos) = config.git_push_only_repos() {
-for git_repo in git_push_only_repos {
-git_repos.glob_insert(git_repo, GitAction::Push);
-}
-}
 runner.execute(Step::GitRepos, "Git repositories", || {
-git.multi_repo_step(&git_repos, &ctx)
+git.multi_pull_step(&git_repos, &ctx)
 })?;
 }

View File

@@ -48,7 +48,9 @@ impl Display for Container {
 /// Returns a Vector of all containers, with Strings in the format
 /// "REGISTRY/[PATH/]CONTAINER_NAME:TAG"
-fn list_containers(crt: &Path) -> Result<Vec<Container>> {
+///
+/// Containers specified in `ignored_containers` will be filtered out.
+fn list_containers(crt: &Path, ignored_containers: Option<&Vec<String>>) -> Result<Vec<Container>> {
 debug!(
 "Querying '{} image ls --format \"{{{{.Repository}}}}:{{{{.Tag}}}}/{{{{.ID}}}}\"' for containers",
 crt.display()
@@ -83,6 +85,16 @@ fn list_containers(crt: &Path) -> Result<Vec<Container>> {
 assert_eq!(split_res.len(), 2);
 let (repo_tag, image_id) = (split_res[0], split_res[1]);
+if let Some(ignored_containers) = ignored_containers {
+if ignored_containers
+.iter()
+.any(|ignored_container| repo_tag.eq(ignored_container))
+{
+debug!("Skipping ignored container '{}'", line);
+continue;
+}
+}
 debug!(
 "Querying '{} image inspect --format \"{{{{.Os}}}}/{{{{.Architecture}}}}\"' for container {}",
 crt.display(),
@@ -109,7 +121,8 @@ pub fn run_containers(ctx: &ExecutionContext) -> Result<()> {
 print_separator("Containers");
 let mut success = true;
-let containers = list_containers(&crt).context("Failed to list Docker containers")?;
+let containers =
+list_containers(&crt, ctx.config().containers_ignored_tags()).context("Failed to list Docker containers")?;
 debug!("Containers to inspect: {:?}", containers);
 for container in containers.iter() {

View File

@@ -8,6 +8,7 @@ use std::{fs, io::Write};
 use color_eyre::eyre::eyre;
 use color_eyre::eyre::Context;
 use color_eyre::eyre::Result;
+use semver::Version;
 use tempfile::tempfile_in;
 use tracing::{debug, error};
@@ -23,6 +24,18 @@ use crate::{
 terminal::print_warning,
 };
+#[cfg(target_os = "linux")]
+pub fn is_wsl() -> Result<bool> {
+let output = Command::new("uname").arg("-r").output_checked_utf8()?.stdout;
+debug!("Uname output: {}", output);
+Ok(output.contains("microsoft"))
+}
+#[cfg(not(target_os = "linux"))]
+pub fn is_wsl() -> Result<bool> {
+Ok(false)
+}
 pub fn run_cargo_update(ctx: &ExecutionContext) -> Result<()> {
 let cargo_dir = env::var_os("CARGO_HOME")
 .map(PathBuf::from)
@@ -324,35 +337,57 @@ pub fn run_vcpkg_update(ctx: &ExecutionContext) -> Result<()> {
 command.args(["upgrade", "--no-dry-run"]).status_checked()
 }
-pub fn run_vscode_extensions_upgrade(ctx: &ExecutionContext) -> Result<()> {
-let vscode = require("code")?;
-print_separator("Visual Studio Code extensions");
-// Vscode does not have CLI command to upgrade all extensions (see https://github.com/microsoft/vscode/issues/56578)
-// Instead we get the list of installed extensions with `code --list-extensions` command (obtain a line-return separated list of installed extensions)
-let extensions = Command::new(&vscode)
-.arg("--list-extensions")
-.output_checked_utf8()?
-.stdout;
-// Then we construct the upgrade command: `code --force --install-extension [ext0] --install-extension [ext1] ... --install-extension [extN]`
-if !extensions.is_empty() {
-let mut command_args = vec!["--force"];
-for extension in extensions.split_whitespace() {
-command_args.extend(["--install-extension", extension]);
-}
-ctx.run_type().execute(&vscode).args(command_args).status_checked()?;
-}
-Ok(())
+pub fn run_vscode_extensions_update(ctx: &ExecutionContext) -> Result<()> {
+// Calling vscode in WSL may install a server instead of updating extensions (https://github.com/topgrade-rs/topgrade/issues/594#issuecomment-1782157367)
+if is_wsl()? {
+return Err(SkipStep(String::from("Should not run in WSL")).into());
+}
+let vscode = require("code")?;
+// Vscode has update command only since 1.86 version ("january 2024" update), disable the update for prior versions
+// Use command `code --version` which returns 3 lines: version, git commit, instruction set. We parse only the first one
+let version: Result<Version> = match Command::new("code")
+.arg("--version")
+.output_checked_utf8()?
+.stdout
+.lines()
+.next()
+{
+Some(item) => Version::parse(item).map_err(|err| err.into()),
+_ => return Err(SkipStep(String::from("Cannot find vscode version")).into()),
+};
+if !matches!(version, Ok(version) if version >= Version::new(1, 86, 0)) {
+return Err(SkipStep(String::from("Too old vscode version to have update extensions command")).into());
+}
+print_separator("Visual Studio Code extensions");
+ctx.run_type()
+.execute(vscode)
+.arg("--update-extensions")
+.status_checked()
 }
 pub fn run_pipx_update(ctx: &ExecutionContext) -> Result<()> {
 let pipx = require("pipx")?;
 print_separator("pipx");
-ctx.run_type().execute(pipx).arg("upgrade-all").status_checked()
+let mut command_args = vec!["upgrade-all"];
+// pipx version 1.4.0 introduced a new command argument `pipx upgrade-all --quiet`
+// (see https://pipx.pypa.io/stable/docs/#pipx-upgrade-all)
+let version_str = Command::new("pipx")
+.args(["--version"])
+.output_checked_utf8()
+.map(|s| s.stdout.trim().to_owned());
+let version = Version::parse(&version_str?);
+if matches!(version, Ok(version) if version >= Version::new(1, 4, 0)) {
+command_args.push("--quiet")
+}
+ctx.run_type().execute(pipx).args(command_args).status_checked()
 }
 pub fn run_conda_update(ctx: &ExecutionContext) -> Result<()> {
@@ -425,20 +460,53 @@ pub fn run_pip3_update(ctx: &ExecutionContext) -> Result<()> {
 .output_checked_utf8()
 .map_err(|_| SkipStep("pip does not exist".to_string()))?;
-let check_externally_managed = "import sysconfig; from os import path; print('Y') if path.isfile(path.join(sysconfig.get_path('stdlib'), 'EXTERNALLY-MANAGED')) else print('N')";
+let check_extern_managed_script = "import sysconfig; from os import path; print('Y') if path.isfile(path.join(sysconfig.get_path('stdlib'), 'EXTERNALLY-MANAGED')) else print('N')";
-Command::new(&python3)
-.args(["-c", check_externally_managed])
+let output = Command::new(&python3)
+.args(["-c", check_extern_managed_script])
+.output_checked_utf8()?;
+let stdout = output.stdout.trim();
+let extern_managed = match stdout {
+"N" => false,
+"Y" => true,
+_ => unreachable!("unexpected output from `check_extern_managed_script`"),
+};
+let allow_break_sys_pkg = match Command::new(&python3)
+.args(["-m", "pip", "config", "get", "global.break-system-packages"])
 .output_checked_utf8()
-.map_err(|_| SkipStep("pip may be externally managed".to_string()))
-.and_then(|output| match output.stdout.trim() {
-"N" => Ok(()),
-"Y" => Err(SkipStep("pip is externally managed".to_string())),
-_ => {
-print_warning("Unexpected output when checking EXTERNALLY-MANAGED");
-print_warning(output.stdout.trim());
-Err(SkipStep("pip may be externally managed".to_string()))
-}
-})?;
+{
+Ok(output) => {
+let stdout = output.stdout.trim();
+stdout
+.parse::<bool>()
+.expect("unexpected output that is not `true` or `false`")
+}
+// it can fail because this key may not be set
+//
+// ```sh
+// $ pip --version
+// pip 23.0.1 from /usr/lib/python3/dist-packages/pip (python 3.11)
+//
+// $ pip config get global.break-system-packages
+// ERROR: No such key - global.break-system-packages
+//
+// $ echo $?
+// 1
+// ```
+Err(_) => false,
+};
+debug!("pip3 externally managed: {} ", extern_managed);
+debug!("pip3 global.break-system-packages: {}", allow_break_sys_pkg);
+// Even though pip3 is externally managed, we should still update it if
+// `global.break-system-packages` is true.
+if extern_managed && !allow_break_sys_pkg {
+return Err(SkipStep(
+"Skip pip3 update as it is externally managed and global.break-system-packages is not true".to_string(),
+)
+.into());
+}
 print_separator("pip3");
 if env::var("VIRTUAL_ENV").is_ok() {

View File

@@ -27,17 +27,9 @@ pub struct Git {
 git: Option<PathBuf>,
 }
-#[derive(Clone, Copy)]
-pub enum GitAction {
-Push,
-Pull,
-}
-#[derive(Debug)]
 pub struct Repositories<'a> {
 git: &'a Git,
-pull_repositories: HashSet<String>,
-push_repositories: HashSet<String>,
+repositories: HashSet<String>,
 glob_match_options: MatchOptions,
 bad_patterns: Vec<String>,
 }
@@ -52,36 +44,6 @@ fn output_checked_utf8(output: Output) -> Result<()> {
 Ok(())
 }
 }
-async fn push_repository(repo: String, git: &Path, ctx: &ExecutionContext<'_>) -> Result<()> {
-let path = repo.to_string();
-println!("{} {}", style("Pushing").cyan().bold(), path);
-let mut command = AsyncCommand::new(git);
-command
-.stdin(Stdio::null())
-.current_dir(&repo)
-.args(["push", "--porcelain"]);
-if let Some(extra_arguments) = ctx.config().push_git_arguments() {
-command.args(extra_arguments.split_whitespace());
-}
-let output = command.output().await?;
-let result = match output.status.success() {
-true => Ok(()),
-false => Err(format!("Failed to push {repo}")),
-};
-if result.is_err() {
-println!("{} pushing {}", style("Failed").red().bold(), &repo);
-};
-match result {
-Ok(_) => Ok(()),
-Err(e) => Err(eyre!(e)),
-}
-}
 async fn pull_repository(repo: String, git: &Path, ctx: &ExecutionContext<'_>) -> Result<()> {
 let path = repo.to_string();
@@ -96,7 +58,7 @@ async fn pull_repository(repo: String, git: &Path, ctx: &ExecutionContext<'_>) -
 .current_dir(&repo)
 .args(["pull", "--ff-only"]);
-if let Some(extra_arguments) = ctx.config().pull_git_arguments() {
+if let Some(extra_arguments) = ctx.config().git_arguments() {
 command.args(extra_arguments.split_whitespace());
 }
@@ -219,7 +181,7 @@ impl Git {
 None
 }
-pub fn multi_repo_step(&self, repositories: &Repositories, ctx: &ExecutionContext) -> Result<()> {
+pub fn multi_pull_step(&self, repositories: &Repositories, ctx: &ExecutionContext) -> Result<()> {
 // Warn the user about the bad patterns.
 //
 // NOTE: this should be executed **before** skipping the Git step or the
@@ -230,15 +192,12 @@ impl Git {
 .iter()
 .for_each(|pattern| print_warning(format!("Path {pattern} did not contain any git repositories")));
-if repositories.is_empty() {
-return Err(SkipStep(String::from("No repositories to pull or push")).into());
+if repositories.repositories.is_empty() {
+return Err(SkipStep(String::from("No repositories to pull")).into());
 }
 print_separator("Git repositories");
-self.multi_pull(repositories, ctx)?;
-self.multi_push(repositories, ctx)?;
-Ok(())
+self.multi_pull(repositories, ctx)
 }
 pub fn multi_pull(&self, repositories: &Repositories, ctx: &ExecutionContext) -> Result<()> {
@@ -246,7 +205,7 @@ impl Git {
 if ctx.run_type().dry() {
 repositories
-.pull_repositories
+.repositories
 .iter()
 .for_each(|repo| println!("Would pull {}", &repo));
@@ -254,7 +213,7 @@
 }
 let futures_iterator = repositories
-.pull_repositories
+.repositories
 .iter()
 .filter(|repo| match has_remotes(git, repo) {
 Some(false) => {
@@ -281,47 +240,6 @@ impl Git {
 let error = results.into_iter().find(|r| r.is_err());
 error.unwrap_or(Ok(()))
 }
-pub fn multi_push(&self, repositories: &Repositories, ctx: &ExecutionContext) -> Result<()> {
-let git = self.git.as_ref().unwrap();
-if ctx.run_type().dry() {
-repositories
-.push_repositories
-.iter()
-.for_each(|repo| println!("Would push {}", &repo));
-return Ok(());
-}
-let futures_iterator = repositories
-.push_repositories
-.iter()
-.filter(|repo| match has_remotes(git, repo) {
-Some(false) => {
-println!(
-"{} {} because it has no remotes",
-style("Skipping").yellow().bold(),
-repo
-);
-false
-}
-_ => true, // repo has remotes or command to check for remotes has failed. proceed to pull anyway.
-})
-.map(|repo| push_repository(repo.clone(), git, ctx));
-let stream_of_futures = if let Some(limit) = ctx.config().git_concurrency_limit() {
-iter(futures_iterator).buffer_unordered(limit).boxed()
-} else {
-futures_iterator.collect::<FuturesUnordered<_>>().boxed()
-};
-let basic_rt = runtime::Runtime::new()?;
-let results = basic_rt.block_on(async { stream_of_futures.collect::<Vec<Result<()>>>().await });
-let error = results.into_iter().find(|r| r.is_err());
-error.unwrap_or(Ok(()))
-}
 }
 impl<'a> Repositories<'a> {
@@ -334,27 +252,22 @@ impl<'a> Repositories<'a> {
 Self {
 git,
+repositories: HashSet::new(),
 bad_patterns: Vec::new(),
 glob_match_options,
-pull_repositories: HashSet::new(),
-push_repositories: HashSet::new(),
 }
 }
-pub fn insert_if_repo<P: AsRef<Path>>(&mut self, path: P, action: GitAction) -> bool {
+pub fn insert_if_repo<P: AsRef<Path>>(&mut self, path: P) -> bool {
 if let Some(repo) = self.git.get_repo_root(path) {
-match action {
-GitAction::Push => self.push_repositories.insert(repo),
-GitAction::Pull => self.pull_repositories.insert(repo),
-};
+self.repositories.insert(repo);
 true
 } else {
 false
 }
 }
-pub fn glob_insert(&mut self, pattern: &str, action: GitAction) {
+pub fn glob_insert(&mut self, pattern: &str) {
 if let Ok(glob) = glob_with(pattern, self.glob_match_options) {
 let mut last_git_repo: Option<PathBuf> = None;
 for entry in glob {
@@ -370,7 +283,7 @@ impl<'a> Repositories<'a> {
 continue;
 }
 }
-if self.insert_if_repo(&path, action) {
+if self.insert_if_repo(&path) {
 last_git_repo = Some(path);
 }
 }
@@ -388,27 +301,16 @@ impl<'a> Repositories<'a> {
 }
 }
-/// Return true if `pull_repos` and `push_repos` are both empty.
+#[cfg(unix)]
 pub fn is_empty(&self) -> bool {
-self.pull_repositories.is_empty() && self.push_repositories.is_empty()
+self.repositories.is_empty()
 }
 // The following 2 functions are `#[cfg(unix)]` because they are only used in
 // the `oh-my-zsh` step, which is UNIX-only.
 #[cfg(unix)]
-/// Return true if `pull_repos` is empty.
-pub fn pull_is_empty(&self) -> bool {
-self.pull_repositories.is_empty()
-}
-#[cfg(unix)]
-/// Remove `path` from `pull_repos`
-///
-/// # Panic
-/// Will panic if `path` is not in the `pull_repos` under a debug build.
-pub fn remove_from_pull(&mut self, path: &str) {
-let _removed = self.pull_repositories.remove(path);
+pub fn remove(&mut self, path: &str) {
+let _removed = self.repositories.remove(path);
 debug_assert!(_removed);
 }
 }


@@ -19,7 +19,9 @@ pub fn upgrade_packages(ctx: &ExecutionContext) -> Result<()> {
 pub fn audit_packages(ctx: &ExecutionContext) -> Result<()> {
     let sudo = require_option(ctx.sudo().as_ref(), REQUIRE_SUDO.to_string())?;
+    println!();
+    print_separator("DragonFly BSD Audit");
     #[allow(clippy::disallowed_methods)]
     if !Command::new(sudo)
         .args(["/usr/local/sbin/pkg", "audit", "-Fr"])


@@ -30,7 +30,9 @@ pub fn upgrade_packages(ctx: &ExecutionContext) -> Result<()> {
 pub fn audit_packages(ctx: &ExecutionContext) -> Result<()> {
     let sudo = require_option(ctx.sudo().as_ref(), REQUIRE_SUDO.to_string())?;
+    println!();
+    print_separator("FreeBSD Audit");
     Command::new(sudo)
         .args(["/usr/sbin/pkg", "audit", "-Fr"])
         .status_checked()?;


@@ -8,6 +8,7 @@ use tracing::{debug, warn};
 use crate::command::CommandExt;
 use crate::error::{SkipStep, TopgradeError};
 use crate::execution_context::ExecutionContext;
+use crate::steps::generic::is_wsl;
 use crate::steps::os::archlinux;
 use crate::terminal::print_separator;
 use crate::utils::{require, require_option, which, PathExt, REQUIRE_SUDO};
@@ -24,7 +25,7 @@ pub enum Distribution {
     CentOS,
     ClearLinux,
     Fedora,
-    FedoraSilverblue,
+    FedoraImmutable,
     Debian,
     Gentoo,
     OpenMandriva,
@@ -38,6 +39,7 @@ pub enum Distribution {
     Exherbo,
     NixOS,
     KDENeon,
+    Nobara,
 }
 
 impl Distribution {
@@ -52,10 +54,14 @@ impl Distribution {
             Some("alpine") => Distribution::Alpine,
             Some("centos") | Some("rhel") | Some("ol") => Distribution::CentOS,
             Some("clear-linux-os") => Distribution::ClearLinux,
-            Some("fedora") | Some("nobara") => {
+            Some("fedora") => {
                 return if let Some(variant) = variant {
-                    if variant.contains(&"Silverblue") {
-                        Ok(Distribution::FedoraSilverblue)
+                    if variant.contains(&"Silverblue")
+                        || variant.contains(&"Kinoite")
+                        || variant.contains(&"Sericea")
+                        || variant.contains(&"Onyx")
+                    {
+                        Ok(Distribution::FedoraImmutable)
                     } else {
                         Ok(Distribution::Fedora)
                     }
@@ -64,6 +70,7 @@ impl Distribution {
                 };
             }
+            Some("nobara") => Distribution::Nobara,
             Some("void") => Distribution::Void,
             Some("debian") | Some("pureos") | Some("Deepin") => Distribution::Debian,
             Some("arch") | Some("manjaro-arm") | Some("garuda") | Some("artix") => Distribution::Arch,
@@ -131,7 +138,7 @@ impl Distribution {
             Distribution::Alpine => upgrade_alpine_linux(ctx),
             Distribution::Arch => archlinux::upgrade_arch_linux(ctx),
             Distribution::CentOS | Distribution::Fedora => upgrade_redhat(ctx),
-            Distribution::FedoraSilverblue => upgrade_fedora_silverblue(ctx),
+            Distribution::FedoraImmutable => upgrade_fedora_immutable(ctx),
             Distribution::ClearLinux => upgrade_clearlinux(ctx),
             Distribution::Debian => upgrade_debian(ctx),
             Distribution::Gentoo => upgrade_gentoo(ctx),
@@ -147,6 +154,7 @@ impl Distribution {
             Distribution::Bedrock => update_bedrock(ctx),
             Distribution::OpenMandriva => upgrade_openmandriva(ctx),
             Distribution::PCLinuxOS => upgrade_pclinuxos(ctx),
+            Distribution::Nobara => upgrade_nobara(ctx),
         }
     }
@@ -185,12 +193,6 @@ fn update_bedrock(ctx: &ExecutionContext) -> Result<()> {
     Ok(())
 }
 
-fn is_wsl() -> Result<bool> {
-    let output = Command::new("uname").arg("-r").output_checked_utf8()?.stdout;
-    debug!("Uname output: {}", output);
-    Ok(output.contains("microsoft"))
-}
-
 fn upgrade_alpine_linux(ctx: &ExecutionContext) -> Result<()> {
     let apk = require("apk")?;
     let sudo = require_option(ctx.sudo().as_ref(), REQUIRE_SUDO.to_string())?;
@@ -230,7 +232,41 @@ fn upgrade_redhat(ctx: &ExecutionContext) -> Result<()> {
     Ok(())
 }
 
-fn upgrade_fedora_silverblue(ctx: &ExecutionContext) -> Result<()> {
+fn upgrade_nobara(ctx: &ExecutionContext) -> Result<()> {
+    let sudo = require_option(ctx.sudo().as_ref(), REQUIRE_SUDO.to_string())?;
+    let pkg_manager = require("dnf")?;
+
+    let mut update_command = ctx.run_type().execute(sudo);
+    update_command.arg(&pkg_manager);
+    if ctx.config().yes(Step::System) {
+        update_command.arg("-y");
+    }
+    update_command.arg("update");
+    // See https://nobaraproject.org/docs/upgrade-troubleshooting/how-do-i-update-the-system/
+    update_command.args([
+        "rpmfusion-nonfree-release",
+        "rpmfusion-free-release",
+        "fedora-repos",
+        "nobara-repos",
+    ]);
+    update_command.arg("--refresh").status_checked()?;
+
+    let mut upgrade_command = ctx.run_type().execute(sudo);
+    upgrade_command.arg(&pkg_manager);
+    if ctx.config().yes(Step::System) {
+        upgrade_command.arg("-y");
+    }
+    upgrade_command.arg("distro-sync");
+    upgrade_command.status_checked()?;
+
+    Ok(())
+}
+
+fn upgrade_fedora_immutable(ctx: &ExecutionContext) -> Result<()> {
     let ostree = require("rpm-ostree")?;
     let mut command = ctx.run_type().execute(ostree);
     command.arg("upgrade");
@@ -1036,6 +1072,17 @@ mod tests {
         test_template(include_str!("os_release/fedora"), Distribution::Fedora);
     }
 
+    #[test]
+    fn test_fedora_immutable() {
+        test_template(
+            include_str!("os_release/fedorasilverblue"),
+            Distribution::FedoraImmutable,
+        );
+        test_template(include_str!("os_release/fedorakinoite"), Distribution::FedoraImmutable);
+        test_template(include_str!("os_release/fedoraonyx"), Distribution::FedoraImmutable);
+        test_template(include_str!("os_release/fedorasericea"), Distribution::FedoraImmutable);
+    }
+
     #[test]
     fn test_manjaro() {
         test_template(include_str!("os_release/manjaro"), Distribution::Arch);
@@ -1105,4 +1152,9 @@ mod tests {
     fn test_solus() {
         test_template(include_str!("os_release/solus"), Distribution::Solus);
     }
+
+    #[test]
+    fn test_nobara() {
+        test_template(include_str!("os_release/nobara"), Distribution::Nobara);
+    }
 }


@@ -0,0 +1,23 @@
NAME="Fedora Linux"
VERSION="39.20240105.0 (Kinoite)"
ID=fedora
VERSION_ID=39
VERSION_CODENAME=""
PLATFORM_ID="platform:f39"
PRETTY_NAME="Fedora Linux 39.20240105.0 (Kinoite)"
ANSI_COLOR="0;38;2;60;110;180"
LOGO=fedora-logo-icon
CPE_NAME="cpe:/o:fedoraproject:fedora:39"
DEFAULT_HOSTNAME="fedora"
HOME_URL="https://kinoite.fedoraproject.org"
DOCUMENTATION_URL="https://docs.fedoraproject.org/en-US/fedora-kinoite/"
SUPPORT_URL="https://ask.fedoraproject.org/"
BUG_REPORT_URL="https://pagure.io/fedora-kde/SIG/issues"
REDHAT_BUGZILLA_PRODUCT="Fedora"
REDHAT_BUGZILLA_PRODUCT_VERSION=39
REDHAT_SUPPORT_PRODUCT="Fedora"
REDHAT_SUPPORT_PRODUCT_VERSION=39
SUPPORT_END=2024-11-12
VARIANT="Kinoite"
VARIANT_ID=kinoite
OSTREE_VERSION='39.20240105.0'


@@ -0,0 +1,22 @@
NAME="Fedora Linux"
VERSION="39 (Onyx)"
ID=fedora
VERSION_ID=39
VERSION_CODENAME=""
PLATFORM_ID="platform:f39"
PRETTY_NAME="Fedora Linux 39 (Onyx)"
ANSI_COLOR="0;38;2;60;110;180"
LOGO=fedora-logo-icon
CPE_NAME="cpe:/o:fedoraproject:fedora:39"
DEFAULT_HOSTNAME="fedora"
HOME_URL="https://fedoraproject.org/onyx/"
DOCUMENTATION_URL="https://docs.fedoraproject.org/en-US/fedora-onyx/"
SUPPORT_URL="https://ask.fedoraproject.org/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_BUGZILLA_PRODUCT="Fedora"
REDHAT_BUGZILLA_PRODUCT_VERSION=39
REDHAT_SUPPORT_PRODUCT="Fedora"
REDHAT_SUPPORT_PRODUCT_VERSION=39
SUPPORT_END=2024-05-14
VARIANT="Onyx"
VARIANT_ID=onyx


@@ -0,0 +1,22 @@
NAME="Fedora Linux"
VERSION="39 (Sericea)"
ID=fedora
VERSION_ID=39
VERSION_CODENAME=""
PLATFORM_ID="platform:f39"
PRETTY_NAME="Fedora Linux 39 (Sericea)"
ANSI_COLOR="0;38;2;60;110;180"
LOGO=fedora-logo-icon
CPE_NAME="cpe:/o:fedoraproject:fedora:39"
DEFAULT_HOSTNAME="fedora"
HOME_URL="https://fedoraproject.org/sericea/"
DOCUMENTATION_URL="https://docs.fedoraproject.org/en-US/fedora-sericea/"
SUPPORT_URL="https://ask.fedoraproject.org/"
BUG_REPORT_URL="https://gitlab.com/fedora/sigs/sway/SIG/-/issues"
REDHAT_BUGZILLA_PRODUCT="Fedora"
REDHAT_BUGZILLA_PRODUCT_VERSION=39
REDHAT_SUPPORT_PRODUCT="Fedora"
REDHAT_SUPPORT_PRODUCT_VERSION=39
SUPPORT_END=2024-05-14
VARIANT="Sericea"
VARIANT_ID=sericea


@@ -0,0 +1,22 @@
NAME="Fedora Linux"
VERSION="39 (Silverblue)"
ID=fedora
VERSION_ID=39
VERSION_CODENAME=""
PLATFORM_ID="platform:f39"
PRETTY_NAME="Fedora Linux 39 (Silverblue)"
ANSI_COLOR="0;38;2;60;110;180"
LOGO=fedora-logo-icon
CPE_NAME="cpe:/o:fedoraproject:fedora:39"
DEFAULT_HOSTNAME="fedora"
HOME_URL="https://silverblue.fedoraproject.org"
DOCUMENTATION_URL="https://docs.fedoraproject.org/en-US/fedora-silverblue/"
SUPPORT_URL="https://ask.fedoraproject.org/"
BUG_REPORT_URL="https://github.com/fedora-silverblue/issue-tracker/issues"
REDHAT_BUGZILLA_PRODUCT="Fedora"
REDHAT_BUGZILLA_PRODUCT_VERSION=39
REDHAT_SUPPORT_PRODUCT="Fedora"
REDHAT_SUPPORT_PRODUCT_VERSION=39
SUPPORT_END=2024-05-14
VARIANT="Silverblue"
VARIANT_ID=silverblue


@@ -0,0 +1,23 @@
NAME="Nobara Linux"
VERSION="39 (GNOME Edition)"
ID=nobara
ID_LIKE="rhel centos fedora"
VERSION_ID=39
VERSION_CODENAME=""
PLATFORM_ID="platform:f39"
PRETTY_NAME="Nobara Linux 39 (GNOME Edition)"
ANSI_COLOR="0;38;2;60;110;180"
LOGO=nobara-logo-icon
CPE_NAME="cpe:/o:nobaraproject:nobara:39"
DEFAULT_HOSTNAME="nobara"
HOME_URL="https://nobaraproject.org/"
DOCUMENTATION_URL="https://www.nobaraproject.org/"
SUPPORT_URL="https://www.nobaraproject.org/"
BUG_REPORT_URL="https://gitlab.com/gloriouseggroll/nobara-images"
REDHAT_BUGZILLA_PRODUCT="Nobara"
REDHAT_BUGZILLA_PRODUCT_VERSION=39
REDHAT_SUPPORT_PRODUCT="Nobara"
REDHAT_SUPPORT_PRODUCT_VERSION=39
SUPPORT_END=2024-05-14
VARIANT="GNOME Edition"
VARIANT_ID=gnome


@@ -1,11 +1,15 @@
+use std::ffi::OsStr;
 use std::fs;
 use std::os::unix::fs::MetadataExt;
+use std::path::Component;
 use std::path::PathBuf;
 use std::process::Command;
 use std::{env::var, path::Path};
 
 use crate::command::CommandExt;
 use crate::{Step, HOME_DIR};
+use color_eyre::eyre::eyre;
+use color_eyre::eyre::Context;
 use color_eyre::eyre::Result;
 use home;
 use ini::Ini;
@@ -283,7 +287,7 @@ pub fn run_brew_formula(ctx: &ExecutionContext, variant: BrewVariant) -> Result<
     variant.execute(run_type).arg("update").status_checked()?;
 
     variant
         .execute(run_type)
-        .args(["upgrade", "--ignore-pinned", "--formula"])
+        .args(["upgrade", "--formula"])
         .status_checked()?;
 
     if ctx.config().cleanup() {
@@ -365,23 +369,8 @@ pub fn run_nix(ctx: &ExecutionContext) -> Result<()> {
     debug!("nix profile: {:?}", profile_path);
     let manifest_json_path = profile_path.join("manifest.json");
 
-    // Should we attempt to upgrade Nix with `nix upgrade-nix`?
-    #[allow(unused_mut)]
-    let mut should_self_upgrade = cfg!(target_os = "macos");
-    #[cfg(target_os = "linux")]
-    {
-        // We can't use `nix upgrade-nix` on NixOS.
-        if let Ok(Distribution::NixOS) = Distribution::detect() {
-            should_self_upgrade = false;
-        }
-    }
-
     print_separator("Nix");
-    let multi_user = fs::metadata(&nix)?.uid() == 0;
-    debug!("Multi user nix: {}", multi_user);
 
     #[cfg(target_os = "macos")]
     {
         if require("darwin-rebuild").is_ok() {
@@ -393,30 +382,12 @@ pub fn run_nix(ctx: &ExecutionContext) -> Result<()> {
     }
 
     let run_type = ctx.run_type();
-    let nix_args = ["--extra-experimental-features", "nix-command"];
-    if should_self_upgrade {
-        if multi_user {
-            ctx.execute_elevated(&nix, true)?
-                .args(nix_args)
-                .arg("upgrade-nix")
-                .status_checked()?;
-        } else {
-            run_type
-                .execute(&nix)
-                .args(nix_args)
-                .arg("upgrade-nix")
-                .status_checked()?;
-        }
-    }
-
     run_type.execute(nix_channel).arg("--update").status_checked()?;
 
     if Path::new(&manifest_json_path).exists() {
         run_type
-            .execute(&nix)
-            .args(nix_args)
+            .execute(nix)
+            .args(nix_args())
             .arg("profile")
             .arg("upgrade")
             .arg(".*")
@@ -432,6 +403,123 @@ pub fn run_nix(ctx: &ExecutionContext) -> Result<()> {
     }
 }
 
+pub fn run_nix_self_upgrade(ctx: &ExecutionContext) -> Result<()> {
+    let nix = require("nix")?;
+
+    // Should we attempt to upgrade Nix with `nix upgrade-nix`?
+    #[allow(unused_mut)]
+    let mut should_self_upgrade = cfg!(target_os = "macos");
+    #[cfg(target_os = "linux")]
+    {
+        // We can't use `nix upgrade-nix` on NixOS.
+        if let Ok(Distribution::NixOS) = Distribution::detect() {
+            should_self_upgrade = false;
+        }
+    }
+
+    if !should_self_upgrade {
+        return Err(SkipStep(String::from(
+            "`nix upgrade-nix` can only be used on macOS or non-NixOS Linux",
+        ))
+        .into());
+    }
+
+    if nix_profile_dir(&nix)?.is_none() {
+        return Err(SkipStep(String::from(
+            "`nix upgrade-nix` cannot be run when Nix is installed in a profile",
+        ))
+        .into());
+    }
+
+    print_separator("Nix (self-upgrade)");
+
+    let multi_user = fs::metadata(&nix)?.uid() == 0;
+    debug!("Multi user nix: {}", multi_user);
+
+    let nix_args = nix_args();
+    if multi_user {
+        ctx.execute_elevated(&nix, true)?
+            .args(nix_args)
+            .arg("upgrade-nix")
+            .status_checked()
+    } else {
+        ctx.run_type()
+            .execute(&nix)
+            .args(nix_args)
+            .arg("upgrade-nix")
+            .status_checked()
+    }
+}
+
+/// If we try to `nix upgrade-nix` but Nix is installed with `nix profile`, we'll get a `does not
+/// appear to be part of a Nix profile` error.
+///
+/// We duplicate some of the `nix` logic here to avoid this.
+/// See: <https://github.com/NixOS/nix/blob/f0180487a0e4c0091b46cb1469c44144f5400240/src/nix/upgrade-nix.cc#L102-L139>
+///
+/// See: <https://github.com/NixOS/nix/issues/5473>
+fn nix_profile_dir(nix: &Path) -> Result<Option<PathBuf>> {
+    // NOTE: `nix` uses the location of the `nix-env` binary for this but we're using the `nix`
+    // binary; should be the same.
+    let nix_bin_dir = nix.parent();
+    if nix_bin_dir.and_then(|p| p.file_name()) != Some(OsStr::new("bin")) {
+        debug!("Nix is not installed in a `bin` directory: {nix_bin_dir:?}");
+        return Ok(None);
+    }
+
+    let nix_dir = nix_bin_dir
+        .and_then(|bin_dir| bin_dir.parent())
+        .ok_or_else(|| eyre!("Unable to find Nix install directory from Nix binary {nix:?}"))?;
+    debug!("Found Nix in {nix_dir:?}");
+
+    let mut profile_dir = nix_dir.to_path_buf();
+    while profile_dir.is_symlink() {
+        profile_dir = profile_dir
+            .parent()
+            .ok_or_else(|| eyre!("Path has no parent: {profile_dir:?}"))?
+            .join(
+                profile_dir
+                    .read_link()
+                    .wrap_err_with(|| format!("Failed to read symlink {profile_dir:?}"))?,
+            );
+
+        // NOTE: `nix` uses a hand-rolled canonicalize function, Rust just uses `realpath`.
+        if profile_dir
+            .canonicalize()
+            .wrap_err_with(|| format!("Failed to canonicalize {profile_dir:?}"))?
+            .components()
+            .any(|component| component == Component::Normal(OsStr::new("profiles")))
+        {
+            break;
+        }
+    }
+    debug!("Found Nix profile {profile_dir:?}");
+
+    let user_env = profile_dir
+        .canonicalize()
+        .wrap_err_with(|| format!("Failed to canonicalize {profile_dir:?}"))?;
+
+    Ok(
+        if user_env
+            .file_name()
+            .and_then(|name| name.to_str())
+            .map(|name| name.ends_with("user-environment"))
+            .unwrap_or(false)
+        {
+            Some(profile_dir)
+        } else {
+            None
+        },
+    )
+}
+
+fn nix_args() -> [&'static str; 2] {
+    ["--extra-experimental-features", "nix-command"]
+}
+
 pub fn run_yadm(ctx: &ExecutionContext) -> Result<()> {
     let yadm = require("yadm")?;
@@ -555,6 +643,19 @@ pub fn run_bun(ctx: &ExecutionContext) -> Result<()> {
     ctx.run_type().execute(bun).arg("upgrade").status_checked()
 }
 
+pub fn run_bun_packages(ctx: &ExecutionContext) -> Result<()> {
+    let bun = require("bun")?;
+
+    print_separator("Bun Packages");
+
+    if !HOME_DIR.join(".bun/install/global/package.json").exists() {
+        println!("No global packages installed");
+        return Ok(());
+    }
+
+    ctx.run_type().execute(bun).args(["-g", "update"]).status_checked()
+}
+
 /// Update dotfiles with `rcm(7)`.
 ///
 /// See: <https://github.com/thoughtbot/rcm>


@@ -8,7 +8,6 @@ use tracing::debug;
 use crate::command::CommandExt;
 use crate::execution_context::ExecutionContext;
-use crate::steps::git::GitAction;
 use crate::terminal::{print_separator, print_warning};
 use crate::utils::{require, which};
 use crate::{error::SkipStep, steps::git::Repositories};
@@ -240,7 +239,7 @@ pub fn insert_startup_scripts(git_repos: &mut Repositories) -> Result<()> {
         if let Ok(lnk) = parselnk::Lnk::try_from(Path::new(&path)) {
             debug!("Startup link: {:?}", lnk);
             if let Some(path) = lnk.relative_path() {
-                git_repos.insert_if_repo(&startup_dir.join(path), GitAction::Pull);
+                git_repos.insert_if_repo(&startup_dir.join(path));
             }
         }
     }


@@ -122,7 +122,10 @@ pub fn run_zinit(ctx: &ExecutionContext) -> Result<()> {
     print_separator("zinit");
 
-    let cmd = format!("source {} && zinit self-update && zinit update --all", zshrc.display(),);
+    let cmd = format!(
+        "source {} && zinit self-update && zinit update --all -p",
+        zshrc.display(),
+    );
 
     ctx.run_type()
         .execute(zsh)
         .args(["-i", "-c", cmd.as_str()])
@@ -137,7 +140,7 @@ pub fn run_zi(ctx: &ExecutionContext) -> Result<()> {
     print_separator("zi");
 
-    let cmd = format!("source {} && zi self-update && zi update --all", zshrc.display(),);
+    let cmd = format!("source {} && zi self-update && zi update --all -p", zshrc.display(),);
 
     ctx.run_type().execute(zsh).args(["-i", "-c", &cmd]).status_checked()
 }
@@ -176,21 +179,15 @@ pub fn run_oh_my_zsh(ctx: &ExecutionContext) -> Result<()> {
     // children processes won't get it either, so we source the zshrc and set
     // the ZSH variable for topgrade here.
     if ctx.under_ssh() {
-        let zshrc_path = zshrc().require()?;
-        let output = Command::new("zsh")
-            .args([
-                "-c",
-                // ` > /dev/null` is used in case the user's zshrc will have some stdout output.
-                format!(
-                    "source {} > /dev/null && export -p | grep ZSH > /dev/null && echo $ZSH",
-                    zshrc_path.display()
-                )
-                .as_str(),
-            ])
-            .output_checked_utf8()?;
-        let zsh_env = output.stdout.trim();
-        if !zsh_env.is_empty() {
-            env::set_var("ZSH", zsh_env);
+        let res_env_zsh = Command::new("zsh")
+            .args(["-ic", "print -rn -- ${ZSH:?}"])
+            .output_checked_utf8();
+
+        // this command will fail if `ZSH` is not set
+        if let Ok(output) = res_env_zsh {
+            let env_zsh = output.stdout;
+            debug!("Oh-my-zsh: under SSH, setting ZSH={}", env_zsh);
+            env::set_var("ZSH", env_zsh);
         }
     }
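The SSH branch above boils down to asking an interactive zsh for its $ZSH value and exporting it so topgrade's child processes can locate oh-my-zsh. A rough standalone equivalent, using only std::process::Command and std::env instead of topgrade's CommandExt and logging helpers (which are assumed, not shown), looks like this:

use std::env;
use std::process::Command;

// Sketch of the fallback used under SSH: `zsh -ic 'print -rn -- ${ZSH:?}'`
// prints the oh-my-zsh directory and exits non-zero if ZSH is unset, in
// which case nothing is exported.
fn export_zsh_var_from_interactive_shell() {
    let result = Command::new("zsh").args(["-ic", "print -rn -- ${ZSH:?}"]).output();

    if let Ok(output) = result {
        if output.status.success() {
            if let Ok(zsh) = String::from_utf8(output.stdout) {
                env::set_var("ZSH", zsh);
            }
        }
    }
}

fn main() {
    export_zsh_var_from_interactive_shell();
    println!("ZSH={:?}", env::var("ZSH"));
}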
@@ -227,11 +224,11 @@ pub fn run_oh_my_zsh(ctx: &ExecutionContext) -> Result<()> {
     for entry in WalkDir::new(custom_dir).max_depth(2) {
         let entry = entry?;
-        custom_repos.insert_if_repo(entry.path(), crate::steps::git::GitAction::Pull);
+        custom_repos.insert_if_repo(entry.path());
     }
-    custom_repos.remove_from_pull(&oh_my_zsh.to_string_lossy());
+    custom_repos.remove(&oh_my_zsh.to_string_lossy());
 
-    if !custom_repos.pull_is_empty() {
+    if !custom_repos.is_empty() {
         println!("Pulling custom plugins and themes");
         ctx.git().multi_pull(&custom_repos, ctx)?;
     }


@@ -119,44 +119,13 @@ pub fn string_prepend_str(string: &mut String, s: &str) {
     *string = new_string;
 }
 
-/* sys-info-rs
- *
- * Copyright (c) 2015 Siyu Wang
- *
- * Permission is hereby granted, free of charge, to any person obtaining a copy
- * of this software and associated documentation files (the "Software"), to deal
- * in the Software without restriction, including without limitation the rights
- * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
- * copies of the Software, and to permit persons to whom the Software is
- * furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice shall be included in all
- * copies or substantial portions of the Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
- * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
- * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
- * SOFTWARE.
- */
 #[cfg(target_family = "unix")]
 pub fn hostname() -> Result<String> {
-    use std::ffi;
-    extern crate libc;
-
-    unsafe {
-        let buf_size = libc::sysconf(libc::_SC_HOST_NAME_MAX) as usize;
-        let mut buf = Vec::<u8>::with_capacity(buf_size + 1);
-        if libc::gethostname(buf.as_mut_ptr() as *mut libc::c_char, buf_size) < 0 {
-            return Err(SkipStep(format!("Failed to get hostname: {}", std::io::Error::last_os_error())).into());
-        }
-        let hostname_len = libc::strnlen(buf.as_ptr() as *const libc::c_char, buf_size);
-        buf.set_len(hostname_len);
-        Ok(ffi::CString::new(buf).unwrap().into_string().unwrap())
-    }
+    match nix::unistd::gethostname() {
+        Ok(os_str) => Ok(os_str
+            .into_string()
+            .map_err(|_| SkipStep("Failed to get a UTF-8 encoded hostname".into()))?),
+        Err(e) => Err(e.into()),
+    }
 }
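The rewritten hostname() above swaps the hand-rolled unsafe libc calls for the nix crate's safe gethostname() wrapper. A minimal standalone sketch of the same approach, assuming a nix crate version whose gethostname() takes no arguments and returns Result<OsString> and that the relevant crate feature is enabled in Cargo.toml, is:

use nix::unistd::gethostname;

// Sketch only: gethostname() yields an OsString; converting it to String
// succeeds only when the hostname is valid UTF-8, mirroring the error
// handling in the replacement above.
fn main() {
    match gethostname() {
        Ok(name) => match name.into_string() {
            Ok(name) => println!("hostname: {name}"),
            Err(_) => eprintln!("hostname is not valid UTF-8"),
        },
        Err(err) => eprintln!("gethostname failed: {err}"),
    }
}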