Multiplatform, dependency-free system packaging solution in a C library
Why? Because let's be honest, all packaging solutions s*ck, big time. What's with that armada of commands for deb? Installing with "apt-get", listing packages with "dpkg", searching with "apt-cache", searching files with "dpkg-query", setting up with "dpkg-reconfigure"? And who has thought for even a moment that pacman's switches are sane? "-Ss" to search, "-S" to install, and "-Syu" to update? What's wrong with "-s", "-i", "-u"? Is it that you can't accidentally mix up search with install? And why is there a need for "yaourt" in the first place? The commands and switches for "emerge" are quite good actually, but why is there a need for separate tools like "e-file", "qpkg" and "ebuild"?
Internationalization? Translatable package descriptions, preview images and screenshots? What are those, right?
Why can't any of the existing packaging systems handle circular dependencies and remove unwanted packages automatically? Have you ever run into a dependency hell? Why are no-longer-needed dependencies and downloaded payloads left on the machine eating up precious storage space (so that I have to manually purge /var/cache and hunt for orphaned packages)?
And what is that mess with the repos? I couldn't even dream of adding the certs; why do I have to get a GPG key (yet another tool and another dependency in the toolchain)? And once I have that key, why is it so problematic to add new keys and user-defined repos? Why do pacman and apt handle certs globally instead of per package? And why do they fail if you don't manually download trusted.gpg.d or update archlinux-keyring first? Why can't I just add my .deb or AUR and keep it synced with the rest of the packages? Etc. etc. etc. etc. etc. So many serious usability issues.
Finally, have you ever tried to port any of the existing solutions to a new operating system? Nightmare! Freddy is just a lullaby compared to them and their dependencies! Why is it so important that a package management software should not depend on packages at all? Well, because it IS the package management software! It can't stop working after it removes the unzip package for example, right? Otherwise how could it reinstall the package? Once I had to reinstall an entire Gentoo system because emerge got into a dependency hell updating its own dependency...
Long story short, here's a very simple, easy to use, fully-featured, rolling-release system packaging solution, implemented as a stand-alone, dependency-free ANSI C library. You can wrap it in a command line tool or in a native GUI application for your OS if you'd like.
It should be user friendly, with a very short learning curve and easy-to-remember switches. For using the library, see the underlying library's API documentation.
To get you started quickly, the library is wrapped in a demo tool.
```
$ syspkg (command) [parameters]
```
Although it is fully functional, it is mainly for API usage demonstration purposes. You're supposed to create a native packaging application for your OS using this library, instead of porting the demo tool (but that can work too).
Flag / Command | Description |
---|---|
-u / update | Download the latest package metainfo from repos |
-U / upgrade | Upgrade all packages which have newer versions |
-l / list | List installed packages |
-s / search | Search for a package in names and descriptions |
-w / which | Show which package contains a certain file |
-d / depends | List packages that depend on a certain package |
-i / info | Show detailed package info |
-I / install | (Re)install or upgrade a package |
-r / reconf | Reconfigure an installed package |
-R / remove | Remove a package or repository, or untrust a certificate |
-A / repo | Add a repository or list configured repositories |
-T / trust | Add a certificate to the trusted cert list |
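For illustration, a typical session with the demo tool could look something like this (the package name is just a placeholder and the output is omitted; the exact argument forms may differ slightly):

```
$ syspkg -u            # refresh the local package database from the repos
$ syspkg -s vim        # search for "vim" in package names and descriptions
$ syspkg -i vim        # show detailed info about the package
$ syspkg -I vim        # (re)install or upgrade it
$ syspkg -R vim        # remove it again
$ syspkg -U            # upgrade every package that has a newer version
```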
Flag / Command | Description |
---|---|
-c / cert | Generate a certificate |
-S / cert --sign | Sign a developer certificate with a repo cert |
-C / check | Check metainfo or payload integrity and validity |
-B / build | Sign a metainfo (and/or payloads), URL list |
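As a sketch, a maintainer's workflow with these switches might look something like the following; the file names are purely illustrative and the exact arguments are described in the maintainer's manual:

```
$ syspkg -c                # generate a certificate
$ syspkg -S developer.crt  # sign a developer certificate with the repo cert
$ syspkg -B mypkg.json     # sign the metainfo (and/or payloads), url list
$ syspkg -C mypkg.json     # check metainfo and payload integrity and validity
```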
Consult the maintainer's manual for creating packages and repositories. This library has you covered and provides functions not only to manage packages, but also to build them. You can also find a step-by-step HOW TO tutorial.
The concept behind syspkg is plain and simple, although it has multiple levels. The assumption of syspkg is that software has moved to code repositories like GitHub or GitLab (but it also provides an easy way to support old-fashioned tarballs on a static web server).
Level | Description |
---|---|
repo list | List of repository URLs in a local config file on the machine |
repository | A plain simple text file listing metainfo URLs |
metainfo | A JSON file describing the package, versions and containing links to the payloads |
payloads | The archive files (tarballs) for the package |
From top to bottom: repo list is a configuration file that the user can edit locally on their computer. They can add or remove repository URLs as they wish, as well as trust certs. Operating systems can provide their own default lists (one URL for base packages, one for community, one for non-free etc.) and they can preinstall their official certs.
On `update`, libsyspkg downloads these repository files and concatenates them into one big list of metainfo URLs. Then it iterates over that list and downloads the metainfo files. This way the list of available packages is stored locally.
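A repository file itself is nothing fancy; presumably just one metainfo URL per line, something along these lines (the URLs below are made up for illustration):

```
https://raw.githubusercontent.com/someone/coolstuff/master/coolstuff.json
https://raw.githubusercontent.com/someone/othertool/master/othertool.json
https://somewebserver.com/tools/3rdpartytool.json
```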
The `search`, `which`, `depends` and `list` commands operate on this local package database.
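For instance, querying that local database (the arguments are again placeholders):

```
$ syspkg -w /usr/bin/vim   # which package contains this file?
$ syspkg -d openssl        # which packages depend on this package?
```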
These metainfo files describe code repositories and how to download their contents. They also contain additional info like translated package descriptions and screenshot URLs. It is perfectly valid for a 3rd party to create a metainfo for a repository hosted on an entirely different server.
On `install`, the metainfo is consulted for dependencies, which are then installed along with the package automatically. There's a reference counter for each package, and a user manually installing a package counts as one reference. Circular dependencies are detected and both packages are installed at once. If you have messed up the configuration file of a package, you can re-run the same configuration process that was used when installing with the `reconf` switch.
On `remove`, the package as well as its dependants are deleted. Packages that the removed package was depending on have their counter decreased by one, and if it reaches zero, they are deleted too.
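To make the reference counting concrete, here's a hypothetical sequence (the package names are made up):

```
$ syspkg -I editor      # installs "editor" and pulls in its dependency "libfoo" (counter: 1)
$ syspkg -I othertool   # also depends on "libfoo", so its counter goes up to 2
$ syspkg -R editor      # "editor" is removed; "libfoo" drops to 1 and stays
$ syspkg -R othertool   # "othertool" is removed; "libfoo" drops to 0 and is removed too
```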
That's it.
There's only one configuration file, which stores the repository URLs locally. It is a JSON file with the following fields: `lang`, `osver`, `arch`, `license` (optional) and `repos` (mandatory). The filters limit which packages are listed and installed; the last field is just a list of URL strings. If not specified, the filters default to the "LANG" environment variable, any OS version, the compilation architecture and any license. If more architectures are given, then the list also determines preference: if a package has multiple payloads, the first architecture in the list that matches will be installed.
Example:
```
{
    "lang": "en_GB",
    "osver": "bugsir",
    "arch": [ "x86_64", "x86_32", "any" ],
    "license": [ "MIT", "BSD", "GPL", "LGPL", "PD" ],
    "repos": [
        "https://gitlab.com/bztsrc/syspkg/-/raw/examplerepo/base.txt",
        "https://gitlab.com/bztsrc/syspkg/-/raw/examplerepo/community.txt",
        "https://gitlab.com/bztsrc/syspkg/-/raw/examplerepo/extra.txt",
        "https://gitlab.com/bztsrc/syspkg/-/raw/examplerepo/nonfree.txt",
        "https://github.com/someone/coolstuff/raw/master/repository.txt",
        "https://somewebserver.com/3rdpartytools.txt"
    ]
}
```
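Presumably you don't have to edit this file by hand: the demo tool's repo command can manage the URL list, and trust handles the certificates (the arguments below are illustrative):

```
$ syspkg -A https://somewebserver.com/3rdpartytools.txt   # add a repository URL
$ syspkg -A                                               # list configured repositories
$ syspkg -T somerepo.crt                                  # add a certificate to the trusted list
```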