fd-commit and checkupdates both require that there are two name fields,
AutoName: and Name:, which are only used for the commit messages. Since the
current devs write those manually, we can remove the fd-commit shell script,
then focus on checkupdates when revamping AutoName/Name.
https://botbot.me/freenode/fdroid-dev/msg/82539152
PyPI and Python packages expect the description to be in reST format, which
is quite different from Markdown. This does the conversion if pandoc is
installed.
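One way to do that conversion is to shell out to pandoc from setup.py;
this is a sketch of the approach, not necessarily the exact call used:

    import subprocess

    def markdown_to_rst(text):
        try:
            return subprocess.check_output(
                ['pandoc', '-f', 'markdown', '-t', 'rst'],
                input=text.encode()).decode()
        except (OSError, subprocess.CalledProcessError):
            return text  # pandoc is not installed, keep the raw markdown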
docker-py 1.10.6 is the last version released under the name docker-py;
after that, the package is just called docker and has breaking changes, see:
https://github.com/docker/docker-py/releases/tag/2.0.0
Update the list of allowed versions of requests to match docker-py master.
I ran into some annoying issues with UTF-8 output in the vagrant logs, and
it was hard to solve. So I switched to using python-vagrant, which handles
it all for us. It's been around since 2012, has a number of contributors,
and is still actively maintained, so it seems like a good bet. I also
packaged it for Debian, including a backport in jessie-backports.
On Debian/jessie, do `apt-get install python3-vagrant/jessie-backports`
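A minimal sketch of what python-vagrant gives us (the 'buildserver'
path is just a placeholder for wherever the Vagrantfile lives):

    import vagrant

    v = vagrant.Vagrant('buildserver')
    v.up()
    print(v.status())  # list of per-VM status tuples
    v.halt()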
* New command `dscanner`, enables one to scan signed APKs with Drozer
* Drozer is a dynamic vulnerability scanner for Android
* Drozer runs in an emulator or on-device; this new `dscanner` command:
* starts a docker image with Drozer and the Android Emulator pre-installed,
* loads the signed APK into the emulator
* activates Drozer automated tests for the APK
* gathers the report output and places it next to the original APK
* The Drozer docker image can be:
* cached locally for re-use (just don't run --clean*)
* retrieved from dockerhub.com for more efficient runtime
  * or built from scratch (in the new "./docker" directory)
* New "Vulnerability Scanning" documentation section (run gendocs.sh)
It will make it a lot easier to manage the cache if we use the original
file names, which often include the file version. This also changes the
download process to be resumable if there is a partial file in the cache,
and switches from calling wget on the command line to using the python libs
'requests' and 'clint' to provide a similar experience. While it's not so
important for this particular bit of code to use those libraries, I think
those two will allow us to provide a better user experience throughout the
whole of fdroidserver.
In this case, it is already doing special tricks fetching the file size
from the server before trying to download it. I suppose this code could
instead check if the file exists, and if so, check the hash sum. I think
that would be slower for most people since checking the hash on large files
takes a noticeable amount of time, while an HTTP HEAD request is pretty tiny.
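A rough sketch of the idea (function and variable names here are just
illustrative, not the actual fdroidserver code):

    import os
    import requests

    def download_to_cache(url, cachefile):
        # ask the server for the size without downloading anything
        remote_size = int(requests.head(url).headers['Content-Length'])
        local_size = os.path.getsize(cachefile) if os.path.exists(cachefile) else 0
        if local_size == remote_size:
            return  # already fully cached, no need to hash or re-download
        # resume from whatever is already in the cache
        headers = {'Range': 'bytes=%d-' % local_size} if local_size else {}
        r = requests.get(url, headers=headers, stream=True)
        with open(cachefile, 'ab' if local_size else 'wb') as f:
            for chunk in r.iter_content(chunk_size=65536):
                f.write(chunk)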
This adds a new method for `fdroid import` that will generate the fdroidserver
metadata based on a local git repo. This new mode generates the metadata in
the new .fdroid.yaml format in the git repo itself. It is intended as a quick
way to get started building apps using the fdroidserver tools.
This keeps the number of names down to a minimum, and since the config
is placed right next to the script, this keeps tab completion working
nicely when the config file is in place.
The old file name is still supported.
Initially, the scanner used libmagic, which uses magic numbers in the file's
content to detect what kind of file it is. Since that library isn't
available on all systems, we added support for two other libraries, mimetypes
amongst them.
The issue with mimetypes is that it only uses the file's extension, not its
actual content. So this results in varying behaviour depending on which system
you're using fdroidserver on. For example, an executable binary without
extension would be ignored if mimetypes was being used.
We now drop all libraries - mimetypes too as it depends on the system's
mime.types file - and instead check extensions ourselves. On top of that, we do
a simple binary content check to find binary executables that don't have an
extension.
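Roughly, the in-house check boils down to something like this sketch (the
extension list and function names are made up for illustration):

    import os

    SUSPICIOUS_EXTENSIONS = {'.so', '.jar', '.apk', '.dex', '.class'}

    def is_elf_binary(path):
        # executables and shared libraries start with the ELF magic number
        with open(path, 'rb') as f:
            return f.read(4) == b'\x7fELF'

    def check_file(path):
        ext = os.path.splitext(path)[1].lower()
        if ext in SUSPICIOUS_EXTENSIONS:
            return 'suspicious extension %s' % ext
        if not ext and is_elf_binary(path):
            return 'executable binary'
        return None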
The new in-house code without any dependencies doesn't add any new checks, so
no builds should break. The current checks still work:
% fdroid scanner app.openconnect:1029
[...]
Found executable binary at assets/raw/armeabi/curl
Found executable binary at assets/raw/mips/curl
Found executable binary at assets/raw/x86/curl
Found JAR file at lib/XposedBridgeApi-54.jar
Found JAR file at libs/acra-4.5.0.jar
Found JAR file at libs/openconnect-wrapper.jar
Found JAR file at libs/stoken-wrapper.jar
Found shared library at libs/armeabi/libopenconnect.so
Found shared library at libs/armeabi/libstoken.so
Found shared library at libs/mips/libopenconnect.so
Found shared library at libs/mips/libstoken.so
Found shared library at libs/x86/libopenconnect.so
Found shared library at libs/x86/libstoken.so
YAML is a format that is quite similar to the .txt format, but is a
widespread standard that has editing modes in popular editors. It is also
easily parsable in Python.
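For example, reading one of these files only takes a couple of lines with
PyYAML (the path and field name are just illustrative):

    import yaml

    with open('metadata/org.videolan.vlc.yaml') as f:
        app = yaml.safe_load(f)
    print(app.get('AutoName'))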
The .pickle for testing is a lightly edited version of the real metadata
for org.videolan.vlc:
* comments were removed
Instead of trying to guess which absolute path and .egg name
everything will be installed into, just use a relative path and the
setup process will do the right thing (so far, at least).
There are two forms of python-magic, one included in libmagic that is
default on Debian, and another Python wrapper for libmagic that is
called 'python-magic' on pypi. Those both rely on the compiled binary
library libmagic. For platforms without good package management, fall back
to using the built-in 'mimetypes' library if 'magic' is not available.
This supports OSX, Windows, and BSD.
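The fallback is roughly like this sketch, which assumes the pypi
python-magic API (the Debian binding has a slightly different interface,
so the real code has to cope with both):

    try:
        import magic

        def get_mime_type(path):
            # content-based detection via libmagic
            return magic.from_file(path, mime=True)
    except ImportError:
        import mimetypes

        def get_mime_type(path):
            # extension-based fallback, no compiled library needed
            return mimetypes.guess_type(path)[0]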
To make the core tools portable to platforms like Mac OS X and Windows,
remove the dependency on wget and instead use Python Requests, which
probably has better performance anyway.
PIL is the old standard Python Imaging Library. Pillow is a fork where development
is actually continuing. It seems that all the distros are also switching
from PIL to Pillow, including Debian, Ubuntu, Fedora, brew, MacPorts, etc.
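Conveniently, Pillow kept the old PIL namespace, so the imports do not
change at all:

    from PIL import Image  # works the same with PIL or Pillow

    im = Image.open('icon.png')  # placeholder file name
    print(im.size)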
This makes the AWS S3 setup dead simple: just put in an awsbucket name of
your choosing, set the AWS credentials, and it'll do the rest, whether the
bucket exists already or not. S3 buckets are trivial to delete too, in
case of error: `s3cmd rb s3://mybadbucketname`.
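The relevant config.py entries look something like this (the awsbucket
name is from the text above, but the credential option names are
assumptions, so check examples/config.py for the real ones):

    awsbucket = 'my-fdroid-repo'
    awsaccesskeyid = 'AKIA...'    # assumed option name
    awssecretkey = 'mysecretkey'  # assumed option name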
apache-libcloud enables uploading to basically any cloud storage service.
This is the first implementation that allows `fdroid server` to push a repo
up to an AWS S3 'bucket'. Supporting other cloud storage services should
mostly be a matter of finding the libcloud "Provider" and setting the
access credentials.
fixes #3137 https://dev.guardianproject.info/issues/3137
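A minimal sketch of what the libcloud side of that looks like (bucket
name, file paths, and credentials are placeholders):

    from libcloud.storage.types import Provider, ContainerDoesNotExistError
    from libcloud.storage.providers import get_driver

    driver = get_driver(Provider.S3)('AWS_ACCESS_KEY_ID', 'AWS_SECRET_KEY')
    try:
        container = driver.get_container('my-fdroid-repo')
    except ContainerDoesNotExistError:
        container = driver.create_container('my-fdroid-repo')
    # upload one file from the local repo into the bucket
    driver.upload_object('repo/index.jar', container, 'fdroid/repo/index.jar')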
This assumes that the smartcard is already set up with a signing key. init
does not generate a key on the smartcard, and skips genkey() if things are
configured to use a smartcard.
This also does not touch APK signing because that is a much more elaborate
question, since each app is signed by its own key.
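A config.py for this might look roughly like the following; the option
names and provider arguments are assumptions based on a typical
PKCS#11/OpenSC setup, not copied from the example config:

    keystore = "NONE"  # the key lives on the smartcard, not in a file
    smartcardoptions = ("-storetype PKCS11 "
                        "-providerClass sun.security.pkcs11.SunPKCS11 "
                        "-providerArg opensc-fdroid.cfg")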
setuptools wants to stick any relative install path in data_files into the
.egg package. Things are not set up to use the egg now. We might want to
consider sticking files into the egg via pkg_resources in the future.
This also makes the file layout in git basically the same as the installed
file layout, using an examples/ dir. I'm not sure if config.buildserver.py
is an example conf file, or a conf file that is actually in use, so I did
not move it.
This means they'll automatically be installed in the right place by the
packaging processes of various UNIX distros, and then that makes it easy
for the upcoming 'fdroid init' to find them.
With this setup.py, it's possible to install the required packages using:
pip install -e .
The Debian packaging will also automatically get the dependencies from
install_requires. This does not handle the generation of the docs at all.
I have not found a straightforward way to include running ./gendocs.sh in
setup.py, but it's easy to run in the Debian packaging.
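A stripped-down sketch of what such a setup.py looks like (the package,
script, and dependency lists here are illustrative, not the real ones):

    from setuptools import setup

    setup(
        name='fdroidserver',
        version='0.0',  # placeholder
        packages=['fdroidserver'],
        scripts=['fdroid'],
        # relative data_files paths end up under the install prefix
        data_files=[
            ('share/doc/fdroidserver/examples', ['examples/config.py']),
        ],
        # both `pip install -e .` and the Debian packaging read these
        install_requires=[
            'requests',
            'PyYAML',
        ],
    )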