This is part of the effort to remove moving parts from the whole build
server setup. Why wrap shell scripts in ruby and chef if we can just
directly run a shell script?
The other apps are too flaky on gpjenkins right now, and that's our
only box for running full buildserver tests. Once we get the
buildserver tests running on jenkins.debian.net, we can add a
bunch more apps to the test script. gpjenkins is an extra locked-down
box, which is why the builds are flaky: gradle and maven downloads
regularly fail because they are blocked.
Note that the apt packages are split into two halves, because it takes
too long (on 64 bit!) to install them all. The sensible fix would be
to simply up the timeout on the package installation section, but this
is completely broken in chef.
Java8 and other fixes
This fixes the OSX travis-ci job for the Java8 update, and includes two other fixes as described in the commit messages. @CiaranG I think the _/etc/profile_ approach in 7b466b6266 gets us closer to the idea of having the `fdroid build` jobs run in the same environment as `vagrant ssh`.
See merge request !138
The ZIP format has no official encoding :-| so we have to do hacks. The
zipfile devs couldn't even sort this out:
https://bugs.python.org/issue10614
closes #167
`fdroid update` should be able to handle any valid filename (hopefully
aapt doesn't barf on them). To handle that, the environment that the
shell commands are run in needs to have a UTF-8 locale set. If LANG is
not set, things default to ASCII and UTF-8 filenames fail.
This also renames a test APK to a name with lots of Unicode chars as a
test case.
closes #167
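A minimal sketch of the idea, not the actual update.py code: copy the current environment, force a UTF-8 locale, and pass it to the subprocess so tools like aapt see UTF-8 filenames (`C.UTF-8` being an assumption about which locales the host provides):

```python
import os
import subprocess

def run_with_utf8_locale(cmd):
    # Copy the current environment and force a UTF-8 locale, so that
    # shell commands (e.g. aapt) do not fall back to ASCII and choke
    # on non-ASCII filenames.
    env = dict(os.environ)
    env['LANG'] = 'C.UTF-8'  # assumption: C.UTF-8 exists on the host
    return subprocess.check_output(cmd, env=env)
```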
This will make `vagrant ssh` and `fdroid build --server` use the same
environment, so troubleshooting should be easier (!135). Here's what `man bash` says:
When bash is invoked as an interactive login shell, or as a
non-interactive shell with the --login option, it first reads and
executes commands from the file /etc/profile, if that file
exists. After reading that file, it looks for ~/.bash_profile,
~/.bash_login, and ~/.profile, in that order, and reads and
executes commands from the first one that exists and is readable.
The --noprofile option may be used when the shell is started to
inhibit this behavior. When a login shell exits, bash reads and
executes commands from the file ~/.bash_logout, if it exists.
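For illustration only, a tiny Python sketch of the difference: a command run through a login shell makes bash read /etc/profile (and everything sourced from /etc/profile.d/) first, while a plain non-login shell skips it.

```python
import subprocess

# Non-login shell: /etc/profile is NOT read, so ANDROID_HOME may be empty.
subprocess.call(['bash', '-c', 'echo $ANDROID_HOME'])

# Login shell: /etc/profile and /etc/profile.d/*.sh are sourced first.
subprocess.call(['bash', '--login', '-c', 'echo $ANDROID_HOME'])
```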
This stops these errors:
fdroid/fdroidserver/fdroidserver/update.py:744: ResourceWarning: unclosed
file <_io.BufferedReader
name='repo/icons-320/info.guardianproject.urzip.100.png'>
fdroid/fdroidserver/fdroidserver/update.py:721: DeprecationWarning: The
'warn' function is deprecated, use 'warning' instead
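The general pattern of the fix, shown as a sketch rather than the actual update.py diff: close file handles deterministically with a context manager, and call the non-deprecated logging method.

```python
import logging

# Closing the file deterministically avoids the unclosed-file ResourceWarning.
with open('repo/icons-320/info.guardianproject.urzip.100.png', 'rb') as f:
    f.read()

# logging.warn() is deprecated; use logging.warning() instead.
logging.warning("this replaces logging.warn()")
```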
all: switch to jdk8 as default
@eighthave this will probably require some action on the jenkins machine to replace jdk7 with jdk8.
Any thoughts @CiaranG?
See merge request !137
Also, remove jdk7 as it will become unused. We added jdk8 for
retrolambda, and now that we will use jdk8 as the default, jdk7 is
unnecessary as retrolambda can work fine with just jdk8.
This removes it from the buildserver, and the new CI image also only has
jdk8 from jessie-backports.
Fixes #185.
Some build fixes
This includes a couple of fixes discussed with @mvdan, including removing the `ANDROID_NDK_HOME` env var from the buildserver and removing the default behavior of forcing the build-tools version.
See merge request !136
This replaces the current default behavior of always forcing the
build_tools version and allows the user to set build-tools forcing in
config.py.
closes #147
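Something along these lines in config.py; the option names here are written from memory and should be treated as an assumption, not a reference:

```python
# config.py (sketch)
# Only force a specific build-tools version when the user asks for it;
# by default each build uses whatever the app's metadata specifies.
force_build_tools = True
build_tools = "23.0.3"  # hypothetical version string
```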
By checking first, this prevents a stacktrace when the duplicate metadata
file is not valid. For example, in the tests, the duplicate is just a
zero-length file, which was causing a stacktrace.
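Roughly the shape of the check, with illustrative names rather than the real metadata.py code:

```python
import os

def check_duplicate_metadata(appid, metadata_dir='metadata'):
    # Detect the same app described by more than one metadata file and
    # fail with a clear error, instead of stacktracing later while
    # trying to parse a possibly invalid (even zero-length) duplicate.
    found = [ext for ext in ('txt', 'yml', 'json', 'xml')
             if os.path.isfile(os.path.join(metadata_dir, appid + '.' + ext))]
    if len(found) > 1:
        raise Exception('Duplicate metadata for %s: %s' % (appid, ', '.join(found)))
```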
`fdroid build` handles setting the NDK env vars since the NDK version can
change depending on the app being built. Unlike ANDROID_HOME, there is no
single global NDK location. The NDK installs are all versioned.
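A sketch of how a per-build environment could be assembled; the field and config names here are assumptions for illustration:

```python
import os

def env_for_build(build, config):
    # ANDROID_HOME can be global, but the NDK path depends on which NDK
    # release this particular app needs, so it is chosen per build.
    env = dict(os.environ)
    env['ANDROID_HOME'] = config['sdk_path']
    ndk_path = config.get('ndk_paths', {}).get(build.ndk)  # e.g. 'r10e'
    if ndk_path:
        env['ANDROID_NDK'] = ndk_path
    return env
```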
Fix buildserver, broken by e449d2f
Switching to a script in /etc/profile.d to set up the environment is all
well and good, except that it doesn't get run when you execute a
command directly via ssh, which means that the buildserver didn't work
at all (at least for anything that used gradle, or relied on the
environment variables).
This fix doesn't look very nice, but it works - it just forces the
appropriate script to run before build.py is executed on the server.
(Side note: I thought we had tests for this, how did it get past them?)
See merge request !135
Tone down makebuildserver terminal spamming
The progress bar updating at each 1k chunk spams the terminal to such an
extent that it stops responding to keypresses (at least with a fast
connection). It also seems extremely inefficient to be writing the file
in 1k chunks. This makes it 64k, which is more reasonable but quite
probably still too small.
See merge request !134
Switching to a script in /etc/profile.d to set up the environment is all
well and good, except that it doesn't get run when you execute a
command directly via ssh, which means that the buildserver didn't work
at all (at least for anything that used gradle, or relied on the
environment variables).
This fix doesn't look very nice, but it works - it just forces the
appropriate script to run before build.py is executed on the server.
(Side note: I thought we had tests for this, how did it get past them?)
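A rough sketch of the workaround, not the exact build.py code; the host, paths and app id below are illustrative:

```python
import subprocess

# A command run over ssh is not a login shell, so /etc/profile.d/ is
# never sourced.  Work around it by sourcing the environment script
# explicitly before running build.py on the server.
remote_cmd = '. /etc/profile.d/bsenv.sh && cd fdroidserver && ./fdroid build --on-server org.example.app'
subprocess.check_call(['ssh', 'vagrant@buildserver', 'bash', '-c', remote_cmd])
```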
The progress bar updating at each 1k chunk spams the terminal to such an
extent that it stops responding to keypresses (at least with a fast
connection). It also seems extremely inefficient to be writing the file
in 1k chunks. This makes it 64k, which is more reasonable but quite
probably still too small.
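For illustration, a download loop along these lines (not necessarily what makebuildserver actually uses):

```python
import requests

def download(url, path, chunk_size=65536):
    # Stream in 64 KiB chunks and update the progress display once per
    # chunk, instead of hammering the terminal every 1 KiB.
    done = 0
    with requests.get(url, stream=True) as r, open(path, 'wb') as f:
        total = int(r.headers.get('Content-Length', 0))
        for chunk in r.iter_content(chunk_size=chunk_size):
            f.write(chunk)
            done += len(chunk)
            print('\r%d/%d bytes' % (done, total), end='', flush=True)
    print()
```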
Adding support for DSA and ECDSA signatures in binaries
Before this patch, only RSA signature files were supported when using the Binaries metadata directive as described in https://f-droid.org/wiki/page/Deterministic,_Reproducible_Builds.
This patch simply makes .DSA and .EC files also be recognised as signature files.
See merge request !133
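In essence, the recognition boils down to matching more extensions under META-INF/; a sketch with assumed names:

```python
import re
from zipfile import ZipFile

# .RSA, .DSA and .EC files under META-INF/ all carry the APK signature block.
SIG_RE = re.compile(r'^META-INF/.*\.(RSA|DSA|EC)$')

def signature_files(apkpath):
    with ZipFile(apkpath) as apk:
        return [name for name in apk.namelist() if SIG_RE.match(name)]
```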
switch SDK/NDK/gradle buildserver provisioning to shell scripts
This converts the SDK, NDK, and gradle chef recipes to vagrant provisioning shell scripts. Those recipes were just shell scripts anyway, forced into that ugly ruby/chef syntax. Hopefully this will make things easier to maintain since simple bash is easier for most devs than ruby/chef. Also, it might speed up the provisioning a little bit since the whole script is sent to the VM then executed, rather than sent line-by-line. Additionally, the SDK components are now installed using `android update sdk` so we do not need to duplicate Google's crazy kludges with that stuff.
See merge request !132
When the metadata changes, different things will be stored about each APK.
So invalidate the cached info parsed from APKs if the cache's metadata
version does not match the metadata version of the currently running tools.
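A sketch of the invalidation logic, assuming the cache is a pickled dict with a version marker (names and values are illustrative):

```python
import pickle

METADATA_VERSION = 16  # bumped whenever the fields stored per APK change

def load_apk_cache(path='tmp/apkcache'):
    # Discard the cache if it was written by a different metadata
    # version, since the stored fields may no longer match.
    try:
        with open(path, 'rb') as f:
            cache = pickle.load(f)
    except (OSError, EOFError, pickle.PickleError):
        return {}
    if cache.get('METADATA_VERSION') != METADATA_VERSION:
        return {}
    return cache
```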
Prevent build processes from modifying the cache; it is only needed
during provisioning anyway. A malicious build could still use sudo to
change the cache, but this is more to prevent mistaken modifications.
This was not using anything special from chef, so do it in a shell script
instead. This makes the script easier for the python/shell people, and
probably uses less memory, since chef is a memory hog. This might even
make the provisioning go faster, since it uploads the whole script as a
file to the VM, then runs it there. I think chef sends each command via SSH.
`android update sdk --no-ui` is the standard command-line way of
installing the Android SDK. By symlinking into the $ANDROID_HOME/temp dir,
the cached files can still be used. This converts the chef recipe to a
vagrant shell provisioning script, since it was all bash anyway.
Some file names no longer officially have a -linux in them, so those were
changed to keep the cache working with the default filename.
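The symlink trick could look roughly like this (the cache location is an assumption, and the real provisioning does this in shell):

```python
import os

cache_dir = '/vagrant/cache'  # assumed location of the shared download cache
temp_dir = os.path.join(os.environ['ANDROID_HOME'], 'temp')
os.makedirs(temp_dir, exist_ok=True)

# `android update sdk --no-ui` downloads into $ANDROID_HOME/temp, so
# symlinking cached archives there lets it skip the download.
for name in os.listdir(cache_dir):
    target = os.path.join(temp_dir, name)
    if not os.path.exists(target):
        os.symlink(os.path.join(cache_dir, name), target)
```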
bash provides a standard location for a script to be run when the
shell starts: /etc/profile.d/. This converts the scattered bits of code for
making ~/.bsenv into a single provisioning script that generates
/etc/profile.d/bsenv.sh, which gets automatically executed when bash starts.
Fix ./makebuildserver CI build
In order to get reproducible builds working reliably, we need buildserver creation to be fully automatic, and able to run in VMs. It should also handle caching downloads well, so that people can easily rebuild their buildserver, and CI builds don't have to download gigs and gigs for each run. These are some related fixes.
See merge request !131